One Year of ChatGPT: How A.I. Changed Silicon Valley Forever

At 1 p.m. on a Friday shortly before Christmas last year, Kent Walker, Google’s top lawyer, summoned four of his employees and ruined their weekend.

The group worked in SL1001, a bland building with a blue glass facade betraying no sign that dozens of lawyers inside were toiling to protect the interests of one of the world’s most influential companies. For weeks they had been prepping for a meeting of powerful executives to discuss the safety of Google’s products. The deck was done. But that afternoon Mr. Walker told his team the agenda had changed, and they would have to spend the next few days preparing new slides and graphs.

In fact, the entire agenda of the company had changed — all in the course of nine days. Sundar Pichai, Google’s chief executive, had decided to ready a slate of products based on artificial intelligence — immediately. He turned to Mr. Walker, the same lawyer he was trusting to defend the company in a profit-threatening antitrust case in Washington, D.C. Mr. Walker knew he would need to persuade the Advanced Technology Review Council, as Google called the group of executives, to throw off their customary caution and do as they were told.

It was an edict, and edicts didn’t happen very often at Google. But Google was staring at a real crisis. Its business model was potentially at risk.

What had set off Mr. Pichai and the rest of Silicon Valley was ChatGPT, the artificial intelligence program that had been released on Nov. 30, 2022, by an upstart called OpenAI. It had captured the imagination of millions of people who had thought A.I. was science fiction until they started playing with the thing. It was a sensation. It was also a problem.

At the Googleplex, famed for its free food, massages, fitness classes and laundry services, Mr. Pichai was also playing with ChatGPT. Its wonders did not wow him. Google had been developing its own A.I. technology that did many of the same things. Mr. Pichai was focused on ChatGPT’s flaws — that it got stuff wrong, that sometimes it turned into a biased pig. What amazed him was that OpenAI had gone ahead and released it anyway, and that consumers loved it. If OpenAI could do that, why couldn’t Google?

Why not plow ahead? That’s the question that loomed over A.I.’s adolescence — the year or so after the technology made the leap from lab to living room. There was hand-wringing over chatbots writing seductive phishing emails and spewing disinformation, or high schoolers using them to cheat their way to an A. Doomsayers insisted that unfettered A.I. could lead to the end of humankind.

For tech company bosses, the decision of when and how to turn A.I. into a (hopefully) profitable business was a simpler risk-reward calculus. But to win, you had to have a product.

By Monday morning, Dec. 12, the team at SL1001 had a new agenda with a deck labeled “Privileged and Confidential/Need to Know.” Most attendees tuned in over videoconference. Mr. Walker started the meeting by announcing that Google was moving ahead with a chatbot and A.I. capabilities that would be added to cloud, search and other products.

“What are your concerns? Let’s get in line,” Mr. Walker said, according to Jen Gennai, the director of responsible innovation.

There would be guardrails, but approvals would be fast-tracked. Mr. Walker called it the “green lane” approach. It was all laid out in the deck. Opportunities for “Green Lane streamlining” were identified. Dangers were color-coded. Blue indicated risks where “mitigations” were “required.” Risks that were “controllable with minimum thresholds/mitigations” were rendered in orange.

In one chart, under “Hate & Toxicity,” the plan was to “curb stereotypes, toxicity and hate speech in outputs.” One topic was: “What are we missing in order to fast-track approvals?”

Not everyone was on board. “My standards are as high if not higher than they usually are, and we will be going through a review process with all of this,” Ms. Gennai remembered a cloud executive saying.

Eventually a compromise was reached. They would limit the rollout, Ms. Gennai said. And they would avoid calling anything a product. For Google, it would be an experiment. That way it didn’t have to be perfect. (A Google spokeswoman said the A.T.R.C. did not have the power to decide how the products would be released.)

What played out at Google was repeated at other tech giants after OpenAI released ChatGPT in late 2022. They all had technology in various stages of development that relied on neural networks — A.I. systems that recognized sounds, generated images and chatted like a human. That technology had been pioneered by Geoffrey Hinton, an academic who had worked briefly with Microsoft and was now at Google. But the tech companies had been slowed by fears of rogue chatbots, and economic and legal mayhem.

Once ChatGPT was unleashed, none of that mattered as much, according to interviews with more than 80 executives and researchers, as well as corporate documents and audio recordings. The instinct to be first or biggest or richest — or all three — took over. The leaders of Silicon Valley’s biggest companies set a new course and pulled their employees along with them.

Over 12 months, Silicon Valley was transformed. Turning artificial intelligence into actual products that individuals and companies could use became the priority. Worries about safety and whether machines would turn on their creators were not ignored, but they were shunted aside — at least for the moment.

At Meta, Mark Zuckerberg, who had once proclaimed the metaverse to be the future, reorganized parts of the company formerly known as Facebook around A.I.

Elon Musk, the billionaire who co-founded OpenAI but had left the lab in a huff, vowed to create his own A.I. company. He called it X.AI and added it to his already full plate.

Satya Nadella, Microsoft’s chief executive, had invested in OpenAI three years before and was letting the start-up’s cowboys tap into its computing power. He sped up his plans to incorporate A.I. into Microsoft’s products — and give Google a poke in its searching eye.

“Speed is even more important than ever,” Sam Schillace, a top executive, wrote Microsoft employees. It would be, he added, an “absolutely fatal error in this moment to worry about things that can be fixed later.”

The strange thing was that the leaders of OpenAI never thought ChatGPT would shake up Silicon Valley. In early November 2022, a few weeks before it was released to the world, it didn’t really exist as a product. Most of the 375 employees working in their new offices, a former mayonnaise factory, were focused on a more powerful version of technology, called GPT-4, that could answer almost any question using information gleaned from an enormous collection of data scraped from seemingly everywhere.

It was revolutionary, but there were problems. Sometimes the tech spewed hate speech and misinformation. The engineers at OpenAI kept postponing the launch and talking about what to do.

One option was to release an older, less powerful version of the technology — and just see what happened. The idea, according to four people familiar with OpenAI’s work, was to watch the public’s reaction and use it to work out the kinks.

And though some executives have downplayed it, they wanted to beat the competition. Lots of tech companies were working on their own A.I. chatbots. But the people to beat were at Anthropic, started the year before by researchers and engineers who left OpenAI because they thought that Sam Altman, its chief executive, had not made safety a priority as A.I. grew more powerful. The defectors had helped build the technology that OpenAI was so excited about before they trooped out the door.

In mid-November 2022, Mr. Altman; Greg Brockman, OpenAI’s president; and others met in a top-floor conference room to discuss the problems with their breakthrough tech yet again. Suddenly Mr. Altman made the decision — they would release the old, less-powerful technology.

The plan was to call it Chat with GPT-3.5 and put it out by the end of the month. They referred to it as a “low key research preview.” It didn’t feel like a big-deal decision to anyone in the room.

“We plan to frame it as a research release,” Mira Murati, OpenAI’s chief technology officer, told staff over Slack. “This reduces risk in all dimensions while allowing us to learn a lot,” she wrote. “We are aiming to move quickly over the next few days to make it happen.”

The underlying code was a bit of a blob. It needed to be converted into something regular people without Ph.D.s could interact with. Mr. Altman and other executives asked a group of engineers to graft a graphical user interface — a GUI, pronounced gooey — onto the blob. A GUI is the face of an application, where you type and press buttons.

A GUI had been created earlier that year to show the technology to Bill Gates, Microsoft’s founder, at his home outside Seattle. They stuck the same GUI on and changed the name to ChatGPT. About two weeks after Mr. Altman made his decision, they were good to go.

On Nov. 29, the night before the launch, Mr. Brockman hosted drinks for the team. He didn’t think ChatGPT would attract a lot of attention, he said. His prediction: “no more than one tweet thread with 5k likes.”

Mr. Brockman was wrong. On the morning of Nov. 30, Mr. Altman tweeted about OpenAI’s new product, and the company posted a jargon-heavy blog item. And then, ChatGPT took off. Almost immediately, sign-ups overwhelmed the company’s servers. Engineers rushed in and out of a messy space near the office kitchen, huddling over laptops to pull computing power from other projects. In five days, more than a million people had used ChatGPT. Within a few weeks, that number would top 100 million. Though nobody was quite sure why, it was a hit. Network news programs tried to explain how it worked. A late-night comedy show even used it to write (sort of funny) jokes.

After things settled down, OpenAI employees used DALL-E, the company’s A.I. image generator, to make a laptop sticker labeled “Low key research preview.” It showed a computer about to be consumed by flames.

Actually, months earlier Meta had released its own chatbot — to very little notice.

BlenderBot was a flop. The A.I.-powered bot, released in August 2022, was built to carry on conversations — and that it did. It said that Donald J. Trump was still president and that President Biden had lost in 2020. Mark Zuckerberg, it told a user, was “creepy.” Then two weeks before ChatGPT was released, Meta introduced Galactica. Designed for scientific research, it could instantly write academic articles and solve math problems. Someone asked it to write a research paper about the history of bears in space. It did. After three days, Galactica was shut down.

Mr. Zuckerberg’s head was elsewhere. He had spent the entire year reorienting the company around the metaverse and was focused on virtual and augmented reality.

But ChatGPT would demand his attention. His top A.I. scientist, Yann LeCun, arrived in the Bay Area from New York about six weeks later for a routine management meeting at Meta, according to a person familiar with the meeting. Dr. LeCun led a double life — as Meta’s chief A.I. scientist and a professor at New York University. The Frenchman had won the Turing Award, computer science’s most prestigious honor, alongside Dr. Hinton, for work on neural networks.

As they waited in line for lunch at a cafe in Meta’s Frank Gehry-designed headquarters, Dr. LeCun delivered a warning to Mr. Zuckerberg. He said Meta should match OpenAI’s technology and also push forward with work on an A.I. assistant that could do stuff on the internet on your behalf. Websites like Facebook and Instagram could become extinct, he warned. A.I. was the future.

Mr. Zuckerberg didn’t say much, but he was listening. There was plenty of A.I. at work across Meta’s apps — Facebook, Instagram, WhatsApp — but it was under the hood. Mr. Zuckerberg was frustrated. He wanted the world to recognize the power of Meta’s A.I. Dr. LeCun had always argued that going open-source, making the code public, would attract countless researchers and developers to Meta’s technology, and help improve it at a far faster pace. That would allow Meta to catch up — and put Mr. Zuckerberg back in league with his fellow moguls. But it would also allow anyone to manipulate the technology to do bad things.

At dinner that evening, Mr. Zuckerberg approached Dr. LeCun. “I have been thinking about what you said,” Mr. Zuckerberg told his chief A.I. scientist, according to a person familiar with the conversation. “And I think you’re right.”

In Paris, Dr. LeCun’s scientists had developed an A.I.-powered bot that they wanted to release as open-source technology. Open source meant that anyone could tinker with its code. They called it Genesis, and it was pretty much ready to go. But when they sought permission to release it, Meta’s legal and policy teams pushed back, according to five people familiar with the discussion.

Caution versus speed was furiously debated among the executive team in early 2023 as Mr. Zuckerberg considered Meta’s course in the wake of ChatGPT.

Had everyone forgotten about the last seven years of Facebook’s history? That was the question asked by the legal and policy teams. They reminded Mr. Zuckerberg about the uproar over hate speech and misinformation on Meta’s platforms and the scrutiny the company had endured by the news media and Congress after the 2016 election.

Open sourcing the code might put powerful tech into the hands of those with bad intentions and Meta would take the blame. Jennifer Newstead, Meta’s chief legal officer, told Mr. Zuckerberg that an open-source approach to A.I. could attract the attention of regulators who already had the company in their cross hairs, according to two people familiar with her concerns.

At a meeting in late January in his office, called the aquarium because it looked like one, Mr. Zuckerberg told executives that he had made his decision. Parts of Meta would be reorganized and its priorities changed. There would be weekly meetings to update executives on A.I. progress. Hundreds of employees would be moved around. Mr. Zuckerberg declared in a Facebook post that Meta would “turbocharge” its work on A.I.

Mr. Zuckerberg wanted to push out a project fast. The researchers in Paris were ready with Genesis. The name was changed to LLaMA, short for “Large Language Model Meta AI,” and released to 4,000 researchers outside the company. Soon Meta received over 100,000 requests for access to the code.

But within days of LLaMA’s release, someone put the code on 4chan, the fringe online message board. Meta had lost control of its technology, raising the possibility that the worst fears of its legal and policy teams would come true. Researchers at Stanford University showed that the Meta system could easily do things like generate racist material.

On June 6, Mr. Zuckerberg received a letter about LLaMA from Senators Josh Hawley of Missouri and Richard Blumenthal of Connecticut. “Hawley and Blumenthal demand answers from Meta,” said a news release.

The letter called Meta’s approach risky and vulnerable to abuse and compared it unfavorably with ChatGPT. Why, the senators seemed to want to know, couldn’t Meta be more like OpenAI?

For Mr. Nadella, the realization that OpenAI’s tech could change everything did not come as an “Aha!” moment. After investing $1 billion in 2019, Microsoft slowly started playing with the start-up’s code. First up was GitHub, the company’s code storage service. A few teams of engineers started experimenting with OpenAI’s tech to help them write code.

Over dinner in Microsoft’s boardroom with a friend in the summer of 2021, Mr. Nadella said he was beginning to see the technology as a game changer. It would touch every part of Microsoft’s business and every human being, he predicted. (The GitHub experiment eventually became a product: GitHub Copilot.)

A year later, Mr. Nadella got a peek at what would become GPT-4. Mr. Nadella asked it to translate a poem written in Persian by Rumi, who died in 1273, into Urdu. It did. He asked it to transliterate the Urdu into English characters. It did that, too. “Then I said, ‘God, this thing,’” Mr. Nadella recalled in an interview. From that moment, he was all in.

Microsoft’s $1 billion investment in OpenAI had already grown to $3 billion. Now Microsoft was planning to increase that to $10 billion.

Even for Microsoft, which was sitting on $105 billion in cash, that was real money. OpenAI was structured as a nonprofit. Microsoft would not get a board seat. But it had the right to use OpenAI’s code. That meant Microsoft and OpenAI were partners and competitors.

At the end of the summer of 2022, Microsoft’s offices weren’t yet back to their prepandemic bustle. But on Sept. 13, Mr. Nadella summoned his top executives to a meeting at Building 34, Microsoft’s executive nerve center. It was two months before Mr. Altman made the decision to release ChatGPT.

Mr. Altman and Mr. Brockman demonstrated GPT-4 for the group. First they asked it biology questions. Then Mr. Brockman let the executives try to stump the chatbot. At one point the chatbot was asked a question about photosynthesis. Not only did it answer, but it ruled out other possibilities. Peter Lee, the head of Microsoft Research, was surprised it seemed to know how to reason. He turned to Microsoft’s chief scientist, who was sitting next to him, and asked, “What is going on there?!”

Then Mr. Nadella took the lectern to tell his lieutenants that everything was about to change. This was an executive order from a leader who typically favored consensus. “We are pivoting the whole company on this technology,” Eric Horvitz, the chief scientist, later remembered him saying. “This is a central advancement in the history of computing, and we are going to be on that wave at the front of it.”

It all had to stay secret for the time being. Not everyone would be brought into the tent, and at Microsoft, tents were where the important stuff happened. Three “tented projects” were set up in early October to get the big pivot started. They were devoted to cybersecurity, the Bing search engine, and Microsoft Word and related software.

About two months later, Yusuf Mehdi, a marketing executive, demonstrated the Bing chatbot for some members of the board. They weren’t sold on it. They found the product overly complicated and without a clear vision to communicate to consumers. Mr. Nadella’s team hadn’t nailed it.

Two weeks later, Mr. Mehdi met with the full board. This time the version he demonstrated was simpler and more consumer-friendly. It was a go.

Microsoft invited journalists to its Redmond, Wash., campus on Feb. 7 to introduce a chatbot in Bing to the world. They were instructed not to tell anybody they were going to a Microsoft event, and the topic wasn’t disclosed.

But somehow, Google found out. On Feb. 6, to get out ahead of Microsoft, it put up a blog post by Mr. Pichai announcing that Google would be introducing its own chatbot, Bard. It didn’t say exactly when.

Mr. Altman had just arrived at Microsoft’s conference center for a dry run of the show when Mr. Mehdi grabbed him and showed him Mr. Pichai’s post.

“‘Oh my gosh, this is hysterical,’” Mr. Mehdi recalled Mr. Altman saying. Just then Mr. Nadella walked out of the room where he had been rehearsing. Mr. Altman suggested that he and Mr. Nadella take a selfie. He posted it on Twitter to tweak Google.

“Hello from redmond! excited for the event tomorrow,” tweeted Mr. Altman, who had more than 1.3 million Twitter followers.

By the morning of Feb. 8, the day after Microsoft announced the chatbot, its shares were up 5 percent. But for Google, the rushed announcement became an embarrassment. Researchers spotted errors in Google’s blog post. An accompanying GIF simulated Bard saying that the Webb telescope had captured the first pictures of an exoplanet, a planet outside the solar system. In fact, a telescope at the European Southern Observatory in northern Chile got the first image of an exoplanet in 2004. Bard had gotten it wrong, and Google was ribbed in the news media and on social media.

It was, as Mr. Pichai later said in an interview, “unfortunate.” Google’s stock dropped almost 8 percent, wiping out more than $100 billion in value.

There was no question the Bing chatbot put Microsoft ahead of Google, and in spring 2023 Mr. Nadella bought more than $2 billion in computer chips to keep it that way, according to two people familiar with the budget. “We have a big order coming to you, a really big order coming to you,” Mr. Nadella gleefully told Jensen Huang, Nvidia’s chief executive, Mr. Huang said.

Mr. Pichai, at Google, felt like a scuba diver. The fallout from Google’s announcement about Bard was tumultuous, and that was like navigating the rough top foot of an ocean. But underneath the surface, the water was calm, and he was focused on the coming release of Google’s A.I. products.

Mr. Pichai oversaw more than 2,000 researchers divided between two labs, Google Brain and DeepMind. In April, he merged them. Google DeepMind would develop an A.I. system called Gemini. To run it, Mr. Pichai chose Demis Hassabis, a founder of DeepMind. Dr. Hassabis had long and loudly warned that A.I. could destroy humanity. Now he would be in charge of leading Google to artificial intelligence supremacy.

Geoffrey Hinton, Google’s best-known scientist, had always poked fun at people like Dr. Hassabis — the doomers, rationalists and effective altruists who worried that A.I. would end mankind in the near future. He had developed much of the science behind artificial intelligence as a professor at the University of Toronto and became a wealthy man after joining Google in 2013. He is often called the godfather of A.I.

But the new chatbots changed everything for him. The science had moved more quickly than he had expected. Microsoft’s introduction of its chatbot convinced him that Google would have no choice but to try to catch up. And the corporate race shaping up between tech giants seemed dangerous.

“If you think of Google as a company whose aim is to make profits,” Dr. Hinton said in April, “they can’t just let Bing take over from Google search. They’ve got to compete with that. When Microsoft decided to release a chatbot as the interface for Bing, that was the end of the holiday period.”

Dr. Hinton spent a lot of time mulling his own role in the development of A.I. Sometimes he felt regretful. Other times he jokingly sent friends a video of Edith Piaf singing “Non, Je Ne Regrette Rien.” But finally, he decided to quit.

For the first time in more than 50 years, he stepped away from research. And then in April, he called Mr. Pichai and said goodbye.

Susan Beachy contributed research.