Why is OpenAI struggling to retain top talent? An analysis of reverse talent drain

He was indifferent to the fate of the world as long as he ensured his own safety.


1. The Prelude to "Ascension": It's a Safety Issue, but Not Only Safety

In ancient times, there was the PayPal Mafia; today, there are OpenAI defectors.

By one count, nearly 75 key employees have left OpenAI, going on to found roughly 30 AI companies.

  • Former VP of Research Dario Amodei & former VP of Safety and Policy Daniela Amodei founded Anthropic, valued at $18 billion;
  • Former Chief Scientist Ilya Sutskever founded SSI, valued at $10 billion;
  • Former VP of Engineering David Luan founded Adept AI (acquired by Amazon), valued at over $1 billion;
  • Former Technical Lead Jonas Schneider founded robotics startup Daedalus, valued at $40 million;
  • Former Research Scientist Aravind Srinivas founded Perplexity.AI, valued at $3 billion;
  • Former technical staff Tim Shi founded AI customer service platform Cresta AI, valued at $1.6 billion;

Among them, Anthropic has grown to be OpenAI's number one rival and a "sanctuary" for former employees; Perplexity.AI has become OpenAI's best wrapper and a challenger to Google search...

For ordinary researchers, leaving is usually about seeking better opportunities; but for core members, especially the founding team, departures have mostly come down to differences in values.

The archetypal cases are Musk, the Amodei siblings, and Ilya Sutskever. Their confrontations with Altman ultimately consolidated Altman's position at OpenAI.

Step One: Overthrowing Musk's "Tyranny"

In 2014, Google acquired DeepMind. Musk and Luke Nosek, a PayPal co-founder and founder of Founders Fund, had jointly mounted a competing bid, but failed. This became a lingering regret for Musk.

Against this backdrop, a worried Musk attended a dinner that would go down in Silicon Valley history. About ten accomplished figures gathered there, three of whom would prove pivotal: Altman, Ilya Sutskever, and Greg Brockman.

They discussed with Musk the potentially catastrophic consequences of AI and the conditions needed to establish a project to rival Google's.

The team of four believed they had all the elements for success: Hinton's star pupil Ilya Sutskever was the AI scientist; Stripe CTO Brockman was the operational expert; YC President Sam Altman could coordinate all the pieces; and Tesla CEO Musk had the money.

At the dinner, Musk promised to invest $1 billion and proposed naming the project OpenAI - to operate as a non-profit, focusing on developing safe AI beneficial to humanity rather than pursuing profit.

In 2017, Google published the famous Transformer paper, confirming that the key was processing large amounts of data, which in turn demanded enormous computational power (a judgment Ilya Sutskever had held since OpenAI's founding).

As a result, OpenAI began to run short of money (Musk had donated a total of $44 million to OpenAI and covered the rent).

Brockman and other OpenAI members suggested transforming the organization into a for-profit entity to raise money from investors like Microsoft.

At first, Musk strongly opposed. But once he accepted that a for-profit entity was necessary, he wanted majority ownership, initial control of the board, and the CEO role. Musk even proposed merging OpenAI into Tesla, reasoning that Tesla was the only company with any hope of rivaling Google.

Seeing no one agree, Musk began persuading OpenAI researchers to jump ship to Tesla.

Finally, the "constantly troublesome" Musk was voted out by the board.

Before leaving, he declared that OpenAI's chances of defeating DeepMind/Google were zero.

People close to Altman, however, claim Musk was simply jealous that Altman had stolen his thunder in AI, and cared more about defeating OpenAI than about AI safety. Those close to Musk insist his safety concerns are genuine and profound, pointing to his later founding of xAI as a counterweight to OpenAI.

But regardless, driving away the "dictator" Musk was definitely the first step in Altman's ascension to power.

Step Two: Shedding the Cocoon, Pursuing Maximum Profit

In 2019, OpenAI received $1 billion from Microsoft to continue developing "good" AI.

When you take $1 billion, you must repay the "patron", and that obligation made some veterans uneasy.

But Altman was quite flexible: he clung not to the reputation of a non-profit organization, but to its shell.

He creatively constructed a brand-new architecture. On one hand, OpenAI could operate like a regular company, raising funds and issuing employee equity; on the other, investors' returns were capped.

Essentially, OpenAI became a for-profit company controlled by a non-profit board of directors.
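
The capped-return mechanism boils down to simple arithmetic. The sketch below is illustrative only: the 100x default reflects the multiple publicly reported for OpenAI's earliest backers, and the function name is an assumption, not anything from OpenAI's actual agreements.

```python
def investor_payout(invested: float, gross_return: float,
                    cap_multiple: float = 100.0) -> float:
    """Amount an investor receives under a capped-profit structure.

    Anything above cap_multiple times the original investment flows
    back to the controlling non-profit. The 100x default is the figure
    reported for OpenAI's earliest investors; later rounds reportedly
    carry lower caps.
    """
    cap = invested * cap_multiple
    return min(gross_return, cap)

# A $1M investment that would gross $250M is capped at $100M;
# the remaining $150M accrues to the non-profit.
print(investor_payout(1_000_000, 250_000_000))  # 100000000.0
```

Under this scheme the upside for investors is large but finite, which is exactly what made the structure palatable as a "non-profit-controlled" company.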

This sounds unstable, and internal team divisions began to gradually appear.

When Dario Amodei founded Anthropic in 2021, he said:

"There was a group of people at OpenAI who, after we created GPT-2 and GPT-3, had very strong beliefs about two things. One was that the more computational resources you put into these models, the better they would get, almost without limit. I think that view is now more widely accepted, but we were among the earliest believers.

The second point was that something beyond just scaling up models was needed, which was alignment or safety. Because just increasing computational resources doesn't tell the models what their values should be. So we held onto this idea and started our own company."

Anthropic does appear "safer" and has put considerable effort into accuracy. For example, it built a set of complex factual questions targeting known weaknesses in models, grading each answer as correct, incorrect (a hallucination), or an admission of "I don't know". Accordingly, Claude 3 can say it does not know an answer rather than supply a wrong one. Beyond more accurate responses, Claude 3 can even "cite", pointing to the exact sentences in reference material that verify its answers.
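
The three-way grading described above can be sketched as a small evaluation loop. This is a minimal illustration, not Anthropic's actual harness: the `grade_answer` helper and its string-matching rules are assumptions made for the example.

```python
from collections import Counter

def grade_answer(answer: str, reference: str) -> str:
    """Classify a model answer against a known reference fact.

    Three buckets, mirroring the scheme described above:
    'correct', 'incorrect' (hallucination), or 'abstain'
    ("I don't know"). Substring matching is a deliberate
    simplification of real answer grading.
    """
    normalized = answer.strip().lower()
    if "i don't know" in normalized or "i do not know" in normalized:
        return "abstain"
    return "correct" if reference.lower() in normalized else "incorrect"

answers = [
    ("The capital of Australia is Canberra.", "Canberra"),
    ("The capital of Australia is Sydney.", "Canberra"),
    ("I don't know the answer to that.", "Canberra"),
]
tally = Counter(grade_answer(a, ref) for a, ref in answers)
print(tally)
```

The point of the "abstain" bucket is that declining to answer is scored differently from hallucinating, which is what rewards a model for saying "I don't know".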

But just as OpenAI has to repay Microsoft, Anthropic also has to repay Amazon. It's almost impossible to run a large AI company ethically.

Recently, Anthropic was exposed for, among other things, scraping millions of pages of website data within 24 hours.

"Questioning, understanding, becoming" may be the necessary path for large model startups.

Step Three: Purging Traitors, "Donning the Yellow Robe"

In the Chenqiao Mutiny, Zhao Kuangyin, later Emperor Taizu of Song, was secretly draped with a yellow robe by mutinous soldiers while he slept soundly at midnight. The next day, he "reluctantly" accepted everyone's coercion. It was, after all, the will of the people.

In the OpenAI palace coup last November, a Silicon Valley version of donning the yellow robe was also staged. Hundreds of OpenAI employees signed a joint letter demanding that all the "rebellious" board members resign and that Altman be reinstated; otherwise, the signatories would act, "possibly joining Altman and Brockman's newly established subsidiary at Microsoft".

This kind of prestige and influence is what every CEO dreams of.

Wait, there's another version of the story.

In fact, besides the parade of heart emojis, there was another key factor in this turmoil: money.

Just before the coup, OpenAI had organized a stock sale event for employees, giving everyone a chance to cash out part of their equity. As a result, before they could get their hands on the money, the boss was ousted.

Some investors said that if Altman did not return, they would suspend the tender offer. The prospect of cashing out and retiring early was suddenly ruined by the "villain" Ilya Sutskever! Who wouldn't be angry?

Therefore, signing the petition really was the will of the people. And when 95% of the colleagues around you have signed in support, you are likely to sign too.

As for why Ilya Sutskever's coup failed and he was forced out: beyond misreading human nature, it also came down to Altman's formidable skill at wielding power, as detailed below.

2. Portrait of the "New King": Autocracy, Deception, Profiteering, Laissez-faire

Let's go back to 2016. At the time, OpenAI's office was Brockman's private apartment: sofas, kitchen cabinets, even beds served as employees' workstations. This unremarkable place gathered twenty of the world's top AI minds.

At that time, Altman and Musk were rarely present; it was Brockman and Ilya Sutskever who held the team together.

Ilya Sutskever was a leader in the AI field, while Brockman was seen as the pillar of OpenAI's business operations.

Employees remember walking the streets of San Francisco with Ilya Sutskever, deep in discussion of the big questions, wondering whether they were on the right research path. Ilya Sutskever had advanced insight into AI and could explain complex technical concepts with simple analogies, such as comparing a neural network to a special kind of computer program or circuit.

Even at the beginning of OpenAI's establishment, Ilya Sutskever realized that AI's great leap forward would not come from a specific adjustment or new invention, but from the accumulation of massive data, like continuously injecting fuel into an engine.

It was also for this reason that when Google's Transformer paper was published in 2017, Ilya Sutskever was able to lead OpenAI to promptly explore and adopt the Transformer architecture, becoming one of the pioneers in the industry to adopt this advanced technology.

Brockman's diligence is well-known. A former employee recalled that every morning when arriving