The New Yorker’s In-Depth Investigation: Why Do OpenAI Insiders Consider Altman Untrustworthy?

Byline: Little Cookie, Deep Tide TechFlow

In the fall of 2023, OpenAI Chief Scientist Ilya Sutskever sat in front of his computer and finished a 70-page document.

The document was compiled from Slack message logs, HR communication records, and internal meeting minutes, all to answer one question: can Sam Altman, the person who may come to control the most dangerous technology in human history, really be trusted?

Sutskever’s answer appears in the very first line of the first page, as the heading of a list: “Sam demonstrates a consistent pattern of behavior…”

First: lying.

Now, two and a half years later, investigative reporters Ronan Farrow and Andrew Marantz have published a massive investigation in The New Yorker. They interviewed more than 100 people involved, obtained internal memos that had never before been made public, and secured more than 200 pages of private notes that Dario Amodei, co-founder of Anthropic, kept during his years at OpenAI. The story these documents piece together is far uglier than the boardroom drama of 2023: how OpenAI turned, step by step, from a nonprofit created to keep humanity safe into a commercial machine, with nearly every safety guardrail dismantled by the same person.

Amodei’s conclusion in the notes is even more direct: “OpenAI’s problem is Sam himself.”

The “original sin” built into OpenAI’s structure

To grasp the weight of this report, you first need to understand just how unusual OpenAI is.

In 2015, Altman and a group of Silicon Valley elites did something with almost no precedent in commercial history: they used a nonprofit organization to develop what could be the most powerful technology ever built. The board’s responsibilities were spelled out clearly: safety comes before the company’s success, even before the company’s survival. Put simply, if OpenAI’s AI one day becomes dangerous, the board is obligated to shut the company down with its own hands.

The entire architecture rests on one assumption: that the person who controls AGI is extremely honest.

What if they bet on the wrong person?

The report’s core bombshell is that 70-page document. Sutskever is no office politician; he is one of the world’s top AI scientists. But by 2023, he had become increasingly convinced of one thing: Altman was continuously lying to executives and the board.

A specific example: at a board meeting in December 2022, Altman assured the board that multiple features of the upcoming GPT-4 had already passed safety review. Board member Helen Toner asked to see the approval documents, only to find that the two most controversial features (user-defined fine-tuning and personal-assistant deployment) had never been approved by the safety panel at all.

Something even stranger happened in India. An employee reported a violation to another board member: Microsoft had released an early version of ChatGPT in India ahead of schedule, without completing the required safety review.

Sutskever’s memo records another incident: Altman told then-CTO Mira Murati that the safety approval process wasn’t that important, since the company’s general counsel had already signed off. When Murati checked with the general counsel, the reply was: “I don’t know where Sam got that impression.”

Dario Amodei’s 200-plus pages of private notes

Sutskever’s document reads like a prosecutor’s indictment. Amodei’s more than 200 pages of notes read more like the diary of a witness at the scene of the crime.

During his years at OpenAI as the person in charge of safety, Amodei watched first-hand as the company retreated, step by step, under commercial pressure. His notes record a key detail from the 2019 Microsoft investment deal: he had inserted a “merge and assist” clause into OpenAI’s charter, stating roughly that if another company found a safer path to AGI, OpenAI should stop competing and instead help that company. It was the safety assurance he valued most in the entire deal.

When the deal was about to be signed, Amodei discovered that Microsoft had obtained veto power over that clause. What did that mean? Even if a competitor one day found a better way, Microsoft could block OpenAI’s obligation to assist with a single word. The clause still existed on paper, but from the day the signatures went down, it was worthless.

Amodei later left OpenAI and founded Anthropic. The rivalry between the two companies comes down, at its root, to a fundamental disagreement over how AI should be developed.

The missing 20% compute pledge

The report includes a detail about OpenAI’s “superalignment” team that sends a chill down your spine.

In mid-2023, Altman emailed a Berkeley PhD student researching “deceptive alignment” (the risk that an AI acts obedient during testing, then does its own thing after deployment), saying he was deeply worried about the problem and was considering setting up a $1 billion global research prize. Greatly encouraged, the student took a leave of absence and joined OpenAI.

Then Altman changed his mind: there would be no external prize. Instead, the company set up a “superalignment” team internally and loudly announced that it would allocate “20% of existing compute” to it, a commitment potentially worth more than $1 billion. The announcement’s wording was deadly serious: if the alignment problem could not be solved, AGI might lead to “the disempowerment of humanity or even human extinction.”

Jan Leike, who was appointed to lead this team, later told reporters that this pledge itself was a very effective “talent-retention tool.”

And the reality? Four people who worked on the team or had close contact with it told reporters that the compute actually allocated was only 1% to 2% of the company’s total, and on the oldest hardware at that. The team was eventually disbanded, its mission unfinished.

When reporters asked to interview people responsible for “existential safety” research at OpenAI, the company’s PR response was both laughable and pathetic: “That’s not a… thing that actually exists.”

The sidelined CFO and the upcoming IPO

The New Yorker report was only half of that day’s bad news. The same day, The Information broke another major story: serious disagreements had emerged between OpenAI CFO Sarah Friar and Altman.

Friar had privately told colleagues she believed OpenAI wasn’t ready to go public this year, for two reasons: too much procedural and organizational work remained, and the financial risk of the $600 billion in compute spending over five years that Altman had committed to was too high. She wasn’t even sure OpenAI’s revenue growth could support those commitments.

But Altman wanted to sprint toward an IPO in the fourth quarter of this year.

Even more bizarrely, Friar no longer reported directly to Altman. Starting in August 2025, she reported instead to Fidji Simo, CEO of OpenAI’s Applications business. And Simo had just gone on medical leave the week before. Consider the picture: a company sprinting toward an IPO, a fundamental disagreement between the CEO and the CFO, a CFO who doesn’t report to the CEO, and a CFO’s boss who is on leave.

Even executives inside Microsoft have lost patience, saying Altman “distorts the facts, reneges, and keeps overturning agreements that have already been reached.” One Microsoft executive went further: “I think there’s a certain probability he’ll ultimately be remembered as a Bernie Madoff or SBF-level con artist.”

A “two-faced” portrait of Altman

A former OpenAI board member described two of Altman’s traits to reporters, in what may be the harshest character sketch in the entire report.

The board member said Altman displays a striking combination: in every face-to-face interaction, an intense desire to please the other person and be liked by them; and, at the same time, a near-sociopathic indifference to the consequences that deceiving people might bring.

It is extremely rare for both traits to appear in the same person. For a salesperson, it is the perfect talent.

The report offers a well-chosen comparison: Steve Jobs was famous for his “reality distortion field”; he could make the whole world believe in his vision. But even Jobs never told customers, “If you don’t buy my MP3 player, the people you love will die.”

Altman has said something very much like that about AI.

A CEO’s integrity problem: why it becomes everyone’s risk

If Altman were merely the CEO of an ordinary tech company, these accusations would be nothing more than entertaining business gossip. But OpenAI is not ordinary.

By its own account, it is developing what could be the most powerful technology in human history: a technology that could reshape the global economy and the labor market (OpenAI itself just released a policy white paper on AI-driven unemployment), and that could also be used to manufacture large-scale biological weapons or launch cyberattacks.

The safety guardrails have, one after another, been rendered meaningless. The founding nonprofit mission has given way to an IPO sprint. Both the former chief scientist and the former head of safety concluded that the CEO is “not trustworthy.” Partners compare him to SBF. On what basis, then, does this one person get to unilaterally decide when to release an AI model that could change the fate of humanity?

After reading the report, Gary Marcus (professor emeritus of psychology and neural science at New York University and a longtime advocate for AI safety) wrote a single line: if a future OpenAI model could create large-scale biological weapons or launch catastrophic cyberattacks, would you really feel comfortable letting Altman alone decide whether to release it?

OpenAI’s response to The New Yorker was terse: “Most of this article recycles events that have already been reported, relying on anonymous claims and selected anecdotes from sources who clearly have personal motives.”

A very Altman-style response: no detailed rebuttal of the specific allegations, no denial of the memos’ authenticity, only an attack on the sources’ motives.

On the corpse of a nonprofit, a money tree grows

OpenAI’s decade, written as a story outline, looks like this:

A group of idealists concerned about AI risk creates a mission-driven nonprofit. The organization makes extraordinary technical breakthroughs. The breakthroughs attract huge amounts of capital. Capital needs returns. The mission begins to step aside. The safety team is disbanded. People who raise doubts are purged. The nonprofit structure is converted into a for-profit entity. The board that once had the power to shut down the company is now filled with the CEO’s allies. The company that once promised to set aside 20% of compute to protect human safety now has its PR people saying, “That’s not a… thing that actually exists.”

More than a hundred firsthand participants in this story gave its protagonist the same label: “not bound by the truth.”

And he is preparing to take this company public at a valuation of more than $850 billion.

The information in this article is synthesized from public reporting by multiple outlets, including The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, and The Information.
