OpenAI CEO Sam Altman was fired for ‘outright lying,’ says former board member

A former OpenAI board member has explained why the directors made the now infamous decision to fire CEO Sam Altman last November. Speaking in an interview on The TED AI Show podcast, AI researcher Helen Toner accused Altman of lying to and obstructing OpenAI’s board, retaliating against those who criticised him, and creating a “toxic atmosphere”.

“The [OpenAI] board is a nonprofit board that was set up explicitly for the purpose of making sure that the company’s public good mission was primary — was coming first over profits, investor interests, and other things,” Toner said to The TED AI Show host Bilawal Sidhu. “But for years, Sam had made it really difficult for the board to actually do that job by, you know, withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board.”

OpenAI fired Altman on Nov. 17 last year, a shock move that blindsided many both inside and outside the company. According to Toner, the decision was not made lightly, involving weeks of intense discussion. The secrecy surrounding it was also by design, she said.

“It was very clear to all of us that as soon as Sam had any inkling that we might do something that went against him he would pull out all the stops, do everything in his power to undermine the board, to prevent us from, you know, even getting to the point of being able to fire him,” said Toner. “So we were very careful, very deliberate about who we told, which was essentially almost no one in advance other than, obviously, our legal team.”

Unfortunately for Toner and the rest of OpenAI’s board, their careful planning didn’t produce the desired result. While Altman was initially ousted, OpenAI quickly rehired him as CEO following days of outcry, accusations, and uncertainty. The company also put in place an almost entirely new board, removing those who had tried to depose Altman.

Why did OpenAI’s board fire CEO Sam Altman?

Toner didn’t specifically discuss the aftermath of that tumultuous time on the podcast. However, she did elaborate on exactly why OpenAI’s board came to the conclusion that Altman had to go.

Earlier this week, Toner and fellow former board member Tasha McCauley published an op-ed in The Economist stating that they decided to eject Altman due to “long-standing patterns of behavior.” Toner has now provided examples of said behaviour in her talk with Sidhu — including the claim that OpenAI’s own board weren’t told when ChatGPT was released, only finding out via social media.

“When ChatGPT came out [in] November 2022, the board was not informed in advance about that. We learned about GPT on Twitter,” Toner alleged. “Sam didn’t inform the board that he owned the OpenAI startup fund, even though he constantly was claiming to be an independent board member with no financial interest in the company. On multiple occasions he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.”

Toner also accused Altman of deliberately targeting her after taking objection to a research paper she had co-authored. Entitled “Decoding Intentions: Artificial Intelligence and Costly Signals,” the paper discussed the dangers of AI, and included an analysis of both OpenAI and competitor Anthropic’s safety measures. 

However, Altman reportedly considered the academic paper too critical of OpenAI and complimentary of its rival. Toner told The TED AI Show that after the paper was published in October last year, Altman began spreading lies to the other board members in an effort to have her removed. This alleged incident only further damaged the board’s trust in him, she said, as they had already been seriously discussing firing Altman by that time.


“[F]or any individual case, Sam could always come up with some kind of innocuous-sounding explanation of why it wasn’t a big deal or misinterpreted or whatever,” Toner said. “But the end effect was that after years of this kind of thing, all four of us who fired him [OpenAI board members Toner, McCauley, Adam D’Angelo, and Ilya Sutskever] came to the conclusion that we just couldn’t believe things that Sam was telling us. 

“And that’s a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company, not just like, you know, helping the CEO to raise more money. Not trusting the word of the CEO, who is your main conduit to the company, your main source of information about the company, it’s just totally, totally impossible.”

Toner stated that OpenAI’s board did make attempts to address these issues, instituting new policies and processes. However, other executives then reportedly began telling the board of their own negative experiences with Altman and the “toxic atmosphere he was creating.” This included allegations of lying and manipulation, backed up by screenshots of conversations and other documentation.

“They used the phrase ‘psychological abuse,’ telling us they didn’t think he was the right person to lead the company to [artificial general intelligence], telling us they had no belief that he either could or would change, no point in giving him feedback, no point in trying to work through these issues,” said Toner. 

OpenAI CEO accused of retaliation against critics

Toner further addressed the loud outcry from OpenAI employees against Altman’s firing. Many made social media posts in support of the ousted CEO, while over 500 of the company’s 700 employees stated they would quit if he was not reinstated. According to Toner, staff were led to believe the false dichotomy that if Altman did not return “immediately, with no accountability [and a] totally new board of his choosing,” OpenAI would be destroyed.

“I get why not wanting the company to be destroyed got a lot of people to fall in line, whether because they were in some cases about to make a lot of money from this upcoming tender offer, or just because they love their team, they didn’t want to lose their job, they cared about the work they’re doing,” said Toner. “And of course, a lot of people didn’t want the company to fall apart, us included.”

She also claimed that fear of retribution for opposing Altman may have contributed to the support he received from OpenAI’s staff.

“They had experienced him retaliate against people, retaliate against them for past instances of being critical,” Toner said. “They were really afraid of what might happen to them. So when some employees started to say, ‘wait, I don’t want the company to fall apart, like, let’s bring back Sam,’ it was very hard for those people who had had terrible experiences to actually say that for fear that if Sam did stay in power as he ultimately did, that would make their lives miserable.” 

Finally, Toner noted Altman’s turbulent work history, which first came to light after his failed ousting from OpenAI. Pointing to reports that Altman was fired from his previous role at Y Combinator due to his alleged self-interested behaviour, Toner claimed that OpenAI was far from the only organisation to have experienced such problems with him.

“And then at his job before that — which was his only other job in Silicon Valley, his startup Loopt — apparently the management team went to the board there twice and asked the board to fire him for what they called ‘deceptive and chaotic behaviour,'” Toner continued. 

“If you actually look at his track record, he doesn’t exactly have a glowing trail of references. This wasn’t a problem specific to the personalities on the board, as much as he would love to kind of portray it that way.”

Toner and McCauley are far from the only OpenAI alumni who have expressed misgivings about Altman’s leadership. Lead safety researcher Jan Leike resigned earlier this month, citing disagreements with management’s priorities and arguing that OpenAI should be more focused on issues such as security, safety, and societal impact. (Chief scientist and former board member Sutskever also resigned, though he cited his desire to work on a personal project.)

In response, Altman and president Greg Brockman defended OpenAI’s approach to safety. The company also announced this week that Altman would lead OpenAI’s new safety and security team. Meanwhile, Leike has joined Anthropic.
