Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in […]
Former OpenAI exec who quit over ‘safety concerns’ joins rival company
Former OpenAI exec Jan Leike joins Anthropic, citing safety concerns. His departure follows other key resignations.
Anthropic hires former OpenAI safety lead to head up new team
Jan Leike, a leading AI researcher who earlier this month resigned from OpenAI before publicly criticizing the company’s approach to AI safety, has joined OpenAI rival Anthropic to […]
OpenAI’s new safety committee is made up of all insiders
In light of criticism over its approach to AI safety, OpenAI has formed a new committee to oversee “critical” safety and security decisions related to the company’s projects […]
Former OpenAI execs call for more intense regulation, point to toxic leadership
Former OpenAI board members Helen Toner and Tasha McCauley published an op-ed criticizing OpenAI’s toxic leadership structure and calling for more intense regulation.
Major AI models are easily jailbroken and manipulated, new report finds
The UK’s AI Safety Institute found jailbreaking vulnerabilities in four major LLMs, hinting at larger security concerns.
UK opens office in San Francisco to tackle AI risk
Ahead of the AI safety summit kicking off in Seoul, South Korea later this week, its co-host, the United Kingdom, is expanding its own efforts in the field. […]
OpenAI’s Sam Altman and Greg Brockman respond to safety leader resignation
The co-head of OpenAI’s superalignment team resigned this week. The company’s CEO and president responded.
One of OpenAI’s safety leaders quit on Tuesday. He just explained why.
In a series of posts on X (formerly Twitter) on Friday, Jan Leike, OpenAI’s co-head of alignment, gave the public some hints as to why he left.