OpenAI was hacked last year, according to a new report. It didn’t tell the public for this reason.

A hacker snatched details about OpenAI’s AI technologies early last year, The New York Times reported. The cybercriminal allegedly swiped sensitive information from a discussion forum where employees chatted about the company’s latest models.

The New York Times was hush-hush about its sourcing, attributing the news to “two people familiar with the incident.” However, those sources maintain that the cybercriminal only breached the forum, not the core systems that house and power OpenAI’s AI technology.

OpenAI reportedly revealed the hack to employees during an all-hands meeting in April 2023. It also informed the board of directors. However, OpenAI executives decided against sharing the news publicly.

Why did OpenAI keep the breach under wraps?

According to The New York Times, OpenAI didn’t disclose the hack publicly because no customer information was stolen.

The company also did not report the breach to the FBI or any other law enforcement agency.


“The executives did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government,” the newspaper said.

The New York Times’ sources say that some OpenAI employees expressed fear that China-based adversaries could steal the company’s AI secrets, posing a threat to U.S. national security.

Leopold Aschenbrenner, then a member of OpenAI’s superalignment team (a unit focused on ensuring that AI doesn’t get out of control), reportedly voiced similar concerns, arguing that the company’s lax security made it an easy target for foreign adversaries.

Aschenbrenner said he was fired early this year for sharing an internal document with three external researchers for “feedback.” He maintains his firing was unfair, saying he had checked the document for sensitive information before sharing it, and adding that it’s normal for OpenAI employees to reach out to outside experts for a second opinion.

However, The New York Times points out that studies conducted by Anthropic and OpenAI reveal that AI “is not significantly more dangerous” than search engines like Google.

Still, AI companies should ensure that their security is tight. Legislators are pushing for regulations that would slap hefty fines on companies whose AI technologies cause societal harm.
