OpenAI’s response to the AI executive order? Silence.

Many leading AI companies issued statements in response to President Biden’s executive order, but OpenAI has yet to say anything.
[Image: OpenAI CEO Sam Altman grimacing in front of a microphone]

In the wake of President Biden’s executive order on Monday, AI companies and industry leaders have weighed in on this watershed moment in AI regulation. But the biggest player in the AI space, OpenAI, has been conspicuously quiet.

The Biden-Harris administration’s far-ranging executive order addressing the risks of AI builds upon voluntary commitments secured by 15 leading AI companies. OpenAI was among the first batch of companies to promise the White House safe, secure, and trustworthy development of its AI tools. Yet the company hasn’t issued any statement on its website or X (formerly known as Twitter). CEO Sam Altman, who regularly shares OpenAI news on X, hasn’t posted anything either.

OpenAI has not responded to Mashable’s request for comment.

Of the 15 companies that made voluntary commitments to the Biden administration, the following have issued public statements, all of them expressing support for the executive order: Adobe, Amazon, Anthropic, Google, IBM, Microsoft, Salesforce, and Scale AI. Nvidia declined to comment.

Beyond the crickets from OpenAI, Mashable has yet to hear from Cohere, Inflection, Meta, Palantir, and Stability AI. But given OpenAI and Altman’s publicity tour proclaiming the urgent risks of AI and the need for regulation, the company’s silence is all the more noticeable.

Altman has been vocal about the threat posed by generative AI, including that made by his own company. In May, Altman, along with technology pioneers Geoffrey Hinton and Bill Gates, signed an open letter stating, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

At a Senate hearing in May, Altman expressed the need for AI regulation: “I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” said Altman in response to an inquiry from Sen. Richard Blumenthal, D-CT, about the threat of superhuman machine intelligence.

So far, cooperation with lawmakers and world leaders has worked in OpenAI’s favor. Altman participated in the Senate’s bipartisan closed-door AI summit, giving OpenAI a seat at the table for formulating AI legislation. Shortly after Altman’s testimony, leaked documents from OpenAI showed the company lobbying for weaker regulation in the European Union.

It’s unclear where OpenAI stands on the executive order, but open-source advocates say the company already has too much lobbying influence. On Wednesday, the same day as the AI Safety Summit in the UK, more than 70 AI leaders issued a joint statement calling for a more transparent approach to AI regulation. “The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst,” the statement said.

Meta Chief AI Scientist Yann LeCun, one of the signatories, doubled down on this sentiment on X by calling out OpenAI, DeepMind (a subsidiary of Google), and Anthropic for using fear-mongering to secure favorable regulatory outcomes. “[Sam] Altman, [Demis] Hassabis, and [Dario] Amodei are the ones doing massive corporate lobbying at the moment. They are the ones who are attempting to perform a regulatory capture of the AI industry,” he posted.

Anthropic and Google leadership have both provided statements supporting the executive order, leaving OpenAI as the only company accused of regulatory capture that has yet to comment.

What could the executive order mean for OpenAI?

Many of the testing provisions in the EO relate to huge foundation models not yet on the market and future development of AI systems, suggesting consumer-facing tools like OpenAI’s ChatGPT won’t be impacted much.

“I don’t think we’re likely to see any immediate changes to any of the generative AI tools available to consumers,” said Jake Williams, a former US National Security Agency (NSA) hacker and faculty member at IANS Research. “OpenAI, Google, and others are definitely training foundation models and those are specifically called out in the EO if they might impact national security.”

So, whatever OpenAI is working on might be subjected to government testing.

As for how the executive order might directly impact OpenAI, Beth Simone Noveck, director of the Burnes Center for Social Change, said it could slow the pace at which new products and updates are released, as companies will have to invest more in research, development, and compliance.

“Companies developing large-scale language models (e.g. ChatGPT, Bard and those trained on billions of parameters of data) will be required to provide ongoing information to the federal government, including details of how they test their platforms,” said Noveck, who previously served as the first United States Deputy Chief Technology Officer under President Obama.

More than anything, the executive order signals an alignment with growing consumer expectations for greater control and protection of their personal data, said Avani Desai, CEO of Schellman, a top CPA firm that specializes in IT audit and cybersecurity.

“This is a huge win for privacy advocates as the transparency and data privacy measures can boost user confidence in AI-powered products and services,” Desai said.

So while the consequences of the executive order may not be immediate, it squarely applies to OpenAI’s tools and practices. You’d think OpenAI might have something to say about that.

White House announces new AI initiatives at Global Summit on AI Safety

Vice President Kamala Harris will reveal the US government’s new initiatives to advance the safe and responsible use of AI.
[Image: Vice President Kamala Harris delivers remarks about the Biden administration’s work to regulate artificial intelligence during an event in the East Room of the White House on October 30, 2023, in Washington, DC.]

Vice President Kamala Harris will outline several new AI initiatives today, laying out the US government’s plans to advance the safe and responsible use of machine learning technology. We already know what many of them will be.

The White House announced an executive order on AI regulation earlier this week, with the intention of protecting US citizens from the potential harm the technology can cause. It is now building further on that order, aiming to position the US as a global leader in ensuring AI is developed and used in the public interest internationally.

Currently in London to attend the Global Summit on AI Safety, Harris is scheduled to deliver her live-streamed speech on the US’ approach to AI at approximately 1:35 p.m. GMT / 9:35 a.m. ET.

“Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions,” Harris said in an excerpt from her prepared speech. “These threats are often referred to as the ‘existential threats of AI,’ because they could endanger the very existence of humanity.”

“So, the urgency of this moment must compel us to create a collective vision of what this future must be. A future where AI is used to advance human rights and human dignity; where privacy is protected and people have equal access to opportunity; where we make our democracies stronger and our world safer. A future where AI is used to advance the public interest.”

Here are the new announcements and government initiatives Harris will reveal.

1. The US is establishing a United States AI Safety Institute

The US government is establishing a United States AI Safety Institute (US AISI), which will be part of the National Institute of Standards and Technology (NIST). Created through the Department of Commerce, the US AISI will be responsible for applying NIST’s AI Risk Management Framework: developing benchmarks, best practices, and technical guidance to mitigate the risks of AI. These will then be used by regulators when developing or enforcing rules. The US AISI will also collaborate with similar institutions internationally.

2. The first draft of policy guidance for the US government’s use of AI is being made available for public comment

The US government is publishing the first draft of its policy guidance on its use of AI, with the public invited to comment. Released through the Office of Management and Budget, this policy is intended to outline tangible steps for the responsible use of AI by the US, and builds on previous guidance such as the NIST’s AI Risk Management Framework. The policy is intended for application across a wide range of departments, including health, law enforcement, and immigration, and requires that federal departments monitor the risks of AI, consult the public regarding its use, and provide an avenue of appeal to those harmed by it. 

The draft policy is posted online for public reading and comment.

3. 30 nations have joined the Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy

The US made its Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy in February, establishing standards for the lawful, responsible use and development of military AI. This included the requirement that it comply with international humanitarian law. Interestingly, a specific goal of the Political Declaration is to preserve nations’ “right to self-defense,” as well as their ability to develop and use AI for the military.

Thirty other nations have now endorsed the Declaration: Albania, Australia, Belgium, Bulgaria, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Hungary, Iceland, Ireland, Italy, Japan, Kosovo, Latvia, Liberia, Malawi, Montenegro, Morocco, North Macedonia, Portugal, Romania, Singapore, Slovenia, Spain, Sweden, and the UK.

4. 10 foundations have pledged over $200 million for public interest AI initiatives

Ten foundations are collectively committing over $200 million to fund AI initiatives intended to further the best interests of the global public — specifically workers, consumers, communities, and historically marginalised people. The foundations are also creating a funders’ network, which will coordinate such giving with the specific aim of supporting AI work that protects democracy and rights, drives innovation in the public interest, empowers workers amidst the changes being brought about by AI, improves accountability, or supports international rules regarding AI.

The 10 foundations involved are the David and Lucile Packard Foundation, Democracy Fund, the Ford Foundation, Heising-Simons Foundation, the John D. and Catherine T. MacArthur Foundation, Kapor Foundation, Mozilla Foundation, Omidyar Network, Open Society Foundations, and the Wallace Global Fund.

5. The US government will hold a hackathon to find a solution to scam AI robocalls 

The US government will host a virtual hackathon with the goal of building AI models that can detect and block the robocalls and robotexts used to scam people. The hackathon will have a particular focus on calls that use AI-generated voices.
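To make the task concrete, here is a minimal sketch of the kind of starting point a hackathon team might use: a toy text classifier that flags scam robotexts. The example messages and labels are invented placeholders, not real data, and a production system would need far larger datasets, plus audio models for the AI-voice side of the problem.

```python
# A toy scam-robotext classifier: TF-IDF features plus logistic regression.
# All example messages below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "You won a prize! Claim it now at this link",      # scam (invented)
    "URGENT: your account is locked, reply with PIN",  # scam (invented)
    "Are we still on for lunch tomorrow?",             # legitimate (invented)
    "Your package was delivered at 2pm",               # legitimate (invented)
]
labels = [1, 1, 0, 0]  # 1 = scam, 0 = legitimate

# Pipeline: turn each message into word/bigram TF-IDF features, then classify.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Claim your prize now!"]))  # expected: [1] (scam)
```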

6. The US is calling for international authentication standards for digital government messaging

The US is calling on the global community to support the development of international standards for digital and AI content produced by governments. Such standards would be aimed at helping the public identify whether or not an apparent government message is authentic, and may include labelling such as digital signatures or watermarks.
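As a rough illustration of how such authentication could work, the sketch below signs and verifies a message with a digital signature, using the Python cryptography library’s Ed25519 primitives. The message text and the choice of algorithm are assumptions for the example, not details from the US proposal.

```python
# A minimal sketch of authenticating a government message with a digital
# signature. Ed25519 and the sample message are illustrative choices only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuing agency would hold the private key and publish the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Official notice: polling locations open at 7 a.m."  # invented example
signature = private_key.sign(message)

# Anyone holding the published public key can check that the message is
# unaltered and really came from the key holder.
try:
    public_key.verify(signature, message)
    print("Authentic: signature matches the published key.")
except InvalidSignature:
    print("Warning: this message could not be verified.")
```

A watermark would serve a similar purpose for generated media, embedding a detectable mark in the content itself rather than attaching a separate signature.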

7. The US will develop a pledge committing to the responsible use of AI

Finally, the US government will work with the Freedom Online Coalition (FOC) to develop a pledge that its development and implementation of AI will incorporate responsible practices. The FOC is a group of 38 countries whose stated aim is to advance internet freedom and protect human rights online worldwide.