Google published a set of principles guiding how the company believes AI should be regulated. Here’s what it covers and what it doesn’t.
Women in AI: Sarah Kreps, professor of government at Cornell
To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed […]
UK government urged to adopt more positive outlook for LLMs to avoid missing ‘AI goldrush’
The U.K. government is taking too “narrow” a view of AI safety and risks falling behind in the AI goldrush, according to a report released today. The report, […]
OpenAI buffs safety team and gives board veto power on risky AI
OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. A new “safety advisory group” will sit above the technical teams and make […]
Europe’s AI Act talks head for crunch point
Negotiations between European Union lawmakers tasked with reaching a compromise on a risk-based framework for regulating applications of artificial intelligence appear to be on a knife edge. […]
China’s tech vice minister calls for ‘equal rights’ at global AI summit in UK
Despite the ongoing technological decoupling between China and the West, both sides are converging to discuss the threat that runaway artificial intelligence may pose to humanity. Wu Zhaohui, […]
Existential risk? Regulatory capture? AI for one and all? A look at what’s going on with AI in the UK
The promise and pitfalls of artificial intelligence are a hot topic these days. Some say AI will save us: it’s already on the case to fix pernicious health […]