- Major global powers unite at the UK-hosted AI safety summit, acknowledging AI’s potential risks.
- Leaders are concerned by frontier AI's rapid development without oversight; Elon Musk echoes the sentiment.
- Regulatory consensus remains challenging, with the US and UK proposing separate approaches.
November 2, 2023: In an unprecedented move, the UK, US, EU, Australia, and China, among other nations, have acknowledged the possible grave dangers associated with artificial intelligence.
This consensus emerged from the AI safety summit’s “Bletchley Declaration”, in which 28 governments came together under the British government’s initiative.
The British Prime Minister, Rishi Sunak, described the agreement as “quite incredible.”
Before addressing the summit, he emphasized the transformative potential of AI and its significance for future generations.
He insisted on developing AI responsibly, addressing the risks it presents.
Highlighting the newfound global agreement on AI dangers, Michelle Donelan, the UK’s Technology Secretary, stated, “For the first time we now have countries agreeing that we need to look not just independently but collectively at the risks around frontier AI.”
Frontier AI, the most advanced class of AI systems, may come to outperform humans at a wide range of tasks. Tesla and SpaceX chief executive Elon Musk echoed this concern, questioning humanity’s ability to control such intelligent systems.
Sunak’s decision to host this summit stems from concerns about the rapid, unmonitored progress of AI models.
During the summit, Donelan, together with US Commerce Secretary Gina Raimondo and China’s Vice-Minister of Science and Technology, Wu Zhaohui, presented a united global stance.
Their joint appearance marked a moment of international unity, as noted by Matt Clifford, a key British official involved in organizing the summit.
China endorsed the declaration, emphasizing cooperation on AI for sustainable growth, human rights protection, and fostering public trust in AI systems. In the same spirit, Wu Zhaohui conveyed that every country, regardless of its size, should have equal rights to develop and use AI.
Subsequent AI summits are planned, with South Korea hosting one in six months and France in a year. Despite this momentum, a cohesive international AI regulatory framework remains a challenge. The UK’s hopes of transforming its AI taskforce into a global AI testing hub didn’t materialize.
Instead, Raimondo unveiled plans for an American AI Safety Institute, aiming to set industry standards for AI safety, security, and testing.
Recently, the US government required AI companies such as OpenAI and Google to disclose their AI safety testing results before releasing AI models to the public. Vice-President Kamala Harris further emphasized regulating both existing and upcoming AI models.
Debates on who should spearhead global AI regulations continued. However, both US and UK representatives downplayed any perceived rift, emphasizing their collaborative intentions. Clifford reaffirmed the strength of the US-UK partnership.
The EU is currently working on an AI regulation bill, targeting live facial recognition among other technologies.
Meanwhile, Donelan clarified that the UK won’t announce an AI bill in the upcoming King’s Speech, but emphasized that the UK’s leadership role in initiating global dialogue on AI should be recognized.