Global Technology Companies Sign Pledge to Foster AI Safety

Leading AI firms spanning Europe, Asia, North America and the Middle East have signed a new voluntary commitment to AI safety. The 16 signatory companies — including Amazon, Google DeepMind, Meta Platforms, Microsoft, OpenAI, xAI and China’s Zhipu AI — will publish safety frameworks outlining how they will measure the risks posed by their frontier models. “In the extreme, leading AI tech companies including from China and the UAE have committed to not develop or deploy AI models if the risks cannot be sufficiently mitigated,” according to UK Technology Secretary Michelle Donelan.

Rounding out the list are Anthropic, Cohere, G42, IBM, Inflection, Mistral, Naver, Samsung Electronics and the Technology Innovation Institute. The news was announced by the UK and South Korean governments to coincide with the second AI Safety Summit, which the two governments co-hosted in Seoul, May 21-22.

The agreement states the AI companies will “assess the risks posed by their frontier models or systems … including before deploying that model or system, and, as appropriate, before and during training,” reports Ars Technica, pointing out that “it remains unclear, however, how companies might be held to account if they fail to meet their commitments.”

“The frameworks will also outline when severe risks, unless adequately mitigated, would be ‘deemed intolerable’ and what companies will do to ensure thresholds are not surpassed,” the UK statement specifies.

“These risks include but aren’t limited to automated cyberattacks and the threat of bioweapons,” writes CNBC, noting that in response “to such extreme circumstances, companies said they plan to implement a ‘kill switch’ that would cease the development of their AI models if they can’t guarantee mitigation of these risks.”

But as Ars Technica points out, the safeguards are self-imposed, with no external enforcement mechanism behind them.

“This isn’t just about what more can the companies do, it’s also what more can the countries do,” Donelan said, as reported by Ars Technica, even as she “reiterated the UK’s stance that it was too early to consider legislation to enforce AI safety.” In March, the European Union became the first bloc to adopt comprehensive AI regulation, including business and safety rules, with the AI Act.

Speaking at the Seoul Summit, U.S. Commerce Secretary Gina Raimondo said “the U.S., UK, Japan, Canada, Singapore, and the European AI Office would work together as the founding members of a ‘global network of AI safety institutes,’” according to Wired, which notes that “the Commerce Department declined to comment on whether China had been invited to join.”

This safety accord announced in Seoul builds on the Bletchley Declaration, signed in November at the inaugural AI Safety Summit, which took place in the UK. “France is slated to hold the successor to the AI Seoul Summit in early 2025 under the new title ‘AI Action Summit,’” explains a report out of Seoul by the Center for Strategic & International Studies.

Related:
U.S. Lawmakers Advance Bill to Make It Easier to Curb Exports of AI Models, Reuters, 5/22/24
Meta’s Zuckerberg Creates Council to Advise on AI Products, Bloomberg, 5/22/24
Europe Sets Benchmark for Rest of the World with Landmark AI Laws, Reuters, 5/22/24
