Industry Leaders Caution That AI Presents ‘Risk of Extinction’

Mitigating the risk of extinction due to AI should be as much a global priority as pandemics and nuclear war, according to the nonprofit Center for AI Safety, which this week released a warning that artificial intelligence systems may pose an existential threat to humanity. Among the more than 350 executives, researchers and engineers who signed the statement are the CEOs of three leading AI firms: OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis and Anthropic’s Dario Amodei. The statement comes as rapid advancements in large language models raise fears of societal disruption through job loss and widespread misinformation.

Microsoft CTO Kevin Scott and Stability AI CEO Emad Mostaque are also among the signatories of the one-sentence statement, as are Turing Award-winning researchers Geoffrey Hinton and Yoshua Bengio, known as the “godfathers” of the current AI movement.

“Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represented a ‘coming-out’ for some industry leaders who had expressed concerns — but only in private — about the risks of the technology they were developing,” writes The New York Times.

“There’s a very common misconception, even in the AI community, that there are only a handful of doomers,” he said. “But, in fact, many people privately would express concerns about these things.”

This latest statement follows a March warning signed by Elon Musk, Steve Wozniak and more than 1,100 others, published by the Future of Life Institute. It also comes on the heels of a May White House AI summit attended by Altman, Hassabis and Amodei to talk about potential AI regulation.

Altman warned senators in a subsequent hearing that the risks posed by advanced AI are serious and require government intervention. This month, the G7 nations called for global AI standards.

Wired reports that although “everyone wants to regulate AI, no one can agree how,” though there seems to be general consensus that “the idea is to keep the algorithms in the loyal-partner-to-humanity lane, with no access to the I-am-your-overlord lane,” fueled by an urgency that humankind isn’t “closing the barn door after the robotic horses have fled.”

The EU has been developing an AI Act, while U.S. lawmakers have been criticized for acting too slowly, although regulatory agencies have been attempting to pick up the slack, according to a March piece in The New York Times that says “by failing to establish such guardrails, policymakers are creating the conditions for a race to the bottom in irresponsible AI.”

Yesterday, TechCrunch noted that the EU “has used a transatlantic trade and technology talking shop to commit to moving fast and producing a draft Code of Conduct for artificial intelligence, working with U.S. counterparts and in the hope that governments in other regions — including Indonesia and India — will want to get involved.”

“What’s planned,” notes TechCrunch, “is a set of standards for applying AI to bridge the gap, ahead of legislation being passed to regulate uses of the tech in respective countries and regions around the world.”
