OpenAI Launches a Task Force to Control Superintelligent AI

OpenAI believes artificial intelligence exceeding human intelligence “could arrive this decade.” The company uses the term “superintelligence rather than AGI to stress a much higher capability level,” and warns that while this new cognition holds great promise, it will not necessarily be benevolent. Preparing for the worst, OpenAI has formed an internal unit charged with developing ways to keep superintelligent AI in check. Led by OpenAI’s Ilya Sutskever and Jan Leike, the Superalignment Team will work toward “steering or controlling a potentially superintelligent AI and preventing it from going rogue.”

“Superintelligence will be the most impactful technology humanity has ever invented and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” Sutskever and Leike wrote in a blog post that asks, “How do we ensure AI systems much smarter than humans follow human intent?”

To solve that problem within four years, the Superalignment Team will have access to 20 percent of OpenAI’s compute power and is looking for “ML researchers and engineers to join us,” explains the post, which closes with links to company job listings related to the work.

“The high-level goal is to train AI systems using human feedback, train AI to assist in evaluating other AI systems and ultimately to build AI that can do alignment research,” TechCrunch writes.

“‘Alignment research’ refers to ensuring AI systems achieve desired outcomes,” according to TechCrunch, which adds that “it’s OpenAI’s hypothesis that AI can make faster and better alignment research progress than humans can.”
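For readers unfamiliar with the first step in that pipeline, the standard recipe behind “training AI systems using human feedback” is to fit a reward model to human preference comparisons and then optimize the main model against it. The sketch below is purely illustrative, with made-up embeddings and a toy PyTorch model rather than anything from OpenAI’s codebase; it shows only the core idea, a Bradley-Terry-style loss that pushes the score of a human-preferred response above that of a rejected alternative.

```python
# Illustrative sketch of learning from human feedback: a reward model is fit
# to human preference pairs, then used to score model outputs. All names,
# dimensions, and data here are hypothetical, not OpenAI's implementation.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a response embedding to a scalar 'human preference' score."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

# Toy preference data: for each pair, a human preferred `chosen` over `rejected`.
embed_dim = 64
chosen = torch.randn(128, embed_dim)
rejected = torch.randn(128, embed_dim)

model = RewardModel(embed_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    # Bradley-Terry style loss: push the chosen response's score above the
    # rejected one's, so the model internalizes the human ranking.
    loss = -torch.nn.functional.logsigmoid(
        model(chosen) - model(rejected)
    ).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final preference loss: {loss.item():.4f}")
```

In a full system, the trained reward model’s scores would then drive reinforcement learning on the language model itself; the later steps TechCrunch describes, AI-assisted evaluation and AI-conducted alignment research, build on this same scoring loop.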

“As we make progress on this, our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study and develop better alignment techniques than we have now,” Leike and colleagues John Schulman and Jeffrey Wu wrote in an alignment research blog post from August 2022 that predicts “human researchers will focus more and more of their effort on reviewing alignment research done by AI systems instead of generating this research by themselves.”

Engadget points out that “by focusing the attention of the public on hypothetical risks that may never materialize, organizations like OpenAI shift the burden of regulation to the horizon instead of the here and now,” noting “there are much more immediate issues around the interplay between AI and labor, misinformation and copyright that policymakers need to tackle today, not tomorrow.”

OpenAI’s Superalignment announcement “comes as governments around the world consider how to regulate the nascent AI industry,” a climate in which OpenAI CEO Sam Altman has met with what Engadget estimates to be “at least 100” U.S. lawmakers.

The CEO also embarked on a world tour to meet with global leaders, most recently in Asia.
