Politicians and Tech Leaders Gather to Discuss Regulating AI

A new government agency to license AI models above a certain capability threshold, regular testing, and independent audits were among the ideas to emerge from a three-hour Senate Judiciary subcommittee hearing exploring how the government might regulate the nascent field. OpenAI co-founder and CEO Sam Altman advocated for all of the above, stressing the need for external validation by independent experts, strict cybersecurity, and a “whole of society approach” to combating disinformation. While Altman emphasized AI’s advantages, he warned that “if this technology goes wrong, it can go quite wrong.”

“What I’m hearing today is ‘stop me before I innovate again,’” said Senator Dick Durbin (D-Illinois) of the panel, which in addition to Altman included IBM chief privacy and trust officer Christina Montgomery and Gary Marcus, professor emeritus of psychology and neural science at NYU.

“I can’t recall when we’ve had people representing large corporations or the private sector come before us and plead with us to regulate them,” said Durbin.

Montgomery called for “precision regulation” that would set rules governing AI in specific use cases, rather than regulating the technology in general. “A chatbot that can share restaurant recommendations or draft an email has different impacts on society than a system that supports decisions on credit, housing, or employment,” she testified at the hearing, titled “Oversight of AI: Rules for Artificial Intelligence.”

“How the technology might affect elections, intellectual-property theft, news coverage, military operations and even diversity and inclusion initiatives were among the topics covered,” The Wall Street Journal summarized, noting that “the hearing demonstrated the wide-ranging concerns prompted by rapid consumer adoption of AI systems like ChatGPT,” which rocketed to 100 million consumer users in two months.

That rapid adoption sparked an industry race pitting OpenAI investor Microsoft against Google and its AI technologies, including the chatbot Bard. AI has drawn bipartisan interest, and the hearing is part of a series of learning sessions.

On Monday, “Altman met privately with about 60 House lawmakers from both parties” and “earlier this month, he attended a White House sit-down with the chief executives of Google and Microsoft and Vice President Kamala Harris, who told the companies they have a responsibility to ensure their products are safe,” WSJ notes.

The senators seemed intent on not repeating their mistake of failing to enact stricter social media regulation earlier in that medium’s growth. Section 230 of the Communications Decency Act — which provides a liability shield to social platforms — was invoked by Durbin and others, who said it will not apply to AI, whose makers will be held accountable.

“Congress failed to meet the moment on social media,” said Senator Richard Blumenthal (D-Connecticut). “Now we have the obligation to do it on AI before the threat becomes real.”

Ownership of the data that AI trains on was a topic of discussion, with Altman saying “people should be able to opt out of having their data be used to train those models,” according to Politico. That subject had its own hearing Wednesday at a House session on “AI and Copyright Law.”
