Biden Administration Hosts Tech Elite at White House AI Meet

The Biden administration has committed $140 million to create seven new artificial intelligence research hubs, bringing the national total to 25. The announcement coincided with Vice President Kamala Harris’ Thursday meeting with representatives from Alphabet, Anthropic, Microsoft, and OpenAI and new White House guidance on AI development. The developments are part of an effort to curtail security risks associated with AI and ensure that it is implemented responsibly. “The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products,” Harris said following the meeting, which included a drop-in by President Biden.

Among those who spent approximately two hours with Harris in the White House Roosevelt Room were Sundar Pichai, CEO of Alphabet and Google; Microsoft CEO Satya Nadella; OpenAI co-founder and chief Sam Altman; and Anthropic co-founder and chief Dario Amodei, according to The New York Times, which said “some of the executives were accompanied by aides with technical expertise, while others brought public policy experts.”

Extending her message beyond the White House technology guests, Harris said in a statement that “every company must comply with existing laws to protect the American people.” While AI has “the potential to improve people’s lives and tackle some of society’s biggest challenges,” it also can potentially “increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy,” Harris noted, stressing the need for public and private partnership moving forward.

It was the administration’s first gathering of AI leaders since the release of OpenAI’s ChatGPT propelled machine learning into the mainstream of the U.S. conversation, triggering a race among corporations hoping to dominate the field.

Although, on a practical level, only a small percentage of consumers have actually tried ChatGPT or Alphabet’s competing chatbot Google Bard, it’s impossible to avoid reports of AI’s impending social impact — replacing human workers, transforming economies and supercharging criminal activity. Critics say the most powerful AI systems lack transparency, and they worry about the systems’ tendency to discriminate and spread disinformation.

Last week, pioneering researcher Geoffrey Hinton, “known as a ‘godfather’ of AI, resigned from Google so he could speak openly about the risks posed by the technology,” NYT writes, noting that “even as governments call for tech companies to take steps to make their products safe, AI companies and their representatives have pointed back at governments, saying elected officials need to take steps to set the rules.”

While the EU and China have been actively moving to regulate AI, the U.S. has lagged. Several bills have been introduced but have stalled. “There is a first mover advantage in policy as much as there is a first mover advantage in the marketplace,” former FCC Chairman Tom Wheeler told NYT, adding that all eyes are on the U.S.

Last month, Senator Michael Bennet (D-Colorado) introduced a bill to “create an AI task force focused on protecting citizens’ privacy and civil rights,” according to Wired. In April, “four U.S. regulatory agencies including the Federal Trade Commission and Department of Justice jointly pledged to use existing laws to protect the rights of American citizens in the age of AI.”
