EU Releases Its Draft Policy to Regulate Artificial Intelligence

The European Union issued a 108-page draft policy establishing rules to govern the use of artificial intelligence, setting limits on its use in everything from bank lending and school enrollment to self-driving cars and hiring decisions. Use of artificial intelligence by law enforcement and court systems, considered “high risk” because of its potential to threaten safety and fundamental rights, is also regulated. Live facial recognition in public spaces would be banned, with exceptions for national security “and other purposes.”

The New York Times reports that “the rules have far-reaching implications for major technology companies that have poured resources into developing artificial intelligence, including Amazon, Google, Facebook and Microsoft, but also scores of other companies that use the software to develop medicine, underwrite insurance policies and judge credit worthiness.”

The new regulations will likely take “several years” to wend their way through the EU policymaking process. But once they do, violators “could face fines of up to 6 percent of global sales.” “On artificial intelligence, trust is a must, not a nice-to-have,” said European Commission executive vice president Margrethe Vestager, who oversees digital policy. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”

The regulations would require companies providing AI in “high-risk areas” to supply regulators with “proof of its safety, including risk assessments and documentation explaining how the technology is making decisions,” and to “guarantee human oversight in how the systems are created and used.”

Companies must also make clear when content is computer-generated, whether human-like conversation in customer service or “hard-to-detect manipulated images.”

The EU has already enacted strict data privacy regulations, and other governments are tightening tech oversight as well: in the U.S., President Joe Biden has “filled his administration with industry critics,” while the UK is “creating a tech regulator to police the industry.” India is “tightening oversight of social media,” and China is scrutinizing “domestic tech giants like Alibaba and Tencent.”

Since the release of the EU draft law, “many industry groups expressed relief that the regulations were not more stringent, while civil society groups said they should have gone further.”

At the London-based Ada Lovelace Institute, which studies the ethical use of AI, director Carly Kind said, “there has been a lot of discussion over the last few years about what it would mean to regulate AI … this is the first time any country or regional bloc has tried.” The Times notes, however, that “many had concerns that the policy was overly broad and left too much discretion to companies and technology developers to regulate themselves.”

The debate has also roiled Silicon Valley: “in December, a co-leader of a team at Google studying ethical uses of the software said she had been fired for criticizing the company’s lack of diversity and the biases built into modern artificial intelligence software.” And this week the Federal Trade Commission “warned against the sale of artificial intelligence systems that use racially biased algorithms, or ones that could ‘deny people employment, housing, credit, insurance or other benefits.’”

Related:
Europe’s Proposed Limits on AI Would Have Global Consequences, Wired, 4/21/21
