AI Regulation’s First Testing Ground Is the European Union

Artificial intelligence and its potential to harm consumers have been much in the spotlight, now more than ever in Europe. Several Big Tech executives are in Europe before heading to Davos for the annual World Economic Forum, and some, such as Microsoft president Brad Smith, are meeting with the European Union’s new competition chief Margrethe Vestager. Under the European Commission’s new president, Ursula von der Leyen, new rules regulating the free flow of data and competition are under consideration.

The Wall Street Journal reports that Alphabet chief executive Sundar Pichai also plans to meet Vestager and Frans Timmermans, the Commission vice president in charge of the Green Deal, before giving a speech on “the promise and perils of the digital age.” The EU’s Green Deal is “a set of policies aimed at transforming the continent’s economy, including digital services such as cloud computing, into one that curbs greenhouse gas emissions.”

Pichai stated that “sensible regulation must also take a proportionate approach” to artificial intelligence, “balancing potential harms with social opportunities.” “There is no question in my mind that artificial intelligence needs to be regulated,” he said. “The question is how best to approach this.”

Noting that “proposals from the European Commission … will likely provide an early testing ground for how these debates will play out,” WSJ reports that one such proposal, due out by the end of 2020, is the Digital Services Act, which “would update — and perhaps abolish — decades-old protections that digital intermediaries now enjoy against some liability for harmful activity on their platforms.”

The U.K. is pursuing similar legislation on what it dubs “Online Harms,” and France is “debating a new law to combat hate speech.” EDiMA, the lobbying group representing Apple, Amazon, Facebook, Google and Microsoft, among other Big Tech companies, proposed that “new online-liability [rules] be coupled with a broader incentive framework that encourages companies to take action against content deemed inappropriate.”

“We’re willing to take our responsibility, but there has to be a balance,” said EDiMA director general Siada El Ramly. Big Tech companies see opportunities to “profit through the use of deep learning and neural networks that can spot subtle patterns much more quickly than humans … [enabling] businesses to save resources and time, with applications ranging from lowering factories’ power consumption to headhunting departments scanning resumes automatically.”

“We believe in a targeted regulatory intervention, rather than a one-size-fits-all” approach, said Christian Borggreen of the U.S.-based lobbying group Computer & Communications Industry Association (CCIA), which sent a letter to Vestager proposing a “risk based” approach to AI regulation. Pichai said that “the EU should start in part by building on existing rules, such as the General Data Protection Regulation, which already puts restrictions on the use of automated decision-making about individuals … [and] also urged the EU and U.S. to align their regulatory approaches.”