Google’s AI White Paper Calls for Self-Regulation, Not Laws

After Google co-founder Sergey Brin wrote shareholders about the potential downsides of AI in April, chief executive Sundar Pichai released “guiding principles” for the company’s AI projects in June. This came after employee protests succeeded in getting Google to drop a Pentagon contract to interpret drone footage. Now, Google has released a 30-page white paper that stresses the benefits of artificial intelligence, arguing that its downsides can be avoided without more regulation “in the vast majority of instances.”

Wired reports that Google’s pitch for AI was that it “can deliver great benefits for economies and society, and support decision making which is fairer, safer and more inclusive and informed.” A host of government leaders (such as French President Emmanuel Macron) and other politicians are talking about regulating some AI-enabled technologies and outright banning others.

Charina Choi, Google’s global policy lead for emerging technologies and a co-author of the white paper, said that “one motivation of the report is to offer governments advice on where their input would be most useful.” “At this time, it’s not necessarily super obvious what things should be regulated and what shouldn’t,” she said. “The aim of this paper is to really think about: What are the types of questions that policymakers need to answer and [decisions] we as a society have to make?”

The paper suggests that “input from civil society groups and researchers outside the industry will also be needed,” and that government “rules or guidance” would be helpful in such areas as “safety certifications for some products with AI inside, like the CE mark used to indicate compliance with safety standards on products in Europe.” Smart locks that rely on biometric data are one example of such a product. Studies, including an experiment performed by the ACLU last year, reveal that “machine learning algorithms can pick up and even amplify societal biases, and that facial analysis algorithms perform better on white people than those with darker skin.”

Other tech companies express more enthusiasm for working with regulators. Microsoft has called for federal laws on facial recognition, including a “conspicuous notice” to alert consumers when it is in use. Amazon said it is “very interested” in working with “policymakers on guidance or legislation for facial recognition.” Google, meanwhile, “champions self-regulation, highlighting how it has chosen not to offer a general-purpose facial recognition service … due to concerns it could be used to ‘carry out extreme surveillance’.”

Google does ask for government assistance, however, “on when and how AI systems should explain their decisions — for example, when declaring that a person’s cancer appears to have returned,” and suggests that governments and “civil society groups” could establish “minimum acceptable standards” for “algorithmic explanations for different industries.” It also opines that people should always be “meaningfully involved” in decisions involving criminal law or important medical issues.

Experts outside Google call the paper a “positive but still preliminary step toward engaging with the challenges AI may pose to society.” Oxford Internet Institute researcher Sandra Wachter said Google’s white paper “shows the company attempting to talk more specifically, but doesn’t go very far.” “With explanations, I don’t want to see a code of conduct, I want to see hard laws, because it’s a human rights issue,” she said.