March 28, 2019
Google is forming the Advanced Technology External Advisory Council (ATEAC), an external eight-member advisory group to “consider some of the most complex challenges [in AI],” such as facial recognition and fairness. The move comes about a year after Google issued a charter stating its AI principles, and months after Google said it would not provide “general-purpose facial recognition APIs” before the ATEAC addresses relevant policy issues. The advisory group will hold four meetings in 2019, starting in April.
VentureBeat reports that University of Oxford philosopher and digital ethics expert Luciano Floridi; former U.S. deputy secretary of state William Joseph Burns; Trumbull Unmanned chief executive Dyan Gibbens, whose company makes drones; and Heinz College professor of information technology and public policy Alessandro Acquisti are among the ATEAC members.
Google said it will publish a summary report of the ATEAC’s findings by the end of the year. Google senior vice president of global affairs Kent Walker wrote in a blog post that, “in addition to consulting with the experts on ATEAC, we’ll continue to exchange ideas and gather feedback from partners and organizations around the world.”
Last June, Google published its “seven guiding AI Principles,” which state that the company will not pursue projects that “(1) aren’t socially beneficial, (2) create or reinforce bias, (3) aren’t built and tested for safety, (4) aren’t ‘accountable’ to people, (5) don’t incorporate privacy design principles, (6) don’t uphold scientific standards, and (7) aren’t made available for uses that accord with all principles.”
The company has long had internal AI ethics review teams, including one “consisting of researchers, social scientists, ethicists, human rights specialists, policy and privacy advisors, and legal experts who handle initial assessments and ‘day-to-day operations’, and a second group of ‘senior experts’ … who provide technological, functional, and application expertise.” Yet another internal council, composed of senior executives, looks at more “complex and difficult issues,” including those related to the development of Google’s technologies.
Google’s reliance on internal review drew criticism when reports stated that “it contributed TensorFlow, its open source AI framework, while under a Pentagon contract — Project Maven — that sought to implement object recognition in military drones … [and] reportedly also planned to build a surveillance system that would have allowed Defense Department analysts and contractors” to identify people, buildings, and vehicles by “clicking” on them.
The Project Maven contract resulted in dozens of employees resigning and more than 4,000 signing an “open opposition letter.” The company has experienced other missteps, including “a biased image classifier in Google Photos that mistakenly labeled a black couple as ‘gorillas’.” Amazon and Microsoft have also made moves to address AI fairness and ethics issues.