Amazon, National Science Foundation to Further AI Fairness

Amazon is teaming up with the National Science Foundation (NSF), pledging up to $10 million in research grants over the next three years to further fairness in artificial intelligence and machine learning. More specifically, the grants will target “explainability,” potential negative biases and effects, mitigation strategies for those effects, and validation of fairness and inclusivity. The goal is to encourage “broadened acceptance” of AI, thus enabling the U.S. to make better progress on the technology’s evolution.

VentureBeat reports that, “the two organizations expect proposals, which they’re accepting from today until May 10, to result in new open source tools, publicly available data sets, and publications.”

“With the increasing use of AI in everyday life, fairness in artificial intelligence is a topic of increasing importance across academia, government, and industry,” wrote Prem Natarajan, vice president of natural understanding in Amazon’s Alexa AI group. “Here at Amazon, the fairness of the machine learning systems … is critical to establishing and maintaining our customers’ trust.”

The NSF will make the awards “independently and in accordance with its merit review process,” and Amazon will “provide partial funding for the program,” which will continue annually through 2021. Jim Kurose, NSF assistant director for Computer and Information Science and Engineering, stated that, “this program will support research related to the development and implementation of trustworthy AI systems that incorporate transparency, fairness, and accountability into the design from the beginning.”

By establishing the program, Amazon is joining other “corporations, academic institutions, and industry consortiums engaged in the study of ethical AI,” whose work has “produced algorithmic bias mitigation tools that promise to accelerate progress toward more impartial models.” Among those are Facebook’s Fairness Flow; tools from Accenture and Microsoft; Google’s What-If tool based on a “bias-detecting feature of the TensorBoard web dashboard” for TensorFlow; and IBM’s cloud-based, automated AI Fairness 360.

Massachusetts Institute of Technology researchers also just released a study that found Rekognition, Amazon Web Services’ (AWS) image and video analysis API, “incapable of reliably determining the sex of female and darker-skinned faces in certain scenarios.” The researchers stated that, in experiments conducted throughout 2018, Rekognition’s facial analysis feature mistakenly identified pictures of women as men 19 percent of the time, and pictures of darker-skinned women as men 31 percent of the time.

Amazon disputes the results of the research, saying that in its own tests of “an updated version of Rekognition,” it found “no difference” in gender classification accuracy across all ethnicities. Amazon also criticized the researchers for not making clear “the confidence threshold used in the experiments.”
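Amazon’s point about confidence thresholds matters because Rekognition’s facial analysis returns a confidence score alongside each attribute prediction, and callers are expected to discard low-confidence results. A minimal sketch of that filtering is below; the response dictionary mirrors the general shape of a `DetectFaces`-style result, but the faces and scores are invented for illustration:

```python
# Sketch: filtering facial-analysis predictions by a confidence threshold.
# The response shape loosely mirrors Rekognition's DetectFaces output;
# all values below are invented for illustration.

sample_response = {
    "FaceDetails": [
        {"Gender": {"Value": "Female", "Confidence": 99.2}},
        {"Gender": {"Value": "Male", "Confidence": 62.5}},   # low confidence
        {"Gender": {"Value": "Female", "Confidence": 97.8}},
    ]
}

def confident_predictions(response, threshold=90.0):
    """Keep only gender predictions at or above the confidence threshold."""
    return [
        face["Gender"]["Value"]
        for face in response["FaceDetails"]
        if face["Gender"]["Confidence"] >= threshold
    ]

# At a 90% threshold the low-confidence prediction is dropped;
# at 50% it is kept — which is why the threshold a study uses
# can change the error rates it reports.
print(confident_predictions(sample_response))        # ['Female', 'Female']
print(confident_predictions(sample_response, 50.0))  # ['Female', 'Male', 'Female']
```

The threshold value of 90.0 here is an arbitrary example, not a figure from Amazon or the MIT study.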