Federal Agency Reveals Bias in Facial Recognition Systems

The National Institute of Standards and Technology reported that most commercially available facial recognition systems — often used by police departments and federal agencies — are biased. The highest error rate involved Native American faces, while African-American and Asian faces were incorrectly identified 10 to 100 times more often than Caucasian faces. The systems also had more difficulty identifying female faces, and they falsely identified older people up to 10 times more often than middle-aged adults.

The New York Times reports that in order to create “one of the largest studies of its kind,” researchers availed themselves of 18 million photos from about 8.5 million people found in “U.S. mug shots, visa applications and border-crossing databases.”

They “tested 189 facial-recognition algorithms from 99 developers … [that] included systems from Microsoft, biometric technology companies like Cognitec, and Megvii, an artificial intelligence company in China.” The group did not, however, “test systems from Amazon, Apple, Facebook and Google because they did not submit their algorithms for the federal study.”

The findings validate an earlier MIT study that revealed that “facial-recognition systems from some large tech companies had much lower accuracy rates in identifying the female and darker-skinned faces than the white male faces.”

“While some biometric researchers and vendors have attempted to claim algorithmic bias is not an issue or has been overcome, this study provides a comprehensive rebuttal,” said MIT Media Lab researcher Joy Buolamwini. “We must safeguard the public interest and halt the proliferation of face surveillance.” NYT notes that China has used facial recognition “to surveil and control ethnic minority groups like the Uighurs.”

The NIST report “comes at a time of mounting concern from lawmakers and civil rights groups over the proliferation of facial recognition.” Some see it as an important tool for tracking criminals and terrorists, and tech companies “market it as a convenience that can be used to help identify people in photos or in lieu of a password to unlock smartphones.”

But fear of facial recognition’s misuse led San Francisco, Oakland and Berkeley in California and Somerville and Brookline in Massachusetts to ban government use of it. U.S. Immigration and Customs Enforcement also “came under fire for using the technology to analyze the drivers’ licenses of millions of people without their knowledge.”

“One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse,” said American Civil Liberties Union policy analyst Jay Stanley. “Government agencies including the FBI, Customs and Border Protection and local law enforcement must immediately halt the deployment of this dystopian technology.”