The Association for Computing Machinery (ACM) U.S. Technology Policy Committee (USTPC) issued a statement on the use of facial recognition “as applied by government and the private sector,” concluding that, “when rigorously evaluated, the technology too often produces results demonstrating clear bias based on ethnic, racial, gender, and other human characteristics recognizable by computer systems.” ACM, which has 100,000 members worldwide, urged legislators to suspend its use by government and business entities.
VentureBeat reports that the USTPC called for a temporary ban “until accuracy standards for race and gender performance, as well as laws and regulations, can be put in place.” The Committee added that deploying a technology that isn’t “sufficiently mature” produces biases that “frequently can and do extend well beyond inconvenience to profound injury, particularly to the lives, livelihoods, and fundamental rights of individuals in specific demographic groups, including some of the most vulnerable populations in our society.”
Georgetown University’s Perpetual Line-Up Project has studied the use of facial recognition technology and concluded that “broad deployment of the tech will negatively impact the lives of Black people in the United States.” The American Civil Liberties Union (ACLU) and the Algorithmic Justice League have also supported halts to the use of facial recognition in the past.
ACM, one of the largest computer science organizations in the world, offers its “principles for facial recognition regulation surrounding issues like accuracy, transparency, risk management, and accountability.” They include disaggregating “system error rates based on race, gender, sex, and other appropriate demographics,” and the need for such systems to “undergo third-party audits and robust government oversight.”
Further, “people must be notified when facial recognition is in use, and appropriate use cases must be defined before deployment.” Another principle is that, “organizations using facial recognition should be held accountable if or when a facial recognition system causes a person harm.” The ACM’s entire statement can be found here.
In 2018 and 2019, the Gender Shades project and, subsequently, the Department of Commerce’s National Institute of Standards and Technology (NIST) tested major facial recognition systems and found they “exhibited race and gender bias, as well as poor performance on people who do not conform to a single gender identity.”
More recently, in the wake of Black Lives Matter protests, Amazon, IBM, and Microsoft paused or ended sales of their facial recognition systems, and members of Congress “introduced legislation that would prohibit federal employees from using facial recognition and cut funding for state and local governments who chose to continue using the technology.”
Boston has banned facial recognition systems, and Detroit police chief James Craig admitted that the facial recognition system his city uses is “inaccurate 96 percent of the time.”