Legislators on both sides of the aisle agree that the United States should support the development of artificial intelligence, even as they, along with the White House, the Department of Defense and the National Institute of Standards and Technology (NIST), pursue measures to regulate it. President Biden’s Office of Science and Technology Policy (OSTP) is focused on limiting discrimination caused by algorithms, and the National Defense Authorization Act mandates that the Pentagon focus on ethics and that NIST develop standards.
Wired reports that “in the past three weeks, the Government Accountability Office [GAO] … released two reports warning that federal law enforcement agencies aren’t properly monitoring the use and potential errors of algorithms used in criminal investigations.” The reports targeted facial recognition and forensic algorithms “for face, fingerprint, and DNA analysis,” while a third GAO report “laid out guidelines for responsible use of AI in government projects.”
At Georgetown’s Center for Security and Emerging Technology (CSET), director of strategy Helen Toner said that “the bustle of AI activity provides a case study of what happens when Washington wakes up to new technology.” Almost two dozen U.S. cities have thus far banned the use of facial recognition technology, “usually citing concerns about accuracy, which studies have shown is often worse on people with darker skin.”
The GAO report on the technology “found that 20 federal agencies that employ law enforcement officers use the technology,” 14 of which sourced the technology from outside the federal government. Thirteen “did not track what systems their employees used.”
Wired says that “the GAO report appears to have increased the chances of bipartisan legislation constraining government use of face recognition,” as evidenced by last week’s hearing of the House Judiciary Subcommittee on Crime, Terrorism, and Homeland Security. Representatives Sheila Jackson Lee (D-Texas) and Andy Biggs (R-Arizona) agreed that the GAO report “underscored the need for regulations.”
The GAO said that “algorithms for face recognition, latent fingerprint analysis, and DNA profiling from degraded or mixed samples can help investigators … [but suggested] lawmakers support new standards on training and appropriate use of such algorithms to avoid errors and increase transparency in criminal justice.”
Representative Mark Takano (D-California), who reintroduced a bill drafted in 2019 to direct NIST to establish standards and a testing program for forensic algorithms, noted that “everything from the data input, to the design of the algorithm, to the testing of it can lead to disparate outcomes for people in the real world.”
The GAO initiated its third report, “on responsible use of AI for federal agencies … in anticipation of rapid growth in government AI projects.” According to GAO chief data scientist Taka Ariga, “the report aims to explain to government agencies and AI suppliers in the private sector the acceptable standards for the testing, security, and privacy of AI systems and data used to create them.”
“We want to make sure we’re asking the accountability questions now because our job is going to get more difficult when we encounter AI systems that are more capable,” he said.
There is also a bipartisan push to revive the Office of Technology Assessment (OTA), shuttered 25 years ago, “to provide lawmakers with independent research on new technologies such as AI,” notes Wired. “Members of Congress from both parties have attempted to bring back the OTA in recent years.”
“We need OTA or something like it to help members anticipate where technology is going to challenge democratic institutions, or the justice system, or political stability,” explained Takano.