Federal Policy Specifies Guidelines for Risk Management of AI

The White House is rolling out a new AI policy across the federal government, to be administered by the Office of Management and Budget (OMB). Vice President Kamala Harris announced the new rules, which require every federal agency to designate a senior leader to oversee its use of AI systems, in an effort to ensure that AI deployed in public service remains safe and unbiased. The move was positioned as making good on “a core component” of President Biden’s AI Executive Order (EO), issued in October. Federal agencies reported completing the 150-day actions tasked by the EO.

Previously, they reported successfully completing all 90-day actions. “The Order directed sweeping action to strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more,” the White House specifies in a fact sheet.

“The new policy for U.S. government use of AI announced Thursday asks agencies to take several steps to prevent unintended consequences of AI deployments,” writes Wired, noting that “to start, agencies must verify that the AI tools they use do not put Americans at risk. For example, for the Department of Veterans Affairs to use AI in its hospitals, it must verify that the technology does not give racially biased diagnoses.”

Research has indicated that “AI systems and other algorithms used to inform diagnosis or decide which patients receive care can reinforce historic patterns of discrimination,” Wired says.

The policy also emphasizes transparency, “requiring agencies to release government-owned AI models, data, and code, as long as the release of such information does not pose a threat to the public or government,” Wired explains.

An example the White House cites is “when at the airport, travelers will continue to have the ability to opt out from the use of TSA facial recognition without any delay or losing their place in line.” In March, United joined Delta in testing use of facial recognition to reduce security lines, according to Condé Nast Traveler.

“Agencies will also have to submit an annual report to the OMB listing all AI systems they use, any risks associated with them, and how they plan to mitigate those risks,” reports The Verge.

“All leaders from governments, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm, while ensuring everyone is able to enjoy its full benefits,” said Vice President Harris on a call with reporters, per The Hill.

