AI Laws Becoming Decentralized with Cities First to Regulate

With the federal government still in the early phases of regulating artificial intelligence, cities and states are stepping in as they begin to actively deploy AI. While using AI to manage traffic patterns is relatively straightforward, applications in policing and hiring demand precautions against algorithmic bias inherited from training data. The challenges are formidable. As with human reasoning, it is often difficult to trace the logic behind a machine's decisions, which makes it hard to identify a fix. Municipalities are evaluating different solutions, with the goal of preventing programmatic marginalization.

Some cities and states “require disclosure when an AI model is used in decisions, while others mandate audits of algorithms, track where AI causes harm or seek public input before putting new AI systems in place,” reports The Wall Street Journal. An inability to explain a machine’s actions leads to complaints of capriciousness, which erodes confidence among the citizenry whose taxpayer dollars ultimately fund such technological advances.

In November, the New York City Council passed a law requiring regular audits of software sold to screen potential employees. The audits are designed to ensure the software doesn't discriminate based on race, sex or national origin. The law, effective in 2023, also mandates that companies using AI in hiring or promotion decisions disclose it to candidates.
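
The law leaves the audit methodology open, but one widely used fairness check in employment selection is the adverse impact ratio, tied to the EEOC's "four-fifths rule." The Python sketch below shows how an auditor might compute per-group selection rates from screening outcomes and compare each group against the highest-rate group; the function, field names and sample data are illustrative assumptions, not part of the New York law or any specific vendor's tool.

```python
from collections import Counter

def adverse_impact_ratio(outcomes, group_key, positive_label="hired"):
    """Compute each group's selection rate and its ratio to the
    highest-rate group. Ratios below 0.8 are a common red flag
    under the EEOC "four-fifths rule"."""
    totals = Counter(group_key(o) for o in outcomes)
    positives = Counter(group_key(o) for o in outcomes
                        if o["decision"] == positive_label)
    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening results from an AI hiring tool (made-up data).
results = [
    {"group": "A", "decision": "hired"},
    {"group": "A", "decision": "rejected"},
    {"group": "B", "decision": "rejected"},
    {"group": "B", "decision": "hired"},
    {"group": "B", "decision": "rejected"},
]
print(adverse_impact_ratio(results, group_key=lambda o: o["group"]))
# e.g. {'A': 1.0, 'B': 0.67} -- group B selected at two-thirds the rate of group A
```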

“Hiring is a really high-stakes domain. And we are using a lot of tools without any oversight,” New York University Tandon Center for Responsible AI director Julia Stoyanovich told WSJ. While the local law doesn’t specify what constitutes an audit, Stoyanovich suggests “making the AI display something like nutritional labels on food, with the data points used in the hiring decision broken down like nutrients and ingredients.”

Such labels are gaining support. The ETC@USC white paper "AI Ethics: A Framework for the Media Industry" says the approach, called "model cards," conveniently lays out standardized documentation: "all the information necessary to evaluate a model and benchmark its performance in a variety of contexts," per author Yves Bergquist of ETC, which fielded the study in conjunction with SMPTE.
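
As a rough illustration of what such a "nutrition label" might contain, here is a minimal Python sketch of a model card as a structured record. The field names and example values are assumptions chosen for illustration, not a schema from the white paper or any published model-card standard.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Illustrative 'nutrition label' for an AI model; fields are
    assumptions, not an official model-card schema."""
    model_name: str
    intended_use: str
    training_data: str                  # provenance of the data the model learned from
    input_features: list[str]           # the "ingredients" feeding each decision
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical card for a hiring-screening model.
card = ModelCard(
    model_name="resume-screener-v2",
    intended_use="Rank applicants for interview scheduling",
    training_data="Three years of anonymized application outcomes",
    input_features=["years_experience", "skills_match", "education_level"],
    evaluation_metrics={"accuracy": 0.87, "selection_rate_gap": 0.04},
    known_limitations=["Not validated for roles outside engineering"],
)
print(json.dumps(asdict(card), indent=2))
```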

AI use in policing is another fraught area. Giving people a say in how local law enforcement uses AI can help reduce bias. In 2017, Santa Clara County, California, worked with the American Civil Liberties Union to institute community control over police surveillance (CCOPS). CCOPS requires that agencies submit policies for public input detailing the technologies and how they would be used, "including how any data collected would be stored or shared," says WSJ.

Variations on CCOPS are now in effect in 22 cities, affecting 17.7 million people, ACLU senior counsel Chad Marlow told WSJ.

In the EU, Amsterdam and Helsinki are leading the way with websites that document how local governments use algorithms to deliver services, and agencies are learning from one another.

And the U.S. government is also taking steps, albeit more slowly. In June, the Government Accountability Office (GAO) “published a report of key practices to guarantee accountability and responsibility in AI use by federal agencies,” writes VentureBeat, adding that in April 2021 the Federal Trade Commission “issued guidance on how to responsibly build AI and machine learning systems.”