Guidelines, Accountability Considered as AI Becomes Priority

Artificial intelligence and machine learning are attracting enormous investment and attention, along with mounting controversy. Organizations (including the Entertainment Technology Center at USC) are working to better understand the ramifications of AI and how to hold its users accountable. Among the criticisms is that AI systems disproportionately exhibit bias against minority groups, a pattern critics call the “discrimination feedback loop.” In November, the New York City Council became the first in the nation to pass a law requiring that employers’ hiring and promotion algorithms be subject to audit.

Effective January 2023, the legislation requires city employers to tell New York job applicants and employees when artificial intelligence is involved in hiring and promotion decisions. It also requires companies to disclose what information the algorithm used in its evaluation, and to have neutral auditors periodically vet the results. In October, the NYC Mayor’s technology office published a 116-page “Artificial Intelligence Strategy.”
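What might such an audit actually measure? One widely used yardstick is the selection-rate “impact ratio,” with the EEOC’s four-fifths rule as a conventional threshold. The sketch below is purely illustrative: the applicant data, group names, and use of the 0.8 cutoff are hypothetical assumptions, not requirements spelled out in the New York law.

```python
# Minimal, illustrative sketch of one metric a hiring-algorithm audit
# might compute: the "impact ratio" (each group's selection rate divided
# by the highest group's selection rate). The data and the 0.8 cutoff
# (the EEOC's "four-fifths rule") are hypothetical examples.
from collections import Counter

# (demographic_group, was_selected) for each applicant -- hypothetical data
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in decisions)
selected = Counter(group for group, ok in decisions if ok)
rates = {g: selected[g] / totals[g] for g in totals}

best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "" if ratio >= 0.8 else "  <- below four-fifths threshold"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

On this toy data, group_b’s impact ratio of 0.33 falls well below the four-fifths threshold, which is exactly the kind of disparity an auditor would flag for review.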

Congress is drafting a bill that would require monitoring of automated decision-making in areas such as healthcare, housing, employment and education, with findings reported to the Federal Trade Commission, where a majority of commissioners have expressed support for tougher AI regulation.

And last month, the White House Office of Science and Technology Policy (OSTP) announced it is launching “a series of listening sessions and events … to engage the American public in the process of developing a Bill of Rights for an Automated Society,” with emphasis on how AI might impact civil rights.

European lawmakers “are considering legislation requiring inspection of AI deemed high-risk and creating a public registry of high-risk systems,” Wired reports, explaining that an upcoming report by the nonprofit Algorithmic Justice League (AJL) “recommends requiring disclosure when an AI model is used and creating a public repository of incidents where AI caused harm.”

AJL co-founder Joy Buolamwini coauthored a 2018 audit “that found facial-recognition algorithms work best on white men and worst on women with dark skin,” Wired writes, noting that algorithms can discriminate based on everything from the software used to create a résumé to vocal inflections. Some critics are concerned that AI auditors will focus too narrowly, missing biases against groups such as the elderly or people with disabilities.
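The core method behind that 2018 audit was disaggregated evaluation: scoring a model separately for each demographic subgroup rather than reporting a single aggregate accuracy. A minimal sketch of the idea, with hypothetical subgroup names, labels, and predictions:

```python
# Illustrative sketch of disaggregated evaluation: compute accuracy
# separately per demographic subgroup instead of one aggregate number.
# Subgroup names, labels, and predictions are hypothetical.
from collections import defaultdict

# (subgroup, true_label, predicted_label) -- hypothetical evaluation data
records = [
    ("lighter_male", "m", "m"), ("lighter_male", "m", "m"),
    ("darker_female", "f", "m"), ("darker_female", "f", "f"),
    ("darker_female", "f", "m"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for subgroup, truth, pred in records:
    totals[subgroup] += 1
    hits[subgroup] += truth == pred

for subgroup in sorted(totals):
    acc = hits[subgroup] / totals[subgroup]
    print(f"{subgroup}: accuracy {acc:.2f} ({hits[subgroup]}/{totals[subgroup]})")
```

An aggregate accuracy over these toy records would be 0.60, masking the gap between the two subgroups (1.00 versus 0.33), which is the disparity such an audit is designed to surface.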

AI ethics expert Timnit Gebru is opening the Distributed AI Research Institute (DAIR), “designed to be an independent group committed to diverse points of view and preventing harm,” reports Bloomberg.

Bloomberg quotes Gebru as saying the main drivers of AI and machine learning are currently “how to make a large corporation the most amount of money possible and how do we kill more people more efficiently. Those are the two fundamental goals under which we’ve organized all of the funding for AI research. So can we actually have an alternative? That’s what I want to see.”
