January 23, 2020
As lawmakers in the U.S. and Europe ponder how best to regulate artificial intelligence, IBM called for industry and governments to jointly create standards to measure and avoid AI bias. The company, led by chief executive Ginni Rometty, issued a policy proposal on the eve of the World Economic Forum in Davos. Although its proposals are not as strict as rules governments might otherwise impose, the goal is to find a consensus among all parties. IBM, which has lagged in emerging technologies, now focuses on AI and cloud services.
Bloomberg reports that, according to IBM vice president of government and regulatory affairs Chris Padilla, “it seems pretty clear to us that government regulation of artificial intelligence is the next frontier in tech policy regulation.”
IBM, which has been working with the Trump administration, noted that companies and governments must work together to avoid biased results from algorithms “that rely on historical data such as zip codes or mortgage rates that may have been skewed by discrimination.” The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) would likely be the governmental body to establish such standards.
“Spearheading the AI regulatory debate gives IBM a chance to come back into the spotlight as a leader in cutting-edge technology, a position it hasn’t held for years,” says Bloomberg, “[And] the AI proposals are intended to stave off potential crises that could enrage customers, lawmakers and regulators worldwide.”
In addition to racial, gender and age bias, concerns about AI range from facial recognition to “making determinations about mortgage rates.” Some are also worried about job losses and the potential spread of “existing disparities in areas such as law enforcement, access to credit and hiring.”
The Trump administration already “issued guidelines for use of the technology by federal agencies, which emphasized a desire not to impose burdensome controls … [and] a bipartisan group of U.S. senators unveiled a bill designed to boost private and public funding for AI and other industries of the future.”
In the European Union, regulators are considering “new, legally binding requirements for developers of artificial intelligence to ensure the technology is developed and used in an ethical way.” The European Commission, which was advised by IBM and will release its paper in mid-February, has zeroed in on rules for “high-risk sectors,” such as healthcare and transport and “also may suggest that the bloc update safety and liability laws.”
IBM, meanwhile, has had five consecutive quarters of shrinking sales, “despite its July acquisition of open-source software provider Red Hat, designed to bolster its hybrid cloud-services strategy.”
AI Regulation’s First Testing Ground Is the European Union, ETCentric, 1/22/20