Lawmakers Aim to Control Malicious Content Enabled by AI

The U.S. House of Representatives’ Homeland Security Committee has begun a series of hearings examining “emerging technological breakthroughs” for controlling malicious content posted on digital platforms by AI-enabled software, including bots. Facebook head of global policy management Monika Bickert testified that the company has prioritized the development of such tools. Chief information officers at numerous tech companies are paying attention, worried that lawmakers are considering regulating the use of AI.

The Wall Street Journal quotes Gartner research vice president for analytics and data science Jim Hare as saying, “CIOs should anticipate and plan for governments across the globe, including the U.S. Congress, to enact laws to protect its citizens and country from ‘AI gone wild’.”

“Given that industry hasn’t stepped up to police itself, it makes sense that the U.S. government is stepping in to consider laws to require companies to audit their AI solutions for bias and discrimination, among other requirements,” he added.

According to Gartner, 75 percent of large organizations are likely to “hire AI behavior forensic specialists within the next five years, focused on mitigating risk by uncovering bias and uses of AI that could be perceived as unethical.”

Information Technology and Innovation Foundation vice president Daniel Castro said the Congressional hearings make it clear that “the issue is attracting attention.” The Washington-based think tank, whose members include Microsoft, Apple, and Amazon officials, had earlier “warned that exaggerated fears about the safety, privacy and security of AI systems could result in a heavy-handed approach by federal and state regulators, creating unnecessary barriers to innovation.”

But Castro tempered this by adding that “the Trump administration has taken steps to streamline regulations for industry overall,” and that “most firms are not likely to face significant new regulations.”

At CompTIA, a tech industry trade group, vice president for public advocacy Cinnamon Rogers said that recently introduced legislation is “increasingly focused on understanding the implications of artificial intelligence, including how this technology will implicate consumer privacy, national security, and cybersecurity.”

Forrester Research analyst Fatemeh Khatibloo said she doesn’t think that “a well-crafted regulation — one informed by technologists and data scientists, and ideally passed in conjunction with a comprehensive data protection law — will hinder innovation.” But she added, “many concerns around AI are justified and companies should be thinking about creating AI ethics boards, privacy teams and other initiatives.”

A PricewaterhouseCoopers survey of 132 chief executives of large U.S. companies revealed that 42 percent “identified digital technologies as the business initiative they expect to be the most affected by government policies in the next three years, with 31 percent specifically citing automation.”