April 29, 2021
The Senate Judiciary Committee’s panel on Privacy, Technology and the Law pressed executives from Google’s YouTube, Facebook, and Twitter this week on how user content is shared via algorithms that can be misused. The top Republican on the panel, Senator Ben Sasse (R-Nebraska), said the use of such algorithms is “driving us into poisonous echo chambers.” Congress is currently considering the fate of Section 230 of the 1996 Communications Decency Act, which protects platforms from liability for what their users post.
Bloomberg reports that subcommittee chair Senator Chris Coons (D-Delaware) opened the hearing by saying he plans to use it as “an opportunity to learn about how these companies’ algorithms work, what steps may have been taken to reduce algorithmic amplification that is harmful and what can be done better.” He cited the January 6 attack on the U.S. Capitol by extremists who had “shared disinformation on some of the platforms represented at Tuesday’s hearing.”
Facebook vice president for content policy Monika Bickert said it is not in the company’s “interest financially or reputationally … [to] push people toward extremist content.” Twitter’s head of U.S. public policy Lauren Culbertson “highlighted the positive uses for algorithms and machine learning, especially the ability to recognize harmful content to review and remove.”
YouTube director of government affairs and public policy for the Americas Alexandra Veitch “said the service uses an automated process to detect videos that violate the company’s policies, and algorithms can be used to promote trusted sources and minimize content that’s questionable.”
Critics included Center for Humane Technology co-founder and president Tristan Harris who noted, “it’s almost like having the heads of Exxon, BP and Shell asking about what are you doing to responsibly stop climate change,” adding that “their business model is to create a society that is addicted, outraged, polarized, performative and disinformed.”
At Harvard’s Shorenstein Center on Media, Politics and Public Policy, research director Joan Donovan noted that, “misinformation at scale is a feature, not a bug” for the online platforms.
CNBC reports that, “while Section 230 reforms were brought up a handful of times at Tuesday’s hearing, the discussion also called attention to what could perhaps be a more narrow way of reining in some of the most pervasive harms of Internet platforms by focusing on transparency around their algorithms.” It notes that, “both Facebook and Twitter have introduced more choice for users around whether they want to view a curated timeline of content on their feeds or not.”
Algorithms can be “a useful way to surface the most engaging content for any given user, based on their interests and past activity,” but lawmakers are concerned they also drive polarization, extremism and misinformation. CNBC adds that “focusing on algorithms could potentially create a more immediately viable path toward regulation.” At the close of the hearing, Coons “said he looks forward to working with Sasse on bipartisan solutions, which could take the form of either voluntary reforms or regulation.”