February 2, 2021
When Facebook made Groups the centerpiece of a 2019 redesign, the feature was intended to be, per chief executive Mark Zuckerberg, the “heart of the app.” Last August, however, the company’s own data scientists warned about “blatant misinformation and calls to violence” in the site’s top “civic” Groups. Facebook had known about the problems for years but accelerated its plans for change only after rioters broke into and vandalized the U.S. Capitol on January 6. The Groups in question, dedicated to politics, together reached “hundreds of millions of users.”
The Wall Street Journal reports that, during an internal presentation, “the researchers told executives that ‘enthusiastic calls for violence every day’ filled one 58,000-member Group.” Another Group claimed to have been set up by Donald Trump fans but was in fact run by “financially motivated Albanians” who directed one million daily views to fake news stories.
“Our existing integrity systems aren’t addressing these issues,” the researchers said. Prior to the election, Facebook banned some Groups and attempted to limit the growth of others, but it always “viewed the restrictions as temporary.”
After January 6, “Facebook took down more of the Groups and imposed new rules as part of what it called an emergency response,” and, according to Facebook vice president of integrity Guy Rosen, it “has canceled plans to resume recommending civic or health Groups.” Rosen added that the company will also “disable certain tools that researchers argued had facilitated edgy Groups’ rapid growth and require their administrators to devote more effort to reviewing member-created content.”
Rosen insisted, however, that the moves “aren’t an admission that previous rules were too loose, but show Facebook adapting to emerging threats.” Facebook is also “considering steps to reduce political content in its News Feed.”
But evidence that Facebook’s tools were problematic surfaced as early as 2016, when a researcher noted in a presentation that “extremist content had swamped large German political Groups and that ‘64 percent of all extremist group joins are due to our recommendation tools’.” The August 2020 internal presentation identified U.S. Groups “tied to mercenary and hyper-partisan entities using Facebook’s tools to build large audiences” and found that many of the most successful Groups were headed by administrators who “tolerated or actively cultivated hate speech, harassment and graphic calls for violence.”
One Trump Group, the Kayleigh McEnany Fan Club, named after but not associated with that administration’s press secretary, served largely as a distribution center for “low-quality, highly divisive, likely misinformative news content,” drawing comments that included “death threats against Black Lives Matter activists and members of Congress.” Facebook had flagged the Group 174 times within three months.
Elsewhere, WSJ reports that Facebook said it “would soon begin building and testing new controls to let marketers keep their ads away from topics they want to avoid,” a promise the company made last summer when more than 1,100 advertisers temporarily boycotted the platform over its handling of hate speech and misinformation. The test solution will “let an advertiser select a topic that it wants to steer clear of, which will determine where and how their ads show up on Facebook, including in the News Feed.”
The controls will be tested with a small group of advertisers, but Facebook did not name the participants.