September 14, 2020
As the 2020 U.S. presidential election looms, social media platforms are launching strategies to combat false claims and misinformation. Internet companies anticipate a tsunami of this type of content in the lead-up to the election. Google, for example, said it would block some autocomplete search suggestions in an effort to combat misinformation, and Twitter said it would more aggressively label or remove tweets that undermine confidence in the election or promote disputed information. Twitter and Facebook plan to ban new political ads the week leading up to the election. Facebook, meanwhile, is also working to prevent climate misinformation.
The Wall Street Journal reports that, for Twitter, “the changes could affect tweets claiming victory before election results have been certified; posting unverified information about ballot tampering or other matters that could undermine confidence; or including false or misleading information that causes confusion about laws, regulations, officials or institutions related to elections.”
Tweets labeled under the expanded policy “will have reduced visibility across the platform … [although] people following accounts whose tweets are labeled or removed can still view and retweet the content.”
According to Google senior director for global policy and standards David Graff, the search engine “would remove any incorrect election information from some results that aren’t caught by the search engine’s automated systems.” The company did not say, however, that “it would remove all incorrect information that’s displayed in search.”
A 2016 internal study revealed that Google returned misinformation on between 0.1 and 0.25 percent of search queries, which amounts to almost two billion searches per year. In addition to banning new political ads the week before the November 3 election, Twitter and Facebook said they would “seek to flag any candidates’ premature claims of victory.”
Bloomberg reports that Google “will block some autocomplete search suggestions to stop misinformation spreading online during the U.S. presidential election in November.” It will, for example, “remove predictions that could be interpreted as claims for or against any candidate or political party … [and] pull claims from the autocomplete feature about participation in the election, including statements about voting methods, requirements, the status of voting locations and election security.”
Graff noted that this “might mean some perfectly benign predictions get swept up in this,” but the company thinks it is “the most responsible approach, particularly when it comes to elections-related queries.” Google’s autocomplete search and potential manipulation of results “has been debated for years … [but] Google has denied bias in search results.” It has also “pulled more than 200,000 videos and over 100 million ads from its YouTube service to curb disinformation about the coronavirus pandemic.”
Elsewhere, Bloomberg reports that, although “the climate disaster is undeniable to those who have experienced its harms … Facebook is full of misinformation on the topic, which, when noticed and reported by users, is sent to the company’s third-party fact-checkers.” This summer Facebook removed the nonprofit CO2 Coalition, which claimed that carbon dioxide created by humans was beneficial for the planet, “after too many fact violations.” CO2 Coalition, however, successfully appealed its ban.
After pushback from Scott Johnson, science editor at its third-party fact-checker Climate Feedback, Facebook reevaluated its stance. Spokesman Andy Stone said Facebook is now “working on a climate information center, which will display information from scientific sources.”
Twitter Is Tightening Its Rules Against Voting Misinformation, Recode, 9/10/20
Google Says It’s Eliminating Autocomplete Suggestions That Target Candidates or Voting, TechCrunch, 9/10/20
Graphic Video of Suicide Spreads From Facebook to TikTok to YouTube as Platforms Fail Moderation Test, TechCrunch, 9/13/20