July 20, 2018
In the wake of posts that have incited violence in Sri Lanka, Myanmar and India, Facebook has tweaked its fake news policy and agreed to remove posts that could lead to physical harm. In the incidents that sparked this change, rumors spread on Facebook led to physical attacks on ethnic minorities, targeting Rohingya Muslims in Myanmar and Muslims in Sri Lanka, with further incidents in India and Mexico. The changes do not apply to Instagram or WhatsApp, despite the latter’s involvement in incidents in India.
The New York Times reports Facebook product manager Tessa Lyons as saying the company has “identified that there is a type of misinformation that is shared in certain countries that can incite underlying tensions and lead to physical harm offline.”
“We have a broader responsibility to not just reduce that type of content but remove it,” she added. Facebook’s belief in free speech has not always played well in countries where “access to the Internet is relatively new and there are limited mainstream news sources to counter social media rumors.” That’s what happened in Myanmar, where the U.N. and human rights groups accused Facebook of “facilitating violence against Rohingya Muslims, a minority ethnic group, by allowing anti-Muslim hate speech and false news.”
Not all of the rumors called for violence; some instead “amplified underlying tensions.” In an interview for Recode, Facebook chief executive Mark Zuckerberg drew a line between offensive speech (such as Holocaust denial) and “speech that could lead to physical harm.” Facebook already has “rules in place in which a direct threat of violence or hate speech is removed … [but] has been hesitant to remove rumors that don’t directly violate its content policies.”
More specifically, the site bans “hate speech, nudity and direct threats of violence, among other things,” and such posts are immediately removed. Facebook has also begun flagging posts that independent fact-checkers deem false.
The new rules, based on the creation of “partnerships with local civil society groups to identify misinformation for removal,” have already rolled out in Sri Lanka, and will soon expand to Myanmar and elsewhere.
Elsewhere, the NYT reports on how false rumors about child kidnappers, which went viral on WhatsApp in India, led “fearful mobs to kill two dozen innocent people since April.” Facebook’s WhatsApp has a quarter-billion users in India. The clip about child abductions was originally produced as a public service announcement in Pakistan, but “was edited to look like a real kidnapping.”
Millions of Indians have only recently gained access to the Internet, meaning that “many are quick to believe what is on their phones.” Police had warned people not to believe the rumors in the weeks leading up to the attacks, but “they were no match for WhatsApp.”
WhatsApp “began labeling all forwarded messages … [and] also took out newspaper ads to educate people about misinformation and pledged to work more closely with police and independent fact-checkers.”