January 28, 2019
Facebook has changed its policy to make it much easier to take down fraudulent pages and groups, as part of its efforts to suppress fake news and propaganda. Until now, a troll could create multiple fake pages, and Facebook had to close each page or group individually. Facebook stated that it “may now also remove other Pages and Groups with similar names that are maintained by the same person, even if that specific Page or Group has not met the threshold to be unpublished on its own.”
Wired reports that “Facebook will also launch a new control panel Thursday for page managers, designed to make it easier and less confusing for them to understand when their posts have breached Facebook’s Community Standards.”
Under this new process, Facebook’s “Page Quality tab will display content that [it] recently removed and will cite the rule it broke,” including “graphic violence, harassment, bullying, nudity, and sexual activity” (but not spam, clickbait, or IP violations). The Page Quality tab will also show managers whether the Associated Press, PolitiFact, or other third-party fact-checkers have rated their content “False,” “False Headline” or “Mixture.” For content with these ratings, “Facebook reduces how many people see them in their News Feed.”
According to Motherboard, which published leaked internal Facebook documents in 2018, Facebook “has different deletion thresholds for pages depending on the type of content violation they commit.” For example, Facebook will tell a page manager to delete his or her page if it has received five “strikes” for hate speech within 90 days. The new system attempts to make the fact-checking process more transparent, “especially for those who are in charge of pages with large followings that generate hundreds of notifications a day,” who might otherwise view Facebook’s actions as sinister.
Facebook’s previous efforts to make it harder to spread misinformation and propaganda include “tightening its advertising policies,” with “strict requirements for organizations that want to run so-called issue ads.” The latest changes target “fraudulent pages and groups that don’t need to rely on paid advertising to reach an audience.”
But Facebook has not yet targeted anonymously controlled pages. Such pages “can then create their own affiliated groups, allowing bad actors to erect entire communities without revealing their identity.” There are legitimate reasons for anonymity, since “the social media manager for a nonprofit or publication might not want their work connected to their personal Facebook profile,” but an anonymously run page also “makes it almost impossible for users to understand where a page or group came from.”
Thus far, Facebook discloses only the date a page was created and whether its name has recently been changed, but it “has stopped short of requiring users to disclose when they create them.”