November 22, 2016
After weeks of accusations that fake news on Facebook, much of it favorable to Donald Trump, impacted the election, Facebook chairman/chief executive Mark Zuckerberg published a post describing ways the social media company might handle the issue. Among the potential steps are third-party verification, improved automated detection tools and simpler ways for ordinary users to flag suspicious content. Zuckerberg had originally dismissed the notion that Facebook influenced the election as “a pretty crazy idea.”
The New York Times notes that “executives and employees at all levels of the company have since been debating its role and responsibilities.”
Initially, Zuckerberg said that false news made up “less than 1 percent” of content on Facebook. But, according to TechCrunch, “a slew of media reports this week have demonstrated that, although fake posts may not make up the bulk of the content on Facebook, they spread like wildfire.” President Obama also called out Facebook and “other platforms” for disseminating misinformation.
TechCrunch reports that “the firestorm over misinformation” began with the headline “FBI Agent Suspected in Hillary Email Leaks Found Dead,” a made-up story claiming that Clinton plotted the murder of an agent and his wife, published by a made-up outlet, the Denver Guardian. Posted days before the election, the fake story was shared more than 568,000 times.
In his post, Zuckerberg describes the issue and explains why Facebook is reluctant to take a strong stance. “The problems here are complex, both technically and philosophically,” he wrote. “We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible.”
One option he proposes is to attach warnings to news articles “that have been flagged as false by reputable third parties or by Facebook users.” A second option is to make it “harder for websites to make money from spreading misinformation.” Zuckerberg warns that the company needs “to be careful not to discourage sharing of opinions or mistakenly restricting accurate content.”
“We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties,” he wrote.
Fake news is also a problem on Google and Twitter, both of which have also highlighted fake stories. “But,” says TechCrunch, “as the hub where 44 percent of Americans read their news, Facebook bears a unique responsibility to address the problem.”
The root of the problem, according to former Facebook employees and contractors, is that “its culture prioritizes engineering over everything else” and that Facebook has no desire to become a media company. “They don’t want to deal with it,” said former Facebook product manager Antonio Garcia-Martinez.
But, argue Garcia-Martinez and others, “Facebook is a media company” and Zuckerberg is “the front page editor of every newspaper in the world.” Zuckerberg resists that title, relying instead on crowd-sourced truth and, now, flagging. Garcia-Martinez believes changing distribution at the algorithmic level is a solution. As a technical cure, it may be “the most likely to get traction in Facebook’s engineering-first culture.”
According to TechCrunch, “engineers working on machine learning … estimate it would take a dedicated team more than a year to train an algorithm to properly do the work Facebook is attempting with Trending Topics.”