Ex-Facebook Scientist Reveals Slow Action on Fake Accounts

A recently fired Facebook data scientist, Sophie Zhang, wrote a 6,600-word memo giving specific examples of how the social media company ignored, or was slow to act on, solid evidence of fake accounts undermining global politics and elections. That included her proof that, in Azerbaijan and Honduras, government leaders and political parties used fake accounts to shift public opinion. She found similar evidence of coordinated campaigns to influence candidates or elections in Bolivia, Brazil, Ecuador, India, Spain and Ukraine.

BuzzFeed News reports that Zhang, whose LinkedIn profile stated she was a data scientist for the Facebook Site Integrity fake engagement team, added that, “in the three years I’ve spent at Facebook, I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions.”

She added that she had “personally made decisions that affected national presidents without oversight and taken action to enforce against so many prominent politicians globally that I’ve lost count … I know that I have blood on my hands by now.”

She recounted that, in Honduras, thousands of “inauthentic assets” (Facebook’s term for “engagement from bot accounts and coordinated manual accounts”) were created to boost President Juan Orlando Hernandez, but that “it took Facebook’s leaders nine months to act on a coordinated campaign.”

In Azerbaijan, Zhang discovered that the ruling political party used “thousands of inauthentic assets … to harass the opposition en masse,” but it took Facebook a full year to begin an investigation, which is still ongoing. In another example, a NATO researcher tipped her off about “inauthentic activity on a high-profile U.S. political figure.”

BuzzFeed reports that Facebook prioritizes the U.S. and Western Europe and “often only acted when she repeatedly pressed the issue publicly in comments on Workplace, the company’s internal, employee-only message board.” Zhang also noted that “it’s an open secret within the civic integrity space that Facebook’s short-term decisions are largely motivated by PR and the potential for negative attention”; according to the report, she “was told directly at a 2020 summit that anything published in The New York Times or The Washington Post would obtain elevated priority.”

Zhang added that she didn’t want to go public “for fear of disrupting Facebook’s efforts to prevent problems around the upcoming 2020 U.S. presidential election,” but that she also feared for her safety. She reported she “turned down a $64,000 severance package from the company to avoid signing a nondisparagement agreement.”

Bloomberg reports that, according to Method Media Intelligence (MMI), “Facebook can block bots from viewing and clicking on ads but the social-media company isn’t doing enough to stop it.” The result is that advertisers pay for fake activity; advertising fraud has become a massive problem for “companies that pay online platforms including Facebook, Google, Yahoo and Microsoft’s Bing for views, clicks, likes and app installations.”

The report said that “Facebook makes it easy for bots to log in, view pages and click ads,” and that the company only blocks fraud at the account-registration stage. MMI director of research and strategy Sachin Dhar noted that “it’s a lot easier to do this than people think and it’s not illegal.”