Facebook Removes More Fake Accounts and Hate Speech

In Q1 2019, Facebook removed 2.2 billion fake accounts from its popular social platform. That compares with 583 million fake accounts the company deleted in Q1 2018 and just over 1 billion in Q4 2018. Facebook said “the vast majority” of fake accounts are removed within minutes of being created, so they are not counted in its monthly or daily active user metrics. In its semiannual report, Facebook also said the automated detection software it uses to delete “illicit content” was improving, now removing more than half of the targeted speech.

Bloomberg reports that, in the last six months, Facebook’s algorithms detected 65 percent of the posts containing hate speech, up from 38 percent a year earlier. Under a newly reported metric, Facebook caught and deleted more than 1.5 million posts “promoting or engaging in drug and firearm sales” in Q1 2019.

Facebook plans to “expand its report to include other types of illegal activity.” AI can also identify and remove almost 99 percent of graphic and violent posts before a user reports them, although it “still can’t consistently detect graphic or violent content in live videos.” Facebook chief executive Mark Zuckerberg said that the 2019 budget to target such content “is greater than the whole revenue of our company in the year before we went public in 2012.”

Using artificial intelligence to police the site is on a collision course with Facebook’s pledge to increase privacy and encryption. “We recognize that it’s going to be harder to find all of the different types of harmful content,” Zuckerberg said. “We’ll be fighting that battle without one of the very important tools, which is, of course, being able to look at the content itself. It’s not clear on a lot of these fronts that we’re going to be able to do as good of a job on identifying harmful content as we can today with that tool.”

The New York Times reports that, although Facebook said it has been more aggressive in policing the site, “regulators have expressed renewed interest in cracking down on Facebook after a gunman in Christchurch, New Zealand live-streamed his mass killings on his Facebook account.” The video was viewed 4,000 times before Facebook pulled it, and copies were subsequently reposted millions of times.

Government leaders from around the world signed the Christchurch Call, “an agreement to limit violent and extremist content online.” Zuckerberg said that “governments around the world should take a more proactive role in the regulation of online speech.” “If the rules for the Internet were being written from scratch today, I don’t think people would want private companies to be making so many decisions about speech themselves,” he said.

The company said it “removed four million hate-speech posts during the first three months of the year.” “We estimated for every 10,000 times people viewed content on Facebook, 25 views contained content that violated our violence and graphic content policy,” said Facebook vice president of integrity Guy Rosen. That works out to a violation rate of 0.25 percent of views. “By catching more violating posts proactively, this technology lets our team focus on spotting the next trends in how bad actors try to skirt our detection.”
