At this week’s annual Facebook F8 developer conference in San Jose, California, company CTO Mike Schroepfer discussed the progress being made by internal teams dedicated to reducing the spread of misinformation, hate speech, and abuse on the social platform using various artificial intelligence techniques. In the course of a single quarter, according to Schroepfer, Facebook takes down more than a billion “spammy” accounts, more than 700 million fake accounts, and tens of millions of items containing violent content or nudity.
“We have to dedicate ourselves to getting into the details and working these problems, day after day, month after month, year after year,” said Schroepfer.
As an example, Facebook has deployed a new “nearest neighbor” algorithm that can spot illicit photos 8.5 times faster than the previous version. The company also uses “a language-agnostic AI model trained on 93 languages across 30 dialect families…in tandem with other classifiers to identify and tackle multiple language problems at once,” according to VentureBeat.
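Facebook has not published the internals of this system, but the general idea behind nearest-neighbor photo matching is to embed each image as a vector and find the known items closest to a query. The sketch below is a minimal, hypothetical illustration using cosine similarity over random stand-in embeddings; the function names and dimensions are illustrative assumptions, not Facebook's implementation:

```python
import numpy as np

def nearest_neighbors(query, index, k=5):
    # Cosine similarity between one query embedding and an index of
    # embeddings for known content; returns the top-k closest items.
    index_norm = index / np.linalg.norm(index, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = index_norm @ q
    top = np.argsort(-sims)[:k]
    return top, sims[top]

# Toy index: 1,000 random 128-dim "photo embeddings" standing in for
# fingerprints of previously flagged images.
rng = np.random.default_rng(0)
index = rng.normal(size=(1000, 128))

# A near-duplicate of item 42 (e.g. a slightly re-edited copy).
query = index[42] + 0.01 * rng.normal(size=128)
ids, scores = nearest_neighbors(query, index, k=3)
print(ids[0])  # the near-duplicate's original, item 42, ranks first
```

A production system would replace the brute-force scan with an approximate index so lookups stay fast at billions of items, which is where the reported 8.5x speedup would matter.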
As for video, Facebook’s salient sampler model scans through videos and processes clips to recognize more than 10,000 different actions. But even as important strides are being made, Facebook’s director of AI Manohar Paluri urges patience. “I want to take a step back and say, even with these improvements, when we see violent videos that evade our systems, it is clear that video understanding is in its infancy. [But systems like these] allow us to proactively identify problematic content today,” he said.
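The details of the salient sampler are not public, but the underlying pattern is common in video understanding: sample short clips from a long video, score each clip against a large action vocabulary, and aggregate the scores. The sketch below illustrates that pattern with a stand-in clip classifier; the uniform spacing, function names, and the stub scorer are assumptions for illustration (a “salient” sampler would instead prioritize the clips most likely to matter):

```python
import numpy as np

def sample_clips(num_frames, clip_len=16, num_clips=8):
    # Evenly spaced start frames; a salient sampler would rank candidate
    # clips by an importance score rather than spacing them uniformly.
    starts = np.linspace(0, num_frames - clip_len, num_clips).astype(int)
    return [(s, s + clip_len) for s in starts]

def classify_video(num_frames, score_clip, num_actions=10_000):
    # Aggregate per-clip action scores into one video-level prediction
    # over a vocabulary of ~10,000 actions.
    scores = np.zeros(num_actions)
    for start, end in sample_clips(num_frames):
        scores += score_clip(start, end)  # score_clip is a stand-in model
    return int(np.argmax(scores))

# Toy classifier that always favors action id 7, to show the aggregation.
toy = lambda start, end: np.eye(10_000)[7]
print(classify_video(3000, toy))  # predicts action 7
```

Sampling a handful of clips rather than every frame is what makes scanning long videos tractable at Facebook's scale.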
And that management of expectations goes hand-in-hand with the pressure Facebook faces from governments and users around the world, who are urging the social network to do more to halt the spread of propaganda, misinformation, and abusive content.
When it comes to political misinformation spread via Facebook, even with the help of algorithms, there is still a long way to go. “One of Facebook Inc.’s biggest issues in trying to stop the spread of fake news on its platform is being able to train its algorithms on good examples of truth and falsehoods,” reports Bloomberg.