Tech Firms Test AI Solutions to Combat Inappropriate Content

Digital platforms Facebook, Twitter, Google, Microsoft and Periscope are implementing new ways to fight some of the worst misdeeds of the Internet: hate speech, pornography, graphic and gratuitous violence, threats and trolling. To do so, they are turning to a range of solutions largely, though not entirely, powered by artificial intelligence. In recent months, all of these Internet companies have been the target of lawsuits and harsh criticism for their inability to remove such content in a timely fashion.

Bloomberg reports that Facebook, Twitter, Google and Microsoft, in a joint commitment with the European Union, have “pledged to tackle online hate speech in less than 24 hours.”


A study by the French Jewish youth group UEJF, SOS Racisme and SOS Homophobie found that more than 90 percent of posts flagged to Twitter and YouTube were still online an average of 15 days after removal was requested. UEJF has sued Twitter, Facebook and Google over their hate speech monitoring policies.

“With a global community of 1.6 billion people we work hard to balance giving people the power to express themselves whilst ensuring we provide a respectful environment,” said Facebook executive Monika Bickert. “There’s no place for hate speech on Facebook.”

TechCrunch reports that Facebook’s AI systems “now report more offensive photos than humans do, marking a major milestone in the social network’s battle against abuse.”

Last year, former Twitter chief executive Dick Costolo admitted, “We suck at dealing with abuse.” Since then, Twitter has acquired visual intelligence startup Madbits and AI neural networks startup Whetlab; according to Wired, its AI now incorrectly flags images just 7 percent of the time.

With AI, companies can reduce the number of human moderators required to flag offensive images; Wired estimates that 100,000 human content moderators, many making $500 a month, have done the job up until now. The ultimate goal is to use AI to “unlock active moderation at scale by having computers scan every image uploaded before anyone sees it.” That prospect appeals to many companies, including Instagram and WhatsApp.
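A pre-publication pipeline of that kind might look like the following minimal sketch. Everything here is illustrative, not any company's actual system: the classifier behind `score_image`, the score thresholds, and the review queue are all assumptions.

```python
# Hypothetical sketch of "active moderation at scale": every image is
# scanned by a model before anyone sees it. The classifier, thresholds,
# and routing below are assumptions, not a real company's pipeline.

from dataclasses import dataclass


@dataclass
class ScanResult:
    publish: bool        # safe to show immediately
    needs_review: bool   # hold for a human moderator


def scan_before_publish(image_bytes: bytes, score_image) -> ScanResult:
    """Gate an upload on a model's abuse score (0.0 = safe, 1.0 = abusive).

    `score_image` is an assumed callable wrapping some trained classifier.
    """
    score = score_image(image_bytes)
    if score >= 0.9:   # confidently abusive: block outright
        return ScanResult(publish=False, needs_review=False)
    if score >= 0.5:   # uncertain: route to a human instead of publishing
        return ScanResult(publish=False, needs_review=True)
    return ScanResult(publish=True, needs_review=False)
```

In a design like this, humans only see the uncertain middle band, which is how AI shrinks, rather than replaces, the moderation workforce.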

Periscope, owned by Twitter, is taking another tack to put an end to trolling, says The Wall Street Journal. Rather than relying on broadcasters to eject specific viewers, Periscope will send a reported comment to a panel of five randomly chosen viewers, who can judge it as “abuse,” “looks OK” or “not sure.” If the majority finds it abusive, the person who sent it will be blocked from commenting for 60 seconds; a second offense in the same broadcast blocks that person from commenting for the rest of the stream.
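The Journal's description amounts to a simple jury protocol, sketched below. Periscope's real implementation is not public; the function names, the `ask_viewer` callback, and the offense counter are all hypothetical.

```python
# Hedged sketch of the comment-jury flow described above: five random
# viewers vote on a reported comment; a majority "abuse" verdict mutes
# the sender for 60 seconds, and a repeat offense in the same broadcast
# mutes them for the rest of the stream. All names are illustrative.

import random
from collections import Counter

VOTES = ("abuse", "looks OK", "not sure")


def judge_comment(comment: str, viewers: list, ask_viewer) -> str:
    """Return "abuse" or "ok". `ask_viewer(viewer, comment)` is an
    assumed callback that returns one of VOTES."""
    panel = random.sample(viewers, min(5, len(viewers)))
    tally = Counter(ask_viewer(v, comment) for v in panel)
    # Majority rule: more than half of the panel must vote "abuse".
    return "abuse" if tally["abuse"] > len(panel) // 2 else "ok"


def apply_penalty(offenses_this_broadcast: int) -> str:
    # First majority-abuse verdict: 60-second mute.
    # Second verdict in the same broadcast: muted for the stream.
    return "mute_60s" if offenses_this_broadcast == 1 else "mute_rest_of_stream"
```

One notable design choice: because jurors are drawn at random per comment, no single viewer (or brigade of viewers) controls the verdict, which is the anti-troll property the feature is after.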

“I definitely don’t believe this will solve Internet abuse 100 percent,” said Periscope engineer Aaron Wasserman. “I don’t think there is a silver bullet. We’re hoping to create something that isn’t just focused on fighting trolls but also making you feel comfortable using Periscope.”
