April 25, 2018
Google reports that AI-powered machines, not humans, detected about 80 percent of the 8.28 million videos removed from YouTube in Q4 2017. This revelation underscores the importance of AI-enabled systems in removing unwanted content, and just how aggressively YouTube is pursuing such removals. At Stanford University’s Global Digital Policy Incubator, executive director Eileen Donahoe noted that balancing free speech with the removal of undesirable videos will be YouTube’s major challenge going forward.
The New York Times quoted her as saying, “it’s basically free expression on one side and the quality of discourse that’s beneficial to society on the other side.” Although YouTube would not reveal “whether the number of videos it had removed had increased from the previous quarter or what percentage of its total uploads those 8.28 million videos represented,” it said the removed videos accounted for “a fraction of a percent” of total views during that quarter. The company also stated that three-quarters of flagged videos were removed “before anyone had a chance to watch them.”
YouTube isn’t alone in using AI-powered tools to solve tough problems; Facebook is using them to “detect fake accounts and fake news on its platform.” Although AI is far from perfect at detecting problematic content, YouTube states that “the volume of videos uploaded to the site is too big of a challenge to rely only on human monitors” — even as parent company Google announced it was hiring for 10,000 such positions in 2018.
YouTube just stated in a blog post that the majority of those positions — including for specialists in “violent extremism, counterterrorism and human rights” — have been filled.
Users also flagged about 9.3 million videos a total of 30 million times for “content they considered sexual, misleading or spam, and hateful or abusive,” resulting in the takedown of 1.5 million videos.
Engadget reports that Facebook has updated how it uses AI to “find and take down terrorist content,” including “nearly all ISIS- and Al Qaeda-related content.” The company said its counterterrorism team has grown from 150 to 200 people since June, and that, focusing on ISIS and Al Qaeda, the group “took action on 1.9 million pieces of content in the first quarter of 2018, twice as much as the last quarter of 2017.”
It added that “99 percent of that content was spotted without a user having to report it, with both technology and internal reviewers contributing to that rate.” Facebook’s AI tools have also sped up the detection of newly uploaded content, which was found “in less than one minute on average during Q1.”
Facebook also designed tools to focus on older content, and “in Q1, 600,000 pieces of terrorism-related content were removed through these means.” “We’re under no illusion that the job is done or that the progress we have made is enough,” said Facebook. “Terrorist groups are always trying to circumvent our systems, so we must constantly improve.”