Big Tech, DARPA Ramp Up Deepfake Research, Detection

Movies and TV shows have combined real and computer-generated imagery for decades in the name of entertainment. But now we’re seeing the rise of deepfakes, which mix fake and real elements in still images or videos with malicious intent. Many Big Tech companies that have profited from letting users post and share photos are turning their attention to battling deepfakes. According to cybersecurity startup Deeptrace, the number of deepfakes online nearly doubled between December 2018 and August 2019, reaching 14,678.

The Wall Street Journal reports that Twitter head of site integrity Yoel Roth noted that “the risk is that these types of synthetic media and disinformation undermine the public trust and erode our ability to have productive conversations about critical issues.” Amazon, Facebook and Microsoft have teamed up with “more than a half-dozen universities to run a Deepfake Detection Challenge starting next month.”

Facebook chief technology officer Mike Schroepfer stated that the Challenge is “intended to accelerate research into new ways of detecting and preventing media manipulated to mislead others.”

Deepfakes first appeared on Reddit two years ago and are now the topic of discussion on “at least 20 websites and online forums.” Deeptrace found services that “can generate and sell custom deepfakes in as little as two days and for a cost as low as $2.99 a video.”

DARPA program manager Matt Turek, who oversees deepfake R&D, noted that “it doesn’t take a lot of skill.” DARPA has developed a “prototype media forensics tool for use by government agencies to detect altered photos and video” and plans to develop technology to “detect synthetic audio and fake text and identify the source and intent of any manipulated content.”

Facebook chief executive Mark Zuckerberg is reviewing his company’s deepfake policy after refusing to take down a doctored video of House Speaker Nancy Pelosi (D-California). Both Facebook and Google have compiled thousands of videos that they say researchers can use to develop systems to detect deepfakes.

Adobe has “developed a system that will allow authors and publishers to attach information to content, such as who created it and when and where.” Working with The New York Times Co. and Twitter, the company plans to deploy the technology in its Photoshop software. The AI Foundation’s non-profit division has also created Reality Defender 2020, a portal that can “help election campaigns and journalists analyze photos and videos within minutes of receiving them.”

The New York Times reports that deepfakes were at “the center of prominent incidents in Brazil, Gabon in Central Africa and China.” The resulting confusion has been dubbed “the liar’s dividend.” “You can already see a material effect that deepfakes have had,” said Google engineer Nick Dufour, one of several involved in deepfake research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.”

Experts worry that they won’t be able to “keep pace” with the evolving sophistication of deepfakes. Canadian company Dessa built a deepfake detector that failed 40 percent of the time when tested against videos found on the Internet. “Unlike other problems, this one is constantly changing,” said Dessa founder/head of machine learning Ragavan Thurairatnam.

Related:
Deepfakes — Believe at Your Own Risk, Episode 21: ‘Fake Believe’, The New York Times, 11/22/19
MIT Deepfake Shows Nixon Sadly Saying the Moon Astronauts Died, Futurism, 11/22/19