September 27, 2019
Google, in cooperation with Jigsaw, its internal technology incubator, released a large dataset of deepfake videos, which has been added to the FaceForensics benchmark run by the Technical University of Munich and the University of Naples Federico II. The deepfakes are available free of charge to researchers developing detection techniques. Previously, Google released a dataset of synthetic speech as part of the ASVspoof 2019 challenge, which aims to develop systems that distinguish between real and computer-generated speech.
VentureBeat reports that, according to Google, the earlier speech release has “been downloaded by more than 150 research and industry organizations to date.”
“Since [the] first appearance [of deepfakes] in late 2017, many open-source deepfake generation methods have emerged, leading to a growing number of synthesized media clips,” said Google Research scientist Nick Dufour and Jigsaw technical research manager Andrew Gully, who added that, although many deepfakes “are likely intended to be humorous, others could be harmful to individuals and society.”
For the more recent project, Google recorded “hundreds” of videos with paid and consenting actors and then, with the team behind FaceForensics and others, created “thousands” of deepfakes. Google stated it will “continue to work with partners … [and] add to the corpus as deepfake technology evolves over time.”
“We firmly believe in supporting a thriving research community around mitigating potential harms from misuses of synthetic media, and today’s release of our deepfake dataset in the FaceForensics benchmark is an important step in that direction,” stated Dufour and Gully.
This month, members of Congress asked Director of National Intelligence Dan Coats for an intelligence report “about the potential impact of deepfakes.” The ability to digitally paste one person’s face onto another person’s body, they said, has implications for democracy and national security.
In a 2018 meeting, Congress members also expressed their concerns to Facebook chief operating officer Sheryl Sandberg and Twitter chief executive Jack Dorsey. Adding to the concern is the Chinese app ZAO, which generates deepfakes and went viral earlier this year.
DARPA’s Media Forensics program has tested a prototype system that can “automatically detect AI-generated videos in part by looking for cues like unnatural blinking,” and the startup Truepic has raised $8 million to further its deepfake “detection-as-a-service” system.
Earlier this month, the Partnership on AI, Facebook, Microsoft, and academics unveiled the Deepfake Detection Challenge, which offers “up to $10 million in grants and awards to spur the development of deepfake-detecting systems.”