USC Researchers Find Bias in Deepfake Detectors’ Datasets

The advent of deepfakes, which replace a person in a video or photo with the likeness of someone else, has sparked concern because the machine learning tools used to create them are readily available to criminals and provocateurs. In response, Amazon, Facebook and Microsoft sponsored the Deepfake Detection Challenge, which resulted in several potential tools. But now, researchers at the University of Southern California have found that the datasets used to train some of these detection systems demonstrate racial and gender bias.

VentureBeat reports that those who took up the Deepfake Detection Challenge were able to rely on “a large corpus of visual deepfakes produced in collaboration with Jigsaw, Google’s internal technology incubator, which was incorporated into a benchmark made freely available to researchers for synthetic video detection system development.”

Microsoft also “launched its own deepfake-combating solution in Video Authenticator, a system that can analyze a still photo or video to provide a score for its level of confidence that the media hasn’t been artificially manipulated.”

USC researchers reported that “some of the datasets used to train deepfake detection systems might underrepresent people of a certain gender or with specific skin colors,” and that “this bias can be amplified in deepfake detectors … with some detectors showing up to a 10.7 percent difference in error rate depending on the racial group.”

They examined three deepfake detection models with “proven success in detecting deepfake videos,” all trained on the FaceForensics++ dataset along with Google’s DeepfakeDetection, CelebDF and DeeperForensics-1.0. The results showed that “all of the detectors performed worst on videos with darker Black faces, especially male Black faces,” and that the training data was “strongly imbalanced” across gender and racial groups.

The FaceForensics++ sample videos featured women (mostly white) more than 58 percent of the time, compared with 41.7 percent for men, and “less than 5 percent of the real videos showed Black or Indian people.”

The datasets also contained so-called irregular swaps, in which “a person’s face was swapped onto another person of a different race or gender.” The USC researchers hypothesized that the irregular swaps, “while intended to mitigate bias, are in fact to blame for at least a portion of the bias in the detectors,” which have “learned correlations between fakeness and, for example, Asian facial features.”

“In a real-world scenario, facial profiles of female Asian or female African are 1.5 to 3 times more likely to be mistakenly labeled as fake than profiles of the male Caucasian,” they said. “The proportion of real subjects mistakenly identified as fake can be much larger for female subjects than male subjects.”

The USC researchers also noted that “at least one deepfake detector in the study achieved 90.1 percent accuracy on a test dataset, a metric that conceals the biases within.” “[U]sing a single performance metrics such as … detection accuracy over the entire dataset is not enough to justify massive commercial rollouts of deepfake detectors,” they concluded.

“As deepfakes become more pervasive, there is a growing reliance on automated systems to combat deepfakes. We argue that practitioners should investigate all societal aspects and consequences of these high impact systems.”
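
To see why a single aggregate accuracy figure can hide this kind of disparity, consider a quick back-of-the-envelope check. The Python sketch below uses entirely made-up predictions and hypothetical group labels, not data from the USC study: overall accuracy comes out above 90 percent even though real videos from one group are flagged as fake more than ten times as often as those from the other.

```python
# Hypothetical evaluation records (illustrative only, not USC study data):
# each tuple is (demographic_group, true_label, predicted_label),
# where 1 = fake and 0 = real.
results = (
    [("group_a", 0, 0)] * 780 + [("group_a", 0, 1)] * 20   # real videos, group A
    + [("group_b", 0, 0)] * 80 + [("group_b", 0, 1)] * 40  # real videos, group B
    + [("group_a", 1, 1)] * 30 + [("group_a", 1, 0)] * 10  # fake videos, group A
    + [("group_b", 1, 1)] * 30 + [("group_b", 1, 0)] * 10  # fake videos, group B
)

# Aggregate accuracy looks strong on its own.
overall_accuracy = sum(t == p for _, t, p in results) / len(results)
print(f"overall accuracy: {overall_accuracy:.1%}")  # 92.0%

# Per-group false-positive rate: real videos mistakenly labeled fake,
# the kind of error the researchers say falls disproportionately on some groups.
for group in ("group_a", "group_b"):
    real_preds = [p for g, t, p in results if g == group and t == 0]
    false_positive_rate = sum(real_preds) / len(real_preds)
    print(f"{group} false-positive rate: {false_positive_rate:.1%}")
    # group_a: 2.5%   group_b: 33.3%
```

The same breakdown, reported by group rather than rolled into one headline number, is the kind of accounting the researchers argue should accompany any claim about detector accuracy.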

Related:
The Mother of All Deepfakes, Sports Illustrated, 5/12/21
Deepfake Lips Are Coming to Dubbed Films, Gizmodo, 5/6/21
Deepfake Dubs Could Help Translate Film and TV Without Losing an Actor’s Original Performance, The Verge, 5/18/21
