USC Researchers Find Bias in Deepfake Detectors’ Datasets

The advent of deepfakes, which replace a person in a video or photo with the likeness of someone else, has sparked concern that machine learning tools for creating them are readily available to criminals and provocateurs. In response, Amazon, Facebook and Microsoft sponsored the Deepfake Detection Challenge, which resulted in several potential tools. But now, researchers at the University of Southern California have found that the datasets used to train some of these detection systems demonstrate racial and gender bias.