USC Researchers Find Bias in Deepfake Detectors’ Datasets

The advent of deepfakes, which replace a person in a video or photo with the likeness of someone else, has sparked concern that the machine learning tools used to create them are readily available to criminals and provocateurs. In response, Amazon, Facebook and Microsoft sponsored the Deepfake Detection Challenge, which resulted in several potential detection tools. But now, researchers at the University of Southern California have found that the datasets used to train some of these detection systems demonstrate racial and gender bias.

Microsoft Develops Video Authenticator to Identify Deepfakes

Microsoft debuted a Video Authenticator tool that can analyze a still photo or video and report the percentage chance that it is an AI-manipulated deepfake. For videos, Microsoft said the tool works on a frame-by-frame basis in real time. The tool is built on the public FaceForensics++ dataset and detects the "blending boundary" of the deepfake, including "subtle fading or grayscale elements" that may be indistinguishable to the human eye. It has been tested on the Deepfake Detection Challenge dataset.
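The per-frame scoring workflow described above can be sketched in a few lines. This is a hypothetical illustration, not Microsoft's actual API: the `toy_detector` stands in for a real blending-boundary classifier, and all function names here are assumptions for the example.

```python
# Minimal sketch of frame-by-frame deepfake scoring (hypothetical, not the
# Video Authenticator's real interface).
from typing import Callable, Iterable, List

def score_frames(frames: Iterable[list],
                 detector: Callable[[list], float]) -> List[float]:
    """Run a per-frame detector and return a manipulation-confidence
    percentage (0-100) for each frame, as the article describes."""
    return [round(100 * detector(frame), 1) for frame in frames]

def toy_detector(frame: list) -> float:
    # Stand-in for a real classifier: scores by mean pixel brightness only.
    # A real system would look for blending-boundary artifacts instead.
    return min(1.0, (sum(frame) / len(frame)) / 255)

# Two fake "frames" represented as flat pixel-value lists.
frames = [[10, 20, 30], [200, 210, 220]]
scores = score_frames(frames, toy_detector)
print(scores)
```

A production detector would consume decoded video frames and a trained model; the point here is only the shape of the loop: one confidence score per frame, emitted as the video plays.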