Intel Promises 96 Percent Accuracy with New Deepfake Filter

Intel has debuted FakeCatcher, touting it as the first real-time deepfake detector capable of determining whether digital video has been altered to change context or meaning. Intel says FakeCatcher has a 96 percent accuracy rate and returns results in milliseconds by analyzing subtle “blood flow” signals in video pixels, a technique called photoplethysmography (PPG) that Intel borrowed from medical research. The company says potential use cases include social media platforms screening uploads to block harmful deepfake videos and global news organizations avoiding inadvertent amplification of deepfakes.

Intel Labs senior staff research scientist Ilke Demir designed FakeCatcher in collaboration with Umur Ciftci from the State University of New York at Binghamton. The product uses Intel hardware and software, runs on a server and interfaces through a web-based platform.

Unlike most deep learning-based deepfake detectors, which look at raw data to pinpoint inauthenticity, FakeCatcher focuses on “clues within actual videos,” the company says in a news release, noting its PPG technique is also used “to measure the amount of light that is absorbed or reflected by blood vessels in living tissue. When the heart pumps blood, it goes to the veins, which change color.”
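To make the idea concrete, here is a minimal Python sketch of how a raw PPG waveform can be read out of ordinary video. The frame array, the fixed region of interest and the green-channel heuristic are illustrative assumptions for this sketch, not details of Intel’s pipeline.

import numpy as np

def extract_ppg_signal(frames, roi):
    """Extract a crude PPG signal from a video clip.

    frames: array of shape (T, H, W, 3) holding RGB video frames.
    roi: (top, bottom, left, right) bounds of a skin patch (e.g., a cheek).

    Blood volume changes modulate how much light skin reflects; the green
    channel carries the strongest pulse signal, so averaging it over a skin
    patch per frame yields a noisy heartbeat waveform.
    """
    top, bottom, left, right = roi
    patch = frames[:, top:bottom, left:right, 1].astype(np.float64)  # green channel
    signal = patch.mean(axis=(1, 2))       # one value per frame
    signal -= signal.mean()                # remove the DC component
    return signal / (signal.std() + 1e-8)  # normalize to unit variance

In a real detector the region would track a detected face rather than sit at fixed coordinates, but the core readout is this simple: a per-frame average over skin pixels.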

Demir told VentureBeat that “PPG signals have been known, but they have not been applied to the deepfake problem before.” With the new system, “PPG signals are collected from 32 locations on the face,” she explained, “and then PPG maps are created from the temporal and spectral components.”
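One plausible way to assemble such a map, continuing the earlier sketch, is to stack each region’s waveform next to its frequency spectrum so a classifier sees both the temporal and spectral components Demir describes. The 32-region layout and window size below are assumptions for illustration.

import numpy as np

def build_ppg_map(frames, regions, window=128):
    """Assemble a PPG 'map' from many facial regions (hypothetical layout).

    regions: list of (top, bottom, left, right) ROIs, e.g., 32 face patches.
    Returns an array of shape (len(regions) * 2, window): each region
    contributes its temporal waveform plus its FFT magnitude spectrum.
    """
    rows = []
    for roi in regions:
        sig = extract_ppg_signal(frames[:window], roi)              # temporal component
        spec = np.abs(np.fft.rfft(sig, n=2 * window - 1))[:window]  # spectral component
        rows.append(sig)
        rows.append(spec / (spec.max() + 1e-8))
    return np.stack(rows)  # a 2D "image" a CNN can consume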

“We take those maps and train a convolutional neural network on top of the PPG maps to classify them as fake and real,” Demir told VentureBeat, explaining that FakeCatcher uses “Intel technologies like [the] Deep Learning Boost framework for inference and Advanced Vector Extensions 512” to run up to 72 concurrent detection streams in real time.
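The classification step can be sketched with a small PyTorch network that treats the PPG map as a one-channel image. The architecture below is a toy stand-in to show the shape of the approach; Intel has not published FakeCatcher’s actual model.

import torch
import torch.nn as nn

class PPGMapClassifier(nn.Module):
    """Toy CNN that labels a PPG map as real (0) or fake (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, 2)

    def forward(self, x):  # x: (batch, 1, rows, window)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: a batch of maps from 32 regions (64 rows) over a 128-frame window.
model = PPGMapClassifier()
logits = model(torch.randn(8, 1, 64, 128))  # -> (8, 2) real/fake scores

Running many such lightweight classifiers side by side is what makes the 72 concurrent streams Intel cites plausible on server hardware.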

Forrester Research predicts that costs associated with U.S. deepfake scams will soon exceed $250 million, and the firm’s AI/ML analyst Rowan Curran told VentureBeat that “we are in for a long evolutionary arms race” around the ability to determine whether a piece of text, audio or video is human-generated or not.

University of Southern California research indicates some AI models used to train deepfake detectors “might underrepresent people of a certain gender or with specific skin colors,” VentureBeat reports, noting such AI bias “can be amplified in deepfake detectors,” with some detectors showing as much as a 10.7 percent difference in error rate across demographic groups.
