Microsoft Develops Video Authenticator to Identify Deepfakes

Microsoft debuted a Video Authenticator tool that can analyze a still photo or video and provide a percentage chance, or confidence score, that the media is an AI-manipulated deepfake. For videos, Microsoft said the tool works on a frame-by-frame basis in real time. The tool was built using the public FaceForensics++ dataset and detects the "blending boundary" of a deepfake, including "subtle fading or grayscale elements" that may not be detectable by the human eye. It has been tested on the Deepfake Detection Challenge dataset.
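Conceptually, the frame-by-frame scoring described above amounts to a loop over decoded video frames, each passed to a classifier that returns a manipulation confidence. The sketch below is purely illustrative, using OpenCV and a placeholder `score_frame` function; it is an assumption for clarity, not Microsoft's actual implementation.

```python
# Illustrative sketch of frame-by-frame deepfake confidence scoring.
# NOT Microsoft's Video Authenticator; the detector here is a placeholder.
import cv2  # OpenCV, used only to decode video frames


def score_frame(frame) -> float:
    """Placeholder for a trained detector that returns a manipulation
    confidence between 0.0 and 1.0 for a single frame. A real model would
    look for blending-boundary artifacts such as subtle fading or grayscale
    edges around a swapped face."""
    return 0.0  # dummy value so the loop runs end to end


def score_video(path: str):
    """Yield (frame_index, confidence) for every frame in the video."""
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        yield index, score_frame(frame)
        index += 1
    capture.release()


if __name__ == "__main__":
    for i, confidence in score_video("example.mp4"):
        print(f"frame {i}: {confidence:.2f}")
```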

ZDNet reports that the Challenge dataset is considered "a leading model for training and testing deepfake detection technologies." Microsoft stated that deepfake detection is "crucial in the lead up to the U.S. election," and detailed the effort in its statement "New Steps to Combat Disinformation."

“We expect that methods for generating synthetic media will continue to grow in sophistication,” said the company in a blog post. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media.”

Microsoft also debuted a two-component technology that, it said, "can both detect manipulated content and assure people that the media they’re viewing is authentic." The first component is a tool "built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content … [that] then live with the content as metadata wherever it travels online." The second tool is a reader, "which can be included in a browser extension, that checks the certificates and matches the hashes to determine authenticity."
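The producer/reader split described above follows a familiar sign-and-verify pattern: the producer hashes the content and signs that hash, and the reader recomputes the hash and checks the signature against the attached metadata. The sketch below illustrates that general idea only; the function names and the choice of SHA-256 with Ed25519 are assumptions for illustration, not details of Microsoft's Azure tooling.

```python
# Minimal sketch of the hash-and-certificate idea (illustrative only).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def publish(content: bytes, private_key: Ed25519PrivateKey) -> dict:
    """Producer side: attach a content hash and a signature as metadata."""
    digest = hashlib.sha256(content).hexdigest()
    signature = private_key.sign(digest.encode())
    return {"sha256": digest, "signature": signature}


def verify(content: bytes, metadata: dict, public_key: Ed25519PublicKey) -> bool:
    """Reader side: recompute the hash and check the publisher's signature."""
    digest = hashlib.sha256(content).hexdigest()
    if digest != metadata["sha256"]:
        return False  # content was altered after publication
    try:
        public_key.verify(metadata["signature"], digest.encode())
        return True
    except InvalidSignature:
        return False  # metadata does not come from the claimed publisher


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    article = b"Example news content"
    meta = publish(article, key)
    print(verify(article, meta, key.public_key()))         # True
    print(verify(article + b"!", meta, key.public_key()))  # False
```

Because the hash and certificate travel with the content as metadata, any downstream reader that trusts the publisher's public key can confirm both that the content is unchanged and that it originated with that publisher.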

Microsoft also teamed up with the AI Foundation to "make the Video Authenticator available to organizations involved in the democratic process, including news outlets and political campaigns," through the foundation’s Reality Defender 2020 initiative.

Microsoft will also test its authenticity technology through a partnership “with a consortium of media companies, known as Project Origin.” The Trusted News Initiative, a group of publishers and social media companies, has “agreed to engage with Microsoft on testing its technology.”

Sensity, the University of Washington, and USA Today have also joined Microsoft to improve media literacy, which, said Microsoft, "will help people sort disinformation from genuine facts and manage risks posed by deepfakes and cheap fakes."

Related:
AI Researchers Use Heartbeat Detection to Identify Deepfake Videos, VentureBeat, 9/3/20
