Google Introduces an AI Watermark That Cannot Be Removed

Google DeepMind and Google Cloud have teamed up to launch what they claim is an indelible AI watermark tool, which, if it works as claimed, would mark an industry first. Called SynthID, the technique for identifying AI-generated images is being launched in beta. The technology embeds its digital watermark “directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification,” according to DeepMind. SynthID is being released to a limited number of Google’s Vertex AI customers using Imagen, Google’s text-to-image model that generates photorealistic images from text prompts.
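SynthID’s actual method is unpublished, but the general idea of embedding a mark “directly into the pixels” can be illustrated with a classic, much simpler technique: least-significant-bit (LSB) watermarking. The sketch below is a toy stand-in, not SynthID’s algorithm; it only shows how a signal can live in the pixel values themselves while shifting each value by at most 1, far below what the eye can perceive.

```python
# Toy illustration of pixel-level watermarking via least-significant-bit
# (LSB) embedding. This is NOT SynthID's algorithm -- that is proprietary
# and far more robust -- it simply shows how a mark can be carried in the
# pixels while changing each value imperceptibly.

def embed(pixels, bits):
    """Hide a bit sequence in the least-significant bit of the first pixels."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract(pixels, n_bits):
    """Read the hidden bits back out of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [200, 57, 143, 90, 12, 255, 33, 78]  # grayscale values, 0-255
mark = [1, 0, 1, 1]                           # 4-bit watermark
marked = embed(pixels, mark)

assert extract(marked, 4) == mark                              # mark survives
assert all(abs(a - b) <= 1 for a, b in zip(pixels, marked))    # pixels barely change
```

Unlike this toy scheme, which cropping or resizing would destroy immediately, SynthID is described as surviving exactly those kinds of transformations, which is what would set it apart from earlier approaches.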

The subject of AI detection tools is coming up with increasing frequency, according to Google DeepMind CEO Demis Hassabis, who cites deepfakes as among the most obvious and urgent reasons such technology is needed.

“With another contentious election season coming in 2024 in both the U.S. and the UK, Hassabis says that building systems to identify and detect AI imagery is more important all the time,” writes The Verge.

While the SynthID watermark is part of the image, “Hassabis says it doesn’t alter the image itself in any noticeable way,” reports The Verge, which quotes him as saying it is “robust to various transformations — cropping, resizing, all of the things that you might do to try and get around normal, traditional, simple watermarks.”

While SynthID “isn’t foolproof against extreme image manipulations,” DeepMind writes in a blog post, “it does provide a promising technical approach for empowering people and organizations to work with AI-generated content responsibly.”

The UK-based DeepMind, which Google acquired in 2014 and wrapped under the Google AI umbrella earlier this year, is continuing to work on ways to improve SynthID that will make it “even less perceptible to humans but even more easily detected by DeepMind’s tools,” according to The Verge.

Google says the tool could evolve alongside other AI models and expand to modalities beyond imagery, including audio, video, and text. SynthID can potentially be “expanded for use across other AI models and we’re excited about the potential of integrating it into more Google products and making it available to third parties in the near future,” DeepMind explains.

As AI image generators proliferate, deepfakes have become more prevalent and more difficult to detect, reports The Washington Post. While companies have experimented with placing visible identifiers on AI images, as well as embedding text “metadata” disclosing an image’s origin, “both techniques can be cropped or edited out relatively easily.”
