Deepfakes Go Mainstream for Corporate Training, Other Uses

Although deepfakes have mainly been associated with fake news, hoaxes and pornography, they’re now also being used for more conventional tasks, including corporate training. WPP, working with startup Synthesia, has created localized training videos by using AI to change presenters’ faces and speech. WPP chief technology officer Stephan Pretorius noted that the localized videos are more compelling and “the technology is getting very good very quickly.” During the COVID-19 pandemic, deepfakes can also lower costs and speed up production.

Wired reports that a WPP internal education campaign “might require 20 different scripts for [its] global workforce.” “With Synthesia, we can have avatars that are diverse and speak your name and your agency and in your language and the whole thing can cost $100,000,” said Pretorius, who explained that WPP “hopes to distribute the clips, 20 modules of about 5 minutes each, to 50,000 employees this year.”

The term “deepfakes” originated as the Reddit username of the person or persons who released pornographic clips with the faces swapped for those of Hollywood actresses. Since then, deepfakes have been used to harass, fool, titillate and amuse.

But deepfakes for mainstream use are becoming more common. Talent agency CAA signed virtual Instagram influencer Lil Miquela, who has more than 2 million followers, and “Rosebud AI specializes in making the kind of glossy images used in e-commerce or marketing.” Rosebud AI has also released 25,000 photos of virtual models and “tools that can swap synthetic faces into any photo.”

Rosebud AI chief executive Lisha Li debuted a service that “can put clothes photographed on mannequins onto virtual but real-looking models.” Synthesia has made videos with “synthesized talking heads for corporate clients including Accenture and SAP … [and] helped David Beckham appear to deliver a PSA on malaria in several languages.”

With regard to the potential malign uses of deepfakes, Synthesia “has posted ethics rules online … vets its customers and their scripts … requires formal consent from a person before it will synthesize their appearance, and won’t touch political content.” Li, whose company also has an ethics statement, said Rosebud “should encourage a broadening of beauty standards … [and] can generate models of non-binary gender, as well as different ethnicities.”

VentureBeat reports that researchers at the University of California, Berkeley and Adobe Research have published a paper on arXiv.org about the Swapping Autoencoder, a machine learning model designed specifically for image manipulation, which they say can “modify any image in a variety of ways, including texture swapping, while remaining ‘substantially’ more efficient compared with previous generative models.”

They added that the Swapping Autoencoder could be “used to create deepfakes” but pointed to a “human perceptual study” to make the claim that it “is no more harmful than other AI-powered image manipulation tools.” In the test, the Swapping Autoencoder created images that fooled its human subjects 31 percent of the time, but “proposed detectors can successfully spot images manipulated by the tool at least 73.9 percent of the time.”
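The core idea behind a swapping autoencoder can be sketched in a few lines: an encoder factors each image into a structure code and a texture code, and a decoder reconstructs an image from any (structure, texture) pair, so swapping texture codes between two images yields a hybrid. The toy code below is a simplified linear illustration of that idea only, not the paper’s actual architecture; all weights, dimensions and function names are invented for the example.

```python
import numpy as np

# Toy illustration of the swapping-autoencoder idea (not the paper's model):
# random linear maps stand in for the learned encoder and decoder networks.
rng = np.random.default_rng(0)

D, S, T = 16, 4, 4                       # image, structure and texture dims
enc_structure = rng.standard_normal((S, D))
enc_texture = rng.standard_normal((T, D))
dec = rng.standard_normal((D, S + T))

def encode(x):
    """Factor an image vector into a (structure, texture) code pair."""
    return enc_structure @ x, enc_texture @ x

def decode(structure, texture):
    """Reconstruct an image vector from a (structure, texture) pair."""
    return dec @ np.concatenate([structure, texture])

x1, x2 = rng.standard_normal(D), rng.standard_normal(D)
s1, t1 = encode(x1)
s2, t2 = encode(x2)

# Texture swap: keep image 1's structure, apply image 2's texture.
hybrid = decode(s1, t2)
```

In the real model the encoder and decoder are deep networks trained with reconstruction and adversarial losses so that the swapped outputs look like plausible photographs; the sketch only shows the code-swapping mechanics.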

Related:
A Fashion Model For the Moment, The New York Times, 7/8/20