AI-Powered Auto-Dubbing May Soon Become Industry Norm

Artificial intelligence and machine learning are poised to revolutionize the dubbing process for media content, optimizing it for a more natural effect as part of an emerging movement called “auto-dubbing.” AI has already shaped how U.S. audiences experience the Netflix breakout “Squid Game” and other foreign content, while also helping U.S. programming play better abroad. Its impact is still in its nascency, but replacing rubber-lip syndrome with AI-enhanced visuals that enable language translation at the click of a button may soon become the industry norm.

Warner Bros. Teams with AI Startup to Create Custom Trailers

To promote its upcoming sci-fi thriller “Reminiscence,” Warner Bros. has teamed up with AI startup D-ID to create a website that lets anyone upload a photo and turns it into a deepfake video sequence promoting the film. D-ID, which started out developing technology to protect consumers against facial recognition, now uses that research to optimize deepfakes. D-ID chief executive Gil Perry stated that the company “built a very strong face engine” that enabled a deepfake to be created from a single photo.

The Linux Foundation Leads Charge for Voice Tech Standards

The Linux Foundation — along with Microsoft, Target, Veritone and other companies — has launched the Open Voice Network (OVN) in order to “prioritize trust and standards” in voice-focused technology. Open Voice Network executive director Jon Stine said the impetus is the tremendous growth of voice assistance for AI-enabled devices and its future potential as an interface and data source. Linux Foundation senior vice president Mike Dolan said the effort is a “proactive response to combating deepfakes in AI-based voice technology.”

USC Researchers Find Bias in Deepfake Detectors’ Datasets

The advent of deepfakes, which replace a person in a video or photo with the likeness of someone else, has sparked concern that the machine learning tools used to create them are readily available to criminals and provocateurs. In response, Amazon, Facebook and Microsoft sponsored the Deepfake Detection Challenge, which resulted in several potential detection tools. But now, researchers at the University of Southern California have found that the datasets used to train some of these detection systems exhibit racial and gender bias.

C-Suite Trends: Spending on Defensive AI, IT to Rise in 2021

MIT Technology Review Insights and cybersecurity firm Darktrace published a survey of more than 300 worldwide C-level executives, directors and managers that reveals 96 percent are adopting “defensive AI” against AI-driven attacks. Of this cohort, 55 percent said traditional security solutions cannot anticipate such AI-driven attacks. Defensive AI consists of self-learning algorithms that recognize normal user, device and system patterns and can spot anomalies. Gartner reported that global spending on IT will reach $4.1 trillion this year.
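The core idea behind defensive AI — learning what “normal” activity looks like and flagging deviations — can be illustrated with a deliberately simple sketch. The z-score baseline below is a stand-in for illustration only, not how Darktrace or any vendor actually implements it:

```python
import statistics

def fit_baseline(samples):
    """Learn a 'normal' profile (mean and std deviation) from benign activity."""
    return statistics.fmean(samples), statistics.pstdev(samples)

def is_anomaly(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical example: login attempts per hour for one user account
normal_hours = [3, 5, 4, 6, 5, 4, 3, 5]
baseline = fit_baseline(normal_hours)
print(is_anomaly(4, baseline))    # typical activity → False
print(is_anomaly(250, baseline))  # sudden burst suggesting automation → True
```

Real defensive AI systems model many signals at once (users, devices, network flows) rather than a single count, but the detect-by-deviation principle is the same.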

Facebook Counters AI Bias with a Data Set Featuring Actors

Facebook released an open-source AI data set of 45,186 videos featuring 3,011 U.S. actors who were paid to participate. The data set is dubbed Casual Conversations because the diverse group was recorded giving unscripted answers to questions about age and gender. Skin tone and lighting conditions were also annotated by humans. Biases have been a problem in AI-enabled technologies such as facial recognition. Facebook is encouraging teams to use the new data set. Most AI data sets comprise people unaware they are being recorded.

Study Suggests Deepfakes Fool Top Facial Recognition Tech

Deepfakes, in which a person in a video is swapped for another person via AI-enabled tools, are on the rise. Deeptrace reported that, between October 2019 and June 2020, the number of deepfakes on the Internet jumped 330 percent, reaching 50,000 at the peak. Deepfakes have been used to place celebrities in embarrassing and inappropriate content, to defraud a major energy producer, and for many other disruptive or criminal purposes. Tools to create deepfakes are readily available, and a recent study found that deepfakes can reliably fool commercial facial recognition services.

Adobe Beta-Testing New Tool to Detect Manipulated Images

Adobe released a beta version of a Photoshop tool that will make it easier to determine whether an image is real or has been manipulated. The so-called attribution tool, which will first be tested with a select group of people, enables photo editors to attach more detailed, secure metadata to images. In addition to identifying who created the image, the metadata will provide information on how it was altered and whether AI tools were used to do so. Adobe said it will also be clear if the metadata has been tampered with. This could be a step toward combating deepfakes.
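The tamper-evidence Adobe describes generally relies on cryptographic signatures over the image and its edit history. As a minimal sketch of that principle — using a shared-secret HMAC rather than the public-key infrastructure a real provenance system would use, with all names hypothetical:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical; real systems use PKI

def attach_metadata(image_bytes, edits):
    """Bundle provenance metadata with a signature over image + edit history."""
    metadata = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "edits": edits,  # e.g. which tools, including AI tools, were used
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": signature}

def verify_metadata(record):
    """Return True only if the metadata has not been tampered with."""
    payload = json.dumps(record["metadata"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = attach_metadata(b"fake-image-bytes", ["crop", "ai-filter"])
print(verify_metadata(record))           # True: record is intact
record["metadata"]["edits"] = ["none"]   # attacker rewrites the edit history
print(verify_metadata(record))           # False: tampering is detected
```

Because any change to the metadata breaks the signature, a viewer can trust the recorded edit history only when verification succeeds.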

Microsoft Develops Video Authenticator to Identify Deepfakes

Microsoft debuted a Video Authenticator tool that can analyze a still photo or video to determine the percentage chance that it is an AI-manipulated deepfake. For videos, Microsoft said the tool works on a frame-by-frame basis in real time. The tool is based on the public FaceForensics++ dataset and detects the “blending boundary” of the deepfake, with “subtle fading or grayscale elements” that may be undetectable by the human eye. It has been tested on the Deepfake Detection Challenge dataset.
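Frame-by-frame scoring of the kind Microsoft describes can be sketched at a high level: score each frame, then aggregate into a clip-level confidence. The structure below is an assumption for illustration, with a toy stand-in for the trained detector, and is not Microsoft's actual method:

```python
def authenticity_report(frames, score_frame):
    """Score each frame's manipulation probability and aggregate for the clip.

    `score_frame` stands in for a trained detector (e.g., one trained on
    FaceForensics++) returning the chance a frame was manipulated (0.0-1.0).
    """
    scores = [score_frame(f) for f in frames]
    return {
        "per_frame": scores,
        "clip_confidence": max(scores),  # one strong hit flags the whole clip
        "flagged_frames": [i for i, s in enumerate(scores) if s > 0.5],
    }

# Toy stand-in: pretend each frame carries a precomputed detector score
frames = [{"score": 0.02}, {"score": 0.91}, {"score": 0.15}]
report = authenticity_report(frames, lambda f: f["score"])
print(report["clip_confidence"])  # 0.91
print(report["flagged_frames"])   # [1]
```

Taking the maximum per-frame score means a single convincingly manipulated frame is enough to flag the clip, which suits detection even when most frames look clean.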

Quality of Deepfakes and Textfakes Increases Potential Impact

FireEye data scientist Philip Tully showed off a convincing deepfake of Tom Hanks that he built with less than $100 and open-source code. Until recently, most deepfakes have been low quality and relatively easy to spot. FireEye demonstrated that now even those with little AI expertise can use published AI code and a bit of fine-tuning to create much more convincing results. But many experts believe deepfake text is a bigger threat, as the GPT-3 autoregressive language model can produce text that is difficult to distinguish from that written by humans.

AI Foundation Plans to Scale Platform With Series B Funding

San Francisco-based AI Foundation, founded in 2017 by Rob Meadows and Lars Buttler, just closed a $17 million Series B round with Mousse Partners, You & Mr. Jones, Founders Fund, Alpha Edison and Stone. The foundation previously closed a $10.5 million Series A round in September 2018. The AI Foundation is both a commercial artificial intelligence company and a nonprofit enterprise with the mission of bringing “the power and protection of AI to everyone in the world so they can participate fully in the future.”

Deepfakes Go Mainstream for Corporate Training, Other Uses

Although deepfakes have mainly been associated with fake news, hoaxes and pornography, they’re now also being used for more conventional tasks, including corporate training. WPP, with startup Synthesia, has created localized training videos by using AI to change presenters’ faces and speech. WPP chief technology officer Stephan Pretorius noted that the localized videos are more compelling and “the technology is getting very good very quickly.” In COVID-19 times, deepfakes can also lower costs and speed up production.

UK Proposes Internet Laws, Reuters to Fact-Check Facebook

The United Kingdom proposed that its media regulator Ofcom take on the responsibility of regulating Internet content, in part to encourage Facebook, YouTube and other Internet behemoths to police their own platforms. Ofcom would be able to issue penalties against companies lax in fighting “harmful and illegal terrorist and child abuse content.” Many details have yet to be filled in. Meanwhile, Reuters has formed a new Fact Check business unit, which is poised to become a third-party partner aimed at ferreting out misinformation on Facebook.

Shares Rise as Twitter’s Revenue Passes $1B for First Time

Twitter revealed that, in Q4, revenue rose 11 percent to $1.01 billion, the first time quarterly revenue topped the billion-dollar mark, surpassing the $992 million projected by Wall Street analysts. The company stated that income was $118.8 million, with costs rising 22 percent from a year earlier. Its operating income, a closely watched number, was $153 million, down from $207 million the previous year and lower than the $161 million predicted by analysts surveyed by FactSet. Shares rose about 15 percent.

New Twitter Policy Aims to Combat Fake Photos and Video

Twitter announced yesterday that it would be more assertive in identifying fake and manipulated content on its platform. Beginning next month, the company plans to add labels to, or remove, tweets that feature manipulated images and video. While short of an outright ban, the new policy is meant to address the growing frustration of users over disinformation spread via social platforms. It also highlights the challenges social media companies face in balancing freedom of speech, parody and satire, and false or manipulated content. On Monday, YouTube announced its plans to better manage misleading political content on its site.