Intel Promises 96 Percent Accuracy with New Deepfake Filter

Intel has debuted FakeCatcher, touting it as the first real-time deepfake detector, capable of determining whether digital video has been altered to change context or meaning. Intel says FakeCatcher has a 96 percent accuracy rate and returns results in milliseconds by analyzing the “blood flow” visible in pixel patterns, a technique called photoplethysmography (PPG) that Intel borrowed from medical research. The company says potential use cases include social media platforms screening uploads to block harmful deepfake videos and global news organizations avoiding the inadvertent amplification of deepfakes.
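The PPG idea can be illustrated with a minimal sketch: real video of a face carries a faint periodic color shift driven by the heartbeat, while synthesized faces generally lack a coherent pulse. The function names, frame rate and frequency band below are illustrative assumptions, not Intel's implementation.

```python
import numpy as np

FPS = 30  # assumed camera frame rate

def mean_green_signal(frames: np.ndarray) -> np.ndarray:
    """Average the green channel over a cropped face region, per frame.
    frames: (T, H, W, 3) uint8 array of face images."""
    return frames[..., 1].mean(axis=(1, 2)).astype(np.float64)

def dominant_pulse_bpm(signal: np.ndarray, fps: float = FPS) -> float:
    """Return the strongest frequency in the 0.7-3 Hz (42-180 bpm) band."""
    signal = signal - signal.mean()  # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)  # plausible human heart rates
    return float(freqs[band][np.argmax(spectrum[band])] * 60)

# Synthetic demo: a "face" whose green channel pulses at 1.2 Hz (72 bpm)
t = np.arange(300) / FPS  # 10 seconds of video
pulse = 2.0 * np.sin(2 * np.pi * 1.2 * t)  # tiny periodic brightness change
frames = np.full((300, 8, 8, 3), 128.0)
frames[..., 1] += pulse[:, None, None]
frames = frames.astype(np.uint8)

print(round(dominant_pulse_bpm(mean_green_signal(frames))))  # prints 72
```

A detector built on this idea would compare the coherence and plausibility of such pulse signals across facial regions; a deepfake tends to show no consistent heartbeat rhythm.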

Deepfakes Used for Entertainment, Advertising Draw Concern

Celebrity deepfakes springing up on the web, and even in advertising, are raising concerns as the technology advances in both sophistication and commercial appeal. Apple was just granted rights by the U.S. Patent Office to “face image generation with pose and expression control” from reference images. This month, video of President Biden was manipulated into a performance of the viral children’s tune “Baby Shark,” while a digital doppelganger for Elon Musk hawked investment opportunities for real estate startup reAlpha Tech. Tom Cruise, Leonardo DiCaprio and Bruce Willis are also among those whose likenesses have been misappropriated for promotional use without permission.

D-ID Creative Reality Studio Helps Users Make DIY AI Videos

Artificial intelligence company D-ID has launched a new presentation platform that can generate video from a single image and text. Creative Reality Studio offers a choice of 270 voices in 119 languages that users can pair with one of the company’s original avatar creations or an uploaded photo. The product is aimed at markets including education, the metaverse, advertising and sales. The company is offering a free 14-day trial, after which users must switch to a $49 per month Pro subscription or a higher-end Enterprise plan (pricing available on request).

OpenAI Expands DALL-E 2 Functionality with Facial Uploads

OpenAI has begun allowing users of its DALL-E 2 image-generating system to work with facial image uploads. The program previously allowed only computer-generated faces in an effort to prevent deepfakes and misuse, but OpenAI says improvements to its safety system succeeded in “minimizing the potential of harm” from things like explicit, political or violent content. OpenAI will continue to prohibit the use of unauthorized photos and will seek to protect the right of publicity, though it remains to be seen how effective that will be. In the past, customers have complained the company was overzealous in its policing.

EU’s AI Act Could Present Dangers for Open-Source Coders

The EU’s draft AI Act is causing quite a stir, particularly as it pertains to regulating general-purpose artificial intelligence, including guidelines for open source developers that specify procedures for accuracy, risk management, transparency, technical documentation and data governance, as well as cybersecurity. The first law on AI by a major regulator anywhere, the proposed AI Act seeks to promote “trustworthy AI,” but some critics argue that as written the legislation could hurt open efforts to develop AI systems. The EU is seeking industry input as the proposal heads for a vote this fall.

Microsoft Pulls AI Analysis Tool Azure Face from Public Use

As part of an overhaul of its AI ethics policies, Microsoft is retiring from the public sphere several AI-powered facial analysis tools, including a controversial algorithm that purports to identify a subject’s emotion from images. Other features Microsoft will excise for new users this week and phase out for existing users within a year include those that claim the ability to identify gender and age. Advocacy groups and academics have expressed concern regarding such facial analysis features, characterizing them as unreliable and invasive as well as subject to bias.

European Union Creates Code of Practice on Disinformation

The European Union unveiled a new code of practice for disinformation, a glimpse at the regulation Big Tech companies will be dealing with under upcoming digital content laws. Meta Platforms, Twitter, TikTok and Google have agreed to the new rules, which update voluntary guidelines. The revised standards direct social media companies to avoid advertising adjacent to intentionally false or misleading content. EU policymakers have said they will make parts of the new code mandatory under the Digital Services Act. Platforms agreeing to comply with the new rules must submit implementation reports by early 2023.

DALL-E 2 by OpenAI Creates Images Based on Descriptions

OpenAI has created a new technology that creates and edits images based on written descriptions of the desired result. DALL-E 2, an homage to the surrealist painter Salvador Dalí and the Pixar film “WALL-E,” is still in development but is already producing impressive results with simple instructions like “kittens playing chess” and “astronaut riding a horse.” OpenAI says the tech “isn’t being directly released to the public” and the hope is “to later make it available for use in third-party apps.” Already some are expressing worry that such a tool has the potential to exponentially increase the use of deepfakes.

EU’s Sweeping AI Act Takes Tough Stance on High Risk Use

The European Union’s pending Artificial Intelligence Act — the world’s first comprehensive effort to regulate AI — is coming under scrutiny as it moves toward becoming law. The Act proposes unplugging AI deemed a risk to society. Critics say it draws too heavily on general consumer product safety rules, overlooking unique aspects of AI, and is too closely tied to EU market law. This could limit its applicability as a template for other regions evaluating AI legislation, undermining the EU’s desired first-mover status in the digital sphere.

TikTok Halts Russia Live Streams, Battles War Disinformation

Young people who made TikTok a top destination for dance-craze videos and makeup tutorials are now making it a news destination as they seek information about the Russian invasion of Ukraine. But it has emerged that some users are doctoring video game footage of tanks rolling in and presenting it as scenes from the war. Since the conflict erupted, hundreds of thousands of videos about the ongoing saga have been uploaded to TikTok. The war has put the social video platform in the uncharacteristic role of news moderator for material that is often unverified.

Policing the Metaverse Looms as a Challenge for Tech Firms

The metaverse is in its early days, but many are already concerned as they anticipate the content moderation problems that have bedeviled traditional social media increasing exponentially in virtual worlds. The confluence of realistic immersive environments, the anonymity of avatars and the potential for deepfakes is enough to give anyone pause. Throw in machine learning that will make today’s ad targeting seem primitive and it’s an even more volatile mix. Experts agree that the very qualities that make the metaverse appealing — false facades and hyperreality — make it potentially more dangerous than the digital platforms of today.

UK Lawmakers Are Taking Steps to Toughen Online Safety Bill

British lawmakers are seeking “major changes” to the forthcoming Online Safety Bill, which cracks down on Big Tech but apparently does not go far enough. Expansions under discussion include legal consequences for tech firms and new rules for online fraud, advertising scams and deepfake (AI-generated) adult content. Comparing the Internet to the “Wild West,” Damian Collins, chairman of the joint committee that issued the report, went so far as to suggest corporate directors be subject to criminal liability if their companies withhold information or fail to comply with the act.

AI-Powered Auto-Dubbing May Soon Become Industry Norm

Artificial intelligence and machine learning are poised to revolutionize the dubbing process for media content, optimizing it for a more natural effect as part of an emerging movement called “auto-dubbing.” AI has impacted the way U.S. audiences are experiencing the Netflix breakout “Squid Game” and other foreign content, as well as helping U.S. programming play better abroad. Its impact is still in its infancy. Soon, replacing rubber-lip syndrome with AI-enhanced visuals that enable language translation at the click of a button may become the industry norm.

Warner Bros. Teams with AI Startup to Create Custom Trailers

To promote its upcoming sci-fi thriller “Reminiscence,” Warner Bros. has teamed up with AI startup D-ID to create a website that allows anyone to upload a photo and turn it into a deepfake video sequence promoting the film. D-ID, which started out developing technology to protect consumers against facial recognition, now uses that research to optimize deepfakes. D-ID chief executive Gil Perry stated that the company “built a very strong face engine” that enabled a deepfake to be created from a single photo.

The Linux Foundation Leads Charge for Voice Tech Standards

The Linux Foundation — along with Microsoft, Target, Veritone and other companies — has launched the Open Voice Network (OVN) in order to “prioritize trust and standards” in voice-focused technology. Open Voice Network executive director Jon Stine said the impetus is the tremendous growth of voice assistance for AI-enabled devices and its future potential as an interface and data source. Linux Foundation senior vice president Mike Dolan said the effort is a “proactive response to combating deepfakes in AI-based voice technology.”