Adobe Firefly Adds Third-Party Models to Generative AI Suite

To coincide with Adobe MAX 2025 in Los Angeles, Adobe has released a new version of its generative AI tool Firefly, calling it “your all-in-one creative AI studio.” Under a single subscription, Firefly now offers a collection of models that includes not only Firefly Image Model 5 (in public beta) but also models from partners including Google, OpenAI, Luma AI, ElevenLabs and Topaz Labs, among others. Adobe says Firefly aims to support every phase of the creative workflow, from concept to final product, using AI to generate music, narration and video clips while also assisting with ideation and editing.

Google Veo 3.1 Advances Generative Video in Flow and Vertex

Google has released Veo 3.1 and Veo 3.1 Fast in paid preview, adding new capabilities to a generative video model that is already a leader in the field. Creative and technical upgrades include richer native audio, from dialogue to sound effects, a greater understanding of cinematic styles and better prompt adherence. The two new models are available via the Gemini API in Google AI Studio and Vertex AI, with Veo 3.1 also available in the Gemini app and the storytelling tool Flow, which now supports native audio. Flow has generated more than 275 million videos since its release at Google I/O in May, according to the company. A brief sketch of what calling the model through the Gemini API might look like follows below.
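Because Veo 3.1 is exposed through the Gemini API, a request can be sketched with Google’s google-genai Python SDK. This is a minimal, non-authoritative example of the documented long-running video-generation workflow; the model identifier shown is an assumption and may differ from the preview name Google actually uses, and access requires an API key enabled for the paid preview.

```python
# Hedged sketch: generate a short clip with a Veo model via the Gemini API
# using the google-genai Python SDK. The model ID below is an assumption;
# confirm the current Veo 3.1 preview name in Google AI Studio before running.
import time

from google import genai

client = genai.Client()  # reads the API key from the environment

# Video generation is a long-running operation: submit the prompt, then poll.
operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumed preview model ID
    prompt="A slow dolly shot down a rain-soaked neon alley with ambient city sound",
)

while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save the first generated clip once the operation completes.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo_clip.mp4")
```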

Meta Previews Its Vibes AI Video Generator for Social Sharing

Meta Platforms is rolling out Vibes, a short-form AI video generator now in early preview. Using Meta AI, Vibes lets the visually adventurous create videos from their own ideas or remix existing ones, adding music or changing the style to make them their own. Vibes has its own feed featuring “a range of AI-generated videos from creators and communities,” and as you use it, “the feed will become more personalized over time,” according to Meta. Vibes videos can also be cross-posted to Instagram and Facebook Stories and Reels or shared with friends via DM. Vibes is available in the Meta AI app and on the Web at Meta.ai.

OpenAI Sora 2 Vid Generator Has Sound and Social Features

Sora 2 is here, “marking a giant leap forward in realism,” claims OpenAI. It includes sound and dialogue generation, catching up to Google’s Veo 3. Coming nearly two years after Sora was first introduced, the new model is being released in conjunction with a free iOS social app with a vertical feed and “swipe-and-scroll” functionality like TikTok, YouTube Shorts and Instagram Reels. The free version, which currently requires an invitation, is available in the U.S. and Canada and can also be accessed at sora.com. ChatGPT Pro subscribers can access an experimental, higher-quality Sora 2 Pro model online only.

Google Pushes Generative Video with Filmmaker in Residence

Google wants to raise the profile of its Veo 3 video generator, and to help do so has named Henry Daubrez, the longtime creative chief at multidisciplinary studio Dogstudio/DEPT, filmmaker in residence at Google Labs. In addition to working with the Google team to continue developing the Veo 3-powered Flow AI filmmaking tool, Daubrez will mentor artists in a new pilot program called Flow Sessions. Select filmmakers will get unlimited access to Flow, a subscription product starting at $20 per month, plus mentorship and AI education as part of Flow Sessions.

Genie 3 World Model Produces Minutes of Video in Real Time

Google DeepMind has unveiled Genie 3, a world-building model that uses text and image prompts to generate 3D environments in real time. Still in research preview, Genie 3 can output “several minutes” of video that can be navigated in real time at 24fps and a resolution of 720p. Because it remembers the rules of the world it creates, Genie 3 allows agents to predict how the environment evolves and how actions affect it. Google says world models are “a key steppingstone” to artificial general intelligence, or AGI, since they can train AI agents in “an unlimited curriculum of rich simulation.”

Character.AI Launches AI-First Social Platform Named ‘Feed’

Feed is a new social platform featured within the Character.AI mobile app, which launched in 2023 and now has more than 20 million active users worldwide. Described as “TikTok-meets-ChatGPT,” the AI-native Feed is populated by AI-generated avatars, called Characters, created by the platform’s subscribers using the company’s tools. Characters can post AI-generated content, including videos, viewed in a scrollable format that invites familiar interactions like sharing, comments, follows and likes. What sets Feed apart, Character.AI says, is its vibe of a “remix playground,” as opposed to the “passive consumption” approach typical of conventional social platforms.

Amazon Invests in Fable, Creator of the ‘Showrunner’ AI App

Amazon’s Alexa Fund venture capital arm has invested in San Francisco-based startup Fable, which this week launched Showrunner, a generative AI model and companion app that lets people create animated TV shows using text prompts. Showrunner has been in a closed alpha test involving about 10,000 users. Initially, Fable is making Showrunner available for free, but it plans to eventually charge $10 to $20 monthly for credits enabling creation of TV-style content on Discord. Showrunner-generated content will be shareable on social media sites including YouTube. Specific terms of Amazon’s investment have not been disclosed.

Runway Aleph Provides Video Editors with ‘Endless Coverage’

New York-based Runway AI has introduced a sophisticated video model called Aleph that can perform a wide range of edits from text prompts: adding, removing and transforming objects; generating new angles on a scene; or modifying style and lighting, among other things. Aimed at streamlining post-production, Aleph is what Runway calls an “in-context video model,” meaning it is designed to work with existing visual material rather than generating imagery from scratch. Aleph puts storytellers just a prompt or two away from turning a wide shot into an extreme close-up or adding a new “next shot,” providing what Runway calls “endless coverage.”

Runway Sets a 10-City IMAX Release for AI Film Fest Finalists

Runway has joined forces with IMAX to present the finalists from its fourth annual AI Film Festival in a 10-city U.S. commercial run, with four days of screenings at each location from August 17 to 20. Tickets are on sale now for IMAX theaters in Manhattan, San Francisco, Los Angeles, Chicago, Seattle, Dallas, Boston, Atlanta, Denver and Washington, D.C. The 10 finalists were selected from 6,000 submissions to the 2025 competition, which marks the first time AIFF finalists will get a national theatrical release. In 2024, finalists screened for a single day at just two theaters, in New York and Los Angeles.

Google Photos, YouTube Shorts Offer New AI Creation Tools

Google has added new AI features to Google Photos and YouTube Shorts. Having previously introduced generative backgrounds, YouTube Shorts now has a photo-to-video feature, as well as a variety of menu-driven effects accessible via the Shorts camera that aim to spark social media and arts-project creativity: turning line drawings into watercolors, putting a selfie “underwater” or adding a digital twin, for example. Google Photos, available on just about every Android phone, now also has the ability to turn stills into video. For now, both rely on the Veo 2 video model rather than Veo 3, which launched in May.

Decart AI’s Mirage Transforms Live-Stream Video in Real Time

Startup Decart AI is showcasing MirageLSD, a “world transformation model” that can change the look of a camera feed, recorded video or game in real time. Built on the company’s Live-Stream Diffusion (LSD) model, Mirage debuted last week as a demo on the company website, with iOS and Android apps scheduled for release this week. Mirage makes it possible to manipulate video continuously and in real time, with what Decart describes as zero latency. The technology has created buzz as a potential disruptor in the live-streaming space, and it could prove an impactful special effects tool as well.

Adobe Adds Generative Audio and Text-to-Avatar to Firefly AI

Adobe’s Firefly Video model has introduced new updates including Generate Sound Effects, in beta, and a text-to-avatar feature that lets users turn scripts into avatar-led videos “in just a few clicks.” Firefly becomes the second video model to generate audio, joining Veo 3, although unlike Google’s AI video tool, Firefly does not yet generate dialogue. What it can do is output Foley-like sound and sound effects, while text-to-avatar can generate speech. As with Firefly’s generative visuals, Adobe says Generate Sound Effects is “commercially safe,” meaning it is trained only on licensed or publicly available material.

Google Offers Gemini AI Subscribers Photo-to-Video Function

Google has added photo-to-video capability to its Gemini AI app. Powered by Veo 3, Google’s latest generative video model, which launched in May, Gemini can now turn images into 8-second videos complete with AI-generated sound, including speech, environmental audio and background noise. The feature is available now via the Web to anyone with a $20 per month Google AI Pro subscription or the higher-priced Google AI Ultra plan, and is also rolling out to mobile users this month on both iOS and Android devices. The videos are output as 720p MP4 files in 16:9 landscape format.

Google Doppl Lets You Try on Outfits Using Generative Video

Google Labs is testing Doppl, an experimental app that uses AI to let you virtually try on clothes. Available on iOS and Android in the U.S., Doppl requires the user to upload a full-body photo, to which images of outfits can then be applied. It works with various types of outfit photos, from pictures taken with a smartphone to screen grabs from shopping sites or social media. Doppl can also create AI-generated video from a static image to give an idea of how an outfit would look from different angles when worn. While Google hopes Doppl “helps you explore your style in new and exciting ways,” it cautions that the app “is in its early days and it might not always get things right.”