By Paula Parisi, July 7, 2025
AI startup Runway has a new tool called Game Worlds that lets users generate video game worlds from images and text prompts. At the moment, Runway Game Worlds can only produce simple text-based interactive adventures illustrated with pictures, but the company plans to enable more complex game creation by the end of the year. Runway CEO Cristóbal Valenzuela says the company is interested in partnering with video game companies that are willing to provide game data for training Runway’s models in exchange for generative capabilities. Continue reading Runway AI Intros Game Worlds Generator in Limited Preview
By Paula Parisi, June 24, 2025
The redesigned Firefly AI app that Adobe released in April with third-party model support is now available on iOS and Android. Text-to-video and background editing are among the features in the new mobile package, which Adobe claims will help users capture inspiration as it strikes with “the freedom to generate images and videos wherever you are.” Adobe says users of all skill levels will be able to work with the app, which was designed “to complement the ways we already interact with our phones.” The company is also rolling out its AI-powered online moodboard creator, Firefly Boards, in public beta, now with video functionality. Continue reading Adobe Unveils Firefly Generative AI App for iOS and Android
By Paula Parisi, June 9, 2025
China’s Manus AI has unveiled a text-to-video generator it says can transform “prompts into complete stories — structured, sequenced, and ready to watch. With a single prompt, Manus plans each scene, crafts the visuals, and animates your vision,” the company announced last week. Manus generated buzz in March for its agentic approach to AI, and it is now putting that autonomous technology to work on generative video, promising story generation within minutes. Last month, Butterfly Effect, the firm that developed Manus, reportedly secured $75 million in funding led by U.S.-based Benchmark at a nearly $500 million valuation. Continue reading Manus AI Takes an Agentic Approach with Its Video Generator
By Paula Parisi, June 6, 2025
AMC Networks has partnered with Runway to use the AI startup’s models and technology in the TV studio’s marketing and development processes. The Dolan family-controlled AMC Networks, home to cable TV hits such as “Mad Men,” “Breaking Bad” and “The Walking Dead” that have found new audiences on the AMC+ streaming service, plans to use AI in everything from identifying key scenes for promotional use to ideation, previsualization and special effects. Lionsgate entered a similar deal with Runway last year that had the reciprocal benefit of allowing the AI company to use the studio’s content to train models. Continue reading AMC Networks the Latest to Partner with Runway for AI Tools
By Paula Parisi, May 8, 2025
Lightricks, the company behind the Facetune and Videoleap apps, has released a new video model called LTX Video, or LTXV, that generates what the company describes as high-quality AI video at speeds up to 30 times faster than competing products, and does so on consumer-grade hardware. The open-source, 13-billion-parameter model achieves that efficiency through an approach called multiscale rendering, which generates video in progressively detailed layers. The program can run on high-end laptops and standard desktop computers, opening up generative video to an audience beyond those with access to enterprise equipment. Continue reading Lightricks LTXV Makes Video Generation Faster and Cheaper
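Lightricks has not published its implementation details here, but the coarse-to-fine idea behind multiscale rendering can be sketched in a few lines. In the illustration below, denoise_at_scale is a hypothetical stand-in for the model’s real refinement pass; only the loop structure, which drafts the whole clip at low resolution and then repeatedly upsamples and refines it, reflects the approach described above.

```python
# Illustrative sketch of multiscale (coarse-to-fine) video generation.
# Not Lightricks' implementation: denoise_at_scale is a hypothetical
# placeholder for the model's real refinement pass at each resolution.
import numpy as np

def denoise_at_scale(video: np.ndarray, level: int) -> np.ndarray:
    """Hypothetical refinement step; a real model would add detail here."""
    return np.clip(video + 0.01 * np.random.randn(*video.shape), 0.0, 1.0)

def upsample2x(video: np.ndarray) -> np.ndarray:
    """Nearest-neighbor 2x spatial upsampling of a (frames, H, W, C) array."""
    return video.repeat(2, axis=1).repeat(2, axis=2)

def generate_multiscale(frames=24, base_res=(90, 160), channels=3, levels=3):
    # Start from a cheap, low-resolution draft of the entire clip...
    video = np.random.rand(frames, *base_res, channels)
    video = denoise_at_scale(video, level=0)
    # ...then repeatedly upsample and refine, adding detail at each level.
    for level in range(1, levels):
        video = upsample2x(video)
        video = denoise_at_scale(video, level=level)
    return video  # the final pass is the only one at full resolution

clip = generate_multiscale()
print(clip.shape)  # (24, 360, 640, 3) with the defaults above
```

Because the early passes operate on far fewer pixels, most of the compute is spent only where fine detail is actually added, which is the efficiency argument behind this kind of layered rendering.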
By Paula Parisi, April 28, 2025
News from Adobe MAX London 2025 ranged from new Firefly image models to a refreshed web app that includes third-party image generators, an AI agent that automates Photoshop, an updated Firefly mobile app coming soon to iOS and Android, and the general release of the Firefly Video model. The latest release of Firefly “unifies AI-powered tools for image, video, audio, and vector generation into a single, cohesive platform and introduces many new capabilities,” according to Adobe, which says that since Firefly’s debut nearly two years ago, creatives have used it to generate more than 22 billion assets worldwide. Continue reading Adobe Unveils Two New Image Models and Array of Products
By Paula Parisi, April 2, 2025
Runway has introduced a new video generation model, launching the next phase of a competition that could transform film production. Notably, its Gen-4 system improves the consistency of characters, locations and objects across multiple scenes, a capability that has eluded most AI video generators. The New York-based startup calls its new development “a step towards Universal Generative Models that understand the world.” The key, Runway says, is to provide a single reference image of the character, item or environment as part of the model’s project material. Runway Gen-4 can generate 5- and 10-second clips at 720p resolution. Continue reading Runway Gen-4 Tackles AI’s Elusive Video Scene Consistency
By Paula Parisi, November 21, 2024
Promise is a new entertainment studio launched around the potential of generative AI. The Los Angeles-based startup is developing a multiyear slate of films, TV shows and media in “new formats.” With funding led by Peter Chernin’s North Road Company and Andreessen Horowitz, Promise vows to set “a new standard for high-quality storytelling enabled by AI.” The firm is also working on new tools to optimize the generative workflow. The first product, MUSE, “integrates the latest GenAI technology throughout the creative process in a streamlined, collaborative, and secure production environment.” Continue reading Promise Is an Entertainment Studio Built Around Generative AI
By Paula Parisi, November 6, 2024
New York-based AI firm Runway has added 3D video camera controls to Gen-3 Alpha Turbo, giving users granular control over the scenes they generate, whether those scenes originate from text prompts, uploaded images or self-created video. Users can zoom in and out on a subject or scene, moving around an AI-generated character or form in 3D as if on a real set or actual location. The new feature, available now, lets creators “choose both the direction and intensity of how you move through your scenes for even more intention in every shot,” Runway explains. Continue reading Runway Adds 3D Video Cam Controls to Gen-3 Alpha Turbo
By Paula Parisi, October 25, 2024
Runway is launching Act-One, a motion capture system that uses video and voice recordings to map human facial expressions onto characters using the company’s latest model, Gen-3 Alpha. Runway calls it “a significant step forward in using generative models for expressive live action and animated content.” Unlike past facial capture techniques, which typically require complex rigging, Act-One is driven directly and solely by an actor’s performance, requiring “no extra equipment,” making it more likely to capture and preserve an authentic, nuanced performance, according to the company. Continue reading Runway’s Act-One Facial Capture Could Be a ‘Game Changer’
By Paula Parisi, October 16, 2024
Adobe has launched a public beta of its Generate Video app, part of the Firefly Video model, which users can try for free on a dedicated website. Login is required, and there is still a waitlist for unfettered access, but the web app generates up to five seconds of video from text and image prompts. It can turn 2D pictures into 3D animation and can also produce video with dynamic text. The company has also added an AI feature called “Extend Video” to Premiere Pro that lengthens existing footage by two seconds. The news has the media lauding Adobe for beating OpenAI’s Sora and Google’s Veo to market. Continue reading Adobe Promos AI in Premiere Pro, ‘Generate Video’ and More
By Paula Parisi, October 14, 2024
Generative video models seem to be debuting daily. Pyramid Flow, among the latest, aims for realism, producing dynamic video sequences with temporal consistency and rich detail while being open source and free. The model can create clips of up to 10 seconds from both text and image prompts. It offers a cinematic look, supporting 1280×768-pixel clips at 24 fps. Developed by researchers from Peking University, Beijing University of Posts and Telecommunications and Kuaishou Technology, Pyramid Flow uses a new technique that generates video at low resolution and outputs at full resolution only at the end of the process. Continue reading Pyramid Flow Introduces a New Approach to Generative Video
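Pyramid Flow’s exact schedule is not described here, but the appeal of a low-resolution-first pipeline is easy to quantify with back-of-envelope arithmetic. The step counts and downscale factor in the sketch below are illustrative assumptions, not the project’s published numbers.

```python
# Back-of-envelope illustration (assumed numbers, not Pyramid Flow's specs)
# of why running most generation steps at low resolution saves compute.
full_res = 1280 * 768                 # pixels per frame at output resolution
low_res = (1280 // 4) * (768 // 4)    # a hypothetical 4x-downscaled working resolution

steps_total = 50                      # assumed generation steps per clip
steps_at_full = 10                    # assume only the final steps run at full resolution

naive_cost = steps_total * full_res
pyramid_cost = (steps_total - steps_at_full) * low_res + steps_at_full * full_res

print(f"relative pixel-level work: {pyramid_cost / naive_cost:.2f}")  # ~0.25 here
```

Under these assumed figures, the staged schedule does roughly a quarter of the per-pixel work of running every step at full resolution, which is the intuition behind deferring full-res output to the end of the process.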
By Paula Parisi, October 11, 2024
Hailuo, the free text-to-video generator released last month by the Alibaba-backed company MiniMax, has delivered its promised image-to-video feature. Founded by AI researcher Yan Junjie, the Shanghai-based MiniMax also has backing from Tencent. The model earned high marks for what has been called “ultra realistic” video, and MiniMax says the new image-to-video feature will improve output across the board as a result of “text-and-image joint instruction following,” which means Hailuo now “seamlessly integrates both text and image command inputs, enhancing your visuals while precisely adhering to your prompts.” Continue reading MiniMax’s Hailuo AI Rolls Out New Image-to-Video Capability
By Paula Parisi, September 30, 2024
Artificial intelligence platform Runway has launched The Hundred Film Fund to help finance 100 projects that use its AI to tell stories. Created by the company through its Runway Studios, the Fund is starting with $5 million, “with the potential to grow to $10 million.” Runway is presenting the Fund as “an open call to all creative professionals who have AI-augmented film projects in the pre- or post-production phases and are in need of funding.” Directors, producers and screenwriters are among those invited to apply. The program will consider all formats, from features to shorts, documentaries, experimental projects, music videos and more. Continue reading Runway Launches $5M AI Film Fund as Open Call to Creators
By Paula Parisi, August 5, 2024
AI media firm Runway has rolled out image-to-video capability for Gen-3 Alpha, building on the text-to-video model by using images to prompt realistic videos generated in seconds. Navigate to Runway’s web-based interface and click on “try Gen-3 Alpha” to land on a screen with an image uploader, as well as a text box for those who either prefer that approach or want to use natural language to tweak results. Runway lets users generate up to 10 seconds of contiguous video using a credit system. “Image to Video is a major update that greatly improves the artistic control,” Runway said in an announcement. Continue reading Runway’s Gen-3 Alpha Creates Realistic Video from Still Image