By Paula Parisi, September 10, 2025
OpenAI is hoping “Critterz,” an animated short film it helped get off the ground, will have its feature-length debut at the Cannes Film Festival in May 2026. OpenAI is providing the AI technology to produce the film, which is being funded at $30 million by Paris-based Federation Studios, whose UK subsidiary Vertigo Films will produce in conjunction with Culver City’s Native Foreign, a firm known for blending AI with conventional techniques. OpenAI is contributing use of its generative models, including the Sora video generator and the DALL-E image generator, to create what it hopes will be a test case: completing in nine months, at a fraction of the cost, what would normally take years. Continue reading OpenAI Making Its Film Debut with $30M Animation ‘Critterz’
By Paula Parisi, August 1, 2025
Amazon’s Alexa Fund venture capital arm has invested in San Francisco-based startup Fable, which this week launched Showrunner, a generative AI model with an app that lets people create animated TV shows using text prompts. Showrunner has been in a closed alpha test with about 10,000 users. Fable is initially making Showrunner available for free, but plans eventually to charge $10-$20 monthly for credits enabling the creation of TV-style content on Discord. Showrunner-generated content will be shareable on social media sites including YouTube. Specific terms of Amazon’s investment have yet to be disclosed. Continue reading Amazon Invests in Fable, Creator of the ‘Showrunner’ AI App
By Paula Parisi, July 29, 2025
Google has added new AI features to Google Photos and YouTube Shorts. Having previously introduced generative backgrounds, YouTube Shorts now has a photo-to-video feature, along with a variety of menu-driven effects, accessible via the Shorts camera, aimed at social media and arts projects: turning line drawings into watercolors, putting a selfie “underwater” or adding a digital twin. And Google Photos, available on just about every Android phone, can now also turn stills into video. For now, both features rely on the Veo 2 video model rather than Veo 3, which launched in May. Continue reading Google Photos, YouTube Shorts Offer New AI Creation Tools
By Paula Parisi, June 5, 2025
At its annual State of Unreal event, Epic Games demonstrated the formidable capabilities of its creator tools, showcasing open-world updates in Unreal Engine 5.6 and moving the MetaHuman photo-real 3D character generator out of early access and into general availability. MetaHuman character and animation output can now be used in the competing game engines Unity and Godot and in software packages including Maya, Houdini and Blender. Game studio CD Projekt Red, creator of the popular “The Witcher” series, joined Epic in demonstrating how Unreal 5.6 lets teams build large-scale open worlds that run smoothly on current-generation phones and PCs. Continue reading Epic Games Unveils Updates to Unreal Engine, MetaHumans
By Paula Parisi, April 18, 2025
Adobe has taken a stake in business avatar firm Synthesia, which creates clones for corporate videos using generative AI. The investment of an undisclosed sum through Adobe Ventures was interpreted by one media outlet as a bet that the UK startup’s technology “will transform video production.” Adobe couched the move as a strategic alliance. The investment became public along with Synthesia’s announcement that it surpassed the $100 million mark for what the privately held company says qualifies as recurring annual revenue. Nvidia is also an investor. Continue reading Adobe Investment in Synthesia Could Fuel AI Video Production
By Paula Parisi, April 2, 2025
Runway has introduced a new video generation model, launching a next phase of competition that could transform film production. Notably, its Gen-4 system improves the consistency of characters, locations and objects across multiple scenes, an elusive prospect for most AI video generators. The New York-based startup calls its new development “a step towards Universal Generative Models that understand the world.” The key, Runway says, is to provide a single reference image of the character, item or environment as part of the model’s project material. Runway Gen-4 can generate 5- and 10-second clips at 720p resolution. Continue reading Runway Gen-4 Tackles AI’s Elusive Video Scene Consistency
By Paula Parisi, March 14, 2025
Los Angeles-based AI startup Moonvalley has released a video generator purpose-built for entertainment and advertising. Called Marey, its creators say it was designed with the “specifications and tastes” of filmmakers and studios in mind and trained exclusively on owned or fully licensed source data to protect users from lawsuits. Asteria, a partner in the venture, owns a large documentary library through its subsidiary XTR. Its founders say Marey aims to usher in “a new era of GenAI video built to empower — not replace — the creative forces behind modern motion pictures.” Continue reading Moonvalley and Asteria Unveil Cinematic GenVid Model Marey
By Paula Parisi, March 7, 2025
Staircase Studios AI, the film, television and gaming studio launched by “Divergent” franchise producer Pouya Shahbazian, has announced its investors and shared plans to produce more than 30 projects at budgets under $500,000 each over the next three to four years. The studio will use ForwardMotion, a proprietary AI workflow it says will revolutionize film and television production. It has acquired multiple pieces of IP, including more than 20 scripts that have appeared on the Black List, which tallies the most popular unproduced screenplays. Continue reading Staircase Studios AI Plans 30 Projects Over Next 3 to 4 Years
By Paula Parisi, February 21, 2025
Microsoft has unveiled a new AI model called Muse that can generate game visuals and controller actions and understands 3D space. The model can create complex gameplay sequences with accurate physics and character behaviors. Classified by Microsoft as the first World and Human Action Model (WHAM), Muse was trained on more than seven years’ worth of human gameplay data from the Xbox game “Bleeding Edge,” published by Ninja Theory, Microsoft’s UK-based game studio. In addition to supporting game development, Muse can provide research insights into all sorts of creative uses of generative AI, Microsoft says. Continue reading Muse Could Be a Gamechanger for Xbox Players, Developers
By Paula Parisi, February 6, 2025
ByteDance has developed a generative model that can use a single photo to generate photorealistic video of humans in motion. Called OmniHuman-1, the multimodal system supports various visual and audio styles and can generate people singing, dancing, speaking and moving in a natural fashion. ByteDance says the new technology clears hurdles that hinder existing human video generators, such as short play times and over-reliance on high-quality training data. The diffusion transformer-based OmniHuman addresses those challenges by mixing motion-related conditions into the training phase, a solution ByteDance researchers claim is new. Continue reading ByteDance’s AI Model Can Generate Video from Single Image
By Paula Parisi, February 3, 2025
The National Hockey League is testing an animated recap show aimed at drawing young viewers. “NHL Hockeyverse Matchup of the Week” uses NHL Edge Positional Data to turn NHL players into avatars, creating “a visualization of the on-ice action with stunning realism and dynamic movements,” the league says. The half-hour show premiered February 1 with a recap of a January 25 game between the Vancouver Canucks and Washington Capitals. Episodes air on the NHL Network and the NHL YouTube channel in the U.S. and on Sportsnet in Canada, and are expected to continue in the Saturday slot. Continue reading NHL Is Turning Players into Avatars in Game Recaps for Kids
By Paula Parisi, January 21, 2025
Sony has debuted a full-body version of its Mocopi mobile motion capture system as part of the ecosystem supporting its new extended reality brand XYN (pronounced “zin”), also launched at CES 2025 in Las Vegas. Mocopi’s new professional mode is available through a PC app and the XYN Motion Studio app, which provide capture and editing functions. Each Mocopi Pro Kit includes two sets of six lightweight Mocopi sensors, two newly announced sensor data receivers and an additional set of sensor bands. The professional mode expands coverage by connecting 12 Mocopi sensors. Continue reading CES: Sony Mocopi Adds Full-Body Pro Kits for PC, XR Mocap
By Paula Parisi, December 12, 2024
World Labs, the AI startup co-founded by Stanford AI pioneer Fei-Fei Li, has debuted a “spatial intelligence” system that can generate 3D worlds from a single image. Although the output is not photorealistic, the tech could be a breakthrough for animation companies and video game developers. Deploying what it calls Large World Models (LWMs), World Labs is focused on transforming 2D images into turnkey 3D environments with which users can interact. Observers say that reciprocity is what sets World Labs’ technology apart from offerings by other AI companies that transform 2D to 3D. Continue reading World Labs AI Lets Users Create 3D Worlds from Single Photo
By Paula Parisi, November 8, 2024
Wonder Animation is the latest tool from Wonder Dynamics, the AI startup founded in 2017 by actor Tye Sheridan and VFX artist Nikola Todorovic and purchased by Autodesk in May. Now in beta, Wonder Animation automatically transposes live-action footage into stylized 3D animation. Creators can shoot with any camera system and lenses, on any set or location, and edit those shots in Maya, Blender or Unreal; matching the camera position and movement to the characters and environment, Wonder Animation then uses AI to reconstruct the result as 3D animation. Continue reading Autodesk’s AI Tool Turns Live-Action Video into 3D Animation
By Paula Parisi, October 25, 2024
Runway is launching Act-One, a motion capture system that uses video and voice recordings to map human facial expressions onto characters using the company’s latest model, Gen-3 Alpha. Runway calls it “a significant step forward in using generative models for expressive live action and animated content.” Compared with past facial capture techniques, which typically require complex rigging, Act-One is driven directly and solely by an actor’s performance, requiring “no extra equipment,” making it more likely to capture and preserve an authentic, nuanced performance, according to the company. Continue reading Runway’s Act-One Facial Capture Could Be a ‘Game Changer’