By Paula Parisi, August 29, 2024
Canadian generative video startup Viggle AI, which specializes in character motion, has raised $19 million in Series A funding. Viggle was founded in 2022 on the premise of providing a simplified process “to create lifelike animations using simple text-to-video or image-to-video prompts.” The result has been robust adoption among meme creators, with many viral videos circulating on social media platforms powered by Viggle, including one featuring Joaquin Phoenix’s Joker mimicking the movements of rapper Lil Yachty. Viggle’s Discord community has four million members, including “both novice and experienced animators,” according to the company. Continue reading Viggle AI Raises $19 Million on the Power of Memes and More
By Paula Parisi, July 1, 2024
The world’s first AI-powered movie camera has surfaced. Still in development, it aims to enable filmmakers to turn footage into AI imagery in real time while shooting. Called the CMR-M1, for camera model 1, it is the product of creative tech agency SpecialGuestX and media firm 1stAveMachine, with the goal of providing creatives with a familiar interface for AI imagemaking. It was inspired by the Cine-Kodak device, the first portable 16mm camera. “We designed a camera that serves as a physical interface to AI models,” said Miguel Espada, co-founder and executive creative technologist at SpecialGuestX, a company that doesn’t believe directors will want to work with AI while sitting at a keyboard. Continue reading New Prototype Is the World’s First AI-Powered Movie Camera
By Paula Parisi, June 4, 2024
A year after its announcement, Fable is launching Showrunner, a platform that lets anyone make TV-style animated content by writing prompts that are turned into shows by generative AI. The San Francisco company, run by CEO Edward Saatchi with recruits from Oculus, Pixar and various AI startups, is debuting 10 shows that let users make their own episodes “from their couch,” waiting only minutes to see the finished result, according to Saatchi, who says a 15-word prompt is enough to generate 10- to 20-minute episodes. Saatchi is hoping Fable’s shows can garner an audience by self-publishing on Amazon Prime. Continue reading Fable Launches Showrunner Animated Episodic TV Generator
By ETCentric Staff, April 8, 2024
Mobile entertainment platform Storiaverse is connecting writers and animators around the world to create content for what it claims is a unique “read-watch” format. Available on iOS and Android, Storiaverse combines animated video, audio and text into a narrative that “enhances the reading experience for digital native adults.” Created by Agnes Kozera and David Kierzkowski, co-founders of the Podcorn podcast sponsorship marketplace, Storiaverse caters to graphic novel fans interested in discovering original, short-form animated stories that run 5-10 minutes in length. At launch there will be 25 original titles. Continue reading Short-Form Video App Storiaverse Touts ‘Read-Watch’ Format
By ETCentric Staff, March 25, 2024
Stability AI has released Stable Video 3D, a generative video model based on the company’s foundation model Stable Video Diffusion. SV3D, as it’s called, comes in two versions. Both can generate and animate multi-view 3D meshes from a single image. The more advanced version also lets users set “specified camera paths” for a “filmed” look to the video generation. “By adapting our Stable Video Diffusion image-to-video diffusion model with the addition of camera path conditioning, Stable Video 3D is able to generate multi-view videos of an object,” the company explains. Continue reading Stable Video 3D Generates Orbital Animation from One Image
By ETCentric Staff, March 15, 2024
Artificial intelligence imaging service Midjourney has been embraced by storytellers, who have been clamoring for a feature that enables characters to regenerate consistently across new requests. Now Midjourney is delivering that functionality with the addition of the new “--cref” tag (short for Character Reference), available for those who are using Midjourney v6 on the Discord server. Users can achieve the effect by adding the tag to the end of text prompts, followed by a URL that contains the master image subsequent generations should match. Midjourney will then attempt to repeat the particulars of a character’s face, body and clothing characteristics. Continue reading Midjourney Creates a Feature to Advance Image Consistency
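Following the usage described above, a hypothetical v6 prompt in Discord might look like the following (the scene text and image URL are placeholders, not from Midjourney’s documentation):

```
/imagine prompt: a knight walking through a rain-soaked neon market --cref https://example.com/my-character.png
```

The URL after --cref should point to the master image of the character that subsequent generations are meant to match.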
By ETCentric Staff, March 11, 2024
Alibaba is touting a new artificial intelligence system that can animate portraits, making people sing and talk in realistic fashion. Researchers at the Alibaba Group’s Institute for Intelligent Computing developed the generative video framework, calling it EMO, short for Emote Portrait Alive. Input a single reference image along with “vocal audio,” as in talking or singing, and “our method can generate vocal avatar videos with expressive facial expressions and various head poses,” the researchers say, adding that EMO can generate videos of any duration, “depending on the length of video input.” Continue reading Alibaba’s EMO Can Generate Performance Video from Images
By ETCentric Staff, March 6, 2024
Filmmaker Gary Hustwit and artist Brendan Dawes aspire to change the way audiences experience film. Their startup, Anamorph, has launched with an app that can reassemble different versions of the same film. The app debuted with “Eno,” a Hustwit-directed documentary about the music iconoclast Brian Eno that premiered in January at the Sundance Film Festival, where every “Eno” showing presented the audience with a unique viewing experience. Drawing scenes from a repository of over 500 hours of “Eno” material, the Anamorph app can generate what the company says are billions of different configurations. Continue reading Generative Tech Enables Multiple Versions of the Same Movie
By ETCentric Staff, February 16, 2024
Apple has taken a novel approach to animation with Keyframer, using large language models to add motion to static images through natural language prompts. “The application of LLMs to animation is underexplored,” Apple researchers say in a paper that describes Keyframer as an “animation prototyping tool.” Based on input from animators and engineers, Keyframer lets users refine their work through “a combination of prompting and direct editing,” the paper explains. The LLM generates CSS animation code, and users can request design variations in natural language as well. Continue reading Apple’s Keyframer AI Tool Uses LLMs to Prototype Animation
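Since Keyframer’s output is standard CSS animation code, a minimal hand-written sketch of the kind of snippet an LLM could produce for a prompt like “make the sun rise and fade in” might look as follows (the element ID, timings and values are illustrative, not taken from the Keyframer paper):

```css
/* Drift a sun graphic upward while fading it in */
#sun {
  animation: rise 3s ease-in-out infinite alternate;
}
@keyframes rise {
  from { opacity: 0.2; transform: translateY(20px); }
  to   { opacity: 1;   transform: translateY(0); }
}
```

Because the output is plain CSS, users can refine it either by further prompting or by directly editing properties such as the duration or easing curve, matching the “prompting and direct editing” workflow the paper describes.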
By Don Levy, January 9, 2024
Impact and opportunity surfaced as the dominant theme of a full day of Digital Hollywood sessions devoted to artificial intelligence at CES 2024. We are in a period of disruption similar to the early ’90s, when the Internet went mainstream, said Forbes columnist Charlie Fink, moderating a panel of industry leaders from CAA, Paramount, HTC, Nvidia and Google. Yet despite the transformation already underway, panelists agreed that this is neither the first nor the last technology to shift the status quo, but rather the latest example of inevitable change and adjustment. The current conversations around AI at CES are a refreshing departure after a few years of evolutionary, not revolutionary, tech confabs. Continue reading CES: Digital Hollywood Session Explores AI at Inflection Point
By Paula Parisi, December 22, 2023
Google has unveiled a new large language model designed to advance video generation. VideoPoet is capable of text-to-video, image-to-video, video stylization, video inpainting and outpainting, and video-to-audio. “The leading video generation models are almost exclusively diffusion-based,” Google says, citing Imagen Video as an example. Google finds this counterintuitive, since “LLMs are widely recognized as the de facto standard due to their exceptional learning capabilities across various modalities.” VideoPoet eschews the diffusion approach’s reliance on separately trained components in favor of integrating many video generation capabilities in a single LLM. Continue reading VideoPoet: Google Launches a Multimodal AI Video Generator
By Paula Parisi, November 27, 2023
Stability AI has opened a research preview of its first foundation model for generative video, Stable Video Diffusion, offering text-to-video and image-to-video. Based on the company’s Stable Diffusion text-to-image model, the new open-source model generates video by animating existing still frames, including “multi-view synthesis.” While the company plans to enhance and extend the model’s capabilities, it currently comes in two versions: SVD, which transforms stills into 576×1024 videos of 14 frames, and SVD-XT, which generates up to 24 frames — each at between three and 30 frames per second. Continue reading Stability Introduces GenAI Video Model: Stable Video Diffusion
By Paula Parisi, November 20, 2023
Having made the leap from image generation to video generation over the course of a few months in 2022, Meta Platforms has introduced Emu, its first visual foundational model, along with Emu Video and Emu Edit, positioned as milestones in the trek to AI moviemaking. Emu Video uses just two diffusion models to generate four-second 512×512 videos at 16 frames per second, Meta said, comparing that to 2022’s Make-A-Video, which requires a “cascade” of five models. Internal research found Emu Video generations were “strongly preferred” over the Make-A-Video model based on quality (96 percent) and prompt fidelity (85 percent). Continue reading Meta Touts Its Emu Foundational Model for Video and Editing
By Paula Parisi, November 20, 2023
Unity has released its Muse AI platform in early access. Muse is a suite of AI-powered tools that streamline game development. The Muse package includes Muse Chat for sourcing answers and generating code, Muse Sprite for 2D sprite generation, and Muse Texture for producing textures ready for 2D and 3D use. Originally announced in July, Muse is now offered as a $30 per month subscription. Also announced at the firm’s annual Unite conference were Unity 6, the next major software update, due in 2024, and the deployment of Unity Cloud to connect development tools across projects and pipelines. Continue reading Unity Opens Beta for Muse AI, Sets General Release for 2024
By Paula Parisi, November 9, 2023
The entrepreneurs behind the Myspace social network and gaming company Jam City have shifted their focus to generative AI and web3 with a new venture, Plai Labs, a social platform that provides AI tools for collaboration and connectivity. Plai Labs has released a free text-to-video generator, PlaiDay, which will compete with other GenAI video tools from the likes of OpenAI (DALL-E 2), Google (Imagen), Meta Platforms (Make-A-Video) and Stable Diffusion. PlaiDay hopes to set itself apart by offering the ability to personalize videos with selfie likenesses. Continue reading Social Startup Plai Labs Debuts Free Text-to-Video Generator