By ETCentric Staff, March 27, 2024
OpenAI’s Sora text- and image-to-video tool isn’t publicly available yet, but the company is showing what it’s capable of by putting it in the hands of seven artists. The results — from a short film about a balloon man to a flamingo-giraffe hybrid — are stirring excitement and priming the pump for what OpenAI CTO Mira Murati says will be a 2024 general release. Challenges include making it cheaper to run and enhancing guardrails. Since introducing Sora last month, OpenAI says it’s “been working with visual artists, designers, creative directors and filmmakers to learn how Sora might aid in their creative process.” Continue reading OpenAI Releases Early Demos of Sora Video Generation Tool
By ETCentric Staff, March 25, 2024
Stability AI has released Stable Video 3D, a generative video model based on the company’s foundation model Stable Video Diffusion. SV3D, as it’s called, comes in two versions. Both can generate and animate multi-view 3D meshes from a single image. The more advanced version also lets users set “specified camera paths” for a “filmed” look to the video generation. “By adapting our Stable Video Diffusion image-to-video diffusion model with the addition of camera path conditioning, Stable Video 3D is able to generate multi-view videos of an object,” the company explains. Continue reading Stable Video 3D Generates Orbital Animation from One Image
By ETCentric Staff, March 8, 2024
London-based AI video startup Haiper has emerged from stealth mode with $13.8 million in seed funding and a platform that generates up to two seconds of HD video from text prompts or images. Founded by alumni from Google DeepMind, TikTok and various academic research labs, Haiper is built around a bespoke foundation model that aims to serve the needs of the creative community while the company pursues a path to artificial general intelligence (AGI). Haiper is offering a free trial of what is currently a web-based user interface similar to offerings from Runway and Pika. Continue reading AI Video Startup Haiper Announces Funding and Plans for AGI
By Paula Parisi, January 26, 2024
Google has come up with a new approach to high-resolution AI video generation with Lumiere. Most GenAI video models output individual high-resolution frames at various points in the sequence (called “distant keyframes”), fill in the missing frames with low-res images to create motion (known as “temporal super-resolution,” or TSR), then up-res that connective tissue of non-overlapping frames (“spatial super-resolution,” or SSR). Lumiere instead uses what Google calls a “Space-Time U-Net architecture,” which processes all frames at once, “without a cascade of TSR models, allowing us to learn globally coherent motion.” Continue reading Google Takes New Approach to Create Video with Lumiere AI
By Paula Parisi, December 22, 2023
Google has unveiled a new large language model designed to advance video generation. VideoPoet is capable of text-to-video, image-to-video, video stylization, video inpainting and outpainting, and video-to-audio. “The leading video generation models are almost exclusively diffusion-based,” Google says, citing Imagen Video as an example. Google finds this counterintuitive, since “LLMs are widely recognized as the de facto standard due to their exceptional learning capabilities across various modalities.” VideoPoet eschews the diffusion approach of relying on separately trained tasks in favor of integrating many video generation capabilities in a single LLM. Continue reading VideoPoet: Google Launches a Multimodal AI Video Generator
By Paula Parisi, November 27, 2023
Stability AI has opened a research preview of its first foundation model for generative video, Stable Video Diffusion, offering text-to-video and image-to-video. Based on the company’s Stable Diffusion text-to-image model, the new open-source model generates video by animating existing still frames, including “multi-view synthesis.” While the company plans to enhance and extend the model’s capabilities, it currently comes in two versions: SVD, which transforms stills into 576×1024 videos of 14 frames, and SVD-XT, which generates up to 24 frames — each at between three and 30 frames per second. Continue reading Stability Introduces GenAI Video Model: Stable Video Diffusion
By Paula Parisi, November 7, 2023
Kaiber, the AI-powered creative studio whose credits include music video collaborations with artists such as Kid Cudi and Linkin Park, has launched a mobile version of its creator tools, designed to give musicians and graphic artists on-the-go access to its suite of GenAI tools offering text-to-video, image-to-video and video-to-video, “now with curated music to reimagine the music video creation process.” Users can select artist tracks to accompany visuals and build a music video “with as much or little AI collaboration as they wish.” Users can also upload their own music or audio and tap Kaiber for visuals. Continue reading Startup Kaiber Launches Mobile GenAI App for Music Videos