By Paula Parisi, September 13, 2024
Adobe is showcasing upcoming generative AI video tools that build on the Firefly video model the software giant announced in April. The offerings include a text-to-video feature and one that generates video from pictures. Each outputs clips of up to five seconds. Adobe has developed Firefly as the generative component of the AI integration it is rolling out across its Creative Cloud applications, which previously focused on editing and now, thanks to gen AI, incorporate creation. Adobe wasn’t a first-mover in the space, but its percolating effort has been received enthusiastically. Continue reading Adobe Publicly Demos Firefly Text- and Image-to-Video Tools
By Paula Parisi, August 29, 2024
Canadian generative video startup Viggle AI, which specializes in character motion, has raised $19 million in Series A funding. Viggle was founded in 2022 on the premise of providing a simplified process “to create lifelike animations using simple text-to-video or image-to-video prompts.” The result has been robust adoption among meme creators, with many viral Viggle-powered videos circulating across social media platforms, including one featuring Joaquin Phoenix as the Joker mimicking the movements of rapper Lil Yachty. Viggle’s Discord community has four million members including “both novice and experienced animators,” according to the company. Continue reading Viggle AI Raises $19 Million on the Power of Memes and More
By Paula Parisi, August 20, 2024
ByteDance has debuted a text-to-video mobile app in its native China that is available on the company’s TikTok equivalent there, Douyin. Called Jimeng AI, the app is the subject of speculation that it will come to North America and Europe soon via TikTok or ByteDance’s CapCut editing tool, possibly beating competing U.S. technologies like OpenAI’s Sora to market. Jimeng (translation: “dream”) uses text prompts to generate short videos. For now, its responsiveness is limited to prompts written in Chinese. In addition to entertainment, the app is described as applicable to education, marketing and other purposes. Continue reading ByteDance Intros Jimeng AI Text-to-Video Generator in China
By Paula Parisi, August 5, 2024
AI media firm Runway has launched an image-to-video feature for Gen-3 Alpha, building on the text-to-video model by using images to prompt realistic videos generated in seconds. Navigate to Runway’s web-based interface and click “try Gen-3 Alpha” and you’ll land on a screen with an image uploader, as well as a text box for those who either prefer that approach or want to use natural language to tweak results. Runway lets users generate up to 10 seconds of contiguous video using a credit system. “Image to Video is a major update that greatly improves the artistic control,” Runway said in an announcement. Continue reading Runway’s Gen-3 Alpha Creates Realistic Video from Still Image
By Paula Parisi, June 27, 2024
Toys R Us is the first company to use OpenAI’s generative video platform Sora to produce a commercial, or what is being described as a “brand film.” With a running time of 1:06, the spot depicts company founder Charles Lazarus as a young boy, “envisioning his dreams” for the toy store and mascot Geoffrey the Giraffe. It was co-produced and directed by Nik Kleverov, co-founder of Los Angeles creative agency Native Foreign, who has alpha access to the pre-release Sora. Toys R Us says that from concept to completed video, the project came together in just a few weeks to premiere at the 2024 Cannes Lions International Festival of Creativity. Continue reading Toys R Us and Native Foreign Create Ad Using OpenAI’s Sora
By Paula Parisi, June 19, 2024
Runway ML has introduced a new foundation model, Gen-3 Alpha, which the company says can generate high-quality, realistic scenes of up to 10 seconds from text prompts, still images or a video sample. Offering a variety of camera movements, Gen-3 Alpha will initially roll out to Runway’s paid subscribers, but the company plans to add a free version in the future. Runway says Gen-3 Alpha is the first of a new series of models trained on the company’s new large-scale multimodal infrastructure, which offers improvements “in fidelity, consistency, and motion over Gen-2,” released last year. Continue reading Runway’s Gen-3 Alpha Creates AI Videos Up to 10 Seconds
By Paula Parisi, June 14, 2024
Northern California startup Luma AI has released Dream Machine, a model that generates realistic videos from text prompts and images. Built on a scalable and multimodal transformer architecture and “trained directly on videos,” Dream Machine can create “action-packed scenes” that are physically accurate and consistent, says Luma, which has a free version of the model in public beta. Dream Machine is what Luma calls the first step toward “a universal imagination engine,” while others are calling it “powerful” and “slammed with traffic.” Though Luma has shared scant details, each posted sequence looks to be about 5 seconds long. Continue reading Luma AI Dream Machine Video Generator in Free Public Beta
By Paula Parisi, June 14, 2024
China’s Kuaishou Technology has a video generator called Kling AI in public beta that is getting great word-of-mouth, with comments ranging from “incredibly realistic” to “Sora killer,” a reference to OpenAI’s video generator, which remains in closed beta. Kuaishou claims that using only text prompts, Kling can generate “AI videos that closely mimic the real world’s complex motion patterns and physical characteristics,” in sequences as long as two minutes at 30 fps and 1080p, while supporting various aspect ratios. Kuaishou is China’s second most popular short-form video app, after ByteDance’s Douyin, the Chinese version of TikTok. Continue reading ByteDance Rival Kuaishou Creates Kling AI Video Generator
By Paula Parisi, May 16, 2024
Google is launching two new AI models: the video generator Veo and Imagen 3, billed as the company’s “highest quality text-to-image model yet.” The products were introduced at Google I/O this week, where new demo recordings created using the Music AI Sandbox were also showcased. The 1080p Veo videos can be generated in “a wide range of cinematic and visual styles” and run “over a minute” in length, Google says. Veo is available in private preview in VideoFX by joining a waitlist. At a future date, the company plans to bring some Veo capabilities to YouTube Shorts and other products. Continue reading Veo AI Video Generator and Imagen 3 Unveiled at Google I/O
By ETCentric Staff, April 12, 2024
During Google Cloud Next 2024 in Las Vegas, Google announced an updated version of its text-to-image generator Imagen 2 on Vertex AI that has the ability to generate video clips of up to four seconds. Google calls this feature “text-to-live images,” and it essentially delivers animated GIFs at 24 fps and 360×640 pixel resolution, though Google says there will be “continuous enhancements.” Imagen 2 can also generate text, emblems and logos in different languages, and has the ability to overlay those elements on existing images like business cards, apparel and products. Continue reading Google Imagen 2 Now Generates 4-Second Clips on Vertex AI
By ETCentric Staff, March 27, 2024
OpenAI’s Sora text- and image-to-video tool isn’t publicly available yet, but the company is showing what it’s capable of by putting it in the hands of seven artists. The results — from a short film about a balloon man to a hybrid flamingo giraffe — are stirring excitement and priming the pump for what OpenAI CTO Mira Murati says will be a 2024 general release. Challenges include making it cheaper to run and enhancing guardrails. Since introducing Sora last month, OpenAI says it’s “been working with visual artists, designers, creative directors and filmmakers to learn how Sora might aid in their creative process.” Continue reading OpenAI Releases Early Demos of Sora Video Generation Tool
By ETCentric Staff, March 8, 2024
London-based AI video startup Haiper has emerged from stealth mode with $13.8 million in seed funding and a platform that generates up to two seconds of HD video from text prompts or images. Founded by alumni from Google DeepMind, TikTok and various academic research labs, Haiper is built around a bespoke foundation model that aims to serve the needs of the creative community while the company pursues a path to artificial general intelligence (AGI). Haiper is offering a free trial of what is currently a web-based user interface similar to offerings from Runway and Pika. Continue reading AI Video Startup Haiper Announces Funding and Plans for AGI
By ETCentric Staff, March 1, 2024
Lightricks, the company behind apps including Facetune, Photoleap and Videoleap, has come up with a text-to-video tool called LTX Studio that is being positioned as a turnkey AI tool for filmmakers and other creators. “From concept to creation,” the new app aims to enable “the transformation of a single idea into a cohesive, AI-generated video.” The tool is currently waitlisted, but Lightricks says it will make the web-based tool available to the public for free, at least initially, beginning in April, allowing users to “direct each scene down to specific camera angles with specialized AI.” Continue reading Lightricks LTX Studio Is a Text-to-Video Filmmaking Platform
By ETCentric Staff, March 1, 2024
On the heels of ElevenLabs’ demo of a text-to-sound app unveiled using clips generated by OpenAI’s text-to-video artificial intelligence platform Sora, Pika Labs is releasing a feature called Lip Sync. The feature lets Pika’s paid subscribers use the ElevenLabs app to add AI-generated voices and dialogue to Pika-generated videos, with the characters’ lips moving in sync with the speech. Pika Lip Sync supports both uploaded audio files and text-to-audio AI, allowing users to type or record dialogue, or use pre-existing sound files, then apply AI to change the voicing style. Continue reading Pika Taps ElevenLabs Audio App to Add Lip Sync to AI Video
By ETCentric Staff, February 20, 2024
OpenAI has debuted a generative video model called Sora that could be a game changer. In OpenAI’s demonstration clips, Sora depicts both fantasy and natural scenes with a photorealistic intensity that makes the images appear to have been photographed. Although Sora is said to be currently limited to one-minute clips, it is only a matter of time until that limit expands, which suggests the technology could have a significant impact on all aspects of production — from entertainment to advertising to education. Concerned about Sora’s disinformation potential, OpenAI is proceeding cautiously, initially making it available only to a select group to help troubleshoot. Continue reading OpenAI’s Generative Video Tech Is Described as ‘Eye-Popping’