Adobe Considers Sora, Pika and Runway AI for Premiere Pro

Adobe plans to add generative AI capabilities to its Premiere Pro editing platform and is exploring the update with third-party AI technologies including OpenAI’s Sora, as well as models from Runway and Pika Labs, making it easier “to draw on the strengths of different models” within everyday workflows, according to Adobe. Editors will gain the ability to generate and add objects into scenes or shots, remove unwanted elements with a click, and even extend the length of existing footage. The company is also developing its own Firefly video model for video and audio work in Premiere Pro.

Microsoft’s VASA-1 Can Generate Talking Faces in Real Time

Microsoft has developed VASA, a framework for generating lifelike virtual characters with vocal capabilities including speaking and singing. The first model, VASA-1, can perform the feat in real time from a single static image and an audio clip of speech or song. The research demo showcases realistic, audio-driven faces that can be fine-tuned to look in different directions or change expression, in video clips of up to one minute at 512 x 512 pixels and up to 40fps “with negligible starting latency,” according to Microsoft, which says “it paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors.”

Meta Tests Image-Generating Social Chatbot on Its Platforms

Meta is testing a new large language model chatbot, Meta AI, on social platforms in parts of India and Africa. The chatbot was introduced in late 2023 and began testing with U.S. WhatsApp users in March. The test is now expanding to more territories and to Instagram and Facebook Messenger. India is reported to be Meta’s largest social market, with more than 500 million Facebook and WhatsApp users, and the expansion has big implications as the company scales up its AI plans to compete against OpenAI and others. The Meta AI chatbot answers questions and generates photorealistic images.

Google Adding Free AI Photo Editing Tools to Google Photos

Beginning May 15, Google Photos users can start accessing a suite of free AI-powered Magic Editor tools like Magic Eraser and Portrait Light. The features will also be accessible on more devices, including Pixel tablets. Last year, Google launched Magic Editor on Pixel 8 and Pixel 8 Pro phones. In addition to making the features available on all Pixel devices, all Google Photos users on Android and iOS will get baseline access to 10 Magic Editor saves per month. Additionally, those with a Pixel device or Premium Google One plan of at least 2TB will have unlimited use.

Google Introduces Faster, More Efficient JPEG Coding Library

Google is attacking slow-loading web pages with Jpegli, a new JPEG image encoder/decoder that offers a 35 percent compression-ratio improvement at high quality settings, the Alphabet company says. The Jpegli coding library offers backward compatibility via “a fully interoperable encoder and decoder complying with the original JPEG standard and its most conventional 8-bit formalism, and API/ABI compatibility with libjpeg-turbo and MozJPEG,” Google says. Images compressed with Jpegli are “more precise and psychovisually effective,” the result of computations that make images “look clearer” with “fewer observable artifacts.”

OpenAI Integrates New Image Editor for DALL-E into ChatGPT

OpenAI has updated the editor for DALL-E, the artificial intelligence image generator included in ChatGPT’s premium tiers. The update, based on the DALL-E 3 model, makes it easier for users to adjust their generated images. Shortly after DALL-E 3’s September debut, OpenAI integrated it into ChatGPT, enabling paid subscribers to generate images from text or image prompts. The new DALL-E editor interface lets users edit images “by selecting an area of the image to edit and describing your changes in chat.” Alternatively, desired changes can be prompted “in the conversation panel” without using the selection tool at all, according to OpenAI.

New Tech from MIT, Adobe Advances Generative AI Imaging

Researchers from the Massachusetts Institute of Technology and Adobe have unveiled a new AI acceleration tool that makes generative apps like DALL-E 3 and Stable Diffusion up to 30x faster by reducing the process to a single step. The new approach, called distribution matching distillation, or DMD, maintains or enhances image quality while greatly streamlining the process. Theoretically, the technique “marries the principles of generative adversarial networks (GANs) with those of diffusion models,” consolidating “the hundred steps of iterative refinement required by current diffusion models” into one step, MIT PhD student and project lead Tianwei Yin says.

Stable Video 3D Generates Orbital Animation from One Image

Stability AI has released Stable Video 3D, a generative video model based on the company’s foundation model Stable Video Diffusion. SV3D, as it’s called, comes in two versions. Both can generate and animate multi-view 3D meshes from a single image. The more advanced version also lets users set “specified camera paths” for a “filmed” look to the video generation. “By adapting our Stable Video Diffusion image-to-video diffusion model with the addition of camera path conditioning, Stable Video 3D is able to generate multi-view videos of an object,” the company explains.

Apple Unveils Progress in Multimodal Large Language Models

Apple researchers have gone public with new multimodal methods for training large language models using both text and images. The results are said to enable AI systems that are more powerful and flexible, which could have significant ramifications for future Apple products. These new models, which Apple calls MM1, support up to 30 billion parameters. The researchers identify multimodal large language models (MLLMs) as “the next frontier in foundation models,” saying they exceed the performance of text-only LLMs and “excel at tasks like image captioning, visual question answering and natural language inference.”

Midjourney Creates a Feature to Advance Image Consistency

Artificial intelligence imaging service Midjourney has been embraced by storytellers, who have long been clamoring for a feature that lets characters regenerate consistently across new requests. Now Midjourney is delivering that functionality with the new “--cref” tag (short for Character Reference), available to those using Midjourney v6 on the Discord server. Users achieve the effect by adding the tag to the end of a text prompt, followed by the URL of the master image that subsequent generations should match. Midjourney will then attempt to repeat the particulars of the character’s face, body and clothing.
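Based on the usage described above, a prompt with the tag might look like the following (the scene description and image URL are purely illustrative, not from Midjourney’s documentation):

```text
/imagine prompt: the same heroine crossing a rainy city street at night --cref https://example.com/heroine-master.png --v 6
```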

TikTok Updates Its Code to Sync to Separate ‘TikTok Photos’

Having fended off challenges in the short-form video sphere since its late 2016 launch, TikTok now appears to be playing offense, laying the groundwork for a photo-sharing app that has drawn comparisons to Instagram and Pinterest. Avid TikTok users are probably familiar with the feature that lets them post still images that viewers can advance through frame by frame. Now TikTok seems to want to build on that approach with a separate TikTok Photos app for Android and iOS, to which users of the primary platform can export and showcase their still images.

Alibaba’s EMO Can Generate Performance Video from Images

Alibaba is touting a new artificial intelligence system that can animate portraits, making people sing and talk in realistic fashion. Researchers at the Alibaba Group’s Institute for Intelligent Computing developed the generative video framework, calling it EMO, short for Emote Portrait Alive. Input a single reference image along with “vocal audio,” as in talking or singing, and “our method can generate vocal avatar videos with expressive facial expressions and various head poses,” the researchers say, adding that EMO can generate videos of any duration, depending on the length of the input audio.

AI Video Startup Haiper Announces Funding and Plans for AGI

London-based AI video startup Haiper has emerged from stealth mode with $13.8 million in seed funding and a platform that generates up to two seconds of HD video from text prompts or images. Founded by alumni from Google DeepMind, TikTok and various academic research labs, Haiper is built around a bespoke foundation model that aims to serve the needs of the creative community while the company pursues a path to artificial general intelligence (AGI). Haiper is offering a free trial of what is currently a web-based user interface similar to offerings from Runway and Pika.

Apple’s Keyframer AI Tool Uses LLMs to Prototype Animation

Apple has taken a novel approach to animation with Keyframer, using large language models to add motion to static images through natural language prompts. “The application of LLMs to animation is underexplored,” Apple researchers say in a paper that describes Keyframer as an “animation prototyping tool.” Based on input from animators and engineers, Keyframer lets users refine their work through “a combination of prompting and direct editing,” the paper explains. The LLM can generate CSS animation code. Users can also use natural language to request design variations.
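To illustrate the kind of CSS animation code the paper says an LLM can generate, here is a minimal, hypothetical sketch for a prompt like “make the image rise and fade in” (the selector name, timing and motion values are assumptions, not examples from the Keyframer paper):

```css
/* Hypothetical LLM output: slide a static image upward while fading it in */
#hero-image {
  animation: rise 2s ease-out forwards;
}

@keyframes rise {
  from { transform: translateY(60px); opacity: 0; }
  to   { transform: translateY(0);    opacity: 1; }
}
```

Because the output is plain CSS, a user could then refine it through direct editing, per the workflow the paper describes.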

Stability AI Advances Image Generation with Stable Cascade

Stability AI, purveyor of the popular Stable Diffusion image generator, has introduced a completely new model called Stable Cascade. Now in preview, Stable Cascade uses a different architecture from Stable Diffusion’s SDXL, one the UK company’s researchers say is more efficient. Cascade builds on a compression architecture called Würstchen (German for “sausage”) that Stability began sharing in research papers early last year. Würstchen is a three-stage process that includes a two-step encoding phase; it uses fewer parameters, which means less data to train on, greater speed and reduced costs.