Adobe Considers Sora, Pika and Runway AI for Premiere Pro

Adobe plans to add generative AI capabilities to its Premiere Pro editing platform and is exploring the update with third-party AI technologies including OpenAI’s Sora, as well as models from Runway and Pika Labs, making it easier “to draw on the strengths of different models” within everyday workflows, according to Adobe. Editors will gain the ability to generate and add objects to scenes or shots, remove unwanted elements with a click, and even extend frames and footage length. The company is also developing its own Firefly video model for video and audio work in Premiere Pro.

NAB: Blackmagic Unveils Two New Full-Frame Cine Cameras

Blackmagic Design has introduced two new attention-getting cameras at NAB 2024 in Las Vegas. The flagship URSA Cine 12K LF (large format) has a new full-frame sensor with 16-stop dynamic range, comes with 8TB of built-in storage, and starts at $14,995. It ships with a Canon EF mount, but also accommodates ARRI PL. Blackmagic will also make the URSA available with a 17K sensor, but has yet to share pricing (though it is expected to cost from $20,000 to $25,000). The $2,995 PYXIS 6K cinema box-style camera offers a choice of three lens attachments: EF, PL or L-mount.

Sony 4K 60p PTZ Broadcast Camera Boasts AI Auto Framing

Sony Electronics is previewing a new flagship 4K 60p pan-tilt-zoom (PTZ) camera model, the BRC-AM7, which has an integrated lens that uses artificial intelligence for PTZ Auto Framing technology to control advanced tracking and focus. The camera provides “accurate and natural automatic tracking of moving subjects,” making it ideal for live events and sports productions at broadcast quality. Set for release in 2025, the BRC-AM7 is, according to Sony, the world’s “smallest and lightest” integrated-lens PTZ camera, weighing in at just over 8 pounds. The camera is among the numerous Sony products featured at NAB 2024 in Las Vegas.

DaVinci Resolve 19 Has AI Motion Tracking and Color Grading

Blackmagic Design has unveiled the new DaVinci Resolve 19, with multi-source editing, neural engine AI tools, Resolve FX and Fairlight AI audio panning among the highlight features. With more than 100 feature upgrades in all, Resolve 19 boasts IntelliTrack AI, Ultra NR noise reduction, ColorSlice six vector grading palettes and Film Look Creator FX. The company also announced the DaVinci Resolve Micro Color Panel, a more affordable color panel for DaVinci Resolve software that Blackmagic says was designed in collaboration with the world’s leading colorists. These tools are featured at the Blackmagic booth at NAB 2024.

AWS Deadline Cloud Service Scales Up Instant Render Farms

Amazon Web Services has launched a new cloud computing service called AWS Deadline Cloud that allows customers to set up, deploy, and scale rendering projects in what the company says are mere “minutes,” improving efficiency by facilitating parallel rendering pipelines. “With Deadline Cloud, customers creating computer graphics, visual effects, or innovating their pipelines to incorporate artificial intelligence-generated visuals can build a cloud-based render farm — aggregated compute — that scales from zero to thousands of compute instances for peak demand, without needing to manage their own infrastructure,” according to AWS.
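The pattern AWS describes, fanning independent frames out across many workers, can be sketched locally. The snippet below is a hypothetical illustration using Python's standard library, not the Deadline Cloud API; the function names and file naming are assumptions for the example. Deadline Cloud applies the same idea with a fleet of cloud instances instead of local threads.

```python
from concurrent.futures import ThreadPoolExecutor


def render_frame(frame_number: int) -> str:
    """Stand-in for one independent render task (e.g., one frame of a shot)."""
    return f"frame_{frame_number:04d}.exr"


def render_shot(frame_count: int, workers: int = 8) -> list[str]:
    # A render farm distributes frames across many workers in parallel;
    # a managed service scales the worker pool up for peak demand and
    # back down to zero, rather than using a fixed local pool like this.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_frame, range(frame_count)))
```

Because each frame is independent, throughput scales roughly with worker count, which is why elastic compute suits render workloads so well.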

OpenAI Releases Early Demos of Sora Video Generation Tool

OpenAI’s Sora text- and image-to-video tool isn’t publicly available yet, but the company is showing what it’s capable of by putting it in the hands of seven artists. The results — from a short film about a balloon man to a hybrid flamingo giraffe — are stirring excitement and priming the pump for what OpenAI CTO Mira Murati says will be a 2024 general release. Challenges include making it cheaper to run and enhancing guardrails. Since introducing Sora last month, OpenAI says it’s “been working with visual artists, designers, creative directors and filmmakers to learn how Sora might aid in their creative process.”

Nikon to Enter Cinema Camera Business with RED Acquisition

Nikon, the Japanese company best known for still cameras, is vaulting into the mainstream of professional moving images with its acquisition of California-based RED Digital Cinema. RED cameras popular among filmmakers and other creators include the RED ONE 4K and V-RAPTOR [X] series. RED also invented the REDCODE RAW compression technology. On closing, RED will become a wholly owned subsidiary of Nikon, which plans to merge “Nikon’s expertise in product development” with “RED’s knowledge in cinema cameras, including unique image compression technology and color science.”

ETC Releases White Balancing Tutorial for Virtual Production

ETC@USC has posted a tutorial that offers best practices for white balancing during production on a curved-wall LED volume stage. As LED volume and backing-wall applications multiply worldwide, critical color calibration issues have emerged. The informative tutorial — authored by cinematographer Tim Kang from Quasar Science with support from producer Erik Weaver from ETC — provides cinematographers and production teams with a simple graphical guide that breaks down the problems in how the camera perceives the LED wall, while specifying different ways to implement solutions. The Virtual Production White Balancing Tutorial is now available on the ETC site.

Lightricks LTX Studio Is a Text-to-Video Filmmaking Platform

Lightricks, the company behind apps including Facetune, Photoleap and Videoleap, has come up with a text-to-video tool called LTX Studio that it is positioning as a turnkey AI tool for filmmakers and other creators. “From concept to creation,” the new app aims to enable “the transformation of a single idea into a cohesive, AI-generated video.” The tool is currently waitlisted, but Lightricks says it will make the web-based tool available to the public for free, at least initially, beginning in April, allowing users to “direct each scene down to specific camera angles with specialized AI.”

Pika Taps ElevenLabs Audio App to Add Lip Sync to AI Video

On the heels of ElevenLabs’ demo of a text-to-sound app, unveiled using clips generated by OpenAI’s text-to-video artificial intelligence platform Sora, Pika Labs is releasing a feature called Lip Sync. The feature lets paid subscribers use the ElevenLabs app to add AI-generated voices and dialogue to Pika-generated videos, with the characters’ lips moving in sync with the speech. Pika Lip Sync supports both uploaded audio files and text-to-audio AI, allowing users to type or record dialogue, or use pre-existing sound files, then apply AI to change the voicing style.

OpenAI’s Generative Video Tech Is Described as ‘Eye-Popping’

OpenAI has debuted a generative video model called Sora that could be a game changer. In OpenAI’s demonstration clips, Sora depicts both fantasy and natural scenes with a photorealism that makes the images appear to have been photographed. Although Sora is said to be currently limited to one-minute clips, it is only a matter of time until that expands, which suggests the technology could have a significant impact on all aspects of production — from entertainment to advertising to education. Concerned about Sora’s disinformation potential, OpenAI is proceeding cautiously, initially making the model available only to a select group to help troubleshoot it.

Apple’s Keyframer AI Tool Uses LLMs to Prototype Animation

Apple has taken a novel approach to animation with Keyframer, using large language models to add motion to static images through natural language prompts. “The application of LLMs to animation is underexplored,” Apple researchers say in a paper that describes Keyframer as an “animation prototyping tool.” Based on input from animators and engineers, Keyframer lets users refine their work through “a combination of prompting and direct editing,” the paper explains. The LLM can generate CSS animation code. Users can also request design variations in natural language.
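The kind of CSS animation code described can be sketched with a small helper that renders a `@keyframes` rule from a structured spec, the sort of intermediate an LLM might be asked to produce. The function and spec format below are illustrative assumptions, not Apple's actual Keyframer interface.

```python
def css_keyframes(name: str, steps: dict[int, dict[str, str]]) -> str:
    """Render a CSS @keyframes rule from {percent: {property: value}}.

    Illustrative only: mimics the shape of CSS animation code an LLM
    could emit from a natural language prompt.
    """
    body = "\n".join(
        f"  {pct}% {{ "
        + "; ".join(f"{prop}: {val}" for prop, val in props.items())
        + "; }"
        for pct, props in sorted(steps.items())
    )
    return f"@keyframes {name} {{\n{body}\n}}"


# e.g., "make the balloon drift right and fade" might become:
rule = css_keyframes(
    "drift",
    {0: {"transform": "translateX(0)", "opacity": "1"},
     100: {"transform": "translateX(40px)", "opacity": "0.5"}},
)
```

Direct editing of the generated rule, as the paper describes, then amounts to tweaking the emitted property values rather than re-prompting.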

CES: Voiseed Upgrades Its Platform for Expressive AI Voices

Milan-based Voiseed demonstrated its web-based Revoiceit platform at CES, pitching it as the best way to manage synthetic voice actors and, in particular, to ensure that synthetic voices convey realistic emotions. The company describes it as a cloud-based solution that uses “generative AI to infuse virtual voices with human emotions and prosody, creating highly expressive, lifelike audio experiences.” While Revoiceit’s most obvious feature is its Studio (imagine Adobe Audition devoted to second-by-second management of voices), it may well be the product’s forthcoming API that provides real value to developers of entertainment technology products.

Meta Touts Its Emu Foundational Model for Video and Editing

Having made the leap from image generation to video generation over the course of a few months in 2022, Meta Platforms has introduced Emu, its first visual foundational model, along with Emu Video and Emu Edit, positioned as milestones on the trek to AI moviemaking. Emu uses just two diffusion models to generate four-second 512×512 videos at 16 frames per second, Meta said, compared with 2022’s Make-A-Video, which required a “cascade” of five models. Internal research found Emu video generations were “strongly preferred” over those of the Make-A-Video model on quality (96 percent) and prompt fidelity (85 percent).
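The quoted clip format implies a fixed frame budget, which a quick calculation makes concrete:

```python
fps = 16      # frames per second, per Meta's description
seconds = 4   # clip length
side = 512    # square resolution

frames_per_clip = fps * seconds                   # 64 frames per clip
pixels_per_clip = frames_per_clip * side * side   # total pixels generated
print(frames_per_clip, pixels_per_clip)           # 64 16777216
```

So each Emu clip is a 64-frame, roughly 16.8-million-pixel generation, which gives a sense of the workload the two-model pipeline handles per prompt.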

Nielsen: June Marks a New All-Time Record for TV Streaming

Streaming accounted for 37.7 percent of overall U.S. TV usage in June, a record share for the format. Cable TV accounted for 30.6 percent and broadcast for 20.8 percent, according to Nielsen’s monthly snapshot The Gauge. Total TV viewing was up 2.2 percent in June, the first monthly increase since January. The uptick was attributed principally to young viewers and the summer break. Notably, TV consumption among the 2-11 and 12-17 age groups was up 16.3 percent and 24.1 percent, respectively, compared with May. Alternative viewing options, including streaming and video gaming, accounted for 90 percent of viewing among those groups.