Stability AI Advances Image Generation with Stable Cascade

Stability AI, purveyor of the popular Stable Diffusion image generator, has introduced a completely new model called Stable Cascade. Now in preview, Stable Cascade uses a different architecture than Stable Diffusion’s SDXL, one the UK company’s researchers say is more efficient. Cascade builds on a compression architecture called Würstchen (German for “sausage”) that Stability began sharing in research papers early last year. Würstchen is a three-stage process that includes a two-step encoding of images into a highly compressed latent space. It uses fewer parameters, which means less data to train on, greater speed and reduced cost.
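For readers who want a feel for the staged prompt-to-embedding-to-image flow, the sketch below uses the Stable Cascade pipelines in Hugging Face’s diffusers library. It is a minimal, hedged example: the class names, checkpoint IDs and parameters reflect the diffusers integration as we understand it and may differ from Stability’s final release.

```python
# Hedged sketch of Stable Cascade's staged generation, assuming the
# StableCascadePriorPipeline / StableCascadeDecoderPipeline classes and the
# "stabilityai/stable-cascade" checkpoints published on Hugging Face.
import torch
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
prompt = "a photograph of a lighthouse at sunset, dramatic clouds"

# Stage C (the prior) turns the prompt into small, highly compressed
# image embeddings; this is where most of the heavy lifting happens.
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16
).to(device)
prior_output = prior(prompt=prompt, num_inference_steps=20, guidance_scale=4.0)

# Stages B and A (the decoder) expand those embeddings into a full image.
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.float16
).to(device)
image = decoder(
    image_embeddings=prior_output.image_embeddings.to(torch.float16),
    prompt=prompt,
    num_inference_steps=10,
    guidance_scale=0.0,
).images[0]
image.save("stable_cascade_sample.png")
```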

Runway Opens Waitlist for Its Gen 2 Text-to-Video AI System

New York-based Runway is releasing its Gen 2 system, which generates video clips of up to a few seconds from text or image-based user prompts. The company, which specializes in artificial intelligence-enhanced film and editing tools, has opened a waitlist for the new product, which will be accessed through a private Discord channel and rolled out to a gradually expanding audience. Last year, Meta Platforms and Google both previewed text-to-video software in the research stage, but neither detailed plans to make their platforms public. Bloomberg called Runway’s limited launch “the most high-profile instance of such text-to-video generation outside of a lab.”

Disney Invents High-Quality Tool to Rejuvenate or Age Actors

Disney Research Studios has created an AI tool that can make actors look older or younger more simply than the costly, time-consuming visual effects work that is the current status quo. While artificial intelligence had been used to age or de-age people with relative success in still frames, the results lacked photorealism when applied to video. Disney calls its app FRAN, for Face Re-Aging Network. FRAN has been trained to identify the parts of a face that change with age and can either accentuate or erase those telltale signs.

Hollywood VFX Experts Gravitate to AR/VR Jobs in Big Tech

Apple, Facebook and Google are among the Big Tech companies hiring the technologists behind Hollywood movies like “Avatar” and “Rogue One: A Star Wars Story.” All three companies are developing headsets or glasses for AR/VR or so-called extended reality, and those devices need the kind of photoreal computer-generated characters and landscapes that cutting-edge Hollywood visual effects can deliver. VFX veteran Paul Debevec, now a professor at the University of Southern California, was recruited by Google four-and-a-half years ago.

Firms Highlight Real World AI Solutions at HPA Tech Retreat

At the HPA Tech Retreat in Palm Desert this week, Sony chief technology officer Don Eklund described how Sony has been using artificial intelligence as a toolset to create applications specific to its needs. “I was aware of AI but didn’t pay attention,” he said. “It’s now become pervasive.” He brought together three companies, Adobe, Rival Theory and Video Gorillas, that have been researching and developing AI-enabled solutions for years. Some of these tools are commercially available or soon will be.

Nvidia Reveals Use of Neural Networks to Create Virtual City

Nvidia used processing power and neural networks to create a very convincing virtual city, which will be open for tours by attendees at this year’s NeurIPS AI conference in Montreal. Nvidia’s system, which uses existing videos of scenery and objects to create these interactive environments, also makes it easier for artists to build similar virtual worlds. Nvidia vice president of applied deep learning Bryan Catanzaro said generative models are key to making the creation of virtual worlds cost effective.

Nvidia Ray-Tracing Technology a Quantum Leap in Rendering

At SIGGRAPH 2018, Nvidia debuted its new Turing architecture featuring ray tracing, a rendering technique, for professional and consumer graphics cards. Considered the Holy Grail by many industry pros, ray tracing works by modeling light in real time as it intersects with objects, which makes it ideal for creating photorealistic lighting and VFX. Until now, real-time ray tracing has not been practical because it requires an immense amount of expensive computing power, and Nvidia’s professional Turing card still costs $10,000.
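As a rough illustration of the core operation described above, tracing a ray until it intersects an object and shading the hit point from the surface normal and light direction, here is a toy Python sketch. It is the textbook ray-sphere intersection, not Nvidia’s RTX implementation.

```python
# Minimal, illustrative ray-tracing step: cast a ray, find where it hits a
# sphere, and shade that point with simple Lambertian (diffuse) lighting.
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest hit point, or None if the ray misses."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearest intersection distance
    return t if t > 0 else None

def shade(origin, direction, center, radius, light_dir):
    """Diffuse shading at the hit point: brightness follows the surface normal."""
    t = ray_sphere_hit(origin, direction, center, radius)
    if t is None:
        return 0.0                        # background: no light contribution
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# One ray fired from the camera straight down the -z axis at a unit sphere.
print(shade(origin=(0, 0, 0), direction=(0, 0, -1),
            center=(0, 0, -3), radius=1.0, light_dir=(0, 0, 1)))
```

A real renderer fires millions of such rays per frame and follows their bounces, which is why real-time ray tracing demands so much compute.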

Nvidia Quadro RTX Chips Offer AI and Real-Time Ray Tracing

Nvidia unveiled its new Turing architecture during a keynote at SIGGRAPH 2018, along with three new Quadro RTX workstation graphics cards aimed at professionals. Nvidia dubs the Turing architecture its “greatest leap since the invention of the CUDA GPU in 2006.” The RTX chips are the first to use the company’s real-time ray tracing rendering method, which results in more realistic imagery. Also at SIGGRAPH, Porsche showed off car designs accomplished with Epic Games’ Unreal Engine and Nvidia’s RTX chips.

Nvidia’s Project Holodeck: Photoreal Graphics in Shared VR

At Nvidia’s GPU Technology Conference, the company’s chief executive Jen-Hsun Huang introduced Project Holodeck, which aims to provide an experimental multi-user virtual environment with real-time photorealistic graphics and real-world physics. The new technology, which uses Epic’s Unreal Engine 4 along with Nvidia’s GameWorks, VRWorks and DesignWorks, is targeted at design engineers and their collaborators. Nvidia’s Project Holodeck demo involved Koenigsegg Automotive, a Swedish maker of exotic sports cars.

Researchers Develop Efficient Way to Render Shiny Surfaces

Computer scientists at UC San Diego have developed an efficient technique for rendering the sparkling, shiny and uneven surfaces of water, various metals and materials such as injection-molded plastic finishes. The team created an algorithm that improves how CG software reproduces the interaction between light and these surfaces (known as “glints”). The team claims the technique is 100 times faster than current state-of-the-art methods, requires minimal computational resources, and works for animation as well as still images.