Startup Lightform Readies Release of Inexpensive AR Scanner

Augmented reality startup Lightform has emerged from three years in stealth mode, announcing a $2.6 million funding round from former Oculus head scientist Steven LaValle, Lux Capital, Seven Seas and NSF. Lightform has been building a device that scans its environment and creates a high-resolution mesh, which it then combines with precisely targeted digital projections of light. That makes it capable of creating augmented reality in any environment, without the need for a headset.
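The underlying technique, projection-mapped AR, depends on knowing the scene geometry well enough to steer projector pixels onto exact surface points. Below is a purely illustrative pinhole-projection sketch, not Lightform's software; the focal length and principal point values are made-up assumptions.

```python
# Illustrative sketch only (not Lightform's code): given a scanned scene mesh,
# each 3D surface point can be mapped to projector pixel coordinates so that
# projected light lands exactly where the digital content should appear.
# The intrinsics below (focal length, principal point) are made-up assumptions.

def project_point(point_3d, focal_px=1500.0, cx=960.0, cy=540.0):
    """Map a 3D point in the projector's coordinate frame to pixel coordinates."""
    x, y, z = point_3d
    u = focal_px * x / z + cx
    v = focal_px * y / z + cy
    return u, v

# A mesh vertex half a meter to the right and two meters in front of the projector:
print(project_point((0.5, 0.0, 2.0)))  # -> (1335.0, 540.0)
```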

Adobe’s AI-Enabled System Could Replace Greenscreen Tech

The traditional bluescreen/greenscreen method of extracting foreground content from the background for film and video production may be on its way out, thanks to research Adobe is conducting in collaboration with the Beckman Institute for Advanced Science and Technology to develop a new system that relies on deep convolutional neural networks. A recent paper, “Deep Image Matting,” reports that the new method uses a dataset of 49,300 training images to teach the algorithm how to distinguish and eliminate backgrounds.
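For readers curious what CNN-based matting looks like in code, here is a minimal, hypothetical sketch in PyTorch. It is not Adobe's published architecture; it only illustrates the general approach of feeding an RGB image plus a trimap into an encoder-decoder network that predicts a per-pixel alpha matte.

```python
# Minimal sketch (not Adobe's model): an encoder-decoder that takes an RGB
# image plus a trimap (known foreground/background/unknown regions) and
# predicts a per-pixel alpha matte, as in CNN-based matting approaches.
import torch
import torch.nn as nn

class TinyMattingNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: 3 RGB channels + 1 trimap channel
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),  # alpha values in [0, 1]
        )

    def forward(self, rgb, trimap):
        x = torch.cat([rgb, trimap], dim=1)
        return self.decoder(self.encoder(x))

# Training would minimize the error against ground-truth mattes over a large
# dataset (the paper reports 49,300 training images).
model = TinyMattingNet()
alpha = model(torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256))
print(alpha.shape)  # torch.Size([1, 1, 256, 256])
```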

Epic Games Demos Real-Time Effects for New Branded Short

In “The Human Race,” a short produced by visual effects house The Mill for Chevrolet, the automaker’s new 2017 Camaro ZL1 races a futuristic Chevy concept car, one driven by a racecar driver and the other by artificial intelligence. The short premiered at the Game Developers Conference to showcase how the car was created via real-time rendering with the help of the Unreal game engine. Kim Libreri, CTO of Unreal maker Epic Games, demonstrated how aspects of the movie could be changed in real time while it played.

Google Launches Jump Camera Rig and 3D Video Assembler

In fall 2014, Google began developing a project it just unveiled: the Jump 3D video capture and production platform for virtual reality. Just as the newly released Daydream View makes VR headsets affordable, Google hopes Jump will make VR production both less expensive and, with 3D, more immersive. Although 3D has lost popularity, its use in virtual reality creates more of a “you are there” experience by emulating the way human vision works. Most 360-degree videos are currently still shot in 2D.
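That “you are there” quality of 3D capture comes from binocular disparity: each eye sees nearby objects shifted by a slightly different amount, and that shift reads as depth. A toy calculation under an idealized parallel-camera model (all numbers below are made up) shows the relationship.

```python
# Illustrative only: for an idealized parallel stereo pair, depth is recovered
# as depth = focal_length * baseline / disparity. All values below are made up.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters of a point seen with a given pixel disparity."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity = point at infinity
    return focal_px * baseline_m / disparity_px

# Human-like ~6.4 cm baseline, hypothetical 1000 px focal length:
for d in (1, 5, 20, 80):
    print(f"disparity {d:>2} px -> depth {depth_from_disparity(1000, 0.064, d):.2f} m")
```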

ETC, Google and Equinix Present Next-Gen Cloud Workflows

The Entertainment Technology Center at USC (ETC), Equinix and Google are coming together to raise awareness of new cloud-based workflow technologies for creative companies. On September 19th at Google’s Venice, CA headquarters, the companies will host an event for industry professionals to learn how cloud-based workflows are changing media and entertainment. The event will feature presentations and a panel discussion illustrating how facilities large and small can leverage the cloud to decrease workflow latency while increasing security and productivity.

RadicalMedia and Uncorporeal Develop Hologram Experience

RadicalMedia has been working on a project to present “great people” as holograms in venues optimized for augmented reality. Although much of the project is under wraps, more became clear recently when RadicalMedia partnered with Uncorporeal, a volumetric capture startup developing technology to create human holograms that can be used in VR or AR content. Headed by Sebastian Marino, formerly visual effects supervisor on “Avatar,” Uncorporeal’s eight staffers are veterans of Lucasfilm, Weta Digital and Electronic Arts.

Researchers Develop Efficient Way to Render Shiny Surfaces

Computer scientists at UC San Diego have developed an efficient technique for rendering the sparkling, shiny and uneven surfaces of water, various metals, and materials such as injection-molded plastic finishes. The team created an algorithm that improves how CG software reproduces the tiny bright highlights, known as “glints,” that appear where light strikes these surfaces. The team claims the technique is 100 times faster than current state-of-the-art methods, requires minimal computational resources, and works for animation as well as still images.
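The published method is considerably more sophisticated than anything shown here, but a naive sketch makes clear why a faster evaluation matters: rendering glints directly means testing enormous numbers of microfacet normals inside every pixel's footprint. The NumPy snippet below is illustrative only; the roughness and alignment threshold are made-up values.

```python
# Deliberately naive sketch (not the UC San Diego algorithm): brute-force glint
# evaluation checks which microfacet normals in a pixel's surface footprint line
# up with the half vector between the light and view directions. Accelerated
# methods avoid this per-facet loop, which is where the speedup comes from.
import numpy as np

rng = np.random.default_rng(0)

def brute_force_glint(num_facets: int, half_vector, roughness: float = 0.2) -> float:
    """Fraction of randomly perturbed microfacet normals aligned with the half vector."""
    normals = np.column_stack([
        rng.normal(0.0, roughness, num_facets),   # random tilt around the
        rng.normal(0.0, roughness, num_facets),   # macro surface normal (0, 0, 1)
        np.ones(num_facets),
    ])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    aligned = normals @ half_vector > 0.9999  # only a tiny fraction sparkles
    return aligned.mean()

h = np.array([0.02, 0.0, 1.0])
h /= np.linalg.norm(h)
print(brute_force_glint(1_000_000, h))  # cost grows with facets per pixel
```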

ILMxLAB Debuts ‘Tatooine’ VR, Develops Darth Vader Projects

Darth Vader is the star of an upcoming Lucasfilm virtual reality project centered on “Star Wars.” Although the project is largely undefined at this point — it has no name, genre, or release date — what we do know is that the story will both reveal new details about Darth Vader’s background and try out some innovative storytelling techniques. Lucasfilm’s ILMxLAB, which draws on award-winning VFX facility Industrial Light & Magic, Skywalker Sound and the “Star Wars” story group, is developing the project.

Twitter Eyes Machine Learning with Acquisition of Magic Pony

Twitter announced it is acquiring London-based artificial intelligence startup Magic Pony Technology to help provide a professional polish to tweeted live videos. The social giant reportedly paid about $150 million to purchase Magic Pony. Twitter CEO Jack Dorsey said he was buying the company “so Twitter can continue to be the best place to see what’s happening and why it matters, first.” Twitter has been emphasizing video in recent months, and machine learning is “increasingly at the core of everything we build,” said Dorsey. In addition to using machine learning, “Magic Pony’s technology uses artificial intelligence for visual effects,” notes Bloomberg. “It can be used to clean up pixelated images or create new images” and “to improve video streaming.”
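Magic Pony was best known for neural super-resolution. As a rough illustration of that technique (not the company's actual network), the PyTorch sketch below upscales a low-resolution frame using a couple of convolutions followed by a sub-pixel shuffle.

```python
# Minimal sketch of neural super-resolution (not Magic Pony's network): a few
# convolutions followed by a sub-pixel shuffle that upscales a low-res frame,
# the general technique behind cleaning up pixelated images and video.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a higher-res image
        )

    def forward(self, low_res):
        return self.body(low_res)

frame = torch.rand(1, 3, 180, 320)   # e.g., a low-resolution video frame
print(TinyUpscaler(2)(frame).shape)  # torch.Size([1, 3, 360, 640])
```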

Cloud Conference: Moving From Local to Cloud Infrastructure

ConductorIO VP of business development and operations Monique Bradshaw discussed the paradigm shift from local, on-premises infrastructure to the cloud. “The paradigm shift means a fundamental change in approach or underlying assumptions,” she said during an ETC Cloud Innovation Conference keynote at NAB. “We’re seeing a big change in the ways that companies are looking at their rendering.” She noted that 90 percent of survey respondents expect to have at least some of their rendering in the cloud within five years, up from close to 60 percent today.

Cloud Conference: Challenges to Rendering VFX in the Cloud

Visual effects and rendering in the cloud was the topic of an ETC Cloud Innovation Conference panel at NAB 2016, moderated by Google Cloud Platform senior product manager Srikanth Belwadi. The scope of the issue was made clear by the fact that “The Good Dinosaur” required 110 million compute hours and 300 TB of active data space. Panelists from Thinkbox, Shotgun, Rodeo FX, Avere Systems, and ConductorIO discussed the challenges of producing VFX in the cloud, as well as its inevitability.
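A back-of-the-envelope calculation shows why numbers like those push studios toward burstable cloud capacity; the farm sizes below are arbitrary and assume perfect scaling.

```python
# Rough arithmetic only: what 110 million compute (core) hours means in
# wall-clock time on render farms of different sizes, ignoring scheduling
# overhead and assuming the work scales perfectly.
TOTAL_CORE_HOURS = 110_000_000

for cores in (5_000, 20_000, 100_000):
    hours = TOTAL_CORE_HOURS / cores
    print(f"{cores:>7} cores -> {hours:>9,.0f} hours (~{hours / 24 / 365:.2f} years)")
# Even 100,000 cores still needs ~1,100 hours, which is why on-demand cloud
# capacity for peak rendering loads is attractive.
```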

ETC Presents vNAB Cloud Innovation Conference March 2-3

The Entertainment Technology Center@USC will host its second annual vNAB Cloud Innovation Conference on March 2-3, 2016 in the Venice, California offices of Google. This year, the two-day extension of the April NAB Cloud Innovation Conference presents “Masters of the Media Cloud Lifecycle,” with 32 media and entertainment (M&E) panelists and keynote speakers delivering TED-style talks on cloud-related topics designed to keep senior leaders up to date on an ever-changing world. For more information, please visit ETC’s vNAB page.

Digital Domain Moves into VR with Hong Kong Post Acquisition

Visual effects facility Digital Domain, known for its digital work on the “Transformers” series, the “X-Men” series, “Iron Man 3” and “Her,” acquired an 85 percent stake in Hong Kong’s Post Production Company Limited and its parent company for about $17.3 million, with the goal of making a big move into virtual reality. Post Production — which Digital Domain chief executive Daniel Seah calls “the Digital Domain of China” — has worked on many major Chinese movies, TV ads, and music videos.

SMPTE 2015: Post Production Is Moving to the Cloud, Slowly

In the world of UHD/4K, movies and TV programs can require massive amounts of compute power. Take a recent 50-minute UHD natural history documentary that Sundog Media Toolkit worked on: chief executive Richard Welsh reports it ran for four hours on over 5,000 processors. Finding huge amounts of compute power is becoming a challenge for productions, he notes. “We could have run that job in real time if we had split it up more, and that would have taken us up to more than 20,000 processors for one hour.”
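Welsh's figures are easy to sanity-check: the total work is fixed, so halving the wall-clock time means doubling the processor count, assuming the job splits cleanly. A quick worked example:

```python
# Checking the arithmetic behind Welsh's comment: the same total processor-hours
# spread across more machines gives a shorter wall-clock time (perfect splitting
# assumed; in practice overhead pushes the count higher, as he notes).
processors = 5_000
wall_clock_hours = 4
total_processor_hours = processors * wall_clock_hours  # 20,000

target_hours = 1  # roughly "real time" for the 50-minute documentary
needed = total_processor_hours / target_hours
print(f"{total_processor_hours:,} processor-hours -> {needed:,.0f} processors for {target_hours} hour")
```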

Linux to Go: Nvidia GRID Delivers Virtualization, Performance

Linux production environments can now leverage Nvidia’s recently introduced GRID technology to power VMware’s Horizon 6 for Linux, providing visual effects and animation artists anywhere, on any device, with virtual Linux workstations running their familiar high-end applications. On the SIGGRAPH show floor in Los Angeles, Nvidia VP of enterprise marketing Greg Estes demonstrated a virtual workstation running simulations in Maya, with the application, processors, and Nvidia’s GRID and CUDA technologies installed in a data center hundreds of miles away in Northern California.