Nvidia Emphasizes Software at Technicolor Experience Event

At the Technicolor Experience Center in Culver City, Nvidia held an event highlighting its decisive move into software, spanning artificial intelligence, virtual reality and other areas. Vice president of developer programs Greg Estes noted that the company has 850,000 developers around the world, in universities and labs as well as at companies such as Adobe. Its developer program provides hands-on training in AI and parallel computing, with applications in media and entertainment as well as smart cities, autonomous vehicles and more.

Andrew Edelsten, Nvidia director of developer technologies for deep learning, described the company’s efforts in AI, including its Deep Learning Institute, which has trained 100,000 developers in AI. He defined deep learning as the ability to add “hidden layers” to neural networks to solve complex problems. “We’re focusing on what makes deep learning successful,” he said, stressing that deep learning is not magic. “If a human being can’t map it, neither can DL,” he warned. He encouraged developers to pick a task that will bring a return on investment. “Put your ideas through the wringer,” he advised.
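
The “hidden layers” Edelsten refers to are the intermediate layers that sit between a network’s input and its output; a network counts as “deep” once several of them are stacked. As a purely illustrative sketch (generic PyTorch, not Nvidia code):

    import torch
    import torch.nn as nn

    # A small feed-forward network. The two intermediate nn.Linear layers
    # between the input and the final output are the "hidden layers."
    model = nn.Sequential(
        nn.Linear(128, 64),   # hidden layer 1
        nn.ReLU(),
        nn.Linear(64, 32),    # hidden layer 2
        nn.ReLU(),
        nn.Linear(32, 1),     # output layer
    )

    x = torch.randn(8, 128)   # a batch of 8 dummy input vectors
    print(model(x).shape)     # torch.Size([8, 1])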

DIY deep learning deployments, he said, are complex and time-consuming, similar to installing Linux in the early 2000s. “With deep learning, there are multiple frameworks put out by different companies, and they don’t always like each other or they need specific versions of Python,” he said. “We took all the major frameworks and put them in containers, all of them accelerated for GPUs. Your teams can grab them and start working.”
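
In practice, “grabbing” one of those containers means pulling a pre-built, GPU-accelerated framework image from Nvidia’s NGC registry and running it on a machine with Nvidia’s container runtime installed. The sketch below uses the Docker SDK for Python; the image tag is a placeholder for illustration, not one cited at the event:

    import docker
    from docker.types import DeviceRequest

    client = docker.from_env()

    # Hypothetical NGC framework image; actual tags vary by release,
    # and pulling from nvcr.io may first require an NGC login.
    repo, tag = "nvcr.io/nvidia/pytorch", "24.01-py3"
    image = f"{repo}:{tag}"

    # Pull the pre-built, GPU-accelerated container image.
    client.images.pull(repo, tag=tag)

    # Run a quick check inside it, exposing all GPUs to the container.
    output = client.containers.run(
        image,
        command="nvidia-smi",
        device_requests=[DeviceRequest(count=-1, capabilities=[["gpu"]])],
        remove=True,
    )
    print(output.decode())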

With regard to content creation, Edelsten ran through several AI-powered tools, from the 18-month-old Super Resolution, a smart upscaling tool, to Style Transfer, which lets the user apply a “style” to a still photo. The latter has just been updated; it now lets the user change the season of a photo and could “potentially replace objects in a scene.”
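
Edelsten did not walk through the internals of these tools, but the general idea behind learned upscaling is straightforward to sketch: rather than interpolating pixels the way classic bicubic scaling does, a convolutional network is trained to predict the missing detail. The toy model below, an untrained sub-pixel upscaler in PyTorch, is purely illustrative and is not Nvidia’s Super Resolution:

    import torch
    import torch.nn as nn

    class TinyUpscaler(nn.Module):
        """Toy 2x super-resolution network (sub-pixel upscaling)."""
        def __init__(self, scale=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
                nn.PixelShuffle(scale),  # rearranges channels into a larger image
            )

        def forward(self, x):
            return self.body(x)

    lowres = torch.randn(1, 3, 270, 480)   # a low-resolution RGB frame
    highres = TinyUpscaler()(lowres)
    print(highres.shape)                   # torch.Size([1, 3, 540, 960])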

Image Inpainting offers object removal and image repair. Realistic Clouds, based on a Disney research paper, generates high-quality clouds in minutes rather than hours. He also demonstrated tools for creating a 3D digital avatar from a 2D photo, for facial animation and for full character animation; the 2018 version of the latter, Deep Mimic, incorporates real-world physics.

Rev Lebaredian, vice president of GameWorks and Lightspeed, addressed the company’s activities in commercial virtual reality. “We’ve been creating virtual realities primarily for entertainment,” he said. “But now they have other uses.”

Nvidia’s VR platform, he explained, is made up of hardware, SDKs and tools. “More recently, we’ve created applications that use the whole stack,” he said. He described the creation of the Holodeck, and how Nvidia partnered with the “Ready Player One” production to use original film assets to create a Holodeck experience.

He focused on the ways that AI and VR can be used to create an end-to-end platform for self-driving cars, and to create simulations to train robots. The team is also experimenting with cable physics that helps articulate virtual hands. “We can now simulate things that are right and don’t just look right,” he said.
