NAB 2017: A Look at the Evolving Trends in VR and AR Audio

At the ETC conference on VR/AR, Source Sound VR’s Linda Gedemer moderated a panel of audio experts to talk about today’s VR audio tools, the trends animating VR audio, and their wish lists for future technology. Everyone agreed that the audio toolsets for VR/AR seem to change nearly every day, although a handful of tools, such as Dolby Atmos and the game audio middleware Wwise, stand out as widely accepted in this industry sector. Panelists from OSSIC, Nokia, and Source Sound VR described the toolsets they use.

OSSIC’s technical sound designer Sally Kellaway, who also is a member of Virtual Reality Content Creators Australia (VRCC), said she is used to working with game pipelines — both Unity and Unreal, as well as middleware like Wwise. Nokia’s OZO Live product manager Per-Ola Robertsson said he has leveraged Nokia technology for the VR team. Source Sound owner/VR audio director Tim Gedemer reported that his company has had parallel tracks of linear content and games going for some time.

“They really haven’t met that much, but with VR, they’re looking at each other saying, we want what you have,” said Tim Gedemer. “These industries have been flirting with one another for a long time, and now they’re forced to date.”

“Companies steeped in one side or the other are now pushed to look at things from the other perspective and borrow from the technology the other has developed,” he added. “I could see an experience that combined a live feed, interactive elements in physical space, and a linear feed of other curated content. In audio, we won’t be able to use five different pieces of software and expect it to work. We need someone to put them together.”

Robertsson noted that the money is likely to come from Silicon Valley. “VR and AR are driven by Silicon Valley,” he said. “The current state is that VR is using old types of systems, but Silicon Valley won’t wait for the next generation of audio. They will invent it themselves.”

Dolby Laboratories applications engineer Ceri Thomas noted that, over time, he expects ambisonics (full-sphere surround sound) to become more detailed and refined. “We’ll rise up through the ranks as fast as bandwidth allows,” he said. Tim Gedemer agreed. “Audio has to deal with a very small footprint in getting it to the consumer,” he said. “If we had more bandwidth, we’d be able to deliver better experiences.”
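
The arithmetic behind that bandwidth pressure is straightforward: an Nth-order ambisonic mix carries (N + 1)^2 channels, so spatial detail grows quadratically before any codec touches it. Below is a minimal sketch of the channel counts and raw data rates, assuming illustrative figures of 48 kHz and 16-bit PCM rather than anything cited by the panel:

```python
# Rough, uncompressed data rates for higher-order ambisonics.
# Assumptions (illustrative, not from the panel): 48 kHz sample rate, 16-bit PCM.
SAMPLE_RATE_HZ = 48_000
BITS_PER_SAMPLE = 16


def ambisonic_channels(order: int) -> int:
    """An Nth-order ambisonic mix carries (N + 1)^2 channels."""
    return (order + 1) ** 2


def raw_bitrate_mbps(order: int) -> float:
    """Uncompressed PCM bitrate for a full Nth-order mix, in Mbit/s."""
    return ambisonic_channels(order) * SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1e6


for order in (1, 2, 3):
    print(f"order {order}: {ambisonic_channels(order):2d} channels, "
          f"~{raw_bitrate_mbps(order):.1f} Mbit/s uncompressed")
```

First order works out to roughly 3 Mbit/s uncompressed and third order to roughly 12 Mbit/s, which is the “small footprint” constraint Tim Gedemer describes.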

Thomas added that ambisonics is fine for 360-degree videos. “But if you start to go through the scene with six degrees of freedom, you need a vastly greater number of channels.” He believes the move from CPUs to GPUs will provide enough hardware acceleration for six- and nine-degrees-of-freedom experiences. Kellaway enthused about Nvidia’s SDK for audio, which she called “super exciting technology.”
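
The gap Thomas describes comes down to what an ambisonic signal can express: a listener turning their head only requires a rotation matrix applied to the existing channels, while moving through the scene means re-rendering sources for a new position. A minimal sketch of the cheap rotation case follows, assuming first-order FuMa-style B-format (W, X, Y, Z), a yaw-only head turn, and a sign convention that varies between toolchains:

```python
import math

import numpy as np


def rotate_bformat_yaw(w, x, y, z, head_yaw_rad):
    """Counter-rotate first-order B-format (W, X, Y, Z) to compensate head yaw.

    w, x, y, z: NumPy arrays of samples for the four first-order channels.
    head_yaw_rad: listener yaw in radians; the sound field is rotated by the
    opposite angle so sources stay fixed in the world. The sign convention
    here is an assumption and differs between toolchains.
    """
    theta = -head_yaw_rad
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    x_rot = cos_t * x - sin_t * y   # X and Y transform like a 2-D vector
    y_rot = sin_t * x + cos_t * y
    return w, x_rot, y_rot, z       # W (omni) and Z (height) are unchanged


# Example: one 512-sample frame of noise, listener yawed by 90 degrees.
frame = [np.random.randn(512) for _ in range(4)]
w, x, y, z = rotate_bformat_yaw(*frame, head_yaw_rad=math.pi / 2)
```

There is no comparable channel-domain shortcut for translation, which is the extra rendering load Thomas expects GPU acceleration to absorb.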

The moderator asked panelists what technology they would develop if they had “Bill Gates money.” Tim Gedemer hopes that “technology becomes ubiquitous in our daily lives and has a deep enough understanding of our humanity to be more organic.” He said it would “probably include brain implants.”

Thomas looks forward to customization of tools for use with AR. Robertsson wants “implants behind my ear or inside my ear” so he can finally get rid of headphones, and Kellaway said she looks forward to education between content creators and tool developers. “I see a beautiful future where we can say we’re interested in this platform but don’t want the brain implant.”
