January 6, 2017
The primary takeaway from the companies showing in the AR and VR Marketplaces in LVCC South Hall was that many new products look similar to last year’s devices, but they offer significant improvements that make the experiences they deliver more compelling. Mobile VR headsets are finding ways to capture lean-in movement, blurring the line between new offerings and high-end VR HMDs such as the Oculus Rift and HTC Vive. AR glasses are getting lighter, clearer and brighter. And startups are exploring opportunities to bring VR and AR experiences into current social media platforms and build on existing media sharing habits. At CES this week, we heard noteworthy news from the likes of ODG, Zeiss, Lumus, Vuzix, DTS and others.
The big AR/VR story was ODG’s latest AR glasses, developed in partnership with Qualcomm. ODG announced two Android-based models that use the Qualcomm Snapdragon chip. The R-9 has a 50-degree field of view and features a remarkably clear, bright, high-resolution image behind a tinted sunglass lens. The R-9 will be available in Q2 for around $1,799. In keeping with ODG’s “glasses for the masses” positioning, the R-8 will cost under $1,000 and ship later in the year.
Zeiss has partnered with Swiss 3D scanning company Dacuda to enable a lean-in experience on mobile VR. The Dacuda software uses the smartphone camera to recognize when the wearer leans in toward the scene and applies an equivalent movement to the virtual image. Zeiss is branding it as “room scale VR.” Dacuda recommends tethering the phone to a PC for the best performance, but the system can also work as a normal untethered mobile VR experience. Dacuda plans to charge consumers for its software.
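The core idea is straightforward even though Dacuda’s implementation is not public: the camera-tracking layer estimates how far the head has moved, and that offset is applied to the virtual camera. A minimal sketch, with all function and parameter names purely illustrative:

```python
# Hypothetical sketch of camera-based lean-in tracking mapped to a virtual
# camera: the tracker reports a head translation, and the same translation
# (scaled and clamped) is applied to the render camera. Names are
# illustrative; this is not Dacuda's actual SDK.

def apply_lean_in(virtual_cam_pos, head_offset_m, scale=1.0, limit_m=0.5):
    """Translate the virtual camera by the tracked head offset.

    virtual_cam_pos: (x, y, z) of the virtual camera in metres
    head_offset_m:   (dx, dy, dz) head movement estimated from the phone camera
    scale:           multiplier (e.g. >1 to exaggerate small physical moves)
    limit_m:         clamp so tracking noise can't fling the camera
    """
    clamp = lambda v: max(-limit_m, min(limit_m, v * scale))
    return tuple(p + clamp(d) for p, d in zip(virtual_cam_pos, head_offset_m))

# Leaning 10 cm forward (negative z, toward the scene) moves the view with you.
print(apply_lean_in((0.0, 1.6, 0.0), (0.0, 0.0, -0.10)))
```

The clamp matters in practice: camera-only tracking is noisy, and an unbounded offset would translate jitter directly into nausea-inducing camera motion.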
Lumus, the Israeli hardware component company, showed a thinner and brighter AR lens with a 55-degree FOV and 1080p resolution. Partially overlapping a pair of these lenses can achieve a 90-degree FOV, though 3D is lost at the edges where the two images do not overlap. The company also showed a working prototype of ultralight, ultrathin monocular AR eyewear, codenamed Sleek, that has a 23-degree field of view and clear lenses that produce a bright AR image even in direct sunlight.
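The combined-FOV figure follows from simple overlap arithmetic: two 55-degree lenses sharing a 20-degree overlap span 90 degrees in total, and only the overlapped region is seen by both eyes, which is why stereo 3D disappears at the sides. A quick check:

```python
# Simple overlap arithmetic behind the combined-FOV claim (illustrative only).
def combined_fov(fov_each_deg, overlap_deg):
    """Total horizontal FOV of two partially overlapping displays."""
    return 2 * fov_each_deg - overlap_deg

# Two 55-degree lenses overlapping by 20 degrees span 90 degrees total;
# only the 20-degree shared region can carry stereo 3D imagery.
print(combined_fov(55, 20))  # -> 90
```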
InfinityAR has built a new model of its AR glasses using Lumus lenses and two VGA-resolution forward-facing cameras for positional tracking and depth mapping. They did not have a working demo so we cannot speak to its quality.
Vuzix showed working prototypes of lightweight untethered AR glasses with clear untinted glass and depth tracking capabilities. The demo included motion capture of the wearer’s hands, illustrating how users can manipulate virtual objects with their hands. No technical specifications or release date were available without an NDA.
On the audio side, DTS demonstrated improved spatial audio technology, including smooth and clear 360 panning.
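DTS has not published how its engine works, and production spatial audio uses HRTFs and many channels, but the basic principle behind smooth panning is the constant-power panning law: left/right gains whose squares always sum to one, so perceived loudness stays steady as a source sweeps around the listener. A minimal two-channel sketch:

```python
# A minimal sketch of the idea behind smooth 360-degree panning: constant-power
# stereo gains derived from a source azimuth. Real spatial-audio engines (such
# as DTS's) use HRTFs and more channels; this only illustrates the panning law.
import math

def constant_power_pan(azimuth_deg):
    """Map an azimuth (-90 = hard left, +90 = hard right) to (L, R) gains
    whose squares sum to 1, so loudness is constant as the source pans."""
    # Normalize azimuth into a 0..pi/2 pan angle.
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)

left, right = constant_power_pan(0.0)   # source dead centre
print(round(left, 3), round(right, 3))  # equal gains, ~0.707 each
```

Linear (amplitude-sum) panning instead produces an audible dip in loudness as a source crosses the centre, which is exactly the artifact a "smooth and clear" 360 pan avoids.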
Last year, Vrtify was pitching its ability to capture a music venue using 50+ microphones so you could virtually wander around in the sound space. This year the company showed a music experience management and control system. At the most basic level it visually organizes your music library. Beyond that, if you access their library of performances in which they have captured individual instruments on separate tracks, you can move the tracks around in virtual space and create your own customized immersive music soundscape. Vrtify is working on volumetric capture of musicians, so you can walk around the individual artists as they play.
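Placing separately captured instrument tracks in a virtual space amounts to re-mixing them by position. One common building block (a sketch of the general technique, not Vrtify's actual engine) is inverse-distance attenuation, where each track's gain falls off with its distance from the listener:

```python
# Illustrative sketch (not Vrtify's engine): mixing separately captured
# instrument tracks by their distance from the listener, using the common
# inverse-distance attenuation law.

def track_gain(distance_m, ref_distance_m=1.0, min_distance_m=0.1):
    """Gain falls off as ref_distance / distance; clamped to avoid blow-up
    when a track is dragged on top of the listener."""
    d = max(distance_m, min_distance_m)
    return min(1.0, ref_distance_m / d)

# Drag the drums twice as far away and they drop to half gain.
print(track_gain(2.0))  # -> 0.5
```

Combine a per-track gain like this with a panning law and each track becomes a movable source, which is the essence of the "customized immersive soundscape" interaction described above.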
Using a handheld device to scan a room and create a 3D model of the space for previz has clear benefits for content production workflow and asset management. Occipital, which ETC reported on last year, uses an iPad with an attached IR depth mapping peripheral. The company has improved its iOS-only algorithm to capture details down to approximately 3mm. The product is good for fast, general space and object mapping, but falls short for fine details such as hair.
Fraunhofer uses a camera array of any configuration to capture images and create a 3D model. Their demo of a still image captured using 60 cameras was very detailed, but took many hours to process. Fraunhofer has not yet run tests to determine if the approach is practical for video, or if the processing is too costly and time consuming.
Consumer app startup VisualPathy is working to change consumer expectations for the 360 video capture and sharing experience. Using the application, a consumer can easily upload 360 video, embed it on a website, post it to social media, and directly share it with friends.
The user can also post messages and maintain a social media conversation inside the video or 360 still image. VisualPathy is currently free, but the company hopes to eventually charge users based on data volume used or time spent in the application.
Shanghai-based VisionStar has developed EasyAR, an AR authoring SDK that the company claims is competitive with Vuforia. Currently, more than 20,000 developers are using the SDK.
This story primarily highlights interesting developments in the CES-designated VR and AR areas in the LVCC South Hall. There are a great many other VR, AR, MR, and related companies scattered around other CES locations. We will report any interesting findings in a follow-up story.