Augmented Reality and Artificial Intelligence Shaping the Future

Although augmented reality has so far had an inauspicious debut (think Google Glass), it's poised to transform how we interact with computers over the next two decades. AR still faces technical limitations, including a narrow field of view, less-than-ideal resolution and latency issues, and the only way to interact with it today is via bulky glasses or helmets. But many experts believe we are evolving rapidly toward the point where AR will let us project a virtual screen onto any surface.

The Wall Street Journal reports that "nearly every tech giant is working on AR," including Facebook, Samsung and Apple, the last of which acquired German AR company Metaio. Others include Microsoft, with its HoloLens; Google, which is still working on its AR Glass project (now called Aura); and Seiko, which is getting ready to ship the third version of its AR glasses. Startups in the space include Meta, Daqri and Magic Leap, the last of which has raised $1.37 billion.

WSJ, citing a Goldman Sachs Group report, proposes that AR will be much bigger than VR in the long run. “The reason is simple: VR is to the PC as AR is to the smartphone. One requires that you be in a safe, enclosed space, while the other can be used in the real world.” Daqri chief executive Brian Mullins agrees. “There will absolutely be a time in the future where AR is so ubiquitous that we can’t imagine our lives without it,” he said.

Artificial intelligence is another trend shaping the future, says The Verge, which points to the victories of DeepMind's AlphaGo program over Go legend Lee Se-dol as proof. In an interview, DeepMind founder Demis Hassabis describes Go as "the pinnacle of perfect information games… way more complicated than chess."
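Hassabis's "way more complicated than chess" claim can be made concrete: Go has a much larger branching factor and longer games, so its game tree dwarfs chess's. A back-of-the-envelope sketch, using commonly cited average figures from the game-complexity literature (roughly 35 legal moves over 80 plies for chess, 250 moves over 150 plies for Go; `tree_exponent` is an illustrative helper, not anything from the article):

```python
import math

def tree_exponent(branching: float, depth: int) -> int:
    """Base-10 exponent of the rough game-tree size branching ** depth.

    Computed in log space, since 250 ** 150 overflows a float.
    """
    return int(depth * math.log10(branching))

# Commonly cited averages (approximate, for illustration only)
chess = tree_exponent(35, 80)    # chess: ~10^123 game-tree positions
go = tree_exponent(250, 150)     # Go (19x19): ~10^359

print(f"chess ≈ 10^{chess}, Go ≈ 10^{go}")
```

The roughly 236 orders of magnitude between the two is why brute-force search in the style of Deep Blue was never viable for Go, and why the pattern-learning "intuition" Hassabis describes below mattered.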

“It’s always been a bit of a holy grail or grand challenge for AI research,” he said. Intuition is what separates great Go players, he noted, and, by adding neural networks, AlphaGo now has a type of intuition. “We’ve imbued AlphaGo with the ability to learn and then it’s learnt it through practice and study, which is much more human-like [than Deep Blue],” he added.

DeepMind has announced a partnership with the NHS, says Hassabis, to "start building a platform that machine learning can be used in." Subtle AI-driven changes in our smartphones will be apparent within the next two to three years, he predicts, and in the next four to five-plus years AI will create "big step changes in capabilities."

Hassabis is personally interested in AI’s use for advancing science more quickly, with “AI research assistants” doing the drudgery, enabling human scientists to make quicker breakthroughs.