CES: From Learning to Thinking Machines – the AI Explosion

Artificial Intelligence is finally here. After nearly 50 years in the doldrums of research, the science of designing “thinking machines” has jumped from academic literature to the lab, and even from the lab to the store. This is largely because its precursor, machine learning, has been enjoying a dramatic revival, thanks in part to the commoditization of sensors and large-scale compute architectures, the explosion of available data (necessary to train advanced machine learning architectures such as recurrent neural networks), and the tech industry’s ever-present need for the next new thing. We expect AI to have a significant presence at next month’s CES in Las Vegas.

Today, roughly half of all IT architectures worldwide incorporate some form of machine learning. Increasingly, it is data that writes the software.

But the jump from learning machines to thinking machines is a considerable one. A true AI uses a machine-learning layer as the core of a much more complex, integrated cognitive architecture, one that also includes knowledge representation and reasoning (an entire subfield of neuroscience and computer science in and of itself), heuristics, genetic programming, and expert and decision systems.

Yes, AI is hard. And the pool of experts and practitioners trained in its nuances is still extremely limited, which is why only 1,500 companies in North America today are engaged in AI research and development activities.

Still, 2016 was the year consumer-facing AI stepped into the spotlight: the year Charlie Rose interviewed a robot; the year Uber deployed self-driving cars as part of a pilot in Pittsburgh; and the year Google’s AlphaGo beat one of the world’s best professional Go players, at a game so complex that even AI experts thought it would take another decade for machines to prevail.

Fueled by a record $8.5 billion in worldwide AI investment in 2015 (four times the 2010 level), AI-powered applications have started trickling into the consumer marketplace, mainly as voice-based human-machine interfaces.

Chatbots exploded in 2016. While Microsoft’s Tay suffered a humiliating shutdown after a Twitter experiment gone awry, Amazon’s Echo and Google Home have been surprise successes, and even Apple’s Siri has seen a sizeable jump in usage.

As a result of this trend, we are likely to see natural language understanding emerge as the AI application at the core of many consumer electronics products in 2017. Whether in toys, mobile, VR/AR or the Internet of Things, consumer-facing AI gives product designers the irresistible ability to bury the technology behind an organic human-machine interface, something Apple showed us is a critical success factor in consumer electronics.

The ETCentric team will be reporting daily from the CES 2017 show floor in early January, identifying products and services built on these and adjacent technologies.