Here’s What We Hope to See This Week at CES Related to AI

With the buzz way down, AI research more vibrant than ever, and mainstream experimentation picking up, there’s a lot to look forward to at CES 2019 in the field of AI and machine learning. And it all seems to converge on one very interesting trend: pragmatism. As AI exits the lab and heads into the world, we’re expecting new and compelling applications. At CES this week, we’re hoping to see advances in areas such as autonomous vehicles, consumer robots, computer vision, smart assistants, and a more integrated Internet of Things.

2018 was a pivotal year in machine learning, especially in the lab. Throughout the year, the research community went wild with deep reinforcement learning (where AI agents, generally in videogame environments, learn autonomously through reward and trial and error), achieving above-human-level gameplay.
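
For readers who want the idea in code, here’s a minimal sketch of that reward-and-trial-and-error loop: plain tabular Q-learning on a toy five-cell corridor (the environment, names, and numbers are all ours), the simplest ancestor of the deep RL methods the labs actually use, which swap the table for a neural network.

```python
# Tabular Q-learning on a 1-D corridor: the agent starts at cell 0, earns a
# reward of 1 for reaching cell 4, and learns purely by trial and error.
# Toy illustration only; deep RL replaces this Q-table with a neural net.
import random

N_STATES = 5          # cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit the table, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # nudge the estimate toward reward + discounted future value
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy marches straight toward the goal.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```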

Lots of progress was also made in natural language processing through the application of deep learning. Most importantly, AI researchers started looking pragmatically beyond pure deep neural net architectures (too narrow and too data-hungry on their own) and began experimenting with hybrid models that apply probabilistic reasoning to graph data structures.
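
To make “probabilistic reasoning on graph data structures” concrete, here’s a toy sketch of the kind of object involved: the classic three-node rain/sprinkler/wet-grass Bayesian network (the probabilities are invented for illustration), queried by brute-force enumeration. Hybrid systems pair graphs like this with learned perception.

```python
# A three-node Bayesian network (rain -> wet grass <- sprinkler) with
# inference by enumeration. All probabilities are made up for illustration.
from itertools import product

P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {True: 0.1, False: 0.9}        # independent of rain here
P_WET = {                                     # P(wet | rain, sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """Probability of one full assignment, following the graph structure."""
    p = P_RAIN[rain] * P_SPRINKLER[sprinkler]
    p_wet = P_WET[(rain, sprinkler)]
    return p * (p_wet if wet else 1 - p_wet)

# Query: P(rain | grass is wet), summing out the sprinkler variable.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(rain | wet) = {num / den:.3f}")    # about 0.645
```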

Overall, there’s a pervasive sense throughout the AI community that, to get out of the lab and into people’s lives, AI applications will need to scale down to narrower domains, or pivot towards more complex hybrid architectures mixing methods (deep learning and probabilistic reasoning) that were once considered antagonistic to one another.

We’re already seeing this at CES.

First off, it’s clear that AI is no longer the marketing mantra it once was. With cryptocurrencies in crisis, and everything else, from drones to autonomous vehicles, having burned out their buzz years ago, this very much looks like a refreshingly sober and grown-up CES.

Here’s what we hope to see this week:

Actual, Operational Autonomous Vehicles

For the second year in a row, show visitors can ride in an autonomous vehicle. In partnership with Irish auto parts company Aptiv, Lyft has been operating around a dozen autonomous BMW 540i models in Las Vegas since last CES, with 5,000 “autonomous” rides delivered by August 2018. We took a roughly one-mile ride and came away impressed (dedicated post to follow).

But the biggest surprise, teased out through intense questioning of the vehicle’s two operators, was how much the vehicle relied on environmental hardware integration (embedded sensors in traffic lights, for example) and good old-fashioned rule-based programming, alongside very real deep learning.
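
Purely to illustrate that hybrid pattern (the names, thresholds, and decision logic below are our invention, not Aptiv’s), think of it as: the learned perception module proposes, and the hand-written rules, fed by infrastructure signals, dispose.

```python
# Illustrative hybrid control sketch: a deep-learning perception stack feeds
# a rule layer, and hard-coded rules plus roadside signals always win.
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_ahead_m: float   # distance estimate from a learned vision stack
    confidence: float         # how much the model trusts itself

def plan_speed(p: Perception, light_state: str, limit_kph: float) -> float:
    """Rule layer: hard constraints override the learned module."""
    if light_state == "red":              # broadcast by an embedded traffic-light sensor
        return 0.0
    if p.confidence < 0.5:                # don't trust a shaky model
        return min(limit_kph, 10.0)       # crawl
    if p.obstacle_ahead_m < 15.0:         # hard safety stop
        return 0.0
    return limit_kph                      # nominal cruise

print(plan_speed(Perception(40.0, 0.9), "green", 50.0))  # -> 50.0
print(plan_speed(Perception(40.0, 0.9), "red", 50.0))    # -> 0.0
```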

Another surprise: the ride is only autonomous on the Las Vegas Strip, i.e. in a very bounded and structured domain. This is particularly interesting considering that the nearest real-life experiment, (Google) Waymo’s pilot in the Phoenix area, is also limited to a few neighborhoods, also in a desert climate with little to no variance in weather conditions.

It seems that, under high pressure to finally roll out passenger service, and in the absence of a self-driving application that can generalize across roadways and weather types, autonomous vehicle companies have scaled down their immediate ambitions and adopted more pragmatic hybrid models.

This is likely a trend that will dominate the immediate future of autonomous mobility: while fully “intelligent” vehicles that can operate everywhere and in every weather are still likely years away, we’ll start seeing more and more scaled-down real-life implementations across narrow domains, or without passengers.

“Autonomous delivery” vehicles look like a big trend this year, with buzz already building around Chinese e-commerce giant JD Logistics. And John Deere’s revelation on Monday that a full 90 percent of its large fleet is self-driving capable (down to 2.5 cm precision) also seems to confirm that autonomous rollout will happen first in highly bounded and structured domains: an agricultural field is much more structured than a roadway shared with cars, cyclists, and pedestrians.

Consumer Robots We Can Take Seriously

This has been the running joke at CES for many years now: despite the hundreds of consumer robots presented, nobody has ever seen a technically impressive, viable, and non-gimmicky one. We’re hoping the hundreds of millions of dollars that have flowed into the Shenzhen robotics ecosystem will have produced something worthwhile, or maybe that U.S. leader Sphero has something up its sleeve.

Productized, Deep Learning-Enabled Computer Vision

Given that we’re in year 13 of the deep learning revolution (the founding paper came out in 2006), and that computer vision has been its leading domain of application for most of that time, we’re hoping to see a “Cambrian explosion” of features and applications that leverage this method, along with the petabytes of visual training data now available.
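
Part of why we expect an explosion is how low the barrier to entry has become. The sketch below (assuming PyTorch and torchvision are installed; “photo.jpg” is a placeholder path) shows off-the-shelf inference with a pretrained convolutional network; swapping in a different model is a one-line change.

```python
# Off-the-shelf image classification with a pretrained convolutional network.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True)   # weights learned on ImageNet
model.eval()

image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    logits = model(image)
print("predicted ImageNet class index:", logits.argmax(dim=1).item())
```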

We got a whiff of this on Sunday night at the “Unveiled” event, with an interesting application that regularizes distorted images, and Samsung has already announced that it would lift the veil on some of the computer vision work its C-Lab has done this year, such as autonomously fitting ads and sound to user-generated content.

A More Integrated “Hub and Spoke” IoT

This is a big one. For years now, major trends like the Internet of Things, smart cities, and “AI on the edge” have captured our collective imagination, but with few compelling real-life applications to show for it. Too much focus went to the edge (sensors and wearables, edge computing and edge machine learning), admittedly where the market opportunity is largest. Too much emphasis went to collecting point-to-point data, and not enough to integrating that data into real knowledge.

Sure, it’s hard, but we’re hoping the reemergence of probabilistic graph data structures will reinvigorate the old but still incredibly promising field of knowledge representation, and that some focus will have gone into integrating intelligent, powerful, and ubiquitous edge devices with a central AI hub.
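
As a toy sketch of what “integrating point data into knowledge” could look like (the device names and schema are invented for illustration), here’s a central hub folding raw edge readings into a queryable store of subject-predicate-object facts:

```python
# A miniature "hub and spoke" knowledge store: edge devices report point
# readings, and the hub turns them into graph facts it can reason over.
class KnowledgeHub:
    def __init__(self):
        self.triples = set()   # (subject, predicate, object) facts

    def ingest(self, device: str, reading: dict):
        """Turn one raw point reading into graph facts."""
        for predicate, obj in reading.items():
            self.triples.add((device, predicate, obj))

    def query(self, predicate: str, obj):
        """Find every subject linked to `obj` via `predicate`."""
        return {s for (s, p, o) in self.triples if p == predicate and o == obj}

hub = KnowledgeHub()
hub.ingest("thermostat-kitchen", {"located_in": "kitchen", "reads_temp_c": 31})
hub.ingest("smoke-detector-1", {"located_in": "kitchen", "status": "alarm"})

# Integration pays off: the hub can relate facts no single sensor knows.
print(hub.query("located_in", "kitchen"))   # both kitchen devices show up
```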

More Smart Assistant Madness

We had fun last year with Alexa-powered faucets and toilets. And that was before Amazon revealed that Alexa now has 60,000 “skills” and that it has sold 100 million Alexa-powered devices. We’re hoping the 10,000 employees of Amazon’s Alexa unit (yes, you read that right: that’s a $2 to $3 billion annual payroll) have cranked out more interesting integrations this year.
