Alexa, Cortana, Watson Execs Discuss Today’s AI Limitations

In what might have been the most popular panel at CES 2018, the executives responsible for three major AI-enabled applications — IBM Watson, Microsoft Cortana and Amazon Alexa — met to dig deep into artificial intelligence today and tomorrow. In a conversation led by Tom’s Guide editorial director Avram Piltch, the three executives stressed that machine learning and AI are nothing new but have, in fact, been the technology behind long-established activities, from recommendations to warehouse robots.

Amazon vice president of Alexa Engine software Al Lindsay noted that “in Alexa specifically, AI and machine learning shows up in every piece of the experience.” Microsoft AI and Research Group corporate vice president Andrew Shuman said that “we see AI throughout our experiences and take it for granted, like spellchecker in Word.”

“You see aspects of machine learning in Microsoft, especially in the productivity environment,” he continued. “It’s in areas we take for granted; the rich complexity of a search engine requires all sorts of machine learning.”


“We talk about active versus passive AI,” said Cameron Clayton, former CEO of The Weather Channel who is now general manager of IBM Watson’s Content and IoT Platform. He reported that a trial in one city ingests every frame of video from every camera to predict how to tweak traffic light patterns.

“It’s led to a 15-minute reduction in commute times,” he said, noting that this is an example of passive AI, where the system works in the background.

But all three executives agreed that AI is still in its early stages, with a long way to go before it can actually be dubbed intelligent. “A lot of software today is fairly stupid about what we’re doing with it,” said Shuman. “There’s a real opportunity for Cortana to really understand the context of what you do, your schedule, the people you know and help with repetitive things that we waste an enormous amount of time with.”

Voice as an interface also has its limitations; Shuman predicted that gestures will soon become another communication tool. “A lot of our experiences are Watson-initiated, not human-initiated,” said Clayton. “We’re at the infancy of this.”

Piltch asked if giving “personalities” to AI voices risked approaching the so-called uncanny valley. Lindsay said that “people project their own expectations of personality” and that Amazon has “learned a lot by the questions they ask.” He also revealed that Alexa gets frequent marriage proposals and is often asked to sing “Happy Birthday.”

Shuman said Microsoft quickly “learned that suicide hotlines was something we needed to get right.” “It’s really important to have an expectation you set with the user,” he added. “The user should know [the voice] is software and not a real person. That’s a critical piece of the puzzle. The personality part is delightful and helpful.”

But he also noted that “you can’t teach a digital assistant to pronounce a word properly like you can a 3-year-old,” pointing out a limitation of today’s AI capabilities. For Clayton, because Watson serves the enterprise sector, it’s required to have “an answer that’s 100 percent and not 99.2 percent,” a “really huge” bar.

“We worked with a lot of experts and it takes a machine a lot of time to become expert,” he said. “Human intervention is important to us.”
