Google Offers Its AI Chips to All Comers via Cloud Computing

Google, which unveiled its tensor processing units (TPUs) for artificial intelligence in 2016, will now make those chips available to other companies via its cloud computing service. Google is initially focusing on computer vision technology, which allows computers to recognize objects; Lyft has been testing the chips for its driverless car project. Amazon is also building its own AI chips for its Alexa-powered Echo devices, aiming to shave seconds off response times and potentially increase sales.

The New York Times quotes Google engineer Zak Stone as saying, “We are trying to reach as many people as we can as quickly as we can.” The chips can potentially cut the “training” of computer vision systems from days to hours.
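For a sense of what renting this hardware looks like in practice, here is a minimal sketch using TensorFlow’s public Cloud TPU APIs; the model and the synthetic dataset are placeholders for illustration, not anything Google or its customers have described:

```python
import tensorflow as tf

# Locate and initialize the Cloud TPU attached to this environment.
# The TPU name/address can also be passed explicitly to the resolver.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Variables created inside the strategy scope are placed on the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(224, 224, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# A tiny synthetic dataset stands in for real training data.
images = tf.random.uniform((64, 224, 224, 3))
labels = tf.random.uniform((64,), maxval=10, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(
    8, drop_remainder=True)  # TPUs need fixed batch shapes

model.fit(dataset, epochs=1)
```

The only TPU-specific work is the handful of setup lines; the training loop itself is unchanged, which is what makes it practical to move a days-long GPU job onto rented TPUs.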


“There is huge potential here,” said Lyft’s Anantha Kancherla, who heads up software for the company’s driverless car project. Google’s TPUs helped speed up Google Assistant and Google Translate, and reduced the company’s dependence on outside chipmakers such as Nvidia and Intel.

At Samsung-owned cloud computing service Joyent, senior product manager Casey Bisson notes that “at times, the only way to build an efficient service is to build your own hardware.”

With the TPU chips, Google hoped to surpass what is possible with GPUs, mainly supplied by Nvidia, and to create a competitive advantage that would bring more customers to its cloud computing service. “Google has become so big, it makes sense to invest in chips,” said former AMD chief technology officer Fred Weber. “That gives them leverage. They can cut out the middleman.”

TechCrunch states that a report from The Information reveals that “Amazon is working on building AI chips for the Echo, which would allow Alexa to more quickly parse information and get those answers.” The idea is to bring response time down to “as close to zero as possible to cultivate the behavior that Amazon can give you the answer you need immediately.” The Echo chips would “probably be geared toward inference, taking inbound information (like speech) and executing a ton of calculations really, really quickly to make sense of the incoming information.”
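To make the training-versus-inference distinction concrete, here is a minimal sketch of on-device inference using TensorFlow Lite; the model file name and its audio-feature input are hypothetical, and nothing here reflects Amazon’s actual Echo hardware or software:

```python
import numpy as np
import tensorflow as tf

# Load a pre-trained, already-quantized model.
# "speech_model.tflite" is a placeholder name for this illustration.
interpreter = tf.lite.Interpreter(model_path="speech_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one frame of (hypothetical) audio features and run a single
# forward pass. Inference is just this fixed burst of arithmetic --
# no weight updates -- which is why it can run on a small dedicated
# chip instead of a data-center accelerator.
audio_features = np.zeros(input_details[0]["shape"],
                          dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], audio_features)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
```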

Numerous startups are already experimenting with these kinds of capabilities, but all of them are still “pre-market.” It’s not entirely clear what all the use cases would be, but TechCrunch points out that “Apple has designed its own custom GPU for the iPhone, and moving those kinds of speech recognition processes directly onto the phone would help it more quickly parse incoming speech, assuming the models are good and they’re sitting on the device.”

More complicated questions would, however, require a cloud connection to process “the entire sentence tree.”
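One way to picture that split is a simple dispatch: answer locally when the on-device model is confident, and fall back to the cloud otherwise. This is a hypothetical sketch of the pattern TechCrunch describes, not any vendor’s actual design; the endpoint URL, threshold, and `local_model` interface are all invented:

```python
import requests  # any HTTP client would do

CONFIDENCE_THRESHOLD = 0.9  # invented cutoff for this illustration
CLOUD_ENDPOINT = "https://example.com/parse"  # placeholder URL

def handle_utterance(audio_features, local_model):
    """Try the on-device model first; defer hard queries to the cloud.

    `local_model` is assumed to expose predict() returning an
    (intent, confidence) pair; `audio_features` is assumed to be a
    plain list of floats so it serializes to JSON.
    """
    intent, confidence = local_model.predict(audio_features)
    if confidence >= CONFIDENCE_THRESHOLD:
        return intent  # answered locally, near-zero latency
    # Low confidence: ship the query to a cloud service that can
    # afford to parse "the entire sentence tree" with a larger model.
    response = requests.post(CLOUD_ENDPOINT,
                             json={"features": audio_features})
    return response.json()["intent"]
```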

The Information’s report, TechCrunch says, “also suggests that Amazon may be working on AI chips for AWS, which would be geared toward machine training.” TechCrunch isn’t “100 percent sure this is a move that Amazon would throw its full weight behind,” since “the wide array of companies working off AWS don’t need some kind of bleeding-edge machine training hardware.”
