Google Develops its Own Chip to Speed Up Machine Learning

Google has just built its own chip as part of its efforts to speed up artificial intelligence development, and the company revealed that this is only the first of many chips it plans to design and build. At the same time, a growing number of businesses are migrating to the cloud, reducing their need to buy their own servers, and with them the chips those servers run on. That has led some to believe that Google, and other Internet titans that follow its lead, will reshape the future of the chip industry, particularly for such stalwarts as Intel and Nvidia.

According to Wired, Urs Hölzle, “the man most responsible for the global data center network that underpins the Google empire,” revealed the company’s first chip, dubbed the Tensor Processing Unit or TPU, “as a way of promoting the cloud services that let businesses and coders tap into its AI engines and build them into their own applications.”


The TPU runs TensorFlow, the Google software engine for “deep neural networks” that underpins its machine learning capabilities. Google says TPUs are far more efficient for the task than the GPUs (graphics processing units) that other companies typically use, although Hölzle wouldn’t go into specifics, saying only that they handle part of the computation behind the company’s voice recognition.
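For readers unfamiliar with what a “deep neural network” actually computes: at its core it is layer after layer of matrix multiplication, and that is the operation chips like GPUs and the TPU are built to accelerate. The sketch below is a toy, framework-free illustration (it does not use TensorFlow or run on any special hardware); the function name and the tiny 3-by-2 weight matrix are invented for the example.

```python
# A toy forward pass through one dense neural-network layer, in pure Python.
# Real networks perform this same multiply-accumulate pattern across millions
# of weights, which is why specialized chips exist to speed it up.

def dense_forward(x, weights, bias):
    """Compute y = x @ W + b for a single input vector."""
    return [
        sum(xi * wij for xi, wij in zip(x, column)) + b
        for column, b in zip(zip(*weights), bias)  # iterate output columns
    ]

# A 3-input, 2-output layer with made-up weights.
x = [1.0, 2.0, 3.0]
W = [[0.1, 0.2],
     [0.3, 0.4],
     [0.5, 0.6]]
b = [0.01, 0.02]

print(dense_forward(x, W, b))  # two output activations
```

A production framework such as TensorFlow expresses the same computation as tensor operations and dispatches them to whatever hardware is available, which is how the same model code can run on CPUs, GPUs, or TPUs.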

“[GPUs] are already going away a little,” he said. “The GPU is too general for machine learning. It wasn’t actually built for that.”

Nvidia, the “world’s primary seller of GPUs,” is already working on a GPU tailored to machine learning, and other companies, such as Microsoft, are working with field-programmable gate arrays, or FPGAs, chips that can be re-programmed for specific tasks. Intel recently acquired a company that sells FPGAs as well.

Moor Insights and Strategy analyst Patrick Moorhead believes that FPGAs provide “far more flexibility,” noting that Google’s TPU took six months to build and might be “overkill.” But Google prefers speed, says Hölzle, who notes that TPUs do not replace CPUs (central processing units), which are still required to run the huge number of machines in its data centers and remain Intel’s core business.

Related:
New Chips Propel Machine Learning, The Wall Street Journal, 5/22/16