February 16, 2015
The latest trend in artificial intelligence involves powering deep learning research with a much more efficient class of microprocessor rather than a whole cloud computing system. These processors, graphics processing units (GPUs), excel at massively parallel number crunching, which makes them ideal for deep learning networks. Now, companies such as Google and Facebook, along with various labs that run supercomputers, are using GPU-based computers to power their AI and deep learning operations.
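The number crunching GPUs excel at is mostly dense linear algebra: a neural-network layer boils down to one large matrix multiply followed by a simple nonlinearity, and every output element can be computed independently. A rough sketch of that arithmetic (plain NumPy here for illustration, not actual GPU code):

```python
import numpy as np

# One deep-learning layer is dominated by a single big matrix multiply:
# activations (batch x in_features) times weights (in_features x out_features).
rng = np.random.default_rng(0)
batch, n_in, n_out = 64, 1024, 512
x = rng.standard_normal((batch, n_in))   # a batch of input activations
w = rng.standard_normal((n_in, n_out))   # the layer's weight matrix
b = np.zeros(n_out)                      # bias vector

# Forward pass: each of the 64 x 512 outputs is an independent dot product,
# which is why this work parallelizes so naturally across GPU cores.
h = np.maximum(x @ w + b, 0.0)  # ReLU nonlinearity
print(h.shape)
```

The same multiply-then-nonlinearity pattern repeats at every layer, so speeding up matrix math speeds up the whole network.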
Using GPUs significantly cuts the number of computers needed to do the work for deep learning research. According to Wired, three GPU-based computers can do the same work as 1,000 CPU-based computers.
“That big disparity in resources that you needed to run these experiments is because on the one hand the GPUs are a lot faster themselves, but also because once you have a much smaller system that’s much more tightly integrated, there are sort of economies of scale,” said Adam Coates, the researcher who figured out how to connect several GPU-based computers to amplify their computing power.
The Oak Ridge National Laboratory, which houses one of the world’s biggest supercomputers, is now using GPU infrastructure to help develop its deep learning models. The lab’s deep learning software sifts through terabytes of data generated by the Spallation Neutron Source, looking for patterns. The software is still in development, though, and will likely take years to finish.
Google’s deep learning system DistBelief can run on both CPUs and GPUs, switching between them with ease. That means there are hardly any snags when one of the machines inside Google’s data centers fails. That resilience lets DistBelief train long enough to develop refined skills; its AI can, for example, distinguish a chair from a stool.
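Google has not published DistBelief’s internals in this kind of detail, but the general idea of switching between device types when a machine fails can be sketched in a few lines. The backend names and runner functions below are entirely hypothetical stand-ins, not Google’s actual API:

```python
# Hypothetical sketch: try each available compute backend in order of
# preference and run the job on the first one that succeeds.
def run_with_fallback(job, backends):
    errors = {}
    for name, runner in backends:
        try:
            return name, runner(job)
        except RuntimeError as err:  # e.g. a GPU machine failing mid-run
            errors[name] = str(err)
    raise RuntimeError(f"all backends failed: {errors}")

def gpu_runner(job):
    # Simulate a failed GPU node so the scheduler falls back to CPU.
    raise RuntimeError("GPU node offline")

def cpu_runner(job):
    return sum(job)  # trivial stand-in for the real computation

used, result = run_with_fallback([1, 2, 3],
                                 [("gpu", gpu_runner), ("cpu", cpu_runner)])
print(used, result)  # cpu 6
```

The job completes on the CPU backend even though the preferred GPU backend failed, which is the "hardly any snags" behavior described above.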