August 26, 2019
Cerebras, a Los Altos, California-based startup dedicated to advancing deep learning, has created a computer chip almost nine inches (22 centimeters) on a side, huge by the standards of today’s chips, which are typically postage-stamp-sized or smaller. The company plans to offer the chip to tech companies to help them improve artificial intelligence at a faster clip. The Cerebras Wafer-Scale Engine (WSE), which took three years to develop, has impressive stats: 1.2 trillion transistors, 46,225 square millimeters of silicon, 18 gigabytes of on-chip memory and 400,000 processing cores.
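As a quick sanity check on those dimensions, 46,225 square millimeters works out to a square 215 millimeters (21.5 centimeters, or about 8.5 inches) on a side, consistent with the "almost nine inches" figure:

```python
import math

# 46,225 mm^2 of silicon, as quoted for the WSE
side_mm = math.sqrt(46225)           # side length of a square die, in mm
side_in = side_mm / 25.4             # converted to inches

print(side_mm)   # 215.0
print(round(side_in, 2))             # about 8.46 inches
```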
Wired reports that today’s AI relies on deep learning, which uses training, “in which algorithms optimize themselves to a task by analyzing example data.” The more data there is to learn from, or the larger and more complex the learning system, the more powerful the resulting software. But the energy consumed to “develop a single piece of language-processing software” can cost $350,000.
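The quoted definition of training can be made concrete with a toy example. The sketch below is purely illustrative and has nothing to do with Cerebras' hardware: a single-parameter model repeatedly analyzes example data and adjusts itself to reduce its error, which is the same loop that, at vastly larger scale, drives the compute and energy costs described above.

```python
# Toy "training" loop: fit y = 2x by gradient descent on squared error.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input x, target y)
w = 0.0     # the single trainable parameter
lr = 0.05   # learning rate

for epoch in range(200):
    for x, y in examples:
        pred = w * x
        grad = 2 * (pred - y) * x   # gradient of (pred - y)^2 w.r.t. w
        w -= lr * grad              # nudge w to reduce the error

print(round(w, 3))  # 2.0 -- the parameter the data implies
```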
AI lab OpenAI estimated “that between 2012 and 2018, the amount of computing power expended on the largest published AI experiments doubled roughly every three and a half months.”
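To get a feel for what a 3.5-month doubling time implies, the arithmetic over that six-year window is simple:

```python
# Compound growth implied by a doubling time of ~3.5 months, 2012-2018.
months = 6 * 12
doubling_period = 3.5
growth = 2 ** (months / doubling_period)

print(f"{growth:,.0f}x")   # roughly a 1.5-million-fold increase in compute
```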
Enter Cerebras’ new chip, which was built with the help of chipmaker TSMC. Eugenio Culurciello, a fellow at memory-chip maker Micron who works on AI chips, said the project makes sense. “It will be expensive, but some people will probably use it,” he said.
Google has developed its own AI chips customized for deep learning, called tensor processing units (TPUs), and others are looking to do the same. Deep-learning training requires a tremendous amount of GPU power, and “Cerebras’ chip covers more than 56 times the area of Nvidia’s most powerful server GPU.”
Cerebras founder and chief executive Andrew Feldman said the chip “can do the work of a cluster of hundreds of GPUs, depending on the task at hand, while consuming much less energy and space,” due to its “large stocks of onboard memory, allowing the training of more complex deep-learning software.”
According to Feldman, the large design is also an advantage since “data can move around a chip around 1,000 times faster than it can between separate chips that are linked together.” To prevent overheating, “Cerebras had to design a system of water pipes that run close by the chip.”
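A back-of-envelope calculation shows why that on-chip speedup matters. The bandwidth figure below is an illustrative assumption, not a Cerebras or Nvidia spec; only the roughly 1,000x on-chip advantage quoted above is taken from the article.

```python
# Time to move a chip-filling dataset on-chip vs. between chips.
weights_gb = 18                        # e.g. data filling the WSE's 18 GB of on-chip memory
off_chip_gbps = 100                    # ASSUMED inter-chip link bandwidth, GB/s
on_chip_gbps = off_chip_gbps * 1000    # "around 1,000 times faster" on-chip

off_chip_ms = weights_gb / off_chip_gbps * 1000
on_chip_ms = weights_gb / on_chip_gbps * 1000

print(f"between chips: {off_chip_ms:.1f} ms")   # 180.0 ms
print(f"on one chip:   {on_chip_ms:.3f} ms")    # 0.180 ms
```

Whatever the absolute numbers, the ratio is the point: data that never leaves the wafer avoids the inter-chip bottleneck entirely.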
Feldman, who said a few customers are testing the chip, “plans to sell complete servers built around the chip, rather than chips on their own, but declined to discuss price or availability.”
TechCrunch reports that Cerebras had to “invent new techniques to allow each of those individual chips to communicate with each other across the whole wafer,” as well as write new software “to handle chips with trillion-plus transistors.”
Because defects are inevitable somewhere on an entire silicon wafer, Cerebras built in “redundancy by adding extra cores throughout the chip that would be used as backup in the event that an error appeared in that core’s neighborhood on the wafer.” Because the chip and the motherboard expand at different rates when heated, Cerebras invented a new material for the connectors to “absorb some of that difference.” Finally, Cerebras turned the chip on its side, to deliver power and cooling vertically at all points across the chip, “ensuring even and consistent access to both.”
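The spare-core idea can be sketched as a remapping problem. The model below is hypothetical; Cerebras' actual redundancy scheme is not public. The essence is that defective physical cores are swapped for nearby healthy spares, so the wafer still presents a fully working grid to software.

```python
def map_cores(physical_cores, defective, spares):
    """Build a mapping that routes around defective cores using spares.

    Illustrative model only: each defective core is replaced by the next
    available healthy spare; working cores map to themselves.
    """
    spare_pool = [s for s in spares if s not in defective]
    mapping = {}
    for core in physical_cores:
        if core in defective:
            mapping[core] = spare_pool.pop(0)  # substitute a healthy spare
        else:
            mapping[core] = core               # core is fine, use it as-is
    return mapping

grid = ["c0", "c1", "c2", "c3"]
mapping = map_cores(grid, defective={"c2"}, spares=["s0", "s1"])
print(mapping)  # {'c0': 'c0', 'c1': 'c1', 'c2': 's0', 'c3': 'c3'}
```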