Cerebras Supercomputer Calculates at 1 Exaflop

Cerebras Systems has unveiled its Andromeda AI supercomputer. With 13.5 million cores, it can calculate at a rate of 1 exaflop, roughly one quintillion (a 1 followed by 18 zeroes) operations per second, using a 16-bit floating-point format. Andromeda’s brain is built of 16 linked Cerebras CS-2 systems, AI computers built around giant Wafer-Scale Engine 2 chips. Each chip has hundreds of thousands of cores yet is more compact and powerful than servers that use standard CPUs, according to Cerebras, which is making Andromeda available for commercial and academic research.
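To put those headline numbers side by side, here is a back-of-the-envelope calculation from the figures above; the even split of the 1 exaflop across all 13.5 million cores is a simplification for illustration, not a Cerebras specification.

```python
# Rough arithmetic from the article's figures. Assumption: the quoted
# 1 exaflop of 16-bit AI compute is spread evenly across all cores,
# which is a simplification for illustration only.
total_flops = 1e18   # 1 exaflop/s
cores = 13.5e6       # cores across 16 CS-2 systems
per_core = total_flops / cores
print(f"{per_core / 1e9:.1f} GFLOP/s per core")  # prints "74.1 GFLOP/s per core"
```

That works out to roughly 74 gigaflops of 16-bit compute per core, consistent with the chip's many small, simple cores rather than a handful of large ones.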

“Customers are already training these large language models [LLMs] — the largest of the language models — from scratch, so we have customers doing training on unique and interesting datasets, which would have been prohibitively time-consuming and expensive on GPU clusters,” Cerebras co-founder and CEO Andrew Feldman is quoted as saying in VentureBeat, which covered the Andromeda announcement at the SC22 high performance computing conference in Dallas this week.

“It’s one of the largest AI supercomputers ever built. It has an exaflop of AI compute, 120 petaflops of dense compute,” Feldman told VentureBeat, offering a comparative analysis of its processing power: “the largest computer on Earth, Frontier, has 8.7 million cores.”

Frontier, installed at the Department of Energy’s Oak Ridge National Laboratory in Tennessee, passed the 1 exaflop performance mark this year using a 64-bit double precision format. “They’re a bigger machine. We’re not beating them. They cost $600 million to build. [Andromeda] is less than $35 million,” Feldman said, as reported by Reuters.
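The cost gap Feldman cites can be made concrete with the quoted figures, keeping in mind the caveat that the two exaflops are not directly comparable (Frontier's is 64-bit double precision, Andromeda's is 16-bit).

```python
# Cost comparison from the quoted figures. Note the formats differ:
# Frontier's exaflop is 64-bit, Andromeda's is 16-bit, so this is a
# price comparison, not an apples-to-apples price/performance number.
frontier_cost = 600e6    # dollars
andromeda_cost = 35e6    # "less than $35 million"
ratio = frontier_cost / andromeda_cost
print(f"Frontier cost roughly {ratio:.0f}x more to build")  # prints "Frontier cost roughly 17x more to build"
```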

While Andromeda was “built at a high-performance data center in Santa Clara, California called Colovore,” clients can access it remotely, Reuters explains. Forbes likens Cerebras’ independently built Andromeda to Nvidia’s supercomputer, Selene.

Andromeda’s CS-2 cluster “is driven by 284 Gen 3 EPYC processors,” writes Forbes, noting that the Dallas SC22 show represents “the first demonstration of the company’s MemoryX and ScaleX technologies,” and concluding: “it was super easy to set up, both physically and logically, to run AI jobs. 3 days! Holy cow.”

Andromeda, Forbes says, is also “simple to program; the Cerebras software stack and compiler take care of all the nitty gritty; you just enter a line of code, specifying how many CS-2’s to run it on, and poof! You are done.”
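Forbes doesn't reproduce that line of code, but the idea — scaling out is just a parameter, with the stack handling distribution — can be sketched as follows. The function and parameter names here are purely hypothetical illustrations, not the real Cerebras API.

```python
# Purely illustrative sketch of the programming model Forbes describes:
# launch_training and num_csx are HYPOTHETICAL names, not Cerebras's
# actual API. The point is that scaling from 1 to 16 CS-2 systems is
# a single-argument change; the software stack does the rest.
def launch_training(model: str, num_csx: int = 1) -> str:
    """Pretend launcher: returns a description of the submitted job."""
    return f"training {model} on {num_csx} CS-2 system(s)"

print(launch_training("gpt-style-llm", num_csx=16))
```

The design point being illustrated is that the user never writes explicit data-parallel or model-parallel code; the compiler and stack decide how work is partitioned across systems.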

Cerebras has delivered “a super-fast AI machine that scales extremely well and is easy to use. The bad news is that they had to pay for it, instead of having a customer fund the system,” Forbes writes, musing on whether potential clients will see past “the high initial dollar per server and realize the benefits of this one-of-a-kind AI machine.”

The Cerebras press release mentions that Andromeda is powering Jasper AI, which “uses large language models to write copy for marketing, ads, books, and more,” and Argonne National Laboratory, which used it to sequence the COVID-19 genome.

The Sunnyvale, California-based Cerebras has distinguished itself by pioneering a disruptive way of building chips. Whereas most chipmakers chemically treat a 12-inch silicon wafer to embed circuits in rectangular sections that are then cut into individual chips, Cerebras uses the entire 12-inch wafer to make a single giant chip with 850,000 cores.
