Nvidia Debuts New Version of A100 GPU for Supercomputers

Nvidia opened its SC20 supercomputing conference by unveiling an 80GB version of its A100 GPU (graphics processing unit), built on the Ampere architecture and aimed at AI and graphics workloads for supercomputing. The chip is intended to enable faster real-time data analysis for business and government applications. The new version doubles the memory of its predecessor, which debuted six months ago. Nvidia executive Paresh Kharya noted that 90 percent of the world’s data was created in the last two years.

VentureBeat reports that the new chip “delivers over 2 terabytes per second of memory bandwidth, which enables a system to feed data more quickly to the GPU.” “Supercomputing has changed in profound ways, expanding from being just focused on simulations to AI supercomputing with data-driven approaches that are now complementing traditional simulations,” said Kharya, who reported that Nvidia has 2.3 million developers for its platforms.
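As a rough back-of-the-envelope check on what that bandwidth means in practice, the short Python sketch below (our own illustration, using only the figures quoted above) estimates how long a workload that reads the GPU’s entire 80GB of memory once would take at 2TB/s:

```python
# Back-of-the-envelope: time for one full pass over the A100 80GB's memory
# at the quoted bandwidth (figures from the article; all else illustrative).
memory_gb = 80                # HBM2e capacity of the new A100
bandwidth_tb_per_s = 2.0      # "over 2 terabytes per second"

seconds = (memory_gb / 1000) / bandwidth_tb_per_s
print(f"One full pass over {memory_gb} GB takes ~{seconds * 1e3:.0f} ms")
# -> One full pass over 80 GB takes ~40 ms
```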

A recent simulation of the coronavirus that causes COVID-19 used 27,000 Nvidia GPUs to model 305 million atoms.

The Nvidia A100 80GB GPU is also available in desktop workstation form in the DGX Station A100 system, due to ship this quarter. Atos, Dell, Fujitsu, HP, Lenovo and Supermicro, among others, plan to offer “four-GPU or eight-GPU systems based on the new A100 80GB GPU in the first half of 2021.”

That chip competes with the new AMD Instinct MI100 GPU accelerator. Moor Insights & Strategy analyst Karl Freund stated that “the AMD GPU can provide 18 percent better performance than the original 40GB A100 from Nvidia … [but] real applications may benefit from the 80GB Nvidia version.” “In AI, Nvidia raised the bar yet again, and I do not see any competitors who can clear that hurdle,” Freund said.

The A100 80GB “enables training of the largest models with more parameters fitting within a single DGX-powered server, such as GPT-2, a natural language processing model with superhuman generative text capability,” eliminating “the need for data or model parallel architectures that can be time-consuming to implement and slow to run across multiple nodes.”
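To make the “fits within a single server” claim concrete, here is a minimal Python estimate of training memory by model size; the 16-bytes-per-parameter figure is a common mixed-precision rule of thumb (our assumption, not a number from the article):

```python
# Rough training-memory estimate for a large language model, using the
# common mixed-precision rule of thumb (an assumption, not from the article):
# FP16 weights (2 B) + FP16 gradients (2 B) + FP32 master weights and
# Adam optimizer moments (12 B) ~= 16 bytes per parameter.
def training_memory_gb(params_billion: float, bytes_per_param: int = 16) -> float:
    # 1e9 params per billion cancels 1e9 bytes per GB, so this is a product.
    return params_billion * bytes_per_param

for params in (1.5, 5.0, 20.0):  # GPT-2 is roughly 1.5 billion parameters
    need = training_memory_gb(params)
    verdict = "fits" if need <= 80 else "needs multi-GPU parallelism"
    print(f"{params:>4.1f}B params -> ~{need:.0f} GB ({verdict} on one 80GB A100)")
```

By this estimate a GPT-2-class model needs on the order of 24GB to train, comfortably inside a single 80GB A100, which is the point of the quote above.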

The A100 can also “be partitioned into up to seven GPU instances, each with 10GB of memory,” which offers “secure hardware isolation and maximizes GPU utilization for a variety of smaller workloads.”
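The partitioning feature is Nvidia’s Multi-Instance GPU (MIG). Below is a minimal sketch of driving it from Python through nvidia-smi, assuming a MIG-capable driver and the 1g.10gb profile that yields seven 10GB slices on an 80GB A100; exact profile names and flags vary by driver version, so treat this as illustrative rather than definitive:

```python
import subprocess

def run(cmd: str) -> None:
    """Echo and run an nvidia-smi command (requires admin privileges)."""
    print(f"$ {cmd}")
    subprocess.run(cmd.split(), check=True)

# Enable MIG mode on GPU 0, then carve out seven 10GB slices.
run("nvidia-smi -i 0 -mig 1")   # enable MIG mode (may require a GPU reset)
run("nvidia-smi mig -lgip")     # list the instance profiles this GPU offers
for _ in range(7):
    # -cgi creates a GPU instance; -C also creates its compute instance
    run("nvidia-smi mig -cgi 1g.10gb -C")
```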

Engadget reports that the DGX Station A100 system comes with four A100 GPUs, in either the 40GB or 80GB version. “Like the DGX A100, the DGX Station A100 is meant for the science and business world,” with tasks that require “crunching massive datasets and investing heavily in machine learning and artificial intelligence.”

Nvidia is marketing the DGX Station A100 as “data center performance without a data center,” meaning it plugs into a standard wall outlet and doesn’t require data center cooling. Built around a 64-core AMD CPU, 512GB of memory and a 7.68TB NVMe SSD, “a single DGX Station A100 can provide 28 separate GPU instances for parallel jobs or multiple users to access” (four GPUs × seven MIG instances each). Among current users are BMW, Lockheed Martin and NTT Docomo.

Related:
Cerebras’ Wafer-Size Chip Is 10,000 Times Faster Than a GPU, VentureBeat, 11/17/20
Japan Named HPC Leader as World Races to Exascale, SearchDataCenter, 11/16/20
