By Paula Parisi, July 24, 2023
Cerebras Systems has unveiled the Condor Galaxy 1 (CG-1), the first in a planned network of nine supercomputers, designed to deliver 4 exaflops of AI compute via 54 million cores. Cerebras says the CG-1 greatly accelerates AI model training, completing its first run on a large language model for Abu Dhabi-based G42 in only 10 days. Cerebras and G42 have partnered to offer the Santa Clara, California-based CG-1 as a cloud service, positioning it as an alternative to Nvidia’s DGX GH200 cloud supercomputer. The companies plan to release CG-2 and CG-3 in early 2024. Continue reading Cerebras, G42 Partner on a Supercomputer for Generative AI
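A quick back-of-envelope check puts the reported figures in perspective. The sketch below (using only the numbers cited above; the per-core breakdown is our own arithmetic, not a Cerebras spec) divides the claimed aggregate compute by the core count:

```python
# Back-of-envelope arithmetic on the reported CG-1 figures.
EXAFLOP = 1e18                  # floating-point operations per second

total_flops = 4 * EXAFLOP       # 4 exaflops of AI compute (as reported)
cores = 54_000_000              # 54 million cores (as reported)

per_core = total_flops / cores  # average throughput per core
print(f"{per_core / 1e9:.0f} GFLOPS per core")  # → 74 GFLOPS per core
```

That average of roughly 74 GFLOPS per core is modest individually; the headline number comes from the sheer scale of the interconnected system.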
By Paula Parisi, May 31, 2023
Nvidia CEO Jensen Huang’s keynote at Computex Taipei marked the official launch of the company’s Grace Hopper Superchip, a breakthrough in accelerated processing designed for giant-scale AI and high-performance computing applications. Huang also raised the curtain on Nvidia’s new supercomputer, the DGX GH200, which connects 256 Grace Hopper Superchips into a single data-center-sized GPU with 144 terabytes of scalable shared memory to build massive AI models at the enterprise level. Google, Meta and Microsoft are among the first in line to gain access to the DGX GH200, positioned as “a blueprint for future hyperscale generative AI infrastructure.” Continue reading Nvidia Announces a Wide Range of AI Initiatives at Computex
By Paula Parisi, May 22, 2023
Meta Platforms has shared additional details on its next generation of AI infrastructure. The company has designed two custom silicon chips: one for training and running AI models, which could eventually power metaverse functions like virtual and augmented reality, and another tailored to optimize video processing. Meta publicly discussed its internal chip development last week ahead of a Thursday virtual event on AI infrastructure. The company also showcased an AI-optimized data center design and talked about phase two of deployment of its 16,000 GPU supercomputer for AI research. Continue reading Meta In-House Chip Designs Include Processing for AI, Video
By Paula Parisi, May 15, 2023
Ten years ago AMD introduced the concept of smaller, interconnected chips that together work like one digital brain. Sometimes called “chiplets,” they’re generally less expensive than building one large chip, and when grouped together into bundles have often outperformed single monolithic chips. In addition to AMD, companies including Apple, Amazon, Intel, IBM and Tesla have embraced the chiplet formula, which leverages advanced packaging technology, an integral part of building advanced semiconductors. Now experts are predicting packaging is going to be even more of a focus in coming years, as the global chip wars heat up. Continue reading Advanced Packaging for ‘Chiplets’ a Focus of CHIPS Funding
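The cost advantage of chiplets largely comes down to manufacturing yield: the bigger a die, the more likely it contains a fatal defect. A toy calculation using the standard Poisson yield model illustrates the idea (the defect density and die areas here are illustrative assumptions, not figures from the article):

```python
import math

def die_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: probability a die of this area is defect-free."""
    return math.exp(-area_cm2 * defects_per_cm2)

D = 0.2                    # assumed defect density, defects per cm^2
BIG, SMALL = 8.0, 2.0      # one large 8 cm^2 die vs. a 2 cm^2 chiplet

y_mono = die_yield(BIG, D)       # ~20% of the large dies are good
y_chiplet = die_yield(SMALL, D)  # ~67% of the small chiplets are good
print(f"monolithic yield: {y_mono:.0%}, chiplet yield: {y_chiplet:.0%}")
```

Because each chiplet is tested individually and only known-good dies are packaged together, far less silicon is wasted per working product, which is where advanced packaging earns its keep.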
By Paula Parisi, March 29, 2023
Nvidia is launching new cloud services to help businesses leverage AI at scale. Under the banner Nvidia AI Foundations, the company is providing tools to let clients build and run their own generative AI models that are custom trained on data specific to the intended task. The individual cloud offerings are Nvidia NeMo for language models and Nvidia Picasso for visual content, including images, video and 3D. Speaking at Nvidia’s annual GPU Technology Conference (GTC) last week, CEO Jensen Huang said “the impressive capabilities of generative AI have created a sense of urgency for companies to reimagine their products and business models.” Continue reading Nvidia Introduces Cloud Services to Leverage AI Capabilities
By Paula Parisi, March 16, 2023
The demand for artificial intelligence from enterprises as well as consumers is putting tremendous pressure on cloud service providers to supply the vast data center resources required to train the models and deploy the resulting apps. Microsoft recently opened up about the pivotal role it played in getting OpenAI’s ChatGPT to the release phase via its Azure cloud computing platform, linking “tens of thousands” of Nvidia A100 GPUs to train the model. Microsoft is already upgrading Azure with Nvidia’s new H100 chips and latest InfiniBand networking to accommodate the next generation of AI supercomputers. Continue reading Microsoft Believes Azure Platform Is Unlocking ‘AI Revolution’
By Paula Parisi, November 18, 2022
Microsoft has entered into a multi-year deal with Nvidia to build what they’re calling “one of the world’s most advanced supercomputers,” powered by Microsoft Azure’s advanced supercomputing infrastructure combined with Nvidia GPUs, networking and its full stack of AI software to help enterprises train, deploy and scale AI, including large, state-of-the-art models. “AI is fueling the next wave of automation across enterprises and industrial computing, enabling organizations to do more with less as they navigate economic uncertainties,” Microsoft cloud and AI group executive VP Scott Guthrie said of the alliance. Continue reading Microsoft, Nvidia Partner on Azure-Hosted AI Supercomputer
By Debra Kaufman, July 23, 2020
The University of Florida (UF) and Nvidia joined forces to enhance the former’s HiPerGator supercomputer with DGX SuperPOD architecture. Set to go online by early 2021, HiPerGator will deliver 700 petaflops (a petaflop is one quadrillion floating-point operations per second), making it the fastest academic AI supercomputer. UF and Nvidia said the HiPerGator will enable the application of AI to a range of studies, including “rising seas, aging populations, data security, personalized medicine, urban transportation and food insecurity.” Continue reading Nvidia and University of Florida Partner on AI Supercomputer
By Debra Kaufman, May 21, 2020
At Microsoft’s Build 2020 developer conference, the company debuted a supercomputer built in collaboration with, and exclusively for, OpenAI on Azure. It’s the result of an agreement whereby Microsoft would invest $1 billion in OpenAI to develop new technologies for Microsoft Azure and extend AI capabilities. OpenAI agreed to license some of its IP to Microsoft, which will commercialize it to partners as well as train and run AI models on Azure. Microsoft stated that the supercomputer is the fifth most powerful in the world. Continue reading Microsoft Announces Azure-Hosted OpenAI Supercomputer
By Debra Kaufman, September 5, 2019
Dell EMC and Intel introduced Frontera, an academic supercomputer that replaces Stampede2 at the University of Texas at Austin’s Texas Advanced Computing Center (TACC). The companies announced plans to build the computer in August 2018; the project was funded by a $60 million National Science Foundation grant. According to Intel, Frontera’s peak performance can reach 38.7 quadrillion floating point operations per second (petaflops), making it one of the fastest such computers for modeling, simulation, big data and machine learning. Continue reading Academic Supercomputer Is Unveiled by Intel and Dell EMC
By Emily Wilson, May 9, 2019
This week, AMD announced a partnership with Cray to build a supercomputer called Frontier, which the two companies predict will become the world’s fastest supercomputer, capable of “exascale” performance when it is released in 2021. All told, they expect Frontier to be capable of 1.5 exaflops, performing somewhere around 50 times faster than the top supercomputers out today, and faster than the currently available top 160 supercomputers combined. Frontier will be built at Oak Ridge National Laboratory in Tennessee. Continue reading AMD’s New Frontier Will Be World’s Fastest Supercomputer
By Debra Kaufman, July 19, 2017
Google is getting closer to offering quantum computing over the cloud. It’s uncertain whether a quantum computer, which is based on “qubits” rather than 1s and 0s, can outperform a supercomputer, but Google and other companies are betting it will be able to perform certain important tasks millions of times faster. Google and its rivals would be more likely to rent quantum computing over the Internet, since the computers are too bulky and require too much special care to live in most companies’ data centers. Continue reading Google Push Could Spark Quantum Computing in the Cloud
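What separates a qubit from a classical bit is that its state is a pair of complex amplitudes rather than a single 0 or 1, and measurement collapses it probabilistically. The toy simulation below (an illustrative classical sketch, nothing to do with Google's actual hardware or software stack) applies a Hadamard gate to put a qubit into an equal superposition and samples measurement outcomes:

```python
import random

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; measuring yields 0 or 1 with those probabilities.
H = [[2**-0.5, 2**-0.5],
     [2**-0.5, -2**-0.5]]  # Hadamard gate: takes |0> to an equal superposition

def apply(gate, state):
    a, b = state
    return (gate[0][0]*a + gate[0][1]*b, gate[1][0]*a + gate[1][1]*b)

def measure(state):
    # Collapse: 0 with probability |alpha|^2, else 1
    return 0 if random.random() < abs(state[0])**2 else 1

state = apply(H, (1+0j, 0+0j))  # |0> -> (|0> + |1>) / sqrt(2)
samples = [measure(state) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.5: about half 0s, half 1s
```

The promised speedups come not from this randomness itself but from interference between amplitudes across many entangled qubits, which classical simulation like this cannot scale to.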
By Debra Kaufman, November 22, 2016
Microsoft is going full bore into quantum computing, moving from pure research into efforts to build a prototype in what has been primarily an experimental field. If and when they come to fruition, quantum computers could have an impact on drug design, artificial intelligence and even our understanding of physics. For that reason, IBM and Google are also investing in quantum computing, although Microsoft has taken a singular approach, based on braiding quasiparticles known as anyons. Continue reading Microsoft Imagines a Practical Future for Quantum Computers
By Debra Kaufman, June 8, 2016
The National Science Foundation just announced grants to build the $30 million Stampede 2 supercomputer which, with a peak performance of 18 petaflops, will offer twice the power of the original Stampede, which debuted in March 2013. Its processing capacity puts it on a par with Cray’s Titan and IBM’s Sequoia (though not as powerful as China’s Tianhe-2). The supercomputer will be available to any researcher with a need for immense processing power, for such applications as atomic and atmospheric simulations. Continue reading National Science Foundation Funds Stampede Supercomputer
By Rob Scott, January 5, 2016
During the Nvidia keynote at CES 2016, CEO and co-founder Jen-Hsun Huang introduced a new computer for autonomous vehicles called the Drive PX2. Following last year’s Drive CX, the PX2 touts processing power equivalent to 150 MacBook Pros, according to Huang. The lunchbox-sized, water-cooled computer features 12 CPU cores delivering eight teraflops and 24 “deep learning” tera operations per second. As a result, the PX2 can reportedly process data in real time from 12 video cameras, radar, lidar and additional sensors to enhance the self-driving car experience. Continue reading CES: Nvidia Unveils New ‘Supercomputer’ for Self-Driving Cars