Nvidia is rolling out DGX Cloud Lepton, a platform that connects AI developers with GPU access available through various cloud providers. Nvidia calls it “a compute marketplace” that offers tens of thousands of GPUs through a global network of Nvidia Cloud Partners (NCPs). Among them: CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda, Nebius, Nscale, SoftBank Corp. and Yotta Data Services — offering GPUs built on Nvidia’s Blackwell and other architectures. Developers can tap into GPU compute capacity in specific regions for both on-demand and long-term computing, Nvidia says, adding that it expects leading cloud computing providers to eventually sign on.
“Nvidia is a relative newcomer to the cloud-computing game, but it’s quickly gaining momentum,” writes The Wall Street Journal, which notes the DGX Cloud Lepton service will widen Nvidia GPU availability “beyond the major cloud providers.”
Nvidia seems determined to grow quickly in the space. In addition to DGX Cloud Lepton, Nvidia this week announced participation in a major new AI data center in France and a for-rent supercomputer in Taiwan.
Cloud providers usually have idle capacity on some servers at any given time. Lepton “is Nvidia’s way to kind of be an aggregator of GPUs across clouds,” Creative Strategies CEO and principal analyst Ben Bajarin told WSJ, which describes DGX Cloud Lepton as a one-stop AI platform with a marketplace of GPU cloud vendors that developers can pick from to train and run their AI models.
Under the Lepton framework, developers will deal with Nvidia directly rather than going through its NCPs, WSJ reports.
Deep GPU Xceleration (DGX) is the umbrella branding for high-performance Nvidia servers and workstations designed specifically for accelerating artificial intelligence, machine learning and deep learning workloads.
Nvidia also announced that it is partnering with manufacturers including Acer, Asus, Dell, HP and Lenovo to launch DGX personal computing systems.
In addition, Nvidia revealed NVLink Fusion, a program that lets its customers “use non-Nvidia CPUs and GPUs in tandem with Nvidia’s products and NVLink,” according to CNBC. Until now, Nvidia’s NVLink high-speed interconnect was limited to chips made by Nvidia; CNBC describes NVLink as designed “to connect and exchange data between its GPUs and CPUs.”
“NVLink Fusion is so that you can build semi-custom AI infrastructure, not just semi-custom chips,” CNBC quotes Nvidia CEO Jensen Huang as saying at Computex, Asia’s biggest electronics conference.