Microsoft, Quantinuum Tout Advance in Quantum Computing

Microsoft and Quantinuum have demonstrated logical qubits with an error rate 800x lower than that of the underlying physical qubits, a breakthrough the partners say has the potential to usher in a new era of qubit processing. Using ion-trap hardware from Quantinuum and a qubit-virtualization system from Microsoft, the team ran more than 14,000 experiments with no errors, a huge feat in the notoriously fickle realm of qubits. The system has error diagnostics and correction built in, identifying which errors need to be fixed and correcting them without destroying the underlying logical qubits, according to the companies. Continue reading Microsoft, Quantinuum Tout Advance in Quantum Computing

Microsoft, OpenAI Considering a Supercomputer Data Center

Microsoft and OpenAI are contemplating an AI supercomputer data center that may cost as much as $100 billion. Called Stargate, the aim would be to have it operational by 2028 to drive OpenAI’s next generation of artificial intelligence. According to reports, the Stargate complex would span hundreds of acres in the U.S. and draw up to 5 gigawatts of power, roughly the load of a large metropolitan power grid. In light of those power needs, a nuclear power source is said to be under consideration. The project is not yet green-lit, and no U.S. location has been selected. Continue reading Microsoft, OpenAI Considering a Supercomputer Data Center
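
For a rough sense of scale on that 5 gigawatt figure, here is a hedged back-of-envelope comparison. The assumed average U.S. household draw of roughly 1.2 kW is an illustrative assumption, not a number from the reports.

```python
# Back-of-envelope scale check for Stargate's reported 5 GW power draw.
# Assumption (illustrative, not from the reports): an average U.S. household
# draws ~1.2 kW on a continuous basis (roughly 10,500 kWh per year).

STARGATE_POWER_W = 5e9      # 5 gigawatts, as reported
AVG_HOUSEHOLD_W = 1.2e3     # assumed continuous household draw, in watts

households = STARGATE_POWER_W / AVG_HOUSEHOLD_W
print(f"~{households:,.0f} households")  # roughly 4.2 million households
```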

GTC: Nvidia Unveils Blackwell GPU for Trillion-Parameter LLMs

Nvidia unveiled what it is calling the world’s most powerful AI processing system, the Blackwell GPU, purpose-built to power real-time generative AI on trillion-parameter large language models at what the company says will be up to 25x less cost and energy consumption than its predecessor. Blackwell’s capabilities will usher in what the company promises will be a new era in generative AI computing. News from Nvidia’s GTC 2024 developer conference also included the NIM software platform, purpose-built to streamline the deployment of custom and pre-trained AI models in production environments, and the DGX SuperPOD server, powered by Blackwell. Continue reading GTC: Nvidia Unveils Blackwell GPU for Trillion-Parameter LLMs

Meta Building Giant AI Model to Power Entire Video Ecosystem

Facebook chief Tom Alison says parent company Meta Platforms is building a giant AI model that will eventually “power our entire video ecosystem.” Speaking at the Morgan Stanley Technology, Media & Telecom Conference this week, Alison said the model will drive the company’s video recommendation engine across all platforms that host long-form video as well as the short-form Reels, which are limited to 90 seconds. Alison said the company began experimenting with the new, super-sized AI model last year and found that it improved Facebook’s Reels watch time by 8 to 10 percent. Continue reading Meta Building Giant AI Model to Power Entire Video Ecosystem

IBM Announces Significant Advances in Quantum Computing

IBM has produced two quantum computing systems to meet its 2023 roadmap, one based on a chip named Condor, which at 1,121 functioning qubits is the largest transmon-based quantum processor released to date. Transmon chips use a type of superconducting qubit engineered to be less sensitive to noise than earlier superconducting designs, though qubits of all kinds remain notoriously unstable. The second IBM system uses three Heron chips, each with 133 qubits. The more modestly scaled Heron and its successor, Flamingo, play a central role in the next stage of IBM’s quantum roadmap. Continue reading IBM Announces Significant Advances in Quantum Computing

Aurora Supercomputer Targets 2 Quintillion Ops per Second

Aurora, built by Intel and Hewlett Packard Enterprise, is the latest supercomputer to come online at the Department of Energy’s Argonne National Laboratory outside of Chicago and is among a new breed of exascale supercomputers that draw on artificial intelligence. When fully operational in 2024, Aurora is expected to be the first such computer able to achieve two quintillion operations per second. Brain analytics and the design of batteries that last longer and charge faster are among the vast potential uses of exascale machines. Continue reading Aurora Supercomputer Targets 2 Quintillion Ops per Second
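
For reference, “two quintillion operations per second” is the same quantity usually written as 2 exaflops; a minimal unit-conversion sketch using only standard SI prefixes:

```python
# Unit conversion: exascale means 10**18 floating-point operations per second.
QUINTILLION = 10**18   # U.S. short-scale quintillion
EXAFLOP = 10**18       # one exaflop = 10**18 ops/second
PETAFLOP = 10**15

aurora_target = 2 * QUINTILLION                 # Aurora's stated target
print(aurora_target / EXAFLOP, "exaflops")      # 2.0 exaflops
print(aurora_target / PETAFLOP, "petaflops")    # 2000.0 petaflops
```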

United Kingdom Investing $273 Million in AI Supercomputing

The UK government plans to invest at least £225 million (about $273 million) in AI supercomputing with the aim of bringing Great Britain into closer parity with AI leaders the U.S. and China. Among the new machines coming online is Dawn, which was built by the University of Cambridge Research Computing Services, Intel and Dell and is being hosted by the Cambridge Open Zettascale Lab. “Dawn Phase 1 represents a huge step forward in AI and simulation capability for the UK, deployed and ready to use now,” said Dr. Paul Calleja, director of Research Computing at Cambridge. Continue reading United Kingdom Investing $273 Million in AI Supercomputing

Germany, UK to Host Europe’s First Exascale Supercomputers

Europe is moving forward in the supercomputer space, with two new exascale machines set to come online. Jupiter will be installed at the Jülich Supercomputing Centre in western Germany, with assembly set to start as early as Q1 2024. Scotland will be home to the UK’s first exascale supercomputer, to be hosted at the University of Edinburgh, with installation commencing in 2025. An exascale supercomputer can run calculations at speeds of one exaflop (1,000 petaflops) or greater. On completion, the two new machines will rank among the world’s highest-performing systems. Continue reading Germany, UK to Host Europe’s First Exascale Supercomputers

Cerebras, G42 Partner on a Supercomputer for Generative AI

Cerebras Systems has unveiled the Condor Galaxy 1 (CG-1), the first in a planned network of nine interlinked AI supercomputers, designed to deliver 4 exaflops of AI compute via 54 million cores. Cerebras says the CG-1 greatly accelerates AI model training, completing its first run, a large language model trained for Abu Dhabi-based G42, in only 10 days. Cerebras and G42 have partnered to offer the Santa Clara, California-based CG-1 as a cloud service, positioning it as an alternative to Nvidia’s DGX GH200 cloud supercomputer. The companies plan to release CG-2 and CG-3 in early 2024. Continue reading Cerebras, G42 Partner on a Supercomputer for Generative AI
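
As a quick consistency check on the quoted numbers, dividing 4 exaflops of AI compute by 54 million cores gives the average per-core throughput; this is purely arithmetic on the press-release figures, not an additional specification.

```python
# Average per-core throughput implied by the quoted CG-1 figures.
TOTAL_AI_COMPUTE_FLOPS = 4e18   # 4 exaflops of AI compute, as quoted
TOTAL_CORES = 54e6              # 54 million cores, as quoted

per_core = TOTAL_AI_COMPUTE_FLOPS / TOTAL_CORES
print(f"~{per_core / 1e9:.0f} GFLOPS per core on average")  # ~74 GFLOPS
```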

Nvidia Announces a Wide Range of AI Initiatives at Computex

Nvidia CEO Jensen Huang’s keynote at Computex Taipei marked the official launch of the company’s Grace Hopper Superchip, a breakthrough in accelerated processing designed for giant-scale AI and high-performance computing applications. Huang also raised the curtain on Nvidia’s new supercomputer, the DGX GH200, which connects 256 Grace Hopper Superchips into a single data-center-sized GPU with 144 terabytes of scalable shared memory for building massive AI models at the enterprise level. Google, Meta and Microsoft are among the first in line to gain access to the DGX GH200, positioned as “a blueprint for future hyperscale generative AI infrastructure.” Continue reading Nvidia Announces a Wide Range of AI Initiatives at Computex
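
The 144 terabytes of shared memory spread across 256 superchips works out to a per-chip share that can be checked with simple division; a sketch using only the two figures quoted above.

```python
# Per-superchip share of the DGX GH200's quoted 144 TB of shared memory.
TOTAL_SHARED_MEMORY_TB = 144
NUM_SUPERCHIPS = 256

per_chip_tb = TOTAL_SHARED_MEMORY_TB / NUM_SUPERCHIPS
print(f"~{per_chip_tb * 1024:.0f} GB per Grace Hopper Superchip")  # ~576 GB
```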

Meta In-House Chip Designs Include Processing for AI, Video

Meta Platforms has shared additional details on its next generation of AI infrastructure. The company has designed two custom silicon chips: one for training and running AI models, which will eventually power metaverse functions such as virtual and augmented reality, and another tailored to optimize video processing. Meta publicly discussed its internal chip development last week ahead of a Thursday virtual event on AI infrastructure. The company also showcased an AI-optimized data center design and discussed phase two of the deployment of its 16,000-GPU supercomputer for AI research. Continue reading Meta In-House Chip Designs Include Processing for AI, Video

Advanced Packaging for ‘Chiplets’ a Focus of CHIPS Funding

Ten years ago, AMD introduced the concept of smaller, interconnected chips that together work like one digital brain. Sometimes called “chiplets,” they are generally less expensive to build than one large chip, and when grouped into bundles have often outperformed single monolithic chips. In addition to AMD, companies including Apple, Amazon, Intel, IBM and Tesla have embraced the chiplet formula, which leverages advanced packaging technology, an integral part of building advanced semiconductors. Experts now predict packaging will become an even greater focus in the coming years as the global chip wars heat up. Continue reading Advanced Packaging for ‘Chiplets’ a Focus of CHIPS Funding

Nvidia Introduces Cloud Services to Leverage AI Capabilities

Nvidia is launching new cloud services to help businesses leverage AI at scale. Under the banner Nvidia AI Foundations, the company is providing tools to let clients build and run their own generative AI models, custom trained on data specific to the intended task. The individual cloud offerings are Nvidia NeMo for language models and Nvidia Picasso for visual content, including images, video and 3D assets. Speaking at Nvidia’s annual GPU Technology Conference (GTC) last week, CEO Jensen Huang said “the impressive capabilities of generative AI have created a sense of urgency for companies to reimagine their products and business models.” Continue reading Nvidia Introduces Cloud Services to Leverage AI Capabilities

Microsoft Believes Azure Platform Is Unlocking ‘AI Revolution’

Demand for artificial intelligence from enterprises and consumers alike is putting tremendous pressure on cloud service providers to supply the vast data center resources required to train models and deploy the resulting apps. Microsoft recently opened up about the pivotal role it played in getting OpenAI’s ChatGPT to the release phase via its Azure cloud computing platform, linking “tens of thousands” of Nvidia A100 GPUs to train the model. Microsoft is already upgrading Azure with Nvidia’s new H100 chips and latest InfiniBand networking to accommodate the next generation of AI supercomputers. Continue reading Microsoft Believes Azure Platform Is Unlocking ‘AI Revolution’
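
To get a feel for what “tens of thousands” of A100 GPUs represents in aggregate, here is a hedged back-of-envelope estimate; the GPU count of 10,000 and the per-GPU figure of roughly 312 teraflops of dense FP16/BF16 tensor throughput are illustrative assumptions, not numbers from Microsoft.

```python
# Back-of-envelope aggregate compute for an assumed 10,000-GPU A100 cluster.
# Assumptions (illustrative, not from Microsoft): 10,000 GPUs at
# ~312 TFLOPS of dense FP16/BF16 tensor throughput each.
NUM_GPUS = 10_000
PER_GPU_FLOPS = 312e12

peak = NUM_GPUS * PER_GPU_FLOPS
print(f"~{peak / 1e18:.1f} exaflops peak FP16/BF16 throughput")  # ~3.1 exaflops
```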

Microsoft, Nvidia Partner on Azure-Hosted AI Supercomputer

Microsoft has entered into a multi-year deal with Nvidia to build what they’re calling “one of the world’s most advanced supercomputers,” combining Microsoft Azure’s advanced supercomputing infrastructure with Nvidia GPUs, networking and a full stack of AI software to help enterprises train, deploy and scale AI, including large, state-of-the-art models. “AI is fueling the next wave of automation across enterprises and industrial computing, enabling organizations to do more with less as they navigate economic uncertainties,” Microsoft cloud and AI group executive VP Scott Guthrie said of the alliance. Continue reading Microsoft, Nvidia Partner on Azure-Hosted AI Supercomputer