OpenAI’s Five New AI Data Centers to Bring Capacity to 7 GW

OpenAI has laid out plans for five new U.S. data centers that will bring its Stargate AI infrastructure project to a total of 7 gigawatts of capacity within three years. The company says that puts it on track to secure its full $500 billion, 10-gigawatt commitment for Stargate by the end of 2025, ahead of schedule. The disclosure follows media coverage criticizing OpenAI for moving too slowly toward its goals. The company also faces a SoftBank-imposed January 1 deadline to restructure its corporate form so that investors can participate more fully in profits, or risk losing $20 billion in funding.

Nvidia Investing $100 Billion in OpenAI Data Center Build-Out

Nvidia is investing up to $100 billion in a partnership with OpenAI that will result in what Nvidia CEO Jensen Huang predicts will be “the biggest AI infrastructure deployment in history.” The project will deploy about 10 gigawatts’ worth of Nvidia systems — including the upcoming Vera Rubin platform — representing power equivalent to 4 million to 5 million GPUs. “This partnership is about building an AI infrastructure that enables AI to go from the labs into the world,” Huang said on CNBC’s “Halftime Report,” explaining that the $100 billion will be invested in stages as each gigawatt is deployed. The investment will be all cash, with Nvidia receiving an undisclosed amount of OpenAI equity.
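The stated figures invite a quick sanity check: dividing 10 gigawatts across 4 to 5 million GPUs implies a per-accelerator power budget of 2 to 2.5 kilowatts. A minimal back-of-the-envelope sketch, assuming for simplicity that all 10 GW goes to the GPUs themselves (in practice cooling and networking also draw power, so real per-GPU budgets would be lower):

```python
# Rough per-GPU power implied by the Nvidia-OpenAI figures.
# Assumption: the full 10 GW is consumed by GPUs alone, which
# overstates the per-GPU budget since facilities also power
# cooling and networking.
total_power_w = 10e9          # 10 gigawatts
gpu_counts = (4e6, 5e6)       # 4 million to 5 million GPUs

for n in gpu_counts:
    per_gpu_kw = total_power_w / n / 1e3
    print(f"{n/1e6:.0f}M GPUs -> {per_gpu_kw:.1f} kW per GPU")
# 4M GPUs -> 2.5 kW per GPU
# 5M GPUs -> 2.0 kW per GPU
```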

Alibaba’s Qwen3-Omni AI Ingests Text, Images, Audio, Video

Alibaba Cloud’s newest AI model, Qwen3-Omni-30B-A3B, has debuted with a splash. The Chinese company touts it as “the first natively end-to-end omni-modal AI unifying text, image, audio & video in one model.” While Qwen3-Omni accepts prompts in text, image, audio and video, it outputs only text and audio. Alibaba Cloud has released three versions of Qwen3-Omni so users can select based on their needs, choosing between general multimodal capabilities, deep reasoning or specialized audio understanding. Alibaba has also developed an AI chip through its T-Head semiconductor unit that reportedly performs comparably to Nvidia’s H20.

Nvidia Invests $5 Billion in Intel with Plans for AI Infrastructure

Nvidia is investing $5 billion in Intel via a common stock purchase at $23.28 per share, which translates to about a 4 percent stake. The companies plan to collaborate across multiple projects, developing custom data center and PC products to accelerate applications and workloads across the hyperscale, enterprise and consumer markets. Nvidia’s NVLink will be used to connect the architectures, integrating Nvidia’s GPUs with Intel’s CPU technologies. For data centers, Intel will customize x86 CPUs that Nvidia can integrate into its AI platforms. Intel also plans to build x86 SoCs that integrate Nvidia RTX GPU chiplets for PCs.
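The share math behind the ~4 percent figure can be checked directly. A minimal sketch, assuming the stake percentage is measured against the post-deal share count (a simplification; the exact basis was not specified):

```python
# Back out the share count behind Nvidia's reported ~4 percent Intel stake.
investment = 5e9     # $5 billion
price = 23.28        # dollars per share

shares = investment / price
print(f"Shares purchased: {shares/1e6:.1f} million")
# Shares purchased: 214.8 million

# Assumption: the ~4 percent stake is relative to the post-deal total,
# which would imply roughly this many shares outstanding:
implied_total = shares / 0.04
print(f"Implied post-deal share count: {implied_total/1e9:.2f} billion")
# Implied post-deal share count: 5.37 billion
```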

Microsoft Details $7 Billion Future of Wisconsin AI Data Center

Microsoft is in the final phases of building the $3.3 billion Wisconsin facility it says will be the most powerful AI data center in the world when it comes online in early 2026 to train the next decade of artificial intelligence models. The software giant has begun staffing the operation and is already planning further expansion: another $4 billion will be spent over the next three years to build a second data center of similar size and scale, bringing the total investment in Wisconsin to more than $7 billion. Located in Mount Pleasant, the nearly completed facility will be the first in what Microsoft calls its Fairwater family of hyperscale data centers.

OpenAI Signs $300 Billion Cloud Computing Deal with Oracle

In one of the largest cloud computing deals ever, OpenAI has contracted with Oracle for $300 billion in processing power over five years starting in 2027. Oracle has committed to 4.5 gigawatts of capacity; for comparison, a typical nuclear plant generates about 1 gigawatt at any given moment. The deal carries risk for both companies. OpenAI’s annual revenue of about $10 billion is far short of what is needed to cover the tab, while Oracle is exposed both by depending on a small number of large customers for so much revenue and by the expense of expanding infrastructure to fulfill the obligation.
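The gap between the commitment and OpenAI's current revenue can be made concrete with simple arithmetic, assuming the $300 billion is spread evenly across the five contract years (the actual payment schedule has not been disclosed):

```python
# Scale of OpenAI's Oracle commitment against its current revenue.
# Assumption: the contract is spread evenly over its five years;
# the actual payment schedule is undisclosed.
contract_total = 300e9            # $300 billion
years = 5
annual_revenue = 10e9             # ~$10 billion/year

annual_commitment = contract_total / years
print(f"Annual commitment: ${annual_commitment/1e9:.0f}B")
# Annual commitment: $60B
print(f"Commitment vs. revenue: {annual_commitment/annual_revenue:.0f}x")
# Commitment vs. revenue: 6x
```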

Microsoft Contracts with Nebius for $17.4 Billion in AI Capacity

AI infrastructure company Nebius Group NV has entered into a $17.4 billion deal to provide dedicated compute power to Microsoft from a new data center in Vineland, New Jersey. The five-year agreement could be worth up to $19.4 billion with additional capacity and services. The news sent Nebius shares surging 49 percent on the Nasdaq, underscoring how rapidly growing demand for AI compute can reshape a company’s fortunes. The jump added $1 billion to the value of the stake held by Nebius founder Arkady Volozh, the Russian expatriate who founded Yandex, that country’s equivalent of Google.

Nvidia Says Rubin CPX Inference Accelerator Coming in 2026

Nvidia has designed a new class of GPU for massive-context inference, the Rubin CPX, due in late 2026. Purpose-built to accelerate the million-token applications used to generate video and create software, the Rubin CPX functions as a specialty accelerator, working in concert with Nvidia Vera CPUs and Rubin GPUs packaged inside the upcoming Vera Rubin NVL144 CPX rack platform. “The Vera Rubin platform will mark another leap in the frontier of AI computing,” said Nvidia CEO Jensen Huang, predicting it will revolutionize massive-context AI just as RTX did graphics and physical AI.

Europe’s Most Powerful Supercomputer Designed to Foster AI

Europe has entered the big leagues of supercomputing with Jupiter, which this month became the first European system to cross the exascale threshold of more than one quintillion (a billion billion) operations per second. Jupiter is Europe’s most powerful compute platform and the fourth fastest worldwide. The hybrid system combines SiPearl and Nvidia chips, supporting HPC tasks such as simulations and data analysis alongside AI workloads such as training large language models. It also provides access to the Jupiter AI Factory (JAIF), a managed interface for developers and academics.
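The scale of "one quintillion operations per second" is hard to picture; a common illustration compares it to human calculation. A minimal sketch, assuming a world population of roughly 8 billion (an approximation) and one calculation per person per second:

```python
# Putting the exascale threshold (1e18 ops/sec) in human terms.
# Assumptions: ~8 billion people, each doing one calculation per second.
exaflop = 1e18
world_population = 8e9

# Time for all of humanity to match one second of an exascale machine:
seconds = exaflop / world_population
years = seconds / (86400 * 365)
print(f"{seconds:.2e} seconds, or roughly {years:.0f} years")
# 1.25e+08 seconds, or roughly 4 years
```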

OpenAI Reportedly Turning to Broadcom for Custom AI Chips

OpenAI is said to be in talks with Broadcom about developing custom AI inference chips to run its models. On an earnings call last week, Broadcom disclosed that an AI developer had placed a $10 billion order for AI server racks using its chips. That new customer was reported to be OpenAI, which has relied primarily on hotly sought-after Nvidia GPUs for model training and deployment. Broadcom specializes in XPUs — accelerator chips designed for specific uses, like inference for ChatGPT. OpenAI CEO Sam Altman has publicly complained that a shortage of chips has impeded the company’s ability to get new models and products to market.

Nvidia Announces Continued Growth, $26 Billion in Q2 Profit

Santa Clara, California-based Nvidia reported its sales were $46.7 billion for the most recent quarter, marking 56 percent growth over the same period last year and up 6 percent sequentially. Profit rose more than 59 percent to $26.42 billion. The results, which surpassed estimates, reassured global analysts and investors that AI infrastructure spending remains strong, easing — though not erasing — anxieties about an AI bubble. This summer, the chipmaker became the first company to exceed a market cap of $4 trillion, and it is considered a global barometer for the overall health of the artificial intelligence sector.

SoftBank Invests $2 Billion in Intel as Government Mulls Stake

Japan’s SoftBank has committed to investing $2 billion in U.S. chipmaker Intel as the chipmaker struggles to gain traction in the exploding artificial intelligence space and to catch up in the mobile market. SoftBank has agreed to purchase roughly 87 million Intel shares at $23 per share, making it the company’s fifth- or sixth-largest shareholder. The move comes as the Trump administration deliberates converting the U.S. government’s CHIPS Act grants into a 10 percent equity stake in Intel as part of its effort to revive American semiconductor manufacturing. Such a deal would make the government Intel’s largest shareholder.
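The purchase terms line up with the headline figure: roughly 87 million shares at $23 apiece comes to about $2 billion. A trivial check:

```python
# Check the SoftBank-Intel purchase terms against the $2 billion headline.
shares = 87e6    # roughly 87 million shares (reported figure)
price = 23.0     # dollars per share

total = shares * price
print(f"Total: ${total/1e9:.3f} billion")
# Total: $2.001 billion
```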

Genie 3 World Model Produces Minutes of Video in Real Time

Google DeepMind has unveiled Genie 3, a world-building model that uses text and image prompts to generate 3D environments in real time. Still in research preview, Genie 3 can output “several minutes” of video that can be navigated in real time at 24fps and a resolution of 720p. Because it remembers the rules of the world it creates, Genie 3 allows agents to predict how the environment evolves and how actions affect it. Google says world models are “a key steppingstone” to artificial general intelligence, or AGI, since they can train AI agents in “an unlimited curriculum of rich simulation.”

SIGGRAPH: Nvidia Touts Server Chip, Cosmos World Models

Nvidia has unveiled the Blackwell Server Edition GPU designed for enterprise servers. The reveal was made at the ACM SIGGRAPH 2025 computer graphics conference, which started Sunday and runs through Thursday in Vancouver. The company also introduced a host of resources for robotics developers that include a new AI family called the Cosmos World Foundation Models, or Cosmos WFMs, which generate “physics-aware” videos. Notable among them is Cosmos Reason, an open and customizable 7-billion-parameter reasoning vision language model (VLM) for physical AI and robotics.

Huawei May Challenge Nvidia with Its CloudMatrix AI System

At the World AI Conference that opened in Shanghai on Saturday, Huawei emerged as China’s best hope for driving a domestic hardware sector for advanced artificial intelligence workloads. There, Huawei debuted its CloudMatrix 384 AI system, powered by 384 of its high-performance processors, the Ascend 910C GPUs. The setup has drawn favorable comparisons to Nvidia’s flagship supercomputing platform, the GB200 NVL72, a rack-scale solution for on-site AI and HPC tasks. Huawei’s new hardware reportedly drew large crowds to its booth, but the company declined to share detailed comments or live benchmarks, suggesting a tightly controlled public presentation.