Nvidia Reports Record $57B in Revenue, $32B in Profit for Q3

Nvidia reported stellar results for the recent quarter, logging record revenue of $57 billion, up 62 percent year-over-year and 22 percent from Q2. “Blackwell sales are off the charts, and cloud GPUs are sold out,” Nvidia founder and CEO Jensen Huang told investors, saying “we’ve entered the virtuous cycle of AI,” with “more new foundation model makers, more AI startups across more industries, and in more countries.” The results quieted fears about an AI bubble. But there was some drama as Nvidia disclosed there is “no guarantee” of finalizing a previously announced $100 billion investment in OpenAI.

Anthropic Plans to Invest $50 Billion in AI Data Centers in U.S.

San Francisco-based AI startup Anthropic, founded in 2021, announced plans to invest $50 billion in U.S. computing infrastructure, starting with data centers built with Fluidstack in Texas and New York, with additional sites planned. Fluidstack builds and operates high-performance GPU clusters for researchers and AI teams. The project is expected to create 800 permanent jobs and about 2,400 construction jobs, with sites scheduled to come online throughout next year. “These facilities are custom built for Anthropic with a focus on maximizing efficiency for our workloads, enabling continued research and development at the frontier,” according to Anthropic.

Microsoft Introduces New Era of Connected AI Super Clusters

Microsoft’s new Azure AI data center in Atlanta, Georgia — the second in its Fairwater series — is ushering in a new era of connected AI super clusters that will power what The Register calls “the 100 trillion-parameter models of the near future.” Fairwater Atlanta, which became operational in October, is part of “a new class of data center – one that doesn’t stand alone but joins a dedicated network of sites functioning as an AI superfactory to accelerate AI,” training models “on a scale that has previously been impossible,” according to Microsoft. The new purpose-built data center is connected to the tech giant’s first Fairwater site, located in Mount Pleasant, Wisconsin.

OpenAI Sets $38 Billion AWS Deal for Training and Inference

OpenAI has entered into a $38 billion cloud computing agreement with AWS, a deal set to extend at least seven years, the hyperscaler says. The Monday news propelled Amazon stock to an all-time high of $254 per share, up 4 percent at close. After initially working exclusively with investor Microsoft for cloud services, OpenAI has negotiated expansively to meet increased demand. This year, the startup has signed with Oracle, Nvidia, AMD and Broadcom for storage and processing power as well as funding for data center construction plans of its own, at home and abroad. The strategic alliance with AWS marks OpenAI’s first such arrangement with Amazon.

Nvidia Talks Up Robotics and AI Megafactory with Samsung

Nvidia has struck a series of deals with South Korea’s tech leaders that will result in the deployment of more than 250,000 of its chips across the country. Among them is Samsung Electronics, which is teaming with Nvidia to build an AI “megafactory”: an intelligent manufacturing facility that will use more than 50,000 Nvidia GPUs to produce state-of-the-art processors for mobile devices and robotics, among other applications. Nvidia also has South Korean AI factories in the works with Hyundai and manufacturing conglomerate SK Group. The announcements were made as Nvidia CEO Jensen Huang attended the Asia-Pacific Economic Cooperation (APEC) meetings.

Nvidia Unveils Quantum Accelerator, $1 Billion Nokia 6G Deal

Every scientific supercomputer powered by Nvidia GPUs will soon be a hybrid quantum computer, CEO Jensen Huang said during last week’s Nvidia GTC conference as he unveiled a quantum compute accelerator called NVQLink that connects “quantum and classical supercomputers — uniting them into a single, coherent system that marks the onset of the quantum-GPU computing era.” Accessible through Nvidia’s CUDA-Q software, NVQLink lets developers create and test applications that draw on CPUs and GPUs alongside QPUs, “helping ready the industry for the hybrid quantum-classical supercomputers of the future.” At the conference, Huang also announced a Nokia partnership that will deliver AI-native 6G using new Nvidia tech.

Oracle Cloud Orders 50,000 New AMD Instinct MI450 AI GPUs

Oracle Cloud Infrastructure (OCI) will be a launch partner for the first publicly available AI supercluster powered by AMD’s upcoming Instinct MI450 Series GPUs — with an initial order of 50,000 of the chips to be deployed starting in Q3 2026 and expanding in 2027. The resulting Oracle installations will feature Instinct MI450s configured with AMD-designed CPUs in AMD’s new Helios server rack systems, positioned to compete with Nvidia’s Vera Rubin NVL144 CPX racks when both platforms are mass-released next year. Oracle is under pressure to rapidly scale its data center capacity because of the massive compute commitments it made this year to OpenAI.

Qualcomm Debuts Chips, Explores 6G at Snapdragon Summit

Qualcomm has released two new chips within the Snapdragon X Series portfolio. The Snapdragon X2 Elite Extreme and Snapdragon X2 Elite are “the fastest and most efficient processors for Windows PCs,” according to the company. The 3nm chips consume up to 43 percent less power than the prior generation. They were unveiled at the Snapdragon Summit in Maui, where Qualcomm CEO Cristiano Amon talked about the 6G future, which he described as a “dynamic, adaptive network of intelligence” that will be contextually sensitive, feeding intelligence across an ecosystem of personal devices from phones and laptops to smart glasses and connected cars.

Nvidia Investing $100 Billion in OpenAI Data Center Build-Out

Nvidia is investing up to $100 billion in a partnership with OpenAI that will result in what Nvidia CEO Jensen Huang predicts will be “the biggest AI infrastructure deployment in history.” The project will use about 10 gigawatts’ worth of Nvidia systems — including the upcoming Vera Rubin platform — power equivalent to 4 million to 5 million GPUs. “This partnership is about building an AI infrastructure that enables AI to go from the labs into the world,” Huang said on CNBC’s “Halftime Report,” explaining that the $100 billion will be invested in stages as each gigawatt is deployed. The investment will be all-cash, with Nvidia receiving an undisclosed amount of OpenAI equity.

‘Europa’: ETC Teams Up with AWS on Cloud-First Production

Sci-fi short “Europa,” written and directed by Jacqueline Elyse Rosenthal, is the Entertainment Technology Center’s latest project to test the expanding possibilities of virtual production and remote collaboration. To call “Europa” a cloud-first production is to rethink filmmaking from the ground up. This wasn’t just a distributed team working online — it was an ecosystem where every workflow, from previs to final VFX, operated entirely in the cloud. It wasn’t a workaround; it was the foundation. And powering that foundation — every tool, every task, every decision — was AWS.

Nvidia Invests $5 Billion in Intel with Plans for AI Infrastructure

Nvidia is investing $5 billion in Intel via a common stock purchase at $23.28 per share, which translates to about a 4 percent stake. The companies plan to collaborate across multiple projects, developing custom data center and PC products to accelerate applications and workloads across the hyperscale, enterprise and consumer markets. Nvidia’s NVLink will be used to connect the architectures, integrating Nvidia’s GPUs with Intel’s CPU technologies. For data centers, Intel will customize x86 CPUs that Nvidia can integrate into its AI platforms. Intel also plans to build x86 SoCs that integrate Nvidia RTX GPU chiplets for PCs.

Microsoft Contracts with Nebius for $17.4 Billion in AI Capacity

AI infrastructure company Nebius Group NV has entered into a $17.4 billion deal to provide dedicated compute power to Microsoft from a new data center in Vineland, New Jersey. The five-year agreement could be worth up to $19.4 billion with additional capacity and services. The news sent Nebius shares surging by 49 percent on the Nasdaq composite, underscoring how rapidly growing demand for AI compute can influence the fate of companies. The deal added $1 billion to the value of the stake held by Nebius founder Arkady Volozh, a Russian expatriate who founded Yandex, that country’s equivalent of Google.

Nvidia Says Rubin CPX Inference Accelerator Coming in 2026

Nvidia has designed a new class of GPU for massive-context inference, the Rubin CPX, due in late 2026. Purpose-built to speed the million-token applications used to generate video and create software, the Rubin CPX functions as a specialty accelerator, working in concert with Nvidia Vera CPUs and Rubin GPUs packaged inside the upcoming Vera Rubin NVL144 CPX rack platform. “The Vera Rubin platform will mark another leap in the frontier of AI computing,” revolutionizing massive-context AI just as RTX did graphics and physical AI, said Nvidia CEO Jensen Huang.

OpenAI Reportedly Turning to Broadcom for Custom AI Chips

OpenAI is said to be in talks with Broadcom about developing custom AI inference chips to run its models. On an earnings call last week, Broadcom disclosed that an AI developer had placed a $10 billion order for AI server racks using its chips. That new customer was reported to be OpenAI, which has relied primarily on hotly sought-after Nvidia GPUs for model training and deployment. Broadcom specializes in XPUs — accelerator chips designed for specific uses, like inference for ChatGPT. OpenAI CEO Sam Altman has publicly complained that a shortage of chips has impeded the company’s ability to get new models and products to market.

Microsoft AI Introduces Proprietary Foundation, Voice Models

Microsoft is rolling out its first internally developed AI models. Branded Microsoft AI (MAI), the two initial releases are MAI-Voice-1, a “highly expressive and natural speech generation model,” and MAI-1-preview, a mixture-of-experts LLM designed for consumer-facing applications. The move demonstrates Microsoft’s intent to move beyond exclusive reliance on OpenAI models to power its Copilot assistant and other applications. By striking out on its own, Microsoft is also paving a smoother road for OpenAI’s transition to a for-profit entity, which OpenAI is scheduled to initiate by the end of the year.