OpenAI & Broadcom Developing Custom AI Accelerator Chips

OpenAI has expanded its alliance with Broadcom, announcing a plan to create enough custom AI accelerator chips to consume 10 gigawatts of power. News of the custom chip collaboration leaked out last month. Now that it is ready to go public, OpenAI says designing its own chips and systems will allow the startup to embed what it has learned from developing frontier models directly into the hardware. The racks, scaled entirely with Ethernet and other connectivity solutions from Broadcom, will be deployed across OpenAI’s facilities and partner data centers beginning in the second half of 2026.

Although the companies did not disclose financial terms, The Wall Street Journal reports that the AI startup’s deal with the San Jose, California-based chip company “will be worth multiple billions of dollars” as it plays out over the next four years.

The agreement is aimed at meeting OpenAI’s enormous computing needs, as it rushes to build data centers in the U.S. and around the world. In the U.S., “OpenAI is already building a facility in Abilene, Texas, and plans additional data centers in other parts of Texas, New Mexico, Ohio and in the Midwest,” writes The New York Times.

Abroad, the San Francisco-based AI company already has plans underway to create AI processing centers in Norway and Abu Dhabi, and is known to be exploring build-outs in South Korea, India and Argentina.

The computing power needed to keep those facilities humming is vast, and OpenAI has already committed to deploying “enough Nvidia and AMD chips to consume 16 gigawatts of power,” says NYT.

“Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI’s potential and deliver real benefits for people and businesses,” OpenAI co-founder and CEO Sam Altman explains in a news post.

Broadcom Semiconductor Solutions President Charlie Kawwas said the working relationship with OpenAI — which started about 18 months ago as the company sought to reduce dependence on Nvidia — will “set new industry benchmarks for the design and deployment of open, scalable and power-efficient AI clusters.”

Kawwas emphasized that “custom accelerators combine remarkably well with standards-based Ethernet scale-up and scale-out networking solutions,” an area in which Broadcom has extensive experience, helping the cash-strapped OpenAI optimize its price/performance spending. In September, word began circulating that the companies were collaborating on custom AI chips for model training and inference.

“OpenAI is among many tech companies that are spending hundreds of billions of dollars on the construction of new data centers for artificial intelligence,” writes NYT, tallying a $325 billion collective spending through the end of 2025 among companies that in addition to OpenAI include Amazon, Google, Meta and Microsoft.

Related:
Oracle Cloud to Deploy 50,000 AMD AI Chips, Signaling New Nvidia Competition, CNBC, 10/14/25
Broadcom’s Chip President Says Mystery $10 Billion Customer Isn’t OpenAI, CNBC, 10/13/25
Why Broadcom’s Bet on OpenAI Is a Big Risk, The Wall Street Journal, 10/14/25
