Google Unveils New AI Chips, Announces Deal with Anthropic
November 12, 2025
Google Cloud is rolling out its seventh-generation Tensor Processing Unit (TPU), Ironwood, along with new Arm-based computing options, aiming to meet exploding demand for AI model deployment in what the Alphabet unit describes as a business shift from training models to serving end users. “Constantly shifting model architectures, the rise of agentic workflows, plus near-exponential growth in demand for compute, define this new age of inference,” explains Google Cloud.

The company said that Anthropic — known for its Claude family of large language models — “plans to access up to 1 million” of the new TPUs. The deal is reportedly “worth billions.”
VentureBeat describes the deal with Anthropic as “among the largest known AI infrastructure deals to date,” ascribing to it a value of “tens of billions of dollars.”

In a blog post, Google executives describe the “age of inference” as “a transition point where companies shift resources from training frontier AI models to deploying them in production applications serving millions or billions of requests daily.”
Today’s frontier models, including Google’s own Gemini, Veo, and Imagen and Anthropic’s Claude, “train and serve on Tensor Processing Units,” according to the Google post.
Ironwood, which will be generally available in the coming weeks, is “purpose-built for the most demanding workloads: from large-scale model training and complex reinforcement learning (RL) to high-volume, low-latency AI inference and model serving,” notes Google, claiming it offers a “10X peak performance improvement over TPU v5p and more than 4X better performance per chip for both training and inference workloads compared to TPU v6e (Trillium), making Ironwood our most powerful and energy-efficient custom silicon to date.”
Google also revealed “the expansion of its Axion offerings with two new services in preview: N4A, its second-generation Axion virtual machines, and C4A metal, the company’s first Arm Ltd.-based bare-metal instances,” writes SiliconANGLE, explaining that “Axion is the company’s custom Arm-based central processing unit, designed to provide energy-efficient performance for general-purpose workloads.”
CNBC reports that “Google’s decade-long bet on custom chips is turning into [the] company’s secret weapon in [the] AI race,” adding that “other cloud companies are taking a similar approach” with bespoke silicon, “but are well behind in their efforts.” Among them are Amazon and Microsoft, according to CNBC.
Bernstein Research Semiconductor Analyst Stacy Rasgon tells CNBC that Google is “the furthest along among the other hyperscalers,” explaining that of the players in the Application-Specific Integrated Circuit (ASIC) space, “Google’s the only one that’s really deployed this stuff in huge volumes.”