Qualcomm Articulates Its Expansion into AI Data Center Chips
October 29, 2025
Qualcomm, which has established itself as a leading supplier of AI chips for edge devices with its Snapdragon line, is now making a major push into the data center space to challenge industry leaders such as Nvidia and AMD. The AI200 and AI250 accelerator chips are aimed at rack-scale inference systems and are the debut entries in what Qualcomm describes as a multi-generation roadmap of AI inference equipment to be updated annually. At Monday’s market close, Qualcomm stock was up 11 percent on the news as investors saw promise in the San Diego-based firm’s expansion beyond its core mobile market.
“Qualcomm is seeking to transpose decades of experience in efficient neural processing and system integration from mobile devices into the data-center inference market,” writes EBC Financial Group, a London-based brokerage with ties to the University of Oxford’s Department of Economics.

“Analysts note that Qualcomm is emphasizing inference — a segment where efficiency and cost per inference can be decisive for many cloud and edge customers — rather than training,” EBC continues.
The market “reaction underscored the outsize investor interest in AI chips, driven by a multiyear sales boom that has pushed the market value of the dominant supplier, Nvidia, to more than $4.5 trillion,” writes The New York Times, adding that shares of AMD “have more than doubled this year.”
Qualcomm’s new rack-scale accelerator cards are designed for cost-effective and energy-efficient server use, Qualcomm says in an announcement. The AI200 is expected to be made commercially available in 2026, followed in 2027 by the next-generation AI250, which Qualcomm says will feature an innovative near-memory computing architecture delivering 10x higher effective memory bandwidth than its predecessor.
“Qualcomm didn’t disclose how much power the accelerators will use or what processing speeds they are expected to provide,” notes SiliconANGLE, specifying that the company did say “they’re based on its Hexagon architecture” that “underpins the neural processing units Qualcomm ships with its consumer systems-on-chip.”
Hexagon-based NPUs are manufactured using a three-nanometer process and deployed in the Snapdragon 8 Elite Gen 5 smartphone processor, among other products. The Hexagon NPUs have also found their way into connected device hardware powered by Qualcomm, such as routers. “Its NPU includes 20 cores based on three different designs that can process up to 220 tokens per second,” SiliconANGLE reports.
Qualcomm simultaneously announced that Humain, a Saudi AI company backed by the country’s state Public Investment Fund (PIF) and described by NYT as “an AI champion for the country,” has committed to using the U.S. firm’s new rack cards for a planned AI inference service.