Qualcomm has made a strategic push into the AI datacenter market, unveiling new accelerators and rack-scale systems built for demanding inference workloads.
The company revealed its AI200 and AI250 accelerator cards, with the AI200 offering an impressive 768 GB of LPDDR memory per card. The AI250 is positioned as a leap forward in energy efficiency and performance, built on an advanced ‘near-memory computing’ architecture.
The cards ship in pre-configured racks featuring direct liquid cooling and Ethernet-based scale-out, with rack-level power consumption of 160 kW.
Qualcomm CEO Cristiano Amon hinted earlier this year at a unique entry into the AI space, focusing on high performance and low power usage—a promise these new products aim to fulfill.
Notably, the launch includes no CPUs, instead emphasizing Qualcomm’s leadership in NPU technology, likely a reference to the Hexagon units found in modern Snapdragon SoCs.
The accelerators aim to cut energy costs significantly and reduce cooling requirements, making them attractive to AI operators looking to run heavy workloads efficiently.
The spotlight also turns to Qualcomm’s collaboration with Saudi Arabia’s Humain, which plans to deploy these solutions at scale starting in 2026. However, full availability of the AI250 isn’t expected until 2027, leaving its ultimate impact uncertain until more details emerge.
Even so, the move marks Qualcomm’s notable return to the data center, and the market responded favorably, as reflected in a substantial jump in its share price.