In a strategic move underscoring its technology leadership, Broadcom is set to deploy multiple gigawatts of custom accelerators for major customers such as Meta, OpenAI, and Anthropic. The company argues these deals highlight the difficulties AI firms and hyperscalers face in designing and deploying their own silicon.
Broadcom CEO Hock Tan highlighted remarkable growth on the company’s Q1 2026 earnings call, reporting a 106% year-over-year increase in AI silicon revenue to $8.4 billion for the quarter. He anticipates even greater demand from Google as it rolls out new TPU generations.
OpenAI plans to deploy over one gigawatt of compute capacity with custom XPUs in 2027, while Meta is targeting multiple gigawatts of Broadcom’s XPU accelerators from 2027 onward.
Broadcom says it has preemptively secured the resources needed to meet this demand, including high-bandwidth memory, and plans to maintain that supply-chain stability through at least 2028.
Reflecting on industry challenges, Tan pointed to the significant hurdles these AI companies face, from staffing skilled design teams to producing chips at scale. He expressed skepticism that any competitor will match Broadcom’s capabilities for many years.
The company’s momentum in AI networking is equally notable, with revenue up 60% year-over-year. The upcoming seventh-generation Tomahawk switching chip promises double the performance of its predecessor, reinforcing Broadcom’s position at the cutting edge.
In software, VMware’s growth drove a modest gain in Broadcom’s infrastructure software segment, offsetting lackluster performance from the CA and Symantec product lines. Tan touted VMware’s Cloud Foundation suite as a pivotal infrastructure layer for enterprise AI deployments.
Looking ahead, Broadcom forecasts substantial revenue growth for Q2, and investor confidence was reflected in a surge in its share price.