Broadcom (AVGO) just gave Wall Street a glimpse of the future of AI – and it doesn’t belong to Nvidia (NVDA) alone.
In its latest earnings report, the company stunned investors with a $10-billion bombshell: a secret hyperscale customer is ditching off-the-shelf GPUs and ordering custom-built AI chips (XPUs) instead.
That single disclosure marks the start of a tectonic shift in AI computing – away from Nvidia’s GPUs and into a new class of purpose-built accelerators.
We think this is the moment the AI boom enters its next act.
Here’s why…
From GPUs to XPUs: Broadcom Signals a New AI Era
For the last two years, Nvidia has dominated headlines (and stock charts) with its GPUs – the workhorse chips that train and run large AI models.
Thus far, they have been the fuel to the AI fire, accounting for roughly 90% to 95% of the accelerator market by revenue. And Nvidia, leading the charge, has pulled in staggering amounts of revenue and profit as a result.
Since the AI Boom began in late 2022, the company’s full-year revenue has increased dramatically – from $26.97 billion in fiscal year (FY) 2023 to $130.5 billion in FY2025 (+384%), with its net income rising from $4.37 billion to $72.88 billion over that span (+1,568%)…
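For readers who want to check the math, the growth figures fall out of a simple percentage-increase calculation. The sketch below is purely illustrative, using the revenue and net-income numbers cited above:

```python
# Quick sanity check of the Nvidia growth figures cited above.
# All values in $ billions, taken from the fiscal-year numbers in the text.
revenue_fy2023 = 26.97
revenue_fy2025 = 130.5
income_fy2023 = 4.37
income_fy2025 = 72.88

def pct_growth(start: float, end: float) -> float:
    """Percentage increase from start to end."""
    return (end - start) / start * 100

print(f"Revenue growth: +{pct_growth(revenue_fy2023, revenue_fy2025):.0f}%")
print(f"Net income growth: +{pct_growth(income_fy2023, income_fy2025):.0f}%")
```

Revenue roughly quintupled (a +384% increase), while net income grew more than sixteenfold – which is why Nvidia’s profit growth so dramatically outpaced its top line.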
But make no mistake. That astronomical growth doesn’t mean that the future of AI compute rests solely on Nvidia’s shoulders. As AI models swell to trillions of parameters and tackle ever more specialized tasks, the blunt force of a general-purpose GPU won’t cut it. The demand now is for chips as unique as the workloads themselves – and that’s why XPUs are set to take center stage.
Unlike general-purpose GPUs, XPUs are custom chips tailored to the unique data and workload of an AI model. In this sense, the ‘X’ is a variable that represents the type of architecture best suited for any given application.
For example, a model designed to generate high-quality video, like Google’s Veo 3, requires a state-of-the-art Graphics Processing Unit (GPU). Devices like Apple’s (AAPL) iPhone – with its Siri voice assistant, facial recognition, and predictive text – rely on Neural Processing Units (NPUs) to handle complex algorithms and respond quickly.
These custom-designed accelerators are built from the ground up to adeptly execute specific workloads, be it training, inference, or recommendation. As such, they allow for better performance per watt, lower cost per compute unit, and tighter ecosystem lock-in.
And what Broadcom revealed alongside its latest earnings is proof that the biggest players in tech are no longer dabbling in the shift toward XPUs. They’re now betting the farm on it.
From Alphabet’s (GOOGL) TPUs to Amazon’s (AMZN) Trainium and Microsoft’s (MSFT) Maia, the world’s largest platforms are betting billions that XPUs will define the next decade of computing.
What was once an experiment is becoming a full-scale arms race.
Why XPUs Are Set to Outshine in the AI Market
So, how big is this transition? Let’s put numbers to it.
- Today, GPUs make up ~80% to 90% of AI chip revenue. XPUs are a ~10% to 20% slice.
- By 2030, XPUs could command closer to 25% to 30% of the market by revenue. And since inference chips are cheaper but sold in higher quantities, they could also make up a much larger share of unit volume.
- With AI accelerators’ total addressable market (TAM) projected to reach $300 billion to $450 billion by the early 2030s, that translates into $75 billion to $100 billion-plus a year shifting from GPUs to XPUs.
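The bullet-point figures above can be reproduced with quick back-of-the-envelope arithmetic. The TAM range and share assumptions below come straight from those projections (they are estimates, not new data):

```python
# Back-of-the-envelope math for the GPU-to-XPU revenue shift described above.
# TAM range and XPU share are the projections cited in the text ($ billions).
tam_low, tam_high = 300, 450                 # AI accelerator TAM, early 2030s
xpu_share_low, xpu_share_high = 0.25, 0.30   # projected XPU revenue share

shift_low = tam_low * xpu_share_low      # conservative case: low TAM, low share
shift_high = tam_high * xpu_share_high   # aggressive case: high TAM, high share

print(f"Implied annual XPU revenue: ${shift_low:.0f}B to ${shift_high:.0f}B")
```

Even the conservative case lands at $75 billion a year, and the aggressive case pushes well past $100 billion – hence the “$75- to $100 billion-plus” range.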
That’s not cannibalization so much as diversification.
GPUs will remain the backbone for training frontier models. But inference at hyperscale? Recommendation engines? Domain-specific workloads? That’s where XPUs shine. And Big Tech wants them badly.
Broadcom is reportedly already working with OpenAI, Google, and Meta on their own custom AI chips. And rumors suggest that the mysterious new customer that just ordered $10 billion worth of custom chips is none other than tech titan Apple.
Clearly, Big Tech is going all-in on XPUs.
Why? Because Nvidia has leverage on price, supply is constrained, and every hyperscaler is burning tens of billions a year on AI capex.
If you can build your own chip and significantly save on cost – while getting better performance-per-watt – you take that deal every time.
Who Wins as XPUs Rise?
Importantly, this isn’t just a Broadcom story (though the company is the north star here). The shift from GPUs to XPUs creates a wide circle of winners throughout the semiconductor supply chain:
‘The Other Nvidia’ and a Smaller Pure-Play
Broadcom is the undisputed leader in custom silicon for hyperscalers, already powering Google’s TPUs, Meta’s MTIA, and OpenAI’s in-house project – and now a mystery $10B customer. Add in Ethernet networking dominance with Tomahawk and Jericho, and Broadcom is becoming the ‘other Nvidia’ in the datacenter AI stack.
Marvell (MRVL) – the ‘junior’ Broadcom – provides custom application-specific integrated circuits (ASICs) and merchant silicon for hyperscalers, plus strong positioning in AI networking and optics.
Electronic Design Automation Kings
Every new XPU has to be designed with electronic design automation (EDA) tools. Whether GPU or XPU, Cadence (CDNS) and Synopsys (SNPS) sell the picks and shovels for every new AI chip project.
Leaders of Foundries & Packaging
Custom silicon doesn’t grow on trees. Taiwan Semiconductor (TSM) is the foundry of choice for every hyperscaler chip. Advanced packaging (2.5D/3D stacking, Chip-on-Wafer-on-Substrate (CoWoS) alternatives) is mission-critical for XPUs – which spells growth for Amkor (AMKR) and ASE (ASX).
Networking & Optical Champions
AI compute is useless without networking. Arista (ANET) dominates Ethernet switching, which is winning ground against InfiniBand in AI clusters. Coherent (COHR), Lumentum (LITE), and Fabrinet (FN) supply the optical engines that tie GPUs and XPUs together at blazing speeds.
Equipment Makers
The more diverse the chip landscape, the more complex the wafer starts. Every new, more compact design means more EUV lithography, etch, deposition, and inspection. ASML (ASML), Applied Materials (AMAT), Lam Research (LRCX), and KLA (KLAC) are the silent winners within this niche.
Dominant Hyperscalers
While not direct semiconductor plays, the economic leverage for titans like GOOGL, META, MSFT, AMZN, and AAPL is enormous. Custom XPUs mean lower AI costs, higher margins, and deeper ecosystem lock-in. That’s why Wall Street cheers every new chip rumor out of these companies.
Bottom Line: Broadcom Is Becoming the XPU Kingmaker
Broadcom’s quarterly earnings presentation was a fireworks show.
AI revenue is up 63%, with strong fourth-quarter guidance pointing to continued outperformance ahead.
And the ultimate kicker? A $10 billion custom AI chip deal, poised to cement the company as the kingmaker of the XPU era.
This is the future. GPUs aren’t going away – but we believe the days of the AI market being a one-horse race are numbered.
With hyperscalers leaning into custom-built chips, tens of billions of dollars are about to shift into new hands across the semiconductor value chain.
The smart money will follow that flow. And Broadcom may have just handed us the clearest roadmap to profits yet.
As it happens, XPUs will be vital to a particular corner of the market where we see outsized potential over the coming years… Humanoid robotics.
Think about bots like Tesla’s (TSLA) Optimus. To walk, grasp, and respond to human speech in real time, they need split-second inference at ultra-low power. GPUs are too bulky, too hot, and too costly. Only custom XPUs can deliver the efficiency and precision that make humanoids viable at scale.
That’s why we believe the rise of XPUs is inseparable from the rise of robotics. And as Big Tech tackles the need for custom chips, it’ll throw the Robotic Revolution into overdrive.
One overlooked supplier, already building the critical components Optimus depends on, could be the biggest beneficiary of that shift. And investors who understand this link today could be positioned for outsized gains as the robot economy moves from prototype to trillion-dollar reality.
Learn the name of this little-known supplier before Wall Street catches on.