The Next AI Gold Rush Is Inside the Data Center

  • AI infrastructure bottlenecks drive new investment opportunities. Each constraint in the AI buildout – from GPUs to cooling to power – creates a new group of winning stocks.
  • Data center networking may be the next AI bottleneck. As GPU clusters scale to hundreds of thousands of processors, interconnect speed becomes critical.
  • The networking layer of AI infrastructure is evolving. Copper dominates short-distance connections today, while optical interconnects may lead long-term.

The AI bull market follows a clear pattern once you know where to look. 

The massive buildout of AI infrastructure isn’t a single race – it’s a rolling gold rush driven by a sequence of AI infrastructure bottlenecks.

Each time hyperscalers run into a constraint – GPUs, servers, cooling, power, memory – the market floods capital toward the companies that solve it. Those companies become the next wave of winners.

The winning strategy isn’t buying what just worked. It’s identifying which constraint hyperscalers will throw hundreds of billions of dollars at next.

The good news is that the pattern is remarkably consistent. The better news? We can see the next bottleneck forming right now.

It sits deep inside the data center itself: the network plumbing that moves data between GPUs.

And if you catch this cycle early, the upside can be enormous.

To see how the AI infrastructure bottleneck cycle works, look at the previous phases of the AI buildout.

The Previous AI Infrastructure Bottlenecks

We’ve already seen how this cycle plays out when a new AI bottleneck emerges.

  1. Compute. The first bottleneck was the most obvious one: you cannot train a large language model without enormous quantities of GPUs. Nvidia (NVDA) had the dominant training GPU. Its revenue went from $27 billion in FY2023 to $130 billion in FY2025. The stock rose roughly 800% in two years. The lesson was not subtle.
  2. Servers. Nvidia was selling chips as fast as it could make them, but someone still had to assemble the systems that housed them. The GPU server build-out created a secondary wave. Super Micro Computer (SMCI) and Dell (DELL) rocketed as hyperscalers raced to deploy. At one point, Super Micro was the fastest-growing company in the S&P 500.
  3. Cooling. You cannot pack that many GPUs into a data center without dealing with the thermal consequences. Conventional air cooling hit a wall. Liquid cooling became non-negotiable. Vertiv (VRT) became Wall Street’s favorite infrastructure play seemingly overnight, going from a quiet power management company to a consensus AI trade.
  4. Energy. Data centers started drawing so much power that utilities couldn’t keep up. Suddenly, nuclear power plants were not boring regulated assets – they were scarce AI infrastructure. Constellation Energy (CEG) and small modular reactor plays like Oklo (OKLO) caught enormous bids as investors woke up to the reality that all this compute needed electrons, and those electrons had to come from somewhere reliable and carbon-friendly enough to survive ESG scrutiny.
  5. Memory. AI inference requires massive amounts of fast memory bandwidth. The bottleneck rotated to high-bandwidth memory (HBM) and high-performance storage needed to serve AI workloads at scale. Micron (MU) and the newly independent SanDisk (SNDK) became plays on the memory buildout. The storage and memory layer got its moment in the sun.

Each of these waves followed the same arc: obscurity, recognition, euphoria, rotation. In every case, hyperscalers had identified a specific constraint that prevented them from deploying capital productively – and the market rewarded whoever solved it.

That pattern is repeating again right now. And the next bottleneck is already visible.

The Next AI Bottleneck: Data Center Networking

As AI clusters grow from thousands of GPUs to hundreds of thousands of GPUs – and as the architectural ambition shifts from training giant monolithic models to running distributed inference across sprawling, always-on infrastructure – the internal plumbing of the data center has become the binding constraint.

We are talking about interconnects: the cables, transceivers, switches, and signal-processing chips that move data between GPUs, servers, racks, and buildings. 

GPUs are only as powerful as the data pipeline feeding them. If information can’t move fast enough between chips, racks, and clusters, even the most advanced processors spend time sitting idle. In a world where a single GPU can cost tens of thousands of dollars, idle time becomes extremely expensive.
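To make the idle-time point concrete, here is a back-of-envelope sketch. Every number in it – GPU price, useful life, cluster size, idle fraction – is an illustrative assumption, not a figure from this article.

```python
# Rough sketch of what network-induced GPU idle time costs at cluster scale.
# All inputs are illustrative assumptions, not reported figures.

def idle_cost_per_year(gpu_price, lifespan_years, num_gpus, idle_fraction):
    """Capital effectively wasted per year while GPUs sit idle waiting on data."""
    annual_capex = (gpu_price / lifespan_years) * num_gpus
    return annual_capex * idle_fraction

# Assumed: $30,000 per GPU, 4-year useful life, 100,000-GPU cluster,
# and 10% of GPU-hours lost to interconnect stalls.
cost = idle_cost_per_year(30_000, 4, 100_000, 0.10)
print(f"Wasted capex: ${cost / 1e6:.0f} million per year")  # → $75 million per year
```

Under those assumptions, a 10% idle rate burns $75 million of amortized hardware spend per year on a single cluster – which is why hyperscalers treat interconnect speed as a first-order problem.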

The hyperscalers understand this. Broadcom (AVGO) CEO Hock Tan made it explicit in the company’s most recent earnings call, distinguishing between scale-up networking (connecting GPUs tightly within a cluster) and scale-out networking (connecting clusters to each other across a data center). This is not semantic hairsplitting. It is the architectural distinction that determines who wins the next leg of the AI infrastructure trade.

Copper vs. Optical Interconnects

The central tension in the interconnect space is a technology debate: copper or optical fiber?

Direct Attach Copper (DAC) cables are the incumbent for short-distance, in-rack connections. They are passive – no active electronics, no lasers, no photodetectors. They are cheap, low-latency, and power-efficient. But they have one glaring limitation: copper signal integrity degrades rapidly with distance at high data rates. At today’s cutting-edge 800G speeds, usable DAC cable lengths have shrunk to roughly three meters. As data rates climb toward 1.6T, copper’s reach gets even shorter.

Optical transceivers, on the other hand, convert electrical signals to light pulses, transmit them over fiber, and convert them back. Distance is no longer a constraint. But the downsides are real: active components consume 5 to 15 watts per port, add latency at the conversion step, and cost materially more than copper. For connecting clusters across a data center, though, there are very few practical alternatives today.

Active Electrical Cables (AEC) – copper cables with embedded signal-processing chips – represent the emerging middle ground, extending copper’s usable range to 7 to 10 meters while consuming roughly 25% to 50% less power than optical alternatives. They are copper’s last stand before the physics wall, and they are genuinely good technology for the near term.
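To see why these per-port figures matter at data center scale, here is a rough comparison using the power ranges cited above. The link count and the use of range midpoints are my assumptions for illustration.

```python
# Cluster-wide power draw by interconnect type, using the per-port figures
# cited in the text. Link count and midpoints are illustrative assumptions.

OPTICAL_W_PER_PORT = 10.0  # midpoint of the cited 5-15 W range
AEC_W_PER_PORT = OPTICAL_W_PER_PORT * 0.625  # midpoint of "25% to 50% less"
DAC_W_PER_PORT = 0.0  # passive copper: no active electronics to power

links = 100_000  # assumed short-reach link count in a very large cluster

for name, watts in [("DAC", DAC_W_PER_PORT),
                    ("AEC", AEC_W_PER_PORT),
                    ("Optical", OPTICAL_W_PER_PORT)]:
    megawatts = links * watts / 1e6
    print(f"{name:8s} {megawatts:.3f} MW")
```

The gap is the whole story: at this scale, an all-optical short-reach fabric draws a megawatt that passive copper simply does not, which is why copper holds the in-rack layer for now.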

Tan weighed in on this debate again last week on the company’s quarterly call. His argument was simple: push copper as deep into the architecture as physics allows, because on every dimension that matters in scale-up – latency, power, cost – copper wins.

The takeaway from his comments is that copper dominates most scale-up connections today, while optics handles the longer-distance links. He may be right. But note that Broadcom’s entire custom AI silicon architecture is optimized for copper-dominant topologies – so it is hardly surprising that Broadcom sees advantages in copper.

The more rigorous version of the argument is that the copper vs. optics debate is not binary. It is temporal. 

For investors, that means this is a sequence, not a single trade.

Copper wins in scale-up today. Optics wins in scale-out today. Once co-packaged optics (CPO) technology matures – which integrates photonics directly onto the chip package and eliminates the power and latency penalties of optical conversion – optics will likely win both eventually. 

The industry consensus puts CPO at commercial scale somewhere in the 2027-29 window. Nvidia just made a $4 billion bet – split between Lumentum (LITE) and Coherent (COHR) – that this timeline is real and that it intends to control the supply chain when it arrives.

In other words… copper wins the battle today, as Tan suggested… but optics will likely win the war in the long run. And that means the intelligent trade right now is to own both, then selectively rotate from copper into optics over time.

The Near-Term Winners: Copper Interconnect Stocks

These are the names that benefit from Hock Tan’s world – the copper-dominant scale-up architecture that defines today’s hyperscaler buildout.

  • Credo Technology (CRDO) – the closest thing to a pure-play copper interconnect stock in the public market. Credo makes AEC silicon, and its technology claims 75% less rack space than DAC cables and significantly lower power consumption than comparable optical links. Revenue growth has been explosive (including a 67% quarter-over-quarter guidance raise in late 2024), and its active electrical cables are increasingly deployed inside hyperscaler AI clusters – including deployments tied to Amazon, Microsoft, and xAI – connecting high-density GPU servers inside large training racks. High beta, high conviction, appropriate for investors who want maximum leverage to the near-term copper buildout.
  • Marvell Technology (MRVL) – a more diversified play but deeply strategic. Marvell is the only company that ships ACC, AEC, and AOC silicon across the full connectivity spectrum. It benefits regardless of where the copper-to-optics boundary ultimately settles, which makes it a useful hedge. It is also a major custom AI ASIC supplier – its silicon powers Google TPUs and Amazon Trainium – giving it multiple vectors into the AI infrastructure trade beyond interconnects alone.
  • Broadcom – Hock Tan’s company is the dominant Ethernet switching ASIC supplier for AI clusters. When he says copper is optimal in scale-up, he is describing the topology that his own switching chips sit at the center of. Broadcom is not a pure interconnect play – it is the largest custom AI silicon company in the world. But the interconnect thesis is directly supportive of its networking franchise.
  • Amphenol (APH) – the physical connector and cable layer. Less exciting than the chip plays, but a reliable compounder that touches every data center buildout regardless of which medium wins the technology debate. If you are building interconnect exposure and want something that will not keep you up at night, Amphenol is the institutional-quality version of this trade.

The Long-Term Winners: Optical Networking

These are the names that own the future – the scale-out work that is happening right now, plus the CPO transition that arrives over the next three to five years.

  • Lumentum – one of the first companies shipping 200G-per-lane EML lasers at volume, the critical components inside next-generation 1.6T optical transceivers. In early March 2026, Nvidia invested $2 billion in Lumentum with multi-year procurement commitments and capacity rights. Jensen Huang called Lumentum his partner for “the next generation of gigawatt-scale AI factories.” That is a supply chain lockup. The stock was up roughly 250% over the prior year, and the valuation, while elevated, reflects a genuine structural position in a constrained market.
  • Coherent – Lumentum’s primary competitor and, by some measures, the bigger optical business. Coherent received the other $2 billion from Nvidia in the same announcement. The investment thesis is slightly different: Coherent is the industrial-scale, multi-site manufacturing powerhouse with a broader product portfolio. It had been undervalued for most of 2025 due to investor perception of it as a legacy company – a perception that had grown increasingly disconnected from reality as its data center optics revenue scaled. For investors who want a slightly more conservative entry into the optics thematic, Coherent’s risk-adjusted profile is compelling.
  • Fabrinet (FN) – the contract manufacturer behind much of the optical transceiver industry, assembling and testing the transceivers designed by Lumentum, Coherent, and others. It is breaking ground on a new facility representing a 50% capacity expansion. Less upside than the component makers, but more durable – it benefits regardless of which optical supplier wins the technology race.
  • Applied Optoelectronics (AAOI) – the speculative small-cap option. High operating leverage to the AI optical buildout, meaningful volatility, and a stock that has historically moved sharply in both directions on any demand signal. Not for everyone – but for investors with risk tolerance who want maximum torque to the optics cycle, AAOI offers the highest leverage in the group.

The Crossover Play

  • Nvidia – obviously, NVDA already won the compute wave. But it’s also quietly positioning itself to win the optical wave, too. Its $4 billion combined investment in Lumentum and Coherent, CPO switch announcements at GTC 2025, and aggressive pre-allocation of EML laser supply are the actions of a company that does not intend to be dependent on an optical supply chain it does not control. Nvidia is not just a beneficiary of the interconnect buildout. It is trying to own it.

The Two-Phase AI Networking Investment Cycle

The copper-versus-optics debate becomes much clearer when you introduce one critical variable: time.

This is a two-phase trade.

Phase 1: Now through approximately 2027. The copper plays have near-term earnings momentum with less valuation risk. The architecture Hock Tan described – copper-dominant scale-up, optical scale-out – is the one being deployed right now in every major hyperscaler buildout. Credo and Marvell have the strongest revenue tailwinds in this phase. APH, too. Buy them. Hold them. Do not overthink it.

Phase 2: 2027 through 2030 and beyond. CPO commercialization, increasing cluster scale, and the workload mix shifting toward inference will erode copper’s scale-up advantage. Optical interconnect revenue is projected to grow from roughly $16 billion in 2024 to somewhere between $34 billion and $41 billion by 2030. Silicon photonics alone could reach $12 billion to $16 billion by 2032. The names with the longest runways in this phase are Lumentum, Coherent, and Fabrinet – the companies Nvidia has already decided are critical infrastructure.
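For context, the market-size projection above implies a healthy but not hypergrowth rate. Here is the implied compound annual growth rate, computed from the figures cited in the text with the standard CAGR formula.

```python
# Implied growth rate behind the cited optical interconnect projection:
# ~$16B in 2024 growing to $34B-$41B by 2030 (figures from the text).

def cagr(start, end, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

base_2024 = 16e9          # cited 2024 revenue
low_2030, high_2030 = 34e9, 41e9  # cited 2030 projection range
years = 6

print(f"Implied CAGR: {cagr(base_2024, low_2030, years):.1%} "
      f"to {cagr(base_2024, high_2030, years):.1%}")
```

The cited range works out to roughly 13% to 17% annual growth – the kind of multi-year tailwind that supports the Phase 2 names rather than a one-quarter spike.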

The transition name that threads both phases without needing you to call the timing precisely is Marvell. It sells into the copper world today and has AOC silicon for the optical transition. It also has a custom ASIC business that is structurally tied to AI compute spending regardless of which interconnect medium wins. If you only want one name in the interconnect space and you have no tolerance for timing risk, Marvell is the answer.

The Bottom Line: The Next Phase of the AI Infrastructure Boom

The AI bull market has never been about one trade. It has been about a chain of trades – each one funding the next, each one solving a specific engineering constraint that was preventing the hyperscalers from deploying their next hundred billion dollars. 

Compute. Servers. Cooling. Energy. Memory. And now, interconnects.

The pattern is unmistakable. The hyperscalers have identified the bottleneck. And they are deploying enormous capital to solve it. 

The supply chain is constrained. The technology debate – copper versus optics – has a near-term winner and a long-term winner, and we know who they both are.

Get positioned before the rest of the market figures out that the plumbing inside the data center is about to have its Nvidia moment…

Because history shows you do not want to wait for the consensus.

The real lesson of the AI boom is simple: the biggest gains go to investors who position themselves before the market fully understands where the next phase is headed.

That principle applies not only to infrastructure – but to the companies building the AI itself.

One of the most important players in this revolution, OpenAI, is widely expected to pursue a public listing in the coming years – potentially one of the largest tech IPOs ever.

But investors who wait until the day the stock begins trading may already be late.

I recently recorded a briefing explaining a little-known way investors may be able to position themselves before the IPO headlines arrive.

Watch it right here.


Article printed from InvestorPlace Media, https://investorplace.com/hypergrowthinvesting/2026/03/the-next-ai-gold-rush-is-inside-the-data-center/.

©2026 InvestorPlace Media, LLC