More On Networking, Its Key Role In The AI Revolution and 3 Must-Own Stocks

September 6, 2025

In the age of digitization, data centers play a pivotal role in facilitating the seamless flow of information that drives our interconnected world. From storing vast amounts of critical data to supporting mission-critical applications, data centers are the backbone of modern businesses. However, the importance of data centers extends beyond their physical infrastructure – data center connectivity is the lifeline that ensures data availability, accessibility, and security. In this article, we delve into the crucial role of data center connectivity and why it’s a vital consideration for any organization.

The Essence of Data Center Connectivity

Data center connectivity refers to the intricate network of communication channels that link data centers to the broader digital landscape. It encompasses both internal connections within a data center and external connections to other data centers, cloud services, and the wider internet. This web of connectivity ensures that data can be transmitted, accessed, and processed effectively, supporting a range of business operations and services.

Ensuring Data Availability

Data is the lifeblood of modern businesses. From customer information to financial records, organizations rely on data to make informed decisions and drive innovation. Data center connectivity ensures that this data is available at all times, minimizing downtime and maximizing operational efficiency. Redundant connections and failover mechanisms guarantee that even in the event of a network outage, data remains accessible, preventing disruptions and potential revenue losses.

Facilitating Scalability

In today’s dynamic business landscape, scalability is essential. As your business grows, so do your data storage and processing needs. Data center connectivity, often delivered over dedicated internet access, enables seamless scalability by providing the ability to add more servers, storage resources, and networking capacity without disrupting ongoing operations. This ensures that your organization can adapt to changing demands and opportunities without major infrastructure overhauls.

Empowering Disaster Recovery

Disasters can strike at any time, whether they’re natural events or cyberattacks. Data center connectivity plays a crucial role in disaster recovery strategies. By establishing connections to remote data centers or cloud services, organizations can replicate and back up their critical data. This redundancy ensures that even in the face of a catastrophe, data can be quickly restored, minimizing downtime and data loss.

Supporting Cloud Services

Cloud computing has revolutionized the way businesses operate, offering scalable resources and services on-demand. Data center connectivity is what makes cloud services possible. Public, private, and hybrid clouds all rely on robust connectivity to transmit data between your organization’s infrastructure and the cloud provider’s servers. This allows for seamless integration, enabling your business to leverage the benefits of cloud technology.

Enhancing Data Security

In a digital landscape marked by increasing cyber threats, data security is paramount. Effective data center connectivity includes robust security measures to safeguard sensitive information. Encryption, firewalls, intrusion detection systems, and regular security audits are essential components of a secure data center network. By ensuring data in transit is protected, data center connectivity contributes to maintaining the integrity and confidentiality of your information.

Data center connectivity is not just a technical detail – it’s a fundamental aspect of modern business operations. From ensuring data availability and scalability to empowering disaster recovery and supporting cloud services, the importance of data center connectivity cannot be overstated. In a world driven by data, the ability to securely and efficiently transmit, access, and process information is what allows businesses to remain competitive, innovative, and resilient. As organizations continue to embrace digital transformation, investing in robust data center connectivity is a strategic imperative that underpins their success in the digital age.

Nexthop, Michael Lim, 31 August 2023

This is a couple of years old now, but it highlights the huge importance of networking to data centres. There are three key networking stocks in the Top 20 portfolio (four if you include Nvidia, which has networking operations of its own): Broadcom, Arista Networks and Credo Technology. Broadcom and Credo have both just reported figures that blew past forecasts. Arista Networks is due to report on 3 November, and it would be no surprise if its figures also beat expectations.

An obvious strategy is to start buying or continue adding to holdings in these three companies. I believe the focus on these three companies and the explosive growth of spending on networking amounts to a ‘something new’ in their development. It has echoes of the impact of Nvidia’s revelation that data centre spending on GPU chips was exploding, which led to a massive rise in its share price.

It is early days for me in my understanding of networking, but I guess all these chips and servers need to connect to each other and the planet, sucking in data and spewing out intelligence, what Nvidia’s Jensen Huang calls tokens. At the end of the day, the whole apparatus of data centres is only as fast, reliable and secure as its connections, justifying massive spending.

What is also unusual is that one of these shares is relatively modest-sized, arguably two if you include Arista Networks. Arista is valued at $179 billion in a world where a growing number of companies command valuations in the trillions, and Credo Technology is a mere $24 billion. They are small for a reason: their sales are presently small, but given the opportunity, they could become big. A decade ago, companies like Nvidia and Broadcom were much smaller than they are today.

Look on Google or consult AI, and you will find many shares tipped to benefit from the networking boom and AI infrastructure spending in general. But I look for something different: shares that are 3G (great growth, great chart, great story) and have that intangible magical quality that makes them special.

Share Recommendations

Broadcom. AVGO

Arista Networks. ANET

Credo Technology. CRDO

These three shares are flying high at the moment, so you might feel more comfortable embarking on a programme of accumulation, buying some every month or whatever works for you. If you do want to pile in straight away, I think that will work too, but may require a little patience.

Below is a quote that gives a flavour of what is happening and also explains why Broadcom shares look so exciting. There is a lot to read, but it may make sense if you are planning a significant investment in what could be the next leg of the AI boom.

GPUs dominate the conversation when it comes to AI infrastructure. But while they’re an essential piece of the puzzle, it’s the interconnect fabrics that allow us to harness them to train and run multi-trillion-parameter models at scale.

These interconnects span multiple domains, whether it be die-to-die communications on the package itself, between the chips in a system, or the system-to-system networks that allow us to scale to hundreds of thousands of accelerators.

Developing and integrating these interconnects is no small order. It’s arguably the reason Nvidia is the powerhouse it is today. However, over the past few years, Broadcom has been quietly developing technologies that span the gamut from scale-out Ethernet fabrics all the way down to the package itself.

And, unlike Nvidia, Broadcom deals in merchant silicon. It’ll sell its chips and intellectual property to anyone, and in many cases, you may never know that Broadcom was involved. In fact, it’s fairly well established at this point that Google’s TPUs made extensive use of Broadcom IP. Apple is also rumored to be developing server chips for AI using Broadcom designs.

For hyperscalers in particular, this model makes a lot of sense, as it means they can focus their efforts on developing differentiated logic rather than reinventing the wheel to figure out how to stitch all of them together. 

[Google TPUs (Tensor Processing Units) are custom-designed Application-Specific Integrated Circuits (ASICs) created by Google to accelerate the massive computational demands of machine learning (ML) workloads, such as training and serving AI models, by efficiently performing matrix operations. Developed internally for Google’s own AI projects, they are now also offered as a scalable, cost-effective, and high-performance resource through Google Cloud services.]  

Rooted in switching

Your first thought of Broadcom may be the massive pricing headache caused by its acquisition of VMware. But if not, you probably associate it with Ethernet switching.

While the sheer number of GPUs being deployed by the likes of Meta, xAI, Oracle, and others may grab headlines, you’d be surprised just how many switches you need to stitch them together. A cluster of 128,000 accelerators might need 5,000 or more switches just for the compute fabric, and yet more may be required for storage, management, or API access.

To address this demand, Broadcom is pushing out some seriously high-radix switches, initially with its 51.2Tbps Tomahawk 5 chips in 2022, and more recently, the 102.4Tbps Tomahawk 6 (TH6), which can be had with your choice of 1,024 100Gbps SerDes or 512 200Gbps SerDes.

The more ports you can pack into a switch, the higher the radix, and the fewer switches are needed for a given number of endpoints. By our calculations, connecting the same number of GPUs from our earlier example at 200Gbps would require just 750 TH6 switches.

[In technology and mathematics, radix is the base of a number system, referring to the total number of unique digits or symbols used to represent numbers in that system, including zero. For example, the decimal (base-10) system has a radix of 10, using digits 0-9, while the binary (base-2) system has a radix of 2, using digits 0 and 1. The term also has other meanings, such as a plant’s root or the root of a word, derived from the Latin word rādīx.]
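[A back-of-the-envelope way to see where a figure like 750 comes from, assuming a non-blocking two-tier leaf-spine fabric in which every accelerator gets one 200Gbps port and each leaf switch splits its ports evenly between downlinks and uplinks. The short sketch below is purely illustrative; real clusters use rail-optimised or oversubscribed designs that change the numbers.]

# Rough, illustrative switch-count estimate for a non-blocking
# two-tier leaf-spine fabric (my assumptions, not Broadcom's published method).
def switches_needed(gpus: int, switch_ports: int) -> int:
    """Estimate leaf plus spine switches for a non-blocking two-tier Clos."""
    down_per_leaf = switch_ports // 2                  # leaf ports facing GPUs
    leaves = -(-gpus // down_per_leaf)                 # ceiling division
    uplinks = leaves * (switch_ports - down_per_leaf)  # leaf-to-spine links
    spines = -(-uplinks // switch_ports)               # one spine port per uplink
    return leaves + spines

# Tomahawk 6 run as 512 x 200Gbps ports, 128,000 GPUs at 200Gbps each:
print(switches_needed(128_000, 512))                   # 750 (500 leaves + 250 spines)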

Of course, this being Ethernet, customers aren’t locked into one vendor. At GTC earlier this year, Nvidia announced a 102.4Tbps Ethernet switch of its own, and we imagine Marvell and Cisco will have equivalent switches before long.

Scale-up Ethernet

Ethernet is most commonly associated with the scale-out fabrics that form the backbone of modern data centers. However, Broadcom is also positioning switches like the Tomahawk 6 as a sort of short-cut to rack-scale architectures.

If you’re not familiar, these scale-up fabrics provide high-speed chip-to-chip connectivity to anywhere from eight to 72 GPUs, with designs of as many as 576 expected by 2027. While small configurations of up to around eight accelerators can be achieved using simple chip-to-chip meshes, larger configurations like we see in Nvidia’s NVL72 or AMD’s Helios reference design require switches.

Nvidia already has its NVLink Switches, and while much of the industry has aligned around Ultra Accelerator Link (UALink), an open alternative, the spec is still in its infancy. The first release just hit in April, and dedicated UALink switching hardware has yet to materialize.

Broadcom was an early proponent of the tech, but in the past few months, its name has disappeared from the UALink Consortium website, and it’s begun talking up its own scale-up Ethernet (SUE) stack, which is designed to work with existing switches.

While there are benefits to having a stripped-down built-for-purpose protocol like UALink for these kinds of scale-up networks, Ethernet will not only get the job done, but it has the benefit of being available today.

In fact, Intel is already using Ethernet for both scale-up and scale-out networks on its Gaudi system. AMD, meanwhile, plans to tunnel UALink over Ethernet for its first generation of rack-scale systems starting next year.

Lighting the way to bigger, more efficient networks

Alongside conventional Ethernet switching, Broadcom has been investing in co-packaged optics (CPO), going back to the introduction of Humboldt in 2021.

In a nutshell, CPO takes the lasers, digital signal processors, and retimers normally found in pluggable transceivers and moves them onto the same package as the switch ASIC.

While networking vendors have resisted going down the CPO route for a while, the technology does offer a number of benefits. In particular, fewer pluggables mean substantially lower power consumption.

According to Broadcom, its CPO tech is more than 3.5x more efficient than pluggables.

The chip merchant teased the third generation of its CPO tech back at Computex, and we’ve since learned it will be paired with its Tomahawk 6 switch ASICs and provide up to 512 200Gbps fiber ports out the front of the switch. By 2028, the networking vendor expects to have CPO capable of 400Gbps lanes.

Broadcom isn’t the only one embracing CPO. At GTC this spring, Nvidia showed off photonic versions of its Spectrum Ethernet and Quantum InfiniBand switches. 

But while Nvidia is embracing photonics for its scale-out networks, it’s sticking with copper for its NVLink scale-up networks for now.

Copper is lower power, but it can only stretch so far. At the speeds modern scale-up interconnects operate, those cables can only reach a few meters at most, and often involve additional retimers, which add latency and power consumption.

But what if you wanted to extend your scale-up network from one rack to several? For that you’re going to need optics. For this reason, Broadcom is also looking at ways to strap optics to the accelerators themselves.

At Hot Chips last summer, the tech giant demoed a 6.4Tb/s optical Ethernet chiplet, which can be co-packaged alongside a GPU. That works out to 1.6TB/s of bidirectional bandwidth per accelerator. 

At the time, Broadcom estimated this level of connectivity could support 512 GPUs, all acting as a single scale-up system with just 64 51.2Tbps switches. With Tomahawk 6, you could either cut that figure in half or add another CPO chiplet to the accelerator and double its bandwidth to 3.2TB/s.
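[The arithmetic behind those figures, on the simplifying assumption that aggregate switch bandwidth only has to match the total GPU-facing optical bandwidth; a real design would also account for topology and any oversubscription.]

# Illustrative capacity check for the 512-GPU optical scale-up example.
gpus = 512
per_gpu_tbps = 6.4                                  # optical Ethernet chiplet, each direction
total_tbps = gpus * per_gpu_tbps                    # 3,276.8 Tbps of GPU-facing bandwidth

print(total_tbps / 51.2)                            # 64 switches at 51.2Tbps
print(total_tbps / 102.4)                           # 32 switches with Tomahawk 6
print(gpus * 2 * per_gpu_tbps / 102.4)              # two chiplets per GPU: back to 64 TH6 switches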

While we’re on the topic of chiplets, Broadcom’s IP stack also extends to chip-to-chip communications and packaging. 

As Moore’s Law slows to a crawl, there’s only so much compute you can pack into a reticle-sized die. This has driven many in the industry toward multi-die architectures. Nvidia’s Blackwell accelerators, for example, are really two GPU dies that have been fused together by a high-speed chip-to-chip interconnect.

AMD’s MI300-series took this to an even greater extreme, using TSMC’s chip-on-wafer-on-substrate (CoWoS) 3D packaging tech to form a silicon sandwich with eight GPU dies stacked on top of four I/O dies.

Multi-die architectures allow you to get away with using smaller dies, which improves yields. The compute and I/O dies can also be fabbed on different process nodes to optimize for cost and efficiency. For example, AMD used TSMC’s 5nm process tech for the GPU dies and the fab’s older 6nm node for the I/O die. 

Designing a chiplet architecture like this is not easy. So, Broadcom has developed what is essentially a blueprint for building multi-die processors with its 3.5D eXtreme Dimension System in Package tech (3.5D XDSiP).

Broadcom’s initial designs look a lot like AMD’s MI300X, but the tech is open to anyone to license. 

Despite the similarities, Broadcom’s approach to interfacing compute dies with the rest of the system logic is a little different. We’re told that previous 3.5D packaging technologies, like we see on the MI300X, used face-to-back interfaces, which require more work to route the through-silicon vias (TSVs) that shuttle data and power between the two.

Broadcom’s XDSiP designs have been optimized for face-to-face communications using a technique called hybrid copper bonding (HCB). This allows for denser electrical interfaces between the chiplets. We’re told this will allow for substantially higher die-to-die interconnect speeds and shorter signal routing.

The first parts based on these designs are expected to enter production in 2026. But because chip designers are not in the habit of disclosing what IP they’ve built in house and which they’ve licensed, we may never know which AI chips or systems are using Broadcom’s tech.

The Register, 25 June 2025, Tobias Mann

Each of my three favoured networking companies boasts electrifying share charts, to coin a phrase, and great leaders, another key characteristic of winning investments.

Below is what The Motley Fool said about Credo on 3 December 2024.

Shares of data center cable company Credo Technology (CRDO) rocketed on Tuesday, up 47.3% as of 1:11 p.m. ET.

The company reported earnings last night that not only beat analyst estimates but also delivered blowout guidance, suggesting Credo has emerged as a new artificial intelligence (AI) winner.

Credo makes a unique cable product called an active electrical cable (AEC), which connects data center servers to networking switches. The company claims its AECs take up 75% less space than Direct Attach Copper (DAC) cables and offer 50% more power efficiency versus active optical cable (AOC) alternatives.

As power and space are becoming scarce commodities in power-hungry AI data centers, Credo’s proprietary technology seems to be finding favor with large AI customers. In its fiscal second quarter, Credo delivered 63.6% revenue growth to $72.0 million, beating estimates by $5.2 million, while adjusted (non-GAAP) earnings per share came in at $0.07, beating estimates by $0.02.

But the biggest story with Credo was its third quarter revenue guidance for between $115.0 million and $125.0 million. That’s a massive 67% quarter-over-quarter jump at the midpoint, suggesting perhaps a tipping point in demand for the technology.
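[That 67% figure is simply the midpoint of the guided range measured against the quarter just reported, as the quick check below shows; the variable names are mine.]

# Quick check of the quarter-over-quarter jump implied by the guidance.
reported_m = 72.0                                     # $m, quarter just reported
guide_low_m, guide_high_m = 115.0, 125.0              # $m, guided range
midpoint_m = (guide_low_m + guide_high_m) / 2
print(round((midpoint_m / reported_m - 1) * 100, 1))  # 66.7% QoQ growth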

CEO Bill Brennan confirmed, “For the past few quarters, we have anticipated an inflection point in our revenues during the second half of fiscal 2025. I am pleased to share that this turning point has arrived, and we are experiencing even greater demand than initially projected, driven by AI deployments and deepening customer relationships.”

The hot new AI stock on the block

Investors have been clamoring for new artificial intelligence winners to buy, and it appears Credo just emerged as that today in a big way. However, Credo’s $11.8 billion market cap does look rather high after the surge, given its mere $500 million revenue run rate based on the third quarter outlook.

The Motley Fool, 3 December 2024

That final comment suggests the shares looked expensive, but then that is always the case with great growth shares; otherwise, everybody would buy them. Nine months later, Credo is on a run rate fast approaching $1bn a year, and the market value has roughly doubled. They still look expensive, but that is hardly surprising for a company which has just reported nearly quadrupled turnover and whose products are central to the greatest technology infrastructure boom in history.
