Nvidia plans to ship over 5 million Blackwell GPUs in 2025, and every one of those GPUs has to be networked to its peers for AI model training. That makes AI networking a lucrative opportunity for Ethernet switch makers such as Cisco and Arista, as well as for Nvidia itself.

During Cisco’s Q4 2025 earnings call, CEO Chuck Robbins highlighted that AI-related orders from major web-scale clients surpassed $800 million in the quarter and reached $2 billion for the full year, doubling the company’s initial $1 billion goal for 2024.

The economics driving this growth are straightforward: each B200 or GB200 GPU sold pulls through roughly three to five switch ports.
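As a back-of-the-envelope illustration, the port pull-through implied by those figures can be sketched in a few lines of Python. The shipment volume and ports-per-GPU ratio come from the figures above; the 64-port switch radix is an assumption for illustration only.

```python
# Rough estimate of switch-port demand implied by Blackwell shipments.
# GPU_SHIPMENTS and PORTS_PER_GPU come from the figures cited above;
# PORTS_PER_SWITCH is an assumed radix for a typical data-center switch.

GPU_SHIPMENTS = 5_000_000        # projected Blackwell units for 2025
PORTS_PER_GPU = (3, 5)           # switch ports pulled through per GPU (low, high)
PORTS_PER_SWITCH = 64            # assumption: 64-port switch

low_ports = GPU_SHIPMENTS * PORTS_PER_GPU[0]
high_ports = GPU_SHIPMENTS * PORTS_PER_GPU[1]

print(f"Switch ports: {low_ports:,} to {high_ports:,}")
print(f"Switches at {PORTS_PER_SWITCH} ports each: "
      f"{low_ports // PORTS_PER_SWITCH:,} to {high_ports // PORTS_PER_SWITCH:,}")
```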

Switch counts vary with port speed, switch radix, and the scale of the cluster. For clusters of up to 8,192 GPUs, only a basic network layer is needed. Larger clusters of up to 128,000 GPUs demand far more from the network, potentially requiring as many as 10,000 switches when built from older-generation hardware.
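The relationship between cluster size and switch count can be approximated with textbook Clos/fat-tree formulas. The sketch below is not any particular vendor's sizing method; the switch radices (64 and 128 ports) are assumptions chosen to bracket older and newer switch generations.

```python
# Approximate Clos-fabric sizing using standard fat-tree formulas.
# The radix values are assumptions, not figures from the article.

def two_tier_max_gpus(radix: int) -> int:
    """Max GPUs in a non-blocking two-tier leaf-spine fabric."""
    # Each leaf splits its ports evenly between GPUs (down) and spines (up).
    return radix * radix // 2

def three_tier_fabric(radix: int) -> tuple[int, int]:
    """(max GPUs, total switches) for a full non-blocking three-tier fat-tree."""
    max_gpus = radix ** 3 // 4
    switches = 5 * radix ** 2 // 4   # edge + aggregation + core switches
    return max_gpus, switches

for radix in (64, 128):              # assumed older vs. newer switch generations
    gpus_3t, switches_3t = three_tier_fabric(radix)
    print(f"radix {radix:3d}: two tiers scale to {two_tier_max_gpus(radix):,} GPUs; "
          f"three tiers scale to {gpus_3t:,} GPUs using {switches_3t:,} switches")
```

With 128-port switches, two tiers top out at 8,192 GPUs, which lines up with the threshold above; pushing toward a 128,000-GPU class cluster forces a third tier and multiplies the switch count accordingly.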

The demand extends beyond switches to optics and copper cabling: a large cluster can require more than a million optical connections, and optics sales have already contributed a significant share of AI networking revenue.
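A rough sense of where the million-connection figure comes from: if every inter-tier link in a large fabric is optical and each link terminates in a transceiver at both ends, the count scales with GPU count times fabric tiers. The sketch below is illustrative only; the three-tier fabric, the all-optical links, and the single network rail per GPU are assumptions, and real deployments with multiple rails per GPU climb well past a million.

```python
# Rough count of optical link ends (transceivers) in a large cluster.
# Assumptions: a three-tier fabric, every link is optical, one network
# rail per GPU. None of these are figures from the article.

GPUS = 128_000       # large-cluster size mentioned above
TIERS = 3            # assumed GPU-to-leaf, leaf-to-spine, spine-to-core
ENDS_PER_LINK = 2    # one transceiver at each end of a link

links = GPUS * TIERS
transceivers = links * ENDS_PER_LINK
print(f"{links:,} optical links -> about {transceivers:,} transceivers")
```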

Nvidia’s interest in photonic switches is driven by their potential to reduce the number of optical components needed.

Because Ethernet is standardized, AI network deployments often mix and match gear from multiple vendors, and the market’s rapid growth is largely fueled by Nvidia’s and AMD’s GPGPU production.

Cisco’s AI-related revenue comes primarily from web-scale clients, but the company sees growing opportunities among traditional enterprises and in emerging markets.

Cisco’s rival Arista forecasts significant AI-driven revenue, while Juniper Networks is also showing growth as its integration with HPE proceeds.

According to market predictions, AI network sales could reach $80 billion by 2030, with Ethernet technologies playing a key role.