As AI models get bigger and more complex, the infrastructure behind them needs to evolve fast. I recently discovered how Broadcom’s latest innovation, the Jericho4 ethernet fabric router, is making huge waves in the AI infrastructure landscape. It’s designed specifically to support distributed AI computing across multiple data centers, breaking through limits that used to hold back large-scale AI deployments.
Why is this such a big deal? Well, traditional data centers simply can’t handle the extreme power and connectivity needs of next-gen AI models all in one place. By interconnecting over one million XPUs (think specialized AI processors) across several data centers, Jericho4 tackles the massive challenge of scaling out without bottlenecks. It combines tremendous bandwidth, strong security features, and lossless performance to enable a truly distributed AI ecosystem.
Jericho4 supports over 36,000 HyperPorts at 3.2 Tbps each, facilitating congestion-free, lossless AI data flow over 100km+ distances.
What really caught my eye is the hardware sophistication behind Jericho4. Built on Broadcom’s cutting-edge 3nm process technology with 200G PAM4 SerDes, it offers impressive reach and efficiency. The 3.2T HyperPort technology merges four 800GE links into a single logical port, avoiding the load-balancing inefficiencies of spraying flows across parallel links and increasing network utilization by up to 70%. Plus, it handles RoCE (RDMA over Converged Ethernet) transport seamlessly over more than 100 kilometers, enabling a robust interconnect fabric between distant data centers.
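To put those numbers in perspective, here is a quick back-of-the-envelope sketch in Python using only the figures quoted above (four 800GE links per HyperPort, and the 36,000-HyperPort fabric scale). The constants are taken straight from the article; nothing here comes from Broadcom documentation beyond those headline numbers.

```python
# Back-of-the-envelope math from the figures quoted in this post.
LINKS_PER_HYPERPORT = 4      # four 800GE links merged into one logical port
LINK_RATE_GBPS = 800         # gigabits per second per 800GE link
HYPERPORT_COUNT = 36_000     # fabric-scale figure quoted above

# Per-HyperPort rate: 4 x 800 Gbps = 3.2 Tbps
hyperport_tbps = LINKS_PER_HYPERPORT * LINK_RATE_GBPS / 1000

# Aggregate fabric bandwidth across all HyperPorts, in petabits/s
fabric_pbps = HYPERPORT_COUNT * hyperport_tbps / 1000

print(f"Per HyperPort: {hyperport_tbps} Tbps")
print(f"Aggregate fabric: {fabric_pbps:.1f} Pbps")
```

The takeaway is simply scale: tens of thousands of 3.2 Tbps logical ports add up to an aggregate measured in petabits per second, which is the kind of headroom a million-XPU fabric implies.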
Security isn’t an afterthought either. Every port supports full-speed MACsec encryption, protecting sensitive AI data in transit—without slowing things down, even under heavy traffic loads. This is crucial when massive volumes of information are moving between regions.
Another important dimension is interoperability. Jericho4 aligns with the Ultra Ethernet Consortium’s standards, which means it can work smoothly within broad AI networking ecosystems featuring various NICs, switches, and software stacks. This open-standards approach helps pave the way for versatile, scalable AI fabrics without vendor lock-in.
Jericho4 fits into a complete portfolio of Broadcom solutions – alongside Tomahawk 6 and Tomahawk Ultra – tailored specifically for high-performance computing (HPC) and AI workloads. Together, they enable Scale Up Ethernet and distributed computing at incredible scales, from a single rack all the way to multi-data center environments.
What this means for AI infrastructure
Jericho4 signifies a big step towards overcoming critical infrastructure limits in AI development. As AI models explode in size, no single data center can meet the tremendous power and cooling requirements. Distributing the compute across many facilities is the natural answer—but it demands networking capable of keeping pace. Jericho4’s breakthrough bandwidth, lossless traffic management, and secure links help clear this bottleneck, empowering AI systems to grow unhindered.
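One physical constraint worth keeping in mind: no fabric, however fast, can beat the speed of light in fiber. A rough sketch of the propagation delay at the 100 km+ distances mentioned above—assuming the common approximation that light covers about 200 km per millisecond in standard single-mode fiber (refractive index ≈ 1.5)—shows why distributed AI networking is a latency problem as much as a bandwidth one:

```python
# Rough propagation-delay estimate for inter-data-center links.
# Assumption: ~200 km/ms in standard single-mode fiber (n ~ 1.5);
# this ignores switching, queuing, and transceiver latency.
FIBER_KM_PER_MS = 200.0

distance_km = 100                              # the 100 km figure from the article
one_way_ms = distance_km / FIBER_KM_PER_MS     # one-way propagation delay
rtt_ms = 2 * one_way_ms                        # round-trip time

print(f"One-way: {one_way_ms} ms, RTT: {rtt_ms} ms")
```

A millisecond of round-trip time is enormous compared with intra-rack latencies, which is why lossless transport and deep buffering over these distances matter so much for keeping distributed training synchronized.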
In a way, Jericho4 is enabling the AI equivalent of a global nervous system, joining countless specialized processors across vast distances with near-perfect coordination and speed. This unlocks new possibilities for research, innovation, and services that depend on scaling AI beyond single locations.
Key takeaways
- Jericho4 enables AI computing at an unprecedented scale by connecting over one million XPUs across multiple data centers with groundbreaking bandwidth and lossless performance.
- Advanced hardware tech like 3nm process and 3.2T HyperPort boosts efficiency and reduces costs while providing long-distance connectivity without extra components.
- Full-speed MACsec encryption at every port ensures AI data is secured in transit, preserving performance even under heavy loads.
- Compliance with Ultra Ethernet Consortium standards guarantees interoperability with a broad ecosystem of AI networking gear and software.
- Part of a broader portfolio that supports scaling AI fabrics from rack-level to multi-data center deployments, paving the way for future-ready AI infrastructure.
Final thoughts
Discovering Broadcom’s Jericho4 gave me a fresh perspective on how AI hardware is evolving to keep pace with the ambitious demands of modern AI workloads. It’s clear we’re entering a new era of distributed AI computing, where overcoming physical and power limitations is no longer a pipe dream. Jericho4 exemplifies how smart engineering and open standards can converge to solve some of the toughest scaling challenges in AI infrastructure. For anyone tracking the AI hardware landscape, this is definitely a development to watch closely.


