Cisco Unveils Silicon One P200 Chip That Consolidates 92 Chips Into One

Cisco has introduced its new Silicon One P200 routing chip and the accompanying 8223 Ethernet router, designed for hyperscaler networks; early customers include Alibaba and Microsoft Azure. While the initial targets are cloud hyperscalers, industry analysts suggest these networking solutions could soon play a pivotal role in enterprise artificial intelligence infrastructure.

According to Cisco's official blog post, the 8223 router, which houses a single P200 chip in a 3U chassis, matches the performance of the 8804 switch that Cisco launched in 2023 in a 10U chassis. Routing capacity at the P200 level is identical, 51.2 Tbit/s (enough for 64 ports of 800 Gbit/s Ethernet), yet the 8223 consumes 65% less power than the 8804.
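
As a quick sanity check, the quoted figures are internally consistent. The Python back-of-the-envelope below uses only the numbers cited above:

```python
# Back-of-the-envelope check of the figures Cisco cites (illustrative only).
ports = 64
port_speed_gbps = 800

total_tbps = ports * port_speed_gbps / 1000   # 64 x 800 Gbit/s
print(f"Aggregate capacity: {total_tbps} Tbit/s")    # 51.2 Tbit/s

# Same capacity in a 3U chassis instead of a 10U one.
print(f"Rack-space reduction: {1 - 3 / 10:.0%}")     # 70%

# Cisco's stated power figure: 65% less than the 8804.
print(f"8223 relative power draw: {1 - 0.65:.0%}")   # 35%
```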

Cisco engineers explain that this density gain stems from the P200's design, which consolidates the equivalent of the 92 individual chips used in the previous-generation switch. Those components included not only earlier-generation Silicon One controllers but also the DRAM that served as cache memory. That cache is now integrated directly into the P200 chip as High Bandwidth Memory (HBM).

The Silicon One chip architecture, first introduced by Cisco in 2019, serves as a controller for network routing equipment. The product family has since expanded to cover the complete spectrum of networking gear, from switches positioned adjacent to servers to core network infrastructure. Current models include the A100 (1.2 Tbit/s of routing capacity), the K100 and its programmable-firmware variant E100 (both 6.4 Tbit/s), and the Q200 (12.8 Tbit/s).

Additionally, Cisco offers the G200, a variant of the P200 that, like the E100, features programmable firmware capable of supporting specific network protocols according to customer requirements.

Competitive Positioning Against Nvidia and Broadcom
This new controller and its companion routers are marketed as "scale-across networking" solutions, a term coined by Nvidia at the August launch of its Spectrum-XGS Ethernet equipment series, which was engineered to extend AI networks beyond the confines of individual data centers.

Sameh Boujelbene, Vice President at Dell'Oro Group, commented on this industry trend: "This addresses a critical challenge currently faced by major AI computing service providers: the number of GPUs they need to combine for intensive training sessions exceeds the power capacity of a single data center. Consequently, they require the capability to make GPUs located across different sites, even in different cities, collaborate efficiently within the same network environment."

Nvidia's Spectrum-XGS family, which Cisco aims to compete with, comprises Spectrum-X switches and ConnectX-8 network adapters capable of handling 800 Gbit/s per Ethernet port. These integrated systems also incorporate sophisticated algorithms that dynamically adapt traffic management and latency optimization based on the physical distance separating different data center facilities. AI service provider CoreWeave has already deployed these Nvidia solutions in its operations.

Broadcom competes in this space on two fronts: its new Tomahawk 6 Ethernet controller for intra-data-center switches and its Jericho4 controller for inter-data-center connectivity. Both components are expected to be integrated into upcoming routers from established networking equipment manufacturers Arista and Juniper.

Advanced Energy Efficiency Capabilities
According to Cisco's technical explanations, the Silicon One architecture takes a distinctive packet-processing approach in which all data flows share a single unified cache memory, implemented in the P200 as integrated HBM circuitry.

Rakesh Chopra, Cisco Fellow and Senior Vice President of Hardware, addressed potential concerns during a press briefing: "Some have suggested that using such large cache memory would be counterproductive for AI workloads due to high electricity consumption. It's true that proactively computing congestion control by leveraging the predictability of AI processing is more energy efficient. However, substantial cache resources remain necessary to handle network failures, which at this scale represent the norm rather than the exception. Importantly, sharing this cache represents our key innovation for dramatically reducing energy consumption."

Chopra elaborated on the technical implementation: "Through this shared cache architecture, we avoid upstream packet movement based on network congestion conditions. We write packets once, we read them once. Essentially, we primarily manipulate descriptors to direct traffic to appropriate ports."
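
Cisco has not published implementation details, but the minimal Python sketch below illustrates the general mechanism Chopra describes, under the assumption that it behaves like a classic shared-buffer switch: each payload is written once into a common buffer, and per-port queues carry only lightweight descriptors that reference it.

```python
from collections import deque

class SharedBufferSwitch:
    """Toy model of a shared-buffer switch: packets are written once into a
    common buffer (an HBM stand-in) and output queues hold only descriptors."""

    def __init__(self, num_ports):
        self.buffer = {}          # shared packet memory, keyed by descriptor
        self.next_handle = 0
        self.port_queues = [deque() for _ in range(num_ports)]

    def ingress(self, payload, out_port):
        handle = self.next_handle
        self.next_handle += 1
        self.buffer[handle] = payload                # the single write
        self.port_queues[out_port].append(handle)    # queue a descriptor, not a copy

    def egress(self, out_port):
        handle = self.port_queues[out_port].popleft()
        return self.buffer.pop(handle)               # the single read

switch = SharedBufferSwitch(num_ports=64)
switch.ingress(b"packet-bytes", out_port=7)
print(switch.egress(7))   # b'packet-bytes'
```

Steering a packet to a different port then means moving a small descriptor between queues rather than rewriting the payload, which is the "write once, read once" property Chopra highlights.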

He further explained the cumulative benefits: "Once you implement this approach, a compounding effect occurs because everything has been substantially reduced. With components generating less heat, we can decrease fan power requirements, utilize less powerful power supplies with reduced energy losses. We've pursued watt-level savings throughout this system because energy consumption represents the fundamental constraint in AI infrastructure."
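
To see why the savings compound, consider a toy model (all figures below are invented for illustration and are not Cisco's): a watt saved in silicon also saves fan power and power-supply conversion losses, so it is worth more than a watt at the wall.

```python
# Hypothetical illustration of the compounding effect Chopra describes.
# None of these figures come from Cisco; they are assumptions for the model.

fan_w_per_chip_w = 0.15   # assume fans draw ~0.15 W per watt of heat removed
psu_efficiency = 0.90     # assume 90%-efficient power supplies

def wall_power(chip_watts):
    load = chip_watts * (1 + fan_w_per_chip_w)   # chip draw plus cooling
    return load / psu_efficiency                 # PSU input, including losses

print(f"1 W saved in silicon saves {wall_power(1.0):.2f} W at the wall")  # ~1.28 W
```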

Potential Expansion into Enterprise Data Centers
While hyperscalers and high-performance computing service providers have requirements that significantly exceed those of current private data centers, Sameh Boujelbene believes these advanced networking solutions could eventually see widespread adoption in enterprise environments. This potential market expansion is driven by growing enterprise desire to achieve independence from US cloud providers for their AI processing needs.

Boujelbene observed: "All governments now consider AI capability as a crucial differentiator, even a matter of strategic importance. They're engaged in a frenetic race not only to build their AI infrastructure but also to maintain control over it. The question of who will deploy these systems, and where they will be located, represents a major strategic consideration. Cisco maintains strong involvement in government projects—not only in the United States but also across Europe and the Middle East."

She suggested that solutions capable of unifying several smaller sites into one cohesive network could offer a viable alternative for traditional enterprises unable to invest in massive data center facilities.

This perspective is shared by Matthew Kimball, Vice President and Senior Analyst at Moor Insights & Strategy: "AI implementation will push numerous enterprises toward adopting hyperscaler-style approaches by establishing ultra-fast interconnection structures between sites, if only to facilitate large-scale agentic AI deployment," he stated.

Source: LeMagIT
