With the introduction of its new Trident II network chip, merchant silicon manufacturer Broadcom is opening up a market for distributed core switching in data centers.
"In the traditional data center network you see these big iron switches, about the size of a refrigerator," said John Mui, senior product line manager for networking and infrastructure at Irvine, Calif.-based Broadcom Corp. "We see a move in the cloud and in the data center today toward a more distributed topology utilizing fixed-form factor switches and using standard, off-the-shelf, integrated silicon from [companies] like Broadcom."
Many cloud providers, Web content providers and enterprise data centers are transitioning toward distributed "pizza box" core switches due to the size, power and cooling limitations of data centers, he said. Plus, distributed core switches enable "fast, flat and fat networks" to meet demand for very fast connectivity between top-of-rack switches, Mui said.
The StrataXGS Trident II series will certainly enable powerful distributed core switches. A single Trident II will provide total switching capacity of 960 to 1,280 Gbps at full duplex. With this chip, OEM vendors can build 1 RU or 2 RU switches with port configurations of 96x10 Gigabit Ethernet (GbE) plus 8x40 GbE, 64x10 GbE plus 16x40 GbE, or 32x40 GbE. The chip also features ultra-low latency of 500 nanoseconds and a power requirement of 1 watt per port.
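A quick back-of-the-envelope check shows why those three port mixes are equivalent: each combination of 10 GbE and 40 GbE ports sums to the chip's 1,280 Gbps aggregate capacity. The short sketch below simply does that arithmetic (the configuration names and helper function are illustrative, not Broadcom's):

```python
# Sanity check: each Trident II port configuration cited above
# should sum to the same 1,280 Gbps aggregate switching capacity.

CONFIGS = {
    "96x10GbE + 8x40GbE": (96, 8),
    "64x10GbE + 16x40GbE": (64, 16),
    "32x40GbE": (0, 32),
}

def total_gbps(ports_10g, ports_40g):
    """Aggregate capacity in Gbps for a mix of 10 GbE and 40 GbE ports."""
    return ports_10g * 10 + ports_40g * 40

for name, (p10, p40) in CONFIGS.items():
    print(f"{name}: {total_gbps(p10, p40)} Gbps")
# Each line prints 1280 Gbps
```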
While some vendors will use this chip to build high-powered top-of-rack switches for specialty applications like high-frequency trading (see the Cisco Nexus 3000 series), others will instead lean toward distributed core switches. Certainly companies like Google, which are known for building their own infrastructure when they can't get what they need from OEMs, will consider it.
Force10 Networks [now Dell Networking] introduced ASIC-based distributed core switching in April 2011 with its Z9000 switch. The Z9000 is a 2.5 Tbps, 2 rack unit (RU) box that is configurable as either a 32x40 GbE or 128x10 GbE core node.
With a distributed core, network architects can design flat, broad networks where small, fixed-form factor switches aggregate connections from large clusters of top-of-rack switches. These distributed cores then interconnect with each other, either through technologies like multilink aggregation or through a multi-pathing technology such as Transparent Interconnection of Lots of Links (TRILL) or Shortest Path Bridging (SPB).
In fact, the Trident II includes a new Smart-Hash feature that enables and speeds up technologies like TRILL and Equal Cost Multi-Path (ECMP) routing. Smart-Hash adds silicon-based insight into how traffic is being separated and shared across all the paths available to a switch. It can determine how to best use those links, Mui said.
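The core idea behind ECMP-style load sharing is simple to sketch: hash a flow's identifying fields and use the result to pick one of several equal-cost links, so every packet of a flow stays on the same link (avoiding reordering) while different flows spread across all links. The snippet below is a minimal illustration of that principle, not Broadcom's actual hash function; the link names and 5-tuple are made up:

```python
import hashlib

def pick_link(flow_tuple, links):
    """Pick one of the available equal-cost links for a flow.

    Hashing the flow's 5-tuple keeps all packets of one flow on the
    same link while spreading distinct flows across all links -- the
    basic mechanism that hardware features like Smart-Hash accelerate.
    """
    key = "|".join(str(field) for field in flow_tuple).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return links[digest % len(links)]

links = ["uplink0", "uplink1", "uplink2", "uplink3"]
# (src IP, dst IP, protocol, src port, dst port) -- hypothetical flow
flow = ("10.0.0.1", "10.0.1.2", 6, 49152, 443)

chosen = pick_link(flow, links)
# The same flow always hashes to the same link:
assert chosen == pick_link(flow, links)
```

Real switch ASICs use much faster hardware hash functions and can weigh in link utilization, but the flow-to-link mapping shown here is the essential behavior.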
The Trident II also has a new Smart-Buffer feature, which offers flexible buffer resources to individual ports. Most switches allocate a fixed amount of buffer memory to each port.
"Let's say you have a 24-port switch with 24 megabits of buffer. It allocates a megabit to each port," Mui said. "That's a bad thing if you have one port using up all its buffering resources."
Smart-Buffer allows a chip to manage its buffering memory as a pool of resources. It can assign those resources to a particular port dynamically. If one port is getting slammed while others on the switch are relatively quiet, in-chip intelligence can assign more buffering to the congested port. If traffic patterns change, the chip can shift resources again.
Distributed core has merit, but no match for traditional core
The distributed data center core has lots of obvious benefits, but there are caveats as well, noted Mike Spanbauer, service director for business technology and software at Washington, D.C.-based Current Analysis Inc. Fixed-form factor switches rarely have advanced features like native MPLS.
"Feasibility-wise, the average core environment is overbuilt in most cases," Spanbauer said. "But one thing you lose out on is interface adaptability."
Also, while the Smart-Buffer feature of the Trident II may be innovative, Spanbauer isn't convinced it can match the deep buffering capabilities of a traditional core device.
"I don't care how fast dynamic buffering allocation occurs, [in the core] you're seeing rapid changes in demand and I've never seen a fixed device that can handle that requirement," he said.
Spanbauer believes the Trident II will have a bigger impact in the data center server access switching market, where the previous-generation Trident 1 is already a major force.
"Fifty percent of the fixed [data center switches] in the industry are based on the Trident 1 chip set," he said. "It touches almost every vendor in the market. These new use cases [for distributed cores] could offer differentiation for incumbent players and may even offer leverage to new entrants."
Let us know what you think about the story; email: Shamus McGillicuddy, News Director.