Data center fabric overview

A data center fabric is the next logical step for networks in the virtualization age. Discover the need for a data center fabric, as well as our roundup of players in the market.

Virtualization is fundamentally changing the requirements of enterprise data center network architecture. To meet these changing requirements, many networking vendors are reinventing their data center network product lines and emphasizing the concept of a data center fabric.

The days of north-south, client-server application architecture are over. Most enterprise application transactions involve multiple tiers of software. Server virtualization complicates the picture even further. Individual tiers of applications can exist as virtual machines anywhere in a data center. Many enterprises need to replace the north-south network architectures of yesterday with new networks that emphasize the east-west traffic of highly virtualized, multi-tiered applications.

Networking vendors are responding to this shift by introducing data center fabrics that promise high-bandwidth, low-latency, any-to-any connectivity between servers.

Data center fabrics to replace outdated networking architecture

Legacy three-tiered data center architecture was designed to serve the heavy north-south traffic of client-server applications while enabling network administrators to manage traffic flows. Engineers adopted Spanning Tree Protocol (STP) in these architectures to optimize the path from client to server and to allow for link redundancy.

STP worked well to support client-server applications and their traffic flows, but it proved inefficient for the server-to-server, east-west communications associated with distributed application architectures. Server virtualization compounds the problem with spanning tree and the three-tiered architecture: because virtual machines can migrate anywhere across a data center, STP can end up nearly continuously recalculating its network paths in a highly virtualized environment.
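The loop avoidance at the root of STP's inefficiency can be sketched in a few lines. The sketch below (hypothetical switch names, simplified to an unweighted breadth-first search rather than the real bridge-priority election) shows how a spanning tree keeps only one active path from the root bridge to every switch, blocking the redundant links that a fabric would instead use for east-west capacity:

```python
from collections import deque

def spanning_tree(links, root):
    """Pick a loop-free subset of links by searching outward from the
    root bridge. links: iterable of (switch_a, switch_b) pairs.
    Returns the set of links left forwarding; all others are blocked."""
    adjacency = {}
    for a, b in links:
        adjacency.setdefault(a, []).append(b)
        adjacency.setdefault(b, []).append(a)
    forwarding, seen, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for peer in adjacency.get(node, []):
            if peer not in seen:
                seen.add(peer)
                forwarding.add((node, peer))
                queue.append(peer)
    return forwarding

# A triangle of switches: STP must block one of the three links,
# even though all three could carry traffic.
links = [("core", "tor1"), ("core", "tor2"), ("tor1", "tor2")]
tree = spanning_tree(links, "core")
```

Only two of the three links survive, which is exactly the wasted capacity that multipathing schemes such as TRILL, SPB and M-LAG set out to reclaim.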

The basics of data center fabric

Networking vendors are developing a new class of data center fabric, a network where traffic from any port can reach any other node with as few latency-inducing hops as possible.

Data center fabrics generally feature high-speed ports, new multipathing alternatives to spanning tree, and simplified management and automation. 

Many networking vendors are relying on various enhancements to Ethernet standards to build their data center fabrics. Two key Ethernet enhancements for building true any-to-any links between services in the data center are Transparent Interconnection of Lots of Links (TRILL) and Shortest Path Bridging (SPB). TRILL, a pending standard from the Internet Engineering Task Force (IETF), is a Layer 2 protocol that enables the switches in a data center fabric to broadcast the status of their link ports. Collectively, a data center fabric based on TRILL can use this information to determine the shortest path to forward a packet to its destination. SPB, proposed by the IEEE as 802.1aq, is very similar to TRILL but maintains backward compatibility with existing spanning tree protocols. SPB, unlike TRILL, can be built on existing Ethernet chipsets.
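The shortest-path computation both protocols rely on is, at its core, a link-state calculation of the kind routing protocols have long performed. A minimal sketch, assuming a hypothetical two-spine leaf topology and unit link costs (real TRILL/SPB implementations run IS-IS and also handle equal-cost ties), shows how each switch can derive shortest paths from the advertised link states:

```python
import heapq

def shortest_paths(link_state, source):
    """Dijkstra over a link-state database, as a TRILL/SPB-style fabric
    might compute distances. link_state: {switch: {neighbor: cost}}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, cost in link_state.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical leaf-spine fabric: two equal-cost paths from leaf1 to leaf2.
link_state = {
    "leaf1":  {"spine1": 1, "spine2": 1},
    "spine1": {"leaf1": 1, "leaf2": 1},
    "spine2": {"leaf1": 1, "leaf2": 1},
    "leaf2":  {"spine1": 1, "spine2": 1},
}
dist = shortest_paths(link_state, "leaf1")
```

Because both spine paths cost the same, a fabric can forward across them simultaneously, which is precisely what spanning tree's single-path model forbids.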

Some vendors achieve multipathing in their data center fabrics via a third option, multichassis link aggregation (M-LAG or MC-LAG). This feature allows both links in a pair of switch interconnects to be active and able to carry traffic. M-LAG can be deployed alongside STP, failing over to STP in the event of a loop.

A rundown of competing data center fabrics

Alcatel-Lucent’s OmniSwitch 10000 switch is at the center of the company’s data center fabric strategy. The 5 Tbps chassis switch currently supports MC-LAG, with planned support for SPB. The company’s OmniSwitch 6900 provides a top-of-rack option, offering up to 64 10 GbE ports in a single rack unit. Both the chassis and top-of-rack switches are scheduled to offer 40 GbE uplink ports in the latter half of 2011. Alcatel-Lucent's data center fabric will allow customers to use 40 GbE uplinks on the OmniSwitch 6900 to create pods of top-of-rack switches with any-to-any connectivity via MC-LAG. These pods will then form a fabric with the OmniSwitch 10000. Interestingly enough, Alcatel-Lucent is the only vendor to offer a gigabit switch—the OmniSwitch 6580E—as part of its data center fabric product portfolio. With up to four 10 GbE uplinks and either 24 or 48 Gigabit Ethernet ports, this fixed-configuration switch could serve as a lower-cost alternative for customers in between server buying cycles.

The Avaya Virtual Services Platform (VSP) is based on the company's augmented version of pre-standard SPB, although in its datasheets, Avaya pledges that VSP will be compatible with the final SPB standard in a future release. The VSP product line consists of a top-of-rack VSP 7000 switch and a VSP 9000 chassis. Avaya also touts the Route Switch Processor (RSP) in the VSP 9000 as an advantage over its competitors. The RSP is a specialized high-performance network processing unit (NPU), which, unlike a fixed-function ASIC etched in silicon, can be upgraded via firmware.

Brocade’s Virtual Cluster Switching (VCS) architecture automates the detection and configuration of switches within a data center fabric. Based on the draft TRILL specifications, VCS applies Brocade’s lineage in Fibre Channel storage networks to the challenges of the data center fabric. VCS is currently available in a single product line, the VDX 6720, in a fixed-configuration form, with models that support from 12 to 60 10 GbE ports. Brocade has plans for higher capacity, chassis-based VDX switches on its roadmap.

Cisco Systems’ vision of a data center fabric encompasses its Unified Computing System (UCS) servers, firewalls and even WAN optimization. On the networking side, Cisco's data center fabric vision includes FabricPath, often described as a superset of TRILL. FabricPath is available in Cisco’s Nexus switches and powers low-latency, any-to-any connectivity within the data center. Cisco promises support for the TRILL standard once it is ratified, but suggests that its data center networking products will operate in either a FabricPath or TRILL compatibility mode. Switches available today from Cisco run the gamut from the top-of-rack Cisco Nexus 2000 Series Fabric Extenders to the large Nexus 7000 chassis. Cisco was an early entrant to the market with FabricPath, but the vendor’s data center fabric vision continues to evolve. The company is rumored to be working on a new data center fabric architecture, codenamed Jawbreaker, which will offer an even flatter network than FabricPath can provide.

Extreme Networks’ Open Fabric takes advantage of the switch vendor’s early foray into 40 Gigabit Ethernet to provide high-speed uplink ports between data center switches. Rather than choosing sides and adopting one of the pre-ratified standards like TRILL, Extreme’s current design uses M-LAG to get around the spanning tree problem and provide multiple active links between switches. Extreme’s products range from the top-of-rack Summit X670 to the top-end BlackDiamond X8, all of which feature 40 GbE uplinks and M-LAG support.

Force10 Networks’ entry into the data center fabric market is based on TRILL. Force10 offers a distributed core networking solution by linking together its Z9000 top-of-rack switches. The Z9000 is actually a 32-port 40 GbE switch.

To connect 10 GbE servers to the Z9000, a four-port breakout cable splits a single QSFP+ port into four 10 GbE ports. This allows the Z9000 to ramp up to 128 server ports in a 2 RU form factor. For enterprises seeking even higher performance, Force10 also offers the Z9512 chassis, which offers 4-port 100 GbE line cards as well as 8-port 40 GbE and 40-port 10 GbE options. With TRILL as a roadmap item for this core switch, the Z9512 could raise the bandwidth stakes in the data center fabric market.
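The breakout arithmetic behind that 128-port figure is straightforward, and a quick back-of-the-envelope sketch (using only the port counts quoted in the article) also shows the aggregate server-facing capacity a single 2 RU box would present:

```python
# Port counts from the article: 32 QSFP+ ports, each splittable four ways.
QSFP_PORTS = 32          # 40 GbE ports on the switch
LANES_PER_BREAKOUT = 4   # 10 GbE ports per QSFP+ breakout cable

server_ports = QSFP_PORTS * LANES_PER_BREAKOUT   # 10 GbE server-facing ports
fabric_bandwidth_gbps = server_ports * 10        # aggregate server-facing capacity
```

That works out to 128 server ports and 1,280 Gbps of server-facing capacity in two rack units.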

Juniper Networks' data center fabric, known as QFabric, is a set of products that function together as a logical switch. To accomplish this, Juniper physically separates the control and data planes. Top-of-rack QFX3500 switches function like line cards to the management interface. Juniper uses dedicated fiber connections between these QFabric switches and a central midplane device, the QF/Interconnect. Finally, the QF/Director serves as an out-of-band supervisor engine for the solution. Juniper claims that the QFabric solution flattens the data center network down to a single tier, taking the management complexity out of fabric administration. Although the QFX3500 switch is available now, Juniper won't release the other elements of the architecture until the latter half of 2011.

Building a data center fabric: Limited interoperability

While some networking vendors are touting the use of open standards in their data center fabric architectures, they have based their initial offerings on pre-ratified standards and require homogeneous environments. While most vendors have promised standards-based compatibility within their data center fabric portfolios, all of them concede that such compatibility modes will likely not operate as efficiently as their enhanced versions. Network administrators will have to weigh the potential risks of vendor lock-in against the performance that a vendor’s solution can provide.

“A data center fabric is a significant change to the network, and enterprises should do their homework and evaluate a number of vendor offerings before making a decision,” said Zeus Kerravala, senior vice president and distinguished research fellow at Yankee Group. “Defaulting to the incumbent network vendor might not be the best choice.”
