Unified data center fabric primer: FCoE and data center bridging

What is a unified data center fabric?

A unified data center fabric is a networking fabric that combines traditional LAN and storage area network (SAN) traffic on the same physical network with the aim of reducing architecture complexity and enhancing data flow and access. To make this work, the traditional Ethernet network must be upgraded to become "lossless" and provide additional data center networking features and functions. In turn, the storage protocol must be altered to run on Ethernet.

Data center bridging: Upgrading Ethernet

The IEEE defines Data Center Bridging (DCB) as an architectural collection of Ethernet extensions designed to improve Ethernet networking and management in the data center. DCB is also known as Converged Enhanced Ethernet (CEE), Data Center Ethernet (DCE, trademarked by Cisco), Enhanced Ethernet for Data Center (EEDC), and other similar terms. DCB adds four basic functions to the existing Ethernet infrastructure to enable unified fabrics:

  • Traffic differentiation -- DCB can distinguish among LAN, SAN and IPC traffic.
  • Lossless fabric -- required for SAN traffic.
  • Optimal bridging -- enables shortest path bridging within the data center.
  • Configuration management -- provides configuration management functions that work with the existing infrastructures of Fibre Channel and Ethernet.

IEEE 802.1 is the collection of standards for the functioning and management of local area networks (LANs) and metropolitan area networks (MANs). Several new specifications are being added to IEEE 802.1 to provide the required functions; ratification is expected in 2010.

These are:

  • 802.1aq -- Shortest Path Bridging
  • 802.1Qau -- Congestion Notification
  • 802.1Qaz -- Enhanced Transmission Selection
  • 802.1Qbb -- Priority-based Flow Control

These enhancements to Ethernet allow traffic to be paused rather than having packets dropped, providing the lossless delivery that storage traffic requires and that is standard in Fibre Channel SANs. They also allow traffic to be grouped so that administrators can guarantee a specific bandwidth and priority for each type of traffic.
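
To make the grouping idea concrete, here is a minimal Python sketch of how the Enhanced Transmission Selection and Priority-based Flow Control concepts fit together. The group names, priority values and bandwidth shares are illustrative assumptions rather than vendor configuration syntax.

from dataclasses import dataclass
from typing import List

@dataclass
class PriorityGroup:
    """One ETS priority group: a set of 802.1p priorities sharing bandwidth."""
    name: str
    priorities: List[int]   # 802.1p priority values mapped into this group
    bandwidth_pct: int      # guaranteed share of the link under contention
    lossless: bool          # True = PFC pauses this traffic instead of dropping it

# Illustrative grouping only; FCoE is commonly mapped to 802.1p priority 3
# with PFC enabled.
groups = [
    PriorityGroup("LAN",        [0, 1, 2], bandwidth_pct=40, lossless=False),
    PriorityGroup("SAN (FCoE)", [3],       bandwidth_pct=40, lossless=True),
    PriorityGroup("IPC",        [4, 5],    bandwidth_pct=20, lossless=True),
]

# ETS shares must account for the whole link.
assert sum(g.bandwidth_pct for g in groups) == 100

LINK_GBPS = 10  # a single 10 GbE port
for g in groups:
    guaranteed = LINK_GBPS * g.bandwidth_pct / 100
    behavior = "lossless (PFC pause)" if g.lossless else "lossy (drop on congestion)"
    print(f"{g.name}: {guaranteed:.0f} Gb guaranteed, {behavior}")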

Combining FCoE and data center bridging

Fibre Channel over Ethernet (FCoE) combined with DCB is currently the only way to converge local Ethernet and Fibre Channel traffic as it leaves a server and passes through a top-of-rack switch. FCoE is the first major application to take advantage of the enhancements that DCB makes to Ethernet. INCITS T11, the Fibre Channel standards group, has approved FCoE as a standard for storage traffic.
Because FCoE uses the existing Fibre Channel (FC) protocol on the new lossless Ethernet (DCB), the encapsulated protocol and behavior are the same as traditional Fibre Channel.
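
As a rough illustration of that encapsulation, the following Python sketch wraps an unmodified FC frame in an Ethernet frame carrying the FCoE EtherType (0x8906). The helper function is hypothetical and greatly simplified.

import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a complete FC frame (header + payload + CRC) in an Ethernet frame.

    Hypothetical helper for illustration: the FC-BB-5 FCoE header (version
    field, reserved bits, SOF/EOF ordered sets) is omitted for brevity.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # The FC frame itself is carried unmodified, which is why FCoE behaves
    # like traditional Fibre Channel once the Ethernet wrapper is removed.
    return eth_header + fc_frame

# A maximum-size FC data field (~2112 bytes) does not fit in a standard
# 1500-byte Ethernet payload, so FCoE links use "baby jumbo" (~2.5 KB) frames.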

FCoE switches

FCoE fabrics must be built with switches that support DCB and FCoE, and these switches must interoperate with existing FC switches, support all FC advanced features and operate identically on FC and FCoE fabrics.

Typically, the top-of-rack switches that support DCB and FCoE have 10 Gigabit Ethernet (10 GbE) ports and optionally contain either 4 Gb or 8 Gb Fibre Channel ports. This allows these switches to handle all the LAN and SAN traffic within a rack but forward that traffic to separate existing LAN and SAN infrastructures elsewhere in the data center. Previous generations of Ethernet switches did not provide full-function Fibre Channel ports for storage traffic, and current Fibre Channel switches do not provide full-function Ethernet ports for general LAN traffic.

Technologies such as Fibre Channel over IP (FCIP) and Internet Fibre Channel Protocol (iFCP) carry storage traffic over Ethernet across long distances, but they run on traditional Ethernet and are subject to the same congestion problems that DCB is designed to overcome. Similarly, various types of connections to SONET and other long-haul networks carry converged traffic over very long distances, but these technologies simply provide long-distance transport without DCB's local management and control features.


Unified data center fabric benefits

One of the benefits of a unified data center fabric is reduced cable count. Consider a rack full of servers and all the cables typically found there. Each server might have four, six or more 1 Gb NIC ports and two 4 Gb Fibre Channel ports. For a rack of 20 servers with four NIC ports and two FC ports per server, that is 120 cables in the rack and 12 Gb of total network bandwidth per server. Such a rack would also have two separate switches at the top of the rack. In a unified, converged network, the cable count could be reduced to two cables per server, or 40 cables in the rack, with 20 Gb of total network bandwidth per server. In addition, a rack of servers using converged networking would require only one switch at the top of the rack. The arithmetic is reproduced in the sketch below.
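
To make the arithmetic explicit, here is a small Python sketch that reproduces the numbers in the example above.

# Parameters match the example above; adjust them to model other racks.
servers = 20

# Legacy: separate LAN and SAN networks.
nic_ports, nic_gb = 4, 1   # four 1 Gb Ethernet NIC ports per server
fc_ports, fc_gb = 2, 4     # two 4 Gb Fibre Channel ports per server

# Converged: one unified fabric over converged network adapters (CNAs).
cna_ports, cna_gb = 2, 10  # two 10 GbE ports per server

legacy_cables = servers * (nic_ports + fc_ports)               # 20 * 6 = 120 cables
legacy_bw_per_server = nic_ports * nic_gb + fc_ports * fc_gb   # 4 + 8  = 12 Gb
converged_cables = servers * cna_ports                         # 20 * 2 = 40 cables
converged_bw_per_server = cna_ports * cna_gb                   # 2 * 10 = 20 Gb

print(f"Legacy:    {legacy_cables} cables, {legacy_bw_per_server} Gb/server, 2 ToR switches")
print(f"Converged: {converged_cables} cables, {converged_bw_per_server} Gb/server, 1 ToR switch")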

Cutting power consumption with unified data center fabric

Consider the power consumption of the rack of servers described above. If a server has four NIC ports, this is often accomplished by using two NIC ports on the motherboard and two NIC ports in one or two additional adapter cards. The Fibre Channel ports are usually provided by one or more additional adapter cards. With a converged, unified fabric, a single adapter card runs TCP/IP and storage traffic, resulting in fewer adapter cards and lower power consumption. In addition, power consumption is reduced by having a single switch at the top of the rack, rather than having a separate LAN and SAN switch.

Data center bridging: What if you don't use Fibre Channel?

Those who do not use Fibre Channel might ask whether DCB and converged networks offer any benefit. If your storage traffic is exclusively file server traffic using either the Common Internet File System (CIFS) or Network File System (NFS) protocols, or if you use iSCSI as your SAN, then you can run this traffic over "old-fashioned" Ethernet or over the newer DCB. Demartek hasn't run extensive lab testing comparing these Ethernet-based storage protocols on traditional Ethernet vs. DCB, but improvements are likely when they run over DCB because of its lossless characteristics and new management and bandwidth QoS features.

Data center bridging capable 10 GbE switches

The switch vendors that support DCB are building this capability into their 10 GbE switches. Generally, DCB isn't built into 1 GbE switches, although many 10 GbE switches have 1 GbE ports. So those who currently run Ethernet-only storage protocols can continue to run them as they migrate to 10 GbE. Some switch vendors have not yet announced DCB-capable switches: some are waiting to see how the market matures around the capabilities DCB provides, and some are waiting for the standards to be ratified.

For now, it is best to consider unified, converged fabrics when planning new data center build-outs and new infrastructure for the medium to long term.

About the author
Dennis Martin is the founder and President of Demartek, a computer industry analyst organization with its own on-site test lab. Demartek focuses on lab validation testing and performance testing of storage and related hardware and software products. Dennis has worked in the information technology industry since 1980, primarily in software development and project management in mainframe, UNIX and Windows environments, serving a variety of large and small end-user customers and holding engineering and marketing positions at storage vendors such as StorageTek.

Dennis is the founder of the Rocky Mountain Windows Technology User Group in Denver, and served as its President for its first six years (1994-2000). Dennis was re-elected President of the group in May 2008. Dennis has made numerous presentations at conferences and has authored many industry articles.


This was first published in January 2010
