Migrating to 10 GbE enables data and storage network convergence

Migrating your data center to 10 Gigabit Ethernet (10 GbE) provides more available bandwidth with fewer ports to manage and lends itself to data and storage network convergence -- both of which are necessary for a high-performing virtualized environment. Learn how 10 GbE makes it possible to build a high-performance network.


10 Gigabit Ethernet (10 GbE) is becoming the new normal, at least in the data center.

Adoption has followed the same pattern we saw for the transition from 10 Mbps to 100 Mbps Fast Ethernet and from Fast Ethernet to Gigabit Ethernet: Data center and network managers began installing expensive fiber optic ports to ameliorate bottlenecks between switches. As prices for the technology fell, they were able to begin connecting servers as well. Eventually, the high-performance connections spread to broad adoption. However, while prior Ethernet transitions have been primarily about speed, the migration to 10 GbE promises much more.

Today, most data centers run multiple networks for different purposes. Servers access storage arrays over dedicated Fibre Channel networks, while users access the same servers over separate, and separately managed, Ethernet networks. The Ethernet network may be further divided -- either physically or through virtual LANs -- into networks for user access, management, security and backup. Most virtual server hosts running VMware, Xen or Hyper-V virtualization software must be connected to several of these networks. That means each server host will have a redundant pair of 1 GbE ports for each network. Surveys of data center managers put the average number of Ethernet connections per virtual server host somewhere between six and 10.
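As a back-of-the-envelope illustration, the short sketch below tallies how quickly those redundant pairs add up; the network names, and the assumption that each gets its own redundant 1 GbE pair, are illustrative rather than survey data.

```python
# Hypothetical networks a virtual server host might attach to, each given a
# redundant pair of 1 GbE ports. Names and counts are illustrative only.
networks_with_redundant_pairs = [
    "user access",
    "management",
    "live migration",
    "backup",
    "IP storage (iSCSI/NFS)",
]

ports_per_host = 2 * len(networks_with_redundant_pairs)
print(f"1 GbE ports per host: {ports_per_host}")  # 10 -- the top of the 6-10 range
```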

With greater bandwidth available, a single pair of 10 GbE connections could satisfy all the input and output (I/O) requirements of a typical virtual server host. But the challenge for network and data center managers is not simply to provide bandwidth. They must also ensure that each system, function and application on the network has the amount of bandwidth and network quality of service it needs while attaining interoperability and avoiding bottlenecks.
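To make that arithmetic concrete, here is a minimal sketch of carving a redundant pair of 10 GbE links into guaranteed slices per traffic class. The classes and their shares are assumptions for illustration, not any vendor's defaults.

```python
# Illustrative minimum-bandwidth guarantees for a redundant pair of 10 GbE
# links (20 Gbps aggregate). The traffic classes and shares are assumptions.
LINK_GBPS = 10
LINK_COUNT = 2

shares = {                      # fraction of the aggregate guaranteed per class
    "storage (FCoE/iSCSI)": 0.40,
    "virtual machine traffic": 0.30,
    "live migration": 0.20,
    "management and backup": 0.10,
}

aggregate_gbps = LINK_GBPS * LINK_COUNT
assert abs(sum(shares.values()) - 1.0) < 1e-9   # shares should cover the whole pipe

for traffic_class, share in shares.items():
    print(f"{traffic_class:<25} {share * aggregate_gbps:4.1f} Gbps guaranteed")
```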

10 GbE allows data and storage network convergence

Much of the buzz accompanying 10 GbE has been about its ability to support the convergence of data and storage networks. During the past 10 years, Fibre Channel storage area networks (SANs) have increased in bandwidth to an average of 4 or 8 Gbps, while iSCSI (Internet Small Computer System Interface) SANs typically run over 1 Gbps Ethernet. More significantly, the industry has developed management tools that give administrators visibility and control of Fibre Channel networks -- tools that aren't available for iSCSI.

The Fibre Channel over Ethernet (FCoE) protocol was developed to allow IT organizations to converge SAN and data traffic onto 10 GbE while preserving the security and management models of Fibre Channel. FCoE encapsulates Fibre Channel Protocol (FCP) data packets inside Ethernet frames so that a server can access its Fibre Channel SAN-attached storage through an Ethernet connection.
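In rough outline, an FCoE frame is an ordinary Ethernet frame whose EtherType (0x8906) marks the payload as an FCoE header, a complete Fibre Channel frame and a short trailer. The sketch below builds that layout in simplified form; the SOF/EOF delimiter codes and MAC addresses shown are illustrative placeholders, and reserved-field details are glossed over.

```python
# Minimal, simplified sketch of FCoE encapsulation: a complete Fibre Channel
# frame is carried as the payload of an Ethernet frame with EtherType 0x8906.
# Delimiter codes and reserved-field details are simplified placeholders.
import struct

FCOE_ETHERTYPE = 0x8906

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an already-built FC frame (header + payload + CRC) in Ethernet."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # 14-byte FCoE header: version nibble, reserved bytes, then a
    # start-of-frame (SOF) delimiter code -- values here are illustrative.
    fcoe_header = bytes([0x10]) + bytes(12) + bytes([0x2E])
    # 4-byte trailer: end-of-frame (EOF) delimiter code plus reserved padding.
    fcoe_trailer = bytes([0x41]) + bytes(3)
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# Example with dummy MAC addresses and a dummy 36-byte "FC frame".
frame = encapsulate_fcoe(bytes.fromhex("0efc0000 0001".replace(" ", "")),
                         bytes.fromhex("000c29aabbcc"),
                         bytes(36))
print(len(frame), "bytes on the wire (before the Ethernet FCS)")
```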

FCoE's designers specified exactly how FCP packets should be encapsulated within Ethernet, but they also had a significant problem to solve: Ethernet and Fibre Channel handle network congestion very differently. Fibre Channel devices negotiate the amount of data they intend to send across a link before the data is actually transmitted, using buffer credits. The sender does not transmit until the recipient says it has enough memory available to hold the data. Ethernet systems, on the other hand, just send the data. If the line is so busy that the memory in the switch at the other end is too full to hold the new data, the packet is dropped.
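The contrast shows up even in a toy model. In the sketch below, with made-up frame counts and buffer sizes, a credit-based sender holds frames until the receiver has room for them, while an Ethernet-style sender transmits everything and lets the full buffer drop the excess.

```python
# Toy comparison of Fibre Channel-style buffer credits vs. plain Ethernet
# behavior under congestion. Buffer sizes and frame counts are made up.

def send_with_credits(frames: int, receiver_buffer: int) -> tuple[int, int]:
    """Sender holds frames until the receiver grants a credit for each one."""
    credits = receiver_buffer          # receiver advertises how much it can hold
    sent_now = min(frames, credits)    # anything beyond that simply waits
    return sent_now, 0                 # (delivered now, dropped)

def send_without_credits(frames: int, receiver_buffer: int) -> tuple[int, int]:
    """Sender transmits everything; the receiver drops what it can't buffer."""
    delivered = min(frames, receiver_buffer)
    return delivered, frames - delivered

print(send_with_credits(100, 64))      # (64, 0)  -- 36 frames wait for credits
print(send_without_credits(100, 64))   # (64, 36) -- 36 frames are dropped
```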

Overcoming packet loss with data center bridging

The inherent issue is that, as usually implemented, Fibre Channel lacks a recovery mechanism to retransmit dropped packets. And implementing one would add unacceptable latency to storage traffic. As a result, FCoE has been engineered to take advantage of a series of congestion management enhancements to Ethernet that together are known as data center bridging (DCB).

The most significant new feature in DCB is per-priority pause. This capability allows an Ethernet switch to tell a server to stop sending low-priority data until the switch can flush some data from its memory, rather than waiting until its buffers fill up and dropping packets. There are eight possible priority levels, and FCoE uses the highest priority so that storage data can get through even during busy network times. The lossless behavior DCB achieves can boost the performance of other protocols, such as iSCSI, as well.
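For the curious, a per-priority pause request is itself just a small Ethernet MAC control frame. The sketch below assembles one in simplified form, using the MAC Control EtherType (0x8808), the PFC opcode (0x0101), an eight-bit priority-enable vector and eight 16-bit pause timers; padding to the minimum frame size and the FCS are omitted, and the priority being paused is an arbitrary choice for the example.

```python
# Minimal sketch of building an IEEE 802.1Qbb priority flow control (PFC)
# pause frame. Simplified: no padding to the 64-byte minimum, no FCS.
import struct

PFC_DEST_MAC = bytes.fromhex("0180c2000001")   # reserved MAC control address
MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101

def build_pfc_frame(src_mac: bytes, pause_priorities: dict[int, int]) -> bytes:
    """pause_priorities maps priority (0-7) -> pause time in 512-bit quanta."""
    enable_vector = 0
    timers = [0] * 8
    for priority, quanta in pause_priorities.items():
        enable_vector |= 1 << priority
        timers[priority] = quanta
    body = struct.pack("!HH", PFC_OPCODE, enable_vector)
    body += struct.pack("!8H", *timers)
    return PFC_DEST_MAC + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + body

# Ask the server to pause priority 1 (an arbitrary low-priority class for this
# example) while leaving the other seven priorities flowing.
frame = build_pfc_frame(bytes.fromhex("000c29aabbcc"), {1: 0xFFFF})
print(frame.hex())
```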

Although FCoE sends storage traffic across ubiquitous Ethernet links, the switches that make up that network must include specific support for FCoE. Some switches, such as the Cisco Nexus 5000 and Brocade 8000, are designed to be placed at the top of each server rack and include Fibre Channel ports for connections to existing SANs. FCoE-capable switches cost about the same as other 10 GbE switches with DCB, though equipment makers charge extra to enable the FCoE support features. Enterprises that don't need FCoE capability yet can use them as standard top-of-rack switches and add SAN support later.

The proponents of FCoE wanted to preserve as much of the Fibre Channel experience as possible, so they built converged network adapters (CNAs) that use Ethernet drivers for data networking and separate Fibre Channel drivers and management software for FCoE traffic. CNAs allow the host operating system, or hypervisor, to see separate storage and data connections, while the storage administrator sees the interface they are accustomed to, such as QLogic SANsurfer or Emulex ONEconnect. CNAs also offload the protocol processing of FCoE, or in some cases iSCSI, from the main processor in the server. This can save between 3% and 5% of the CPU capacity in a typical eight-core server, which isn't a large percentage but can add up.
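To put that offload saving in perspective, here is a quick back-of-the-envelope calculation; the fleet size is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope: 3-5% of an eight-core server reclaimed by CNA offload.
CORES_PER_SERVER = 8
SERVERS = 200                    # assumed fleet size, for illustration only

for saving in (0.03, 0.05):
    reclaimed_cores = CORES_PER_SERVER * SERVERS * saving
    print(f"{saving:.0%} saving -> about {reclaimed_cores:.0f} cores "
          f"reclaimed across {SERVERS} servers")
```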

Opening the network to multiple paths

Switches with 10 GbE capability also enable Layer 2 multipathing. Traditional Ethernet switches use the spanning tree protocol to prevent network loops that could clog the network with broadcast and other packets. Spanning tree works by enabling one, and only one, path from one switch to another. So, if there are two uplinks from your top-of-rack switch to your core switch, one will be disabled by spanning tree.

Layer 2 multipathing uses the TRILL (Transparent Interconnection of Lots of Links) protocol to learn all the paths from one point on the network to another and send packets down the best path on a per-packet basis. TRILL-based networks don't need the hierarchical layering (i.e., edge, aggregation and core) of spanning tree networks and can be connected in any sort of mesh or fabric. They also use every available link, increasing the bandwidth available between any two switches.
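The difference is easy to see in miniature. In the sketch below, built on a hypothetical four-uplink topology, the spanning tree case forwards everything over the single active link, while the multipath case spreads packets across all of them, here using a simple hash as one common way to choose among equal-cost paths.

```python
# Toy illustration: one active uplink (spanning tree) vs. spreading traffic
# across every available uplink (TRILL-style multipathing). The four-link
# topology and hash-based selection are assumptions for illustration.
import hashlib
from collections import Counter

UPLINKS = ["uplink-1", "uplink-2", "uplink-3", "uplink-4"]

def spanning_tree_path(_packet_id: int) -> str:
    return UPLINKS[0]                          # every other uplink is blocked

def multipath_path(packet_id: int) -> str:
    digest = hashlib.sha256(str(packet_id).encode()).digest()
    return UPLINKS[digest[0] % len(UPLINKS)]   # all links stay usable

packets = range(10_000)
print("spanning tree:", Counter(map(spanning_tree_path, packets)))
print("multipath:    ", Counter(map(multipath_path, packets)))
```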

We are clearly entering the era of 10 Gigabit Ethernet data center networking. With higher speeds and new features like DCB and TRILL, we can build bigger, faster and more resilient networks that carry not just traditional data traffic but also storage via FCoE. Equipment makers are only beginning to introduce products with these capabilities, and additional features will continue to arrive. Time spent carefully factoring this evolution into your data center network planning will pay off in spades.

About the author: Howard Marks is the founder and chief scientist at DeepStorage.net, a networking consultancy based in Hoboken, N.J. In more than 25 years of consulting, he has designed and implemented networks, management systems and Internet strategies at organizations including American Express, JP Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide and Foxwoods Resort Casino. Mr. Marks is a frequent speaker at industry conferences and the author of more than 100 articles in technical publications. He has also written the book Networking Windows and co-authored Windows NT Unleashed.     

 

This was first published in March 2011
