Optimization of the data center with 10 Gigabit Ethernet

Data center networking has taken advantage of 10 Gigabit Ethernet. Optimize your interconnection equipment and reduce latency with more efficient LAN technology.

Ethernet has long been the dominant LAN technology. Now the availability of 10 Gigabit Ethernet (10 GbE) has enabled new applications in the data center. Two factors have made this possible:

  • Competitive price and performance
  • Need for simplified data center networks

Gigabit Ethernet is more than adequate for most LAN applications but is not sufficient for connecting servers to storage area networks (SANs) and network attached storage (NAS) or for server-to-server connections. Fibre Channel and InfiniBand have been widely used for these purposes. Fibre Channel supports rates up to 8 Gbps; common InfiniBand interfaces support 10 Gbps rates, and 20 Gbps server and switch ports are also available.

10 GbE price and performance

While 10 GbE cannot match high-end InfiniBand rates, it is adequate for many data centers. Despite competitive performance, 10 GbE was not quickly adopted.

IEEE Std 802.3ae, the 10 GbE standard, was finalized in 2002, but high component prices slowed initial acceptance. Prices have now come down, and sales of 10 GbE interface cards and switches have accelerated. In August 2008, Dell'Oro Group, a market research firm based in Redwood City, Calif., reported that in the second quarter of 2008, 10 GbE switch port shipments exceeded 250,000 for the first time.

Raw data rates are not the only important factor. Good SAN performance requires low latency between servers and storage. Most Ethernet switches were designed for LAN use and operate in store-and-forward mode: they must receive the entire incoming frame before they begin to transmit it, which adds latency at every hop.

Switch vendors have now addressed this issue by developing switches specifically designed for data center applications. These low-latency, cut-through switches begin transmitting on the outgoing link after reading just enough of the incoming frame to determine its destination. The result is latency competitive with InfiniBand switches.
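
To put rough numbers on the difference, the sketch below compares the serialization delay of the two approaches at 10 Gbps. The frame size and the number of header bytes needed to pick an output port are illustrative assumptions, and real switch latency also includes lookup, queuing and fabric delays that this sketch ignores.

    # Rough serialization-delay comparison: store-and-forward vs. cut-through.
    # Illustrative values only; lookup, queuing and fabric delays are ignored.

    LINK_RATE_BPS = 10e9          # 10 GbE line rate
    FRAME_BYTES = 1500            # assumed full-size Ethernet frame
    HEADER_BYTES = 64             # assume destination is known after ~64 bytes

    def serialization_delay(nbytes, rate_bps):
        """Time to clock nbytes onto (or off of) the wire at rate_bps."""
        return nbytes * 8 / rate_bps

    store_and_forward = serialization_delay(FRAME_BYTES, LINK_RATE_BPS)
    cut_through = serialization_delay(HEADER_BYTES, LINK_RATE_BPS)

    print(f"store-and-forward wait: {store_and_forward * 1e6:.2f} us per hop")
    print(f"cut-through wait:       {cut_through * 1e6:.3f} us per hop")

Even this simplified comparison shows why waiting for a full frame at every hop adds up across a multi-switch path between server and storage.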

Reduced server CPU load and latency

Processing load on server CPUs is a critical performance factor. In most computer systems, the CPU handles the data copies and protocol processing required to support network traffic. As the data rate increases, CPU load increases. At 10 Gbps, processing load on the CPU becomes unacceptable.
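
A commonly cited rule of thumb from this era holds that software TCP/IP processing consumes very roughly 1 Hz of CPU for every 1 bit/s of sustained throughput. The sketch below uses that heuristic, plus an assumed 3 GHz core, purely to illustrate the scaling; neither figure comes from this article.

    # Back-of-the-envelope CPU demand for software TCP/IP processing, using
    # the rough "1 Hz per bit/s" rule of thumb (an assumption, not a
    # measurement) and an assumed 3 GHz core.

    CPU_HZ_PER_BPS = 1.0          # assumed CPU cycles per bit/s of TCP traffic
    CORE_CLOCK_HZ = 3.0e9         # assumed 3 GHz core

    for rate_gbps in (1, 10):
        demand_hz = rate_gbps * 1e9 * CPU_HZ_PER_BPS
        cores = demand_hz / CORE_CLOCK_HZ
        print(f"{rate_gbps:>2} Gbps -> roughly {cores:.1f} cores of a 3 GHz CPU "
              f"spent on protocol processing")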

Network interface card vendors have incorporated three mechanisms on their 10 GbE cards to address this problem.

  1. A TCP/IP offload engine (TOE) is a special-purpose processor located on a network interface card. Use of TOE transfers protocol processing from the CPU to the network interface. CPU application performance improves as a result of reduced load. Latency decreases because the interface card processor is optimized to move frames quickly.
  2. Remote Direct Memory Access (RDMA) provides an efficient mechanism for server-to-server data transfers. It reduces latency and CPU load by copying data from application memory space directly to the interface card. This single hardware-assisted copy eliminates a copy from application memory to the kernel, a second copy to the network device driver, and a third to the interface card (a loose single-host analogy of this copy elimination is sketched after this list).
  3. iWARP (Internet Wide Area RDMA Protocol), developed by the Internet Engineering Task Force (IETF), enables applications on one server to read from or write directly to the memory of applications executing on another server without involving either server's operating system.
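
The sketch below is not RDMA or iWARP. It is a loose, single-host analogy (assuming Linux and Python 3) for the idea of removing intermediate copies: a conventional read()-and-send() loop bounces data through a user-space buffer, while socket.sendfile() lets the kernel hand the file to the network stack without that extra copy. RDMA goes much further, moving data between application memories on two different servers.

    # Loose analogy only: contrasts a copy-heavy send path with a zero-copy
    # path on one host. Real RDMA removes copies between two hosts'
    # application memories; this just removes the user-space bounce buffer.
    # Assumes Linux and Python 3; peer address and file are placeholders.
    import socket

    def send_with_copies(sock, path):
        """read() copies kernel -> user buffer, send() copies user -> kernel."""
        with open(path, "rb") as f:
            while True:
                chunk = f.read(64 * 1024)
                if not chunk:
                    break
                sock.sendall(chunk)

    def send_zero_copy(sock, path):
        """sendfile() lets the kernel move the file without a user-space copy."""
        with open(path, "rb") as f:
            sock.sendfile(f)

    if __name__ == "__main__":
        # Hypothetical peer address, used only for illustration.
        with socket.create_connection(("192.0.2.10", 9000)) as s:
            send_zero_copy(s, "/tmp/example.dat")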

Use of TOE and RDMA requires changes to operating system software, since protocol processing must be handed off to the adapter instead of being performed in the kernel. Microsoft has provided TCP Chimney Offload to support TOE and RDMA in Windows Server 2003, Vista and Windows Server 2008. The OpenFabrics Alliance is developing Linux support for TOE and RDMA.
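
On a Linux host, one way to see which stateless offloads the NIC driver exposes (segmentation, checksum and similar features, which are related to but narrower than a full TOE) is ethtool's feature listing. The sketch below simply wraps that query; it assumes ethtool is installed and that the interface is named eth0.

    # Quick check of NIC offload features on Linux by wrapping `ethtool -k`,
    # which lists stateless offloads such as TCP segmentation offload.
    # Assumes the ethtool utility is installed and eth0 is the interface name.
    import subprocess

    def offload_features(interface="eth0"):
        out = subprocess.run(["ethtool", "-k", interface],
                             capture_output=True, text=True, check=True).stdout
        return {line.split(":")[0].strip(): line.split(":")[1].strip()
                for line in out.splitlines() if ":" in line}

    if __name__ == "__main__":
        feats = offload_features("eth0")
        for name in ("tcp-segmentation-offload", "generic-receive-offload"):
            print(name, "->", feats.get(name, "not reported"))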

Simplifying the data center network

Replacement of Fibre Channel and InfiniBand with Ethernet results in cost savings and added flexibility. Fibre Channel and InfiniBand are specialized technologies connecting servers and storage. Neither can extend beyond the data center.

A single 10 GbE network and a single switch can support the LAN, server-to-server communications and the SAN, and can connect to the wide-area network. Data center staffs are familiar with Ethernet and IP network technology, so replacing multiple networks with a single network simplifies staff training and support costs.

Both Fibre Channel and InfiniBand require cable types specific to each technology. 10 GbE uses standard twisted pair or fiber cables. As a further bonus, a single high-capacity switch consumes less power and produces less heat than two smaller switches.

Ethernet Virtual LANs (VLANs) make it possible to shift resources quickly as traffic patterns change. LAN, server-to-server and SAN traffic can be allocated individual VLANs on the same cable if traffic level permits, or moved to separate cables as traffic increases.
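
As a concrete illustration, on a Linux server this division can be expressed as 802.1Q tagged subinterfaces on a single 10 GbE port. The sketch below wraps the standard iproute2 commands; the interface name and VLAN IDs are hypothetical examples, and the matching switch-side configuration is not shown.

    # Create 802.1Q tagged subinterfaces for separate traffic classes on one
    # physical 10 GbE port, using the standard iproute2 commands.
    # Interface name and VLAN IDs are hypothetical examples.
    import subprocess

    VLANS = {"lan": 10, "server_to_server": 20, "san": 30}

    def add_vlan(parent, vlan_id):
        """Create and bring up a tagged subinterface such as eth0.10."""
        subprocess.run(["ip", "link", "add", "link", parent,
                        "name", f"{parent}.{vlan_id}",
                        "type", "vlan", "id", str(vlan_id)], check=True)
        subprocess.run(["ip", "link", "set", f"{parent}.{vlan_id}", "up"],
                       check=True)

    if __name__ == "__main__":
        for traffic_class, vid in VLANS.items():
            add_vlan("eth0", vid)
            print(f"VLAN {vid} ({traffic_class}) added on eth0")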

Fibre Channel over Ethernet and iSCSI

Fibre Channel over Ethernet (FCoE) has been developed by a group of equipment vendors to preserve customers' investment in Fibre Channel while gaining the advantages of a single network technology. Fibre Channel frames are encapsulated in Ethernet frames and carried over the Ethernet network instead of requiring dedicated Fibre Channel interfaces, cables and switches.
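
On the wire, FCoE traffic is identified by its own EtherType, 0x8906. The sketch below packs a minimal Ethernet header carrying that EtherType around an opaque placeholder payload; the MAC addresses are placeholders, and the real encapsulation adds FCoE header and trailer fields that are omitted here.

    # Minimal illustration of FCoE framing: an Ethernet header whose EtherType
    # (0x8906) marks the payload as an encapsulated Fibre Channel frame.
    # MAC addresses and payload are placeholders; real FCoE header and trailer
    # fields are omitted.
    import struct

    FCOE_ETHERTYPE = 0x8906

    def ethernet_header(dst_mac: bytes, src_mac: bytes, ethertype: int) -> bytes:
        return struct.pack("!6s6sH", dst_mac, src_mac, ethertype)

    dst = bytes.fromhex("0efc00000001")   # placeholder destination MAC
    src = bytes.fromhex("02000000000a")   # placeholder source MAC
    fc_frame = b"\x00" * 64               # stand-in for an encapsulated FC frame

    frame = ethernet_header(dst, src, FCOE_ETHERTYPE) + fc_frame
    print(f"frame length {len(frame)} bytes, EtherType 0x{FCOE_ETHERTYPE:04x}")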

Similarly, iSCSI provides a method for preserving investment in existing systems while benefiting from a single network. Storage connected via iSCSI appears to a server to be a directly connected SCSI disk drive. Standard SCSI commands are transported by the TCP/IP protocols over Ethernet.
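
In practice, attaching iSCSI storage typically begins with target discovery over TCP (port 3260 by default). The sketch below wraps the open-iscsi iscsiadm discovery command; it assumes open-iscsi is installed, and the portal address is a placeholder.

    # Discover iSCSI targets by wrapping the open-iscsi `iscsiadm` utility.
    # The portal address is a placeholder; assumes open-iscsi is installed.
    # iSCSI itself runs over TCP, by default on port 3260.
    import subprocess

    def discover_targets(portal="192.0.2.50:3260"):
        result = subprocess.run(
            ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
            capture_output=True, text=True, check=True)
        return result.stdout.splitlines()

    if __name__ == "__main__":
        for line in discover_targets():
            print(line)   # one discovered target portal and name per line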

Ethernet, unlike Fibre Channel and InfiniBand, does not guarantee that packets will never be lost in transit. To close this gap, work is currently under way in the IEEE to develop Ethernet enhancements that eliminate packet loss by providing flexible prioritization and managing network congestion.

This work, combined with further price reductions, performance improvements, reduced power and the need for simplification, will continue to increase the use of 10 GbE in the data center.

About the author:
David B. Jacobs of The Jacobs Group has more than 20 years of networking industry experience. He has managed leading-edge software development projects and consulted to Fortune 500 companies as well as software startups.


This was first published in April 2009
