Upgrading data center links from 10 Gigabit Ethernet to 40 Gigabit Ethernet is the obvious answer to the ever-increasing network traffic that results from improved processor performance and virtualization. But faster backbone links alone will not necessarily improve performance. Upgrading to 40 GbE requires a new level of network performance testing that takes a number of elements into consideration.
Performance problems to consider before a 40 GbE upgrade
Since the IEEE ratified 802.3ba for 40 and 100 GbE, networking vendors have launched a range of products aimed at improving link speed. In particular, they hope to relieve the bottleneck on 10 GbE links between racks that develops as engineers replace 1 GbE server network interfaces with 10 GbE interfaces, sending up to 10 Gbps of traffic from each server toward the top-of-rack switch.
Before upgrading backbone links, there are a host of other problem areas and influencing factors to consider:
Backplane: In some cases, even though vendors have released higher capacity interface cards, switch and router backplane throughput may not be sufficient to support all of the upgraded cards operating at full capacity. It’s important to review product specifications carefully and consult independent product test results before simply replacing interface cards.
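A first sanity check on the backplane is simple arithmetic: compare the aggregate demand of the upgraded line cards against the backplane's switching capacity. A minimal sketch, with entirely hypothetical port counts and capacities:

```python
def oversubscription_ratio(num_ports: int, port_gbps: float, backplane_gbps: float) -> float:
    """Ratio of total line-card demand to backplane capacity.

    A ratio above 1.0 means the backplane cannot carry every port
    at full rate simultaneously.
    """
    return (num_ports * port_gbps) / backplane_gbps

# Hypothetical chassis: 48 ports of 40 GbE against a 1.28 Tbps backplane.
ratio = oversubscription_ratio(48, 40, 1280)
print(f"oversubscription: {ratio:.2f}")  # 1.50, so the chassis is oversubscribed
```

Vendor data sheets often quote full-duplex capacity; make sure the comparison uses the same convention on both sides.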
The good news is that testing in this new environment is simplified by the fact that while data rates have increased, nothing affecting higher protocol layers has changed. All of the familiar switching and routing technologies continue to operate with no extensive reconfiguration needed.
Upgrading to a higher speed backbone may reveal problems elsewhere in the network. After installing the new hardware, it's important to perform a series of network tests to determine whether the maximum in performance improvement has been achieved.
Simultaneously upgrading server interfaces to 10 GbE and the backbone to 40 GbE: It’s best to upgrade the backbone links first and upgrade server interfaces only after verifying that the backbone operates successfully. Finally, after upgrading server interfaces to enable higher per-server throughput, verify that the servers’ virtual switches have not become a bottleneck.
Upgrading network performance testing equipment: Many existing testing and monitoring products can’t keep up with the higher data rates. Products that generate load and monitor performance in 40/100 GbE environments are available from test equipment vendors such as Spirent Communications and Ixia.
Post-upgrade: What's needed for network performance testing?
End-to-end throughput testing: This focuses on the primary reason for upgrading—the need to move greater volumes of data from server to server.
Tests should include cases where traffic passes through a single backbone switch and where traffic passes through multiple switches. Flows from multiple servers must be generated either using hardware load simulators or software running within virtual machines to simulate operational patterns.
Network performance testing scenarios should include cases where load generators send streams of very short packets, as well as those with full-length packets and packet bursts. Use network monitoring hardware and software to spot locations where bottlenecks result in long packet queues or dropped packets.
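To illustrate the short-versus-full-length packet comparison, the sketch below measures TCP throughput for two payload sizes over the loopback interface. It is a stand-in for real hardware load generators, not a substitute for them; the durations and payload sizes are arbitrary:

```python
import socket
import threading
import time

def drain(server_sock):
    """Accept one connection and read until the sender closes it."""
    conn, _ = server_sock.accept()
    while conn.recv(65536):
        pass
    conn.close()

def measure_throughput(payload_len, seconds=0.5):
    """Send fixed-size payloads over loopback TCP; return Mbit/s achieved."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    sink = threading.Thread(target=drain, args=(srv,))
    sink.start()
    cli = socket.create_connection(srv.getsockname())
    buf = b"x" * payload_len
    sent = 0
    start = time.monotonic()
    while time.monotonic() - start < seconds:
        cli.sendall(buf)
        sent += len(buf)
    cli.close()
    sink.join()
    srv.close()
    return sent * 8 / (time.monotonic() - start) / 1e6

# Short payloads stress per-packet overhead; long ones stress raw bandwidth.
short = measure_throughput(64)
full = measure_throughput(1460)
print(f"64-byte: {short:.0f} Mbit/s  1460-byte: {full:.0f} Mbit/s")
```

Running the same pair of measurements across the real backbone, before and after the upgrade, gives a like-for-like baseline.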
It’s also important to test sensitive applications to make sure the increased network throughput has not caused problems. For example, cluster heartbeat packets may arrive out of order after traveling through upgraded top-of-rack and backbone switches.
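One way to check an order-sensitive stream such as cluster heartbeats is to watch its sequence numbers in a capture. A minimal sketch, assuming the packets carry a monotonically increasing sequence number:

```python
def count_reorders(seq_numbers):
    """Count packets arriving with a sequence number lower than one
    already seen, a simple out-of-order indicator for heartbeat or
    other order-sensitive streams."""
    highest = None
    reordered = 0
    for seq in seq_numbers:
        if highest is not None and seq < highest:
            reordered += 1
        else:
            highest = seq
    return reordered

# Hypothetical capture: packet 3 overtook packet 2 on a parallel path.
print(count_reorders([1, 3, 2, 4, 5]))  # prints 1
```

A nonzero count under load points at multipath or per-lane load balancing in the upgraded switches, which is where tuning should start.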
TCP uses congestion control algorithms to limit the rate at which it transmits packets, so it may be necessary to modify parameters such as send and receive window sizes to achieve the full throughput the network can support.
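On the host side, larger socket buffers are one such parameter. A sketch of requesting them before a bulk transfer; the 4 MB figure is illustrative, not a recommendation, and the kernel may clamp or adjust the value (on Linux, against limits such as net.core.rmem_max):

```python
import socket

# Illustrative buffer size; the kernel decides what is actually granted.
WINDOW = 4 * 1024 * 1024

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, WINDOW)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WINDOW)

# Read back what the kernel actually allocated (Linux reports double
# the requested value because it includes bookkeeping overhead).
sndbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcvbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"effective buffers: send={sndbuf}, receive={rcvbuf}")
sock.close()
```

Repeating the throughput tests after each tuning change shows whether the window, and not the network, was the limit.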
However, because each network and pattern of usage is different, it’s not possible to specify the series of tests required for a specific network. Since it is not feasible to test every possible combination of data flow that may occur in the operational network or the path taken by each flow, it’s advisable to create a more general set of tests with a large number of packet flows between racks.
Testing for jitter: A variation in packet arrival rates can cause unacceptable pauses in voice and video transmissions. While other simulated applications place load on the network, it’s best to use hardware test generators to create streams of equally spaced packets. Measure the variation in arrival time to determine if jitter remains within acceptable limits. If excessive jitter is detected, it may be necessary to modify VLAN priority configurations for packets that require constant delivery rates.
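A simplified software version of this measurement sends equally spaced UDP datagrams over loopback and reports the variation in inter-arrival times. It uses standard deviation as a simple jitter proxy rather than the RFC 3550 smoothed estimator, and loopback numbers only illustrate the method, not a real path:

```python
import socket
import statistics
import time

def measure_jitter(count=200, interval=0.002):
    """Send equally spaced UDP datagrams over loopback and return the
    standard deviation of inter-arrival times in milliseconds."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    addr = rx.getsockname()
    arrivals = []
    for _ in range(count):
        tx.sendto(b"probe", addr)
        rx.recv(64)                     # blocks until the datagram lands
        arrivals.append(time.monotonic())
        time.sleep(interval)            # pace the stream evenly
    tx.close()
    rx.close()
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return statistics.stdev(gaps) * 1000.0

print(f"jitter (stddev of inter-arrival): {measure_jitter():.3f} ms")
```

Against a real network, the sender and receiver would run on separate hosts at either end of the path under test, with the comparison made while background load generators are active.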
Fibre Channel over Ethernet (FCoE): This type of convergence requires zero packet loss. Disk traffic can be expected to grow along with network load as application performance increases, so verify that priority settings chosen for the lower-bandwidth network hardware are still appropriate for the upgraded network.
Cable testing should not be an issue in a 40 GbE upgrade: Earlier Ethernet upgrades required testing existing cables to determine whether they could support the higher rate, but cables used for 10 GbE links must be replaced in any case.
Short-reach 40 GbE does not travel over a single fiber-optic strand. Instead, traffic in each direction is split across four strands, each carrying 10 Gbps, typically over a 12-fiber MPO cable in which four strands go unused. Because existing 10 GbE cabling cannot carry these parallel links, cable replacement must accompany the change in switch and router interfaces. The integrity of the new cables should have been tested when they were installed.
While the primary goal of network performance testing is to verify that the maximum benefit was gained by upgrading network links, the time and effort expended can be used for a second benefit: Tests done now can help estimate how soon the next upgrade will be required. By continuing to increase generated traffic levels beyond current or near future expected operational levels, it is possible to determine the limits of the 40 GbE backbone. Undoubtedly, an upgrade to 100 GbE will be necessary at some future point.
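Once testing has established the backbone's usable limit, the headroom estimate reduces to a small calculation. A sketch, assuming compound annual traffic growth and entirely hypothetical numbers:

```python
import math

def years_until_saturation(current_gbps, limit_gbps, annual_growth):
    """Estimate years until traffic reaches the measured backbone limit,
    assuming compound annual growth."""
    if current_gbps >= limit_gbps:
        return 0.0
    return math.log(limit_gbps / current_gbps) / math.log(1 + annual_growth)

# Hypothetical inputs: 12 Gbps of peak traffic today, a measured 36 Gbps
# usable limit on the 40 GbE backbone, and 40% yearly traffic growth.
print(f"{years_until_saturation(12, 36, 0.40):.1f} years of headroom")
```

The measured limit from load testing, rather than the nominal 40 Gbps line rate, is the right ceiling to plug in, since queuing and protocol overhead take effect well before the links are nominally full.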
David B. Jacobs, founder of The Jacobs Group, has more than 20 years of networking industry experience. He has managed leading-edge software development projects and consulted to Fortune 500 companies as well as software start-ups.