With prices steadily coming down out of the stratosphere, 10 Gigabit Ethernet (GbE) switches are becoming viable for more extensive deployments in the enterprise. The raw bandwidth available is unprecedented -- a single port offers the raw capacity of 1,000 original 10 Mbps Ethernet links. But as enterprises look toward deployment, they will learn that it is crucial to test 10 GbE switches not just for throughput but also for latency. No matter how powerful the switch, high latency can kill its usefulness.
What is 10 Gigabit Ethernet switch latency?
Latency is simply another word for delay. With respect to switches, we are essentially concerned with how long a data packet remains in the switch from input to output. This delay, naturally, is added to delay incurred at other points in the network between the end user and the target application.
To the end user, latency translates into longer wait times. Given the inherent capacity of 10 GbE switches, a given switch will probably be servicing a fairly large population of users. Thus, a single switch has the potential to become a major bottleneck for your organization.
What causes 10 Gigabit Ethernet switch latency?
Network switch latency can be caused by a number of factors usually associated, directly or indirectly, with the internal architecture of the switch. At some point, some resource becomes constrained and data packets sit waiting in buffers for that resource. The clock ticks and the result is latency.
High-end switches often do much more than simply switch packets at Layer 2. Many 10 GbE switches provide Layer 4-7 switching, which requires deeper inspection of packet content. In many cases, this is accompanied by SSL offload.
The cryptographic load of SSL processing can drain the processing power of a switch and increase latency. For this reason, some vendors implement a hardware offload of the SSL functions to improve overall performance. Vendors may also implement a distributed architecture inside the switch in order to avoid the bottlenecks that add latency.
Switching modes: Cut-through vs. store-and-forward
Switches are designed to process traffic according to standards, but there is no standard for switch design. Even switches from the same vendor can vary in design: it is not uncommon, for example, for a vendor to build its own high-end switches but white-label another vendor's switch as a lower-end product.
If you were building switched LANs in the early 1990s, you almost certainly remember the "architecture war" that erupted with respect to whether "cut-through" or "store-and-forward" was the better switch architecture. Cut-through switches begin forwarding traffic on the output path as soon as they have enough information to identify that path -- typically before the entire data packet has finished arriving at the input port. Store-and-forward switches wait until the entire data packet arrives before beginning the data transfer on the onward path.
While I hadn't seen references to these architectures for a long time (store-and-forward became the standard approach), some vendors in recent years have been extolling the merits of cut-through. The most notable of these is Cisco, whose Nexus 5000 data center switch is identified as using a cut-through architecture.
If nothing else, cut-through can theoretically reduce latency, especially for larger packets, because -- assuming the switch doesn't need to do deep inspection on the data -- it can get it started on its outbound journey potentially earlier than in a store-and-forward scenario. But beware -- the benefit may be only academic if the improvement is only a few microseconds.
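A quick back-of-the-envelope calculation shows why the improvement is often only a few microseconds. A store-and-forward switch must buffer the entire frame before forwarding it, so it adds roughly one serialization delay (frame size divided by line rate) that a cut-through switch avoids. The sketch below works through the arithmetic, assuming an otherwise idle 10 Gbps port and ignoring the switch's internal processing time:

```python
# Extra delay a store-and-forward switch adds over cut-through:
# the whole frame must arrive before transmission begins, so the
# added delay is roughly one serialization time (size / line rate).
# Assumes an idle 10 Gbps port; internal processing time ignored.

LINE_RATE_BPS = 10_000_000_000  # 10 Gbps

def serialization_delay_us(frame_bytes: int, line_rate_bps: int = LINE_RATE_BPS) -> float:
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / line_rate_bps * 1e6

for size in (64, 512, 1500, 9000):  # common frame sizes, incl. jumbo
    print(f"{size:>5}-byte frame: {serialization_delay_us(size):6.2f} us")
```

Even a 1,500-byte frame serializes in about 1.2 microseconds at 10 Gbps, so the store-and-forward penalty only becomes meaningful for jumbo frames or latency-sensitive applications that count every microsecond.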
Simulating traffic load for testing 10 Gigabit Ethernet switch latency
Latency can be measured quickly and accurately by commercial switch test tools such as those offered by Ixia and Spirent Communications, so basic testing is not an issue. The challenge is that a valid and useful latency measurement must be taken while the switch is under load.
More importantly, the load used for testing must be representative of the load and traffic profile you expect to see once the switch is deployed. For example, latency results when the switch is forwarding unencrypted layer 7 traffic tell you very little about the latency of the switch when it is handling SSL transactions.
Fortunately, today's traffic generators are quite sophisticated, and you can easily configure your test harness to generate streams of multi-protocol traffic that reflect your actual environment. Statistics from existing network gear, or even monitoring the existing network with a humble network analyzer like Wireshark, can provide you with the information you need to understand the protocol and load characteristics, serving as a starting point in your benchmarking.
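One simple starting point is to convert observed per-protocol byte counts (such as those shown in Wireshark's "Protocol Hierarchy" statistics) into a percentage mix you can feed to a traffic generator. A minimal sketch -- the protocol names and byte counts below are illustrative, not from a real capture:

```python
# Turn observed per-protocol byte counts (e.g., read off Wireshark's
# "Protocol Hierarchy" statistics window) into a percentage traffic
# mix for configuring a traffic generator.
# The counts below are made-up illustration values.

observed_bytes = {
    "HTTP": 4_200_000_000,
    "HTTPS/SSL": 2_800_000_000,
    "SMB": 900_000_000,
    "DNS": 50_000_000,
}

total = sum(observed_bytes.values())
traffic_mix = {proto: round(b / total * 100, 1) for proto, b in observed_bytes.items()}

for proto, pct in traffic_mix.items():
    print(f"{proto:>10}: {pct}% of offered load")
```

The resulting percentages map directly onto the per-stream load weights most commercial generators let you configure, which keeps your benchmark load anchored to what your network actually carries.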
Going beyond traffic type to size for switch testing
Traffic type isn't the only critical variable when benchmarking latency; object size matters even more. A 64-byte chunk of data will certainly get through a device faster -- and thus with lower latency -- than an 8-Kbyte chunk, so object size must always be reported alongside any latency figure. Again, statistics from your existing network will tell you the most common object sizes in your environment. And it is no surprise that vendors wanting to show the lowest possible latency almost invariably run their tests with 64-byte data objects.
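The numbers make the point. Wire time alone scales linearly with size, so a 64-byte object takes a small fraction of the time an 8-Kbyte object does at the same line rate. A minimal sketch, assuming a 10 Gbps path and ignoring per-object protocol overhead:

```python
# Why vendors benchmark with 64-byte objects: wire time scales
# linearly with size, so small objects produce the lowest possible
# latency numbers. Assumes a 10 Gbps path; protocol overhead ignored.

def wire_time_us(size_bytes: int, rate_bps: int = 10_000_000_000) -> float:
    """Time to put an object on the wire, in microseconds."""
    return size_bytes * 8 / rate_bps * 1e6

small, large = 64, 8 * 1024  # 64-byte vs. 8-Kbyte objects
print(f"64 B : {wire_time_us(small):.4f} us")
print(f"8 KB : {wire_time_us(large):.4f} us")
print(f"ratio: {wire_time_us(large) / wire_time_us(small):.0f}x")
```

The 128-to-1 ratio is exactly why a latency figure quoted without an object size tells you very little about how the switch will behave with your traffic.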
Testing with concurrent connections
Finally, latency must be measured when a system is running a number of concurrent connections and session setup/teardown rates that are relevant for your environment. All of these various load factors will affect latency and, ultimately, the quality of experience for your end users.
In forthcoming articles, I'll discuss how to deal with situations where your test tool firepower isn't sufficient to push your core switch to its limits. And I'll look at the key elements for evaluating edge switches, where bandwidth and throughput become much different issues than with core switches.
About the author: Kevin Tolly is president and CEO of The Tolly Group, an independent test lab and research firm that publishes a series of network testing strategies called the Tolly Common Test Plan. You can read more of The Tolly Group's reports in the Tolly Common Test Plan series.