Application switches -- also known as load balancers -- are essential to application delivery and performance in both enterprise networks and e-commerce. As with any other technology, an RFP process for an application switch means extensive testing.
Application switch testing isn't always easy, since it can be difficult to simulate the volume of requests and traffic that a load balancer would need to handle. Also, now that application switches do much more than load balancing, network administrators find themselves testing for a plethora of functions.
The Tolly Group has developed a recipe book for testing application switches as part of the Tolly Common RFP Series. The RFP series provides testing directions for a spectrum of technologies ranging from LAN switch interoperability to virtual server performance. In Tolly Common RFP: Application Switches, the company discusses application switch testing in a range of scenarios from small businesses with two to 15 servers, all the way up to a data center with 100-plus servers.
Application switch testing: Features to consider
From the simplest perspective, application switches front for a complex set of back-end servers, presenting the appearance of a single name and IP address. But they do much more than send traffic to the back-end server farm. Application switches process data in layers 4 to 7 of the OSI model.
What's more, they do deep packet inspection (DPI) to make balancing decisions based on a higher level of information. For example, HTTP text data might be handled by one group of servers while HTTP image data is processed by another.
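The layer 4 versus layer 7 distinction above can be illustrated with a short sketch. This is not any vendor's implementation; the pool addresses and helper names are invented for illustration. A layer 4 decision never reads the payload, while a layer 7 decision parses the HTTP request and routes by content type:

```python
from itertools import cycle

# Hypothetical back-end pools; addresses are illustrative only.
TEXT_POOL = ["10.0.1.10", "10.0.1.11"]
IMAGE_POOL = ["10.0.2.10", "10.0.2.11"]

_l4_rotation = cycle(TEXT_POOL + IMAGE_POOL)

def pick_layer4(_packet: bytes) -> str:
    """Layer 4: rotate across all servers; the payload is never inspected."""
    return next(_l4_rotation)

def pick_layer7(http_request: str) -> str:
    """Layer 7: inspect the requested URL and route by content type."""
    path = http_request.split()[1]          # e.g. "GET /logo.jpg HTTP/1.1"
    pool = IMAGE_POOL if path.endswith((".jpg", ".png", ".gif")) else TEXT_POOL
    return pool[hash(path) % len(pool)]     # stable server choice within the pool
```

Note that both functions return a back-end server, which is why a naive test cannot tell them apart -- the verification procedures discussed later address exactly this.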
Higher up the stack, application switches can be called upon to deliver other functions that help enhance the user experience. Some devices support SSL termination functions that offload processing burden from the target servers. This should allow the servers to be more responsive to user requests. Some application switches can also dynamically apply compression to server responses, which reduces the bandwidth required for delivery.
Load balancers must also maintain integrity while under attack. Since they often face the Internet and function as a termination point in the TCP/IP network, they will inevitably be probed by hackers. The application switch must be able to detect such attacks and keep functioning when they occur.
Challenges in load balancer testing
Testing application switches in your network means finding out how they function in all of these scenarios.
In theory, load balancers can be tested with actual clients sending requests to actual back-end servers, but in practice this is not a sound approach. The volume of clients and servers required to generate a load sufficient to tax many load balancers makes the use of real clients and servers infeasible both logistically and economically. Even if the client/server infrastructure were available, a method would still have to be devised to generate traffic and collect accurate performance statistics.
Tools on the market for application switch testing
Recognizing these challenges, a number of vendors have developed testing tools that implement both the client and server elements needed to test load balancers. These tools can emulate the data streams that would emanate from clients and servers. Because they are purpose-built to generate traffic, they can usually generate the high loads required to stress load balancers. They are also designed with repeatable workloads, which is necessary to make performance comparisons across devices.
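The repeatability requirement mentioned above is worth making concrete. A minimal sketch (the function and URL mix here are invented for illustration) is to derive the request sequence from a fixed seed, so that every run -- and every device under test -- sees an identical workload:

```python
import random

def build_workload(n_requests: int, mix: dict, seed: int = 42) -> list:
    """Generate a repeatable request list: the same seed always yields the
    same sequence, which is what makes cross-device comparisons valid."""
    rng = random.Random(seed)
    urls = [url for url in mix for _ in range(round(mix[url] * n_requests))]
    rng.shuffle(urls)
    return urls

# Illustrative mix: 70% small HTML pages, 30% images
workload = build_workload(1000, {"/index.html": 0.7, "/logo.jpg": 0.3})
```

Commercial test tools apply the same principle at far higher volumes, with full TCP state machines on both the emulated client and server sides.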
Tolly's application switch RFP guide covers a host of tests, including basic device throughput at layer 4, connection rate and response time at layers 4 and 7, transaction rate at layer 7, SSL offload performance, caching, and distributed denial of service (DDoS) resilience. In this article, learn more about layers 4 and 7 test validation, the relevance of testing caching, and measuring DDoS performance.
Layers 4 and 7 test validation
The primary difference across the tests is how far into the packet the application switch must look (i.e., how far up the protocol stack) before making its decision. In fact, if you aren't careful, you might think you are running a layer 7 test when you are actually running a layer 4 test. Just because you send HTTP traffic, for example, doesn't prove that the DUT is evaluating that traffic at the application layer.
While many variations are possible with respect to features involving special processing of upper-layer, application-specific protocols, the following tables provide information on the most common layers 4 to 7 test verification procedures.
| Scenario | Balancing decision | Verification procedure |
| --- | --- | --- |
| Layer 4 | Make balancing decisions at the IP address level. | Verify that traffic directed toward a single "front end" IP address is distributed to two or more servers on the back end. |
| Layer 7 Web (http) | Make balancing decisions at the http content level. | Configure the DUT to send requests for .html objects to one server and .jpg image objects to another server. Verify that traffic is being directed accordingly. |
| Layer 7 Secure Web (https) | Offload encrypt/decrypt functions, then make balancing decisions at the http content level. | Verify as with the http example. Additionally, use a packet capture tool to verify that traffic on the client side of the DUT is encrypted and that traffic between the DUT and the servers is unencrypted. (Usually one need only glance at the contents of a packet to determine whether it is encrypted.) |
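The "glance at the packet" check for the https case can even be automated. As a rough heuristic sketch (the threshold and function name are assumptions, not part of any standard tool), TLS ciphertext looks nearly random -- close to 8 bits of entropy per byte -- while plaintext HTTP sits much lower:

```python
import math
from collections import Counter

def looks_encrypted(payload: bytes, threshold: float = 7.0) -> bool:
    """Rough heuristic: near-random payloads (high byte entropy) are
    probably ciphertext; readable HTTP scores far below the threshold."""
    if not payload:
        return False
    counts = Counter(payload)
    n = len(payload)
    entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
    return entropy > threshold
```

Applied to captures from both sides of the DUT, this flags whether SSL termination is actually happening where the table above says it should.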
Test tools normally provide real-time reporting during the test and more detailed analysis afterward. Testers might also want to use the reporting functions of the load balancer under test for secondary data points. After the test tools save the test results, testers take the values from only the portion of the run when the device is observed to be in a steady state and average those readings (typically covering a minute or more) to calculate the final result.
Since the tester needs to make an individual assessment as to which results to bracket as the "steady state" for calculations, results can vary. This is offset by the fact that the steady state result is typically made up of an average of 100 or more samples.
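The steady-state calculation described above reduces to a simple windowed average over per-second samples. A minimal sketch (the sample values and bracket indices are invented for illustration):

```python
def steady_state_average(samples, start, end):
    """Average only the bracketed steady-state window of per-second samples,
    excluding ramp-up and ramp-down readings."""
    window = samples[start:end]
    return sum(window) / len(window)

# Ramp-up, 60 s of steady state, ramp-down (illustrative connection rates)
samples = [100, 400, 800] + [1000] * 60 + [600, 200]
result = steady_state_average(samples, 3, 63)   # bracket the flat region
```

Including even a few ramp-up samples in the window would drag the average down, which is exactly the tester-judgment variability the paragraph above describes.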
Server optimization: Caching relevance
Much of the data delivered by the target server -- and through the application switch -- is static data. For example, objects like company logos and product images are unlikely to change from day to day. For such objects, it is unproductive for an application switch to request these unchanging objects repeatedly and equally unproductive for the source server to fetch and transmit them.
Application switches that implement caching can reduce load on the target server, freeing up the source server to do more productive work and potentially improve client response time.
The most challenging part of building a caching test is to design it so that the results are meaningful to your deployment rather than simply a display of application switch power. While an application switch vendor might choose to highlight a test that illustrates huge amounts of cached data being fed to clients, those results will probably provide little indication of how caching might improve performance.
In order to build a relevant caching test and/or interpret the results of an existing cache test report, you first need to evaluate the traffic profiles of data requested from your Web servers. Establish what caching support the application switch under test offers, then analyze the actual traffic streams your servers are delivering. From this, determine approximately how much traffic is:
- A data type that is supported by your caching switch.
- Repetitive data that is being delivered by the server.
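A rough first pass at this analysis can be scripted from a Web server access log. This sketch assumes (purely for illustration) which object types the switch can cache, and counts repeat requests for those types -- the first fetch of any object must reach the server regardless:

```python
from collections import Counter

# Assumed cacheable types for the switch under test -- verify against its docs.
CACHEABLE_TYPES = (".jpg", ".png", ".gif", ".css", ".js")

def cacheable_fraction(request_log):
    """Fraction of requests that are both a supported type and a repeat hit."""
    counts = Counter(request_log)
    hits = sum(
        c - 1                       # the first fetch always goes to the server
        for url, c in counts.items()
        if url.endswith(CACHEABLE_TYPES)
    )
    return hits / len(request_log)

# Illustrative log: a heavily repeated logo, dynamic pages, a stylesheet
log = ["/logo.jpg"] * 50 + ["/index.html"] * 40 + ["/style.css"] * 10
```

Here only 58% of requests could even theoretically be served from cache, which bounds the improvement any caching test result can promise for this traffic profile.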
Test tools do not typically have test profiles that are tuned specifically for evaluating caching in any realistic way.
Distributed denial of service (DDoS) performance test
Application switches are frequently deployed where one's internal network meets the public Internet. Thus, the application switch is a prime target for attack. The switch itself is not the direct target of the attack, but attackers realize that it is a gateway for users to access data within the corporation. By making the application switch incapable of processing actual user data, the attacker can ultimately cause denial of service to users.
The most common method of attempting to prevent normal processing is to send particular protocol packets that trigger the switch to allocate resources which are then unavailable for legitimate sessions.
Testing for DDoS performance examines how well an application switch can detect and defend itself against a denial of service attack.
Where the mainstream performance tests attempt to find maximum performance, this test establishes a baseline of throughput for valid traffic as its starting point. Then, with the DUT processing a steady stream of valid traffic, testers initiate attacks against the DUT to measure how many it can fend off before throughput is degraded.
There is no official or de facto standard for this test -- or even a commonly followed practice to establish how much traffic an application switch should be able to process while under attack. What is important is what is required for the deployment scenario you envision.
While our test uses 1 Gbps as the rate for the valid traffic, it can be any amount of traffic you determine to be relevant. Keep in mind that you will need to choose an amount of bandwidth below the maximum to allow for the attack traffic to be inserted into the test.
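The metric that comes out of this test is simply how far valid-traffic throughput falls from the baseline while the attack runs. A minimal sketch of that calculation (the 850 Mbps figure is invented for illustration; the 1 Gbps baseline follows the article):

```python
def degradation_pct(baseline_mbps: float, under_attack_mbps: float) -> float:
    """Percent of valid-traffic throughput lost while the DUT is under attack."""
    return 100.0 * (baseline_mbps - under_attack_mbps) / baseline_mbps

# Baseline of 1 Gbps valid traffic; suppose the attack drops it to 850 Mbps
loss = degradation_pct(1000.0, 850.0)
```

A perfect result is 0% degradation at the attack rates you consider realistic; the acceptable threshold is, again, a deployment-specific decision.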
The tester must also decide which attacks are relevant to test. Because different attacks stress DUTs in different ways, it cannot be assumed that a DUT able to resist one particular attack can resist all of them. Attacks commonly included in such testing are:
- ICMP (Ping)
- SYN Flood
- Tear Drop
- Zombie Connection
- Ping of Death
About the author
Kevin Tolly is president and CEO of The Tolly Group, an independent test lab and research firm. Visit commonrfp.com to read The Tolly Group's LAN Switch Interoperability RFP or to learn more about the Tolly Common RFP project.