As WAN acceleration becomes more critical to business operations, the average size and scope of deployment is steadily increasing. This creates a variety of questions pertaining to product scalability. What happens when new applications are deployed? How easy and cost effective is it to add new offices? How is performance affected by network load? Do higher-capacity WAN links behave in the same way as lower-capacity links?
Numerous factors affect the scalability of a WAN acceleration solution, from hardware architecture to ease of use. I will address the top considerations for WAN acceleration scalability over the course of several columns. Here are the first two:
1. WAN throughput
Approximately one third of all enterprises have data centers with 45 Mbps or higher WAN connections, 15% of which are over 100 Mbps. Large WAN links are required to connect many remote offices to centralized resources (i.e., to support a large "hub and spoke" architecture), and to support data center-to-data center traffic.
In addition, clustering multiple appliances to reach these capacities often reduces performance in large networks. When only one appliance is deployed, successive requests for data are guaranteed to pass through that device, which makes it easy to detect and eliminate duplicate data across the WAN. When multiple appliances are deployed, successive requests for the same data may be handled by different devices, limiting the effectiveness of data reduction and compression technologies.
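The cache-splitting effect can be illustrated with a toy simulation. The request counts, the number of distinct blocks, and the random load-balancing policy below are illustrative assumptions, not any vendor's design:

```python
import random

random.seed(1)

# Hypothetical workload: 1,000 requests spread over 100 distinct data
# blocks, so roughly 900 requests are repeats that dedup could eliminate.
requests = [random.randrange(100) for _ in range(1000)]

def dedup_hits(num_appliances):
    """Count requests served from a dedup cache when each request is
    load-balanced at random across `num_appliances` devices, each
    holding its own independent cache."""
    caches = [set() for _ in range(num_appliances)]
    hits = 0
    for block in requests:
        cache = caches[random.randrange(num_appliances)]
        if block in cache:
            hits += 1
        else:
            cache.add(block)  # first sighting on this device: a WAN miss
    return hits

print(dedup_hits(1))  # one appliance sees every repeat
print(dedup_hits(4))  # cluster: repeats often land on a cache that has never seen the block
```

With a single appliance every repeat is a cache hit; with four, each device must learn each block independently, so a meaningful share of repeats still cross the WAN in full.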
WAN throughput can vary with acceleration techniques. For example, when only latency mitigation such as TCP window sizing and selective acknowledgment is employed, it is fairly easy to exceed 600 Mbps of throughput in a single appliance. It is more challenging, however, to achieve high throughput when more advanced techniques such as data reduction are employed. Enterprises should be aware of this distinction, making their vendors highlight the maximum throughput that can be achieved (with a subset of features) separately from the throughput that is expected when all features are enabled.
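The window-sizing point can be made concrete with the bandwidth-delay product: a single TCP flow can never exceed its window size divided by the round-trip time. A short sketch, where the 80 ms RTT and 64 KB default window are illustrative assumptions:

```python
def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    """Throughput ceiling of one TCP flow: window / round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

def window_needed_bytes(link_mbps, rtt_ms):
    """Window required to fill a link at a given RTT
    (the bandwidth-delay product)."""
    return link_mbps * 1e6 / 8 * (rtt_ms / 1000)

# A default 64 KB window on an 80 ms coast-to-coast RTT:
print(round(max_tcp_throughput_mbps(65535, 80), 2))  # roughly 6.5 Mbps per flow
# Window needed for one flow to fill a 45 Mbps link at the same RTT:
print(round(window_needed_bytes(45, 80)))            # 450,000 bytes
```

This is why window sizing alone can lift throughput dramatically: the bottleneck is arithmetic, not processing, which is what makes 600 Mbps achievable with latency mitigation but much harder once every byte must also pass through data-reduction logic.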
It is often useful to perform a "sanity check" on a vendor's throughput claims by dividing LAN capacity by WAN capacity. If, for example, an appliance supports only 1 Gbps of LAN traffic and the vendor is claiming 600 Mbps of WAN throughput, then that product is delivering less than 2x improvement when fully loaded. That is fine when latency mitigation is all that is required, as discussed above, but that ratio is not enough to support the massive performance gains that are achieved when doing data reduction across the WAN, which can easily average in the 10-20x range.
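The sanity check is simple arithmetic; a sketch using the figures from the paragraph above:

```python
def acceleration_ratio(lan_mbps, wan_mbps):
    """Best-case improvement factor: LAN-side traffic delivered per
    unit of WAN capacity consumed."""
    return lan_mbps / wan_mbps

# The example from the text: 1 Gbps of LAN capacity against a
# claimed 600 Mbps of WAN throughput.
print(acceleration_ratio(1000, 600))  # under 2x when fully loaded

# LAN-side capacity needed to sustain a 10x data-reduction gain
# while actually filling a 600 Mbps WAN pipe:
print(600 * 10 / 1000)  # 6 Gbps
```

The second figure is the point of the check: a claimed 10-20x gain at full WAN load implies multi-gigabit LAN-side capacity, which a 1 Gbps appliance cannot deliver.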
2. Flow limits
WAN acceleration appliances may never reach their stated WAN capacity if the maximum number of TCP sessions supported is reached well before the WAN pipe is filled.
For example, a midsized enterprise with 45 offices and 50 users in each office would generate approximately 22,500 TCP flows (assuming the average user has 10 TCP flows open at any given time). If all of these offices are connected to a main data center via a 45 Mbps link, one might assume that a single WAN acceleration appliance with 45 Mbps of WAN capacity could be deployed at the head-end to support this entire network. If that appliance supports fewer than 22,500 concurrent flows, however, it will exhaust its flow limit long before the 45 Mbps pipe is full.
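The sizing arithmetic above is easy to script; the 16,000-flow appliance limit below is a hypothetical figure for illustration, not a real product specification:

```python
OFFICES = 45
USERS_PER_OFFICE = 50
FLOWS_PER_USER = 10  # rough assumption from the text

total_flows = OFFICES * USERS_PER_OFFICE * FLOWS_PER_USER
print(total_flows)  # 22500 concurrent TCP flows

# A hypothetical head-end appliance rated for 45 Mbps of WAN
# capacity but only 16,000 concurrent flows:
FLOW_LIMIT = 16000
print(total_flows > FLOW_LIMIT)  # True: the flow limit is hit before the pipe fills
```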
It is important to determine the TCP flow limits of a WAN acceleration solution prior to sizing a deployment. One must also determine what scenarios may affect this number. For example, do flow limits actually decrease when specific features such as disconnected file services are enabled? In addition, what happens when TCP flow limits are exceeded -- is WAN traffic blocked or sent un-accelerated across the WAN? This last question is particularly important when dealing with applications that have long-lived connections, such as CIFS and email. If a user logs on during a busy period and his connection is sent across the WAN un-accelerated, will the session remain un-accelerated for the lifetime of the connection -- even if TCP flow counts drop to acceptable levels?
About the author:
Dr. David Hughes is CTO of Silver Peak Systems, which he founded in 2004. He previously held senior architect positions with Cisco Systems, Stratacom, Blueleaf and Nortel. He has a PhD in packet network optimization.
This was first published in September 2006