Numerous factors affect the scalability of a WAN acceleration solution, from hardware architecture to ease of use. In the last two tips, we highlighted several considerations for scaling a WAN acceleration solution. Here we'll look at four more. Some of these are a bit more esoteric than the typical "speeds and feeds" that most people associate with scalability, but they are just as critical when building a large network.
7. RAM vs. disk-based data reduction
The amount of storage used with data reduction techniques will affect overall performance in larger networks. That is because the efficiency of a compression or data-reduction technique is dependent on the likelihood that data streams that have traversed the network in the past can be recognized on subsequent passes and eliminated across the WAN.
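The recognize-and-eliminate idea can be illustrated with a minimal, hypothetical sketch (not any vendor's actual implementation): each chunk of a data stream is fingerprinted, and chunks whose fingerprints already exist in the local data store are sent across the WAN as short references instead of payload.

```python
import hashlib

def dedupe_stream(chunks, store):
    """Replace chunks already in the data store with short hash references."""
    wire = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in store:
            wire.append(("ref", digest))   # pattern seen before: send a reference
        else:
            store[digest] = chunk          # first sighting: store a local instance
            wire.append(("data", chunk))
    return wire

store = {}
first = dedupe_stream([b"payload-A", b"payload-B"], store)
second = dedupe_stream([b"payload-A", b"payload-C"], store)
# On the second pass, payload-A crosses the WAN as a reference only.
```

The larger (and faster to search) the data store, the further back in time the appliance can "remember" traffic, which is exactly the trade-off discussed next.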
A fully utilized 155 Mbps WAN link, for example, carries roughly 70 gigabytes (GB) of data per hour in a single direction (about 140 GB per hour if operating in full duplex). Even at 50% utilization, that same link will send more than 800 GB of traffic over a 24-hour period. A WAN acceleration appliance with a large, rapidly referenceable data store of multiple terabytes (TB) will be extremely efficient in this type of environment. In contrast, an appliance that operates only out of physical RAM will be limited to a 4 GB or 8 GB active data store -- less than 10 minutes of traffic retention on such a link. The differences may not be noticeable when transferring small file sets (e.g., a few GB), but they are apparent when large volumes of information are sent over extended periods of time, as is the case with data center replication and disaster recovery.
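The retention arithmetic behind these figures is straightforward; a short sketch (link speed and store sizes are the example values from the text, not fixed limits):

```python
def retention_minutes(store_gb, link_mbps, utilization=1.0):
    """Minutes of one-directional WAN traffic a data store of store_gb can hold."""
    gb_per_hour = link_mbps * utilization * 3600 / 8 / 1000  # Mbps -> GB per hour
    return store_gb / gb_per_hour * 60

# 155 Mbps fully utilized is ~69.75 GB/hour in one direction.
print(round(retention_minutes(8, 155), 1))        # 8 GB RAM store: ~6.9 minutes
print(round(retention_minutes(2000, 155) / 60))   # 2 TB disk store: ~29 hours
```

An 8 GB RAM-only store retains under 7 minutes of a saturated OC-3's traffic, while a 2 TB disk store retains more than a full day of it.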
8. Effective storage capacity vs. stated capacity
When performing disk-based data reduction, vendors use different methods to store information. Some techniques are more efficient than others and result in better usage of available storage space, which leads to improved scalability.
A scalable WAN acceleration appliance will store a single local instance of information independent of all peer appliances. The alternative -- storing a separate instance for each individual WAN link (i.e., peer appliance) -- consumes significantly more disk space, particularly when many offices are involved. In addition, detecting and storing repetitive patterns in a bi-directional fashion can dramatically improve scalability by eliminating the need to store a separate instance of information for each direction of a WAN link. For example, if a 10 MB file is sent back and forth among 100 offices, a head-end appliance that works bi-directionally and stores a single instance of information will use approximately 10 MB of local data store. If the same appliance stored a separate instance for each direction of each WAN link, it would require 2 GB of local hard drive capacity to support the same scenario. The first solution achieves 200 times the effective storage capacity of the second.
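The 200x figure falls out directly from the two storage models; a quick sketch using the numbers from the example above:

```python
def single_instance_mb(file_mb, offices):
    """Single-instance store: one local copy, regardless of peers or direction."""
    return file_mb

def per_link_per_direction_mb(file_mb, offices):
    """Naive store: one copy per peer appliance, per direction of each link."""
    return file_mb * offices * 2

shared = single_instance_mb(10, 100)            # 10 MB
separate = per_link_per_direction_mb(10, 100)   # 2,000 MB = 2 GB
print(separate // shared)                       # 200x effective capacity gap
```

The gap widens linearly with the number of offices, which is why this design choice matters most in large hub-and-spoke deployments.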
9. Remote provisioning, management and monitoring
Remote provisioning, management and monitoring are important to scaling WAN operations. This is particularly true in large networks, especially where servers are removed from remote offices and there is limited IT staff on site to handle day-to-day tasks. Features such as zero-touch provisioning, which allows a device to be sent to a remote office and then automatically provisioned from a central location, are critical for growing a large deployment. The ability to generate network traffic reports centrally and export them to commonly used formats such as Cisco NetFlow is also important for scalability. Other capabilities that can help ease the burden of managing a large WAN acceleration deployment include centralized alarm aggregation, scheduled tasks, auditing, trending reports and centralized software updates.
10. High availability and clustering
As WAN acceleration deployments become larger, they often become more strategic to the enterprise. As a result, availability often goes hand-in-hand with scalability. This is especially true when WAN acceleration is used for data-center to data-center applications, such as disaster recovery, where downtime can mean a significant loss of revenue.
To address the need for high availability, most vendors support a wide range of network configurations. These often use common techniques to ensure redundancy, such as Virtual Router Redundancy Protocol (VRRP) and Web Cache Communication Protocol (WCCP). These protocols can also be used to cluster multiple appliances for better scalability, allowing traffic to be balanced across multiple devices for better performance under heavy load.
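How traffic is spread across a cluster varies by vendor and protocol, but a common building block is hashing each flow's 5-tuple so that a given flow always lands on the same appliance (important for data-reduction state). A minimal sketch, with hypothetical appliance names:

```python
import hashlib

def pick_appliance(flow_tuple, appliances):
    """Hash a flow's 5-tuple so each flow sticks to one appliance in the cluster."""
    key = "|".join(map(str, flow_tuple)).encode()
    idx = int(hashlib.md5(key).hexdigest(), 16) % len(appliances)
    return appliances[idx]

cluster = ["wanx-1", "wanx-2", "wanx-3"]
flow = ("10.0.0.5", 49152, "192.168.1.9", 445, "tcp")
assigned = pick_appliance(flow, cluster)
assert assigned == pick_appliance(flow, cluster)  # deterministic per flow
```

A simple modulo hash like this redistributes many flows when cluster membership changes; production WCCP implementations use bucket-based assignment to soften that effect.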
Resiliency can also be built into the WAN acceleration appliance. Common features include redundant load-sharing power supplies, fail-to-wire network interfaces, ECC memory, and hot-plug RAID drive configurations.
About the author:
Dr. David Hughes is CTO at Silver Peak Systems Inc., which he founded in 2004. He previously held senior architect positions at Cisco Systems Inc., StrataCom Inc., Blueleaf and Nortel Networks Ltd. He has a Ph.D. in packet network optimization.