WAN acceleration can be complementary to most storage replication and backup applications. In fact, the most advanced acceleration solutions can deliver an average performance improvement of 5 to 30 times over replication or backup software alone. This can be attributed to several factors.
Advanced WAN acceleration devices reduce data at byte-level resolution, whereas most storage applications perform redundancy elimination or deduplication at the block or file level. Byte-level resolution lets a WAN acceleration device discover and eliminate more repetitive data, even within a single replication stream. In addition, because data reduction is performed at the network layer, it works across all applications: data sent via email, file transfer or the Web populates the WAN acceleration device's disk or storage array, so during a subsequent backup that information is being seen for the second or third time, and very large data reduction ratios can be achieved. In the same circumstances, a dedicated backup or replication solution would treat the data as new and unique, since it never sees the original application traffic that generated the backup data.
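The difference between block-level and byte-level redundancy elimination can be sketched in a few lines. The toy Python below (all names and parameters are illustrative, not from any vendor's product) deduplicates the same stream two ways: fixed-size blocks, and content-defined chunks whose boundaries follow the bytes themselves. When a repeated payload is shifted by a single inserted byte, the fixed blocks no longer line up and the repeat is missed, while the byte-level chunker resynchronizes and finds it.

```python
import hashlib
import random

BLOCK = 64        # fixed block size for block-level dedup
WINDOW = 16       # rolling window for content-defined chunking
MASK = 0x3F       # boundary when fingerprint & MASK == 0 (~64 B avg chunk)

def dedupe_fixed_blocks(data: bytes) -> int:
    """Unique bytes after block-level dedup: repeats are only found
    when they align exactly on block boundaries."""
    seen, unique = set(), 0
    for i in range(0, len(data), BLOCK):
        chunk = data[i:i + BLOCK]
        digest = hashlib.sha256(chunk).digest()
        if digest not in seen:
            seen.add(digest)
            unique += len(chunk)
    return unique

def dedupe_content_defined(data: bytes) -> int:
    """Unique bytes after byte-level (content-defined) chunking:
    boundaries follow the data itself, so repeats survive a shift."""
    seen, unique, start = set(), 0, 0
    for i in range(WINDOW, len(data)):
        fp = hashlib.sha256(data[i - WINDOW:i]).digest()[0]
        if fp & MASK == 0:                  # data-dependent boundary
            chunk = data[start:i]
            digest = hashlib.sha256(chunk).digest()
            if digest not in seen:
                seen.add(digest)
                unique += len(chunk)
            start = i
    chunk = data[start:]
    if hashlib.sha256(chunk).digest() not in seen:
        unique += len(chunk)
    return unique

# A 2 KB random payload repeated, with the second copy shifted
# by one inserted byte -- a misaligned duplicate.
rng = random.Random(42)
payload = bytes(rng.getrandbits(8) for _ in range(2048))
stream = payload + b"X" + payload

print(dedupe_fixed_blocks(stream))       # ~4 KB: the shifted repeat is missed
print(dedupe_content_defined(stream))    # ~2 KB: most of the repeat is found
```

Production deduplication engines use fast rolling fingerprints (e.g., Rabin-style) rather than recomputing a hash per byte as this sketch does, but the alignment behavior is the same.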
Replication and backup software sometimes includes the ability to compress data using LZ-type algorithms. This functionality is usually part of a WAN acceleration solution, too. It is generally better to disable compression in the replication or backup software and rely on the WAN accelerator's algorithms, which are usually more effective. Doing so also offloads CPU-intensive compression from the backup and replication hosts, improving scalability and performance.
WAN acceleration appliances also often incorporate other techniques, in addition to data reduction and compression, that can improve replication and backup performance. Examples include TCP acceleration to mitigate the effects of WAN latency, and forward packet recovery to mitigate the effects of WAN packet loss and reduce the need for retransmissions. QoS functionality lets enterprises control the amount of bandwidth available to backup and replication traffic, ensuring that recovery point objectives (RPOs) are met without swamping other application traffic.
This was first published in March 2007