Part 2 of a three-part series
Once an organization has decided that optimization is the solution to its WAN problems, defined a set of requirements, and selected an appropriate optimization solution architecture, the next step is to develop and launch an effective pilot project.
The basic goal of a WAN pilot project is to test-drive optimization techniques that are designed either to reduce the physical volume of information on the network, through caching, compression and so on, or to improve the behavior of data on the network through acceleration, traffic shaping and other techniques. Optimizers can be hardware-based (appliances), software only (soft client), virtual, server process, or service-based (managed services and outsourced activities).
Optimization pilot projects are very important for pinpointing gaps in a particular solution. In some cases, an initial solution will resolve acceleration problems but fail to address such things as wide-area file services.
Many of the organizations that Nemertes speaks with end up shopping for a second-generation solution to cover the gaps in their first generation. One way organizations can avoid this situation is through more thorough piloting during the initial deployment and launch phases.
Piloting for packet loss, latency
A complete pilot will involve an ongoing bake-off between at least two candidate solutions. Establishing a solid set of objective criteria is critical to developing a meaningful and successful WAN optimization pilot. This includes a list of known problems and a set of network metrics that will let IT establish not only that a new device or service is helping performance on known pain points but also that it is not hurting performance.
In creating an objectives list, IT should look at everything from bandwidth consumed to packet loss, latency through the devices, and jitter in traffic delivery. Sometimes the solution is no solution and may only add to latency problems, as one manufacturing firm's network architect reported when trialing symmetric optimizers. He ultimately re-architected applications to fix their performance issues.
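The objectives list above boils down to a handful of numbers collected the same way for every candidate. A minimal sketch of that kind of baseline metric collection follows; the RTT samples and probe counts are hypothetical placeholders, and in practice the data would come from ping, iperf or the optimizer's own reporting:

```python
def link_metrics(rtts_ms, probes_sent):
    """Summarize latency, jitter and loss from round-trip-time samples."""
    received = len(rtts_ms)
    loss_pct = 100.0 * (probes_sent - received) / probes_sent
    avg_latency = sum(rtts_ms) / received
    # Jitter as the mean absolute difference between consecutive samples
    # (a simplification of the RFC 3550 interarrival-jitter estimator).
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return {"loss_pct": loss_pct,
            "avg_latency_ms": avg_latency,
            "jitter_ms": jitter}

# Hypothetical probe results: 10 probes sent, 8 answered.
baseline = link_metrics([80, 82, 79, 95, 81, 80, 110, 83], probes_sent=10)
print(baseline)
```

Recording the same summary before the pilot, and again with each candidate in line, makes "is it helping or hurting?" a comparison of numbers rather than impressions.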
This checklist of problems to be resolved and of criteria for judging the effectiveness of each solution must be the first and most heavily weighted factor in the ultimate selection at pilot's end. However, it cannot be the only thing considered. After all, where performance improvements are similar, other factors have to be taken into account.
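One way to keep the selection fact-driven while still weighting the pain-point checklist most heavily is a simple scoring sheet. In this sketch the criteria, weights and per-candidate scores are entirely hypothetical; each organization should derive its own from its checklist:

```python
# Weights sum to 1.0; the known-pain-point checklist dominates, as the
# article recommends, but secondary factors still influence the outcome.
WEIGHTS = {"known_pain_points": 0.5, "added_latency": 0.2,
           "manageability": 0.15, "cost": 0.15}

def weighted_score(scores):
    """Combine per-criterion scores (0-10 scale) using WEIGHTS."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

# Hypothetical pilot results for two candidate solutions.
candidate_a = {"known_pain_points": 9, "added_latency": 6,
               "manageability": 7, "cost": 5}
candidate_b = {"known_pain_points": 7, "added_latency": 9,
               "manageability": 8, "cost": 8}

print(round(weighted_score(candidate_a), 2))
print(round(weighted_score(candidate_b), 2))
```

Note that in this made-up example the candidate with the weaker pain-point score can still win overall, which is exactly the "other factors" situation a tie on performance creates.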
Pilots should span some significant business cycles to determine whether problems are associated with such things as a quarter's end, conclusion of a major project, a company audit, start of a new school year, and so on. Devices tested only during "normal conditions" haven't really been tested at all.
Networking with end users
As organizations collect performance data for the candidate solutions, they should also be speaking regularly with end users in a structured and organized way: for example, through conversations guided by a questionnaire to ensure some consistency in data collection -- but not simple fill-in-the-blank Web survey stuff that can miss something not explicitly addressed. This will not only help ensure that IT catches any problems an optimizer might inadvertently cause, but -- when combined with the application traffic visibility that most solutions provide -- it can point to other problems and solutions.
That visibility can surface surprising questions. In the case of one defense contractor we encountered, for instance: Why were people in Maryland pulling email from California instead of Virginia?
Users sometimes begin to behave differently once performance improves. Improving the performance of file sharing across the WAN, for example, can reduce the number of files attached to email messages, improving the performance and reducing the disk usage of the mail servers.
Data collection before and after deployment
While collecting decision data over the course of the pilot, IT should also begin planning the full rollout: How many locations will get appliances or services? In what order should the deployment be conducted? (Hint: Start where the best performance is needed but actual performance is worst, plus wherever the part of the company funding the deployment resides. That way, as soon as the criteria -- quantitative and qualitative -- point to a winner, IT can hit the ground running on the full rollout.)
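The ordering hint above can be expressed as a simple sort: the sponsoring site first, then sites ranked by how far measured performance falls short of what is required. The site names and latency figures here are hypothetical:

```python
# Each site records the latency it needs versus what it actually gets.
sites = [
    {"name": "HQ (sponsor)", "required_ms": 100, "measured_ms": 140, "sponsor": True},
    {"name": "Plant A",      "required_ms": 100, "measured_ms": 380, "sponsor": False},
    {"name": "Branch B",     "required_ms": 150, "measured_ms": 210, "sponsor": False},
    {"name": "Branch C",     "required_ms": 150, "measured_ms": 160, "sponsor": False},
]

# Sponsor site first, then the largest shortfall between required and
# measured performance (most negative gap sorts earliest).
rollout = sorted(sites, key=lambda s: (not s["sponsor"],
                                       s["required_ms"] - s["measured_ms"]))
print([s["name"] for s in rollout])
```

A spreadsheet does the same job; the point is that the ordering is decided by the pilot's own numbers, not by whoever asks loudest.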
It is equally important to continue to collect data (perhaps less frequently) after the deployment to ensure that the solution is working as well in production as it did in pilot and to monitor for the development of new problems or new classes of traffic and users.
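A lightweight way to run that ongoing check is to compare production metrics against the pilot baseline and flag anything that has slipped past a tolerance. The metric values and the 10% threshold below are illustrative assumptions, not prescriptions:

```python
def regressions(pilot, production, tolerance=0.10):
    """Return metrics more than `tolerance` worse in production than in
    pilot. All metrics here are lower-is-better (latency, loss, jitter)."""
    return {m: production[m] for m in pilot
            if production[m] > pilot[m] * (1 + tolerance)}

# Hypothetical pilot baseline versus a later production snapshot.
pilot_baseline = {"latency_ms": 85.0, "loss_pct": 0.5, "jitter_ms": 12.0}
production_now = {"latency_ms": 88.0, "loss_pct": 1.2, "jitter_ms": 12.5}

print(regressions(pilot_baseline, production_now))
```

Run periodically, a check like this catches both a solution that degrades in production and the arrival of new traffic classes that the pilot never saw.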
With proper piloting and a fact-driven selection process, any organization that needs optimization (and can afford it) should be able to select a solution that suits its specific needs and organization. Without them, chances for a misfire and a need to repeat the selection process are high, raising costs and increasing dissatisfaction with IT.
Part 3 and the final article in the series will explore what to do after a WAN optimization system is deployed. In most cases, this requires continuous monitoring of traffic and WAN performance to keep things working well and plan for future network improvements.
Read Part 1 of the series Match WAN optimization, acceleration options to network needs.