In the last part of this series, we run down a final checklist of steps and considerations any enterprise technology department should address in the course of selecting an application delivery optimization solution.
Work across silos. Create a team composed of staff from security, network, data center and desktop groups. If all sides are accounted for, it is far less likely that the solution chosen and deployed will fail to meet the organization's needs, or will interfere with one plan or service in the name of saving another.
Baseline and compare. You can't tell how much things improve if you don't know how bad they were to begin with. You need a basis for evaluating performance before and after deployment, and for comparing performance changes across multiple candidate solutions. So measure throughput, application performance, packet loss and latency in the affected branches. Vendors under consideration can sometimes lend you the equipment necessary to construct realistic performance baselines.
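The baselining step can be captured in a small script. This is a minimal sketch, not from the article: the summary statistics chosen (min, average, 95th percentile, max) and the metric names are illustrative assumptions, and the improvement calculation assumes lower-is-better metrics such as latency.

```python
from statistics import mean

def summarize(samples_ms):
    """Reduce raw latency samples (milliseconds) to baseline figures
    worth keeping for a before/after comparison."""
    s = sorted(samples_ms)
    return {
        "min": s[0],
        "avg": round(mean(s), 1),
        "p95": s[int(len(s) * 0.95) - 1],  # simple index-based percentile
        "max": s[-1],
    }

def improvement(before, after):
    """Percentage change per metric. Positive means the metric dropped,
    which reads as improvement for latency-style (lower-is-better)
    metrics; invert the sign for throughput."""
    return {k: round(100 * (before[k] - after[k]) / before[k], 1)
            for k in before}

# Hypothetical latency samples from one branch, pre- and post-deployment.
before = summarize([100, 120, 110, 130, 150, 140, 105, 115, 125, 135])
after = summarize([60, 70, 65, 80, 90, 85, 62, 68, 75, 78])
```

Running `improvement(before, after)` over per-branch baselines gives a per-metric percentage change you can put side by side for each candidate solution.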
Don't assume you have to buy something. MPLS is the primary WAN technology, which means more enterprises than ever have access to differentiated delivery services on the WAN. Most MPLS networks allow the client to define four or more classes of traffic with some prioritized over others. Something as simple as defining traffic classes could eliminate the need to purchase any sort of ADO appliance.
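The traffic-class idea above can be sketched as a simple classification model. This is illustrative only: the class names, application groupings and DSCP markings below are typical DiffServ conventions (EF for real-time, assured-forwarding classes for business traffic), not any specific carrier's MPLS offering.

```python
# A hypothetical four-class scheme of the kind many MPLS services support.
# DSCP values follow common DiffServ conventions: 46 = EF, 26 = AF31,
# 10 = AF11, 0 = best effort.
CLASSES = {
    "realtime": {"dscp": 46, "apps": {"voip", "video-conf"}},
    "critical": {"dscp": 26, "apps": {"erp", "crm"}},
    "bulk":     {"dscp": 10, "apps": {"backup", "replication"}},
    "default":  {"dscp": 0,  "apps": set()},
}

def mark(app):
    """Return the traffic class and DSCP marking for an application,
    falling back to best effort for anything unclassified."""
    for name, cls in CLASSES.items():
        if app in cls["apps"]:
            return name, cls["dscp"]
    return "default", 0
```

Mapping applications to classes like this, and letting the MPLS provider prioritize accordingly, is the "no new box" option the paragraph describes.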
Make sure you can get the solution you want where you need it. Some ADO solutions are subject to export restrictions, meaning they can't be taken into certain geographies. Others, if they are cloud services, are available only where the service provider operates. Make sure the solution you want can be shipped to, or used in, all the locations where you actually need it, or that you can assemble a suite of solutions to work around any location-based restrictions.
Don't buy the first thing that works. It would be a poor decision to purchase the first thing that worked, only to find out later that the same performance boost was available for half the price, or at the same price with other ancillary benefits. The organization should test multiple solutions and compare them objectively on a weighted scorecard, incorporating not just performance but also concerns like how each fits into the environment for management and monitoring, its ability to integrate with other solutions, and its price and service options. That lets IT make the best choice across multiple axes, and where a couple of vendors' solutions score similarly overall, IT can use that fact to negotiate better terms with either top choice.
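A weighted scorecard of the kind described above is easy to make concrete. The criteria, weights, vendor names and ratings below are all hypothetical placeholders; only the mechanism (weighted sum, ranked comparison) is the point.

```python
def weighted_score(scores, weights):
    """Combine per-criterion ratings (0-10 scale assumed) into one
    weighted total, normalized by the total weight."""
    total_weight = sum(weights.values())
    return round(sum(scores[c] * w for c, w in weights.items()) / total_weight, 2)

# Hypothetical criteria and weights reflecting this organization's priorities.
weights = {"performance": 0.4, "manageability": 0.2, "integration": 0.2, "price": 0.2}

# Hypothetical ratings for two shortlisted vendors.
vendors = {
    "Vendor A": {"performance": 9, "manageability": 6, "integration": 7, "price": 5},
    "Vendor B": {"performance": 7, "manageability": 8, "integration": 8, "price": 8},
}

ranked = sorted(vendors, key=lambda v: weighted_score(vendors[v], weights),
                reverse=True)
```

Here the raw performance leader does not win overall, and the two totals land close together, which is exactly the situation the article suggests exploiting in price negotiations.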
Look at small players and market leaders. The small players are often the ones bringing innovative solutions, market-altering price points or new service models to the game, and the smaller they are, the more leverage you are likely to have with them. Certainly, there's always the chance that they will go under or get bought out. You'll want someone with enough of a track record and revenue stream to be around for at least three or four years, or with an attractive enough customer list and technology that, if they are acquired, the buyer will either preserve the solution and continue to support it or will transition customers off it gracefully. That said, there are no guarantees. Blue Coat's acquisition of Packeteer some years back and Riverbed's acquisition of Zeus provide excellent examples of continuity after acquisition; Riverbed's more recent purchase of Expand's intellectual property and customer list is an example of a more terminal exercise, one that abruptly ended support for the Expand optimizers. And even looking only at large, well-established vendors is no guarantee of long-term stability, as illustrated by Juniper's recent decision to exit the WAN optimization space altogether and direct customers interested in an optimization solution to Riverbed.
Consider physical versus virtual, appliance versus cloud. With physical appliances, you get the traditional model of network services, with the accompanying high degree of control over every aspect of service and high capital costs to deploy to every location. With virtual appliances, if you already have infrastructure on site to run one, you can reduce the capital costs, avoid shipping and installation headaches and avoid making the stack in the network closet any deeper, but you may take performance or capacity hits to do it. With a cloud service, you avoid the capital expense completely, as well as the "one more box to manage" problem, but you also cede control and have to make peace with the security implications of, for example, having a cloud service provider house the data dictionary used to compress and decompress your data, as well as cached content. Weigh the options and the tradeoffs, and especially your own institutional appetite for risk and need for control.
Document the solution's success. Documenting how well an optimization solution improves performance is critical to building IT's credibility as a problem-solving organization. Properly documenting success means putting together a good "before" and "after" picture in terms that make sense to upper management in the lines of business, such as showing how the solution saves hours of work, or hard dollars in avoided costs for increased bandwidth or larger server pools.
With all this in mind, IT is ready to find help for performance problems rooted in or amenable to improvement via the network. By taking a careful, holistic and open-minded look at the actual problems to be solved, the broad array of options available for solving them, and which solutions can objectively be shown to best fit the organization's actual needs, IT will be able to improve the daily life of staff and improve the organization's ability to do business when and as it wants.
About the author
John Burke is a principal research analyst with Nemertes Research, where he advises key enterprise and vendor clients, conducts and analyzes primary research, and writes thought-leadership pieces across a wide variety of topics. John leads research on virtual enterprise, focusing primarily on the virtual and mobile desktop, Application Delivery Optimization (ADO), and management and orchestration tools for the virtualized data center and the cloud. He also covers the security aspects of all of these topics, as well as server and storage virtualization, network and application performance management and monitoring, branch office IT and SOA.
This was first published in April 2013