WAN performance measurement and overall design are being significantly impacted by the steady rise in the number of mobile workers and remote locations, especially among companies that compete globally. The need for improvement in these areas was initially driven by two considerations: The desire to bring supply chain resources closer to customers in the field, and an effort to cut costs by offloading some of the network computing resources to remote locations.
The effort to save money has become even more urgent during these tough economic times as companies look to improve their network infrastructures and slash expenses through telecommuting and extending WAN optimization technologies out to remote divisions.
These activities have stirred up interest in innovative ways to optimize WAN performance across conventional and unconventional networks. Smart network managers are also looking at the services and applications used across the WAN, as well as the habits of users on the network, before making decisions on more hardware or software. The following series examines the issues and provides some answers for network administrators who want to sharpen their WAN's edge and improve overall performance on the network.
WAN application performance shouldn't suffer as demands, complexity increase
With the number of telecommuting workers and virtual employees on the rise, and with the application infrastructure getting more complex and mobile, networks have to become more application-sensitive to keep performance pumped and reliability up to speed.
Improving WAN performance: Zero in on services, user needs
WAN performance problems aren't always solved by applying optimization tools without some planning. Observing what services are being delivered over the WAN, and how, before implementing an optimization solution can help IT better meet user needs and spare the enterprise a deployment and financial headache.
Network optimization, security convergence promise more WAN control
The first rule of security and network optimization: Don't optimize what you don't want to deliver in the first place. Control often requires filtering packets to look at the content of the network traffic, not just at the ports it is on or the hosts it is moving between.
Three trends are changing the way organizations must think about application traffic on the network:
- The growing virtual workplace.
- The interconnectivity of IT systems.
- The rise in server virtualization and service-oriented architectures (SOAs).
Between 2005 and 2008, the number of remote locations grew an average of 10% each year. That, along with more telecommuting, has increased the number of employees working virtually. Coupled with increased centralization of IT services into data centers, and the consolidation of data centers, the result is that about 90% of users get applications over the WAN.
However, this spread of users and work groups across physical space has not had a significant impact on user expectations for application performance. Regardless of location or how they're connected, geographically dispersed users have all come to expect the same performance and reliability they would get when sitting at a well-connected headquarters. With money tight, keeping applications functioning well over the WAN not only helps keep costs down, but provides the collaborative benefit of fully supporting a virtual workplace as an alternative to expensive and time-consuming travel.
It's also wise to pay attention to the gradual federation and interpenetration of IT systems between an organization and its partners, customers, and suppliers. The "perimeter" is coming to be more Swiss cheese than Swiss bank vault. The more one organization depends on another as a part of data and transaction flows, the more important it is that performance on those flows be fast and reliable.
As performance demands increase across a WAN, a lot of attention is being given to server virtualization and SOAs. Server virtualization, in addition to its money-saving and environmental benefits, creates dynamism in the back-end systems that lets the components of an application move around, even from data center to data center, over very short time frames.
WAN application performance strategies
With SOA, as applications are atomized and turned into constellations of loosely-coupled services, things that used to happen as function calls within an application on a single server become network traffic within and among data centers.
SOA applications may reach across the WAN or Internet for services hosted by partners, suppliers, customers, or software-as-a-service (SaaS) providers. However, both virtualization and SOA can make it a lot harder to reliably optimize the performance of an application because they make it harder to find the traffic in the first place.
Enterprise application traffic, both end-user-facing and generated as part of applications' internal workings, is increasingly flowing across the WAN and the Internet. In the face of these changes, the network must become more intelligent and more active in ensuring application performance.
Application-sensitive performance management and optimization must therefore extend beyond traditional port/address/protocol awareness and actions to embrace the dynamic, complex, and dispersed infrastructure that is taking shape now.
Maximizing WAN performance tips
- WAN-optimization tools that compress and accelerate traffic
- Wide-area file services to replicate non-centralized content
- Quality-of-service tools to prioritize critical traffic and mitigate the impact of non-critical traffic
In a SOA, one of the chief ways to do this is by becoming aware of any application's self-description. By reading the same descriptions of services that application orchestration tools do, application-sensitive performance management tools could map out the web of communications that the application will create and treat it as a coherent whole. Blue Coat, for example, via its acquisition of Packeteer, has technology for this.
In a virtualized data center, performance monitoring can feed directly into active performance management in new ways. Orchestration tools, alerted to increasing loads and decreasing performance, can start up new application or Web servers to improve performance again. Citrix and VMware are among the companies now providing functionality along these lines.
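The threshold logic behind this kind of monitoring-driven scaling can be sketched in a few lines. The function name, metrics, and thresholds below are illustrative assumptions for this article, not any vendor's actual API:

```python
def scale_decision(avg_cpu_pct, p95_latency_ms, instances,
                   cpu_high=80.0, latency_high=500.0,
                   cpu_low=20.0, min_instances=1):
    """Decide whether to add or remove an application server instance.

    Returns the new instance count. Thresholds are illustrative:
    scale out when load or latency degrades, scale in when idle.
    """
    if avg_cpu_pct > cpu_high or p95_latency_ms > latency_high:
        return instances + 1   # performance degrading: start another server
    if avg_cpu_pct < cpu_low and instances > min_instances:
        return instances - 1   # idle capacity: reclaim a server
    return instances           # within bounds: no change

print(scale_decision(92.0, 310.0, 3))  # → 4
```

A real orchestration tool would add damping (cooldown periods, hysteresis) so that a brief spike does not trigger a flapping cycle of starts and stops, but the decision core looks much like this.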
This kind of performance enhancement, just like security, will have to extend into all parts of the network. It could be global, with route optimization for traffic traveling across a complex international WAN; or it could be local, by prioritizing traffic based on what application it is associated with right at the edge switch port. It can take action at the lowest levels of the network stack, with traditional address and port-based sensitivity; or it can pick apart packets and work at the highest level, prioritizing not just an application's traffic but specific subsets of that traffic based on content, making performance management sensitive to specific actions taken within applications.
One can easily imagine, for example, that for an application allowing both entry of financial transactions and reporting on those transactions, the enterprise might want to prioritize traffic associated with doing transactions over traffic created by reporting on them.
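A toy classifier makes the idea concrete. The operation names and the mapping to DSCP values below are hypothetical assumptions for illustration; a real device would parse the application protocol rather than match substrings:

```python
# Sketch: content-based prioritization of one application's traffic.
# The "op" values and DSCP choices are illustrative, not a product config.
HIGH, NORMAL = 46, 0   # DSCP 46 (Expedited Forwarding) vs. best-effort

def classify(payload: str) -> int:
    """Map a message to a DSCP value by inspecting its content,
    not just its port: transaction entry outranks reporting."""
    if '"op": "post_transaction"' in payload:
        return HIGH
    return NORMAL   # reporting and everything else ride best-effort

assert classify('{"op": "post_transaction", "amount": 125.00}') == HIGH
```

The point is that both message types arrive on the same port from the same hosts; only content awareness can tell them apart.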
WAN capacity resources
At the moment, most organizations start with the pieces that are readily available and that can have the greatest impact on performance with the least impact on IT. These include WAN-optimization tools that compress and accelerate traffic, wide-area file services to replicate non-centralized content, and quality-of-service tools to prioritize critical traffic and mitigate the impact of non-critical traffic.
Inevitably, as the pool of performance vendors grows, and as IT shops come to need and deploy various tools, they will be confronted with a whole new layer of policy coordination and management integration. IT will need to coordinate silos of optimization across data centers, WAN, and remote offices. (And there will of course be the standard consolidation among vendors and evolution of standards for interoperability.)
But, the finer-grain, more application-fluent, dynamic, and pervasive performance management these technologies provide ultimately will be necessary to allow IT to deliver the services users require.
Early WAN acceleration and compression techniques primarily tried to cram as many bits as possible through networking pipes. Over the past few years, however, optimization has become more sensitive to the applications and how they are used within a network. This approach has allowed for a richer and more granular prioritization of what gets optimized -- how, when and even for whom.
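The limits of blind compression are easy to demonstrate: repetitive application chatter shrinks dramatically, while already-compressed or encrypted content does not, which is one reason optimizers benefit from knowing what they are carrying. A quick sketch using Python's zlib (the traffic samples are stand-ins, not captured data):

```python
import os
import zlib

# Repetitive protocol chatter compresses extremely well...
text = b"GET /inventory/list HTTP/1.1\r\nHost: example.com\r\n" * 200
# ...while random bytes stand in for already-compressed media or encrypted streams.
random_like = os.urandom(len(text))

def ratio(raw: bytes) -> float:
    """Compressed size as a fraction of original size (lower is better)."""
    return len(zlib.compress(raw)) / len(raw)

print(f"repetitive traffic:     {ratio(text):.2f}")         # well under 1.0
print(f"incompressible traffic: {ratio(random_like):.2f}")  # at or above 1.0
```

Spending CPU cycles compressing the second kind of traffic adds latency for no gain, which is exactly the decision an application-aware optimizer can make per flow.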
Applications are not the end game, though -- services are. By focusing on the services being delivered to users and setting performance goals around these services, IT can direct its efforts and technological solutions toward meeting user needs, not simply hitting certain network performance targets.
As an example, consider a midsize professional services company whose users routinely collaborate across business lines and from great distances. When we came across them during the course of our research, the company had experienced significant performance problems as users tried to work together. The IT department quickly realized the WAN was the culprit, and despite limited visibility into their WAN decided optimization was the solution.
Team members secured a few hundred thousand dollars in funding, enough for a complete rollout of WAN optimization to all branches from almost any vendor, and then laid out a rough project plan. As part of this plan, the company tried three all-purpose optimization solutions (from Riverbed, Cisco, and Blue Lane), each an archetypal combination of compression, latency-reducing acceleration, and prioritization.
What the IT department found, however, was that although all three brands worked exactly as advertised -- compressing and accelerating and prioritizing -- all three failed to fix the problem. The complaints remained, even when connections were supposedly enhanced with optimizers.
Blind ambition: Identify WAN applications before solving problems
The first lesson learned here is that blindly tossing an optimizer into the mix is a lot like throwing more bandwidth at a performance problem -- it is only sometimes effective, and the solution is often only temporary.
IT wisely backed off and returned to basics by deciding to first figure out what the problem actually was before applying any potential solutions. To do that, they had to look at how people actually used the network and applications to do their work. They had to first understand the IT infrastructure from a user's perspective -- not as a collection of technologies, but as a basket of services enabling the business.
By shifting their perspective and approaching the problem from a user's viewpoint, IT discovered that all the real and consistent performance complaints centered on applications that moved very large files around. They also found that for several applications, the files in question were resistant to standard acceleration techniques.
With this information, IT was finally able to seek out a wide-area file service (WAFS) tool that specifically addressed the problem of sharing very large, very changeable files. Once it was installed, their problems were solved -- for less than two-thirds the cost of even the least expensive of the optimizers they had tried.
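The idea underlying such tools, transferring only the pieces of a large file that changed, can be sketched with fixed-size chunk hashing. This illustrates the general technique, not how any particular WAFS product works (real implementations typically use rolling checksums to handle insertions):

```python
import hashlib

CHUNK = 4096  # illustrative chunk size

def chunk_hashes(data: bytes) -> list[str]:
    """Hash fixed-size chunks so two sites can compare a large file cheaply."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def changed_chunks(old: bytes, new: bytes) -> list[int]:
    """Indexes of the chunks that must cross the WAN after an edit."""
    old_h, new_h = chunk_hashes(old), chunk_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]

old = b"A" * 20_000
new = old[:10_000] + b"B" * 100 + old[10_100:]  # small edit in the middle
# Only the one chunk covering the edit is flagged, not all 20,000 bytes.
print(changed_chunks(old, new))  # → [2]
```

For a multi-gigabyte design file edited in one spot, the difference between shipping one chunk and re-shipping the whole file is the difference users actually feel.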
Finding effective WAN solutions with positive SLAs
The second lesson that came out of this company's experience is this: if you do not realize that your core objective is to deliver the performance users need on the services they see as critical, you run the risk of unnecessarily deploying optimization tools that can easily burn through organizational resources (both hard dollars and staff time) and accelerate the irrelevant without helping the critical.
WAN performance problem-solving 101
- Don't blindly throw technology at problems you don't understand
- Take a user's perspective to problem-solving
- Consider services as well as applications
- Goal is to boost performance for services delivered
Finding the real problem that needs solving is a critical first step to avoid wasting optimization dollars and expending valuable staff time and attention. Focusing on services, not on the network or even on applications per se, and then developing real service-level agreements (SLAs) around these services, is a powerful method for helping IT accomplish this and ultimately eliminate the real problems.
Of course, strong SLAs build in remedies for failure to meet promised service levels, and companies may be reluctant to put really sharp teeth in their internal SLAs. The executives we work with tell us the most effective penalty clauses are not financial ones forcing reimbursements or refunds, but those that provide penalty-free exits from the contract with IT.
Some organizations already treat in-sourcing as an option to be shed if internal IT cannot provide the services needed. Organizations committed to keeping IT in house should consider adopting outsourcing as an SLA safety valve, and IT should support the idea.
Giving business lines an exit clause puts effective teeth in performance SLAs, which should help IT keep its eye on the ball: maintaining and improving the performance of the services the organization requires.
Pervasive network optimization and ubiquitous security are growing ever more important, even in this economy and in the face of limited budgets. IT staffs are reshaping networks and data centers to meet the new agility and service needs of their organizations.
One axiomatic formulation for the functions of security is CIA -- ensure confidentiality, provide integrity, protect availability. Much of this, today, centers on control of networks: admission control, behavioral analysis and ongoing access control, firewall, data-leak prevention, Web filtering, and so on.
Similar formulations and abbreviations have not yet been developed for network optimization, although the term "CAP" springs to mind – compress, accelerate, prioritize. Visibility and control are still the touchstones, though: See what is going on, and reshape it to meet the needs of the organization.
Information security and network optimization dovetail on this idea of control, and bringing the two together is powerful. Consider email spam, for example. More than one company we interviewed described an internal system of mail hosts and the havoc spam once played with it, swamping mail servers and burying real email under the tonnage of pharmaceutical, financial, and more questionable digital come-ons.
The solution, of course, was to filter out the spam before it was distributed internally. Many companies bypassed internal spam filtering, subscribing instead to third-party filters that eliminate spam before it consumes any Internet bandwidth the organization is paying for. As a result, email service performance and quality are greatly improved.
As we consider the combination of security and optimization, the first rule is: Don't optimize what you don't want to deliver in the first place. For example, why optimize delivery of an outbound packet over the WAN if it contains data (say, a credit card number) that will cause the data leak firewall to drop it? Or inbound data (say, a webpage with suspicious content) that the user's desktop firewall will block?
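A minimal sketch of that first rule: screen the outbound payload for card-like numbers, using the standard Luhn checksum, and drop it before any optimization cycles are spent on it. The pipeline shape and function names here are illustrative assumptions, not a real product's API:

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, the standard sanity test for card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

CARD = re.compile(r"\b\d{16}\b")  # simplistic: 16 contiguous digits

def leaks_card(payload: str) -> bool:
    return any(luhn_ok(m) for m in CARD.findall(payload))

def forward(payload: str):
    """Illustrative pipeline: filter first, optimize only what may leave."""
    if leaks_card(payload):
        return None               # leak firewall drops it: nothing to optimize
    return payload                # hand off to compression/acceleration
```

Filtering before optimizing saves the compression and acceleration stages from doing work the security policy will throw away anyway.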
Packet and content filtering equal increased WAN awareness?
Control, these days, often requires peeking inside packets to look at the content of the network traffic, not just at the ports it is on or the hosts it is moving between. This is especially true in a world increasingly driven by XML messages whizzing around inside a service-oriented architecture (SOA). Higher-level content awareness is necessary when a denial-of-service attack can be mounted at an application level by traffic that from a typical firewall's lower-level perspective looks completely legitimate. On outbound traffic, data-leak protection is another problem space where content awareness comes into play.
What's driving network optimization/security convergence?
- Network topology: Solutions deployed at critical traffic chokepoints.
- Simplicity: Demand for fewer boxes, fewer vendor relationships, and fewer points of failure.
- Market evolution: Optimization commoditization positions security as differentiator.
The same may be true of network optimization. Problems are emerging, especially in SOAs and in applications distributed across multiple data centers, that require content awareness for proper control -- to prioritize financial audit transactions ahead of cleaning-supply reorders, for example, or customer service IM traffic above personal chats.
Thus, the second rule of secure network optimization would be: Crack open the packets only once. By bringing security evaluation together with network optimization, a secure optimization appliance could reduce overall latency and minimize what the optimization systems have to process.
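In sketch form, rule two means a single deep parse feeds both the security verdict and the optimization priority. The message format and operation names below are assumptions for illustration:

```python
import json

def inspect_once(raw: bytes) -> tuple[bool, str]:
    """Parse the payload a single time; the security verdict and the
    optimization priority both come from the same parsed view."""
    msg = json.loads(raw)                     # the one expensive deep parse
    allowed = msg.get("op") != "admin_debug"  # security: drop unwanted operations
    priority = "high" if msg.get("op") == "post_transaction" else "normal"
    return allowed, priority

print(inspect_once(b'{"op": "post_transaction"}'))  # → (True, 'high')
```

Separate boxes would each pay the parsing cost (and the latency) independently; a converged appliance pays it once.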
The convergence of network optimization and security is driven by other factors too. There is network topology: Both security and optimization are especially effective and economical at critical chokepoints in the network, where the most traffic can be processed with the smallest number of boxes. There is IT's desire for simplicity: Fewer boxes in the comms closet, fewer vendor relationships to manage, fewer points of failure requiring redundancy solutions, fewer fingers pointed. There is market evolution: As baseline optimization folds into routing gear, optimization vendors will use security convergence as a differentiator.
Roadblocks to secure network optimization
Of course, there are problems. Security devices want to fail closed -- to act like a break in the wire -- so nothing passes if the device dies, and you know the failure has not compromised your security posture. Network optimization devices want to fail open, or "fail to wire" -- to act like plain wire if they stop working -- so traffic still gets through, unoptimized, even if it can no longer be accelerated. What should a converged box do?
There are also concerns around SSL traffic streams. A device can't optimize an encrypted stream if it can't see what is inside. The only way to look inside is essentially to execute a man-in-the-middle attack, wherein the security/optimization box terminates encryption tunnels so that it can optimize the traffic inside.
This can be a touchy issue, though, both from a compliance perspective (if my guidelines say the traffic has to be encrypted between endpoints, how is this OK?) and from a privacy perspective (if I let my employees do personal stuff on my Net, is it OK for me, say, to be looking inside their banking transactions?). No one should venture down this path without a lot of serious discussion with legal advisers and auditors about the limits and implications of what they will do.
The bottom line: Look for opportunities, as with spam filtering, to let security and network optimization help each other. Expect your vendors to bring these functional lines together, as Blue Lane, Cisco, Juniper, Expand, Radware and others are doing -- but don't expect the ride to be without some bumps.
About the author:
John Burke is a principal research analyst with Nemertes Research, where he focuses on service-oriented architectures and management. He develops and manages research projects, conducts strategic seminars, and advises clients. As an analyst, John draws on his experiences as a practitioner and director of IT to better understand the needs of IT executives and the challenges facing vendors trying to sell to them.