
State-of-the-art application acceleration techniques

This excerpt from 'The Shortcut Guide to Optimizing WAN Application Delivery' describes state-of-the-art WAN application acceleration techniques like flexible bandwidth controls and caching, which speed up data, voice and video application delivery over the WAN.

Technology has been one of mankind's most eagerly anticipated forms of (artificial) evolution ever since the first great discoveries of man-made fire and the wheel. It continues to evolve to suit a greater variety of purposes and to deliver an ever-increasing range of functionality. Technology also continually adapts, and occasionally demands redesign, to remain viable in an ever-changing environment, until it eventually reaches retirement age and joins the annals of historical or disused technologies.

Technology replacement often comes swiftly, and replacements often deliver vast improvements over original designs. Any-to-any network clouds, which use MPLS or ATM infrastructures to create and manage high-speed WAN links between arbitrary pairs or collections of sites, have an increasing presence in the network landscape. But they carry an inherent problem: too many tunnels, because WAN optimization devices usually work in pairs rather than in groups. Even with completely error-free, fully operational WAN links, provisioning technologies and practices can cause packet loss at the network layer. Combined with high latency, timed retransmissions, and the congestion-avoidance behaviors native to TCP, this can make application performance suffer perceptibly even when bandwidth is available and affordable. It's not enough to have a WAN to use; it's also important to understand how to make the best use of that resource.

One problem for network cloud operation comes from a preponderance of encapsulated tunnel traffic between pairs of peers. The idea is that traffic flows between two sets of networks through an established tunnel that carries it across the network medium between the tunnel's endpoints. WAN optimization must be sensitive to this type of communication flow, and must make intelligent use of tunnel and connection setup and maintenance so that no more bandwidth or connections are consumed than absolutely necessary when communications move through a cloud.

Indeed, many organizations now use any-to-any network clouds, often built on Multi-Protocol Label Switching (MPLS), to replace the hub-and-spoke paradigm mentioned earlier in this chapter. This next-generation forwarding and switching architecture makes WAN "cloud" deployment practical, complete with WAN optimization strategies and important advantages for advanced services and traffic engineering.

Disaster recovery sites require redundant high-speed WAN links if timely, critical data backups are to be both accurate and effective at all times. Workflow automation accounts for internal business processes and personnel, and for the entire business ecosystem of suppliers, partners and customers. Neither scenario is well served by improperly or poorly managed WAN links. What happens when crucial customer orders need priority reception? You must ensure delivery of critical business processes for the ordering system across the WAN link, which may involve WAN-applicable Quality of Service (QoS) policies. Orders expedited to manufacturing plant enterprise resource planning (ERP) systems may even require dedicated leased-line links to HQ offices. Here again, we find multiple opportunities for WAN acceleration to move information faster and more efficiently over heavily shared, heavily utilized WAN links.

Opening closed communications delivers desirable results

Our technological approach and intellectual analysis of the greater framework involving networking applications, protocols and technologies expands both how we conceive the network (as a whole) and how we deploy its various components to satisfy operational needs. We no longer see communications as "I need to send this message or file from point A to point B", but instead take a deeper analytical approach and explore the parameters that define that message or file, how it travels across a number of links, and what operational inconveniences it imposes. We see the protocol fields and values that influence its routing behavior and the routing characteristics that define its path and progress through the network.

Using the appropriate viewfinders, we can even peer into our network and visualize it as a much larger landscape, a universe unto itself. We can open, inspect and accelerate Secure Sockets Layer (SSL) applications, which is absolutely vital for WAN optimization. Encrypted network data streams are invisible unless they can be opened and handled somehow, which is why WAN optimization devices are often furnished with the encryption keys and certificates necessary for those tasks. This lets WAN optimization peer inside encrypted traffic to analyze data redundancy, and lets it enforce traffic policies it might not otherwise be able to invoke. This view inside encrypted streams also makes it possible to analyze traffic patterns, apply classic compression techniques, and use shared symbol and object dictionaries to further reduce the volume of data in motion. This capability is depicted in Figure 3.4.

Figure 3.4: Given the ability to decrypt encrypted data streams, WAN optimization devices can enforce policy, impose throttling, and even apply various compression and dictionary schemes.

Advanced acceleration techniques speed streaming voice and video

Latency mitigation strategies reduce delivery times between sender and receiver, and their benefit grows in proportion to the distance traveled. In other words, distance introduces delay; longer distances introduce longer delays. Latency also accrues in transit from queuing and processing at intermediary network appliances and routing devices. These incremental delays levy appreciable impacts on WAN performance, where severe latency incurred by overly chatty TCP conversations and payload-bearing bulk transfers can cut throughput drastically.
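A back-of-the-envelope sketch makes the distance/delay relationship concrete. This example assumes signals travel at roughly two-thirds the speed of light in optical fiber (about 200 km per millisecond); the function names are illustrative, not from any product:

```python
# Propagation delay grows with distance; chatty protocols multiply the cost.
# Assumes ~200,000 km/s signal speed in fiber (about 2/3 of c).

SPEED_IN_FIBER_KM_PER_MS = 200.0

def one_way_delay_ms(distance_km: float) -> float:
    """Minimum one-way propagation delay over a fiber path, ignoring queuing."""
    return distance_km / SPEED_IN_FIBER_KM_PER_MS

def chatty_transfer_ms(distance_km: float, round_trips: int) -> float:
    """Total time a protocol spends waiting when it needs many round trips."""
    return 2 * one_way_delay_ms(distance_km) * round_trips

# A 4,000 km link adds 20 ms each way; 100 chatty round trips cost 4 seconds,
# no matter how much bandwidth the link offers.
print(one_way_delay_ms(4000))        # 20.0
print(chatty_transfer_ms(4000, 100)) # 4000.0
```

This is why reducing round trips often matters more than adding bandwidth on long-haul links.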

Local caching at endpoints is an excellent strategy for delivering information efficiently at reduced data transmission levels. By keeping recent copies of large or repeated requests and transfers nearby, less redundancy and bandwidth need be consumed when passing such data to its consumers. However, caching does require a more analytical and focused perspective on ongoing network communications along with access to the right on-site caching mechanisms to be truly effective.
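To illustrate the idea, here is a minimal least-recently-used (LRU) cache sketch of the kind an endpoint might keep; the class and counters are hypothetical, chosen only to show how repeat requests avoid a WAN round trip:

```python
from collections import OrderedDict

class LocalCache:
    """Minimal LRU cache: keep recent transfers nearby so repeat requests
    can be served without consuming WAN bandwidth."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # mark as recently used
            self.hits += 1
            return self.store[key]
        self.misses += 1                 # miss: would go over the WAN
        return None

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LocalCache(capacity=2)
cache.put("report.pdf", b"...bytes...")
assert cache.get("report.pdf") is not None  # served locally
assert cache.get("other.doc") is None       # miss: fetch across the WAN
```

Real WAN caches operate on traffic segments rather than whole files, but the hit/miss economics are the same.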

Flexible bandwidth controls provide multiple benefits

Well-defined policies should drive flexible bandwidth controls to throttle junk traffic and ensure better response time for important traffic. High-level application content and user information weighs heavily upon formulating such policies, which are best orchestrated by involving executive staff, key stakeholders, management, and representatives from the user community. It's also essential to test the impact of resulting policy schemes on the user experience, and to keep ongoing tabs on that experience to adjust policy settings over time to reflect changing needs and priorities.

Acceptable use policy also plays a surprisingly important role in maximizing the value and use of WAN links. It's well known that reducing network noise enhances signal clarity; it's less clear to many (especially undereducated users) that wasting bandwidth on non-work related activities really can place a serious and unwelcome drain on the bottom line. Eliminating unnecessary and undesirable traffic can be a major factor when imposing network traffic management policies. Such controls must be well-considered, carefully deployed, and flexible enough to accommodate the many different types of (authorized) traffic typical on modern enterprise networks.

Traffic by type and source

In some cases, traffic policing or WAN optimization may fail to preserve original client-server address designations and protocol formats, or to keep such data easily accessible. That's because of the many layers of embedding and encapsulation that can occur at various layers of the protocol stack. Applications, WAN optimization tools, and traffic policies that aren't careful to observe or maintain sufficient original context are in for worlds of trouble if they block or downgrade priority on traffic that really needs to get through.

Monitoring solutions must be sensitive to both original and encapsulated formats if they are to report end-to-end performance and traffic conditions accurately and effectively. Many WAN optimization solutions tunnel the very protocol traffic that monitoring needs in order to work properly, and thereby obscure network conditions. A monitoring system can lose visibility into individual application flows as they disappear into optimized WAN ports or tunnels. That's why it's important to ask about such behaviors, and to tell WAN optimization vendors what kinds of management and flow-related data they should capture and relay for monitoring and management purposes.

Traffic by time of day

Peak hours for various procedures, processes, sites, and users generally occur at regular, predictable intervals on the networks they use. Peak business hours can be observed, monitored and predicted over a given timeline, then used to construct a time-sensitive traffic management strategy. Offline hours then become peak activity windows for off-duty processes, such as daily accounting or inventory systems and network-based backup. Time of day is crucial for monitoring operational behaviors on the network and controlling resource availability; so are periodic peaks such as end-of-month, end-of-quarter, and end-of-year cycles, as well as occasional on-demand cycles (audits or emergencies, for example).
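A time-sensitive policy of this kind can be sketched as a simple lookup table. Everything here is hypothetical (the class names, windows, and shares are invented for illustration), but it shows the shape of the rule: business traffic owns the link during peak hours, while backups wait for off hours:

```python
from datetime import time

# Hypothetical time-windowed bandwidth policy:
# (window start, window end, traffic class, share of link capacity)
POLICY_WINDOWS = [
    (time(8, 0),  time(18, 0),  "business", 0.8),
    (time(18, 0), time(23, 59), "backup",   0.7),
]

def share_for(traffic_class: str, now: time, default: float = 0.1) -> float:
    """Look up the bandwidth share a traffic class gets at a given time of day."""
    for start, end, cls, share in POLICY_WINDOWS:
        if cls == traffic_class and start <= now < end:
            return share
    return default

print(share_for("business", time(10, 30)))  # 0.8 during peak hours
print(share_for("backup",   time(10, 30)))  # 0.1: backups wait for off hours
```

A production policy engine would also handle the periodic end-of-month and end-of-quarter peaks the text mentions, typically via calendar-based overrides.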

Application traffic

An inordinate number of applications and application protocols exist that must be controlled and monitored consistently and cohesively. Each receives its own priority assignment and poses its own unique value in the network management equation. Not all applications are created equal, though many are designed equally badly (or comparatively worse) when it comes to WAN deployment. The abilities that WAN optimization solutions confer to tame these sometimes savage beasts remain among their most potent value propositions.

Some vendors offer Web acceleration appliances that optimize only certain types of traffic by off-loading certain servers. Other products optimize all TCP traffic equally regardless of differences in their application-layer behaviors. A complete and comprehensive WAN optimization solution must be able to selectively prioritize traffic, especially in situations where WAN links are heavily-utilized or operating at (or above) their rated capacity.

Traffic by user or group identity

User and group identity for sender and receiver pairs is another parameter that may be factored into WAN optimization. Authentication is critical for WAN optimization because it enables acceleration or prioritization of traffic based on individual and group membership. This means that user identity and/or group membership may be used as criteria for allowing or disallowing traffic, or for prioritizing some traffic over the rest. For example, email bound to or from the CEO goes to the head of the line, while the mailroom clerk must wait for "slack" capacity before his messages are handled.
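The CEO-versus-mailroom example can be sketched with a priority queue keyed on group membership. The group names and priority values below are invented for illustration; a real device would derive them from a directory service:

```python
import heapq

# Hypothetical identity-to-priority mapping: lower number = higher priority.
GROUP_PRIORITY = {"executives": 0, "customer-support": 1, "mailroom": 9}

def priority_for(user_groups, default=5):
    """A flow inherits the best (lowest) priority of any group its user belongs to."""
    return min((GROUP_PRIORITY.get(g, default) for g in user_groups),
               default=default)

queue = []
heapq.heappush(queue, (priority_for(["mailroom"]), "clerk email"))
heapq.heappush(queue, (priority_for(["executives"]), "CEO email"))
heapq.heappush(queue, (priority_for(["customer-support"]), "support email"))

# Dequeue order follows identity-driven priority, not arrival order.
order = [heapq.heappop(queue)[1] for _ in range(3)]
print(order)  # ['CEO email', 'support email', 'clerk email']
```

In practice the priority value would feed a QoS scheduler rather than a literal queue, but the mapping from identity to treatment is the essential step.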

User identity tracking also facilitates better end-to-end network visibility. It can allow network engineers and planners to streamline security and prioritize delivery of certain traffic from certain sources. Identity may be used to block or allow certain types of traffic, or to apply varying levels of priority to the same kinds of traffic (CEO and customer support email goes ahead of all other email, for example). In the same vein, a salesperson located in some remote branch office may be granted higher priority than a marketing staff employee when accessing the company's centralized CRM application, because of the perceived difference in importance for such access (servicing an existing customer in the former case, prospecting for new customers or expanding on an existing relationship in the latter case).

More about compression dictionaries and local caching

A compression dictionary is any location where an algorithm stores its data sequences, predictions, shortened substitutions and other process-related data. Each dictionary has an associated size governing the amount of data stored and retrieved for compression and decompression.

Compression algorithms shrink data transfer sizes without altering critical protocol payloads -- a form of lossless compression. Many algorithms spot repeated sequences of data and store them for later look-up and retrieval. Some algorithms learn the series order of predictable data sequences rather than actual data content, and then predict subsequent content based on preceding patterns. Where such predictions prove correct, indicators of success are transmitted (as opposed to live, repeat data) to a compression partner on the receiving end. This partner notes the positive indicator and restores original data in place of the signifying notification.
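The shared-dictionary idea described above can be demonstrated with Python's standard `zlib` module, which supports preset compression dictionaries. The sample message and dictionary contents are invented; the point is that when both partners hold the same dictionary, repeated content shrinks to a short back-reference:

```python
import zlib

# A shared "dictionary" of byte sequences both compression partners have
# seen before. zlib's zdict parameter lets the compressor reference it.
shared_dict = b"GET /app/orders HTTP/1.1\r\nHost: erp.example.com\r\n"

message = b"GET /app/orders HTTP/1.1\r\nHost: erp.example.com\r\nUser: 42\r\n"

plain = zlib.compress(message)                      # no shared context
comp = zlib.compressobj(zdict=shared_dict)
with_dict = comp.compress(message) + comp.flush()   # shared context

# The receiving partner must hold the same dictionary to decompress.
decomp = zlib.decompressobj(zdict=shared_dict)
assert decomp.decompress(with_dict) == message
assert len(with_dict) < len(plain)  # shared context: fewer bytes on the wire
```

WAN optimization appliances apply the same principle at much larger scale, building dictionaries from all the traffic that crosses a link rather than from a single message.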

Maximize rapid access to frequently used files, resources, and data

In the realm of servers and workstations, disk defragmentation seeks to boost operational performance by resituating frequently and recently used files, resources, and information. A process of physical reorganization and analysis of access patterns helps determine what data and which files should fit together closely and contiguously for fast access.

In much the same way, frequently and recently-accessed network files, data, and resources should take priority over less frequently referenced elements. Some situations and scenarios call for decentralized servers to house consolidated information and resources as described earlier in this chapter when we discussed DNS decentralization. In any case, the notion is that by watching what kinds of access have repeated in the past, we can predict what kinds will repeat in the future, and use that information to populate local caches that can cut down on the need for WAN access.

Use software acceleration for roaming users on the go

Endpoint client applications for WAN optimization are a necessary complement to the authentication-driven, identity-based traffic prioritization and tracking mentioned earlier in this chapter. Such applications open the boundaries of an enterprise and help make an optimization solution and strategy extend truly end-to-end, rather than remaining oriented primarily at specific tunnels between optimization devices (as is the case for most implementations). Furthermore, client-based software acceleration applications allow all kinds of policy-based controls and priorities to be applied, even when a client is operating independently in a Starbucks in Des Moines, far from her branch office in Dubuque.

Software acceleration clients benefit from built-in "situational awareness" -- that is to say, from the client's perspective the WAN optimization solution is aware of its location (branch office, remote VPN link, HQ) and can automatically apply an appropriate optimization strategy to match. Client software can approximate its proximity to the source server to determine its best possible routes and security options as well.

Caching

Caching is an excellent strategy in almost any aspect of computing. Routers and switches cache tables that map IP addresses to MAC addresses; application proxies cache application-layer data to conserve bandwidth against repeat requests; and WAN optimization technologies cache sequences of traffic data to avoid replaying duplicate protocol patterns on the network. The process can also be entirely application-independent for general-purpose use. Single-purpose caches work only with specific applications or with repeat requests for the same resource, irrespective of all other network traffic (Web-only, email-only, backup-only, ERP and so forth). WAN optimization devices, by contrast, have a global view of all traffic that passes over the links they manage, so their caches can handle data for all the applications whose traffic traverses those links (making them a "links-only" rather than "application-only" type of cache).

Data reduction is an efficient means of WAN application and bandwidth optimization. The trick is to send as little data as possible, or at least never to send the same data more than once. Acceleration appliances examine data in real time prior to its transmission across the WAN, and store objects and items locally. Any duplicate detected triggers the appropriate appliance to serve the data locally instead of moving that same data (unnecessarily) across a WAN link.
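The "never send the same data twice" idea is commonly built on content hashing. Here is a minimal sketch (class and method names are hypothetical): the sender ships each unique chunk once, then substitutes a short digest token whenever the same bytes recur:

```python
import hashlib

class DedupSender:
    """Sketch of WAN data reduction: ship each unique chunk once, then send
    only a short hash reference when the same bytes recur."""
    def __init__(self):
        self.seen = set()

    def encode(self, chunk: bytes):
        digest = hashlib.sha256(chunk).digest()
        if digest in self.seen:
            return ("ref", digest)      # 32-byte token instead of full payload
        self.seen.add(digest)
        return ("data", chunk)          # first sighting: ship the real bytes

sender = DedupSender()
first = sender.encode(b"Q3 sales spreadsheet contents")
repeat = sender.encode(b"Q3 sales spreadsheet contents")
assert first[0] == "data" and repeat[0] == "ref"
```

The receiving peer keeps a mirror store indexed by the same digests so it can expand each reference back into the original bytes; real appliances also chunk data at content-defined boundaries so that small edits don't defeat the match.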

Wherever data is stored by intermediary devices, it should also be handled in a secure, policy-driven manner. Caching copies of repeatedly issued data across a network is a great strategy for network performance but a terrible hindrance for application security across the data path. Any information obtained from this cache must also be secured so that it's both accurate and timely upon delivery to the requesting source, but also safe and secure from unwarranted inspection or alteration by unauthorized third parties.

Ideally, a cache should also be free of deployment constraint. Transparency plays a crucial role in the peaceful coexistence of intermediary device and end-user, so having to expose caching servers and services through end-user configurations can be a labor-intensive hands-on process. Zero-configuration is the objective in many of today's application platforms, and this area is no exception. Any and all network optimizations should be readily accessible and completely transparent to the end-user.

Coalescing common data elements

Smaller recurring packets carry repeated header data patterns that consume a substantial amount of bandwidth, and that overhead grows disproportionately large relative to their small payloads. Packet coalescing merges multiple packets into one, provided they traverse the same link to reach the same endpoints.

In combination with header compression, this applies a single header across multiple packets, decreasing operational overhead and reducing bandwidth requirements. Web, voice and interactive multimedia applications all benefit greatly from packet coalescing.
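A toy model shows why this pays off for small packets. The 40-byte header figure approximates IP plus TCP overhead, and the coalesced frame format (one shared header, a 2-byte length prefix per payload) is invented for illustration:

```python
# Toy packet coalescing: merge small packets that share the same (src, dst)
# pair into one frame carrying a single shared header.
HEADER_BYTES = 40  # approx. IP + TCP header overhead per packet

def coalesce(packets):
    """Group payloads by endpoint pair; one header then serves many payloads."""
    groups = {}
    for src, dst, payload in packets:
        groups.setdefault((src, dst), []).append(payload)
    return [(src, dst, payloads) for (src, dst), payloads in groups.items()]

def bytes_on_wire(packets):
    return sum(HEADER_BYTES + len(p) for _, _, p in packets)

def bytes_after_coalescing(frames):
    # One shared header per frame, plus a 2-byte length prefix per payload.
    return sum(HEADER_BYTES + sum(2 + len(p) for p in payloads)
               for _, _, payloads in frames)

packets = [("A", "B", b"x" * 20)] * 5          # five tiny packets, same link
frames = coalesce(packets)
print(bytes_on_wire(packets))          # 300: headers dominate small payloads
print(bytes_after_coalescing(frames))  # 150: one header serves all five
```

With 20-byte payloads, headers are two-thirds of the original wire bytes; coalescing cuts the total in half, which is exactly why voice and interactive traffic (streams of small packets) benefit most.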

Bandwidth control

Bandwidth control and bandwidth management are two ways of saying the same thing: the process of measuring and controlling packet-based network communications to avoid overusing capacity, which results in network congestion and poor performance. The channel capacity of partitioned, multi-user internetwork links is administratively limited. Once that threshold is reached, performance degrades in a highly noticeable way -- network congestion.

Controlling and managing network traffic reduces capacity use to maintain smooth, continuous service between endpoints. The art and science of traffic control and management is a deeply faceted practice of its own, with myriad solutions at virtually every layer of the network protocol stack. ISPs typically retain control over queue management and QoS for subscribers; window shaping reduces traffic flow in high-end enterprise products; and other such solutions increase the usability of network capacity and resources.
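One classic building block for bandwidth control is the token bucket, which enforces an average rate while tolerating short bursts. This is a generic sketch of the well-known algorithm, not any vendor's implementation; the rate and burst figures are arbitrary:

```python
class TokenBucket:
    """Token-bucket rate limiter: tokens accrue at `rate` per second up to
    `burst`; sending a packet spends tokens equal to its size in bytes."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, size: float, now: float) -> bool:
        # Refill according to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False  # over the limit: queue, delay, or drop instead

bucket = TokenBucket(rate=1000, burst=1500)   # ~1 KB/s, one MTU of burst
assert bucket.allow(1500, now=0.0)            # initial burst absorbed
assert not bucket.allow(1500, now=0.5)        # only ~500 tokens refilled
assert bucket.allow(1500, now=2.0)            # enough time has passed
```

A per-class set of such buckets, driven by the traffic policies discussed earlier, is one simple way to guarantee important traffic its share of a congested WAN link.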

WAN protocols in wide use today include Integrated Services Digital Network (ISDN), frame relay, Multi-Protocol Label Switching (MPLS), Asynchronous Transfer Mode (ATM), and Point-to-Point Protocol (PPP) over Synchronous Optical Network (SONET). Harmonizing and orchestrating optimal performance across this heterogeneity requires handling a series of deeply complex tasks.

 



About the author:
Ed Tittel is a 24-year computing industry veteran who has worked as a software developer, systems engineer, trainer, and manager. He has also contributed to more than 100 computer trade books, including several college textbooks, and writes regularly for TechTarget, Tom's Hardware, and TechBuilder.org. Perhaps best known for creating the Exam Cram series of IT certification prep books in 1997, Ed also received the Best Networking Professional Career Achievement Award from the NPA in 2004, and has been a finalist in the "Favorite Study Guide Author" category in the CertCities.com Annual Reader's Choice Awards every year since those awards were launched in 2002.

This was last published in December 2008
