Essential Guide

Enhance service delivery model with WAN, application optimization

Application delivery optimization solutions: A technical overview

To evaluate application delivery optimization solutions, you must know the different kinds of optimization available and how each can fit into your environment.

In this part of the series, we offer a technical overview of application delivery optimization solutions: the kinds of optimization available, how each works and where each fits into your environment.

As work becomes something you do rather than a place you go to get things done, the virtual enterprise employs myriad new applications for collaboration and communication, such as desktop video conferencing. This shift to real-time tools increases the demand for bandwidth and for better, more consistent performance on the wide-area network (WAN). Application delivery optimization (ADO) solutions can be brought to bear to improve application performance on the WAN.

As your organization evaluates ADO solutions to improve response times and meet service-level agreements (SLAs) with different lines of business -- whether those solutions come as software or hardware managed in-house or are purchased as a service -- you need to understand the kinds of optimization available and how each can fit into your environment. You should also approach potential solution providers armed with specific questions that get to the heart of whether their offering is a good fit for your needs. Below, we explore the technologies you should be aware of and the questions you should be ready to ask vendors and service providers.

Compression: Shrinking your traffic to fit

Compression, caching and deduplication reduce the amount of traffic sent over the network, both making room for more traffic and making the most efficient use of available bandwidth. Optimizers sometimes use the same standard compression techniques that go into compressing a file on a desktop, ZIP-style, and sometimes use other means like the construction of data dictionaries. A dictionary catalogs bit patterns in the input, sending a pattern through only once, while logging it in the dictionary at each end of the link. Thereafter, an optimizer at either end will send through the much smaller pointer to the pattern in the dictionary any time it sees a recurrence of that pattern in the input. Whether by directly caching files or by building a dictionary, or both, the ability to reduce the wait for data needed to fulfill a user's request results in faster response times.
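
To make the dictionary idea concrete, here is a minimal Python sketch of that mechanism. It is an illustration only, not any vendor's implementation: the fixed-size chunking, the SHA-256 hashing and the RAW/REF message labels are assumptions chosen for clarity, while commercial optimizers typically use variable-size, content-defined chunks and keep their dictionaries on disk.

    import hashlib

    CHUNK = 4096  # illustrative chunk size; real products often chunk on content boundaries

    def dedupe_send(data, dictionary):
        """Replace chunks the link has already carried with short references."""
        messages = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            key = hashlib.sha256(chunk).digest()
            if key in dictionary:
                messages.append(("REF", key))      # pattern seen before: send only the pointer
            else:
                dictionary[key] = chunk            # log the pattern at the sending end
                messages.append(("RAW", chunk))    # first occurrence travels in full
        return messages

    def dedupe_receive(messages, dictionary):
        """Rebuild the original byte stream at the far end of the link."""
        parts = []
        for kind, payload in messages:
            if kind == "RAW":
                dictionary[hashlib.sha256(payload).digest()] = payload  # log it here too
                parts.append(payload)
            else:
                parts.append(dictionary[payload])  # look the pattern up locally
        return b"".join(parts)

Sending the same file a second time, or a lightly edited copy of it, produces mostly REF messages, which is where the dramatic reductions in WAN traffic come from.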

Compression helps most if performance issues center on moving large files from place to place: backing up or replicating data from one location to another, or moving large streams of "bulky" data -- XML messages passing among SOA components, for example, or EDI or HL7 data. It helps enormously if many users in a given location reach for the same remotely stored content, or if any given person uses the same content several times.

Acceleration: Cut to the chase

Accelerators compensate for the shortcomings in applications' network protocols, typically by lying to the systems at either end of the connection. Many network applications have protocols that only their creators could love, developed in low-load, high-bandwidth environments. Running those applications across high-latency, low-bandwidth, lossy or jittery WAN, mobile or Internet links exposes their weaknesses. For example, some applications are chatty, sending long request/response chains back and forth. Add high latency to each leg of each round trip, and the user finds the application has become painfully slow. TCP, which underlies most application protocols, has its own flow-control problems: it is slow to ramp transmission speeds back up over long, high-latency links but quick to cut them drastically when links get congested. Most TCP-based applications can benefit from acceleration.

Accelerators can spoof -- or cheat on -- chatty protocols to make the number of round trips smaller and less painful. Optimizers can fake answers to the programs at each end of a connection without waiting for the actual answers to come back, greatly speeding up operations. For example, if the sending program wants to wait for an acknowledgement of receipt of data (an ACK packet) to come back from the requesting system before it sends the next bits along, the optimizer can supply that ACK signal on its own schedule instead of waiting for a real response. This keeps the data flowing more quickly. Optimizers can even meddle with flow control in TCP to make it more intelligent, ramping transmission speeds up and down far less aggressively.
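
The local-acknowledgement trick can be sketched in a few lines of Python. This is a toy model, not a working TCP proxy: the class name, the simple queue and the ACK strings are invented for illustration, and a real optimizer must also guarantee delivery of any data it has acknowledged on the sender's behalf.

    from collections import deque

    class LocalAckOptimizer:
        """Toy model of ACK spoofing: acknowledge the local sender immediately,
        then forward the data across the slow WAN link in the background."""

        def __init__(self):
            self.in_flight = deque()   # segments still to be delivered to the far side

        def on_segment_from_sender(self, seq, payload):
            self.in_flight.append((seq, payload))   # will cross the WAN later
            return "ACK %d" % seq                   # spoofed ACK: the sender keeps transmitting now

        def drain_to_far_side(self):
            """Hand queued segments to the far-end optimizer, which replays them locally."""
            while self.in_flight:
                yield self.in_flight.popleft()

    opt = LocalAckOptimizer()
    print(opt.on_segment_from_sender(1, b"block-1"))   # "ACK 1" returned with no WAN round trip
    print(opt.on_segment_from_sender(2, b"block-2"))   # "ACK 2" -- the sender never stalls

Without the spoofed ACKs, each segment would wait roughly one WAN round trip before the next could go out; with them, the sender's pipeline stays full and the optimizer takes responsibility for eventual delivery.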

Many older LAN protocols -- such as Microsoft's pre-Vista standard for file server traffic, Server Message Block (SMB), better known in that form as the Common Internet File System (CIFS) -- are chatty and perform miserably on long, lossy connections.

Conditioning: Safety net

Traffic conditioning makes up for weaknesses in the network by helping remediate packet loss and prevent out-of-order packet delivery, timeouts and retransmits. UDP applications, which do not do TCP-style flow control, are especially vulnerable to loss and out-of-order delivery. Instead of introducing performance lags as packets get retransmitted, UDP apps just have to deal with data being missing or arriving out of order. Since the most common UDP apps are voice- and video-based, optimizers can add ordering information to packets as they pass through, or simply pay attention to the ordering information of TCP packets as they pass and make sure they leave the optimizer at the far end in order. They can also add error-correction bits to the packets, to reconstruct information in packets that get dropped along the way, preventing the need to request that a packet be retransmitted.
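
Forward error correction is the mechanism behind that last point. The sketch below uses the simplest possible scheme -- one XOR parity packet per small group -- purely for illustration; the group size, the function names and the assumption that every payload in a group is the same length are simplifications, and production optimizers use more capable codes.

    def xor_parity(packets):
        """Build one parity packet over a group: the byte-wise XOR of every payload."""
        size = max(len(p) for p in packets)
        parity = bytearray(size)
        for p in packets:
            for i, b in enumerate(p.ljust(size, b"\x00")):
                parity[i] ^= b
        return bytes(parity)

    def recover(received, parity, group_size):
        """If exactly one packet in the group was lost, rebuild it without a retransmit."""
        missing = [i for i in range(group_size) if i not in received]
        if len(missing) == 1:
            rebuilt = bytearray(parity)
            for p in received.values():
                for i, b in enumerate(p.ljust(len(parity), b"\x00")):
                    rebuilt[i] ^= b
            received[missing[0]] = bytes(rebuilt)
        return received

    group = [b"voice-frame-1", b"voice-frame-2", b"voice-frame-3"]
    parity = xor_parity(group)
    arrived = {0: group[0], 2: group[2]}           # the middle frame was dropped in transit
    print(recover(arrived, parity, 3)[1])          # b"voice-frame-2", rebuilt at the receiver

The cost is the extra parity traffic; the payoff is that a real-time stream never waits for a retransmission it cannot use anyway.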

Optimizers can also repackage packets to make a long train of small packets into a short train of larger ones, or vice versa, depending on the kind of traffic that will best allow overall optimization and delivery across a particular link at a particular time.
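
A rough sketch of that repackaging, again in Python and again purely illustrative (the two-byte length prefix and the 1,400-byte target are arbitrary choices): small payloads are packed into larger frames for the WAN and split back apart at the far end.

    def coalesce(payloads, target_size=1400):
        """Pack a long train of small payloads into a short train of larger frames.
        A length prefix on each record lets the far-end optimizer undo the packing."""
        frames, current = [], bytearray()
        for p in payloads:
            record = len(p).to_bytes(2, "big") + p
            if current and len(current) + len(record) > target_size:
                frames.append(bytes(current))
                current = bytearray()
            current += record
        if current:
            frames.append(bytes(current))
        return frames

    def split(frames):
        """Recover the original payloads at the far end of the link."""
        payloads = []
        for frame in frames:
            i = 0
            while i < len(frame):
                n = int.from_bytes(frame[i:i + 2], "big")
                payloads.append(frame[i + 2:i + 2 + n])
                i += 2 + n
        return payloads

    small = [b"x" * 64 for _ in range(50)]               # fifty tiny packets
    print(len(coalesce(small)), "frames instead of 50")  # far fewer headers on the WAN
    assert split(coalesce(small)) == small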

If you have lossy WAN links or use an Internet VPN as the WAN, traffic conditioning will be very important to your optimizer selection, especially if you need to send real-time traffic streams over them. Note: Voice, video and remote desktops hate retransmits.

Shaping traffic: Getting your priorities straight

Traffic shaping is aimed at making sure an organization has some control over how its bandwidth is consumed, guaranteeing bandwidth to certain applications, devices or users -- or deliberately withholding it from them. It can involve Quality of Service settings on routers, as well as optimizers that manipulate traffic flow rates directly or control flows using queuing (parking packets in buffers, or queues, to await delivery after higher-priority packets), selective packet dropping and connection closing (sending a "close" packet to one or both ends of a stream to shut it down).
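
The queuing and selective-dropping mechanics can be illustrated with a toy strict-priority queue in Python. The class, the priority values and the drop threshold are invented for the example; real shapers add rate limits, weighted fairness and per-class buffers.

    import heapq
    import itertools

    class PriorityShaper:
        """Toy strict-priority queue: lower numbers leave first; when the buffer
        backs up, best-effort traffic is dropped before anything important."""

        def __init__(self, max_queued=100):
            self.max_queued = max_queued
            self._heap = []
            self._order = itertools.count()   # FIFO tie-break within a priority class

        def enqueue(self, priority, packet):
            if len(self._heap) >= self.max_queued and priority > 0:
                return False                  # selective drop: shed recreational traffic first
            heapq.heappush(self._heap, (priority, next(self._order), packet))
            return True

        def dequeue(self):
            return heapq.heappop(self._heap)[-1] if self._heap else None

    shaper = PriorityShaper()
    shaper.enqueue(2, b"video stream chunk")   # recreational
    shaper.enqueue(0, b"VoIP frame")           # mission-critical
    print(shaper.dequeue())                    # b"VoIP frame" leaves first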

Traffic shaping makes the biggest difference when network professionals need to protect network capacity for mission-critical work from the impact of less important and purely recreational use.

Visibility: See clearly

Optimizers are often the first and only point where network staff can see, at the application level, which activities are consuming the most network resources. Network staff should hold out for depth and granularity in the information available about the traffic each application, network node and user generates or consumes.
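
As a trivial example of the kind of rollup that visibility data enables, the snippet below totals bytes per application and per user from flow-style records. The records and field layout are hypothetical; the point is the depth of attribution you should expect from a product's reporting.

    from collections import Counter

    # Hypothetical flow records: (application, user, bytes). A real optimizer or flow
    # collector would export something richer, but the rollup logic is the same.
    flows = [
        ("video-conferencing", "alice", 48_000_000),
        ("crm",                "bob",    2_500_000),
        ("video-conferencing", "carol", 61_000_000),
        ("backup",             "dc-01", 900_000_000),
    ]

    by_app, by_user = Counter(), Counter()
    for app, user, nbytes in flows:
        by_app[app] += nbytes
        by_user[user] += nbytes

    for app, nbytes in by_app.most_common():
        print("%-20s %8.1f MB" % (app, nbytes / 1e6))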

About the author
John Burke is a principal research analyst with Nemertes Research, where he advises key enterprise and vendor clients, conducts and analyzes primary research, and writes thought-leadership pieces across a wide variety of topics. John leads research on virtual enterprise, focusing primarily on the virtual and mobile desktop, Application Delivery Optimization (ADO), and management and orchestration tools for the virtualized data center and the cloud. He also covers the security aspects of all of these topics, as well as server and storage virtualization, network and application performance management and monitoring, branch office IT and SOA.

This was first published in April 2013
