Successful networked application delivery requires a framework that consists of planning, optimization, management and control. This document focuses on the optimization component of the framework in general, and on the role of application delivery controllers in particular.
The phrase application optimization refers to a broad set of techniques that organizations deploy to ensure acceptable application performance. The primary roles these techniques play are to:
- Reduce the amount of data that must be stored or transmitted.
- Reduce the number of round trips (a.k.a., application turns) necessary for a given transaction.
- Mitigate the inefficiencies of older protocols.
- Offload computationally intensive tasks from client systems and servers.
There are two principal categories of network and application optimization products. One category focuses on delivering applications to branch office employees over network technologies such as frame relay, ATM and MPLS. This product category is typically referred to as a WAN optimization controller (WOC).
The second category of products focuses on delivering applications to a wide range of interested parties (e.g., employees, customers, partners) over the Internet. This product category is typically referred to as an application delivery controller (ADC). ADCs are sometimes described as being an asymmetric solution because an appliance is required only in the data center, not in the branch office.
The genesis of this category of solution dates back to the IBM mainframe-computing model of the late 1960s and early 1970s. Part of that computing model was to have a front-end processor (FEP) reside in front of the IBM mainframe. The primary role of the FEP was to free up processing power on the general-purpose mainframe computer by performing communications processing tasks, such as terminating the 9600 baud multi-point private lines, in a device designed specifically for those tasks. The role of the ADC is somewhat similar to that of the FEP in that it performs computationally intensive tasks, such as the processing of secure sockets layer (SSL) traffic, thereby freeing up server resources. The ADC also plays a role that the FEP did not: that of server load balancer (SLB), which, as the name implies, distributes traffic across multiple servers.
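The server load balancing role can be sketched in a few lines of Python. The server names and the simple round-robin policy below are illustrative assumptions; real SLBs support many scheduling policies (least connections, weighted, health-aware) beyond this minimal example:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin server load balancer sketch."""
    def __init__(self, servers):
        self._pool = cycle(servers)

    def pick(self):
        # Each call hands the next request to the next server in turn.
        return next(self._pool)

# Hypothetical pool of three Web servers behind the ADC.
balancer = RoundRobinBalancer(["web1", "web2", "web3"])
requests = [balancer.pick() for _ in range(6)]
# Traffic is spread evenly: each server receives two of the six requests.
```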
ADCs can accelerate the performance of applications delivered over the Internet by implementing optimization techniques such as reverse caching, asymmetrical TCP optimization, and compression. With reverse caching, new user requests for static or dynamic Web objects can often be delivered from the cache rather than having to be regenerated by the servers. Reverse caching therefore improves user response time and minimizes the load on Web servers, application servers and database servers.
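The reverse caching idea can be sketched as follows. The `origin` function is a hypothetical stand-in for the Web, application, and database tier that would otherwise regenerate each object; the counter simply makes the offload visible:

```python
hits = {"origin": 0}

def origin(url):
    # Stand-in for the back-end tier; each call represents server work.
    hits["origin"] += 1
    return f"content for {url}"

class ReverseCache:
    """Serve repeated requests from cache instead of the origin servers."""
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin
        self._cache = {}

    def get(self, url):
        if url not in self._cache:
            # Cache miss: the origin regenerates the object exactly once.
            self._cache[url] = self._fetch(url)
        return self._cache[url]

cache = ReverseCache(origin)
first = cache.get("/index.html")    # first request goes to the origin
second = cache.get("/index.html")   # repeat request is served from cache
```

Note that a production reverse cache must also handle expiry and invalidation, which this sketch omits.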
Asymmetrical TCP optimization is based on the ADC serving as a proxy for TCP processing, minimizing the server overhead for fine-grained TCP session management. TCP proxy functionality is designed to deal with the complexity associated with the fact that each object on a webpage requires its own short-lived TCP connection. Processing all of these connections can consume an inordinate amount of the server's CPU resources. Acting as a proxy, the ADC terminates the client-side TCP sessions and multiplexes numerous short-lived network sessions initiated as client-side object requests into a single longer-lived session between the ADC and the Web servers.
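The multiplexing idea can be illustrated schematically. The classes below model sessions as plain objects rather than real sockets, and all names are illustrative, not a description of any vendor's implementation:

```python
class BackendConnection:
    """One long-lived ADC-to-server connection reused across clients."""
    def __init__(self):
        self.requests_sent = 0

    def send(self, request):
        self.requests_sent += 1
        return f"response to {request}"

class AdcProxy:
    """Terminates many short-lived client-side sessions and funnels their
    object requests into a single persistent server-side session."""
    def __init__(self):
        self.backend = BackendConnection()   # one connection to the server
        self.client_sessions = 0

    def handle_client_request(self, request):
        self.client_sessions += 1            # new short-lived client session
        return self.backend.send(request)    # reuse the server-side session

proxy = AdcProxy()
for obj in ["/index.html", "/logo.png", "/style.css"]:
    proxy.handle_client_request(obj)
# Three client-side sessions, but the server handled only one connection.
```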
The ADC can also offload Web servers by performing compute-intensive HTTP compression operations. HTTP compression is a capability built into both Web servers and Web browsers. Moving HTTP compression from the Web server to the ADC is transparent to the client and so requires no client modifications. HTTP compression is asymmetrical in the sense that there is no requirement for additional client-side appliances or technology.
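The offload decision can be sketched with Python's standard `gzip` module. The request/response shapes below are simplified assumptions; the point is that the ADC compresses only when the browser has advertised gzip support, so the exchange remains transparent to the client:

```python
import gzip

def compress_response(body: bytes, accept_encoding: str):
    """Compress on behalf of the Web server if, and only if, the client's
    Accept-Encoding header advertised gzip support."""
    if "gzip" in accept_encoding:
        return gzip.compress(body), "gzip"
    return body, "identity"

body = b"<html>" + b"x" * 1000 + b"</html>"
compressed, encoding = compress_response(body, "gzip, deflate")
# The payload on the wire shrinks, and the browser's own gzip support
# restores the original bytes -- no client-side changes required.
```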
ADCs may also provide an additional layer of security for Web applications by incorporating application firewall functionality. Application firewalls are focused on blocking increasingly prevalent application-level attacks. Application firewalls are typically based on deep packet inspection (DPI), coupled with session awareness and behavioral models of normal application interchange. For example, an application firewall would be able to detect and block Web sessions that violate rules defining the normal behavior of HTTP applications and HTML programming. Therefore, application firewalls complement traditional perimeter firewalls that are based on recognition of known network-level attack signatures and patterns. Application firewalls also have the advantage of providing a measure of protection against zero day exploits by blocking the sessions of clients whose behaviors are outside the bounds of admissibility.
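A toy illustration of rule-based request inspection follows. The two signatures below are deliberately simplistic stand-ins: real application firewalls combine many signatures with session awareness and behavioral baselines, as described above:

```python
import re

# Illustrative rules only -- crude signatures, not a real DPI ruleset.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)union\s+select"),   # naive SQL-injection signature
    re.compile(r"(?i)<script"),          # naive cross-site-scripting signature
]

def inspect(request_line: str) -> bool:
    """Return True if the request violates a rule and should be blocked."""
    return any(p.search(request_line) for p in BLOCKED_PATTERNS)

inspect("GET /search?q=shoes HTTP/1.1")                   # normal: allowed
inspect("GET /item?id=1 UNION SELECT passwd HTTP/1.1")    # anomalous: blocked
```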
Several factors are increasing the need for ADCs. For example, in virtually all cases today, at least some of the constituencies that an IT organization must support do not access enterprise applications over WAN services such as frame relay and MPLS; they access them over the Internet. In addition, the boundaries of the typical enterprise will continue to blur because of an increasingly diverse user community, as well as the adoption of new distributed application architectures (e.g., Web-enabled applications and business processes, SOA/Web Services, SaaS, and cloud computing) that often span multiple enterprises. As those boundaries blur, a growing percentage of business-critical application traffic will be carried on the Internet. For the reasons previously discussed, ADCs help IT organizations minimize the cost of the servers that support these applications while simultaneously improving application performance.
About the author:
Dr. Jim Metzler, Principal at Ashton Metzler and Associates, is a widely recognized authority on both network technology and its business applications. In more than 28 years of professional experience, Jim has helped numerous vendors refine their product and service strategies and has helped enterprises evolve their network infrastructure. He has directed and conducted market research at a major industry analyst firm and has run a consulting organization. Jim holds a Ph.D. in numerical analysis from Boston University. He has co-authored a book, published by Prentice Hall, entitled Layer 3 Switching: A Guide for IT Professionals.