Application delivery controllers, or ADCs, are physical or virtual devices deployed between applications and users. They intercept session traffic between applications and users and provide a range of functions designed to optimize the end-user application experience in a way that's transparent to both the consumer and the requested application or service.
At the highest level, an ADC device provides load balancing, quality of service, high availability and, frequently, security features in a single footprint -- without requiring a companion device at the other edge of the connection. ADCs essentially serve as a proxy for application servers, making a pool of back-end application servers look like a single application server to users.
The load-balancing function of an ADC distributes incoming loads in a variety of ways. The approach can be as simple as a round robin distribution, or as sophisticated as alerting the ADC device of the load levels on servers and assigning sessions based on available resources.
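Those two ends of the spectrum can be sketched in a few lines. This is a minimal illustration, not a real ADC implementation; the server names and load figures are hypothetical.

```python
# Two ADC load-balancing strategies in miniature.
from itertools import cycle

servers = ["app1.internal", "app2.internal", "app3.internal"]

# Round robin: hand sessions out in a fixed rotation.
_rotation = cycle(servers)
def round_robin():
    return next(_rotation)

# Resource-aware: servers report their load levels, and the ADC
# assigns the session to the least-loaded one.
reported_load = {"app1.internal": 0.72, "app2.internal": 0.31, "app3.internal": 0.55}
def least_loaded():
    return min(reported_load, key=reported_load.get)
```

In practice the "reported load" would come from agents or telemetry on the servers themselves, but the selection logic is essentially this simple.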
Application delivery controllers use various methods, such as a heartbeat, to avoid sending traffic to servers that are unresponsive or down.
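A heartbeat check of this kind amounts to tracking consecutive probe failures per server and removing a server from the rotation once it misses too many. A minimal sketch, with a hypothetical failure threshold:

```python
# Heartbeat-style health checking: a server that misses three
# consecutive probes is pulled from the healthy pool; a successful
# probe resets its counter.
FAILURE_THRESHOLD = 3

class ServerPool:
    def __init__(self, servers):
        self.failures = {s: 0 for s in servers}

    def record_probe(self, server, responded):
        # A successful probe clears the count; a miss increments it.
        self.failures[server] = 0 if responded else self.failures[server] + 1

    def healthy(self):
        # Only servers below the failure threshold receive traffic.
        return [s for s, f in self.failures.items() if f < FAILURE_THRESHOLD]
```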
Historically, ADCs were deployed at the edge of where the application data center communicates with clients connecting across a WAN -- typically, the internet. Today, application delivery controllers are also deployed on corporate intranets or inside cloud data centers connecting LAN-based clients to LAN-based application servers. Most ADC devices manage traffic to and from servers local to them -- for example, on the same LAN. But some provide "global" functions where traffic is transparently redirected across the WAN to other, remote application servers.
Recent-generation ADCs can provide even more sophisticated and dynamic use of back-end server resources. To that end, ADCs are perfectly positioned to provide a single point of service for many back-end servers. Secure Sockets Layer (SSL) offload, web application firewall and related functions can all be implemented at the ADC. This approach is typically much more cost-effective than having each server implement these functions on its own. It's certainly much easier to manage these functions in a single ADC than separately in dozens or more servers. It also allows back-end server resources to be used for application services, rather than for security or cryptography functions.
Using extensive research into the application delivery controller market, TechTarget editors focused on those companies that offer the broadest selection of ADC features -- both through appliances and through software only. Our research included data from TechTarget surveys, interviews and reports from other respected research firms, including Gartner.
How application delivery controllers evolved
In the 1990s, the lack of server compute power forced companies such as Amazon, CNN and others to build large farms of servers to handle client requests. Yet, it was impractical for end-user clients to keep a record of all available server addresses; thus, the load balancer was born. The load balancer let users navigate to a single web address, such as amazon.com, without having to know or care which servers actually delivered the application services.
Before there were ADCs, traffic was moved around at Layer 2 (switching) and Layer 3 (routing) as traffic went between client and server. The Layer 2-3 devices looked no deeper into the packet than the Layer 3 IP subnetwork address for making forwarding decisions. At that layer, the application is invisible. Thus, Layer 2-3 switches couldn't provide any application services.
Load balancers, by contrast, could peer into the packets and act on relevant application data. As noted, the first problem this class of devices solved was to balance front-end, or client, loads with a back-end group of application servers. Eventually, these devices morphed into ADCs, as more sophisticated functionality -- such as web application firewall, denial-of-service protection and SSL offload -- was added.
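The difference is easy to see in code. A Layer 2-3 device forwards on addresses alone, but an ADC can parse the request itself and route on application data such as the HTTP Host header or URL path. A simplified sketch, with hypothetical pool names:

```python
# Application-layer routing: decisions based on the contents of the
# HTTP request, which is invisible to a Layer 2-3 switch.
def choose_pool(raw_request: bytes) -> str:
    request_line, *header_lines = raw_request.decode().split("\r\n")
    method, path, version = request_line.split(" ")
    headers = dict(h.split(": ", 1) for h in header_lines if ": " in h)

    if path.startswith("/images/"):
        return "static-content-pool"   # static assets to cache-friendly servers
    if headers.get("Host") == "api.example.com":
        return "api-pool"              # API traffic to its own farm
    return "default-web-pool"
```

Real ADCs perform this parsing in optimized data paths, but the principle is the same: forwarding decisions driven by application content, not just IP addresses.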
Early ADC-class devices -- even as they provided capabilities that went beyond load balancing -- relied on custom, application-specific integrated circuits (ASICs), which made them quite expensive. The increasing power of off-the-shelf Intel chips, however, affected ADCs in several ways. As application servers became more powerful with multiple cores, massive amounts of memory and high-speed, high-capacity disks, the number of individual servers required in farms declined. Thus, the original load-balancing function became somewhat less important -- or at least somewhat less demanding.
At the same time, more powerful PCs allowed vendors to design software-only ADCs that didn't require expensive ASICs to run. While software-based ADCs weren't suitable for the largest installations, they were dramatically less expensive and suitable for many deployment scenarios.
In recent years, the rise of virtual appliances has intersected nicely with software ADCs. Now, most of these products are available prepackaged as virtual appliances in environments such as Microsoft Hyper-V or VMware ESXi.
The benefits of ADC devices
Application delivery controllers address the following needs:
High availability. If your applications require two or more back-end servers, you will want, at a minimum, the load-balancer functionality to provide both high availability and server resource balancing. Because, almost by definition, every important e-commerce and corporate application needs high availability, just about every customer will benefit from the deployment of an ADC for external and internal applications.
Offload. The ADC is perfectly positioned to offload some processing from the back-end servers. The SSL protocol used on e-commerce transactions is a prime example of a process offloaded to ADCs. This allows a single ADC device to use cryptographic hardware to terminate the SSL session instead of requiring cryptographic functions to run on every back-end server. Some ADCs will offer caching services, which can free the back-end servers from responding repeatedly to requests for the same data. The ADC can store certain static data and deliver it to clients on behalf of the servers.
Distributed denial-of-service (DDoS) protection. As the point where internet users contact applications, the ADC is in a perfect spot to provide security services on behalf of the servers in the back-end farm. DDoS protection is a common security service provided by ADCs. This means back-end servers don't have to support DDoS protection on their own. At the same time, by assuming the responsibility for guarding servers against a DDoS attack, ADCs further increase an application's availability.
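One common building block of that protection is per-client rate limiting: the ADC drops traffic from sources that exceed a request budget within a time window, so floods never reach the farm. A simplified sketch with illustrative thresholds:

```python
# Per-client rate limiting, one simple DDoS mitigation an ADC can apply
# in front of the server farm. Window and budget values are illustrative.
from collections import defaultdict

WINDOW_SECONDS = 1.0
MAX_REQUESTS_PER_WINDOW = 100

class RateLimiter:
    def __init__(self):
        self.windows = defaultdict(list)  # client IP -> request timestamps

    def allow(self, client_ip, now):
        # Keep only timestamps inside the current window.
        recent = [t for t in self.windows[client_ip] if now - t < WINDOW_SECONDS]
        self.windows[client_ip] = recent
        if len(recent) >= MAX_REQUESTS_PER_WINDOW:
            return False                  # over budget: drop the request
        recent.append(now)
        return True
```

Commercial ADCs layer far more on top of this (SYN-flood defenses, anomaly detection, connection limits), but the principle of absorbing abusive traffic before it reaches the servers is the same.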
The overall ADC market and leading vendors
Years ago, the application delivery controller market was high-end-only, as all ADCs were ASIC-based and inherently expensive. Today's market offers a mix: high-end products for major e-commerce sites, along with lower-cost options for midmarket companies and SMBs with lower performance demands and tighter budgets.
Citrix, F5 Networks and Radware Corp. were ranked among the market leaders in Gartner's 2016 Magic Quadrant for ADC vendors, with A10 Networks described as a market challenger. All continue to deliver high-end, high-performance systems, and each has diversified its product portfolio to offer virtual appliances, as well.
Vendors such as Array Networks Inc., Barracuda Networks and Kemp Technologies focus on the midmarket. These vendors started out selling packaged, hardware-based products -- sometimes with boards that would offload some of the processing -- but they now also offer software-only versions.
Avi Networks and NGINX offer software that's engineered to run only on commodity servers.
Finally, the growth of cloud computing has spawned products that exist just for in-cloud use -- such as Amazon Web Services Elastic Load Balancer and Microsoft Azure Load Balancer. These are marketed as basic load balancers, rather than as more sophisticated ADCs.