IT organizations have long depended on application delivery controllers to enable service requests in server farms and to optimize application performance. But now that virtualization has added complexity to server environments, load balancers must meet new challenges. Application delivery controllers (ADCs) can enhance virtual machine (VM) performance by enabling smoother dynamic creation and movement of VMs, but they fall short in meeting other challenges.
In a typical server farm, ADCs from companies such as Citrix, A10, Crescendo, Radware, Zeus, Brocade, Cisco and F5 help ensure that a request for service is delivered to the most appropriate server. There are many ways to define "most appropriate server." Certainly the server has to be available, and ideally it is the server carrying the lightest load of any member of the farm. ADCs also provide additional functionality, including offloading the processing of SSL traffic, as well as TCP offload and multiplexing.
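The selection logic above can be sketched in a few lines. This is a minimal illustration of one common definition of "most appropriate server" (fewest active connections among available servers); the data structure is a hypothetical stand-in for the ADC's internal server-farm state, not any vendor's actual implementation.

```python
# Minimal sketch: pick the available server with the lightest load,
# measured here as active connection count (hypothetical farm state).

def pick_server(servers):
    """Return the available server with the fewest active connections."""
    candidates = [s for s in servers if s["available"]]
    if not candidates:
        raise RuntimeError("no available servers in the farm")
    return min(candidates, key=lambda s: s["active_connections"])

farm = [
    {"name": "web1", "available": True,  "active_connections": 42},
    {"name": "web2", "available": True,  "active_connections": 7},
    {"name": "web3", "available": False, "active_connections": 0},
]
print(pick_server(farm)["name"])  # web2
```

Real ADCs support several such policies (round robin, weighted least connections, fastest response time); this sketch shows only the least-connections case.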
How application delivery controllers support virtualization
Various hypervisors interface with ADCs differently. And though VMware holds its place as the dominant hypervisor provider, it is becoming increasingly common to find IT organizations that use other hypervisors, including Citrix XenServer, Red Hat KVM and Microsoft Hyper-V.
One of the challenges associated with having multiple hypervisors is that each comes with its own management system, and each system interfaces differently with ADCs, depending on the APIs that exist for this interchange. Enterprises must be aware of these interfaces before choosing one or multiple hypervisors.
Application delivery controllers and VM provisioning
One of the challenges for ADCs is the ability to continue load balancing and directing traffic when a hypervisor management system, such as VMware's vCenter, creates and moves VMs.
In many cases, the hypervisor management system communicates with servers to create new VMs or move an existing VM based on a predefined threshold, such as 80% CPU utilization. ADCs must continue to direct traffic to VMs even as they are moved.
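The predefined-threshold rule described above can be stated compactly. The 80% figure comes from the example in the text; the function name and structure are assumptions for illustration, not a real hypervisor management API.

```python
# Hedged sketch of a threshold-based provisioning decision: the management
# system acts (creates or moves a VM) when a host crosses a predefined
# CPU-utilization threshold. The 80% value matches the example above.

CPU_THRESHOLD = 0.80

def should_rebalance(host_cpu_utilization: float) -> bool:
    """Decide whether to create a new VM or migrate one off this host."""
    return host_cpu_utilization > CPU_THRESHOLD

print(should_rebalance(0.85))  # True
print(should_rebalance(0.60))  # False
```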
Typically, an external agent communicates with the ADC using one of the control APIs that the ADC supports. These are usually based on Simple Object Access Protocol (SOAP), a Command Line Interface (CLI) or direct reconfiguration. The agent could be a start-up script inside the VM, or it could be the provisioning/management agent that initiated the provisioning of the VM. The provisioning/management agent could be part of an external workflow orchestration system, or it could be part of the orchestration function within the hypervisor management system.
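A rough sketch of the agent-to-ADC interaction described above follows. Because the control API differs per vendor (SOAP, CLI or direct reconfiguration), the `adc_add_member` command string here is purely hypothetical; a real deployment would use the vendor's documented API call instead.

```python
# Sketch of an external agent (e.g. a VM start-up script) registering a
# newly provisioned VM with an ADC server pool. The CLI syntax below is
# a hypothetical placeholder for a vendor-specific SOAP/CLI/REST call.

def register_vm_with_adc(adc_host: str, pool: str, vm_ip: str, port: int = 80) -> str:
    """Build the (hypothetical) CLI command that adds a VM to an ADC pool."""
    command = f"adc_add_member pool={pool} member={vm_ip}:{port}"
    # In production, this command would be sent over SSH or wrapped in a
    # SOAP request, depending on which control APIs the ADC supports.
    return f"ssh {adc_host} '{command}'"

print(register_vm_with_adc("adc1.example.com", "web-pool", "10.0.0.42"))
```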
ADCs must also keep up with the de-provisioning of VMs. Hypervisor management systems can disable a VM, preventing new connections from being initiated but allowing the VM to continue serving existing session connections. The VM must be polled periodically to determine when it is no longer servicing connections, at which point it can be removed and the associated IP address can be reclaimed.
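The drain-then-remove cycle just described can be sketched as a polling loop: disable the VM so it accepts no new connections, poll its active-session count, and reclaim it once the count reaches zero. The `get_active_sessions` hook and the dictionary-based VM record are assumptions for illustration.

```python
# Sketch of de-provisioning: stop new connections, poll until existing
# sessions drain, then remove the VM and reclaim its IP address.

import time

def drain_and_remove(vm, get_active_sessions, poll_interval=30, max_polls=100):
    """Disable a VM, wait for its sessions to drain, then de-provision it."""
    vm["accepting_new_connections"] = False   # ADC sends no new sessions
    for _ in range(max_polls):
        if get_active_sessions(vm) == 0:
            vm["removed"] = True              # de-provision; reclaim the IP
            return True
        time.sleep(poll_interval)
    return False  # still draining after the polling budget was exhausted

vm = {"name": "web2", "accepting_new_connections": True, "removed": False}
drain_and_remove(vm, lambda v: 0, poll_interval=0)  # sessions already drained
print(vm["removed"])  # True
```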
The movement of a VM between servers can be more complicated than the task of just creating a VM. If the VM is being moved to another server on the same subnet in the same data center, the traffic will simply continue to be sent to the same IP address, and the ADC will continue to handle standard traffic management functions. If the VM is being moved to a server on a different subnet in the same data center, one approach is to establish any new connections to the new VM and to keep the old VM in place to service existing sessions, then de-provision the old VM as described in the preceding paragraph.
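The cross-subnet approach above amounts to session pinning: existing sessions stay on the old VM while new connections go to the new one. The sketch below uses a simple dictionary as a stand-in for the ADC's session-persistence table; the addresses and session IDs are invented for illustration.

```python
# Sketch of routing during a cross-subnet VM move: sessions opened before
# the move stay pinned to the old VM; new sessions go to the new VM.

OLD_VM, NEW_VM = "10.0.1.10", "10.0.2.10"
persistence_table = {"sess-a": OLD_VM}  # session opened before the move

def route(session_id: str) -> str:
    """Known sessions keep their original VM; new sessions get the new VM."""
    return persistence_table.setdefault(session_id, NEW_VM)

print(route("sess-a"))  # 10.0.1.10 (existing session stays on the old VM)
print(route("sess-b"))  # 10.0.2.10 (new session goes to the new VM)
```

Once the persistence table holds no entries pointing at the old VM, it can be de-provisioned as described above.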
Where application delivery controllers for virtualization fall short
Moving a VM between servers in disparate data centers is very challenging, and in some cases there is nothing that an ADC can do to respond to these challenges.
Cisco and VMware, for example, have stated that when moving VMs between servers in disparate data centers, the maximum roundtrip latency between the source and destination VMware ESX servers cannot exceed 5 milliseconds. The speed of light in a combination of copper and fiber is roughly 120,000 miles per second. In 5 ms, light can travel about 600 miles. Since the 5 ms is roundtrip delay, the data centers can be at most 300 miles apart. That 300-mile figure assumes that the WAN link is a perfectly straight line between the source and destination ESX servers and that the data that is being transmitted does not spend any time at all in a queue in a router or other device. Both of those assumptions are unlikely to be the case.
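The distance arithmetic above is easy to verify. This assumes, as stated in the text, a signal propagation speed of roughly 120,000 miles per second over mixed copper and fiber.

```python
# Best-case distance implied by a 5 ms roundtrip latency budget.

PROPAGATION_MPS = 120_000        # miles per second in mixed copper/fiber
ROUND_TRIP_BUDGET_S = 0.005      # 5 ms maximum roundtrip latency

# The signal must cover the distance twice (out and back), so the
# one-way separation is half of what the budget allows in total.
one_way_miles = PROPAGATION_MPS * ROUND_TRIP_BUDGET_S / 2
print(one_way_miles)  # 300.0
```

In practice, circuitous WAN paths and queuing delay in routers push the usable distance well below this 300-mile ceiling.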
To support VMs moving between servers in disparate data centers, one can extend the VLAN between the VM and the ADC in the originating data center to the ADC in the receiving data center, and then proceed as if the VM were being moved between servers in the same data center. How this is done tends to be specific to each ADC vendor.
Choosing an application delivery controller for virtualization
When choosing an ADC, it is important for an IT organization to understand how its hypervisor management system will interface with the load balancer and how the ADC supports the dynamic creation and movement of VMs. Most importantly, the IT organization must evaluate how the ADC performs these tasks when it is running in an actual production environment.
About the author: Dr. Jim Metzler, Principal at Ashton Metzler and Associates, is a widely recognized authority on both network technology and its business applications. In more than 28 years of professional experience, Jim has helped numerous vendors refine their product and service strategies and has helped enterprises evolve their network infrastructure. He has directed and conducted market research at a major industry analyst firm and has run a consulting organization. Jim holds a Ph.D. in numerical analysis from Boston University. He has co-authored a book, published by Prentice Hall, entitled Layer 3 Switching: A Guide for IT Professionals.