As DevOps matures and microservices architectures spread, the application delivery controller (ADC) is undergoing its own transformation, making this a good time to take a fresh look at this networking management tool. Instead of simply enabling resilience through load balancing, today's ADCs are layering new functionality onto the services they front.
Microservices are tightly focused packages of code that power cloud-native applications. They derive much of their architectural heft from the fact that they can be spun up quickly to meet rising demand and deleted just as easily when no longer needed. But to make that flexibility work, incoming requests for a service must be spread across however many copies happen to be running. Enter load balancers. Containerized, programmatically controllable load balancers form part of the core networking management tool set for modern development projects, and open source load balancers dominate the space.
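The core job described above can be sketched in a few lines: a round-robin balancer that spreads incoming requests evenly across whatever replicas are currently registered. This is an illustrative toy, not any vendor's API; the class and addresses are invented for the example.

```python
import itertools

class RoundRobinBalancer:
    """Minimal sketch: spread incoming requests across service replicas."""

    def __init__(self, replicas):
        self.replicas = list(replicas)
        self._cycle = itertools.cycle(self.replicas)

    def next_backend(self):
        # Each call hands the next request to the next replica in turn.
        return next(self._cycle)

balancer = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
targets = [balancer.next_backend() for _ in range(6)]
```

A production ADC layers health checks, connection draining and replica registration on top of this loop, but the distribution principle is the same.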
Up to now, ADCs have generally been managed as permanent, stand-alone networking management tools -- though typically by the applications management side of the house rather than by the network team. They were installed, configured and tested, then placed under tight change management, lest something break due to unmonitored tweaking. That degree of oversight runs counter to the ideals of DevOps, which would put management of this critical infrastructure alongside control of the other key parts of the service cloud.
In the future, ADCs will be virtual, spun up and down as needed as part of a service environment under the aegis of infrastructure as code. As that shift gains traction, IT departments will need their next ADCs to provide a much more comprehensive set of features and capabilities than the current model of this networking management tool provides.
What to consider with ADCs
First and foremost, remember that ADCs must be easily created, configured and destroyed via scripts and APIs. Not only should use of a GUI be completely optional; its presence should be as well. After all, in the emerging paradigm, an admin will rarely need to log into a box and do anything to it manually: setup will be done by script, and misbehaving instances will be thrown away rather than diagnosed and repaired. Hauling around the code for a GUI makes an ADC heavier than it needs to be, and thus slower to deploy and move. GUIs will become optional equipment -- whitewall tires, not dashboards.
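Script-driven setup can be as simple as rendering the ADC's configuration from the current list of service instances. The sketch below emits a minimal HAProxy-style config; the frontend and backend names, port and addresses are assumptions for illustration, not a complete working configuration.

```python
def render_adc_config(backends, port=80):
    """Render a minimal HAProxy-style config from a list of backend addresses."""
    lines = [
        "frontend fe_main",
        f"    bind *:{port}",
        "    default_backend be_services",
        "",
        "backend be_services",
        "    balance roundrobin",
    ]
    # One 'server' line per live replica; regenerate whenever replicas change.
    for i, addr in enumerate(backends, start=1):
        lines.append(f"    server s{i} {addr} check")
    return "\n".join(lines)

cfg = render_adc_config(["10.0.0.1:8080", "10.0.0.2:8080"])
```

In practice a CI pipeline or orchestrator would write this file and reload or replace the ADC instance, which is exactly the "throw away rather than repair" model described above.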
Containerization is a corollary requirement. ADCs will have to be deployable within containers -- Docker at a minimum, though support for other Linux container platforms is ideal. Containerization makes micro-ADCs manageable with the same tools as every other container in the environment.
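As one illustration, a containerized micro-ADC can ride alongside the services it fronts in the same Compose file, managed by the same tooling. The image tag, file paths and service names below are assumptions for the sketch, not a recommendation:

```yaml
services:
  adc:
    image: haproxy:2.9              # micro-ADC running as an ordinary container
    ports:
      - "80:80"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
  app:
    image: example/service:latest   # one of the replicas the ADC fronts
```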
Finally, in addition to being lightweight and fully programmable, ADCs will have to scale horizontally, not vertically: to handle more traffic, organizations will deploy more of these networking management tools, not bigger ones. That shift demands a change in licensing terms. An organization running dozens of ADCs spread across its environment, instead of a couple of giant centralized boxes, will need a licensing model that supports the new paradigm.
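Horizontal scaling turns capacity planning into simple arithmetic: keep adding identical instances until aggregate capacity covers demand. A minimal sketch, with the per-instance throughput and headroom factor as assumed figures:

```python
import math

def instances_needed(total_rps, per_instance_rps, headroom=1.25):
    """How many identical micro-ADCs cover the load, with headroom for spikes."""
    return math.ceil((total_rps * headroom) / per_instance_rps)

# Assumed numbers for illustration: 40,000 req/s of traffic,
# 6,000 req/s handled per micro-ADC, 25% headroom.
count = instances_needed(40_000, 6_000)
```

The vertical-scaling alternative is a step function -- replace the box with a bigger box -- whereas here capacity moves in small, cheap increments.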
The costs of your ADC decision
At worst, an enterprise will expect overall licensing costs to hold steady, even if they are allocated on a different basis. To that end, each micro-ADC will need to cost far less than the conventional chokepoint boxes folks are used to. More interestingly, vendors in the microservices world are stepping away from instance pricing: the nodes cost nothing individually, and customers instead pay per utilization, whether measured by cumulative throughput, daily peak throughput or some other gauge of use.
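To see how the two models diverge, here is a hypothetical side-by-side comparison; every rate and quantity below is invented for illustration and implies nothing about any vendor's actual pricing.

```python
def instance_cost(num_instances, per_instance_fee):
    """Classic model: every deployed ADC instance carries a license fee."""
    return num_instances * per_instance_fee

def utilization_cost(total_tb_transferred, rate_per_tb):
    """Utilization model: instances are free; pay for cumulative throughput."""
    return total_tb_transferred * rate_per_tb

# Hypothetical month: 30 micro-ADCs moving 50 TB in aggregate.
classic = instance_cost(30, 500)       # 30 instances at a $500 fee each
metered = utilization_cost(50, 120)    # 50 TB at $120 per TB
```

Under instance pricing, scaling out to dozens of micro-ADCs multiplies the bill even when traffic is flat; under utilization pricing, the bill tracks actual use regardless of how many nodes carry it.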
As the world shifts further along its arc toward automation, DevOps and microservices, flexible and dynamic ADCs will become the only path forward for vendors and users alike. IT shops should have their eyes on that goal starting now.