"Use of containers for VNFs [virtual network functions] appears to have significant advantages compared to using...
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
VMs [virtual machines] and hypervisors, especially for efficiency and performance." -- Internet Engineering Task Force report, "An Analysis of Container-based Platforms for NFV"
It is no secret that container networking has been the new hotness in the development and open source world for the last couple of years. Container technology itself has been around much, much longer, of course, but as is often the case with technologies enjoying a rebirth, containers are now at the center of the spotlight. And with that popularity comes the rush of everyone trying to attach themselves to its coattails and reap the rewards of proximity. Networking vendors have not been immune to this phenomenon.
Containers themselves have existed in several forms for years: Linux-based containers, along with other OS-level virtualization technologies, such as OpenVZ, Linux-VServer, Solaris Containers, AIX Workload Partitions and FreeBSD jails -- a personal favorite -- to name a few.
It was not until Docker came along that the technology really began to hit the mainstream, however. And since then, the trajectory has been nothing but upward. Whole ecosystems have sprouted around container management, networking, automation, security and more, with the Docker-specific ecosystem being arguably the largest.
At DockerCon 2016 in Seattle, I got a firsthand look at the number of companies vying to get noticed on the expo floor, and I was genuinely surprised to see so many startups with very compelling product offerings. More impressive than that was the number of large and established vendors that had a presence, among them Dell EMC, NetApp, VMware, Oracle, Veritas and CA.
Moreover, the major networking vendors were also in attendance, with Cisco, Juniper Networks, Arista Networks and Hewlett Packard Enterprise all showcasing various concepts and services. A couple were concrete, and a couple were more theoretical -- the kind of product with which you show up at a conference if you really don't have a product. Still, Docker built it and they came.
Service providers stepping in
This is all well and good, but within the networking world, you'd be forgiven for thinking, "Yes, but who is actually doing anything real and concrete with containers? Who is doing more than just talking?" As it turns out, the biggest adoption and momentum is coming from the service provider space, and providers are implementing containers as quickly as they can vet the performance metrics.
Physical routers tend to be underutilized in general -- and at specific times, extremely so. Virtualizing routers would be one way to remedy this, much as has been done in the server space. Full virtualization is fairly inefficient, however: any potential savings are quickly offset by the fixed compute, memory and storage footprint each virtual machine consumes, while the underutilization problem simply moves into the VM. Service providers are especially sensitive to performance numbers, as they are constantly fighting downward price pressure in the market.
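A back-of-the-envelope calculation illustrates why per-instance overhead matters so much here. All numbers below are hypothetical assumptions for illustration, not measurements from any vendor or provider:

```python
# Illustrative comparison (hypothetical numbers): packing routing
# workloads onto one host as full VMs vs. as containers.

HOST_MEMORY_GB = 64

# Assume each routing workload itself needs 2 GB of memory.
WORKLOAD_GB = 2

# A full VM carries its own guest OS; assume ~1.5 GB of fixed
# overhead per instance. A container shares the host kernel;
# assume ~0.05 GB of per-instance overhead.
VM_OVERHEAD_GB = 1.5
CONTAINER_OVERHEAD_GB = 0.05

def instances_per_host(per_instance_overhead_gb):
    """How many workloads fit in host memory at a given overhead."""
    return int(HOST_MEMORY_GB // (WORKLOAD_GB + per_instance_overhead_gb))

print("VMs per host:       ", instances_per_host(VM_OVERHEAD_GB))         # 18
print("Containers per host:", instances_per_host(CONTAINER_OVERHEAD_GB))  # 31
```

Even with these rough assumed figures, the fixed per-VM overhead meaningfully reduces how densely workloads can be packed -- which is exactly the efficiency argument providers are making.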
Container networking is one possible solution to the problem, and one that has the most traction today. By containerizing the networking functions, such as routing protocols, forwarding planes, control planes and so on, containers can be spun up or down as loads change over time. And since containers instantiate very quickly -- seconds in many cases -- there is very little inefficiency inherent in their transient nature. It's also possible to orchestrate cloud-native network microservices such as these to move within the service provider's network in real time in response to load.
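The scale-up/scale-down idea above can be sketched as a simple sizing policy. Everything here is a hypothetical illustration -- the capacity figure, headroom value and function names are assumptions, and a real deployment would drive a container engine or orchestrator rather than just compute a number:

```python
import math

# Assumed capacity of one containerized forwarding-plane instance,
# in packets per second. A real value would come from benchmarking.
PACKETS_PER_INSTANCE = 100_000

def desired_instances(offered_load_pps, headroom=0.2):
    """Number of forwarding-plane containers for the current load,
    keeping `headroom` spare capacity; always at least one running."""
    target = offered_load_pps * (1 + headroom) / PACKETS_PER_INSTANCE
    return max(1, math.ceil(target))

# As load changes over time, the fleet is resized up or down; because
# containers start in seconds, the transient instances cost little.
for load in (0, 100_000, 450_000):
    print(f"{load:>8} pps -> {desired_instances(load)} container(s)")
```

An orchestrator would evaluate a policy like this on each monitoring interval and launch or stop containers to match the result, which is what makes the fast instantiation time of containers so valuable.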
Moving away from ASICs, proprietary techs
While many of the big networking vendors are moving aggressively into the container and network functions virtualization world -- Cisco's acquisition of ContainerX, as well as its work on Project Contiv, being two examples -- a lot of the work is happening on so-called white box gear, which is simply commodity hardware running what is usually open source software. Why? Because as soon as you virtualize the network functions, you're no longer beholden to custom application-specific integrated circuits or other specialized hardware, and you can scale faster and more cheaply than by purchasing more monolithic boxes. However, do not think for a second that the big players aren't going to attack this market aggressively and gain traction as their products mature.
At the end of the day, just as monolithic workloads were moved to virtual infrastructures to gain efficiencies of operation and density, those applications are now being rewritten as cloud-native applications made up of microservices and then containerized. The same trajectory has happened in the networking space, albeit at a much, much slower pace.
But what used to be the old, staid portion of the IT world -- the plumbing holding everything together that hasn't been updated since the dawn of the TCP/IP era -- is finally changing, and it is changing at a frenetic pace. In some ways, it seems likely that we'll see the adoption of container networking increase -- at least among certain segments of the networking industry -- and at a rate dramatically outpacing the mainstream use cases we read so much about today.