It's nearly impossible to go to any technology conference and not hear the words Docker containers at least once. Containers were an old and decidedly niche technology until Docker emerged with a new use case and changed the game, helping usher in a new era of DevOps by enabling developers to rapidly package and deploy applications.
Docker's open source project isn't the only container technology, but it is helping to catalyze the overall container movement as different vendors and end-user organizations all try to benefit from the advantages of containers.
And while adoption of container technology to date has largely been driven by developers, the popularity and production-scale deployments of containers are driving increasingly sophisticated networking needs. Containers also offer myriad opportunities for network engineers to help simplify complex networking challenges.
Eric Hanselman, a chief analyst at 451 Research, has been looking at containers and how they contrast with other forms of virtualization, most notably virtual machines (VMs).
"Container networking functionality is far simpler than what's available in more mature virtual environments," Hanselman says. "While that may be helpful in working at containers' greater scale, it means that more work has to be done to layer any sophistication into the environment."
In bare-metal and physical environments, container networking is similar to VMware networking -- with two deviations, according to Brandon Philips, CTO of CoreOS Inc., a container platform vendor. First, explains Philips, containers come and go more rapidly than VMs, so the network needs to be designed around this reality. Second, people are going to run many more containers than they would VMs, so the amount of address space that a container environment may consume is likely to be larger.
With a VM, an operating system is typically deployed as part of the VM application image, whereas the container model relies on the host system for operating system features. By not needing its own operating system as part of an application image, a container application image can be smaller; meanwhile, the corresponding density of a container deployment can be higher than an environment based on VMs. It is worth noting that containers and VMs are not mutually exclusive technologies, either. In fact, it's often a recommended best practice that containers run inside of a VM.
Container networks: Gaps and possibilities
Within the container, an IP interface is the only item of concern to the developer, Hanselman explains. But to add resilience or multi-tenancy to containers, particularly Docker's, much more than the native networking functionality is needed.
"Automating the networking configurations of large numbers of containers is critical to tackling more complicated application architectures," Hanselman says.
It's an area where more work is needed, which has opened up an opportunity for third parties to fill the gap with open source projects for container networking, such as Weaveworks' Weave Net and CoreOS' Flannel.
Docker's core networking technology, libnetwork, was first announced alongside the Docker 1.7 release last June and was integrated into the mainline project as a stable technology with the Docker 1.9 release last November.
Chen Chun, a software engineer at Tencent -- an online gaming, social networking and entertainment giant based in Shenzhen, China -- is both a user of and a contributor to Docker. In particular, Chun has contributed multiple code commits to libnetwork. Chun explains that Tencent, a company best known for its QQ instant messaging app and WeChat mobile chat service, uses the overlay network provided by libnetwork to deliver connectivity to multi-host apps. Additionally, Tencent is using Docker's built-in bridge networking capabilities.
"A bridge network allows us to allocate an unused host-port-mapping into a container's port," Chun says. "That helps us to launch multiple online job instances on the same host."
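The host-port mapping Chun describes can be sketched with Docker's own port-publishing flag. In this hedged example, the image and container names are placeholders, and a running Docker daemon is assumed; publishing a container port without naming a host port asks Docker to pick an unused one, which is what lets multiple instances of the same job share a host:

```shell
# Publish container port 80 to an unused host port chosen by Docker
docker run -d --name web1 -p 80 nginx
docker run -d --name web2 -p 80 nginx   # second instance, no port conflict

# Inspect which host port each instance received
docker port web1 80   # e.g. 0.0.0.0:32768
docker port web2 80   # e.g. 0.0.0.0:32769
```

Because the host port is allocated rather than fixed, the scheduling layer only needs to ask Docker which port a given instance landed on.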
Bridging, however, is not without limitations. Chun says that bridged networks solve port-conflict problems and provide network isolation to containers running on one host. That said, with bridging alone, the container's IP address is not visible from outside of the host.
"This brings inconvenience, as many Web server programs register IP addresses and ports into a service discovery layer," Chun says. "So we developed a static IP Docker network plugin to give each container an IP address, which is routable."
Thanks to libnetwork, which is what made the network plugin possible, Tencent can ping its Docker containers and open Secure Shell (SSH) sessions into them at their static IP addresses from any internal machine. The network plugin offers a scalable approach to customized container networking.
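Tencent's plugin is custom code, but the nearest built-in equivalent gives a sense of the mechanics. This sketch assumes Docker 1.10 or later and a running daemon; the subnet, network name and image are examples only, and making the address reachable from other machines would still require routing on the surrounding network:

```shell
# Create a user-defined network with a known subnet
docker network create --subnet 172.25.0.0/16 staticnet

# Pin a container to a fixed address on that network
docker run -d --name app1 --net staticnet --ip 172.25.0.10 nginx

# From the host, the container answers at its fixed, predictable address
ping -c 1 172.25.0.10
```

A fixed, predictable address is what allows the container to register itself in a service discovery layer, as Chun describes.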
There is another key networking challenge that Docker libnetwork is able to help Tencent address to simplify its operations: limited IP address resources when using a static IP network.
"The good news is the overlay network brings us unlimited IPs," Chun says.
The IT team at Tencent realized that a number of applications can work in an isolated network without external connectivity, he adds. For example, some test apps only need a connection to a shared database. Overlay networking is a perfect fit for these scenarios: an isolated overlay network offers a private IP range to its connected endpoints.
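The isolated-overlay scenario can be sketched as follows. This is a hedged example: the network, container and image names are placeholders, it assumes a Docker daemon with the multi-host prerequisites (a key-value store or Swarm setup) already in place, and the --internal flag requires Docker 1.10 or later:

```shell
# An overlay network with no external connectivity
docker network create -d overlay --internal testnet

# A shared database and a test app, reachable only from each other
docker run -d --name db --net testnet postgres
docker run -d --name testapp --net testnet myorg/testapp
```

Containers on testnet can resolve and reach each other by name within the overlay's private IP range, but have no route to the outside world.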
The role of micro-services
A term that often comes up when discussing containers is micro-services. The basic idea behind micro-services is that instead of having a monolithic application stack, each specific service in an application delivery chain is broken out into individual parts.
"When using containers, people are making a conscious effort to break their infrastructure down into more understandable units," says CoreOS' Philips. "I think that opens an opportunity for networking technologies to make decisions on behalf of the user that they couldn't make before when we were in a [virtual] machine-focused world."
One potential use case that Philips sees revolves around Kubernetes, an open source container-orchestration technology built by Google but now managed by the Cloud Native Computing Foundation (CNCF). The foundation itself operates as a Linux Foundation Collaborative project. The CNCF enjoys the support of many of the leading names in IT, including AT&T, Cisco, eBay, Fujitsu, Google, Huawei, IBM, Intel, Twitter and VMware.
If an administrator dedicates 500 Mbps to a particular application running on Kubernetes, Philips explains, then the networking control plane can be involved in the scheduling of this application to find the best place to guarantee that bandwidth. Or, by working with the Kubernetes API, a network control plane can start making ingress firewall rules that are aware of the container applications. These are all near-term scenarios that can help make container networking easier to deploy and use, Philips contends.
Hopes for shared networking interface
A lot of organizations are using network address translation (NAT) with overlays on the backend to give containers public IP addresses, Philips says.
"Over time, as people move more of their stacks over to containers, they will want to move to a NAT-free solution for simplicity, visibility and speed," he says.
Among the NAT-free approaches is a standard container networking interface (CNI), which is set to become part of the CNCF.
"CNI is about enabling network vendors to integrate their control plane into the container ecosystem," Philips says. "It is minimal and was built alongside a number of network vendor engineers."
Philips' hope is that having a shared interface will encourage an ecosystem of networking hardware and applications to integrate with Kubernetes, Rocket/rkt and any other container tool that adopts a CNI.
"For the container ecosystem to continue to be successful, we need to define and foster these shared interfaces to plug in storage and networking because enterprises will continue to want to leverage their existing systems," Philips says. "And I think the CNCF is well positioned and well organized to help in those efforts."
Among the networking options that can plug into a CNI is CoreOS' open source project Flannel, which was originally introduced to help onboard users to containers and Kubernetes. Flannel is a simple, composable component that can be integrated into existing systems, according to Philips. Starting with Docker 1.9 in November, multi-host networking is now supported in the Docker Engine itself. This means that the Docker Engine directly competes with the functionality implemented in Flannel.
The container space is evolving rapidly, and as development and deployments continue, it's likely there will be improvements in container networking. For users like Tencent's Chun, Docker container networking is already helping reduce network complexity.
Some, however, still see a set of latent complexities for networking large container deployments.
"Work around intent-based network configurations can help to tame this problem," says 451 Research's Hanselman. "We've already seen what's happening with pass-through labels in Kubernetes, and this is very promising. The challenge is aligning the different efforts that are taking place, so that we can get to at least a small number of networking models with containers."