Overlay networks: Understanding the basics, making it a reality
The network must virtualize, and overlay networks may be the best path available. The demand for network virtualization is driven by the cloud provider community's quest for a new way to manage, orchestrate and automate networks. Traditional networks just can't keep pace with the cloud's requirements for agility, flexibility and manageability.
In an effort to evolve, the networking industry is virtualizing networks to give them properties similar to those of server virtualization: networks decoupled from the underlying hardware, with the flexibility and rapid provisioning speed of virtual machines.
One way to decouple networks is to create a virtual network abstraction. Just as server virtualization provides a virtual machine abstraction over x86 hardware, network virtualization can provide a virtual network abstraction with similar properties and operational simplicity.
How can you create virtual network abstractions? This is where overlay networks come into play.
Role of Overlays in Network Virtualization
An overlay is essentially a software construct that lives around the edges of a physical network. Typically this overlay consists of virtual switches that reside on the virtualized servers connected to the edges of a data center network. The overlay network relies on a network control plane to handle virtual switching on the server hosts, much like a physical network does. Depending on the vendor, these control planes can use traditional network protocols, or they can rely on a software-defined networking (SDN) controller.
With overlay networks, operators can decouple virtual networks from the physical infrastructure by introducing a new addressing layer.
“If you use overlays to do network virtualization, when a virtual machine (VM) sends a packet, this packet lives in an address space that’s totally virtual,” explained Martin Casado, Nicira co-founder and now VMware’s chief architect for networking. “But the overlay adds a header to the outside of the packet, and that’s in the physical world. So if you look at the packet on the wire, it has a virtual address space on the inside and the physical address space on the outside.”
This enables virtual networks to have different service properties than the physical networks beneath them. “Using a very simple L3 fabric, I can build a complex virtual network with L2, L3 and access control lists (ACLs). And this, in turn, makes it possible to use simple-to-manage physical hardware to reimplement much of networking in software at the edge,” said Casado.
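Casado's inner/outer split can be sketched in a few lines. The sketch below builds a VXLAN-style encapsulation (RFC 7348): the inner frame carries the tenant's virtual addresses, and an 8-byte VXLAN header with a 24-bit virtual network identifier (VNI) is prepended to it. A real deployment would also wrap the result in outer UDP, IP and Ethernet headers belonging to the physical network, omitted here for brevity; the inner frame bytes are illustrative only.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag per RFC 7348: the VNI field is valid


def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame.

    Header layout (RFC 7348): 1 byte flags, 3 bytes reserved,
    3 bytes VNI (24-bit virtual network identifier), 1 byte reserved.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack(
        "!B3s3sB", VXLAN_FLAG_VNI_VALID, b"\x00\x00\x00",
        vni.to_bytes(3, "big"), 0,
    )
    return header + inner_frame


def vxlan_decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    if not packet[0] & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VNI-valid flag not set")
    vni = int.from_bytes(packet[4:7], "big")
    return vni, packet[8:]


# Toy inner frame: dst MAC, src MAC, EtherType -- addresses that exist
# only in the tenant's virtual address space.
inner = bytes.fromhex("0200000000010200000000020800") + b"payload"
wire = vxlan_encapsulate(inner, vni=5000)
vni, recovered = vxlan_decapsulate(wire)
```

On the wire, only the outer headers are visible to the physical fabric; the fabric forwards on physical addresses and never needs to understand the tenant's address space.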
Overlay networks aren’t new. Wireless local area networks (LANs) have long existed as overlays on campus networks. And virtual private networks (VPNs) establish overlays on wide area networks (WANs).
“The new part is bringing the overlay to the entire network and into the data center network—at scale and without adding complexity to the overall deployment,” said Andrew Harding, senior director of product marketing at Big Switch Networks. “This delivers not only dramatic cost effectiveness, but also dramatic improvement in managing, deploying and maintaining a data center network.”
For overlays to be successful, engineers need to focus on the big picture. The advent of tunneling protocols like VXLAN, NVGRE and STT has led many people to focus too heavily on protocols rather than architecture.
“Tunneling protocols are just mechanisms, but providing the overlay and the overall virtualization are the important parts of the story,” according to Brad Casemore, IDC research director of Datacenter Networks. “In the long run, the industry will support whichever tunneling protocol makes the most sense—possibly even all of them. The bigger story is what overlays are capable of doing and how this supports network virtualization.”
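One reason the protocol choice matters less than the architecture is that these encapsulations all solve the same scaling problem in the same basic way: a traditional 802.1Q VLAN tag carries a 12-bit ID, while VXLAN's VNI (and NVGRE's virtual subnet ID) is 24 bits wide. The arithmetic is simple:

```python
# Segment capacity of traditional VLANs vs. overlay tunnel identifiers.
VLAN_ID_BITS = 12     # 802.1Q VLAN tag
VXLAN_VNI_BITS = 24   # VXLAN Network Identifier (RFC 7348)

vlan_segments = 2 ** VLAN_ID_BITS    # 4,096 network segments
vxlan_segments = 2 ** VXLAN_VNI_BITS # 16,777,216 network segments
```

Whichever tunneling protocol wins, the order-of-magnitude jump in isolated segments is what makes multi-tenant network virtualization practical at cloud scale.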
A Look at the Main Overlay Approaches
Vendors including VMware, Big Switch, Midokura, IBM and Cisco are all developing overlay network technologies. Here’s a look at the vendors whose overlay products have been on the market longest: VMware-Nicira, Big Switch, and Cisco.
SDN vendors are offering controller-based network overlays, in which a controller tells virtual switches what to do via tunneling protocols. Cisco and some others are using a more old-school approach with a virtual switch—the Nexus 1000v—that operates like one of its physical switches and replaces the native virtual switches embedded in software from VMware.
“One of the most significant differences in approaches is the degree to which it’s considered a software-only solution or is a solution that involves a hardware element,” noted Casemore.
VMware’s Nicira “Network Virtualization Platform”
Last year, VMware acquired Nicira and its Network Virtualization Platform (NVP) software solution, which can create an intelligent abstraction layer between virtualized hosts and an existing physical network. NVP is managed by a distributed controller system.
“Just like VMware created virtual machines, our focus now is on creating virtual networks that are fairly complete—with L2, L3 and ACLs—and work just like physical networks, so you can have tens of thousands of isolated virtual networks at scale,” said Casado.
NVP reduces provisioning time, one of the most immediate problems in virtualized data centers. “Rather than taking seven days, it now takes 30 seconds to provision a network,” said Casado. “And we’re solving isolation issues and mobility issues. We’re solving immediate customer pain points, and then we’ll totally change the paradigm. Next up: new methods of debugging and security. We’ll come up with new methods of operational flexibility that we can’t even imagine today. During the next three to four years, we’ll see networking move into areas we can’t even fathom today.”
Big Switch’s “Big Virtual Switch”
Big Switch’s Big Virtual Switch is an OpenFlow-based network virtualization application that runs at the top of the company’s SDN stack, where the northbound API is located.
“Our Big Network Controller, which is based on the open source Floodlight Project, is in the middle of the stack and ties together the physical and virtual networks and makes it simple to deploy SDN. Beneath that, we interface to physical switches through OpenFlow,” said Harding.
Big Switch dynamically segments the network into tenant or user networks, through virtual network segments (VNS) that can support a spectrum of topologies and use cases within a data center—from a pure overlay, a kind of tunnel-only network, to a pure OpenFlow one with physical switches.
“A pure overlay works in environments with a legacy physical network and OpenFlow-enabled on the virtual switches only,” said Harding. “In a pure OpenFlow environment, which is likely in a new data center deployment or a build-out for a specific application, it has all the benefits of physical switches—essentially hardware acceleration of the network that can work with virtual switches. Along this virtual spectrum, we also support hybrid network virtualization, which is required to integrate physical firewalls and physical application delivery controllers.”
Cisco’s “Nexus 1000v”
Cisco has adopted an open approach toward network virtualization and its cloud strategy by providing customers with a choice of hypervisor and orchestration stacks, according to Prashant Gandhi, director of Cisco’s Data Center Group.
The Nexus 1000v is a virtual switch designed to function much like its physical counterparts in Cisco’s Nexus series of data center switches. Like those physical switches, the Nexus 1000v relies on traditional network protocols for its control plane. It also relies on the VXLAN protocol for added scalability, with the ability to connect bare-metal workloads and physical services to virtual networks through VXLAN-VLAN gateway functionality.
Cisco’s switch has a modular architecture, with a Virtual Supervisor Module (VSM) controlling the behavior of multiple Virtual Ethernet Modules (VEMs). The architecture is similar to a physical modular switch. Unlike Big Switch and Nicira, Cisco recommends a hardware element for the Nexus 1000v. While the VEMs are embedded on individual hypervisor hosts, Cisco advocates running the VSM on the Nexus 1010 Virtual Services Appliance for scalability and performance.
“Our Nexus 1000v secure multi-tenant solution supports customers using many different solutions: VMware ESX, Microsoft Hyper-V, Citrix Xen, and KVM. It also integrates with many orchestration platforms, including open source OpenStack, CloudStack, VMware vCloud Director and Microsoft’s SCVMM platforms,” said Gandhi.
Road Ahead For Network Overlays
Moving forward, exactly how the vendors differentiate themselves from each other will come into clearer focus. “Not just from a subjective standpoint, but also qualitatively in terms of what they’re offering, how they’re offering it, and how they’re positioning it. Many of the vendor strategies are in flux right now,” said Casemore.
About the author
Sally Johnson is the feature writer for TechTarget Networking Media. She writes about networking, security, data centers, cloud computing, network management and other topics for SearchNetworking.com, SearchCloudProvider.com, and SearchUnifiedCommunications.com. Prior to joining TechTarget, Johnson worked as a high-tech and science journalist and editor for more than 12 years. She holds a bachelor’s degree in geology from the University of New Mexico.