Published: 25 Mar 2013
Now that virtualization has taken hold in the data center, engineers have pushed the network into the virtual stack in order to route virtual machine (VM) traffic. But as virtual networks proliferate, network and server pros are forced to find ways to better integrate virtual and physical infrastructures.
This integration is essential to the orchestration and automation of VM provisioning and migration. Virtual networks route traffic between VMs in the stack, but it takes physical networks to connect these virtual environments to the outside world and to interconnect data centers.
If the promise of automation and orchestration is the fluid provisioning and migration of VMs, virtual and physical networks have to be just as flexible, and manual network configuration for VMs won’t remain an option. What’s more, engineers must be able to move VMs across both virtual and physical networks with their security and management policies intact. All of this requires communication between physical and virtual networks.
Many Virtual Switching Strategies Emerge
Both VMware and Microsoft have virtual switches built into their hypervisors, the vSphere Virtual Distributed Switch and the Hyper-V Virtual Switch, which provide visibility and make forwarding decisions.
Until recently, before these switches matured, the virtualization team had to ask the networking team to create VLANs with Quality of Service (QoS) policies and to allot bandwidth for new VMs, according to Justin Giardina, chief technology officer of Iland, a cloud provider and VMware customer. Once the network team had provisioned those resources, it could not share administration of the networks with the virtualization team.
“One of the best things to come out of VMware’s technology for the distributed virtual switch is the ability to pass down administration capabilities to the virtualization engineer while keeping the physical network visible to the networking team as well,” said Giardina.
But VMware’s approach to switching left network pros without the ability to apply their networking skills to the virtual network. To address this, Cisco launched the Nexus 1000v, which provides visibility into the stack, but also more networking control. The 1000v replaces switching in VMware or Microsoft’s hypervisors and extends traffic and security policy across virtual networks and VM paths. It also enables deep network monitoring and analysis within the virtual environment, with features like Switch Port Analyzer (SPAN), Encapsulated Remote SPAN (ERSPAN), NetFlow, packet capture/analysis, and DHCP/IGMPv3 snooping.
Arista Networks took a different approach to expanding networking capabilities in the virtual environment, integrating its EOS operating system with VMware’s vSphere environment and thereby extending its own network programmability features into the virtual network.
SDN and Overlays for Physical and Virtual Network Bridging
Part of the goal of orchestration and automation is to enable cloud networks with automated provisioning of multiple distinct virtual network segments. The idea of these multi-tenant networks is to be able to turn up network segments on demand to support VM provisioning and migration.
Many enterprises are looking to use software-defined networking (SDN) controllers combined with distributed virtual switches to provision network segments or tunnels and to communicate back to the underlying physical network.
These network software overlays are used to move traffic between virtual machines, as well as to reach across Layer 2 or Layer 3 physical networks in order to connect servers and interconnect data centers. VMware relies on the VXLAN standard to build these overlays, while Microsoft uses NVGRE. To integrate the virtual edge, some vendors have enabled these controllers to communicate back to a Layer 2 switch outside the virtual switching infrastructure that is used to direct traffic.
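To make the overlay idea concrete, the sketch below packs and parses the 8-byte VXLAN header defined in RFC 7348, in which a 24-bit VXLAN Network Identifier (VNI) keeps each tenant’s segment isolated while the encapsulated frame crosses the physical Layer 3 network inside an ordinary UDP datagram. This is an illustrative sketch only; the function names are not part of any vendor’s API.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the 24-bit VNI field is valid

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348) for a 24-bit VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Flags (1 byte) + 3 reserved bytes, then VNI (3 bytes) + 1 reserved byte.
    return struct.pack("!B3s3sB", VXLAN_FLAG_VNI_VALID, b"\x00" * 3,
                       vni.to_bytes(3, "big"), 0)

def unpack_vni(header: bytes) -> int:
    """Recover the tenant's VNI from a received VXLAN header."""
    flags, _, vni_bytes, _ = struct.unpack("!B3s3sB", header[:8])
    if not flags & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VNI-valid flag not set")
    return int.from_bytes(vni_bytes, "big")

# Each tenant segment gets its own VNI, so turning up a new segment on
# demand is a matter of allocating a fresh identifier.
hdr = pack_vxlan_header(5001)
```

Because the VNI field is 24 bits wide, a VXLAN overlay can address roughly 16 million tenant segments, compared with the 4,094 usable IDs of a traditional VLAN.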
Open vSwitch, which has gained more traction than any virtual switch apart from VMware’s, has led the way in combining virtual switching with a centralized controller to provision and manage overlays, as well as to more tightly integrate virtual and physical networks.
Open vSwitch works with a centralized OpenFlow-based controller to manage distributed virtual switches as one logical switch. Using the controller, the technology has a full view of every component and node on the virtual network and can direct individual data flows along with linked network services. The switch and controller software can apply cluster-level network configurations across many servers, eliminating the need to configure the network separately for each VM and physical machine. The switch also supports VLAN trunking and provides visibility via NetFlow, sFlow and RSPAN.
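As a rough configuration sketch of the pattern described above, the commands below create an Open vSwitch bridge, hand its forwarding decisions to a central OpenFlow controller, and turn on NetFlow export for visibility. The addresses, bridge and port names are illustrative assumptions, not details from the article.

```shell
# Create a bridge and point it at a central OpenFlow controller
# (192.0.2.10 is a documentation address used here as a placeholder).
ovs-vsctl add-br br0
ovs-vsctl set-controller br0 tcp:192.0.2.10:6633

# Attach a VM's virtual interface to the bridge.
ovs-vsctl add-port br0 vnet0

# Export NetFlow records to a collector for traffic visibility.
ovs-vsctl -- set Bridge br0 netflow=@nf -- \
    --id=@nf create NetFlow targets=\"192.0.2.20:2055\"
```

Because the controller programs every bridge it manages, the same commands run on each hypervisor host yield one logically centralized switch rather than dozens of independently configured ones.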
The technology, which supports XenServer, VirtualBox and KVM environments, was largely initiated by Nicira Networks, which has since been acquired by VMware. VMware maintains that it will continue Nicira’s support of Open vSwitch.
IBM, Big Switch and NEC have also launched virtual switching technology that uses SDN with centralized controllers to gain a broader view of both physical and virtual resources, as well as to provision network segments on demand. In these strategies, an OpenFlow controller manages flows within the overlay network, but also communicates out to the physical network.
IBM’s own virtual network overlay strategy centers on its Distributed Virtual Switch 5000v, which is deployed on VMware hypervisor hosts and creates tunnels between endpoints across the underlying network infrastructure.
NEC’s ProgrammableFlow 1000 vswitch, which works in a Microsoft environment, also combines an OpenFlow controller with virtual switches. Together, they map all of the VMs and enable network provisioning for migration, ensuring that QoS and ACL policies can be applied throughout.
Similarly, Big Switch’s Big Virtual Switch works with the Big Network Controller to gain a view of the entire virtual and physical network and to provision network segments on demand, applying and managing forwarding policy across virtual and physical environments.
The Big Virtual Switch integrates or communicates at the virtual edge with any physical switch from one of Big Switch’s vendor partners, allowing for policy to stretch across physical and virtual networks.
Big Switch is promoting its Big Virtual Switch as a solution that integrates the virtual edge without undoing the physical network beneath. “Some of the more siloed solutions that are focused on network virtualization only, rather than SDN, leave you an environment where the work of building the virtual networks can undo the network engineering underneath,” said Dan Hersey, a network virtualization product manager at Big Switch.
Overlay strategies in which the controller doesn’t talk to the physical network can lead to network conflicts, along with complexities in debugging and troubleshooting, he said. These overlay networks require software gateways and processing servers that cannot be configured without duplicating the underlying physical network control plane configuration. This leads to increased costs and troubleshooting complexity, Hersey said.
About the author
David Geer writes about security and enterprise technology for international trade and business publications.