
Edge virtualization technologies solve virtual network switch woes

Virtual network switches have posed serious problems for networking pros, but network edge virtualization technologies, including distributed virtual switches, may finally make networking for virtualization manageable.

When building networks for server virtualization, networking pros must battle a host of virtual network switch problems. In this article, we look at solutions to these problems using network edge virtualization technologies, including distributed virtual switching, edge virtual bridging and I/O virtualization.

While server virtualization is seen as the domain of systems and data center teams, networking professionals must manage a layer of virtual network switches (vSwitches) that are built into hypervisors and pose a host of serious management problems. Engineers may find solutions to those problems in a series of network edge virtualization technologies that include distributed virtual switches, virtual bridging and I/O virtualization. The goal of all of these virtualization networking technologies is to provide better manageability, policy enforcement, security and scalability in virtualization.

Network edge virtualization: Solutions to vSwitch problems

There are recent and ongoing developments that address the shortcomings of the traditional vSwitch by providing improved network support for server virtualization. Most of the efforts to improve networking for virtualization involve various approaches to network edge virtualization. These technologies include:


Distributed Virtual Switching (DVS): With DVS, the control plane and the data plane of the virtual switch are decoupled. This allows the data planes of multiple virtual switches to be controlled by an external centralized management system that implements the control plane functionality. The decoupling also makes it easier to tightly integrate the vSwitch control plane with the control planes of physical access switches and/or the virtual server management system. DVS functionality can potentially address all of the shortcomings of the traditional vSwitches, except the issue of CPU constraints on I/O bandwidth.
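The control/data-plane split can be pictured with a short sketch. This is purely illustrative Python (the class and method names are invented, not any vendor's API): a single control plane owns port-group policy, and the per-hypervisor data planes hold only the policy pushed to them.

```python
# Illustrative sketch of DVS control/data-plane decoupling.
# All names here are hypothetical; no real vendor API is shown.

class DVSControlPlane:
    """Centralized controller: owns port-group policy for all member hosts."""
    def __init__(self):
        self.port_groups = {}   # port-group name -> policy dict
        self.hosts = []         # registered per-host data planes

    def register(self, host):
        self.hosts.append(host)
        for name, policy in self.port_groups.items():
            host.apply(name, policy)

    def define_port_group(self, name, vlan):
        self.port_groups[name] = {"vlan": vlan}
        # Policy defined once is pushed to every host, keeping all
        # member vSwitches consistent without per-host configuration.
        for host in self.hosts:
            host.apply(name, self.port_groups[name])

class HostDataPlane:
    """Per-hypervisor data plane: forwards frames, holds no policy of its own."""
    def __init__(self, hostname):
        self.hostname = hostname
        self.policy = {}

    def apply(self, port_group, policy):
        self.policy[port_group] = policy

# One controller, many hosts: defining a port group once configures them all.
ctrl = DVSControlPlane()
h1, h2 = HostDataPlane("esx-01"), HostDataPlane("esx-02")
ctrl.register(h1)
ctrl.register(h2)
ctrl.define_port_group("prod-web", vlan=100)
assert h1.policy == h2.policy
```

The point of the sketch is the management-scalability win: adding a host or a port group is a single operation against the control plane, not a per-switch change.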

With the VMware vNetwork Distributed Switch, the control plane is integrated with the vCenter management system. This integration allows the virtual switch to take advantage of information that isn't easily derived from simply inspecting network traffic and supports managing up to 64 virtual switches as a single distributed switch.


The Cisco approach to DVS, embodied by the Cisco Nexus 1000v switch, offers a feature set layered on the VMware vNetwork API and provides consistency with other Cisco networking products. Cisco's integration with vCenter and the Nexus control plane is based on VN-Link functionality, which establishes a virtual link between the VM and the DVS that can be managed in a manner that emulates a physical link from a server to a Cisco hardware access switch. VN-Link is currently implemented in the Cisco Nexus 1000V Series DVS and will be supported by a class of Cisco server access or aggregation switches that will support network interface virtualization. This allows hypervisors to offload all switching of VM traffic to the external access switch. Offloading switching to the external network provides traffic visibility but increases utilization of the server access switch port (or ports) and may result in a small increase in latency. VN-Link functionality in these external switches depends on additional packet tagging (VN-Tag) to provide network visibility to VM interfaces and to allow application of network policies.
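The tagging idea can be illustrated with a few lines of Python. Note this is a simplified sketch only: the real VN-Tag wire format is Cisco-defined, and while 0x8926 is the registered VN-Tag EtherType, the field layout below (a pair of 16-bit virtual interface IDs) is an invented stand-in used just to show how per-VM tags give the access switch visibility into which virtual interface sent or should receive a frame.

```python
# Simplified illustration of per-VM frame tagging in the spirit of VN-Tag.
# The actual VN-Tag field layout differs; this sketch only demonstrates
# inserting a tag after the MAC addresses and recovering it at the switch.
import struct

VNTAG_ETHERTYPE = 0x8926  # registered VN-Tag EtherType

def add_vm_tag(eth_frame: bytes, src_vif: int, dst_vif: int) -> bytes:
    """Insert a tag after the dst/src MACs identifying the VM interfaces."""
    macs, rest = eth_frame[:12], eth_frame[12:]
    tag = struct.pack("!HHH", VNTAG_ETHERTYPE, dst_vif, src_vif)
    return macs + tag + rest

def strip_vm_tag(frame: bytes):
    """At the access switch: recover the VIF ids and the untagged frame."""
    ethertype, dst_vif, src_vif = struct.unpack("!HHH", frame[12:18])
    assert ethertype == VNTAG_ETHERTYPE
    return src_vif, dst_vif, frame[:12] + frame[18:]

frame = bytes(12) + b"\x08\x00" + b"payload"   # dummy Ethernet frame
tagged = add_vm_tag(frame, src_vif=7, dst_vif=42)
assert strip_vm_tag(tagged) == (7, 42, frame)
```

Because every frame carries the source virtual interface ID, the physical switch can apply per-VM policy even though many VMs share one physical uplink.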



With the vNetwork API, other implementations of virtual switches can be expected to emerge, possibly derived from the open source Open vSwitch project. Open vSwitch is already compatible with a number of virtualization environments, including Xen, XenServer, KVM and QEMU. It should be noted that there is no standards work being done in the realm of the DVS, but the Open Virtualization Format (OVF) is a proposed metadata standard for describing the properties of a VM, with possible application in facilitating transfer of network state during VM migration.


Edge Virtual Bridging (EVB): The IEEE 802.1 Work Group is creating a standard called Edge Virtual Bridging (EVB). The EVB standards (IEEE 802.1Qbg) work is based on a technology known as Virtual Ethernet Port Aggregator (VEPA). Using VEPA, all traffic from VMs is forwarded to the adjacent physical access switch and directed back to the same physical server if the destination VM is co-resident on the same server. In the latter case, the traffic does a 180-degree turn. VEPA therefore gives the network access switch visibility into, and policy control over, all the virtual machine traffic flows.
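The hairpin behavior can be sketched in a few lines. This is a conceptual model with hypothetical names, not an implementation of the 802.1Qbg wire protocol: the server-side adapter does no local VM-to-VM switching at all, and the physical switch's "reflective relay" sends a frame back out the port it arrived on when the destination VM is co-resident.

```python
# Conceptual sketch of VEPA forwarding; all class and field names are invented.

class VepaAdapter:
    """Server-side VEPA: forwards ALL VM frames to the adjacent switch."""
    def __init__(self, switch, port):
        self.switch, self.port = switch, port

    def send(self, frame):
        # Even traffic between co-resident VMs leaves the server first.
        return self.switch.forward(frame, ingress_port=self.port)

class AccessSwitch:
    """Physical access switch: learns MACs, sees and logs every VM flow."""
    def __init__(self):
        self.mac_table = {}   # src MAC -> port (learned)
        self.log = []         # every VM flow is now visible to the network

    def forward(self, frame, ingress_port):
        self.mac_table[frame["src"]] = ingress_port
        self.log.append((frame["src"], frame["dst"]))
        egress = self.mac_table.get(frame["dst"])
        if egress == ingress_port:
            # Reflective relay: 180-degree turn back to the same server.
            return ("hairpin", ingress_port)
        return ("forward", egress)

sw = AccessSwitch()
uplink = VepaAdapter(sw, port=1)
uplink.send({"src": "vm-a", "dst": "vm-b"})           # learns vm-a on port 1
uplink.send({"src": "vm-b", "dst": "vm-a"})           # learns vm-b on port 1
result = uplink.send({"src": "vm-a", "dst": "vm-b"})  # both on port 1
assert result == ("hairpin", 1)
```

The trade-off mirrors the article's point: the switch now sees and can police every flow, at the cost of sending co-resident VM traffic across the server's uplink twice.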


EVB/VEPA will provide a standards-based solution that eliminates the need for software-based switching within hypervisors (this is a standards approach akin to Cisco VN-Link performed by an external physical switch, although the implementation details may differ significantly). While the IEEE EVB effort effectively addresses the issues of management scalability, traffic visibility and consistent policy enforcement, it is not yet certain that it will cover integration with virtual server management systems or address the issue of automated transfers of network state in conjunction with VM migration. EVB has broad vendor support but is still at an early stage in the standardization process, so it is reasonable to expect the pre-standard version of EVB to predominate for the next 12 to 18 months.


Single Root I/O Virtualization (SR-IOV): In the near future, hardware NICs will be available that use SR-IOV technology to move software-based virtual switch functionality into PCI NIC hardware. While hardware-based SR-IOV NICs will improve I/O performance, they will not in themselves address the other challenges, such as management scalability and limited management visibility into network traffic flows. However, SR-IOV NICs will also emerge to give hardware support for edge networking technologies (such as FCoE, iSCSI, CEE/DCB, RDMA) and may also evolve to provide hardware support for some of the virtual switching models described above.
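On Linux, SR-IOV virtual functions are exposed through a standard sysfs interface (sriov_totalvfs and sriov_numvfs files under the device's PCI directory). The sketch below shows that interface in use; the PCI address in the comment is a placeholder, and on a real host this requires root privileges and an SR-IOV-capable NIC.

```python
# Sketch of enabling SR-IOV virtual functions via the Linux sysfs interface.
# Requires root and SR-IOV-capable hardware on a real system.
from pathlib import Path

def enable_vfs(pci_dev_dir: str, wanted: int) -> int:
    """Create up to `wanted` VFs, capped at what the NIC supports."""
    dev = Path(pci_dev_dir)
    total = int((dev / "sriov_totalvfs").read_text())
    numvfs = min(wanted, total)
    # The kernel requires resetting the VF count to 0 before changing
    # an already-nonzero value.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(numvfs))
    return numvfs

# Example (placeholder PCI address):
# enable_vfs("/sys/bus/pci/devices/0000:3b:00.0", wanted=8)
```

Each virtual function then appears to the hypervisor as an independent PCI device that can be handed directly to a VM, which is how SR-IOV removes the CPU from the per-packet switching path.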

Evolving the data center LAN with network edge virtualization

Until recently, there were few changes occurring in the data center LAN. Server virtualization has changed all of that, and today there is a great deal of change, particularly at the edge of virtualized data center networks. In order to be successful, IT organizations need to develop a plan for the way in which their data center LAN will evolve to allow the organization to derive the maximum benefit from server virtualization. As part of developing this plan, IT managers need to carefully consider how rapidly evolving proprietary and industry standard network virtualization technologies can deliver the improvements that are needed to give the virtual data center enhanced scalability, end-to-end manageability, security, and responsiveness to changes in user demand.


This was last published in June 2010
