In part 1 of this two-part series on improving networking for virtualization, we look at virtual network switch problems, including limited traffic visibility and management. In part 2, we examine improvements in virtualization networking through technologies such as distributed virtual switching, edge virtual bridging, and I/O virtualization.
With the proliferation of server virtualization, a new network access layer has emerged, composed of the virtual network switches embedded within hypervisor software that provide connectivity among the virtual machines (VMs) residing on the same physical server. The traditional virtual switch (vSwitch) may provide a simple solution for VM-to-VM connectivity, but it has a number of drawbacks, ranging from limited traffic visibility to poor manageability. Networking teams are therefore seeking virtualization networking technologies that address these issues, and the first step toward a solution is understanding the range of problems.
Virtual network switch problems
Poor management scalability in virtual network switches (vSwitches): Each virtualized server includes at least one software-based virtual network switch, and each of these switches must be configured and managed manually as a separate entity, adding management complexity. Management is further complicated because the server team usually administers this new access layer of virtual network switches, while the rest of the data center network remains the networking team's responsibility. The combination of dual access layers and split responsibilities increases the complexity of the virtualized data center network and reduces management efficiency. These effects become dramatically more evident as the number of virtualized servers grows.
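To make the scaling problem concrete, here is a minimal sketch (host names, VLAN, and settings are all illustrative, not any vendor's API) showing that per-host configuration effort grows linearly with server count:

```python
# Hypothetical sketch: each virtualized server hosts its own vSwitch,
# and each switch must be configured as a separate entity.
hosts = [f"esx-host-{n:02d}" for n in range(1, 51)]  # 50 virtualized servers

def configure_vswitch(host, vlan, mtu=1500):
    """Stand-in for a manual, per-host configuration step."""
    return {"host": host, "vlan": vlan, "mtu": mtu}

# One configuration task per host -- 50 servers means 50 separate
# switch configurations to create, audit, and keep in sync.
configs = [configure_vswitch(h, vlan=100) for h in hosts]
print(len(configs))  # prints 50: one vSwitch config per physical server
```

Every new virtualized server adds one more independently managed switch, which is why centralized approaches such as distributed virtual switching (covered in part 2) are attractive.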
Limited traffic visibility in virtual network switches: Traditional vSwitches generally do not have the same extensive traffic monitoring features as the physical access switches that interconnect physical servers. Therefore, monitoring performance and troubleshooting VM-to-VM traffic within a given server can be a problem because of a lack of flow analysis, advanced statistics, and remote diagnostics capabilities. Even where some of these capabilities are supported, integrating this information with external management tools can be problematic because this often requires communication between the hypervisor and the network management system.
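The blind spot can be sketched with hypothetical flow records: traffic between two VMs on the same host never leaves the server, so a monitoring tool fed only by physical access switches misses it entirely.

```python
# Hypothetical flow records: (src, dst, host_of_src, host_of_dst).
flows = [
    ("vm-a", "vm-b", "host-1", "host-1"),  # intra-host: stays in the vSwitch
    ("vm-a", "vm-c", "host-1", "host-2"),  # inter-host: crosses the physical switch
]

# A collector fed only by physical access switches sees just the flows
# that actually leave the server.
visible = [f for f in flows if f[2] != f[3]]
print(len(visible), "of", len(flows), "flows visible")  # prints: 1 of 2 flows visible
```

In this toy example, half the traffic is invisible to the external management tools; in practice the invisible share depends on how much VM-to-VM traffic stays within each host.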
Inconsistent network policy enforcement: Traditional vSwitches can lack some of the advanced features required to provide the degree of traffic control and isolation needed in the data center -- e.g., private VLANs, quality of service (QoS) and extensive access control lists (ACLs). Even when vSwitches support some of these features, they must be configured manually through the virtual server management application and may not be fully compatible with the equivalent features on physical access switches. The result is difficulty in implementing consistent end-to-end policies.
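A hedged sketch of why split configuration leads to policy drift: if vSwitch ACLs are set through the virtual server management tool and physical-switch ACLs through the network team's tooling, nothing forces the two to agree. The rule entries below are illustrative only; real vSwitch and physical-switch ACL formats differ, which is itself part of the problem.

```python
# Illustrative ACL entries as (prefix, action) pairs.
vswitch_acls = {("10.0.1.0/24", "deny"), ("10.0.2.0/24", "permit")}
physical_acls = {("10.0.1.0/24", "deny"), ("10.0.3.0/24", "permit")}

# End-to-end policy is only consistent where both layers agree;
# the symmetric difference exposes the mismatched rules.
drift = vswitch_acls ^ physical_acls
for rule in sorted(drift):
    print("policy drift:", rule)
```

Detecting this drift requires correlating two separately managed configurations, which is exactly the kind of cross-team audit that rarely happens in practice.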
Manual vSwitch and network reconfiguration to support VM migration: When VMs are migrated among virtual servers, manual reconfiguration of the network (including QoS settings, ACLs, and firewall settings) is too time-consuming to allow dynamic resource allocation in response to fluctuating demand across applications. The benefits of VM migration are greatly enhanced if the VM's network configuration state can be transferred automatically to the new physical server.
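The idea of network state following the VM can be sketched as a port profile bound to the VM rather than to a physical switch port (the profile structure and helper below are hypothetical; distributed virtual switches, covered in part 2, implement this automatically):

```python
# Hypothetical port profile bound to a VM rather than to a switch port.
port_profile = {
    "vm": "web-01",
    "vlan": 200,
    "qos_limit_mbps": 500,
    "acl": ["deny 10.0.9.0/24"],
}

def migrate(vm_profile, src_host, dst_host, host_ports):
    """Move the VM and re-apply its profile on the destination host."""
    host_ports[src_host].pop(vm_profile["vm"], None)
    host_ports[dst_host][vm_profile["vm"]] = vm_profile  # state follows the VM

host_ports = {"host-a": {"web-01": port_profile}, "host-b": {}}
migrate(port_profile, "host-a", "host-b", host_ports)
print("web-01" in host_ports["host-b"])  # prints True: no manual reconfiguration
```

Because the VLAN, QoS, and ACL settings travel with the VM, no administrator has to touch the destination server's switch configuration during the move.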
Limited I/O bandwidth: The more VMs per server, the higher the traffic load and the greater the number of CPU cycles required to move traffic through a software-based virtual switch.
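A rough back-of-envelope illustration of this scaling (all numbers are assumed for illustration, not measured):

```python
# Assumed figures for illustration only.
vms_per_server = 20
avg_gbps_per_vm = 0.5      # assumed average traffic each VM generates
cycles_per_gbps = 1.0e9    # assumed CPU cost of software switching

total_gbps = vms_per_server * avg_gbps_per_vm
cpu_cycles_per_sec = total_gbps * cycles_per_gbps
print(total_gbps)          # 10.0 Gbps through the software vSwitch
print(cpu_cycles_per_sec)  # cycles per second spent switching, not running VMs
```

Doubling VM density doubles both the traffic through the software switch and the CPU cycles diverted from the VMs' own workloads, which motivates the I/O virtualization techniques discussed in part 2.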
In part 2 of this series on networking for virtualization, we look at improvements in virtualization networking through technologies such as distributed virtual switching, edge virtual bridging, and I/O virtualization.