
Server virtualization and the impact on network configuration

Server virtualization delivers sophisticated capabilities that affect performance, scaling and connectivity needs. Solutions have evolved from partitioning a single computer into multiple virtual servers to managing a collection of physical servers as a single resource pool across which virtual servers can be migrated. This flexibility to migrate workloads and create virtual servers on demand has consequences for the network that networking and data center professionals must prepare for. Server virtualization poses three major challenges for the network: an increase in the network addressing required, greater performance and throughput demands, and difficulty in managing virtual networked relationships.

The adoption of server virtualization enables organizations to reduce the number of physical servers in the data center and provide greater flexibility to meet rapidly changing business needs. However, server virtualization introduces challenges as well, and one area where this can be felt is data center network configuration. Data center network managers will need to be prepared to deal with changing media access control (MAC) addresses, new uses for virtual LANs (VLANs), and new interactions with server administrators.

Without a server virtualization solution, data center network administrators generally don't have to deal with constantly changing MAC addresses, nor do they need to track MAC addresses, because the MAC addresses are usually burned into the physical network interface cards (NICs) on the servers. Data center network administrators can therefore treat a server's MAC address as a fixed value and plan accordingly when configuring IP address reservations or when troubleshooting network connectivity.

With a server virtualization solution, however, a virtual server's MAC address can change. Fortunately, the instances when a virtual server's MAC address changes are fairly limited. In a VMware Infrastructure 3 (VI3) environment, for example, MAC addresses only change during what's called a "cold migration," when a virtual machine (VM) is shut down and moved from one location to another. In these instances, the VM will have a new MAC address when it is restarted in its new location. Any configurations that were built around the old MAC address will no longer apply. In that same VI3 environment, performing a live migration -- known as a VMotion -- would not cause a new MAC address to be generated.
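
One way to avoid surprises from a changing MAC address is to assign a VM a static MAC address manually. The short Python sketch below derives a deterministic address from a VM name; it assumes the commonly documented VMware range for manually assigned addresses (00:50:56:00:00:00 through 00:50:56:3F:FF:FF), so confirm the valid range for your own platform before relying on anything like it.

```python
# Sketch: derive a deterministic, manually assigned MAC address for a VM so that
# it survives cold migrations. Assumes the commonly documented VMware static range
# 00:50:56:00:00:00 - 00:50:56:3F:FF:FF; verify the valid range for your platform.
import hashlib

def static_mac_for_vm(vm_name: str) -> str:
    # Hash the VM name into the 22-bit space allowed after the 00:50:56 prefix.
    digest = int(hashlib.sha1(vm_name.encode()).hexdigest(), 16)
    suffix = digest % 0x400000          # 0x000000 .. 0x3FFFFF
    octets = [0x00, 0x50, 0x56,
              (suffix >> 16) & 0xFF, (suffix >> 8) & 0xFF, suffix & 0xFF]
    return ":".join(f"{o:02x}" for o in octets)

print(static_mac_for_vm("web-frontend-01"))   # always the same address for this VM name
```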

Other server virtualization solutions, such as Microsoft Hyper-V and Citrix XenServer, may have different instances during which a virtual machine's MAC address may change. The key point is that the frequency with which a VM's MAC address may change is far greater than that of a physical system.

Granted, there are not that many instances in which data center network administrators build configurations around a MAC address. Where such configurations do exist, however, data center network admins are left with the option of manually generating MAC addresses -- instead of allowing the virtualization solution to generate them dynamically -- and tracking the assignment of those MAC addresses to virtual machines. This manual process leaves room for error, and those errors most commonly manifest as MAC address collisions. MAC address collisions result in intermittent connectivity loss and can be difficult to troubleshoot. Unfortunately, there's no easy way to be "on the lookout" for MAC address collisions; erratic and intermittent connectivity problems are the only real indication.
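
If MAC addresses are being assigned and tracked manually, a simple automated check can catch duplicates before they show up as mysterious connectivity problems. The sketch below scans a hypothetical VM-to-MAC inventory for collisions; the inventory format is only an illustration of whatever record-keeping your team already uses.

```python
# Sketch: detect duplicate (colliding) MAC addresses in a manually maintained
# VM-to-MAC inventory before they cause intermittent connectivity problems.
# The inventory data here is hypothetical.
from collections import defaultdict

inventory = {
    "web-frontend-01": "00:50:56:10:aa:01",
    "web-frontend-02": "00:50:56:10:aa:02",
    "db-server-01":    "00:50:56:10:aa:01",   # accidental reuse
}

def find_mac_collisions(assignments: dict) -> dict:
    seen = defaultdict(list)
    for vm, mac in assignments.items():
        seen[mac.lower()].append(vm)
    return {mac: vms for mac, vms in seen.items() if len(vms) > 1}

for mac, vms in find_mac_collisions(inventory).items():
    print(f"Collision on {mac}: {', '.join(vms)}")
```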

Increased VLAN usage
VLANs can also be heavily influenced by the addition of server virtualization. While data center network admins are quite familiar with VLANs, they may not be quite so prepared for how extensively VLANs are used with many server virtualization products. For example, VMware ESX and VMware ESXi have extensive support for 802.1q VLAN tags. This even includes the ability to pass 802.1q VLAN tags all the way up to the virtual machines for operating systems that have support for managing VLANs.

To take advantage of this VLAN support, however, the physical switch ports must be configured to operate as 802.1q VLAN trunks. This practice runs contrary to the way many data center network admins configure their switches. In general, data center network ports are configured as access ports, carrying traffic only for a single VLAN, and these ports do not pass 802.1q VLAN tags up to servers. Without the physical switch ports configured as 802.1q VLAN trunks, matching the same level of flexibility will usually require more NICs in the servers and more Gigabit Ethernet ports on the physical switches. This may increase the initial acquisition cost and reduce the overall ROI of the server virtualization solution.
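
To make the trade-off concrete, here is a rough back-of-the-envelope comparison of port counts for a single host. The VLAN count and redundancy factor are illustrative assumptions, not sizing guidance.

```python
# Sketch: rough port-count comparison for a single virtualization host that must
# reach several VLANs, with and without 802.1q trunking.
vlans_needed = 4        # e.g. production, DMZ, management, backup
redundancy = 2          # NICs per network for failover

ports_without_trunking = vlans_needed * redundancy   # one access-port pair per VLAN
ports_with_trunking = redundancy                     # all VLANs tagged over one trunked pair

print(f"Access ports only: {ports_without_trunking} NICs/switch ports per host")
print(f"802.1q trunks:     {ports_with_trunking} NICs/switch ports per host")
```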

In addition to changing the default configuration for physical switch ports, data center network administrators will also need to grow accustomed to a closer interaction between the VLAN configuration on their switches and the physical servers running the server virtualization software. VLANs must not only be configured on the switches, they must also be allowed across the VLAN trunks and properly configured inside the server virtualization software with the correct VLAN ID. This doesn't take into account native (or untagged) VLAN settings and how those interact with the server virtualization software. An incorrect setting anywhere along the way will adversely affect network connectivity for all the workloads running on that physical server.
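
A small consistency check can help catch one class of these errors: a port group on the host referencing a VLAN ID that is not allowed across the trunk. The data in the sketch below is hypothetical; in practice it would be pulled from the switch configuration and the virtualization management tool.

```python
# Sketch: cross-check that every VLAN ID configured on the virtual switch port
# groups is also allowed on the physical trunk ports feeding the host.
trunk_allowed_vlans = {10, 20, 30}                              # allowed on the switch trunk
port_group_vlans = {"Production": 10, "DMZ": 20, "Backup": 40}  # configured on the host

missing = {name: vid for name, vid in port_group_vlans.items()
           if vid not in trunk_allowed_vlans}

for name, vid in missing.items():
    print(f"Port group '{name}' uses VLAN {vid}, which is not allowed on the trunk")
```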

This tighter interaction between the data center network administrators and the server administrators regarding VLANs and VLAN configuration is only one of several areas in which these two groups will need to work together more closely than perhaps they have in the past. Another area is link aggregation configuration.

NIC teaming and bonding
Some server virtualization solutions also offer support for various forms of NIC teaming or bonding. Supporting NIC teaming or bonding allows the virtualization solution to provide redundancy and/or more efficient utilization of multiple physical NICs. In some cases, this NIC teaming or bonding also requires configuration on the physical switch, typically to enable link aggregation support via 802.3ad/LACP (Link Aggregation Control Protocol) or a proprietary alternative like Gigabit EtherChannel from Cisco or Multi-Link Trunking from Nortel.
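
The sketch below is a deliberately simplified illustration of why the load-balancing (hashing) policy matters: a teaming or aggregation policy typically hashes fields from each flow to pick a physical NIC, and real implementations vary by vendor and configured policy. The NIC names are hypothetical.

```python
# Sketch: simplified illustration of how a teaming/aggregation policy might pick
# a physical NIC for a given flow by hashing packet fields. Shown only to explain
# why the hashing policy on the host and the switch should be chosen consistently.
import zlib

team_members = ["vmnic0", "vmnic1"]   # hypothetical physical NICs in the team

def pick_uplink(src_ip: str, dst_ip: str) -> str:
    key = f"{src_ip}-{dst_ip}".encode()
    return team_members[zlib.crc32(key) % len(team_members)]

print(pick_uplink("10.0.10.21", "10.0.20.5"))   # the same flow always maps to the same NIC
```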

This means that data center network admins and server administrators will need to share information about which NICs are plugged into which network ports so that the link aggregation can be properly configured. Fortunately, some virtualization solutions include support for technologies like Cisco Discovery Protocol (CDP) to simplify the process of matching NICs to switch ports. Making link aggregation work properly in conjunction with server virtualization may also require changes to the load balancing configuration of the physical switches.
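
As a simple illustration, the sketch below reconciles the cabling map the server team documented against the NIC-to-switch-port mapping actually observed (for example, from the CDP details displayed in the virtualization management client). Both mappings are hypothetical placeholders.

```python
# Sketch: compare the documented cabling map with the observed NIC-to-switch-port
# mapping so mismatches are found before link aggregation is enabled.
documented = {"vmnic0": "GigabitEthernet1/0/1", "vmnic1": "GigabitEthernet1/0/2"}
observed   = {"vmnic0": "GigabitEthernet1/0/1", "vmnic1": "GigabitEthernet2/0/2"}

for nic, port in documented.items():
    actual = observed.get(nic)
    if actual != port:
        print(f"{nic}: documented on {port}, observed on {actual} -- fix before enabling LACP")
```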

Finally, one of server virtualization's greatest strengths -- the ability to provision new virtual machines quickly and easily -- may also be a data center network admin's major headache. In some environments, the implementation of virtualization leads to "VM sprawl," and admins must track large numbers of IP addresses and networked devices, with the addition of new virtual servers occurring frequently. Data center network admins should be prepared for this possibility and have systems and procedures in place for handling a potentially greater demand for IP addresses.
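
A quick capacity check, such as the standard-library sketch below, can show how much headroom a subnet has before VM sprawl exhausts it. The subnet, current usage, and growth rate are illustrative assumptions.

```python
# Sketch: estimate how long a subnet can absorb new virtual machines at the
# current provisioning rate, using only the Python standard library.
import ipaddress

subnet = ipaddress.ip_network("10.0.10.0/24")
usable_hosts = subnet.num_addresses - 2      # exclude network and broadcast addresses
in_use = 180                                 # addresses already allocated
new_vms_per_month = 15                       # expected provisioning rate

months_of_headroom = (usable_hosts - in_use) / new_vms_per_month
print(f"{usable_hosts - in_use} free addresses, roughly {months_of_headroom:.1f} months of headroom")
```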

About the author:
Scott Lowe is a technical lead for virtualization at ePlus Technology Inc. During his 15 years of experience, he has worked with a broad range of products, specializing in enterprise technologies such as storage area networks, server virtualization, directory services and interoperability. In addition to his work for SearchNetworking.com, Scott is a contributor to SearchVMware.com and SearchServerVirtualization.com. Scott also maintains a blog about virtualization, storage and servers.

This was last published in January 2009
