Network configuration is not the only area where server virtualization can affect the data center network. Performance and throughput are crucial aspects that need attention from networking professionals. As organizations expand their use of server virtualization, the data center network must evolve to accommodate changing traffic patterns and a greater dependence on individual switch ports.
Aggregation and uplink capacity
The consolidation of multiple workloads onto fewer physical servers through server virtualization changes the dynamics of traffic flow within the data center network. Without server virtualization, data center network administrators deploy less expensive, fixed-configuration switches close to the physical servers. These "rack switches" provide a couple of uplinks to the core network. This arrangement works because most of the servers connected to a rack switch have vastly underutilized network links, making it possible to aggregate all that traffic onto those few uplinks while keeping the oversubscription ratio within generally accepted limits. In this model, traffic aggregation occurs within the rack switch itself.
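As a rough back-of-the-envelope sketch of that oversubscription ratio, the numbers below (40 servers, 1 Gbps server links, two 10 Gbps uplinks) are illustrative assumptions, not figures from the article:

```python
# Illustrative oversubscription calculation -- all figures are assumptions.
def oversubscription_ratio(server_count, server_link_gbps,
                           uplink_count, uplink_gbps):
    """Ratio of total downlink capacity to total uplink capacity."""
    downlink = server_count * server_link_gbps
    uplink = uplink_count * uplink_gbps
    return downlink / uplink

# 40 non-virtualized servers on 1 Gbps links, two 10 Gbps uplinks:
ratio = oversubscription_ratio(40, 1, 2, 10)
print(f"{ratio:.0f}:1 oversubscription")  # prints "2:1 oversubscription"
```

Because the server links sit mostly idle, a 2:1 (or even higher) ratio like this rarely causes congestion in the non-virtualized case.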
With server virtualization, however, traffic aggregation occurs at the physical server itself. Multiple workloads now run on the same physical server and share the same network links, so the links to the physical servers are far more heavily utilized. Using a rack switch to aggregate traffic that has already been aggregated at the host can create bottlenecks that degrade network performance and throughput.
The simplest answer to this potential problem is to increase the number of uplinks from the rack switches to the core network. This approach helps with the "double aggregation" problem, but it also increases the number of core switch ports required in the data center, so network administrators must apply it judiciously to avoid exceeding their core switch port capacity.
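A simple sketch shows how quickly uplink requirements can grow after consolidation. The utilization figures and link speeds here are hypothetical assumptions chosen only to illustrate the effect:

```python
import math

# Illustrative uplink sizing -- all figures are assumptions.
def uplinks_needed(server_count, server_link_gbps, avg_utilization,
                   uplink_gbps, target_uplink_load=0.8):
    """Minimum number of uplinks that keeps the expected aggregate
    traffic under the target load per uplink."""
    expected_traffic = server_count * server_link_gbps * avg_utilization
    return math.ceil(expected_traffic / (uplink_gbps * target_uplink_load))

# Before consolidation: 40 servers on 1 Gbps links, ~5% average utilization.
print(uplinks_needed(40, 1, 0.05, 10))   # prints 1
# After: 10 virtualization hosts on 10 Gbps links, ~60% average utilization.
print(uplinks_needed(10, 10, 0.60, 10))  # prints 8
```

Even though consolidation reduced the server count fourfold, the already-aggregated host traffic drives the uplink requirement up sharply.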
Don't count on locality
Other pressures are driving data center network administrators to add uplinks to the core network. Administrators and network architects try to use locality to their advantage. Servers that communicate heavily or frequently with each other are usually connected to the same switch, which keeps traffic off the core network, minimizes latency and maximizes throughput. The fixed nature of a non-virtualized data center makes this possible.
With server virtualization, however, the placement of workloads can change dynamically depending upon server utilization. Technologies such as live migration allow server administrators to move workloads to different physical servers easily and quickly. Some virtualization solutions even offer the ability to automate this process; VMware's Distributed Resource Scheduler (DRS) is one example. With workloads now moving freely across the server farm, data center network administrators can no longer count on locality. Where locality had been used to help minimize the number of uplinks to the core network, the number of uplinks may now need to be increased.
Network capacity needs
This approach of simply adding uplinks does not, unfortunately, address network capacity needs. Without server virtualization in the picture, many servers require only a couple of network connections. Server virtualization hosts, on the other hand, typically use six, eight, 10 or more network connections per host. While workload consolidation will remove some physical servers, data center network administrators may find that the existing capacity in their fixed-configuration switches isn't enough to handle a sizable server virtualization solution. For a variety of reasons, the network ports freed through server consolidation may not be suitable for reuse by the server virtualization solution itself. The pressure to add uplinks, as discussed earlier, only exacerbates potential capacity concerns.
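One common reason freed ports aren't reusable is a speed mismatch: consolidation may free 1 Gbps access ports while the new hosts need 10 Gbps connections. A minimal sketch, with entirely hypothetical port counts, of checking for that kind of shortfall:

```python
# Illustrative port-inventory check -- all figures are assumptions.
def ports_short(available, required):
    """Per-speed shortfall of switch ports, given dicts keyed by port speed."""
    return {speed: max(0, required.get(speed, 0) - available.get(speed, 0))
            for speed in required}

# Consolidation frees 120 x 1 Gbps ports, but the 10 hypothetical
# virtualization hosts each need eight 10 Gbps connections:
available = {"1G": 120, "10G": 8}
required = {"10G": 10 * 8}
print(ports_short(available, required))  # prints {'10G': 72}
```

The raw port count went down, yet the switch inventory still falls 72 ports short at the speed the hosts actually need.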
These capacity concerns aren't just about the sheer numbers of ports, either. As discussed above, some server virtualization solutions can take advantage of 802.3ad/LACP link aggregation. However, server virtualization solutions also need redundancy; the loss of a single network port can now affect numerous workloads across multiple departments or business units. The impact of a loss of connectivity on a server virtualization host can be much greater than network connectivity loss for a non-virtualized server. As a result, redundancy takes on a new importance. Data center network administrators and network architects now have to balance the need for effective link aggregation against the need for redundancy. Organizations may need to embrace switches that support cross-switch link aggregation in order to meet these normally opposed requirements.
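The tension between aggregation and redundancy can be made concrete with a small sketch. The NIC counts and speeds below are hypothetical; the point is that, without cross-switch link aggregation, putting all NICs in one LACP group on one switch maximizes bandwidth but leaves nothing after a switch failure:

```python
# Illustrative aggregation-vs-redundancy tradeoff -- figures are assumptions.
def surviving_bandwidth_gbps(nic_gbps, nics_per_lag, lag_count,
                             failed_lags=1):
    """Bandwidth remaining after losing `failed_lags` link-aggregation
    groups (e.g., because the switch hosting each group fails)."""
    return nic_gbps * nics_per_lag * (lag_count - failed_lags)

# Eight 10 Gbps NICs in a single LAG on one switch:
print(surviving_bandwidth_gbps(10, 8, 1))  # prints 0 -- nothing survives
# Same eight NICs split into two LAGs across two switches:
print(surviving_bandwidth_gbps(10, 4, 2))  # prints 40 -- half survives
```

Cross-switch link aggregation resolves the tradeoff by letting one large group span two switches, so the full group's bandwidth is usable in normal operation while a single switch failure removes only half of it.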
All these requirements, taken together, may drive network administrators and architects to replace or redesign substantial portions of the data center network. Otherwise, the performance and throughput of the data center network may suffer.
About the author:
Scott Lowe is a technical lead for virtualization at ePlus Technology Inc. During his 15 years of experience, he has worked with a broad range of products, specializing in enterprise technologies such as storage area networks, server virtualization, directory services and interoperability. In addition to his work for SearchNetworking.com, Scott is a contributor to SearchVMware.com and SearchServerVirtualization.com. Scott also maintains a blog about virtualization, storage and servers.