Virtualization has been a hot topic for almost a decade. It's helped enterprises break the old model of static resources in the data center and move toward a connected resource pool. It's launched cloud computing, the most important trend in IT since the personal computer. Ironically, although virtualization is rooted in networking, it seems to have left the network for last -- but not for much longer. In fact, network virtualization is already changing -- and disrupting -- data center plans. Eventually, it will revolutionize them.
In both server virtualization and cloud computing, virtual networking -- and, by extension, the virtualized network -- grew out of the need to deploy applications with stable connectivity across a pool of resources that could vary over time with load or status. Virtual switches, or vSwitches, hosted on servers provide the internal connectivity among virtual machines; in container models such as Docker, vSwitches provide the default connections. In early deployments, where applications were monolithic, virtual switches could be connected to traditional data center LANs to provide WAN access to the applications. Virtualization or cloud software (VMware, OpenStack), meanwhile, could build connections using DevOps-like network tools. As virtualization matures, things are getting more complicated.
Componentization changing landscape
One big issue is the componentization of applications, culminating in the microservices trend we now see in software design. Componentized applications can be distributed across many servers, whether in the data center or in a private or hybrid cloud. This distribution means network connections among components have to span multiple servers and link multiple vSwitches. Connecting vSwitches through traditional data center Ethernet can be inefficient from a traffic perspective, introducing what's sometimes called "hairpinning," where flows exit a switch only to loop back into it.
Another emerging challenge is scaling and redeployment. Early virtualization applications didn't adapt to load changes or failures; operations people simply redeployed when something failed, and adapted their data center plans accordingly. Now we need dynamic scaling and automatic fault recovery, which means a component that moves or is replicated must still retain its connection with the users of the application it supports.
Both Amazon and Google, leaders in cloud computing, have faced these challenges through an expanded vision of virtual networking that involves multiaddressing of elements. When an application deploys, its components are placed within a private IP address space as defined in RFC 1918 (often 10.x.x.x). Since these addresses can't be routed on a public network, the application's public ports are also given a standard public IP address, and an address translator links the two. When a component moves, its new private address is linked to the public address, and users reach the component in its new location. Componentized, application-anchored private networks are evolving to look increasingly like tenants in a public cloud, so these public cloud practices are definitely in the future for data center networks.
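The multiaddressing scheme above can be sketched as a small translation table that decouples the stable public address from the component's movable private address. The class and addresses below are purely illustrative, not Amazon's or Google's actual mechanism:

```python
# Sketch of public-to-private address mapping for a componentized app.
# When a component moves, only the translator's table changes; the
# public address seen by users stays stable. All names are illustrative.

class AddressTranslator:
    def __init__(self):
        # public IP -> current private (RFC 1918) address of the component
        self.table = {}

    def publish(self, public_ip, private_ip):
        self.table[public_ip] = private_ip

    def move(self, public_ip, new_private_ip):
        # Component redeployed or scaled onto a new host: remap in place.
        self.table[public_ip] = new_private_ip

    def resolve(self, public_ip):
        return self.table[public_ip]

nat = AddressTranslator()
nat.publish("203.0.113.10", "10.0.1.5")   # initial deployment
nat.move("203.0.113.10", "10.0.2.17")     # component relocated
print(nat.resolve("203.0.113.10"))        # prints 10.0.2.17
```

The point of the design is that users never see the remapping: the public address is the contract, and the translator absorbs every redeployment.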
NFV adding new dimension to data centers
These requirements are more than sufficient to change data center networking by themselves, but network functions virtualization (NFV) is adding even more complexity to how data centers are planned. NFV introduces the notion of pipelined components that are stitched together into a service chain. In all previous forms of application networking, components addressed each other or interacted through a service bus, so all that was required was a directory to translate logical addresses (URLs or Web links) to IP addresses. With service chaining, the application components don't know the order of processing in the chain, and so direct tunnels have to be established between components under management control.
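A minimal sketch of what makes service chaining different: the processing order lives in the management layer, not in the functions themselves, so the controller must derive and provision a tunnel between each consecutive pair. The function names below are illustrative:

```python
# Sketch of management-driven service chaining. The chain order is data
# held by the controller; the functions never address each other directly,
# so the controller provisions a tunnel per consecutive pair.

def build_chain_tunnels(chain):
    """Return the (source, destination) tunnels needed to realize
    an ordered service chain."""
    return list(zip(chain, chain[1:]))

chain = ["classifier", "firewall", "nat", "load-balancer"]
tunnels = build_chain_tunnels(chain)
print(tunnels)
# [('classifier', 'firewall'), ('firewall', 'nat'), ('nat', 'load-balancer')]
```

Reordering the chain then means re-provisioning tunnels under management control, with no change to the functions themselves.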
In addition, there are a few critical new principles to consider when designing data centers for virtualization or the cloud. By adhering to these, data center network planners can not only support current network virtualization and virtualized network needs, but also ensure their data center networks will be able to support future hybrid cloud and virtualization trends:
The data center network is a host for multiple virtual LANs, not a single fabric of connectivity. Flat Ethernet LANs that span an entire data center can make it impossible to securely connect application components using automated tools, and also make it difficult to control the addressing of those components to prevent unauthorized or noncompliant usage. VLANs can solve these problems, provided your Ethernet switches can create and manage them.
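As a sketch of that per-application isolation, an automated tool might allocate a distinct VLAN ID to each application so its components share a broadcast domain with nothing else. The allocator below is hypothetical; the only real constraint reflected here is the 12-bit VLAN ID space (usable IDs up to 4094):

```python
# Hypothetical VLAN allocator: one VLAN ID per application "tenant".
# Repeated requests for the same application return the same ID, so
# automated deployment tools stay consistent across redeployments.

class VlanAllocator:
    def __init__(self, start=100, end=4094):
        self.next_id = start
        self.end = end
        self.assigned = {}  # application name -> VLAN ID

    def vlan_for(self, app):
        if app not in self.assigned:
            if self.next_id > self.end:
                raise RuntimeError("VLAN ID pool exhausted")
            self.assigned[app] = self.next_id
            self.next_id += 1
        return self.assigned[app]

alloc = VlanAllocator()
print(alloc.vlan_for("billing"),    # 100
      alloc.vlan_for("analytics"),  # 101
      alloc.vlan_for("billing"))    # 100 again -- stable per app
```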
A single strategy for automating connectivity across both vSwitches and Ethernet switches can simplify operations. OpenFlow, an SDN control protocol defined by the Open Networking Foundation, is probably deployed in this unified control mission more often in the data center than anywhere else. With OpenFlow, you can not only build VLANs across multiple servers, but also change public/private address mappings and create explicit service chains where needed.
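OpenFlow's core abstraction is a prioritized match/action flow table, and that is what lets one controller policy drive both vSwitches and physical switches. The toy model below mimics the idea in plain Python -- it is not the OpenFlow wire protocol or any real controller API -- to show how one table can express both VLAN tagging and address rewriting:

```python
# Toy model of an OpenFlow-style flow table: each entry has match fields,
# an action list and a priority; lookup returns the highest-priority match.

flow_table = []

def add_flow(match, actions, priority=0):
    flow_table.append({"match": match, "actions": actions, "priority": priority})
    flow_table.sort(key=lambda f: -f["priority"])  # highest priority first

def lookup(packet):
    for flow in flow_table:
        if all(packet.get(k) == v for k, v in flow["match"].items()):
            return flow["actions"]
    return ["drop"]  # table-miss default

# Tag one tenant's vSwitch port into its VLAN...
add_flow({"in_port": 3}, ["push_vlan:101", "output:uplink"], priority=10)
# ...and rewrite a public address to the component's private address.
add_flow({"dst_ip": "203.0.113.10"},
         ["set_dst_ip:10.0.2.17", "output:7"], priority=20)

print(lookup({"in_port": 3}))  # ['push_vlan:101', 'output:uplink']
```

In a real deployment the controller pushes equivalent flow entries to every switch, virtual or physical, through the OpenFlow protocol, which is what makes the single-strategy approach possible.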
Directory and DNS services have to be linked to virtual network services to ensure component addressing within an application can be independently controlled. Applications in the virtualized data center are becoming more like independent tenants with every technology change, and this trend is certain to continue. Having a single directory/domain name system for all applications will create collisions on changes as multiple automated processes manage individual applications' scaling and redeployment.
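The collision problem above can be sketched with per-application name scopes: when each application owns its own directory, two applications can each register a component named "db" and their automated scaling processes never interfere. The class, names and addresses below are illustrative:

```python
# Sketch of per-application (per-tenant) name scoping. A single shared
# namespace would let one app's automated redeployment overwrite
# another's records; scoping by application avoids the collision.

class TenantDirectory:
    def __init__(self):
        self.zones = {}  # application -> {component name: address}

    def register(self, app, name, address):
        self.zones.setdefault(app, {})[name] = address

    def resolve(self, app, name):
        return self.zones[app][name]

d = TenantDirectory()
d.register("billing", "db", "10.0.1.5")
d.register("analytics", "db", "10.0.9.8")  # same name, no collision
print(d.resolve("billing", "db"), d.resolve("analytics", "db"))
# 10.0.1.5 10.0.9.8
```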
Firewalls, load balancers, other higher-layer network tools and even routers are now candidates to be virtualized into software form and deployed as needed within application-based virtual networks and between those networks and users. Physical appliances are too difficult to connect and reconnect, and most are not suited to multi-tenant operation. The move to make applications into tenants is creating too many VLANs and IP subnets for discrete physical devices to remain practical. As this trend continues, it is likely that at least gateway routers will need to be virtualized as well.
The network is the critical link in virtualization -- the thing that lets us reach resource pools and assign applications and functions. Enterprises that don't address the virtualized network itself risk more than a loss of network efficiency; they risk losing virtualization's overall benefits and finding their data center plans outmoded.