Data centers are saturated with virtual networks inside the machines hosting virtual servers. IT takes network trends like virtual switching for granted and rarely considers the system load the virtual switches represent. But with more kinds of virtualized network functions finding their way into data centers, IT must ask itself: How much of the data center's processing power is being consumed by servers doing the work of network appliances? And, whatever the answer, IT needs to ask a related question: How much will be consumed next year, or the year after?
Several network trends suggest the answer to that second question is going to be "More, a lot more." Among them:
- Private cloud: Moving the remaining physical servers into virtual space, replacing specialized storage infrastructures and networks with hyper-converged servers-as-storage appliances, and shifting to a true cloud architecture in the data center all inevitably push more traffic onto virtual switching.
- Software-defined networking and network functions virtualization: As SDN technologies, including VMware NSX, become more robust and far reaching, and as more virtual network functions are pushed into enterprise data centers to be everything from internal segmentation firewalls to SD-WAN head-ends, more physical CPU cycles will go to networking.
- DevOps, microservices, and containers: Containers push the servers-as-networking model even further. There are usually many more containers than virtual machines, which means containers force the server CPUs supplying their virtual network to work even harder.
These technologies all power flexibility and agility in providing IT services, and IT can't ignore the need to be faster and more flexible. But IT must keep its eyes on both performance and efficiency in pursuing these ends.
Consider this example: If the enterprise is buying 16-core host servers but devoting most of the cores on any heavily loaded boxes to network chores, it is saying, in effect, that the flexibility derived from virtualizing networks trumps the steep additional investment made to cover the switching load. After all, adding compute cores to process network packets is more expensive than adding switch ports. That cost asymmetry has long driven the development of specialized network hardware and custom network chips.
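The cost trade-off above can be sketched with simple arithmetic. Every figure below (host price, core counts, port price) is an illustrative assumption, not vendor pricing:

```python
# Back-of-envelope cost of doing switching in server CPU cores vs. in
# dedicated switch ports. Every number here is an assumption chosen
# only to illustrate the comparison.

CORES_PER_HOST = 16        # assumed cores per host server
HOST_COST = 8000           # assumed price of one host, in dollars
COST_PER_CORE = HOST_COST / CORES_PER_HOST

CORES_ON_NETWORKING = 4    # assumed cores consumed by the virtual switch
SWITCH_PORT_COST = 150     # assumed price of one hardware switch port

cpu_switching_cost = CORES_ON_NETWORKING * COST_PER_CORE
print(f"Switching done in CPU cores: ${cpu_switching_cost:.0f} per host")
print(f"Switching done in hardware:  ${SWITCH_PORT_COST} per port")
```

With these hypothetical numbers, burning four cores on packet processing costs more than ten times what an equivalent hardware switch port would.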
Even where cycles are available, scale can be an issue. With software switches boasting top packet rates under 10 million packets per second -- and average rates far lower -- these switches are capable of only a fraction of what dedicated hardware switching can provide.
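The scale gap can be made concrete with rough numbers. The software-switch rate comes from the figure cited above; the hardware rate is an order-of-magnitude assumption for illustration, not a measured benchmark:

```python
# Rough scale comparison: a software virtual switch vs. a data center
# switch ASIC. Rates are order-of-magnitude figures for illustration.

SOFT_SWITCH_PPS = 10_000_000   # ~10 Mpps ceiling cited for software switches
HW_ASIC_PPS = 2_000_000_000    # assumed ~2 Bpps for a hardware switch ASIC

fraction = SOFT_SWITCH_PPS / HW_ASIC_PPS
print(f"Software switch reaches {fraction:.1%} of the ASIC's packet rate")
```

At these assumed rates, the software switch delivers well under 1% of what the dedicated silicon can forward.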
Improving services, virtually
What can enterprises committed to virtual networking do to improve server efficiency, performance and scalability?
Major application refactoring is one option. Restructuring applications to have microservice architectures can spread out the network and processing load, removing the "hotspots" that can overwhelm virtual network components. Lots of homegrown and commercial software will be restructured this way as enterprises and vendors work to modernize their application portfolios. But, given the sheer number of applications and vendors out there, progress will not be uniform or swift.
A more immediately available option is to give the servers a boost in the form of special-purpose hardware added to the servers themselves, to which network processing can be offloaded. Specialized network interface cards (NICs) -- sometimes programmable -- from companies such as Cavium, Mellanox and Netronome are filling this need. They expand the NIC's role as a co-processing engine to which more categories of packet processing can be handed off by the operating system or hypervisor.
Pushing more work to the NIC can significantly reduce the number of host server cores dedicated to networking, saving the enterprise money and boosting application performance as well. Microsoft rolled its own version of a smart NIC by repurposing field-programmable gate arrays to the task and installing them in Azure's host servers. The Facebook-led Open Compute Project incorporates smart NICs for network offload in its designs.
Most enterprises have thought about application rearchitecting, but few have considered the role hardware still plays in their thoroughly modern, virtualized, software-defined services architecture. They should.