The expanding footprint of server virtualization is forcing enterprises to rethink how they architect and build data center networks. But choosing the right vendor for that new data center network architecture is more difficult than ever because each provider has its own approach to solving the problems that server virtualization presents to the network. One thing is clear: Engineers may need to look beyond their incumbent providers for answers.
Network problems stem from the fact that virtualization has driven server utilization much higher, pushing up I/O congestion at the network edge and creating exploding demand for network performance throughout the data center. In addition, cloud computing and live migration of applications and virtual workloads via technologies like VMware's vMotion are driving up demand for low-latency, high-bandwidth networks.
Vendors are targeting the problem with a vast spectrum of solutions, from higher-performance top-of-rack switches to flatter, less complex data center networks built on switches and routers that eliminate the need for aggregation switches today, and perhaps for top-of-rack switches someday.
"Juniper's trying to get rid of the top-of-rack, and Force10 is coming out with a better top-of-rack," noted Dave O'Berry, director of IT systems and services for the South Carolina Department of Probation, Parole, and Pardon Services.
The divergent visions for the next generation of data center networks may, for the first time in more than a decade, kill the adage that "no one ever got fired for buying Cisco."
"Everyone recognizes that the current network doesn't support [virtualization] very well, but I think knowing who to use as a vendor isn't obvious," said Zeus Kerravala, distinguished research fellow with Yankee Group. "Historically, it's been Cisco because, frankly, all the products looked the same. If you were Foundry or Force10, you were a little bit faster than Cisco. And if you were HP or 3Com, you were a little bit cheaper than Cisco. But ultimately, all the products were built and worked the same. The solution for this virtualization problem is different from vendor to vendor."
Various roads to data center networks for virtualization
It remains to be seen which path to the next-generation data center will be the right one. Cisco Systems touts its line of Nexus switches and its vertically integrated Unified Computing System. Juniper Networks says enterprises can eliminate the aggregation layer in the data center with its virtual chassis technology, and it promises that its Project Stratus will transform the industry by flattening networks even more.
This week, Brocade and Force10 Networks both rolled out new products to enhance their own roadmaps for dealing with server virtualization.
Force10 introduced the S60, which it calls the first top-of-rack switch purpose-built for virtualized data center environments, promising high performance and added buffering capacity. The switch offers 48 Gigabit Ethernet (GbE) ports and up to four 10 GbE uplinks, along with two features aimed at optimizing the data center network for server virtualization, said Kevin Wade, senior director of product marketing at Force10. First, it has 1.25 GB of packet buffering; the only other top-of-rack switch on the market with more than 20 MB of buffering is the Arista 7048, which boasts 768 MB. That deep buffering allows the switch to hold packets at the port when it detects that the I/O infrastructure of a given server has been overwhelmed by virtualization and bursty applications.
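The value of deep buffers can be illustrated with a back-of-envelope calculation (an illustrative sketch with hypothetical traffic figures, not Force10's published math): when traffic bursts into a port faster than the attached server can drain it, the buffer absorbs the excess for a length of time proportional to its size.

```python
def absorb_time_s(buffer_bytes: float, arrival_bps: float, drain_bps: float) -> float:
    """Seconds a buffer can absorb a burst that arrives faster than it drains."""
    excess_bytes_per_s = (arrival_bps - drain_bps) / 8  # bits -> bytes
    return buffer_bytes / excess_bytes_per_s

# Hypothetical scenario: a 10 Gbps burst funneled into a 1 Gbps server-facing port
burst_bps, line_bps = 10e9, 1e9
deep = absorb_time_s(1.25 * 2**30, burst_bps, line_bps)    # ~1.25 GB (S60-class)
shallow = absorb_time_s(20 * 2**20, burst_bps, line_bps)   # ~20 MB (typical)
```

Under these assumed rates, the deep buffer rides out a burst for over a second, while a 20 MB buffer is exhausted in tens of milliseconds and starts dropping packets.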
The S60 also supports Force10's new Open Automation Framework, a suite of tools that ties the network into existing data center management tools from VMware and other virtualization vendors, allowing server and virtualization administrators to dynamically allocate network resources to virtual workloads.
Meanwhile, Brocade announced new 10 GbE blades for its MLX router/switch platform that double the number of wirespeed 10 GbE ports on a single chassis to 256. Bill Dunmire, Brocade's director of product marketing, said server virtualization is driving up the need for wirespeed port density in the core because designing an oversubscribed data center network to support server virtualization is too complex.
"If I have oversubscription in my switch or routing infrastructure, that's a whole other level of complexity," Dunmire said. "And virtualization adds another level of complexity, because now I don't have a single traffic flow [from the server]. I have multiple traffic flows, and it spikes at different times. It becomes a challenge for network architects."
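Dunmire's point can be made concrete with simple arithmetic (an illustrative sketch; the port counts below are hypothetical, not Brocade's): an oversubscription ratio is the potential demand from downstream ports divided by available uplink capacity, and anything above 1:1 means the design must account for contention.

```python
def oversubscription(downlink_gbps: float, uplink_gbps: float) -> float:
    """Ratio of potential downstream demand to available uplink capacity."""
    return downlink_gbps / uplink_gbps

# Hypothetical top-of-rack switch: 48 x 1 GbE server ports, 4 x 10 GbE uplinks
edge = oversubscription(48 * 1, 4 * 10)      # 1.2:1 -- mild contention

# A fully wirespeed (non-blocking) core aims for a 1:1 ratio instead
core = oversubscription(256 * 10, 256 * 10)  # 1.0
```

Keeping every port at wirespeed sidesteps the capacity-planning guesswork that bursty, overlapping virtual-machine traffic flows would otherwise require.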
Beware vendor lock-in and costly upgrades: Demand more of your vendors
Blogger and freelance network engineer Greg Ferro said data center networks should become more of a platform in order to support server virtualization. As these network elements develop, enterprises risk installing new high-end switches today, only to rip and replace them three or four years from now.
Vendors need to make bigger switches so that enterprises can collapse their core 10 GbE switching onto a single box, Ferro said.
"I need a backplane to handle lots of 10 GbE," he said. "Second, those switches need to support fancy features like never before. Last year, having a virtual chassis was the thing. This year, it's converged storage, data center bridging … buffers, lower power and much better redundancy. Third, [vendors must] have a roadmap that convinces [enterprises] to buy the chassis. Instead of replacing stackable switches, we replace blades [in the chassis]."
Ultimately, enterprises must dictate to vendors how they solve the problems of building data center networks to support virtualization, O'Berry said.
"I think at times businesses don't look at what they really need," he said. "They look at what's out there. We've been conditioned to do that. There's a lot of that among CIOs and CTOs."
Instead, network engineers should figure out what they want and demand it of the vendors. In particular, data center network vendors should be developing open, standards-based technology to tackle the challenges of virtualization, O'Berry said.
"I don't want to choose something right now that I am going to have to live with for the next 10 years of my life without modifications," he said. "It has to be something that I can iteratively add to or take away from as we go. So there has to be some kind of flexibility and agility. It has to be open standards based -- I'm talking about open standards, not APIs."
Solving virtualization problems with open standards might be too much to ask for at first, according to Kerravala.
"Initially, you'll have vendors who all have their own secret sauce, their own way of solving the problem," he said. "I guess over time you'll have standards. But if you think back to the early days of LAN, we had vendors who sold Token Ring. Others were doing FDDI. And others were doing Ethernet. Eventually, it did come down to standards, but it took years to get there. I really do believe we're at this kind of precipice for networking again. You're going to see a lot of innovation and a lot of unique ways to solve the problem."
Kerravala said network engineers will have to demand that the prospective data center network vendors show them a comprehensive technology roadmap for solving server virtualization problems.
Let us know what you think about the story; email: Shamus McGillicuddy, News Editor