Navigating the road to flatter, converged data center networks

IT professionals discuss the process of building converged data center networks: what they expect from vendors’ promises, where they have reservations, and the benefits and technical details of the technology.

When it comes to building converged data center networks, network managers have lots of new choices in architecture and approach. In this feature, they discuss their options for flat networks and for integrating storage and data networking to support the private cloud.

Byron DeLoach, director of architecture for managed services provider Terenine, is undergoing a process that network architects haven’t had to take seriously in a long time: thorough, comparative research on networking vendors and their differing approaches to building the LAN.

That’s because for the first time in years, vendors are promising a revolution in how enterprises will design and build their networks in order to support server virtualization and private cloud computing. This revolution, they say, will include technology to consolidate data center and storage infrastructure using data center bridging and Fibre Channel over Ethernet (FCoE), creating converged data center networks. And it includes the move toward flatter data center networks by reducing the number of switching tiers to diminish latency and management overhead.

In fact, today some enterprises are removing the aggregation layer entirely, and vendors say they can eventually consolidate down to one tier of switching in a giant any-to-any Layer 2 domain. That Layer 2 domain will enable virtual machines to migrate dynamically anywhere within the data center and even across data centers using extended virtual LANs. Vendors are promising to make this happen with pre-standard technologies like Transparent Interconnection of Lots of Links (TRILL) and Cisco Systems’ Overlay Transport Virtualization (OTV).

But at this point, networking equipment makers are working toward executing their professed visions. Juniper Networks, for instance, announced its “3-2-1” initiative, a plan to reduce three-tiered data center networks to two tiers today and one tier in the future. Yet the final stage of this concept is little more than a road map. And many other vendors are in the same boat, particularly as some of them elect to wait for the IEEE and Internet Engineering Task Force to hammer out the standards that will power these concepts.

That forces network engineers and architects like DeLoach to navigate a tricky course through a lot of big promises and a good deal of proprietary technology.

“The way we decide if something is too far out there is by sheer volume of research,” said DeLoach. “We kind of take all the things that HP, Dell and IBM are doing and weigh that against what Cisco, Brocade and Force10 are doing. Certainly if they are a little more closely aligned, then it’s more attractive. It’s kind of like VHS versus Betamax. Betamax didn’t take off, and it’s because the big vendors decide which technology is going forward.”

Data center network convergence: Enterprises start with the server rack

When it comes to network convergence, most vendors are pushing FCoE, which doesn’t thrill DeLoach.

“I’m waiting for the wire speeds to come up pretty drastically before we start looking at [those solutions],” he said.

However, he isn’t averse to network convergence where it can save him money, so he has adopted I/O virtualization technology from Xsigo Systems to cut down on the number of cables, network interface cards (NICs) and host bus adapters he needs to install when he racks a server.

“Whenever we had a new server, we would have two four-port NICs for redundancy, so we would separate out traffic and have redundant routes. Then we would have two Fibre Channel cards, depending on throughput needs. That gets pretty expensive quickly. The [NICs] are right at $800 a piece, and the Fibre Channel cards are at $1,500 to $2,000. Then you throw in VMware and the application licensing that goes on top of that. So you’re looking at connectivity for a server costing almost as much as the server. With Xsigo, we put in two InfiniBand cards for $1,000. It doesn’t cost me any extra for software or hardware,” DeLoach explained.
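
That cost comparison boils down to simple arithmetic. The short Python sketch below just works through DeLoach’s ballpark figures; the prices are the rough numbers he cites rather than actual quotes, and the totals leave out the software licensing he mentions.

    # Rough per-server connectivity cost, using the ballpark figures DeLoach cites.
    # Prices are illustrative, not vendor quotes.

    nic_price = 800        # per quad-port NIC (two installed for redundancy)
    fc_hba_price = 1500    # per Fibre Channel card (low end of the $1,500-$2,000 range)
    traditional = 2 * nic_price + 2 * fc_hba_price

    # I/O-virtualized install: two InfiniBand adapters for roughly $1,000 total.
    io_virtualized = 1000

    print(f"Traditional connectivity per server:     ${traditional:,}")   # $4,600
    print(f"I/O-virtualized connectivity per server: ${io_virtualized:,}")
    print(f"Hardware savings per server:             ${traditional - io_virtualized:,}")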

What’s more, the Xsigo solution reduces the amount of traffic that goes to his top-of-rack, 10 GbE Force10 switches. The reduction in overall traffic on his network has left him with excess capacity and room to grow his network.

How flat is a flat network?

The notion of a single tier of switching in a data center is little more than a promise today, but many enterprises do see the value in a flattened, two-tier network.

Dave O’Berry, director of IT systems and services at the South Carolina Department of Probation, Parole and Pardon Services, has a new two-tiered network in his data center. Using Juniper Networks’ virtual chassis technology, O’Berry has built a data center core of seven Juniper EX4200 switches. His legacy HP E4800 top-of-rack switches plug directly into the virtual core, giving him a two-tiered network. O’Berry is planning a refresh at the server rack using FlexFabric, HP’s network convergence and I/O virtualization technology, and he anticipates replacing those legacy HP top-of-rack switches with Juniper switches.

But O’Berry is reluctant to flatten his network further to a single, any-to-any network, which is part of Juniper’s 3-2-1 road map.

“Very few things are built to support a core-to-edge concept from a visibility and security perspective,” he said. “Each one of these [switching] layers is a point where you get visibility on things. A [completely flat], many-to-many network is cool, but the reality is that the way you secure that is very tough. It’s not trivial.”

In the meantime, two-tier networks go a long way in better supporting a highly virtualized environment. The University of Arizona has built flat, two-tier data center networks in both its primary data center and a “warm” secondary data center using Cisco’s Nexus switches, according to Derek Masseth, senior director for front-line and infrastructure services. The school’s primary data center has two Nexus 7000s in the core and several racks of servers topped by Nexus 5010s. Each Nexus 5010 has dual 10 GbE uplinks into each Nexus 7000 and into a group of Cisco MDS 9509 Fibre Channel switches. The secondary data center has a single Nexus 7000 with a smaller number of Nexus 5010s.

Masseth’s network supports close to 500 virtual machines running on about 24 physical servers with 10 GbE connectivity. With 10 GbE links and the two-tiered architecture, the network rarely runs into latency issues.

“We have a tendency to lay out our data center physically such that if we know something is going to be particularly latency-sensitive server to server, we’ll try to locate those servers in the same rack so that they are slaved to the same top-of-rack device and are just one hop away,” Masseth said. “That’s not always possible, so we watch the pipes from the top of rack into the aggregation points on the Nexus 7000s. But we haven’t had substantial issues in that regard, because all of our Nexus 5010s are slaved to redundant Nexus 7000s.”
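
Masseth’s placement rule can be pictured with a small hop-count model. The Python sketch below is a minimal, hypothetical illustration of a two-tier design like the one described above, not the university’s actual topology; the server names, rack assignments and switch labels are invented. It simply shows why two servers in the same rack are a single switch hop apart, while servers in different racks must cross the aggregation core.

    # Minimal model of server-to-server switch hops in a two-tier network.
    # Server names, racks and switch labels are hypothetical.

    server_rack = {
        "db01": "rack1", "app01": "rack1",   # latency-sensitive pair kept in one rack
        "app02": "rack2", "web01": "rack3",
    }

    def path(src: str, dst: str) -> list[str]:
        """Return the switches a frame crosses between two servers."""
        src_rack, dst_rack = server_rack[src], server_rack[dst]
        if src_rack == dst_rack:
            # Same rack: one hop through the shared top-of-rack switch.
            return [f"tor-{src_rack}"]
        # Different racks: up to the aggregation core and back down.
        return [f"tor-{src_rack}", "core", f"tor-{dst_rack}"]

    print(path("db01", "app01"))   # ['tor-rack1']                      -> 1 hop
    print(path("db01", "web01"))   # ['tor-rack1', 'core', 'tor-rack3'] -> 3 hops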

Extending the Layer 2 network for data center interconnects

Now that data centers are more geographically dispersed, enterprises are looking to stretch their Layer 2 networks between sites to ensure smoother movement of VMs and other traffic between data centers. Because most WAN links used for data center interconnects are routed Layer 3 connections, vendors and service providers are offering ways to extend a Layer 2 LAN virtually across the WAN. Some service providers offer a pricey virtual private LAN service (VPLS). Meanwhile, Cisco has introduced its OTV feature on the Nexus 7000, which creates a virtual Layer 2 connection over a Layer 3 WAN link.
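
The general idea behind a Layer 2 extension such as OTV is MAC-in-IP encapsulation: an edge device wraps an Ethernet frame in an IP packet, routes it across the Layer 3 WAN, and unwraps it at the remote data center so the frame appears never to have left the LAN. The Python sketch below is only a conceptual illustration of that wrap-and-unwrap step; it does not reproduce the real OTV header format, and the classes, field names and addresses are invented for the example.

    # Conceptual sketch of MAC-in-IP encapsulation for a Layer 2 data center
    # interconnect. This is NOT the actual OTV frame layout; fields are
    # simplified and the addresses are documentation examples.

    from dataclasses import dataclass

    @dataclass
    class EthernetFrame:          # the original Layer 2 frame, VLAN tag included
        dst_mac: str
        src_mac: str
        vlan: int
        payload: bytes

    @dataclass
    class IPPacket:               # the Layer 3 wrapper that crosses the WAN
        src_ip: str               # edge device in data center A
        dst_ip: str               # edge device in data center B
        inner_frame: EthernetFrame

    def encapsulate(frame: EthernetFrame, local_edge: str, remote_edge: str) -> IPPacket:
        """Wrap a Layer 2 frame for transport over the routed WAN."""
        return IPPacket(src_ip=local_edge, dst_ip=remote_edge, inner_frame=frame)

    def decapsulate(packet: IPPacket) -> EthernetFrame:
        """Unwrap at the remote site and deliver the frame onto the local VLAN."""
        return packet.inner_frame

    # A VM's frame on VLAN 100 in data center A arrives unchanged in data center B.
    frame = EthernetFrame("00:50:56:aa:bb:cc", "00:50:56:dd:ee:ff", 100, b"app data")
    wan_packet = encapsulate(frame, "192.0.2.1", "198.51.100.1")
    assert decapsulate(wan_packet) == frame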

“That’s something that I’m waiting for to mature a bit before we jump into,” said Terenine’s DeLoach. “If I were wearing my architect hat I’d be very excited about it, but I’m currently wearing my engineering hat and I’m waiting for more security and stability. I’m waiting to see how it plays out, and once I see some actual standards from the IEEE, I’ll be a little more aggressive about pursuing that.”

The University of Arizona has solved the problem of a Layer 2 data center interconnect by running its own Layer 2 link over university-owned dark fiber between its two data centers.

“Our vMotion is not automated, but we do move [virtual machines] around quite regularly between sites,” Masseth said. “The Nexus 7000s in our cores are slaved together across the dark fiber between our data centers so our two data center networks are like one big network. We do vMotion between our data centers, and we do that very, very frequently.”

Making sound decisions in choosing a new network strategy

All of these advanced networking technologies in the data center are enabling enterprises to achieve two ultimate objectives, according to Lucinda Borovick, vice president of data center networks research at IDC.

“[Enterprises] want to take the cost out of their network. And they want to know how the network can support a migration to a virtual private cloud.”

As enterprises work toward those goals in a networking market undergoing tremendous change, Borovick said their No. 1 concern is to avoid locking themselves into a single technology. “They know their environment is going to change quickly, and they want to make sure they are not locked into an approach that precludes them from making other decisions down the road,” she said.

Navigating the competing visions from the different networking vendors is somewhat of a challenge, said South Carolina’s O’Berry, but he sees each vendor’s road map toward the new converged data center network as little more than a commodity tool.

“The vendors’ visions change. It’s a product cycle for them,” he said. “What you have to know is that you are solving a business problem. Most vendors act like they are the edge that you need in your business, and that’s a fallacy. The fact is that they are a tool. The edge you have is in how you execute on your business. That’s why open standards matter to me. Open standards don’t force me to make a life-or-death bet on one company’s vision.”

This was first published in April 2011