The hype surrounding data center network convergence and the battle between FCoE and iSCSI can obscure the core issue: Sooner or later, all storage will traverse the production Ethernet network, and engineers must be prepared for that transition.
Listening to vendor stories can be mind-numbing since each is staking out its own corner. Cisco Systems and Brocade lead the FCoE charge. Meanwhile, Dell tells its customers that iSCSI over Ethernet is the path to convergence. Chip vendors like Intel and network adapter vendors like QLogic and Emulex are backing all horses, ready to support whatever technology an enterprise chooses.
Enterprises must look past the vendor hype and instead weigh their existing infrastructure investments to determine which technology to choose. They also must understand that network convergence may occur in small increments, not necessarily going beyond the rack level in the short term.
"Typically [customers] have one type of convergence in mind when they come to us, and we have to open their minds a bit to the fact that there are many different options and they aren't mutually exclusive," said Joe Onisick, technical solutions architect with the data center practice of a large system integrator.
Why move forward with data center network convergence?
With the proliferation of 10 Gigabit Ethernet (10 GbE), server virtualization and other technologies, enterprises must look seriously at network convergence to control capital and operational expenses and reduce complexity.
"If you have a rack with a lot of servers, especially a VMware rack, you might have 10 copper Gigabit Ethernet connections and two Fibre Channel connections to each server," said Darren Ramsey, director of technology for Wellmont Health System, a Tennessee-based hospital chain. "If you have 10 servers in that rack, that's 120 cables. That's a lot to try to work around, a lot of inflexible copper, lots of heat. Switch ports aren't cheap."
In his data center, Ramsey recently introduced network convergence to eight racks of virtualized Dell servers using Cisco's Nexus line. He consolidated 10 NICs and two host bus adapters (HBAs) on each server down to two QLogic converged network adapters (CNAs) that provide dual 10 GbE FCoE links to redundant top-of-rack Nexus 2232 Fabric Extenders. All eight pairs of Nexus 2232 Fabric Extenders connect upstream to two redundant Nexus 5020 switches. Storage and data traffic are converged via FCoE up to the Nexus 5020s. From there, storage traffic returns to native Fibre Channel and connects to a pair of Cisco MDS 9506 Director switches on the storage area network (SAN). The production data traffic continues upstream to Catalyst 6509 switches.
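The cabling savings Ramsey describes come down to simple arithmetic. The per-server counts below are taken from the article; the helper function itself is a hypothetical illustration, not a sizing tool.

```python
# Back-of-the-envelope cable math for rack-level convergence.
# Per-server counts come from the article; the function is illustrative only.

def rack_cable_count(servers: int, data_links: int, fc_links: int) -> int:
    """Total cables in a rack: data network links plus Fibre Channel links."""
    return servers * (data_links + fc_links)

# Before convergence: 10 copper GbE NICs + 2 FC HBAs per server, 10 servers.
before = rack_cable_count(servers=10, data_links=10, fc_links=2)

# After convergence: two 10 GbE FCoE links per server via dual CNAs.
after = rack_cable_count(servers=10, data_links=2, fc_links=0)

print(before, after)  # 120 cables reduced to 20
```

The same dual 10 GbE links also carry the storage traffic, which is why the FC HBA count drops to zero at the server.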
"The FCoE streamlined and downsized the number of Fibre Channel ports we needed since all hosts now run directly into the Nexus," Ramsey said. "We no longer need to tie directly into the MDS [from the server]. And it cut down on the complexity that was in each rack. And when we went to two 10 Gigabit links on each server, that allowed us to have more virtual machines on that box."
Choosing a network convergence technology: iSCSI or FCoE?
FCoE gets all the data center network convergence hype, but many industry veterans say iSCSI is another viable option. As an IP-based storage networking protocol, iSCSI can run natively over an Ethernet network. Most enterprises that use iSCSI today run the storage protocol over their own separate networks because convergence wasn't an option on Gigabit Ethernet. But with 10 GbE switches becoming more affordable, iSCSI-based convergence is becoming more of a reality.
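The practical difference between the two convergence paths is where each sits in the protocol stack. This is a descriptive sketch only; the data structures are illustrative, not a protocol model.

```python
# Rough comparison of the two storage-convergence stacks, transport up.
# iSCSI rides TCP/IP and can cross Layer 3 boundaries on ordinary gear;
# FCoE encapsulates Fibre Channel directly in Ethernet and stays at Layer 2,
# which is why it needs lossless (data center bridging) Ethernet.

stacks = {
    "iSCSI": ["Ethernet", "IP", "TCP", "iSCSI", "SCSI"],
    "FCoE": ["Ethernet (lossless, DCB)", "FCoE", "Fibre Channel", "SCSI"],
}

def is_routable(stack: list) -> bool:
    """A stack that includes IP can be routed across Layer 3 networks."""
    return "IP" in stack

print(is_routable(stacks["iSCSI"]))  # True
print(is_routable(stacks["FCoE"]))   # False
```

This is the trade-off Foskett and Onisick describe below: iSCSI reuses the existing TCP/IP plumbing, while FCoE demands new lossless Ethernet infrastructure in exchange for avoiding TCP overhead.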
"Certainly iSCSI is the easier transition [compared to FCoE]," said storage blogger and IT consultant Stephen Foskett. "With iSCSI you don't have to have data center bridging, new NICs, new cables or new switches."
Ultimately the existing infrastructure and the storage demands of an enterprise will govern the choice of a network convergence path.
"There are very few times where I will steer a customer down an FCoE route if they don’t all ready have a Fibre Channel investment," said Onisick. "If they have a need for very high performance and very low throughput block data, FCoE is a great way to do it. If they can sustain a little more latency, iSCSI is fantastic. And if they have no need for block data, then NAS [network-attached storage] and NFS [network file system] is a fantastic option."
For Ramsey, iSCSI was never a viable option because of Wellmont’s high-performance requirements.
"We played around with iSCSI, but that was still going to run over TCP, and you're still going to contend with buffering, flow control, windowing or packet drops and queuing, so we stayed away from it. What FCoE brings to the table -- It doesn’t run over Layer 3. It's an encapsulation of your Fibre Channel packet inside a native Layer 2 frame, and all we're doing is transporting that between the server and up to the Nexus 2232 and the Nexus 5020."
Network convergence: New strategies and cultural clashes
With storage and data convergence, networking pros will have to think about networking in a new way.
"You have to realize you're going to have a lot more traffic and it's going to be a lot more sensitive to latency and it's going to be incredibly sensitive to availability," Foskett said. "You've got to make sure the network doesn't go down. If you lose your data center, people cry about it. If you lose your storage network, then servers fall over and that's a really big difference."
What’s more, networking professionals will need to familiarize themselves with the management concerns of storage professionals if they are going to have storage traffic on their production networks, according to Stuart Miniman, principal research contributor to Wikibon, an online technology research community.
"A storage person is really concerned about data availability and making sure there is never a data loss, as opposed to a networking administrator who is really concerned about making sure there is connectivity, bandwidth and resiliency," Miniman said.
These different mindsets have traditionally led to culture clashes when storage and networking professionals work together. Now they must find common ground.
"We heard that there were a lot of folks who never do [convergence] because their network folks and their SAN folks don't talk to each other," said Bob Cloud, executive director of IT infrastructure services at the University of Alabama at Birmingham. "We're unique because our network guys work for me and our SAN guys work for me."
Cloud has brought his storage and networking teams together while piloting FCoE-based network convergence in his data center. As a Brocade customer on both the Ethernet and Fibre Channel side of things, Cloud elected to pilot FCoE with Brocade's technology beginning last September.
In one server rack he has introduced two redundant Brocade 8000 top-of-rack switches. The servers in the rack connect to each switch via FCoE. The 8000 switches then use Fibre Channel connections to send storage traffic to a Brocade DCX storage director switch and Ethernet uplinks to send production data traffic to upstream Brocade FastIron SuperX switches. At first the storage and networking teams were confused about who would own the Brocade 8000 FCoE switches.
"I think there was some natural confusion over who was going to own the switch and who was going to name the switch," Cloud said. "But we got past that pretty quickly because in our data center both the networking group and the SAN group report to me. I told them to work it out because it's a trial. The purpose of the trial is to uncover the organizational issues."
Defining management duties
A common management platform will be essential to working out those organizational issues, Cloud said. Brocade has announced a new unified management product, Brocade Network Advisor, which supports management of Fibre Channel, FCoE, MPLS and IP switching and routing. Cloud is still waiting for the product.
"We may always have a route access level person in the networking group and I think the SAN folks are fine with that. The networking group will have a little more authority over management of the switch than the SAN folks, but the SAN folks will have everything they need to troubleshoot and monitor Fibre Channel," he said.
Wellmont’s Ramsey also manages both the storage and networking teams in his data center, so he's been able to keep his eye on the big picture. "I'm not going to have anyone fight over a sandbox," he said.
He doesn't see convergence as being that big a change organizationally. The Fibre Channel demarcation line has been pushed back from the server to his Nexus 5020 switches, putting that into the networking team’s domain, but the storage guys do everything else in managing their SAN.
"It's just that, instead of managing ports that correspond directly to a server, now they go to the connections on the 5020 and it's all kind of virtualized. Our SAN guys are more than welcome to jump in there and learn [Ethernet], but they are busy doing what they do -- provisioning storage and making sure that SRDF [EMC Symmetrix Remote Data Facility] is working properly. And I think that they embraced the idea of getting that extra network burden off of them,” Ramsey said.
Network convergence beyond the rack
Although Cisco and other vendors will begin delivering end-to-end FCoE switching capabilities this year, enabled by technologies like Transparent Interconnection of Lots of Links (TRILL) and Shortest Path Bridging, Ramsey doesn't see moving beyond rack-level network convergence within the next five years.
"What you're talking about is multi-hop FCoE, and Cisco is still working on fleshing that out. The most bang for the buck right now is to simplify the rack environment. If you want to go all FCoE, all your EMC stuff is going to have to be retrofitted with FCoE 10 Gigabit. And at that point you could probably get rid of your Fibre Channel. Maybe in five years we'll look at that, but that's not really going to buy us anything right now. We're just not pushing into the type of bandwidth where we would need dedicated 10 Gigabit to the storage. We don’t need that much data. Where FCoE helps us is simplification inside the rack, making it faster, cheaper and smaller."
Cloud is also not ready to look past the rack until he gets a better handle on management of converged networks.
"[Brocade] just announced a lot of this stuff, and we want to test out the management system. Once we prove that out, we'll be looking to go further [with convergence]. We are trying to figure out the total cost of ownership."