Although there is some debate over just how much of the data center an enterprise should converge with Fibre Channel over Ethernet (FCoE), most experts agree that enterprises can reap the benefits by starting small and expanding FCoE data center network convergence over time. But the wholesale jump into a completely converged FCoE data center network preached by many vendors simply wouldn't pay off yet.
FCoE purists preach a scenario in which an enterprise would converge its entire data center on a single FCoE network, with an Ethernet switch at the core controlling both data and storage traffic across a single infrastructure. That topology would reduce cost and complexity, limiting the number of switches and amount of cabling. For instance, the roadmap for Cisco's Data Center 3.0 vision describes a future in which the Cisco Nexus 7000 switch is at the core of a completely converged FCoE data center, with both server and storage traffic passing through the same chassis switch.
But many believe this is a myth. In a recent research note, Joe Skorupa, research vice president at Gartner, argued that two separate FCoE networks -- one for storage and one for data -- make more sense than a single, large converged FCoE network.
At this point, he said, network vendors don't have a core Ethernet switch with enough non-blocking 10 Gigabit Ethernet (GbE) port density to make this vision a reality for large enterprises. Most networking vendors offer core switches with around 256 10 GbE non-blocking ports -- not enough to serve as a single core switch in a large data center.
To compensate, enterprises would have to interconnect two or more chassis switches to get enough ports in their cores. But once switches are interconnected, ports must be dedicated solely to the links between them; eventually, whole switches end up dedicated just to connecting other switches, Skorupa said. This approach adds both cost and complexity to the data center network, swamping any savings an FCoE data center might offer.
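Skorupa's objection can be illustrated with rough arithmetic. The 256-port figure comes from the article; the assumption that half of each chassis's ports go to inter-switch links (ISLs) is purely illustrative, chosen to keep the interconnect itself non-blocking:

```python
# Rough illustration of the interconnect overhead Skorupa describes:
# linking chassis switches together burns non-blocking ports on
# inter-switch links (ISLs), eroding the capacity gained.
# Assumption (illustrative, not a vendor spec): half of each chassis's
# ports are reserved for ISLs once more than one chassis is involved.

PORTS_PER_CHASSIS = 256  # typical core-switch non-blocking 10 GbE density

def usable_ports(chassis_count, isl_fraction=0.5):
    """Ports left for servers and storage after reserving ISL ports."""
    if chassis_count == 1:
        return PORTS_PER_CHASSIS  # a single chassis needs no ISLs
    total = chassis_count * PORTS_PER_CHASSIS
    return int(total * (1 - isl_fraction))

for n in (1, 2, 4):
    print(n, "chassis ->", usable_ports(n), "usable ports")
```

Under these assumptions, going from one chassis to two buys no additional usable ports at all — every port the second chassis adds is consumed by the interconnect — which is the "swamped savings" effect in miniature.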
Today's FCoE data center convergence starts at the server and rack -- not the core
Converging storage and data networks all the way up into the core may not make much sense today, but the core isn't the likely starting point for FCoE implementation anyway, according to Dave Vellante, chief research advocate at the online IT community Wikibon. Instead, enterprises can save money and reduce complexity by starting small and working their way up slowly to more advanced convergence technologies.
"I think the starting point is sticking converged network adapters (CNAs) in servers to reduce heat density and getting costs down that way," Vellante said.
CNAs are an emerging line of server adapters that combine the functions of a network interface card (NIC) and a host bus adapter (HBA) in a single device. With CNAs, enterprises can reduce the number of devices and cables attached to their servers. They can also reduce the number of switches in a server rack, as the CNAs can connect directly to a single FCoE-enabled switch within the rack.
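The per-server savings are easy to tally. The counts below are assumptions for a typical dual-attached server (redundant LAN and SAN paths), not figures from the article or any vendor:

```python
# Illustrative per-server tally of adapters and cables before and after
# CNA consolidation. Assumes a dual-attached server: redundant Ethernet
# (NIC) and Fibre Channel (HBA) paths, collapsed onto two CNAs.

def server_components(use_cna):
    """Return (adapter inventory, cable count) for one server."""
    if use_cna:
        adapters = {"CNA": 2}            # two CNAs for path redundancy
    else:
        adapters = {"NIC": 2, "HBA": 2}  # separate data and storage adapters
    cables = sum(adapters.values())      # one cable per adapter port
    return adapters, cables

before, cables_before = server_components(use_cna=False)
after, cables_after = server_components(use_cna=True)
print("separate networks:", before, "-", cables_before, "cables")
print("converged (CNA): ", after, "-", cables_after, "cables")
```

Halving the adapter and cable count per server is where the heat-density and cost reductions Vellante mentions come from, and the effect compounds across every server in a rack.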
"I do think that the technologies are going to converge," Vellante said. "It's going to start at the server, when you put CNAs into the server. Then it's going to move to the top-of-rack switch with FCoE. Then it will move to a converged Ethernet switch, but you're going to have, for a long time, separate LANs and SANs. Then, over time, you're going to converge those technologies, and it's going to save money. It's just going to take a while -- probably a decade -- for this to play out."
Stalled FCoE data center networks: No supporting core switches
What is truly holding back adoption of converged FCoE in the data center core is the lack of core chassis switches that support converged enhanced Ethernet and FCoE. Dennis Martin, president of Demartek, a testing and research firm, pointed out that Cisco's Nexus 7000 doesn't yet support FCoE, even though the switch is the core of its Data Center 3.0 vision. Many Ethernet vendors are waiting to see how FCoE takes shape before they consider offering aggregation layer FCoE-enabled switches, let alone core switches. But eventually, CIOs will drive investment in a totally converged FCoE data center, Martin said.
"Over the next one to four years [storage and data networks will remain separate]," he said. "But over the really long term, they will converge because CIOs will start saying, 'Hey, we're running two sets of cables all the time. Why don't we just run one set? And why buy two switches all the time when you could just buy one?' The economics will play out, and CIOs will start to force the decision."
Ethernet advances will force the FCoE data center
Ethernet advances, which are well ahead of Fibre Channel's progress, could also force FCoE adoption, Martin said.
"We have 16 gigabit Fibre Channel coming, and today we have 8 gigabit," he said. "After 16 gigabit, we're not really sure where it's going. It might go to 32 gigabit, but that's still a ways out. Now look at Fibre Channel over Ethernet. It runs over converged enhanced Ethernet, which is at 10 gigabit today, but the Ethernet guys are working on 40 gigabit and 100 gigabit specs, and those are expected to be completed this year."
Enterprises will continue to reap the benefits of FCoE in the rack with common server adapters and common cabling in the short term, but they will eventually migrate to FCoE top-of-rack switches, with traffic from the racks feeding into separate Fibre Channel and Ethernet networks.
Solving the human problem critical to FCoE uptake
As noted previously on SearchNetworking.com, storage and networking pros will have to collaborate more and more as FCoE adoption expands within enterprises. Getting this piece right may be the most critical factor in FCoE's success.
"Networking guys are pulling cables and moving stuff around. It's all about flexibility and accommodating new needs for the business," Vellante said. "Storage guys put in a SAN and they say, 'Don't touch my SAN. You touch my SAN, you die.' They do that because their job is data integrity."
Collaboration and the careful definition of roles for network and storage pros will be critical for FCoE convergence.
"Today, Ethernet guys own Ethernet switches and storage guys own Fibre Channel switches. In a converged environment, who owns it?" Vellante said. "At some point, everyone will have to report up to the same group as opposed to having separate chains of command."
What's more, storage pros will have to focus on data integrity on an FCoE network while networking pros focus on meeting the connectivity demands of the data center.
"The storage guys need to be tasked with data integrity. Otherwise, the networking guys who are in charge are going to end up being in trouble," Vellante said. "CIOs need to start there."
Let us know what you think about the story; email: Shamus McGillicuddy, News Editor