
10 Gigabit Ethernet migration: What you need to consider

A 10 Gigabit Ethernet (GbE) migration in the data center is crucial for supporting converged storage and data networks and a highly virtualized environment. The technology adds flexibility and reduces latency in an enterprise network. However, along with these benefits, a 10 GbE migration also poses challenges that network managers and architects must overcome.

In this video, DeepStorage.net founder and chief scientist Howard Marks discusses the challenges and considerations that go along with a 10 GbE migration. Marks addresses how to cost-efficiently migrate to 10 GbE in the data center, the need to upgrade the data center core to accommodate the migration, and how 10 GbE for the data center differs from the Ethernet implemented in campus networks. In addition, Marks addresses whether 10 GbE switches must be backward compatible with 1 GbE NICs and cabling, and he talks about how to plan a network that can eventually migrate to 40 and/or 100 GbE. Finally, Marks tackles whether the ultimate goal of implementing 10 GbE is to move toward a converged data center and storage network, and what that converged network will actually look like.

Howard Marks is the founder and chief scientist at DeepStorage.net, a networking consultancy. In over 25 years of consulting, he has designed and implemented networks, management systems and Internet strategies at organizations including American Express, JP Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide and Foxwoods Resort Casino. Marks is a frequent speaker at industry conferences and the author of more than 100 articles in technical publications. He has also written the book Networking Windows and co-authored Windows NT Unleashed.   


Read the full transcript from this video below:  

10 Gigabit Ethernet migration: What you need to consider

Michael Brandenburg: Hello. I am Michael Brandenburg, Technical Editor for SearchNetworking.com. Today I am here with Howard Marks, Founder and Chief Scientist of DeepStorage.net. We are here to discuss 10 Gigabit Ethernet. Is it finally more cost-effective to implement 10 Gigabit Ethernet than 1 Gigabit Ethernet in the data center network?

Howard Marks: It depends on your application. For the most typical application now, virtual server hosting, we crossed that threshold about six months ago, because virtual server hosts typically need somewhere between six and ten 1 Gig connections. You need a pair for the user network, a pair for storage, a pair for management, a pair for vMotion or live migration . . . I ran the math for a blog entry a while ago, and today, a top-of-rack 10 Gig switch configuration, where each server has two 10 Gig ports, is less expensive than the equivalent where each server has six 1 Gig ports to a top-of-rack switch that has 48 1 Gig ports and two 10 Gig uplinks.
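To make the shape of that math concrete, here is a minimal Python sketch of the per-rack comparison Marks describes. The rack size and every price in it are illustrative placeholders, not figures from the video or from any vendor; the point is only how the two bills of materials are assembled.

# Back-of-the-envelope comparison of per-rack network cost for a rack of
# virtualization hosts. Every price below is a hypothetical placeholder,
# not a figure from the video or from any vendor price list.

SERVERS_PER_RACK = 20

# Option A: two 10 Gig ports per server into a 10 Gig top-of-rack switch.
PRICE_10G_SWITCH_PORT = 250.0   # assumed cost per 10 Gig switch port
PRICE_10G_NIC_PORT = 200.0      # assumed cost per 10 Gig server NIC port
cost_10g = SERVERS_PER_RACK * 2 * (PRICE_10G_SWITCH_PORT + PRICE_10G_NIC_PORT)

# Option B: six 1 Gig ports per server into 48-port 1 Gig switches,
# each of which needs two 10 Gig uplinks of its own.
PRICE_1G_SWITCH_PORT = 75.0     # assumed cost per 1 Gig switch port
PRICE_1G_NIC_PORT = 40.0        # assumed cost per 1 Gig server NIC port
PRICE_10G_UPLINK = 1000.0       # assumed cost per 10 Gig uplink port
switches_needed = -(-(SERVERS_PER_RACK * 6) // 48)   # ceiling division
cost_1g = (SERVERS_PER_RACK * 6 * (PRICE_1G_SWITCH_PORT + PRICE_1G_NIC_PORT)
           + switches_needed * 2 * PRICE_10G_UPLINK)

print(f"Two 10 Gig ports per server: ${cost_10g:,.0f} per rack")
print(f"Six 1 Gig ports per server:  ${cost_1g:,.0f} per rack")

With the placeholder prices above, the two options land within a few thousand dollars of each other per rack, which is the crossover Marks is describing; plugging in current quotes shows where a given environment actually falls.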

Michael Brandenburg: In a nutshell, what are the top three challenges in implementing a 10 Gigabit Ethernet network?

Howard Marks: First of all, let us understand that 10 Gig Ethernet is today an inside-the-data-center technology; it does not make any sense to run 10 Gig to the desktop. The first challenge is that you have to upgrade the data center core. The Cisco 6509 that you installed five or seven years ago does not have the backplane bandwidth to be able to fan in 10 or 15 top-of-rack 10 Gig switches. The second problem is that 10 Gig Ethernet for the data center is somewhat different from the Ethernet we have always used on the campus, because converged networking with Fibre Channel over Ethernet requires a lossless network and flow control that campus Ethernet does not have -- 10 Gig is not necessarily 10 Gig. The third problem is that while that technology is available in top-of-rack switches today -- you can get data center bridging, the flow-controlled, lossless version of 10 Gig Ethernet, and even Fibre Channel over Ethernet technology at the top of rack -- it still has not quite made it to the core. We are in one of those funny situations where we know what the feature set we want for our next core switch is: We want it to support DCB for flow control and lossless traffic; we want it to support TRILL so that all the spanning tree problems we have always had go away; we want it to support the ability to move security profiles with virtual machine images as they move from host to host. Various vendors have various parts of that in their product lines, but nobody has the device that we know we want to buy. Today, it is a matter of installing 10 Gig at the edge and putting some 10 Gig blades in your existing core to support that, but the core replacement is going to be a 2011 or 2012 purchase.
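The fan-in point is easy to see with a little arithmetic. Below is a minimal sketch; the uplink count per top-of-rack switch and the per-slot fabric figure for the older core chassis are assumptions chosen for illustration, not specifications cited in the video.

# Rough fan-in arithmetic for the core-upgrade question. The uplink count
# and per-slot fabric capacity are illustrative assumptions, not figures
# quoted in the video.

TOR_SWITCHES = 15            # top-of-rack 10 Gig switches hanging off the core
UPLINKS_PER_TOR = 2          # assumed 10 Gig uplinks per top-of-rack switch
UPLINK_GBPS = 10

OLD_CORE_SLOT_GBPS = 40      # assumed per-slot fabric capacity of an older core chassis

demand_gbps = TOR_SWITCHES * UPLINKS_PER_TOR * UPLINK_GBPS
line_rate_ports_per_slot = OLD_CORE_SLOT_GBPS // UPLINK_GBPS

print(f"Aggregate uplink demand from the racks: {demand_gbps} Gbps")
print(f"10 Gig ports one older line-card slot can serve at line rate: {line_rate_ports_per_slot}")

Even before oversubscription is considered, 30 uplinks asking for 300 Gbps of line-rate capacity is far more than a handful of 40 Gbps slots can deliver, which is why the edge can go 10 Gig before the core does.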

Michael Brandenburg: Do 10 Gigabit Ethernet switches have to be backward compatible with 1 Gigabit NICs and cabling?

Howard Marks: The truth is, in the majority they are not. Over the past couple of transitions, when we went from 10 Meg to 100 Meg, and from 100 Meg to 1 Gig, there was a short period where the faster technology was available only with optical interconnects. Then 100BASE-T or 1000BASE-T came out, and we very quickly transitioned to running that higher-performing Ethernet on twisted pair. While there are 10GBASE-T products, the transition from 1000BASE-T to 10GBASE-T is not nearly as clean as the one from 100BASE-T to 1000BASE-T was.

10GBASE-T requires Cat 6A cable, so if someone runs out of the new cables and takes one of the old Cat 5e cables and plugs it in, things are not going to work, and it is going to take hours and hours to troubleshoot and find out that it was the wrong cable. Secondly, the current implementations of the physical-layer chip for 10GBASE-T draw about 2.5 watts per port, so the connection from a server to a switch is going to require 5 watts. The alternative is to use SFP+ cables -- they plug into the sockets where the optical transceivers would go if you were running fiber, but have twinax cable attached, so that SFP+ is just a new kind of connector -- and that draws about half a watt for the whole connection. I do not really see the world going to 10GBASE-T. If you do not have the same connectors, then you cannot have backward compatibility in terms of speed, either. Generally, they are not compatible and will not be, so somewhere in the data center we are going to need some switches that have 10/100/1000 ports and 10 Gig uplinks to act as an aggregation layer for things like iLO or DRAC management cards, the temperature sensors and all of the devices that run slow because that is all they ever need to do.
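Taking the per-port figures Marks cites (roughly 2.5 watts at each end of a 10GBASE-T link versus about half a watt for an entire SFP+ direct-attach connection), a minimal sketch of the per-rack power difference looks like this; the rack size, link count and electricity price are assumptions for illustration.

# Per-rack power comparison using the per-port figures cited in the
# transcript. Link count and electricity price are illustrative assumptions.

LINKS_PER_RACK = 40          # assumed: 20 servers x 2 links each
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10         # assumed electricity price, dollars per kWh

watts_10gbase_t = LINKS_PER_RACK * 2 * 2.5   # ~2.5 W at each end of every link
watts_sfp_plus = LINKS_PER_RACK * 0.5        # ~0.5 W for the whole connection

for name, watts in (("10GBASE-T", watts_10gbase_t), ("SFP+ twinax", watts_sfp_plus)):
    kwh_per_year = watts * HOURS_PER_YEAR / 1000.0
    print(f"{name}: {watts:.0f} W per rack, "
          f"roughly ${kwh_per_year * PRICE_PER_KWH:.0f} per year in electricity")

The gap widens further in practice, since the extra heat also has to be removed by the cooling plant.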

Michael Brandenburg: When choosing 10 Gigabit technology, do you have to keep in mind future migrations to 40 Gig and 100 Gig Ethernet?

Howard Marks: Not really. Today, if you want to get involved in 10 Gig -- and I would recommend at the top of rack that people get involved in 10 Gig today -- there are not any products with 40 Gig or 100 Gig uplinks. For 40 Gig, the chip vendors are all laying out the silicon; the basic designs are planned, but they are not available. For 100 Gig, we have not even decided what the connectors are going to look like yet. It is nice to say, '40 Gig will be available when I need it for that really humongous server, and 100 Gig will be available for my inter-switch links when I need that,' but those technologies are so far down the road that worrying about them now is premature.

Michael Brandenburg: Is the ultimate goal in implementing 10 Gigabit Ethernet to move towards a converged data center and storage network, and what does that converged network look like?

Howard Marks: For some people, the object is to go from eight or 10 cables coming out of the back of every virtual server host down to two, and to converge the storage traffic, which traditionally runs on Fibre Channel, and all the data traffic, which traditionally runs on Ethernet, onto those two links. The problem with that is political: technically, we can run Fibre Channel over Ethernet, and it works fine, but it means that the storage team and the network team, who have traditionally been separate and internal rivals, now have to coordinate and cooperate over those network connections and the switch where those connections terminate. I could very easily see a lot of organizations saying, 'We are going to continue to run Fibre Channel, and we are going to run 10 Gig Ethernet, and we are going to run I/O virtualization on the 10 Gig Ethernet to slice it and dice it and julienne it so that the user network, the management network and the vMotion network are logical rather than physical separations, but we are going to keep the storage network physically separate because the politics are too complicated.'

Michael Brandenburg: Thank you Howard. I appreciate your time.

Howard Marks: My pleasure.
