Interop 2011: Juniper data center fabric changes the network paradigm

At Interop Las Vegas 2011, Juniper data center chief David Yen will explain the six elements of a real network fabric (and how some networking vendors offer imposters).

David Yen, Executive Vice President of the Fabric and Switching Group at Juniper Networks

With all the jabber in the market around data center network fabrics (every major vendor has launched one in the last few months), networking pros are left to wonder: What exactly is in a real data center network fabric? The Juniper data center strategy, QFabric, just may answer that question.

David Yen, executive vice president of the Fabric and Switching Group at Juniper Networks, will use his keynote presentation at Interop Las Vegas 2011 to explain the six elements that a data center network fabric must include (think nonblocking, flat networks; low latency; low jitter; and virtualization support). Yen sat down with SearchNetworking.com to tell us what Interop attendees can expect to hear from Juniper this year.

Juniper's reach into the enterprise market has grown tremendously over the last several years. How has the company's approach to Interop changed as a result?

David Yen: It's kind of natural. Our EX switching product portfolio is growing. Our business is growing. Our market share is growing. That certainly has enabled us to interact more with conferences like Interop. We would definitely like to participate fully, and the fact that this year I will have airtime there is certainly very exciting for me.

What will you discuss during your Interop keynote?

Yen: I wish I had more time in that keynote slot. As a result, rather than look at how QFabric fixes all the problems of the legacy, multi-tiered, tree-like structure, as we did at the February [QFabric] launch, I basically want to show how QFabric provides such a simple, any-to-any, scalable, logical switch with extremely low latency. It actually will transform cloud computing data centers because it dramatically simplifies the interconnect among all the compute and storage resources. And with the low latency, it's going to run every application much faster than data centers have been running them today. That actually has significant implications for resource deployment, or maybe even for how the compute and storage devices themselves will ultimately be built for future cloud data centers.

Everyone seems to have a different story to tell about new data center network fabrics. What is your definition of a fabric, and how should networking pros sort through the hype?

Yen: The word ‘fabric’ right now is very, very abused. Some vendors are just trying to apply what we consider patches to solve some of the challenges in today's data centers, which are built on the legacy, multi-tiered tree structure. For example, data centers want multipath everywhere for high availability. And yet redundant paths cause loops, which are highly undesirable in a switching environment. So to defeat some of the switching algorithms and eliminate loops, you apply additional patches. Or you try to reduce the amount of complexity. So some vendors focus on just a layer of management software to make it look like it's easier to manage, but underneath it's still the legacy architecture. Some other vendors, who don't really have much, talk about their vision to join the party, or they deliberately spread a lot of messages just to dilute the focus on what a true data center fabric should be. It does lead to a lot of confusion on the customer side.
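Yen's point about loops and patches is easiest to see with a toy example. The sketch below is a generic spanning-tree illustration in Python, with a hypothetical two-core topology, not any vendor's actual protocol: loop prevention in a legacy Layer 2 tree blocks all but one path to each switch, leaving paid-for capacity idle until something fails.

```python
# Toy illustration of the loop problem in a legacy Layer 2 tree (a generic
# spanning-tree sketch, not any vendor's actual protocol): redundant links
# create loops, so all but one path to each switch gets blocked.
from collections import deque

# Hypothetical topology: two cores, two aggregation switches, one access switch.
links = {
    ("core1", "agg1"), ("core1", "agg2"),
    ("core2", "agg1"), ("core2", "agg2"),   # redundant paths -> loops
    ("agg1", "access1"), ("agg2", "access1"),
}

def spanning_tree(root, links):
    """BFS from the root, keeping only the first link that reaches each switch."""
    seen, active, queue = {root}, set(), deque([root])
    while queue:
        node = queue.popleft()
        for a, b in links:
            other = b if node == a else a if node == b else None
            if other and other not in seen:
                seen.add(other)
                active.add((a, b))
                queue.append(other)
    return active

forwarding = spanning_tree("core1", links)
print("forwarding:", sorted(forwarding))
print("blocked:   ", sorted(links - forwarding))  # bought bandwidth, carrying nothing
```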

From the outset, when [Juniper] started to think about what kind of data center network we should build… we started thinking [about] what would be the right architecture to build, versus starting from the legacy, multi-tiered, tree structure.

In our opinion, in order to qualify as a data center fabric, it has to provide any-to-any connectivity with fairness and nonblocking behavior. And this is not just static behavior; it also includes the ability to absorb all the dynamic changes in the traffic rate. Furthermore, it must ensure in all cases that the full bandwidth of a target interface is shared equally by all contending interfaces trying to feed that particular destination, even if the egress interface is congested because too much traffic is trying to fill that limited bandwidth. And that congestion should not affect ingress ports that are sending traffic to non-congested destinations (a minimal sketch of this isolation property follows the list below). That's the first criterion.

  • The second criterion is that it has to provide low latency and low jitter. [For] small and medium data centers, the latency should be down to a few microseconds. And even in large data centers it should come down to a few tens of microseconds. Why microseconds? Because when you have a cable, by the laws of physics, with nothing in between, 100 meters takes half a microsecond (the arithmetic is sketched after this list). So when you provide a data center network or data center fabric, you literally need to provide near-light-speed latency. That's as good as you can provide. Not just low latency but also low jitter. Jitter is the instantaneous variation of latency. In today's computer-based equity trading, whenever you send a message to multiple subscribers, you want to make sure they all receive it at pretty much the same time. If the latency varies a lot from one source to different destinations, then there is obviously unfairness involved in that kind of competition.
  • The third requirement is that such a data center network should drop no packets under any congestion situation. This is not trivial, but losing packets is absolutely unacceptable, particularly in today's mission-critical environments like financial services data centers.
  • The fourth is linear cost and power scaling. The cost and power consumption of the data center network must increase linearly with the number of interfaces, not exponentially as the legacy architecture imposes. This is extremely important for cloud computing data centers, where you are talking about a lot of ports.
  • The fifth requirement is that such a data center fabric needs to support virtualization and services. More and more cloud computing data centers are being virtualized for ease of resource allocation and elasticity. The data center fabric obviously has to support that.
  • The sixth requirement is that it has to have a modular, distributed implementation that is highly reliable and scalable. Because it's modular, it can easily scale and the customer can build incrementally. And you can build in resilience, so the fabric can still function while some parts fail or are being serviced.
  • And finally, the data center fabric needs to behave and perform like a single logical device. This is for ease of manageability, which translates directly to operational cost. Any vendor that claims its data center network is a fabric really needs to meet a higher standard than what is loosely being talked about in today's marketplace.
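As a rough illustration of the first criterion, here is a minimal sketch, assuming a simple equal-split model rather than Juniper's actual scheduling algorithm: ingress ports contending for a congested egress port split its bandwidth equally, while ingress ports feeding uncongested destinations are unaffected.

```python
# Minimal sketch (a hypothetical model, not Juniper's scheduling algorithm)
# of fairness with congestion isolation: each congested egress port's
# bandwidth is split equally among the ingress ports contending for it,
# while traffic to uncongested egress ports is untouched.
EGRESS_CAPACITY_GBPS = 10.0  # assumed per-port capacity

def allocate(demands):
    """demands: list of (ingress, egress, gbps) flows -> {(ingress, egress): gbps}."""
    by_egress = {}
    for ingress, egress, gbps in demands:
        by_egress.setdefault(egress, []).append((ingress, gbps))
    alloc = {}
    for egress, flows in by_egress.items():
        share = EGRESS_CAPACITY_GBPS / len(flows)  # equal split among contenders
        for ingress, gbps in flows:
            alloc[(ingress, egress)] = min(gbps, share)
    return alloc

# Ports A and B oversubscribe egress X; port C's traffic to Y is unaffected.
print(allocate([("A", "X", 10.0), ("B", "X", 10.0), ("C", "Y", 4.0)]))
# -> {('A', 'X'): 5.0, ('B', 'X'): 5.0, ('C', 'Y'): 4.0}
```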
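And the latency arithmetic behind the second criterion works out as follows; the only assumption is the textbook propagation speed of roughly two-thirds the speed of light in fiber.

```python
# The physics behind the microsecond targets: signals in fiber propagate at
# roughly two-thirds the speed of light, about 2e8 m/s, so 100 meters of
# cable costs about half a microsecond one way, with nothing in between.
SPEED_IN_FIBER_M_PER_S = 2.0e8  # assumption: ~2/3 of c in vacuum

def propagation_delay_us(distance_m: float) -> float:
    """One-way propagation delay in microseconds for a given cable run."""
    return distance_m / SPEED_IN_FIBER_M_PER_S * 1e6

print(propagation_delay_us(100))    # 0.5 us -- Yen's half-microsecond figure
print(propagation_delay_us(2_000))  # 10.0 us -- cabling scale of a large data center
```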

Those are a lot of variables. How do you expect networking professionals to distinguish among the various visions that are being articulated to the market right now?

Yen: You are right, they vary. For small data centers, where the pain of the legacy architecture is not high, where you only have a couple hundred servers and a few tens of storage boxes, an efficient two-tier architecture, even though it's legacy, keeps the overall latency relatively low … in those cases, they probably won't care too much. In those places, Juniper has our very successful general-purpose EX product line, and the customer can use that.

But when the data center starts to become bigger, where you have thousands, if not tens of thousands, of compute and storage devices, then you start to care much more. The pain, the cost, the power, the complexity and the latency all add up. But correspondingly, you can also understand that [enterprises with those large] data centers typically have very knowledgeable and well-trained IT professionals, and they can tell the differences. When there are no choices, they have to utilize some patch-up solution. But when there are options available that fundamentally solve the scaling problems, the power, the complexity and latency issues, and give them all the flexibility to virtualize and to share all these resources for various applications, they can tell.

Virtualization and cloud computing have dominated Interop the last couple years. Do you see any new trends emerging this year?

Yen: The trend, and this is not limited to Interop, in the last couple of years has been to focus on the advantages of this new paradigm: with sharing and dynamic resource allocation, why should people do cloud computing? But with all that momentum, obviously many people, particularly the early adopters, are ready to go. So you will see more tangible talk about just how you do it. I expect the interface and virtualization vendors to talk about how to do it, and for infrastructure vendors, including Juniper, [there will be talk of] what kind of data center network should be adopted to build such a cloud computing data center.

To learn more about Interop, view our 2011 Interop Las Vegas conference page.
