Juniper QFabric is the winner of SearchNetworking's April Network Innovation Award. QFabric architecture flattens the network, providing any-to-any connectivity between all nodes and managing thousands of ports across the network fabric as if they were one switch.
Juniper Networks started talking flat networks and any-to-any connectivity in the data center long before the competition. In fact, Juniper began promoting the idea of a data center network fabric even before the company officially released its QFabric architecture.
Unfortunately, all that talk drew criticism that the company wasn't releasing the technology soon enough. But since Juniper unveiled QFabric last year, the technology has received rave reviews. On its most recent earnings call, Juniper announced its first round of QFabric enterprise customers. Now the company is working on small-scale and large-scale fabrics that it can sell to midsized and large enterprises.
QFabric aims to extend basic switch architecture to the entire data center network, meaning there is direct connection between every port and every node on the network. More importantly, all these ports can be managed as if they were one giant switch.
For these advancements in data center network fabric, Juniper wins the SearchNetworking.com Network Innovation Award. In this Q&A, Dhritiman Dasgupta, senior director of product marketing, explains how QFabric addresses the challenges of the traditional data center network.
What traditional data center network challenges spurred the development of QFabric?
Dhritiman Dasgupta: In the data center there has been a perfect storm brewing. First, there is more and more data being produced, which leads to bigger data centers and the desire to process this data faster. Specifically, link speeds at the server level are going from 1 Gb to 10 Gb, and we're looking at 40 and 100 Gb, so the network needs to follow.
Secondly, there is a strong desire to have storage traffic, specifically Fibre Channel traffic, converged onto Ethernet networks. And when you talk about having multiple networks, the storage network is only one of them. There are other networks for production, management and virtualization. Data center managers want to collapse all of these networks into one converged network without introducing new levels of risk. That would simplify operations and reduce the cost of running the data center.
The third [issue] is that the data center network architecture needed to evolve. The folks who built service provider, branch and campus networks in the past asked, 'How do I connect people?' But about 10 years ago they started having to build out these data centers that were about connecting machines. Not knowing better, these designers replicated the same tree structure architecture, which had been predominant in campus networks.
How does QFabric address these issues?
Dasgupta: The whole QFabric project started about five years ago, incubated by company founder Pradeep [Sindhu] and a bunch of senior architects. For the first six months they tried to do what we see from other vendors today, which is to keep the same building blocks and rewrite some protocols. But then they realized it was never going to scale to the needs of the data center today or in the next decade. They realized they needed to completely rethink the architecture.
An ideal fabric would provide direct connection between servers, storage, routers and firewalls, where every resource would literally be one hop away from every other. [In this kind of fabric] no matter where a virtual machine is moving within the data center, it's always one hop away because [every resource] is equidistant.
In any switch ever built, there is a fabric. It has four fundamental characteristics: It is flat, so there is no tiered architecture; every port on every switch is directly connected to every other port; you don't manage each port on a single switch, you manage them as a single entity; and finally there is a shared state, which means all ports follow global tables.
What the architects found was that these are the same characteristics needed in the modern-day data center. The problem was [determining] how to take the architecture from inside a single switch and scale it to the entire data center. That is the problem that QFabric solves. QFabric is one large, flat fabric that makes every point in the data center equidistant to every other point, and at the same time you still manage it as simply as you would a single Ethernet switch.
What are the components of a QFabric architecture, and is this considered a form of mesh networking?
Dasgupta: There is a top-of-rack switch with 48 10-Gb server-facing ports. Each of these top-of-rack switches has 40 Gb uplinks that connect to a [QFabric Interconnect] box. We call that the aggregation layer, but with a traditional aggregation or core switch, you would still need to do Ethernet processing. We don't have to do that because the box interconnects all of the different access switches that reside along the rim of the architecture. However, it's not a fully meshed network. We still have the traditional design of having some downward-facing ports, which go to servers, while the uplinks connect to the interconnect. We do very little Ethernet processing at those interconnects.
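The topology described above can be sketched in a few lines of Python. This is an illustrative model, not Juniper's implementation: node and interconnect counts are assumptions, and the point is simply that when every edge node uplinks to every interconnect, any two server ports are at most one fabric traversal apart.

```python
# Hypothetical sketch of the edge/interconnect topology described above:
# top-of-rack "node" switches with 48 server-facing 10 Gb ports, each
# uplinked at 40 Gb to every interconnect. Counts are illustrative.

from itertools import combinations

NUM_NODES = 4          # top-of-rack nodes (illustrative)
NUM_INTERCONNECTS = 2  # interconnect chassis (illustrative)
PORTS_PER_NODE = 48    # 10 Gb server-facing ports per node

# Every node uplinks to every interconnect.
uplinks = {(n, i) for n in range(NUM_NODES) for i in range(NUM_INTERCONNECTS)}

def fabric_hops(node_a, node_b):
    """Interconnect traversals between server ports on two nodes."""
    if node_a == node_b:
        return 0  # switched locally on the same node
    # Any interconnect shared by both nodes gives a single-hop path.
    shared = any((node_a, i) in uplinks and (node_b, i) in uplinks
                 for i in range(NUM_INTERCONNECTS))
    return 1 if shared else None

# Every pair of distinct nodes is exactly one fabric hop apart, i.e.
# every server port is equidistant from every other.
assert all(fabric_hops(a, b) == 1
           for a, b in combinations(range(NUM_NODES), 2))
print(f"{NUM_NODES * PORTS_PER_NODE} server ports, all equidistant")
```

Because the interconnect does minimal Ethernet processing, the model treats it as a pure crossing point rather than another switching tier.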
The third component is called QFabric Director. The Director externalizes the whole control plane. If you look at previous implementations [of any network], the link between any two switches carries not only the data traffic, but also control traffic. We thought that was making network design vulnerable, so the QFabric Director provides the control and management services for the fabric. Every node and every interconnect has an out-of-band connection to the Director. The Director [prevents] having to manage each of the switches individually within the QFabric, and instead gives us the ability to manage more than 6,000 individual ports as a single Ethernet switch.
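The Director model can be sketched as a single management endpoint that pushes shared state to every node over its out-of-band channel. This is a hedged, illustrative sketch under assumed names (`Director`, `apply`, the node names); it only demonstrates the single-point-of-management idea, not Juniper's actual software.

```python
# Illustrative sketch (not Juniper's implementation) of the Director model:
# one management endpoint holds the shared state and fans it out to every
# registered node over an out-of-band link, so operators configure the
# whole fabric as if it were one switch. All names are hypothetical.

class Director:
    """Hypothetical single point of management for all fabric nodes."""

    def __init__(self):
        self.nodes = {}          # node name -> per-node config store
        self.global_tables = {}  # shared state that all ports follow

    def register(self, node_name):
        # Each node/interconnect gets an out-of-band management channel.
        self.nodes[node_name] = {}

    def apply(self, key, value):
        # One operator action updates the shared table, then fans out
        # to every registered node.
        self.global_tables[key] = value
        for cfg in self.nodes.values():
            cfg[key] = value

director = Director()
for name in ("node-1", "node-2", "interconnect-1"):
    director.register(name)

director.apply("vlan.100", "tenant-a")  # one change, fabric-wide effect
assert all(cfg["vlan.100"] == "tenant-a"
           for cfg in director.nodes.values())
```

The key design point the sketch captures is the separation of control traffic from data traffic: configuration flows only over the out-of-band links to the Director, never over the inter-switch data paths.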
There is a lot of talk now about using software-defined networking controllers to granularly manage networks in this way. Is it possible to get as granular using the Director?
Dasgupta: QFabric is an integrated software-defined network itself. It is the software running on the Director that tells each of the components what to do. SDN and OpenFlow take a similar approach. Having said that, there is a desire to control the behavior of the switches in the network with OpenFlow controllers, and that is absolutely possible with QFabric. With the Director already acting as a single point of communication, we have the ideal platform to implement OpenFlow on top of it. We announced last October that we will support OpenFlow on our routers and switches, and it will be implemented in QFabric. We will not open up interfaces from individual pieces of QFabric. The Director will be the aggregation point.
QFabric is not a TRILL-based system. Can you explain how multipathing is carried out?
Dasgupta: We've had Layer 2 and Layer 3 multipath routing for the past two decades or more, but when some vendors tried to implement a fabric-based solution for data centers, they limited it to Layer 2 only. That's exactly what you get with Transparent Interconnection of Lots of Links (TRILL); it's still a Layer 2 protocol. Again, QFabric has similarities with a single switch. If you think about the network inside a single switch, multipathing has been happening there for the last 20 years. If a packet needs to go from Port 1 to Port 45, the switch leverages all possible traces inside the fabric to get the packet to Port 45, almost reconstructing it on the fly, and then sends it out. That's what we've done with QFabric. QFabric looks at all the possible paths to take a packet from one end of the data center to the other, but doesn't limit this to Layer 2 only.
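The "use all possible paths" behavior described above resembles equal-cost multipath flow hashing. The sketch below is an assumption-laden illustration of that general technique, not QFabric's actual forwarding logic: each flow's 5-tuple is hashed to pick one of the available fabric paths, so traffic spreads across every path while packets of any one flow stay in order.

```python
# Hedged sketch of flow-based multipathing: hash each flow's 5-tuple and
# deterministically map it to one of the equal-cost fabric paths. Path
# names and the hash choice are illustrative, not Juniper's design.

import hashlib

PATHS = ["interconnect-1", "interconnect-2",
         "interconnect-3", "interconnect-4"]

def pick_path(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Spread flows across all fabric paths, keeping each flow on one path."""
    flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(flow).digest()[:8], "big")
    return PATHS[digest % len(PATHS)]

# Packets of the same flow always take the same path (in-order delivery)...
assert pick_path("10.0.0.1", "10.0.1.9", 49152, 443) == \
       pick_path("10.0.0.1", "10.0.1.9", 49152, 443)

# ...while many distinct flows spread across the available paths.
used = {pick_path("10.0.0.1", f"10.0.1.{i}", 49152, 443)
        for i in range(200)}
print(f"paths used by 200 flows: {len(used)} of {len(PATHS)}")
```

Hashing on the full 5-tuple, rather than Layer 2 addresses alone, is one way to read the point that QFabric's multipathing is not limited to Layer 2.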
Is there concern about a lack of industry-wide standardization and interoperability with multipathing strategies?
Dasgupta: Not that we are hearing from our customers. Everybody talks about their own version of fabric, and unfortunately there are no standards today. But these standards take a while. Even TRILL is not completely ratified. This is how other technologies have evolved, too; you would find different vendors focused on solving their customers' problems.
I don't foresee a world where someone can take a Cisco line card and stick it into a Juniper modular chassis and expect it to work. This is the same way we don't expect the technology in QFabric to be introduced into somebody else's solution.
Can QFabric be integrated into an existing data center network?
Dasgupta: Absolutely. No one is going to wake up one day and throw their existing data center network out the door. The top-of-rack switch can function as a standalone Ethernet switch. Some customers bring in a QFX3500 switch at the top of rack, connect it to servers and -- either using the 40 Gb uplinks or split out into four 10 Gb [uplinks] -- connect to whatever they have in the aggregation or the core part of the network.
After a while [they find] there are no products out there that have 40 Gb interfaces on a modular system, and they end up with a high density of 10 Gb ports on the aggregation chassis. At this point, customers can buy another core switch and keep connecting the uplinks from the 3500, or they can buy the interconnect chassis from us. Once they buy the interconnect chassis, a simple software configuration change lets all of those switches be managed as one. That's exactly how we see our customers bringing QFabric into the data center.