News

Juniper's Project Stratus reinvents data center network architecture

Shamus McGillicuddy

The long-awaited Juniper Networks data center network architecture, codenamed Project Stratus, isn't really a network at all. It is a giant virtual chassis switch that connects all the compute and storage resources in a data center and promises 10 terabits of throughput on a flat network that can include up to 6,000 servers with latency of just 5 microseconds.

To deliver this performance, Juniper has abandoned Ethernet transport between nodes on the network in favor of the company's own proprietary protocol. The technology uses Ethernet only to communicate with servers and storage.

Juniper unveiled its QFabric data center network architecture yesterday, with partners such as IBM, VMware and NetApp and customers such as NYSE Euronext and the Lawrence Berkeley National Laboratory sharing the stage in a webcast event in San Francisco.

Juniper QFabric consists of component devices that are essentially abstracted elements of a physical data center chassis switch. Although individual physical devices comprise this data center network architecture, it's useful to think of them as elements of that chassis switch. The three fundamental components are described below, with a quick capacity sketch following the list:

  • The QFX3500 switch, a high-performance, Layer 2/3, top-of-rack switch with 48 dual-mode Gigabit/10 Gigabit Ethernet ports and four 40 Gb uplinks. In a standard data center network architecture, the QFX3500 acts like any high-performance server access switch. In a QFabric deployment, it acts as a node on the network, similar in functionality to an Ethernet port line card on a chassis switch. This device will be available later this quarter with a list price of $34,000.
  • The QF/Interconnect, a 21 RU, 8-slot chassis with 128 QSFP+ 40 Gb ports, which can support up to 128 QFX3500 switches. The Interconnect functions much like the copper midplane of a chassis switch, moving packets between ingress and egress ports. Only in this case, the ingress and egress ports sit physically on the QFX3500 top-of-rack devices, which connect to the QF/Interconnect chassis via fiber. It is important to note that the fiber uplinks from the QFX3500 switch to the QF/Interconnect are not Ethernet. Instead, they run a proprietary protocol that Juniper identifies only as the Transport Protocol.
  • The QF/Director, a 2 RU x86-based appliance that serves as an out-of-band supervisor engine for the architecture, running management and control services for the overall QFabric data center network architecture. Essentially, the control plane is distributed across all the nodes on the data center network, giving each node full visibility into the rest of the network and simplifying the forwarding decisions each node must make. The QF/Interconnect and the QF/Director will be available in the third quarter of this year. Juniper declined to offer price details.
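Those published numbers line up. As a rough illustration (not a Juniper sizing tool), the following Python sketch works through the arithmetic behind the roughly 6,000-server claim and the edge oversubscription the port counts imply; all figures come from this article, and the assumption that every access port runs at 10 GbE is ours:

    # Back-of-the-envelope QFabric capacity math, using only the figures
    # quoted in this article. Illustrative only, not a Juniper sizing tool.

    QFX3500_ACCESS_PORTS = 48      # dual-mode Gigabit/10 Gigabit Ethernet ports
    QFX3500_UPLINKS = 4            # 40 Gb uplinks toward the QF/Interconnect
    UPLINK_GBPS = 40
    ACCESS_GBPS = 10               # assumption: every access port runs at 10 GbE
    INTERCONNECT_PORTS = 128       # QSFP+ 40 Gb ports, one QFX3500 node per port

    # A fully populated QF/Interconnect supports up to 128 QFX3500 nodes.
    max_server_ports = INTERCONNECT_PORTS * QFX3500_ACCESS_PORTS
    print(max_server_ports)        # 6144 -- consistent with the ~6,000-server claim

    # Oversubscription at the edge: access bandwidth vs. uplink bandwidth.
    access_bw = QFX3500_ACCESS_PORTS * ACCESS_GBPS   # 480 Gbps into the switch
    uplink_bw = QFX3500_UPLINKS * UPLINK_GBPS        # 160 Gbps up to the fabric
    print(access_bw / uplink_bw)   # 3.0 -- a 3:1 oversubscription ratio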

By designing this data center network architecture as a virtual data center chassis switch with the control plane distributed across the entire network, Juniper offers direct, any-to-any, single-hop connectivity between nodes on the network. The ingress QFX3500 switch is the only device that processes an incoming packet; it then forwards the packet immediately to its destination QFX3500 counterpart via the QF/Interconnect chassis, using management services from the QF/Director. This architecture and the proprietary transport protocol are the basis of Juniper's throughput claims. Juniper also claims it can guarantee zero packet loss because individual QFX3500 switches on the QFabric can instruct the source of incoming packets to throttle down when a switch nears its overall capacity.
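Juniper has not disclosed how the Transport Protocol actually signals congestion, so the following is only a generic sketch of the throttling behavior described above, modeled as a simple credit-based scheme; the class and method names are illustrative, not Juniper's:

    # Generic sketch of lossless backpressure: an ingress node may send only
    # when the egress node can grant it a free buffer slot, so traffic
    # throttles down instead of being dropped. How QFabric really does this
    # is undisclosed; this is just the idea in miniature.

    from collections import deque

    class EgressNode:
        def __init__(self, buffer_slots: int):
            self.credits = buffer_slots   # free buffer slots it can advertise
            self.queue = deque()

        def grant_credit(self) -> bool:
            """Hand one free slot to a sender, if any remain."""
            if self.credits > 0:
                self.credits -= 1
                return True
            return False                  # nearing capacity: senders must wait

        def receive(self, packet):
            self.queue.append(packet)

        def drain(self):
            """Deliver a packet to its server and free the buffer slot."""
            if self.queue:
                self.queue.popleft()
                self.credits += 1

    class IngressNode:
        def send(self, packet, egress: EgressNode) -> bool:
            # Send only when the destination has capacity; otherwise hold
            # the packet rather than dropping it.
            if egress.grant_credit():
                egress.receive(packet)
                return True
            return False                  # throttled: zero loss, just delay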

This QFabric approach also makes the network simpler and more scalable. Much as a network engineer would add capacity to a data center switch by sliding another line card into the chassis, a QFabric customer can add capacity by plugging a new QFX3500 into the QF/Interconnect. There is no need to add aggregation switches to support additional access switches; everything plugs into the core QF/Interconnect.
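In code terms, that scaling model looks something like the sketch below: server-facing capacity grows linearly as nodes are plugged in, up to the Interconnect's 128 ports, with no aggregation tier to configure. The port counts come from the article; the QFabric class itself is purely illustrative:

    # Illustrative scaling model for the line-card analogy above.

    class QFabric:
        MAX_NODES = 128                  # one node per Interconnect QSFP+ port

        def __init__(self):
            self.nodes = 0

        def add_qfx3500(self):
            """Like sliding another line card into a chassis switch."""
            if self.nodes >= self.MAX_NODES:
                raise RuntimeError("QF/Interconnect ports exhausted")
            self.nodes += 1

        @property
        def server_ports(self) -> int:
            return self.nodes * 48       # 48 access ports per QFX3500

    fabric = QFabric()
    for _ in range(10):                  # rack by rack, plug in ten nodes
        fabric.add_qfx3500()
    print(fabric.server_ports)           # 480 server-facing ports so far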

Project Stratus/QFabric: A data center network architecture with a secret sauce

When Juniper first announced Project Stratus, it promised a data center architecture that would enable a flat network, consolidating a data center from three switching tiers to one. By creating a giant virtual switch, Juniper appears to have done just that. While most networking vendors are still focused on two-tier network architectures with core and edge Ethernet devices, QFabric runs Ethernet only at the edge, using its faster Transport Protocol upstream from the rack.

"It makes you scratch your head and say, 'What is it?'" said Jody Lemoine, network architect and consultant with Tishco Networks, an Ontario-based networking consultancy. "It's one thing to understand it based on [Juniper's] presentation. It's another thing to show what's under the hood. And they're not revealing what's under the hood."

Specifically, Lemoine would like to see more details about how Juniper makes QFabric work, particularly the proprietary Transport Protocol.

However, overall Juniper has presented a new way of thinking about the data center network, he said. "HP and Cisco are going to look at what Juniper is doing and say, 'Maybe we need to be thinking along those lines.'"

But Juniper's Transport Protocol will remain a closed and proprietary technology at least until it is perfected, said Andy Ingram, vice president of product marketing and business development for Juniper. 

"At this point we have no expectations of opening it up as a standard, the reason being that there are lots of things we can learn over time to make it more efficient," he said. "But the way to look at it is, I can't take my Juniper EX line card and plug it into a Nexus chassis and they can't take a Nexus line card and plug it into an EX chassis. But you can take the [QFX3500] node and interact that with any server or storage device in the data center."

Comparing the QFabric data center network architecture to supercomputing

Brent Draney has been testing the QFabric architecture inside his environment at the Lawrence Berkeley National Laboratory, where he hosts the NERSC Magellan project, a supercomputing facility backed by the Department of Energy. He compared QFabric's performance to that of his supercomputing environment connected via InfiniBand.

"We put it head-to-head with one of our supercomputers," Draney said during Juniper's webcast. "Two of our [supercomputing] benchmarks were exceeded [by QFabric]. Most of [the QFabric] were running at 90% [of InfiniBand]. We're really excited to see how far we can push that. Ethernet clusters usually get a fraction of this performance."

Juniper QFabric a different paradigm for data center network architecture

Juniper's QFabric is loosely comparable to the approach Cisco Systems has taken with its Nexus 2000 fabric extenders, noted Mike Spanbauer, senior analyst with Current Analysis. Cisco's fabric extenders extend the ports of a Nexus 7000 or Nexus 5000 switch to the server rack. The top-of-rack fabric extenders aren't switches; they simply extend the ports of the upstream device to the rack, and network engineers administer each fabric extender through the upstream switch. This enables a flatter network approach, but not quite to the level Juniper is proposing.

"[Cisco] is not able to meet Juniper's latency and density levels with that approach. The other vendors, with their more traditional hierarchical approach, have been architecting and messaging a more flat design for some time, but they are doing it through traditional switching methods by building more powerful boxes," Spanbauer said. "They all have the same design limitations. Even Force10 Networks, with 800 ports on its chassis, is still not sufficient. The moment you go outside the box, latency rockets up. Juniper has addressed this with the ability to separate out the different elements [of a switch.]."

"It's very similar to how a traditional PBX was built, with the separation of the data, control and management planes," he said. "When you are talking about millions of telephone lines you have to scale that way. Juniper has taken another look at that."

Let us know what you think about the story; email Shamus McGillicuddy, News Editor.

