During a recent reorganization, Juniper Networks integrated its switching, routing and wireless LAN technologies under a single business unit, the Platform Systems Division. Juniper Executive Vice President Rami Rahim heads that division. He recently met with SearchNetworking to discuss the state of Juniper's enterprise switching business, including the company's plans for QFabric, the EX9200, the EX8200 and the rest of the EX switching line.
Juniper recently announced the new EX9200 modular switch for data center and campus cores. What kind of investment strategy do you have for the rest of the EX line of switches moving forward?
Rami Rahim: We're investing heavily in our broad switching portfolio. Now that we have a single business unit that is looking at both the one-tier architecture and the two-tier architecture (QFabric and EX), we're certainly looking at ways of building common building blocks in the future that will be able to satisfy both of those architectures.
What you can expect is a bit of blurring of the lines between the different building blocks and the various architectures we have been promoting over the last few years. [The EX9200] is all about bringing investment protection and programmability into the core of the data center, as well as the core of the campus. In many cases, that core is a unified core. We see enterprises that have a common core for their data center and their campus. The EX9200 -- with its programmability -- is an ideal platform for that application. But there is also a top-of-rack strategy and more streamlined switches for the wiring closets in the campus that we are developing roadmaps for.
Given the launch of the EX9200, what investment protection do you offer to existing EX8200 customers?
Rahim: Think of [the EX8200] as a smaller data center solution that has all of the control plane benefits of Junos. That's very important to the vast majority of our customers. But there are going to be applications, especially applications in a virtualized SDN [software-defined networking] world, where you are going to need programmable data plane functionality. That's where the conversation shifts to the 9200.
The 9200 is about a greater 10/40/100 Gigabit story and data plane programmability. The 8200 continues as something that is very important for smaller data centers. Maybe the programmability aspects are not necessarily there, but the scale you get from having Junos in the control plane is absolutely there.
We work closely with our customers to understand what their requirements are, and we come up with the best solution to address their particular needs. The conversations we have had with our customers and the guidance we have given in positioning the platforms have been straightforward for us.
What's the current roadmap for top-of-rack and closet EX switches?
Rahim: We haven't yet gone public with what our roadmap is for various different products, but certainly we are leveraging the latest, greatest silicon technologies. We are leveraging the Junos operating system in all its capabilities and enhancing the Junos capabilities from the standpoint of analytics and from the standpoint of providing visibility into what's happening in the underlying infrastructure, which I think is so important for running either a non-virtualized or virtualized data center in an SDN context. Improving the power efficiency. Improving the density. All of that is part and parcel of our existing roadmap. We haven't been specific about what's yet to come, but I think we can be before the end of this year.
You mentioned blending one- and two-tier data center architecture. Could you offer examples of how that blending would manifest?
Rahim: Today, with our QFX top-of-rack switches -- the ones that connect into the QFabric system itself -- one of the things we can do is implement what we call Virtual Chassis technology on those switches. Virtual Chassis is something we've been using to develop our two-tier architecture. This enables our customers to take that path to flat [networks] in phases. So you can start with an architecture that includes our QFX [switches] working together using Virtual Chassis. And if you want to go all flat using one tier, you can change the software load and move from that Virtual Chassis technology to a full QFabric Interconnect with the [QFabric] Director.
For all the merits of the QFabric system itself, it is a fundamentally different architecture from the way that people have been building data centers for the last couple of decades. Giving them a more incremental approach to achieve the benefits of a unified, one-tier QFabric architecture, I think, can work to our favor.
When it comes to the data center, there are probably two key things we are focusing on. The first is just around providing simplicity. And as a result of simplicity comes OpEx reduction, just reducing the cost of running the network. That comes from things like providing automation [and] providing a means to collapse layers within a data center; even collapsing layers between the data center and the campus. Fewer layers [and] fewer elements to manage results in less complexity and greater operational efficiency.
The second is automation -- automation from the standpoint of overlay technologies that provide a much easier means of achieving virtualization, and automation from a management standpoint as well.
What we've done with Network Director in the Junos Space management software is provide a single pane of glass for managing a campus network, a wireless network and a data center network.
What are the next steps for investment in QFabric? You have the first generation and the mid-market solution. Are there other areas to develop?
Rahim: QFabric-M, the micro-fabric, has hit a sweet spot. There is still a lot more to be done with respect to the features that QFabric supports. For example, we just introduced some multicast functionality that has turned out to be very useful for our customers. So there is a very strong software roadmap for that product family. Toward the end of the year we can share more.
What kind of demands are you hearing from customers on your EX line?
Rahim: They want greater 10 Gb density, and 40 Gb connectivity certainly is very important. Greater power efficiency. Hooks into our operating system that give them visibility and, in some cases, control, providing competitive advantages for our customers in terms of what they are able to do with their infrastructure. Protocols of various types. We want to work very closely with our partners at VMware to provide the VXLAN tunneling protocols and the hooks that interface into VMware's controller. All of these are part of the roadmap we are executing toward with our top-of-rack and broader switching strategy.
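For readers unfamiliar with the VXLAN tunneling Rahim mentions: VXLAN (RFC 7348) stretches Layer 2 segments across an IP network by wrapping Ethernet frames in UDP, with an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI). This is not Juniper code -- just a minimal illustrative sketch of that header layout in Python (the function names are ours):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348.

    Flags byte: bit 0x08 set means the VNI field is valid;
    all other flag bits and the reserved fields are zero.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08
    # Pack as two 32-bit big-endian words:
    #   word 1: flags (8 bits) followed by 24 reserved bits
    #   word 2: VNI (24 bits) followed by 8 reserved bits
    return struct.pack("!II", flags << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    word1, word2 = struct.unpack("!II", header)
    if not (word1 >> 24) & 0x08:
        raise ValueError("VNI-valid flag not set")
    return word2 >> 8
```

Because the VNI is 24 bits, an overlay can carry roughly 16 million isolated segments, versus the 4,096 available with traditional VLAN tags -- which is why it matters for the virtualized data centers discussed here.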
What's your perspective on the Open Compute Switch project?
Rahim: It's still early but we're supportive. The idea is to come up with a relatively common white box approach to developing some of the switches in data centers. To the extent that we can, I'd be interested in contributing to a project like that.
Do you think it's realistic to envision a day when Open Compute can offer people an opportunity to run Junos or IOS or NX-OS on a white box switch?
Rahim: I certainly have some thoughts, but it's still a little too early to share them because that project is still very new. We still need to engage and understand the direction better. I'll leave it right now as Juniper wants to be very supportive [and] we want to explore different options for us to become a part of that initiative; to contribute to it and influence it in ways that will benefit our customers. And we are open to a variety of different options with respect to the value we can contribute to the effort. In the next few months or so, as the initiative gains more momentum, we can provide more insight into what our strategy is going to be.
I do think there is still going to be a very large part of the customer base that doesn't necessarily want to deal with white boxes, at least for some time to come; [customers] that actually want a more integrated package, because it's not so much a networking problem they are trying to solve, but an overall data center problem. There are so many elements involved, and networking is a relatively small element of this. So it all goes back to our strategy, which is to simplify and reduce friction -- to really make deploying network devices as easy as possible so customers can achieve their overall data center requirements.