What exactly are you getting when you buy OpenFlow hardware? Not all OpenFlow switches are created equal. Even if your vendor explicitly states that its product supports OpenFlow 1.0 or 1.3, that isn't the whole story.
The Open Networking Foundation (ONF) is working with the networking industry, testing labs and equipment-testing companies like Ixia to sort out the variation in OpenFlow hardware. SearchSDN spoke with Mike Haugh, chairman of the Open Networking Foundation's Testing and Interoperability working group and senior manager of market development at Ixia, about the state of OpenFlow support.
There is a lot of inconsistency in OpenFlow hardware among networking vendors. What's your take on this?
Mike Haugh: What we're seeing is essentially three classifications of OpenFlow-enabled switches.
One is primarily a software-based implementation. Open vSwitch is the most prevalent one out there. [In software] you will get the feature set -- meaning it has strong functionality in terms of OpenFlow standard support. However, you are not going to have flow tables in hardware that would equate to strong performance. Usually [software implementations are] at the edge, mapping OpenFlow forwarding through tunnel interfaces [that] are then carried over a legacy network, so it's more of an overlay.
The second category is where they are taking existing routing and switching platforms and adding OpenFlow as just another protocol. You are going to be limited in terms of what features [these devices] support. If you look at OpenFlow, it is primarily a match-action table. [OpenFlow] 1.0 defined 12 fields you could match on in either the header or the port. In OpenFlow 1.3, there are 40 different fields you can match on, so they've greatly extended header support. But only 13 are mandatory. When you take OpenFlow and put it on an existing platform, trying to leverage an existing chipset, you'll have a pretty limited subset of supported features. We've worked with vendors that enabled OpenFlow at Layer 2 only, where they could only act on the Ethernet header with a single VLAN tag. We've worked with vendors that enabled or configured OpenFlow and it had to be in Layer 3 mode, which means you had OpenFlow functionality only in an IP header.
You are going to get a limited feature set and it will only work for certain applications. When you try doing things the OpenFlow specification says it supports -- like an action to rewrite a MAC header -- a lot of existing hardware is not capable of doing that. It might be good for some applications, like just provisioning Layer 2 paths or doing MAC learning. There are typically flow table limitations. Some devices we've seen have had a limit of 1,000 table entries, or maybe up to 4,000 table entries. That's not carrier grade, so the scalability of table entries in hardware is going to limit the applications you can support.
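The constraints described here can be sketched in code. The following Python model is purely illustrative (the field names, action names and capacity constant are assumptions, not any real switch's API): a category-2 device that matches only a few Layer 2 fields, refuses the spec-legal MAC-rewrite action, and caps its table size.

```python
# Hypothetical model of a category-2 switch's OpenFlow table: it accepts
# only a subset of the spec's match fields and actions, and caps table size.
SUPPORTED_MATCH_FIELDS = {"in_port", "eth_src", "eth_dst", "vlan_vid"}  # L2-only mode
SUPPORTED_ACTIONS = {"output", "drop"}   # no MAC-rewrite in hardware
TABLE_CAPACITY = 1000                    # a typical limit cited above

class FlowTable:
    def __init__(self):
        self.entries = []

    def add_flow(self, match, actions):
        """Install a flow, or report why this hardware rejects it."""
        unsupported = set(match) - SUPPORTED_MATCH_FIELDS
        if unsupported:
            return f"rejected: cannot match on {sorted(unsupported)}"
        bad_actions = {name for name, _ in actions} - SUPPORTED_ACTIONS
        if bad_actions:
            return f"rejected: action(s) {sorted(bad_actions)} not in hardware"
        if len(self.entries) >= TABLE_CAPACITY:
            return "rejected: flow table full"
        self.entries.append((match, actions))
        return "installed"

table = FlowTable()
print(table.add_flow({"eth_dst": "aa:bb:cc:dd:ee:ff"}, [("output", 2)]))
# A spec-legal flow that rewrites the source MAC fails on this hardware:
print(table.add_flow({"eth_dst": "aa:bb:cc:dd:ee:ff"},
                     [("set_eth_src", "11:22:33:44:55:66"), ("output", 2)]))
```

The same flow that installs cleanly on a full OpenFlow 1.3 implementation gets rejected here, which is exactly the feature-subset problem an operator hits when "supports OpenFlow" means an existing chipset in Layer 2 mode.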
Although not many exist at this point, [the third class of OpenFlow switches] are more purpose-built for OpenFlow [with] programmable underlying architecture. Intel is showcasing a reference design from technology they acquired with Fulcrum that is essentially a programmable data plane [that supports the] multi-table processing and pipelining [features of OpenFlow 1.3]. The ONF has formed the Chip Advisory Board. They're engaging much more deeply with Marvell, Broadcom and Intel to see if they are going to come out with purpose-built chips that are enabled more for OpenFlow.
How can a testing company like Ixia help the market understand the variation of OpenFlow hardware?
Haugh: We provide equipment manufacturers [with] the ability to differentiate themselves; the ability to showcase what feature set they support and whether they're actually putting table entries into hardware versus performing it in software. We can do things like flow table capacity testing. We can do validation of data plane forwarding rates. We can measure things like proactive and reactive convergence -- how fast the [controller reacts] if you have a link down.
We're helping on the development side in terms of new features. Nearly all of our install base has moved to OpenFlow 1.3, so we're helping them test and showcase their products.
Do any vendors offer switches that fall into that third category of OpenFlow hardware you mentioned?
Haugh: The only one I know of for sure is Intel utilizing that Fulcrum chipset. They are going to be providing that either as a platform or as a chipset to equipment manufacturers. But I think a lot of the equipment manufacturers, while they're moving toward OpenFlow support, have not made major commitments to it. They're looking to ensure the market is there and that carriers and large enterprises are really going to move toward OpenFlow as an underlay as opposed to just overlay, where it's just enabled in software at the edge and tunneled over existing routing and switching. They've been building up teams and expanding [their] product line; however, we really haven't seen that wave of new products that are purpose-built for OpenFlow.
How can vendors get a sense of whether customers are interested in OpenFlow if there aren't any switches on the market that provide them with a taste of what's possible?
Haugh: A number [of] switch companies have trials and tests going on. One switch vendor I talked to said it had over 60 large enterprises currently using OpenFlow in various trials, testing small deployments. I think a lot of enterprises, campus networks and carriers are really kicking the tires on this. They're first looking at earlier implementations of OpenFlow that are not as complex as the full multi-table implementation of OpenFlow 1.3.
Many of them are saying, 'I have problems to solve in my network [that I can solve with OpenFlow 1.0]. Give me the ability to provision Layer 2 paths and enable Layer 2 forwarding and MAC learning with centralized control.' As [vendors] see investment from their customer base and these installations and evaluations progress, they will make a bigger commitment to it.
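That Layer 2 provisioning use case amounts to a central controller computing a path and installing one destination-MAC rule per hop. The sketch below is illustrative only; the topology, rule format and function names are assumptions, not a real controller API.

```python
# Sketch of the OpenFlow 1.0-style use case above: a central controller
# provisions a Layer 2 path by installing one match-on-MAC rule per hop.
def provision_l2_path(dst_mac, hops):
    """hops: list of (switch_id, out_port) along the computed path."""
    rules = []
    for switch, out_port in hops:
        rules.append({
            "switch": switch,
            "match": {"eth_dst": dst_mac},   # one of the 12 OpenFlow 1.0 fields
            "actions": [("output", out_port)],
        })
    return rules

path = [("s1", 3), ("s2", 1), ("s3", 2)]     # s1 -> s2 -> s3, then the host
for rule in provision_l2_path("aa:bb:cc:dd:ee:ff", path):
    print(rule["switch"], "match", rule["match"], "->", rule["actions"])
```

Because every rule matches only on the Ethernet destination, this works even on the limited Layer 2-only implementations discussed earlier, which is why it is a popular first application.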
What the market needs to sort out is [whether customers will] pay for the programmable hardware that is in alignment with [OpenFlow 1.3]. You're going to get a lot of benefits of OpenFlow and SDN [with programmable hardware], but the CapEx [savings are] not going to be there. You may even be charged more, because instead of ASIC- or FPGA-based hardware, you might need something that is more of an investment in the chipset. It's going to vary. A lot of the campus applications and others may not need that full programmability. But as you look for carrier applications and what they're trying to do, these things will require more capabilities at the chipset layer to get the performance they need in mission-critical production networks.
From the beginning people have talked about OpenFlow and SDN leading to commoditization of switching, and yet you say programmable hardware for OpenFlow will not deliver CapEx savings.
Haugh: It won't be commodity-based off the shelf. It's going to be more expensive. That was not part of the original vision. When people were first thinking about OpenFlow, they were just asking, 'How can I just slam entries into a TCAM?' What they were trying to accomplish was very basic, and they were able to provide a mechanism or API to program forwarding table entries. Now, as you get beyond OpenFlow 1.0, you start looking at multi-table pipelining and processing between tables. It can't all be handled in TCAM. Depending on the implementation you want, you're going to be looking at more expensive hardware eventually.
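The multi-table processing referred to here can be illustrated with a toy pipeline: each table either ends processing or hands the packet to a later table via a goto, so the lookup cannot be flattened into a single TCAM match. The table layout and field names below are illustrative, not taken from any real switch.

```python
# Toy OpenFlow 1.3-style pipeline: a packet traverses a sequence of tables;
# a matching entry accumulates actions and can forward the packet to a
# later table ("goto-table"), or end processing (goto = None).
def run_pipeline(tables, packet):
    actions, table_id = [], 0
    while table_id is not None:
        for match, entry_actions, goto in tables[table_id]:
            if all(packet.get(k) == v for k, v in match.items()):
                actions.extend(entry_actions)
                table_id = goto      # None ends the pipeline
                break
        else:
            table_id = None          # table-miss: stop (real switches use
                                     # an explicit table-miss entry)
    return actions

tables = {
    0: [({"eth_type": 0x0800}, [], 1)],                      # classify IPv4 -> table 1
    1: [({"ipv4_dst": "10.0.0.5"}, [("output", 4)], None)],  # forward
}
pkt = {"eth_type": 0x0800, "ipv4_dst": "10.0.0.5"}
print(run_pipeline(tables, pkt))  # → [('output', 4)]
```

Even this two-stage example shows why a single flat TCAM lookup is not enough: the second table's match only applies to packets the first table has already classified.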
If you look at the carriers, they're really getting behind this NFV [network functions virtualization] movement. With NFV you're putting a lot of your functions on x86-based architecture. If you're going to do your DPI [deep packet inspection] and firewalling and load balancing in x86 server environments, [you won't need more expensive hardware]. Maybe you do less processing at the network layer and you just want Layer 2 pipes. So there could be a model where they don't need OpenFlow to have a table for security and a table for load balancing and all these different tables processing traffic. Maybe they just use OpenFlow for provisioning a path -- be it Layer 2 or Layer 3 -- and they tie in or stitch in the services that are running in x86 architectures.