Editor's note: This is the first half of a two-part series on a new open source SDN switch, LINCX. In this portion we explore how LINCX differs from Open vSwitch and why Erlang was the programming language of choice. In part two, we look at a CPU-based data plane and what the differences are between that and an ASIC-based switch.
This past year, Stu Bailey, founder and CTO of network management company Infoblox, led a research team in developing a fully programmable, open source SDN switch that is not network ASIC dependent. The LINCX switch runs on any off-the-shelf Linux or Xen server or on a white box switch.
Bailey claims the switch achieves 80% of the performance of Open vSwitch with a much smaller footprint.
We talked with Bailey about the origins of LINCX, why it's written in the Erlang programming language and how it differs from Open vSwitch.
What are LINC and LINCX?
Stu Bailey: LINC, the first version, was an exploration of soft switches in the data plane around OpenFlow. It was a clean implementation of an OpenFlow soft switch that ran on any kind of server. LINCX is a fast version of that, focused on high-performance CPU-based switching rather than basic switching. It's broadly similar in concept to Open vSwitch.
What is the difference between LINCX and Open vSwitch?
Bailey: We have no intention of competing with Open vSwitch. If you want to do tried-and-true CPU-based software overlay networking, that's clearly there and it works. Our intentions with LINC and LINCX were different. I would categorize Open vSwitch as a soft, CPU-based switch that happens to support OpenFlow, but it also supports legacy things you would expect to get in a dedicated function box. It's full featured … there's learning, firewalls and other capabilities built into the data plane layer, and it's programmable from OpenFlow too. So it's a soft version of a hybrid switch.
We were interested in a pure OpenFlow programmable data plane that had no semantics whatsoever. So you turn on LINCX, and you plug in things, and there's no controller and no flows: it doesn't do anything. It's a piece of wood and that's not the case with Open vSwitch. We also wanted to have a clean implementation of a CPU-based programmable switch like that, because one indicator that the market is ready to adopt CPU-based data planes in general is that there should be several sources in the market. So it would be a good thing to have a couple programmable data planes to play around with at this early stage.
Our model also sits on top of [bare] hypervisors and isn't embedded in Linux … which is one way we get the performance and speed. So specifics and motivations are different. But this is early and there should be [different options]. We hope there are five more of these things from other groups over the years.
You developed LINCX through the Flow Forwarding community. What is that and why did you choose it over a wider community?
Bailey: [FlowForwarding.org] is open source and free and it's [used] specifically to explore CPU-based programmable data planes.
I'm not sure the broader market is ready to consume this kind of technology because of the structure of what I call the hardware-defined networking industry. It's an entire industry centered on the ASIC, non-programmable model of the data plane, where the data plane consists of dedicated function boxes. It's the way the whole industry is structured, the way the channel is structured, the way the vendors organize themselves and their products and how the tech media writes things and categorizes them.
Infoblox … we're pretty focused on large enterprise corporations and most of our 7,000 customers organize their IT teams around dedicated function boxes. From a network perspective … the idea of buying programmable data plane hardware the same way we buy compute today -- I don't buy a word processor, I buy a computer and install the software -- points to a paradigm shift in the structure of the networking industry. When the requirements are increasing for that level of CapEx and OpEx and new functionality is increasing in the mainstream … it's still the very early days. We think it's more important to make sure we understand, and the community understands, what's possible, where the early use cases are and who the early adopters are that are going to flush out specifics that lead to wider market adoption. An open source community seemed like the right place to do that.
Why did you choose Erlang as the programming language for LINCX?
Bailey: I'll say, broadly, Erlang is very well suited to this CPU-based distributed systems model. It came out of Ericsson in the mid '90s, built for telecommunications … and it's gotten more traction in the cloud computing industry. I assumed it was perfect for SDN applications, and now that's my conviction.
It has two properties that are unique. One, it has a rule set … or a pattern matching set compiler, and I don't know of any other open source compiler that can take OpenFlow rules and do exactly what we do in LINCX, which is translate OpenFlow rules into Erlang syntax. If you have 1,000 OpenFlow rules, for example, we bring those into the Erlang compiler, and those get compiled into an optimal pattern matching engine, as opposed to keeping them in tables, which is the typical way. So whether there are 100 or 1,000 rules, matching takes the same time, because you compiled those rules into an engine. Then we load that dynamically at runtime into the pattern matching engine of LINCX, and Erlang supports this dynamic loading.
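To illustrate the idea Bailey describes (this is a hypothetical sketch, not LINCX code; the module, the rule tuples and the field names are all invented here), a handful of OpenFlow-style rules can be written as Erlang function clauses, which the compiler fuses into a single pattern matching engine rather than a rule table walked linearly:

```erlang
%% Hypothetical sketch: OpenFlow-style rules expressed as Erlang
%% function clauses. The Erlang compiler turns all the clauses into
%% one matching engine, so lookup cost does not scale the way a
%% linear scan over a rule table would.
-module(flow_rules).
-export([match/1]).

%% Each clause stands in for one installed flow rule.
match({ipv4, _Src, {10,0,0,1}, _TcpPort})    -> {output, 1};
match({ipv4, {192,168,1,_}, _Dst, _TcpPort}) -> {output, 2};
match({arp, _Src, _Dst, _Port})              -> to_controller;
match(_Other)                                -> drop.
```

When a controller installs new flows, a module like this can be regenerated, compiled with `compile:forms/1` and hot-swapped with `code:load_binary/3`; Erlang's dynamic code loading is what makes that replacement possible at runtime.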
Then, [there is] the bit pattern matching syntax. I don't know of any other language that supports one that's as clear and easy to use. If you want to program general pattern matching on the Ethernet frame, I don't know any easier way to do that than the syntax they developed almost 20 years ago in Erlang. If you imagine what OpenFlow may look like … OpenFlow 2.0, 5.0 … to me, it's evolving toward the Erlang bit pattern syntax. So it seems, for us, to be a no-brainer.
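As a concrete illustration of that bit syntax (a minimal sketch under my own naming; the module and function are invented, not from LINCX), Erlang can destructure an Ethernet header by declaring the bit width of each field directly in the pattern:

```erlang
%% Minimal sketch of Erlang's bit syntax on an Ethernet frame:
%% 48-bit destination MAC, 48-bit source MAC, 16-bit EtherType,
%% then the payload as an arbitrary-length binary.
-module(eth_parse).
-export([ethertype/1]).

ethertype(<<_Dst:48, _Src:48, Type:16, _Payload/binary>>) ->
    Type.
```

Calling `eth_parse:ethertype(Frame)` on a frame whose EtherType is 16#0800 (IPv4) yields 2048, with no manual offset arithmetic or byte shuffling; that directness is the clarity Bailey is pointing at.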