
ONUG 'Great Debate': The pro-software transcript

Check out the pro-software side of the hardware-versus-software faceoff at ONUG's Fall 2015 conference.

At the Open Networking User Group Fall 2015 conference in New York, ONUG co-founder and debate moderator Ernest Lefner posed the question: What will solve the networking industry's biggest problems -- hardware or software innovation? In this transcript of the pro-software side of the debate, Philip Brighten Godfrey, assistant professor in the department of computer science at the University of Illinois at Urbana-Champaign, argued that software is the greatest tool to achieve secure, reliable and agile networks capable of meeting modern connectivity demands. (Arguments have been lightly edited for clarity and concision.)

Philip Brighten Godfrey: The question posed to us is, 'Will software solve networking's biggest problems?' So, what I'd like to start with is what are networking's biggest problems? What are the grand challenges in networking?

While I was in grad school, I went through a midlife crisis a couple of decades early. I was trying to get more excited about networking; I wanted big challenges. If you look at other fields, there are crisp, grand challenges like understanding high-temperature superconductors in physics, or building practical electric vehicles, or, in computer science, resolving the question of whether P equals NP. It didn't seem that we had those crisply defined grand challenges in networking and systems. What is the P versus NP of networking?


I informally surveyed a bunch of colleagues and researchers and got a few different kinds of answers. About a third of the responses were, 'Well, the grand challenge is whatever I'm working on.' A third named vague areas where we need improvement, but without that crisp statement of a problem -- security, reliability and usability were the top three at that time. And about a third of respondents said, 'Well, there isn't really a crisp, grand challenge for networking because it's infrastructure and it needs to support diverse requirements, both across space and across time.' I think that points to a challenge that is kind of a meta-challenge, if you will, one of evolvability. How do we build infrastructure that can adapt and change to new environments?

That was six or seven years ago, but to learn about the challenges of networking in the context of enterprise networks today, I think we could look at the working groups that have been identified by ONUG. And we can overgeneralize these into a few categories of challenges: virtualization, visibility, security and automation.

Let's talk about each one of these categories and think about how we're going to solve these challenges. Are we going to be able to solve them through open APIs, through software and through SDN?

On virtualization: Well, virtualization of networking is explicitly about softening the hardware -- turning the hardware into software as much as possible. I think to do that we do, in fact, need open APIs so we have a definition of what the hardware is, and we can virtualize that and emulate it. We have progress in that direction.
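To make the "open API" point concrete, here is a toy sketch (illustrative names only, not any real product's interface) of a forwarding contract that either a hardware driver or a pure-software emulation could satisfy:

```python
# Toy sketch of the "open API" idea: a forwarding contract that either
# a hardware driver or a pure-software emulation can satisfy. Names
# are illustrative, not from any real product or standard.
import ipaddress
from abc import ABC, abstractmethod
from typing import Dict, Optional


class ForwardingDevice(ABC):
    """The minimal contract a 'switch' must implement."""

    @abstractmethod
    def add_route(self, prefix: str, next_hop: str) -> None: ...

    @abstractmethod
    def lookup(self, dest_ip: str) -> Optional[str]: ...


class SoftSwitch(ForwardingDevice):
    """Software emulation: longest-prefix match over a plain dict."""

    def __init__(self) -> None:
        self.table: Dict[str, str] = {}

    def add_route(self, prefix: str, next_hop: str) -> None:
        self.table[prefix] = next_hop

    def lookup(self, dest_ip: str) -> Optional[str]:
        addr = ipaddress.ip_address(dest_ip)
        best = None
        for prefix, hop in self.table.items():
            net = ipaddress.ip_network(prefix)
            if addr in net and (best is None or net.prefixlen > best[0]):
                best = (net.prefixlen, hop)
        return best[1] if best else None


sw = SoftSwitch()
sw.add_route("10.0.0.0/8", "port1")
sw.add_route("10.1.0.0/16", "port2")
print(sw.lookup("10.1.2.3"))  # port2 -- the more specific prefix wins
```

Because callers depend only on the contract, the software emulation can stand in for the hardware, which is exactly what virtualizing it means.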

On visibility: An interesting thread has come out in the presentations [at the ONUG Fall 2015 conference]: We need visibility not just into one little piece of equipment, but visibility that synthesizes information across layers and across components. This is a classic big data problem: we're bringing together heterogeneous sources of information from different systems -- not only networking, but also compute and storage -- and analyzing them in new ways. Really, the only way we know to solve that challenge is through software.
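As a minimal illustration of that synthesis (with invented event shapes, not any real telemetry format), correlating network and compute events in software can be as simple as a join on host and time window:

```python
# Minimal sketch of cross-layer correlation: join heterogeneous
# telemetry (network and compute) on host and time window.
# Event shapes and values are invented for illustration.
from datetime import datetime, timedelta

net_events = [
    {"host": "web-1", "ts": datetime(2015, 11, 4, 10, 0, 5),
     "detail": "link flap on eth0"},
]
compute_events = [
    {"host": "web-1", "ts": datetime(2015, 11, 4, 10, 0, 7),
     "detail": "p99 latency spike"},
]

WINDOW = timedelta(seconds=10)

# A compute symptom shortly after a network event on the same host is
# a candidate root cause worth surfacing together.
for n in net_events:
    for c in compute_events:
        if c["host"] == n["host"] and timedelta(0) <= c["ts"] - n["ts"] <= WINDOW:
            print(f"{c['host']}: '{c['detail']}' may stem from '{n['detail']}'")
```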

On security: A common thread in the ONUG security working group is that we need to understand the applications and their associated trust needs. We need to be able to move with applications wherever they are. So, if we have to keep up with software applications, I posit that we need security to be as agile as software -- it needs to be software.
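One hedged sketch of what "security as software" can mean in practice: express policy against applications rather than addresses, and regenerate the concrete rules whenever a workload moves. All names here are hypothetical:

```python
# Sketch: policy expressed against applications, not addresses. When a
# workload moves, enforcement rules are recomputed rather than
# hand-edited. All names and addresses are hypothetical.

# App-to-app allow rules: (source app, destination app, port)
policy = [("web", "db", 5432)]

# Where each application currently runs.
placement = {"web": ["10.0.1.5"], "db": ["10.0.2.9"]}


def render_rules(policy, placement):
    """Expand application-level intent into concrete allow rules."""
    rules = []
    for src_app, dst_app, port in policy:
        for s in placement[src_app]:
            for d in placement[dst_app]:
                rules.append(f"allow {s} -> {d}:{port}")
    return rules


print(render_rules(policy, placement))

# The db workload migrates; regenerate instead of editing firewalls by hand.
placement["db"] = ["10.0.7.3"]
print(render_rules(policy, placement))
```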

On automation: This gets at one of the key potentials of software. If we think about the history of networking, we are where we are because the architecture that got us to this point -- network and Internet architecture -- was built in a simpler time, when networks had the simpler job of basic data delivery. Now we have much more complex requirements, and networks themselves have gotten more complex.

In a modern network there are a lot of moving parts, and those moving parts and systems of systems need to be orchestrated. In the past, lacking APIs -- and lacking the human resources to program that infrastructure directly -- we came up with processes that exist at the human level: change management. The problem is that these processes operate at human timescales and might involve tens of people -- or even a hundred-plus -- in making changes to the network. We need to be able to automate that and turn those human processes into software. There is a big opportunity there.
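As a rough illustration, assuming a device that accepts configuration pushes through some API (the steps and checks here are invented, not any real tool's workflow), a change-management process recast as software might look like a validate-apply-verify-rollback pipeline:

```python
# Sketch: a human change-management process recast as software.
# Steps and checks are illustrative, not a real tool's workflow.

def validate(change):
    """Pre-checks a change review board would do by hand."""
    return bool(change.get("device")) and bool(change.get("config"))


def apply_config(change):
    # A real system would push via a device API (NETCONF, REST, ...).
    print(f"pushing to {change['device']}: {change['config']}")
    return True


def verify(change):
    # Post-change health check replaces a human eyeballing dashboards.
    print(f"verifying {change['device']} still forwards correctly")
    return True


def execute(change):
    if not validate(change):
        return "rejected"
    snapshot = f"backup-of-{change['device']}"  # saved for rollback
    if apply_config(change) and verify(change):
        return "committed"
    print(f"restoring {snapshot}")
    return "rolled back"


print(execute({"device": "core-sw-1", "config": "vlan 42"}))
```

The point is that every step a human process performs at human timescales becomes a function that runs in milliseconds and leaves an audit trail.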

Walking through each of these challenges, my conclusion is: Software is our only hope for addressing the biggest networking challenges, so it better work. I think the good news is that we are actually seeing some success, first in software-defined data centers -- both in the Facebooks and Googles of the world -- and also in products that are helping companies build private clouds.

This success has been a virtualization win -- it's something we could not have done without APIs and software. It also represents an automation win: Facebook, for example, is able to support 20,000 servers per engineer, which is roughly two orders of magnitude more efficient than traditional IT in terms of human resources. And I'd argue this is actually an evolvability win.

Why is it an evolvability win? One of the key needs that drove us to software-defined data centers, and virtualization specifically, is to be able to take existing applications that depend on certain properties of Layer 2 or multicast or addressing, and bring them over to a new infrastructure. So, in a scoped environment, it's helping us evolve the infrastructure to a very different architecture than some of these applications were designed for.
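The mechanism that typically makes this possible is an overlay: the legacy Layer 2 frame is carried inside a modern Layer 3 packet. A minimal sketch, using the VXLAN header layout from RFC 7348 and a fabricated inner frame:

```python
# Sketch of the overlay idea behind this kind of evolvability: wrap a
# legacy Layer 2 frame in a VXLAN header so it can ride over a modern
# L3 fabric. Header layout follows RFC 7348; the inner frame bytes are
# fabricated for illustration.
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the 'I' bit: VNI field is valid


def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header: flags, reserved, 24-bit VNI, reserved."""
    header = struct.pack("!BxxxI", VXLAN_FLAG_VNI_VALID, vni << 8)
    return header + inner_frame


legacy_frame = b"\xff" * 12 + b"\x08\x00" + b"payload"  # fake Ethernet frame
packet = vxlan_encap(5001, legacy_frame)
print(len(packet), packet[:8].hex())  # inner frame survives unchanged
```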

Second, we are seeing success in WAN traffic engineering. Both Google and Microsoft have described how they've been able to make their wide area network infrastructure very efficient, driving utilization to 100% with 0% packet loss on the traffic that cannot tolerate loss. That's an efficiency win, an automation win and a visibility win. To make that happen, they synthesize information about applications: they can see what applications are running across multiple data centers, where they are, what their performance needs are and how they are affecting the network.
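A toy version of the underlying idea (all numbers invented): protect the loss-sensitive classes first, then let bulk traffic soak up whatever capacity remains, which is what drives utilization toward 100% without hurting critical traffic:

```python
# Sketch of priority-based WAN bandwidth allocation: serve
# loss-sensitive classes first, then let bulk transfers absorb the
# residual capacity. Demands and capacities are invented numbers.

LINK_CAPACITY = 100.0  # Gbps

# (name, demand in Gbps, priority) -- lower number = more critical
demands = [
    ("interactive", 30.0, 0),
    ("replication", 50.0, 1),
    ("bulk-backup", 80.0, 2),
]

remaining = LINK_CAPACITY
for name, demand, _prio in sorted(demands, key=lambda d: d[2]):
    granted = min(demand, remaining)  # bulk is rate-limited, not dropped
    remaining -= granted
    print(f"{name}: granted {granted} of {demand} Gbps")

used = LINK_CAPACITY - remaining
print(f"link utilization: {100 * used / LINK_CAPACITY:.0f}%")
```

In this run the critical classes get their full demand, the backup class is throttled to the leftover 20 Gbps, and the link still runs at 100% utilization.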

At the same time, there are open questions we can discuss. First is teasing out, 'What are the killer apps?' What I just described are some wins, but they're scoped wins. And if we think about the broader enterprise infrastructure, there are much broader needs. I don't think we're going to have a single SDN controller that solves every problem, or even a single SDN programming language that solves every problem.

Second is reliability. Software may be less reliable, but I think we're actually going to end up with another win here, because with defined APIs we'll have a much easier time testing and verifying -- even, potentially, mathematically verifying -- some of these systems. We will need to define what those APIs are for modern control policy and orchestration. That raises questions of performance, and I think the question is: What's going to be the boundary between software and hardware, and how little hardware can we get away with?
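As a small sketch of why defined APIs make verification tractable (topology and tables invented for illustration): once forwarding state is just data behind an API, a property like reachability can be checked mechanically:

```python
# Sketch: once forwarding state is exposed through a defined API, a
# property like reachability can be checked mechanically. The topology
# and tables here are invented for illustration.

# forwarding[node][destination prefix] = next-hop node
forwarding = {
    "A": {"10.0.0.0/24": "B"},
    "B": {"10.0.0.0/24": "C"},
    "C": {"10.0.0.0/24": "C"},  # C delivers locally
}


def reaches(src: str, dst: str, prefix: str, max_hops: int = 16) -> bool:
    """Walk next hops, detecting delivery, black holes and loops."""
    node = src
    for _ in range(max_hops):
        if node == dst:
            return True
        nxt = forwarding.get(node, {}).get(prefix)
        if nxt is None or nxt == node:
            return False  # black hole, or stuck at a non-destination
        node = nxt
    return False  # hop budget exhausted: loop suspected


print(reaches("A", "C", "10.0.0.0/24"))  # True
```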

So, the takeaway is: I think our only hope is a software-defined world. Fortunately, we know by example that that future can be bright. There is light there. And a particularly interesting question that we will discuss today, I think, but also in the coming years, is how narrow a role for hardware we can get away with.

