Discussing software-defined networking and network programmability in general is becoming more of a challenge -- not because there's nothing to say. Quite the contrary, in fact. And even though another round of vendor hype has kicked off like the annual flu, a number of comment-worthy developments actually happened last year.
The real deal is that networking, software-defined networking (SDN), traditional data center virtualization and even physical hosting below the hypervisor are all collapsing into a giant virtualized hairball. Previously clean margins and clear demarcation lines are blurring, and for IT admins, that creates new challenges. It seems increasingly incorrect to treat SDN as a stand-alone topic.
Open source quantum phenomena
Docker, the current darling of container-based virtualization, drove more standardization in SDN and application portability last year than just about anything that came before it. It's not tied up in committee, quietly wasting away like OpenDaylight. Docker is a pragmatic solution that, even with some rough edges, is good enough. Most important, it's driven from the application/VM side of the IT universe, which means it's far more likely to see progress than it would incubating under networking.
Now, before you flame me on Twitter, let me qualify that position. I'll start with the casual observation that application engineers have been getting the new toys first for decades. It's not a failure of networking or network admins. It's that the revenue is in services, but service delivery is viewed as a cost center. Personally, I think that's an unenlightened assessment that businesses need to revisit. And if you think of SDN as the gateway drug to a glorious intertwined network and application configuration future, Docker's ecosystem is really enticing.
A great example of usable technology -- not just tantalizing slideware for network admins -- is the CoreOS team's Rudder, the networking glue between CoreOS and Docker containers. Think of it as an open partial implementation of VMware NSX, where every server gets its own subnet with UDP traffic tunneling. It's a virtual topology overlay that allows reconfiguration of the virtual network without reconfiguring the physical network. And in keeping with the "good enough, let's get on with it" approach to standards, it also plugs into Google's Kubernetes Docker cluster manager. It was created for one reason -- to grease the skids for Docker adoption by getting SDN moving.
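The per-host subnet model is simple to illustrate. Here's a minimal sketch in plain Python -- not Rudder's actual code, and the address ranges are hypothetical -- showing how one cluster-wide overlay network gets carved into a subnet lease per server:

```python
import ipaddress

# Hedged sketch of the per-host subnet model (not Rudder's implementation):
# one cluster-wide overlay range, one smaller subnet leased per host.
cluster = ipaddress.ip_network("10.1.0.0/16")  # hypothetical overlay range
leases = list(cluster.subnets(new_prefix=24))  # 256 possible /24 leases

# Each server joining the overlay takes the next free /24; its containers
# get addresses from that lease, and cross-host traffic is tunneled over
# UDP -- the physical network below never has to be reconfigured.
host_subnets = {f"host-{i}": str(leases[i]) for i in range(3)}
print(host_subnets)
```

The point of the design is exactly what the column describes: all the topology lives in the overlay, so adding a host or moving a container is a virtual-network change, not a physical one.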
When services are concentrated and compressed beyond a certain threshold, once-immutable structures as significant as physical firewalls collapse into singularity. Multiple previously atomic management disciplines fuse into macro structures responsive only to configuration policy. It becomes bigger than any one vendor. And open standards, even if fluid, a bit rough or led by applications, not networking, are the only way forward.
SDN and application self-determination
Software, and specifically applications, have always driven hardware procurement, never the other way around. Did HP, Dell or IBM ever sell you a rack of servers and then you decided what to put on them? Of course not. Now, replace "hardware" with the words "resource queue," or more succinctly "hypervisor," and consider that reality again. Applications have always driven VM platforms, not the other way around.
Proprietary VM infrastructure vendors might like to think admins are happy with the VM layer, but the reality is that it could operate on the principle of magical unicorn gas, and as admins, we won't care. As long as a solution meets our performance and security requirements, we're open to anything. Cost is an increasing concern as commoditization continues to drive hardware costs down, exposing software licensing as the next big area for improvement. A free, standards-based solution, particularly one that harmonizes commodity, best-of-breed OpenStack, vendor-specific platforms and cloud will win the market.
A "classic" unified computing system (UCS) is a great example -- it led years ago with versatile SDN, although we didn't call it SDN back then; it was "common backplane" plus "API-based management." It put multiple protocols -- mostly Ethernet and Fibre Channel -- on a common conduit, and its versatility even drove it into unexpected markets like SMBs. I'm still surprised when I walk into a server room at a 1,000-employee company and see blue-green compute trays that replaced less expensive commodity Dell and HP servers.
More than that, from launch, UCS was managed by an API, including how it connected to virtualized compute and memory, and it offered the beginnings of application-defined network configuration, at least within the same chassis. Software defined its network configuration years before OpenFlow. It's a history that proves network admins like software-defined options. And now, Cisco is doubling down on bare-metal flexibility with the UCS M-Series for just this reason.
SDN is made of magic unicorns
Perhaps the most striking aspect of virtualization 1.0 was a huge jump in the speed of IT transformation. Virtualizing our server infrastructure paid dividends we couldn't have imagined at the time, even as our top-of-rack and core remained largely unchanged, except for increased capacity and density. As an integrated and fundamental component of OpenStack/Docker/CoreOS, SDN will facilitate another order-of-magnitude jump in the rate of services transformation. It's evolving so quickly that even old-fashioned VMs built on Windows and Linux may painlessly migrate past legacy VM vendors entirely.
Given an opportunity to truly remove platform from the list of admin worries, we'll happily switch to technology that lets us define the network requirements of an application, let the network sort out the implementation programmatically, close the ticket and move on. If it's free as in beer, all the better.