Just how open can OpenFlow research be now that vendors are using educational institutions as proxy product development and testing centers?
Considering all of the fixes that OpenFlow still needs, these institutions know they can't afford to limit collaboration. Yet they also can't afford to refuse industry partnership.
"There is an increase in commercially sponsored research in higher education that is filling the loss of investment that the federal government used to make in [these projects]. With those commercial agreements there are different stripes of NDAs," said Shel Waggener, senior vice president at Internet2, a provider of a 100 gigabit, OpenFlow-friendly backbone for R&E institutions. "We don't support limiting access to developing knowledge on any research."
A whole range of R&E institutions, from Stanford University to Indiana University and Marist College, have entered commercial agreements with vendors to develop or test OpenFlow products. Some of these institutions are less committed than others to keeping their research open, but most say the answer is to negotiate as much freedom as possible when forming partnerships with industry and signing NDAs.
There is a "fine line" between what can be contributed as open research and what must be concealed, said Robert Cannistra, who teaches in the IT and computer science department at Marist College. Cannistra leads students in OpenFlow research and has created an OpenFlow interoperability lab with his team of researchers. Making the case to industry partners about why information must be shared is an on-going process, and Marist shares as much as it can, he said. Cannistra's students have created a fix for the open source controller Floodlight that stops it from sending packets to down ports. They've also created an OpenFlow quality-of-service (QoS) tool and a network monitoring application. All that research is completely open. On the flip side, the department can't disclose which vendors are participating in its interoperability lab, let alone what results the lab has found.
Waggener suggested that university engineers "narrow the context" of what is disclosed, keeping all research open until the point where it directly contributes to "making a product viable."
But the process of negotiating with network hardware vendors on openness won't be easy. Networking has never been a hotbed for open source projects unless they relate to monitoring or management applications. Even Arista, which opened up its EOS for development to paying users, has not gone so far as to open source its technology. Companies like Cisco very rarely even go that far.
On the bright side, there are a number of projects in which R&E institutions are collaborating on open source networking projects, and these were explored in depth at TIP 2013. For example, the Global Environment for Network Innovations (GENI), a network experimentation project sponsored by the National Science Foundation, is giving campuses the ability to link into the Internet2 backbone and use OpenFlow-based software-defined networking (SDN) to spin out completely distinct virtual network slices. They can do so between their own sites or even between campus domains. Information that emerges from collaborations like this will eventually trickle into vendor product development in a more organic way.
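As a rough illustration of what "slicing" means here, the snippet below sketches the core bookkeeping a slicing layer might perform, in the spirit of FlowVisor-style slicing used in OpenFlow testbeds: each experiment claims a region of flowspace (here, a VLAN range) and its traffic is handed to that experiment's own controller. The slice names, VLAN ranges and controller addresses are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: a real slicing layer enforces flowspace isolation on
# every OpenFlow message, not just a VLAN lookup.

@dataclass(frozen=True)
class Slice:
    name: str
    controller: str      # the experimenter's own OpenFlow controller
    vlan_range: range    # the flowspace this experiment is allowed to touch

SLICES = [
    Slice("campus-a-experiment", "10.0.0.10:6633", range(100, 200)),
    Slice("campus-b-experiment", "10.0.0.20:6633", range(200, 300)),
]

def owning_slice(vlan_id: int) -> Optional[Slice]:
    """Return the slice whose flowspace covers this VLAN, if any."""
    for s in SLICES:
        if vlan_id in s.vlan_range:
            return s
    return None  # traffic outside every slice falls back to a default policy

if __name__ == "__main__":
    for vid in (120, 250, 400):
        s = owning_slice(vid)
        print(f"VLAN {vid} -> {s.controller if s else 'no slice (default policy)'}")
```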