
The rise of SDN optical networks: Has automated provisioning arrived?

University researchers worked with IBM and Adva Optical Networking to create SDN optical networks by converting OpenFlow control commands into SNMP.

A few years ago, a New York-based media company sought the help of Adva Optical Networking to build private data centers in the middle of Manhattan. That turned out to be too costly a proposition, as did the alternative of providing dedicated cloud services.

"We thought, 'Wouldn't it be neat to provide cloud bursts in Manhattan?' So we teamed up with Level 3 [Communications] to do wavelength division multiplexing, but the cost of the wavelength was 70% of the entire cloud service. … It was prohibitively expensive," said Todd Bundy, Adva's business development director.

Building a hardened connection for the cloud meant provisioning enough bandwidth to handle the company's peak traffic -- even though that peak would be reached only about 20% of the time.

That's when the idea of using OpenFlow SDN for WAN bandwidth provisioning began to take shape.

"We realized we could give customers what they want but only charge them for what they use," Bundy said.

OpenFlow controllers can direct specific flows on packet networks, but until recently there was no method for making SDN and programmability work at the optical layer.

Research team spins up an approach to SDN optical networks

Adva had already been working with IBM, which was well into testing and producing OpenFlow switches. IBM had funded an SDN lab at Marist College, where senior professional lecturer Rob Cannistra had been leading a group of students in OpenFlow research.

To solve Adva's problem, the Marist team created a software agent that would allow OpenFlow controllers to manage and direct data flows in the optical network at Layer 1 by converting these commands into Simple Network Management Protocol (SNMP).

"All we are doing is taking OpenFlow commands from a Floodlight controller -- and in the future that will also be an OpenDaylight controller -- and we're converting them to SNMP to talk to the data-flow plane on the optical layer," Bundy said. "We can automatically set up and break down wavelengths without worrying about optical layer impairments."

The researchers combined Adva's FSP 3000 optical transport platform, IBM's G8264 switches and a Floodlight controller. Cannistra and his students programmed the Adva OS agent to orchestrate optical equipment from a common point in conjunction with the Floodlight controller.

Cannistra and his team created three mock data centers with enough spooled fiber to act as if there were 50 km to 75 km between sites. They were then able to use the controller and the Adva OS agent to send commands to the optical layer, spinning up 10 Gigabit Ethernet links between two of the three data centers on demand. These links could be taken down automatically and reconfigured between different data center combinations or configured as a ring topology, Cannistra said.
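An on-demand link request like the ones described above would typically be driven through the controller's northbound REST interface. The sketch below is a hypothetical illustration only: the field names and `build_link_request` helper are assumptions, not the Marist interface, and the endpoint mentioned in the comment is loosely modeled on Floodlight's static flow pusher.

```python
import json

# Hypothetical request to bring up a 10 GbE link between two data centers
# on demand. Field names and structure are assumptions for illustration.
CONTROLLER_URL = "http://controller.example:8080"

def build_link_request(src_dc, dst_dc, bandwidth_gbps=10):
    """Build the JSON body for an on-demand optical link request."""
    return {
        "src": src_dc,
        "dst": dst_dc,
        "bandwidth-gbps": bandwidth_gbps,
        "action": "provision",
    }

payload = json.dumps(build_link_request("dc1", "dc2"))
# An HTTP client (e.g. urllib.request) would POST this payload to the
# controller's northbound API -- on a Floodlight controller, something
# like CONTROLLER_URL + "/wm/staticflowpusher/json".
```

Tearing the link back down, or moving it to a different data center pair, would be the same request with a different action or endpoints.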

The team also created an application called Avior, which provides an interface for users to monitor the network and schedule provisioning, including the ability to add pre-defined traffic profiles that trigger reconfiguration of optical traffic patterns.
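Scheduled provisioning of this kind -- "bandwidth calendaring" -- can be sketched as a set of time-windowed traffic profiles that the agent consults on each poll. The profile format, field names and the example backup window below are all invented for illustration; Avior's actual data model is not described in the article.

```python
from datetime import datetime, time

# Hypothetical sketch of Avior-style scheduled provisioning ("bandwidth
# calendaring"). The profile format and the example window are invented.
PROFILES = [
    # Nightly backup window between two data centers.
    {"name": "backup", "src": "dc1", "dst": "dc3",
     "start": time(0, 0), "end": time(6, 0)},
]

def active_profiles(now):
    """Return the traffic profiles whose time window covers `now`.
    The agent would provision a wavelength for each active profile."""
    return [p for p in PROFILES
            if p["start"] <= now.time() < p["end"]]

# At 02:30 the backup profile is active, so the agent would be asked
# to provision the corresponding wavelength; at noon, nothing fires.
hits = active_profiles(datetime(2014, 5, 1, 2, 30))
```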

But they also wanted an automated network that could react to the needs of applications. So the team began using an open source network monitoring tool called Ganglia, which uses information from VMware's vCenter to detect traffic overloads. Ganglia then feeds this information back through the Adva agent, which communicates northbound to the OpenFlow controller, instructing it to re-provision traffic flows.

If video is overloading one part of the network, for example, Ganglia can instruct the controller to provision another wavelength or to move the application to a less crowded space in the network. In that sense, the system can be used for load balancing, in addition to provisioning.

The technology is applicable to a range of potential use cases, including bandwidth calendaring, cloud bursting, secure multi-tenancy and load balancing, Cannistra said.

"If on Super Bowl day there is a huge hit to the data center, we are able to see that CPU spike and create some trigger," Cannistra said. When a certain CPU threshold is hit, a change is immediately configured.

Going forward, the team will continue testing the technology to see how far it can scale. Researchers will also push for the technology to be controller-agnostic and work with any OpenFlow-capable switch, Cannistra said.

Next Steps

One SDN researcher's road to network programming languages

How OpenFlow can work across network domains

Will Facebook's open source switch be the end of CLI?


Join the conversation

1 comment


One of the arguments I often hear against a move to agile transport networks is that you only realize the additional promised savings if someone else is sharing your physical network, and odds are that all customers will want their peak bandwidth at the same time. There are two problems with this argument.
First, an agile network can be built smaller to begin with: you do not need to overprovision for peak traffic, because you can take your own resources and direct them where needed, even if it's only you on the network.
Second, it turns out not everyone does want their bandwidth at the same time. Some businesses have heavy morning traffic, while others may have heavy lunchtime traffic. Some enterprises need the bandwidth for evenings, while others need it for overnight backup between midnight and the start of business the next day.
The only thing that historically held back agile transport networks is the lack of intelligence at the transport layer. Extending SDN to transport solves that issue, as the transport resources fall under the command of an intelligent centralized controller with full visibility of end-to-end flows of traffic.
Kudos to the team for demonstrating the future of transport networks.