
SDN data center case study: Using OpenFlow for converged storage

Edgenet built an OpenFlow SDN data center network with a multipath fabric that will let it fold storage networking into its SDN programmability.

All the talk of Fibre Channel over Ethernet and converged storage and data center networking strategies in recent years may have been for naught. Data services provider Edgenet is using SDN technology to create a programmable fabric across data center and storage technologies, potentially allowing the company to dump Fibre Channel.

When Edgenet began offering Software as a Service and data services that required both highly flexible virtual machine (VM) migration and applications on demand, the IT team moved toward server and storage virtualization -- but the network couldn't keep up.


"In doing application development, we support multiple lifecycles in our own data center; in addition we have environments for developers, we have production environments for customers, we do user acceptance, [so we need] different levels of Quality of Service for those environments," Edgenet Vice President of IT Mike Steineke explained during a presentation at the Open Networking Summit in April.

To support its data services offering, Edgenet collects large amounts of data from multiple parties, and customers constantly move that data around from application to database tiers. The maintenance of those databases consumes tons of I/O, storage and backup resources.

Edgenet used Hyper-V for server virtualization and Hyper-V over Server Message Block Protocol for disk storage. Storage was connected via Fibre Channel on an Ethernet network and Edgenet was using about 100 VLANs -- nowhere near the 4,096 limit that many engineers must work around -- but with VM migration from host to host, there were traffic spikes and management issues that a traditional network couldn't handle, Steineke explained.

"When you virtualize a lot of machines, your visibility becomes a lot more difficult; we also worry about separation. I ended up having a QA [quality assurance] VM on the same host as [a] production VM. … What happens if the QA one goes bad, but you don't want [the] production one to go bad?" he asked.

The first step was to speed up the network and move from 1 GbE to 10 GbE connections. In shopping around for new switches, Steineke figured, "Why not consider OpenFlow?" In addition to speed, Edgenet needed better visibility into the virtual environment and network, as well as a policy setting that could help prioritize the network in addition to compute resources and storage.

Steineke turned to NEC Corporation of America's ProgrammableFlow SDN ecosystem, which offered high-speed physical switching, virtual switching that supported a Hyper-V environment, and a controller that provided full visibility and centralized management. ProgrammableFlow enabled network virtualization provisioning out of the gate.

Edgenet placed NEC PF1000 virtual switches on the Hyper-V hypervisors, as well as four NEC physical switches with Gigabit Ethernet and 10 GbE ports, and then installed an NEC ProgrammableFlow controller.

The PF1000 gives the same level of control over the virtual switches as the physical switches, and it provides a "single-pane-of-glass" management approach to physical and virtual resources.

"PF1000 extends OpenFlow all the way to the virtual switch on the hypervisor; it prevents the networking layer of one VM from affecting another; it makes sure we can live migrate a VM from one host to another -- or one cluster to another -- without any loss of activity from the client side," Steineke said. "It automates configuration, gives end-to-end reliability and fully redundant connections."
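The mechanism Steineke describes, a live migration that clients never notice, depends on the controller repointing forwarding state in one step. The sketch below is purely illustrative: the `FlowTable` class and its methods are invented for this example and are not NEC's actual API.

```python
# Toy model of controller-side forwarding state: VM MAC -> (switch, port).
# Hypothetical names; not NEC ProgrammableFlow's real interface.

class FlowTable:
    """Central view of where each VM is attached to the fabric."""

    def __init__(self):
        self.entries = {}

    def learn(self, mac, switch, port):
        self.entries[mac] = (switch, port)

    def migrate(self, mac, new_switch, new_port):
        # A central controller can update every affected flow at once,
        # rather than waiting for per-switch MAC tables to age out or
        # for Spanning Tree to reconverge.
        if mac not in self.entries:
            raise KeyError(f"unknown VM {mac}")
        self.entries[mac] = (new_switch, new_port)

    def lookup(self, mac):
        return self.entries[mac]


table = FlowTable()
table.learn("00:15:5d:01:02:03", "tor-1", 12)   # VM attached on rack 1
table.migrate("00:15:5d:01:02:03", "tor-2", 7)  # live-migrated to rack 2
print(table.lookup("00:15:5d:01:02:03"))        # -> ('tor-2', 7)
```

Because the table is the single source of truth, a lookup immediately after the migration already points at the new rack, which is the property that keeps client sessions alive.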


But something else came out of the deal for Edgenet: The SDN architecture meant the team could "build a multipath, east-west fabric," Steineke explained.

"We can support any-to-any connectivity and we are no longer tied to a particular topology … there are no more distributed controls, no more Spanning Tree [Protocol]. … We avoid loops because the controller can see all paths and manage [them]," Steineke said. "In a traditional network configuration, using Spanning Tree on multiple top-of-rack switches and VM clusters, if we were live migrating between nodes on one rack and another, we would see a lag while Spanning Tree figured out that a machine had moved. It was a major problem doing automatic updates to hosts."
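The loop avoidance Steineke describes follows from the controller's global view of the topology: it can enumerate every shortest path itself and install only loop-free ones, so no Spanning Tree is needed to block redundant links. A minimal sketch, assuming a toy two-leaf, two-spine fabric (the topology and function are illustrative, not Edgenet's actual configuration):

```python
from collections import deque

def shortest_paths(graph, src, dst):
    """Return every equal-cost shortest path from src to dst via BFS."""
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # all remaining candidates are longer; stop
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:  # never revisit a node: paths stay loop-free
                queue.append(path + [nxt])
    return paths

# Tiny leaf-spine fabric: both spines connect both leaves.
fabric = {
    "leaf1":  ["spine1", "spine2"],
    "leaf2":  ["spine1", "spine2"],
    "spine1": ["leaf1", "leaf2"],
    "spine2": ["leaf1", "leaf2"],
}
print(shortest_paths(fabric, "leaf1", "leaf2"))
# -> [['leaf1', 'spine1', 'leaf2'], ['leaf1', 'spine2', 'leaf2']]
```

Both spine links come back as usable equal-cost paths; a distributed Spanning Tree would have blocked one of them outright.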

Now, on top of the fabric, the Edgenet team can "build multitenant virtual networks to stitch together physical and virtual islands in the network so the apps see a Layer 2 or Layer 3 network, but they're each managed discretely and have isolation from one another," Steineke said.
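The isolation property can be reduced to a simple rule: the controller only installs forwarding flows between endpoints of the same virtual network. The tenant names and `allow_flow` helper below are assumptions made for illustration, not Edgenet's real tenant layout:

```python
# Hypothetical tenant membership: virtual network name -> its VM endpoints.
TENANTS = {
    "prod": {"vm-app-1", "vm-db-1"},
    "qa":   {"vm-app-2", "vm-db-2"},
}

def allow_flow(src_vm, dst_vm, tenants=TENANTS):
    """True only if both endpoints belong to the same virtual network."""
    return any(src_vm in members and dst_vm in members
               for members in tenants.values())

print(allow_flow("vm-app-1", "vm-db-1"))  # prod <-> prod: True
print(allow_flow("vm-app-2", "vm-db-1"))  # qa   <-> prod: False
```

This is exactly the separation concern Steineke raised earlier: a QA VM and a production VM can share a physical host, yet a misbehaving QA workload can never reach the production network.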

But the beauty of any-to-any, multipath connectivity really plays out when it comes to converged storage networking. The next step for Edgenet will be to completely replace its Fibre Channel SAN by "presenting InfiniBand as Ethernet traffic." Once the storage appears as IP traffic, it can be controlled by OpenFlow just like any other traffic on the network.

That'll relieve Steineke's biggest traffic problem, which lies in the flows between servers and storage. The idea of being able to set different levels of Quality of Service for varying traffic between servers and storage is most appealing. Eventually, Edgenet will work toward 40 GbE connections and total control of Quality of Service depending on application or data type. To do this, the company will use NEC's ProgrammableFlow, regardless of what physical storage equipment is implemented.
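Once storage shows up as ordinary IP traffic, per-application QoS becomes a flow-classification problem: match the traffic, steer it into a priority queue. A hedged sketch of the idea follows; the port numbers are standard well-known ports (SMB on TCP 445, iSCSI on TCP 3260), but the queue IDs and the `classify` helper are assumptions for illustration, not NEC's API:

```python
# Illustrative mapping from destination port to egress QoS queue.
# Queue 0 = highest priority; queue assignments are invented here.
QOS_QUEUES = {
    445:  0,   # SMB storage traffic (Edgenet's Hyper-V over SMB)
    3260: 0,   # iSCSI, if presented as IP, could get the same treatment
    443:  1,   # customer-facing HTTPS
}

def classify(dst_port, default_queue=2):
    """Map a flow's destination port to a QoS queue (2 = best effort)."""
    return QOS_QUEUES.get(dst_port, default_queue)

print(classify(445))   # storage flow -> queue 0
print(classify(8080))  # anything unlisted -> best-effort queue 2
```

The point of the sketch is that server-to-storage flows stop being special: they are prioritized by the same controller policy that governs every other flow in the fabric.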
