
Cisco data center interconnect aims to fix vMotion network trouble

In the past, vMotion network problems have inhibited virtual server migration across data centers, but Cisco claims its new data center interconnect technology, Overlay Transport Virtualization, fixes the problem.

Cisco Systems claims it has created a new data center interconnect technology that will enable virtual server migration across data centers with very little configuration or complexity. The key to this technology is a bridging method that extends a Layer 2 network across a Layer 3 wide-area network (WAN) connection.

The bridging method, known as Overlay Transport Virtualization (OTV), essentially takes LAN traffic at the MAC address or Ethernet frame level and tunnels it through a Layer 3 IP network, according to Craig Griffin, director of product management at Cisco. Because traffic is never translated up the OSI stack from Layer 2 to Layer 3, an enterprise avoids the latency that translation introduces.
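The idea of carrying a Layer 2 frame as the opaque payload of a Layer 3 packet can be sketched in a few lines of Python. This is a conceptual illustration only; OTV's actual encapsulation format is Cisco's own, and the header layout below is invented for the example:

```python
import struct

def ip_to_bytes(ip: str) -> bytes:
    """Pack a dotted-quad IPv4 address into 4 bytes."""
    return bytes(int(octet) for octet in ip.split("."))

def encapsulate(eth_frame: bytes, src_ip: str, dst_ip: str) -> bytes:
    """Wrap an Ethernet frame in a minimal, made-up 10-byte IP-style header.

    The point is the pattern, not the format: the Layer 2 frame crosses
    the Layer 3 WAN untouched, as tunnel payload.
    """
    header = ip_to_bytes(src_ip) + ip_to_bytes(dst_ip)
    header += struct.pack("!H", len(eth_frame))  # payload length
    return header + eth_frame

def decapsulate(packet: bytes) -> bytes:
    """Strip the 10-byte header and return the original frame intact."""
    (length,) = struct.unpack("!H", packet[8:10])
    return packet[10:10 + length]

# A toy Ethernet frame: dst MAC, src MAC, EtherType, payload
frame = (bytes.fromhex("aabbccddeeff") + bytes.fromhex("112233445566")
         + struct.pack("!H", 0x0800) + b"hello")
packet = encapsulate(frame, "10.0.0.1", "192.168.1.1")
assert decapsulate(packet) == frame  # frame arrives unmodified
```

The receiving router simply strips the outer header and forwards the frame as if both sides shared one LAN, which is exactly the illusion Griffin describes below.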


"As far as a data center is concerned, it looks like you have a direct LAN connection, but it's tunneling through the IP network," Griffin said.

Meeting vMotion network challenges with Overlay Transport Virtualization

OTV allows enterprises to migrate virtual server workloads across multiple data centers with VMware's vMotion for enhanced load balancing and disaster recovery abilities, according to Zeus Kerravala, distinguished research fellow at Yankee Group.

Until now, most companies have used vMotion to shuffle workloads within the server rack's LAN, where everything is connected via a Layer 2 switch. But vMotion requires very low-latency networks when applications move from one physical location to another, so the complexity and latency introduced by moving up the OSI stack for transport across IP networks have been barriers to more advanced use of vMotion, Kerravala said. In that scenario, traffic moves from the rack into the network core and then out into the wide-area network, where it must be converted to Layer 3. Finally, once the traffic arrives at the second data center, it must be translated back down to Layer 2 and make its way to the second server rack.

"With vMotion, you have to keep the integrity of the virtual server whole," Kerravala said. "When you move an entire server image from one server to another, you want that to be as absolutely fast as possible. So adding the latency of Layer 3 is too much. That's why a lot of high-end enterprises have gone through the work of completely gutting their data centers and re-architecting around Layer 2."

Not everyone can gut and re-architect their networks for virtual server migration

But most companies don't have the option of gutting their data centers and assembling a new, flatter Layer 2 data center architecture. Many also can't connect their data centers with a dark fiber or MPLS network. The engineering expertise and the capital expenses are simply beyond their resources.


"The concept here [with OTV] is that as long as the two routers at the two data centers can talk Layer 3 to each other, you can hide all that complexity," said Andreas Antonopoulos, senior vice president at Nemertes Research. "You can basically connect Layer 2 switches so that they use Layer 3 to talk to each other, but they present themselves as if they're in the same Layer 2 LAN…. You will see one MAC address from one data center appear as if it's reachable by a MAC address in another data center."
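The behavior Antonopoulos describes, where a MAC address in one data center appears reachable from another, can be modeled as a lookup table that maps each learned MAC either to a local switch port or to the IP address of the far-end router. This is a conceptual sketch, not OTV's real control plane, and the MAC addresses, port name, and IP address are illustrative placeholders:

```python
# Conceptual "MAC routing" table for an OTV-style edge device.
# Locally learned MACs map to a switch port; MACs advertised by a
# remote site map to that site's WAN-facing IP address.
mac_table = {
    "aa:bb:cc:00:00:01": ("local", "Ethernet1/1"),
    "aa:bb:cc:00:00:02": ("remote", "192.0.2.10"),  # router at data center B
}

def forward(dst_mac: str) -> str:
    """Decide whether a frame is switched locally or tunneled over IP."""
    scope, target = mac_table[dst_mac]
    if scope == "local":
        return f"switch out {target}"
    return f"encapsulate in IP and send to {target}"
```

To the servers on either side, both entries look like ordinary Layer 2 neighbors; only the edge device knows that reaching the second one means tunneling across the WAN.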

Removing complexity from vMotion network fixes

Enterprises have other ways of achieving the class of data center interconnects required for vMotion, Antonopoulos said, but generally the complexity and expense have been too much.

"The most common [alternative] approach is VPLS [Virtual Private LAN Service], so that you have an MPLS network connecting two data centers, and the VPLS makes the Layer 2 subnet in one data center look like it's connected to the Layer 2 subnet in another data center," Antonopoulos said. "Between them, you're running MPLS and a VPLS tunnel. That's complex. It's hard to configure. You have to have an MPLS network. You have to configure the VPLS tunnel between them as an overlay. VPLS configuration is notoriously complex. Then you have to optimize performance."

What's more, all that work with MPLS and VPLS only sets up a connection between two data centers, Antonopoulos pointed out. If an enterprise wants to establish virtual server migration among three or more data centers, engineers will have to build links between all of them. "Data center A will have to be connected to data center B," he said. "Data center B will have to be connected to data center C, and data center C will have to be connected to data center A."
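The pairwise wiring Antonopoulos describes grows quadratically: fully meshing n data centers takes n(n-1)/2 point-to-point interconnects, each of which would need its own VPLS tunnel. A quick sketch of the arithmetic:

```python
def full_mesh_links(n: int) -> int:
    """Point-to-point interconnects needed to fully mesh n sites."""
    return n * (n - 1) // 2

# Three sites already need three links (A-B, B-C, C-A);
# five sites need ten, and growth is quadratic from there.
assert full_mesh_links(3) == 3
assert full_mesh_links(5) == 10
```

An overlay reachable over any Layer 3 path sidesteps this, since each new site only needs IP connectivity to the others rather than a dedicated tunnel to each one.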

Cisco's Griffin claimed that configuring a data center interconnect for virtual server migration between two data centers can take months with existing technologies, whereas the OTV feature can be set up in five minutes.

OTV will be available this April via a software upgrade to the Nexus 7000 core switch.
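On the Nexus 7000, the basic setup reportedly reduces to a handful of commands on an overlay interface. A rough sketch of what such a configuration could look like; the interface name, VLAN range, and multicast group addresses here are illustrative placeholders, not values from Cisco:

```
feature otv

interface Overlay1
  otv join-interface Ethernet1/1    ! Layer 3 uplink toward the WAN
  otv control-group 239.1.1.1      ! multicast group for the OTV control plane
  otv data-group 232.1.1.0/28      ! multicast range for extended-VLAN traffic
  otv extend-vlan 100-150          ! Layer 2 VLANs stretched between sites
  no shutdown
```

The extended VLANs then behave as one Layer 2 domain across every site whose overlay interface joins the same group.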

In addition to OTV, Cisco also announced new 10GBASE-T modules for its Nexus and Catalyst switches, enabling Cisco customers for the first time to use copper cabling for 10 Gigabit Ethernet.

Let us know what you think about the story; email: Shamus McGillicuddy, News Editor
