I/O virtualization vendor Xsigo has evolved its product into a data center fabric by virtualizing server-to-server connections within its infrastructure. Aimed more at server and virtualization administrators than at network engineers, Xsigo's Server Fabric product is a server-centric option for enterprises looking to consolidate their data center networks, reduce latency and provide any-to-any server connectivity in a heavily virtualized environment.
Several networking vendors have already tried to meet these requirements by introducing data center fabric strategies over the last couple of years. Cisco Systems’ data center fabric strategy consists of FabricPath in its Nexus switches and its Unified Computing System blade server products. Brocade has its Virtual Cluster Switching products. Juniper Networks has announced, but not yet fully launched, QFabric, a set of products that function as an abstracted, modular Ethernet switch. QFabric’s edge devices communicate with resources external to the solution, such as server infrastructure and the WAN, via Ethernet. But QFabric’s edge devices function like line cards on a modular switch, communicating with each other through a central chassis device via a proprietary transport protocol.
While these solutions take a networking-centric approach to developing a data center fabric, with the network administrator heavily involved in establishing and managing server-to-server traffic, Xsigo's solution takes the opposite tack—Server Fabric puts server connectivity firmly in the hands of server and virtualization administrators and reduces the Ethernet footprint within the data center.
“The Ciscos and Junipers of the world are trying to get networking more involved in data center operations, whereas the Xsigo approach says let’s give a solution to the guys [server administrators] who already run data centers,” said Zeus Kerravala, senior vice president and distinguished research fellow with Yankee Group. “All of these approaches are significantly different from each other, and they will all require their own proof of concept. The vendor who shows they have the best solution stands to win. This isn’t a case where you automatically go to the incumbent.”
From I/O virtualization to data center fabric
Xsigo's new data center fabric solution is built on top of its original I/O virtualization product, the I/O Director top-of-rack device, which virtualizes the storage and network connections for servers. Server administrators can plug servers with standard 10 Gigabit Ethernet NICs or 40 Gigabit InfiniBand NICs directly into the I/O Director. Through Xsigo software, the server administrator can then assign network and storage connections via those NICs to physical servers and the virtual machines that run on them.
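To make the I/O virtualization model concrete, the sketch below models a top-of-rack director that carves a server's single physical uplink into virtual network and storage interfaces. All class and resource names here are invented for illustration; this is not Xsigo's actual API, which is driven through its own management software.

```python
# Illustrative model of I/O virtualization (hypothetical names, not Xsigo's API).
# One physical uplink per server is carved into virtual NICs (network) and
# virtual HBAs (storage), each mapped by the server admin to a resource.
from dataclasses import dataclass, field

@dataclass
class VirtualInterface:
    name: str    # e.g. "vnic0" or "vhba0"
    kind: str    # "network" or "storage"
    target: str  # the LAN or SAN resource this interface reaches

@dataclass
class ManagedServer:
    hostname: str
    uplink: str                                 # "10GbE" or "40Gb InfiniBand"
    interfaces: list = field(default_factory=list)

class IODirector:
    """Toy stand-in for a top-of-rack I/O virtualization device."""
    def __init__(self):
        self.servers = {}

    def connect(self, hostname, uplink):
        # The server is cabled into the director with one physical link.
        self.servers[hostname] = ManagedServer(hostname, uplink)

    def assign(self, hostname, name, kind, target):
        # The server admin assigns a virtual NIC or HBA over that uplink.
        self.servers[hostname].interfaces.append(
            VirtualInterface(name, kind, target))

director = IODirector()
director.connect("esx01", "40Gb InfiniBand")
director.assign("esx01", "vnic0", "network", "prod-lan")
director.assign("esx01", "vhba0", "storage", "san-array-a")
print([i.name for i in director.servers["esx01"].interfaces])  # ['vnic0', 'vhba0']
```

The point of the abstraction is that adding or removing a connection is a software operation against the director, not a recabling or switch-configuration task.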
Xsigo’s data center fabric product, Server Fabric, is a software upgrade to its I/O virtualization product that enables its I/O Directors to manage server-to-server communications, too. Previously, these I/O Directors would either switch server-to-server, or "east-west," network traffic in their own on-board line cards or forward it upstream to an Ethernet switch, which would then send the traffic to the I/O Director sitting in front of the destination server. Now all that east-west traffic is virtualized with a drag-and-drop interface that establishes “Private Virtual Interconnects” among virtual machines and bare-metal servers connected to the Xsigo infrastructure.
This server-to-server traffic is no longer visible to the Ethernet network, eliminating the need for network administrators to manage virtual machines with VLANs. Only traffic destined for the corporate LAN or the WAN will traverse the Ethernet network.
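The routing decision described above can be sketched as a simple classification: if both endpoints sit on the fabric, the flow is switched internally and never touches Ethernet; otherwise it exits via the Ethernet uplink. The host names and function are hypothetical, chosen only to illustrate the east-west/north-south split.

```python
# Hedged sketch of the fabric's forwarding decision (illustrative only).
# Hosts attached to the server fabric exchange east-west traffic internally;
# anything bound for the corporate LAN or WAN exits via the Ethernet uplink.
FABRIC_HOSTS = {"vm-web-01", "vm-db-01", "esx02"}

def switch_domain(src: str, dst: str) -> str:
    """Return where a flow between src and dst is switched."""
    if src in FABRIC_HOSTS and dst in FABRIC_HOSTS:
        return "fabric"        # invisible to Ethernet; no VLAN consumed
    return "ethernet-uplink"   # north-south traffic to the LAN or WAN

print(switch_domain("vm-web-01", "vm-db-01"))  # fabric
print(switch_domain("vm-web-01", "wan-gw"))    # ethernet-uplink
```

This is the simplification Xsigo is selling: the VLAN and address-space bookkeeping that east-west flows would normally impose on the network team simply does not arise for fabric-internal traffic.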
"From the network perspective, it's a great simplification because you're not consuming VLANs and Ethernet address space, which is a scarce commodity in some environments. And you're not consuming the network administrator's time, other than when you're going into the network," said Jon Toor, Xsigo's vice president of marketing. "If all you're doing is connecting a virtual machine on one server to a virtual machine on another server, why should that consume a network administrator's time to create connections for something that should essentially be within the rack?"
Xsigo’s new data center fabric could obviate the need for access-layer and even aggregation-layer switches in a data center network. The amount of traffic traversing the actual Ethernet network could drop dramatically since all server-to-server traffic would remain within the Xsigo infrastructure, and many network administrators would find themselves managing nothing more than an Ethernet core.
“I’m not sure how many network managers really want to run virtual switches and blade switches anyway. It’s a bit outside their domain,” Kerravala said. “If you think of the five tiers of networking in a data center — core, aggregation, top-of-rack, blade-switch and virtual switch — Xsigo can handle the last two. There’s no reason why they couldn’t take over top-of-rack and extend that out to the aggregation layer, too.”
Xsigo’s data center fabric simplifies server-to-server traffic, but complexity looms
Aaron Branham, director of technology at Bluelock, a VMware-based cloud service provider, currently uses Xsigo for I/O virtualization within his data center. Branham has been testing Xsigo’s Server Fabric capabilities in a lab environment.
“I think it will give us even more economies of scale and better efficiency,” Branham said. “One of the limitations of Xsigo today is that if you want to do switched traffic that stays within the Xsigo [infrastructure], it still goes to one of their line cards, and that line card is connected into the fabric at whatever the fabric rate is. So there is a limitation there in the throughput. With this new fabric, there won’t be that limitation anymore because traffic going between servers doesn’t have to go through that line card.”
Still, Kerravala said networking and server teams will have to work closely together with a data center fabric like Xsigo’s, especially since Xsigo’s Server Fabric will probably coexist in many data centers with more traditionally networked legacy infrastructure.
“They need to work together to ensure that network policies are in sync,” he said. “What I can see happening is everyone is running on a Xsigo network, and then that virtual machine moves off that Xsigo network to another server on the regular network. Because the network policies weren’t set up the same way, something breaks. That happens all the time with virtual switches. A virtualization admin sets policies on the vSwitch; the virtual machine moves off that server. Suddenly it’s using a regular switch, and the policies are different and the application breaks.”
Let us know what you think about the story; email Shamus McGillicuddy, News Director