
By Rivka Gewirtz Little, Senior Site Editor

What is I/O virtualization?

Like all forms of virtualization, I/O virtualization (IOV) allows enterprises to consolidate physical resources by creating virtual instances of a technology. In this case, I/O (input/output) resources are virtualized and shared among a pool of virtual and physical servers and storage components. I/O involves all of the components that enable high-speed data transfer among storage, data center servers and the rest of the network, including network interface cards (NICs), host bus adapters (HBAs) and CPU. With I/O virtualization, enterprises can increase the utilization rate of I/O by assigning virtualized resources on-demand, and they can decrease space and power consumption by using fewer physical components.

Why virtualize I/O?

In a physical environment, each server needs its own set of NICs, HBAs and CPUs for connectivity. In a virtualized environment, that demand becomes even more complex because each virtual machine needs a share of its host server's I/O and bandwidth. As it stands now, virtual machines battle for I/O resources, causing I/O bottlenecks. Administrators can instead assign pools of virtualized I/O dynamically, so that every virtual server has its needs met and I/O for physical servers doesn't sit idle. In a physical server environment, I/O utilization can be as low as 5% or 6%, even while virtual servers are in need of I/O. With I/O virtualization, utilization can be as high as 50% to 80%.
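A toy calculation shows why pooling raises utilization: dedicated adapters are sized for each server's peak load, while a shared pool can be sized closer to aggregate demand. The figures below are hypothetical, chosen only to mirror the 5%-to-50% range described above.

```python
# Illustrative sketch: utilization of dedicated vs. pooled I/O capacity.
# All numbers are hypothetical, not measurements from any real deployment.

servers = [
    # (average I/O demand in Gbps, dedicated adapter capacity in Gbps)
    (0.5, 10),
    (0.6, 10),
    (0.4, 10),
    (0.5, 10),
]

# Dedicated model: each server owns a full adapter sized for its peak.
dedicated_util = sum(avg for avg, _ in servers) / sum(cap for _, cap in servers)

# Pooled model: one shared 4 Gbps pool sized closer to aggregate demand.
pool_capacity = 4.0
pooled_util = sum(avg for avg, _ in servers) / pool_capacity

print(f"dedicated: {dedicated_util:.0%}")  # 5%
print(f"pooled:    {pooled_util:.0%}")     # 50%
```

The arithmetic is trivial, but it captures the article's point: the same 2 Gbps of real demand looks like 5% utilization spread across four dedicated 10 Gbps adapters, and 50% when served from one right-sized shared pool.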

What is converged I/O?

Converged I/O appliances and software are the basic building blocks of I/O virtualization. With data and applications moving fluidly among storage networks, data center servers and the LAN, enterprises use this technology to unify and manage the flow of the varying traffic types over one fabric.

Many administrators believe that converged I/O inherently means Fibre Channel over Ethernet (FCoE) -- or the ability to encapsulate FC traffic as Ethernet so that it can run over an Ethernet backbone. While a number of vendors, such as Cisco and Brocade, have released FCoE-ready components, converged I/O includes several kinds of data and storage transport combinations, such as InfiniBand traffic over Ethernet or vice versa.

Virtualized I/O and converged I/O are generally enabled by pools of converged network adapters (CNAs) -- or shared virtual NICs -- that are built into blade servers, virtual I/O gateways or switches. CNAs recognize and process different traffic types and funnel them onto a joint fabric or a single backbone. These gateways are often set up as pools of extended PCI Express (PCIe) cards.

The PCI-SIG has created specifications that help PCIe devices function in a virtualized I/O environment. Single Root IOV (SR-IOV) is a PCI-SIG specification that lets a single PCIe device present itself as multiple virtual devices to the virtual machines of one host system, while the Multi-Root IOV (MR-IOV) specification allows one PCIe device to be shared among multiple physical hosts at once.
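The distinction can be sketched as a simple data model. The class and field names below are illustrative inventions, not the PCI-SIG API -- the real specifications define hardware behavior (physical and virtual functions), not software interfaces.

```python
# Hypothetical sketch of the SR-IOV vs. MR-IOV sharing models.
# Names are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class PCIeDevice:
    name: str
    num_vfs: int                                     # virtual functions exposed
    assignments: dict = field(default_factory=dict)  # vf index -> consumer

    def assign_vf(self, vf: int, consumer: str) -> None:
        if vf >= self.num_vfs:
            raise ValueError("no such virtual function")
        self.assignments[vf] = consumer

# SR-IOV: a single root (one host); virtual functions are handed to
# that one host's virtual machines.
sriov_nic = PCIeDevice("10GbE CNA", num_vfs=4)
for i, vm in enumerate(["vm0", "vm1", "vm2"]):
    sriov_nic.assign_vf(i, vm)

# MR-IOV: the same kind of card sits behind a multi-root PCIe switch,
# and its virtual functions are parceled out to different physical hosts.
mriov_nic = PCIeDevice("shared HBA", num_vfs=4)
for i, host in enumerate(["host-a", "host-b", "host-c", "host-d"]):
    mriov_nic.assign_vf(i, host)

print(sriov_nic.assignments)  # {0: 'vm0', 1: 'vm1', 2: 'vm2'}
```

In both models one physical card fans out into many logical devices; the difference is only whether the consumers all belong to one root complex (SR-IOV) or to several (MR-IOV).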

In the case of FCoE-ready components, Cisco and Brocade both produce top-of-rack switches -- the Nexus 5000 and the Brocade 8000 -- with ports for both Ethernet and Fibre Channel traffic. These switches connect data center servers via Ethernet and Fibre Channel uplinks to the LAN core and the SAN. Cisco, HP and IBM have all embedded I/O virtualization functionality in their blade server chassis products, enabling administrators to assign I/O to blade servers on demand and to encapsulate FC traffic onto Ethernet.

Other vendors, such as IBM and HP, have developed software that allows one physical NIC to present itself as multiple virtual network cards. This software allows an administrator to create pools of LAN and SAN addresses dynamically, where upstream switches see these uplinks as aggregated NICs.
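The address-pool idea behind such software can be sketched as follows. The class, names and MAC range here are made up for illustration and are not taken from any vendor's product: the point is simply that virtual NIC identities come from a managed pool, so they can be released on one server and reassigned on another.

```python
# Illustrative sketch of a virtual-NIC address pool; the address range
# and names are invented, not any vendor's scheme.

class AddressPool:
    """Hands out MAC addresses for virtual NICs from a fixed range."""

    def __init__(self, prefix: str, size: int):
        self.free = [f"{prefix}:{i:02x}" for i in range(size)]
        self.assigned = {}               # virtual NIC name -> MAC address

    def allocate(self, vnic: str) -> str:
        mac = self.free.pop(0)
        self.assigned[vnic] = mac
        return mac

    def release(self, vnic: str) -> None:
        self.free.append(self.assigned.pop(vnic))

pool = AddressPool("02:00:00:00:00", size=8)
pool.allocate("vnic0")               # '02:00:00:00:00:00'
pool.allocate("vnic1")               # '02:00:00:00:00:01'

# Moving a workload: release its address here, reallocate it elsewhere.
pool.release("vnic0")
```

Because the upstream switches see only the pool's addresses, the physical NIC behind a given virtual NIC can change without the rest of the network noticing.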

Not every company is ready to invest in costly blade server-switching combinations or complicated software from major vendors in order to begin the long road to network convergence. These users often look to third-party vendors such as Xsigo, Aprius, Virtensys, NextIO and Mellanox, which make I/O virtualization and converged I/O appliances or software that can be added into existing servers.

I/O virtualization appliances: PCIe bus extenders and the software approach

PCIe bus extenders -- sometimes referred to as switched PCIe -- extend the functionality of PCI adapters to a so-called card cage. This cage -- or gateway -- houses a group of I/O cards that can enable Ethernet, Fibre Channel, InfiniBand, SAS/SATA, or iSCSI traffic. A pool of servers and SAN components can share this cage with dynamic provisioning intelligence.

Rather than use an extender to connect to a gateway, Xsigo Systems offers a software and hardware combination that uses existing Ethernet adapters in the server to connect to the gateway or pool of I/O adapters. The Xsigo I/O Director aims to reduce the complexity involved in implementing converged I/O and I/O virtualization. This approach can be more costly than PCIe bus extenders.

Not every virtual I/O appliance provider is aiming for an everything-over-Ethernet approach.

The Mellanox BridgeX delivers converged I/O over InfiniBand. Mellanox uses adapters to encapsulate Ethernet or Fibre Channel traffic as InfiniBand. Simultaneously, Mellanox technology can also encapsulate InfiniBand traffic to run over Ethernet. The Mellanox BridgeX BX5020 supports the Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) standard, which enables InfiniBand-style transport to run on an Ethernet backbone.

Improving Ethernet for converged data center networks

As vendors work toward converging data and storage traffic over an Ethernet fabric, they must first evolve Ethernet into a lossless technology. While the data traffic that has historically traversed Ethernet has tolerated dropped and delayed packets, storage traffic will not. Lost packets can corrupt the data on a storage array, sending storage administrators into fits. The Ethernet industry is addressing this concern with Converged Enhanced Ethernet (CEE), also referred to as Data Center Bridging (DCB), a set of lossless Ethernet enhancements being standardized by the IEEE. The goal is to add extensions to existing Ethernet that ensure reliability without relying on TCP, whose complexity introduces latency of its own. These enhancements should improve Ethernet in four ways: prioritization and differentiation of SAN and LAN traffic, lossless transport for SAN traffic, bridging that enables efficient paths for data transport, and tighter configuration management.
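The prioritization idea can be sketched roughly as follows. The class names and bandwidth shares are illustrative inventions, not values from the standards; the actual mechanisms are defined by the IEEE DCB work (priority-based flow control for lossless classes and enhanced transmission selection for bandwidth shares).

```python
# Illustrative sketch of DCB-style traffic classes on a 10 Gbps link.
# Shares and class names are invented for the example.

LINK_GBPS = 10

traffic_classes = {
    # name: (bandwidth share in percent, lossless?)
    "storage (FCoE)": (50, True),   # paused under congestion, never dropped
    "LAN data":       (30, False),  # ordinary best-effort Ethernet
    "management":     (20, False),
}

for name, (share, lossless) in traffic_classes.items():
    gbps = share * LINK_GBPS / 100
    mode = "lossless (pause, don't drop)" if lossless else "best effort"
    print(f"{name:15s} {gbps:4.1f} Gbps  {mode}")
```

The key distinction is the lossless flag: under congestion, a lossless class pauses the sender instead of dropping frames, which is exactly the guarantee storage traffic needs.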

TRILL or 802.1aq for converged data center networks

As enterprises move toward converged data center networks, the Spanning Tree Protocol will become a hindrance. Because Layer 2 Ethernet floods broadcast frames out every available link, redundant paths between two network devices can let frames circulate endlessly, producing an infinite loop or broadcast storm that overloads the network. Spanning tree eliminates the looping problem by allowing only one active path between any two points on the network and blocking the rest. Sacrificing those redundant paths is inefficient in a virtualized data center network.

There are two proposed standards to solve this problem: Transparent Interconnection of Lots of Links (TRILL) and 802.1aq, or Shortest Path Bridging. Both are nearing standardization, and both use the Intermediate System to Intermediate System (IS-IS) link-state routing protocol to identify the shortest available path for traffic: TRILL introduces Routing Bridges (RBridges) to do so, while 802.1aq runs IS-IS on standard Ethernet bridges. It is likely that TRILL will be the reigning bridging protocol.
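The efficiency difference can be seen on a toy topology. The four-switch square below is hypothetical: spanning tree must block one link to break the loop, forcing some traffic the long way around, while a shortest-path computation in the style of IS-IS can use every link.

```python
# Toy four-switch topology: a square A-B-C-D with all four links.
# Switch names and the blocked link are illustrative.

from collections import deque

links = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")}

def neighbors(edges):
    """Build an undirected adjacency map from a set of links."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def hops(edges, src, dst):
    """Breadth-first search: fewest hops from src to dst, or None."""
    adj = neighbors(edges)
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

# Spanning tree: the D-A link is blocked to break the loop.
stp_tree = links - {("D", "A")}

print(hops(stp_tree, "A", "D"))  # 3 hops: A-B-C-D, the long way around
print(hops(links, "A", "D"))     # 1 hop: the direct A-D link stays usable
```

On this square, spanning tree triples the path length between A and D; on a large virtualized fabric, the wasted links and longer paths translate directly into stranded bandwidth, which is what TRILL and 802.1aq set out to reclaim.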

This was last published in September 2010