Will InfiniBand get new life in the quest for converged I/O?

Enterprises are desperately seeking answers for converged I/O to support converged data center networks. While many tout FCoE, others are looking toward the RoCE specification for running InfiniBand's RDMA traffic over an Ethernet backbone.

In case you thought the high-performance computing fabric InfiniBand was a dying technology – think again. This week at Interop 2010, much of the conversation has centered on strategies for implementing converged data center networks with unified storage and servers.

That debate went beyond the typical evangelizing of Fibre Channel over Ethernet (FCoE). In fact, a bit of buzz centered on what is essentially InfiniBand over Ethernet using Converged Enhanced Ethernet (CEE) or data center bridging technology.

InfiniBand has been used in high-performance computing environments for years and is attractive because of its very high speed (it already provides 40 Gbps) and low latency. The problem is that it's not Ethernet, which is already standardized, dominates enterprise networks, and promises the simplicity of managing a single technology across the data center.

RoCE specification brings InfiniBand into the mix for converged I/O

But earlier this month, the InfiniBand Trade Association (IBTA) finalized a new specification called Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE, pronounced "rocky"). RDMA lets servers read and write one another's memory directly across a shared fabric without passing through the host's TCP/IP network stack, which conventional Ethernet traffic requires. That means packets bypass the TCP/IP processing stage, where latency is typically introduced.
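To make that bypass concrete, here is a minimal, hypothetical sketch of how an application posts an RDMA write using the Linux libibverbs API; it is an illustration, not code from Mellanox or the IBTA. It assumes a connected queue pair, a locally registered memory region, and a remote address and key already exchanged out of band, and every name below is illustrative.

    /* Illustrative only: post a one-sided RDMA write with libibverbs.
     * Assumes qp is a connected queue pair, mr was returned by ibv_reg_mr(),
     * and remote_addr/rkey describe the peer's registered buffer. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                        void *local_buf, uint32_t len,
                        uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,  /* local source buffer */
            .length = len,
            .lkey   = mr->lkey,              /* local protection key */
        };

        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.wr_id               = 1;
        wr.opcode              = IBV_WR_RDMA_WRITE;  /* write straight into remote memory */
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.send_flags          = IBV_SEND_SIGNALED;  /* ask for a completion */
        wr.wr.rdma.remote_addr = remote_addr;        /* peer's registered address */
        wr.wr.rdma.rkey        = rkey;               /* peer's access key */

        /* The adapter moves the data itself; neither host's TCP/IP stack
         * nor its CPU touches the payload. */
        return ibv_post_send(qp, &wr, &bad_wr);
    }

The network adapter performs the transfer directly between registered buffers, which is exactly the step that keeps TCP/IP, and the CPU cycles it consumes, out of the data path.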

Using Converged Enhanced Ethernet (CEE) switches that enable data center bridging, enterprises can essentially make InfiniBand – or RDMA traffic – run over an Ethernet backbone. In fact, with data center bridging equipment, enterprises could feasibly run FCoE and InfiniBand traffic over the same Ethernet backbone.
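Because RoCE exposes the same RDMA verbs interface whether the wire underneath is InfiniBand or data-center-bridged Ethernet, applications generally do not need to change. The short program below is a hypothetical sketch, not vendor code, that uses libibverbs to list a host's RDMA devices and report which link layer each device's first port runs over.

    /* Illustrative only: enumerate RDMA devices and show their link layer.
     * On a RoCE-capable converged Ethernet NIC the link layer reports as
     * Ethernet, but the verbs API used is identical to native InfiniBand. */
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs)
            return 1;

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0) {   /* port 1 only, for brevity */
                const char *ll = "unspecified";
                if (port.link_layer == IBV_LINK_LAYER_ETHERNET)
                    ll = "Ethernet (RoCE)";
                else if (port.link_layer == IBV_LINK_LAYER_INFINIBAND)
                    ll = "InfiniBand";
                printf("%s: link layer %s\n", ibv_get_device_name(devs[i]), ll);
            }
            ibv_close_device(ctx);
        }
        ibv_free_device_list(devs);
        return 0;
    }

Run on a host with both a native InfiniBand HCA and a RoCE-capable Ethernet adapter, the same device list, and the same verbs calls shown earlier, cover both fabrics.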

The IBTA says the technology will not only bring the 40 Gbps speed InfiniBand already delivers but will also cost less in physical resources and energy consumption.

"TCP/IP in the data center uses up too much server space," said Bill Boas, of the IBTA. "InfiniBand requires half the servers."

RoCE products for converged I/O emerge

Mellanox Technologies, which makes InfiniBand switches and converged I/O adapters for converged data center networks, announced at Interop this week that its InfiniBand bridge and adapters now support the RoCE specification.

The BridgeX BX5020 works with the company's InfiniBand ConnectX adapters to enable computing clusters to share the same fabric among computing, networking and storage, regardless of the LAN or SAN infrastructure. That gives enterprises the ability to run InfiniBand or Fibre Channel traffic, or both, managed on a standardized Ethernet backbone.


The Mellanox converged I/O platform has four 40 Gbps server-switch-facing ports, up to 12 10 Gigabit Ethernet LAN- and iSCSI SAN-facing ports, and up to 16 1/2/4/8 Gbps Fibre Channel SAN-facing ports; it can mix Fibre Channel and Ethernet ports and promises latency as low as 200 nanoseconds between InfiniBand and Ethernet or Fibre Channel.

Mellanox and IBTA executives say InfiniBand is an obvious choice for any application that relies on speed down to the millisecond, such as financial trading and banking services, and, of course, the much sought-after cloud.

So at its Interop booth, Mellanox combined its InfiniBand bridge and adapters with IBM's BladeCenter HS22 to run super-speed IBM WebSphere low-latency messaging. It also displayed a combination of its gateway and adapters with Cisco's Nexus switching to run real-time New York Stock Exchange (NYSE) applications.

InfiniBand for converged data center networks not popular with everyone

But at an Interop session on converged and virtual I/O, Howard Marks, chief scientist and founder of Networks Are Our Lives Inc., was a little less than supportive of companies pushing RoCE, or InfiniBand-over-Ethernet bridging, as a long-term answer.

"[Supporters] think it should be as ubiquitous as Ethernet.… I would like to know what they're smoking, and I would like them to share," Marks said.

Companies that are buying into InfiniBand over Ethernet are those that are already invested in InfiniBand and want to maintain the technology as they transition to Ethernet, or those that have an extreme and immediate capacity need. To these companies, Mellanox will "sell a few of these every year," he quipped.

"The advantages are high bandwidth and low latency … 1 microsecond or less; 300 nanosecond latency," Marks said. "You can get high port density, 150 ports in a switch, but it's yet another network and yet another set of drivers."
