
Unified fabric: Data center bridging and FCoE implementation

IT organizations planning to deploy a unified storage and network fabric through data center bridging and FCoE implementation must consider a host of components and configuration choices, from switches to adapters.

In part 1 of this series on implementing a unified data center fabric, we discussed the technology basics of data center bridging and FCoE implementation. We now turn our attention to the actual deployment of a unified storage and network fabric.

Two basic types of components are needed to implement Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) in an end-to-end unified networking and storage fabric:

  • 10 Gb Ethernet fabric switches that support DCB and FCoE.
  • 10 Gb adapters that support Ethernet LAN and Fibre Channel SAN at the same time, often called Converged Network Adapters (CNAs) or unified adapters.
As with other technologies deployed in data centers, these basic components can be deployed in different ways and come with different options. For example, some DCB and FCoE switches offer optional 4 Gbps FC ports, while others offer optional 8 Gbps FC ports.

Three routes to DCB and FCoE implementation

There are three variations of DCB and FCoE implementation with respect to Fibre Channel storage:

  • A converged or unified adapter in the host server, connected to a DCB/FCoE switch that links through FC ports on a separate traditional FC switch to storage targets with traditional Fibre Channel (FC) adapters (Figure 1).
  • A converged or unified adapter in the host server, connected to a DCB/FCoE switch that links through its own FC ports directly to storage targets with traditional FC adapters (Figure 2).
  • A converged or unified adapter in the host server, connected through a DCB/FCoE switch to storage targets that themselves use converged or unified adapters (Figure 3).

Figure 1 – DCB/FCoE with existing FC infrastructure

Figure 2 – DCB/FCoE switch with FC ports

Figure 3 – Native DCB/FCoE end-to-end infrastructure
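As a concrete illustration of the host side of these topologies, the sketch below shows how FCoE might be brought up on a Linux server using the Open-FCoE tools (lldpad and fcoe-utils). The interface name eth2 is an assumption for illustration, and the steps assume a software FCoE initiator; a full-offload CNA typically negotiates DCB and performs FCoE encapsulation in firmware, so not all of these steps would apply.

```shell
# Sketch only: assumes the Open-FCoE stack (lldpad + fcoe-utils) and a
# DCB-capable 10 GbE interface named eth2 -- adjust for your system.

# Start the DCB/LLDP agent that negotiates PFC with the switch
service lldpad start

# Enable DCB and Priority Flow Control on the interface
dcbtool sc eth2 dcb on
dcbtool sc eth2 pfc e:1          # enable PFC so FCoE traffic is lossless
dcbtool sc eth2 app:fcoe e:1     # advertise the FCoE application TLV

# Tell fcoe-utils to manage this interface, then start the initiator
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2
service fcoe start

# Verify that an FCoE instance was created and targets are visible
fcoeadm -i                       # show FCoE interface/port information
fcoeadm -t                       # list discovered FC targets and LUNs
```

Whether the topology matches Figure 1, 2 or 3, the host-side view is the same: once the FCoE instance logs into the fabric, discovered LUNs appear as ordinary SCSI block devices.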

As storage vendors adopt FCoE, they will announce native FCoE-capable converged or unified adapters in their storage products. So far, one storage vendor, NetApp, has a native FCoE storage target available today, as shown in Figure 3; the other major storage vendors are expected to announce similar capabilities. In the interim, the major storage vendors are beginning to announce support for connectivity from hosts using converged or unified adapters, because they have qualified these newer adapters with their existing FC storage targets, as shown in Figure 1 and Figure 2. Part 1 of this article series discusses the similarities and interoperability between traditional FC and the newer FCoE.


Choosing the right adapters for unified storage and network fabric

Further, because the traditional NIC and FC HBA vendors are converging their technologies and competing in this new market, different types of adapters are now available or becoming available. These differing adapters may seem confusing at first, but they give users an interesting array of choices:

  • Adapters from the traditional FC HBA vendors that support full hardware offload for the FCoE functions in addition to 10 Gb Ethernet.
  • Adapters from the traditional NIC vendors that support 10 Gb Ethernet and have varying degrees of offload for the FCoE functions.
  • Adapters that support full hardware offload for iSCSI with 10 Gb Ethernet.

When it comes to host adapters, it would not be surprising to see native, converged 10 GbE/FCoE ports on server motherboards in the future; the NIC and HBA vendors are scrambling to get their designs adopted by the server vendors. For those using 10 Gb iSCSI storage, implementation is a bit simpler with DCB-capable infrastructure (Figure 3 shows the infrastructure design), and the only other consideration is how much offload function is desired in the server adapter.

The discussion of adapter offload functions becomes more interesting in virtualized server environments. Traditional FC HBAs have always had hardware offload, so FCoE offload is something traditional FC storage administrators are used to and probably expect. On the TCP/IP side, however, full offload of the TCP/IP stack and offload of iSCSI functions have been available but less commonly used. As this new DCB infrastructure matures and virtualized servers become more widely deployed, it may be worthwhile to investigate offload functions for the TCP/IP part of the infrastructure as well.
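One practical way to see which offload functions a given adapter and driver actually expose on a Linux host is ethtool. The sketch below is generic; the interface name eth2 is an assumption, and the exact feature names reported vary by driver.

```shell
# Sketch only: assumes a Linux host with ethtool installed; eth2 is a
# placeholder interface name.

# List the offload features the driver exposes and their on/off state
# (TCP segmentation offload, large receive offload, checksum offload, etc.)
ethtool -k eth2

# Show the adapter's driver name, version and firmware details
ethtool -i eth2

# Show negotiated link speed -- useful to confirm the 10 Gb link is up
ethtool eth2
```

Checking these before and after enabling virtualization-heavy workloads helps quantify whether the adapter, rather than the host CPU, is carrying the protocol processing load.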

About the author: Dennis Martin is the founder and President of Demartek, a computer industry analyst organization with its own on-site test lab. Demartek focuses on lab validation testing and performance testing of storage and related hardware and software products. Dennis has been working in the Information Technology industry since 1980, primarily involved in software development and project management in mainframe, UNIX, and Windows environments. These include a variety of large and small end-user customers, and engineering and marketing positions for storage vendors such as StorageTek.

This was last published in January 2010

