Two basic types of components are needed to implement Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) as an end-to-end unified networking and storage fabric:
- 10 Gb Ethernet fabric switches that support DCB and FCoE.
- 10 Gb adapters that support Ethernet LAN and Fibre Channel SAN at the same time, often called Converged Network Adapters (CNAs) or unified adapters.
As with other technologies deployed in data centers, these basic components can be deployed in different ways and come with different options. For example, some DCB/FCoE switches offer optional 4 Gbps FC ports, while others offer optional 8 Gbps FC ports.
Three routes to DCB and FCoE implementation
There are three variations of DCB and FCoE implementation with respect to Fibre Channel storage:
- A converged or unified adapter in the host server connects to a DCB/FCoE switch, which links to FC ports on a separate, traditional FC switch; the storage targets use traditional Fibre Channel (FC) adapters (Figure 1).
- A converged or unified adapter in the host server connects to a DCB/FCoE switch that also provides FC ports; the storage targets use traditional FC adapters (Figure 2).
- A converged or unified adapter in the host server connects through a DCB/FCoE switch to storage targets that themselves use converged or unified adapters (Figure 3).
Figure 1 – DCB/FCoE with existing FC infrastructure
Figure 2 – DCB/FCoE switch with FC ports
Figure 3 – Native DCB/FCoE end-to-end infrastructure
As storage vendors adopt FCoE, they will announce native FCoE-capable converged or unified adapters in their storage products. So far, one storage vendor, NetApp, has a native FCoE storage target available today, as shown in Figure 3; the other major storage vendors are expected to announce similar capabilities. In the interim, however, the major storage vendors are beginning to announce support for host connectivity using converged or unified adapters, because they have qualified these newer adapters with their existing FC storage targets, as shown in Figure 1 and Figure 2. Part 1 of this article series discusses the similarities and interoperability between traditional FC and the newer FCoE.
Choosing the right adapters for unified storage and network fabric
Because the traditional NIC and FC HBA vendors are converging their technologies and competing in this new market, several different types of adapters are now available or becoming available. These adapters may seem a bit confusing at first, but they give users an interesting array of choices:
- Adapters from the traditional FC HBA vendors that support full hardware offload for the FCoE functions in addition to 10 Gb Ethernet.
- Adapters from the traditional NIC vendors that support 10 Gb Ethernet and have varying degrees of offload for the FCoE functions.
- Adapters that support full hardware offload for iSCSI with 10 Gb Ethernet.
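For adapters without full hardware offload, FCoE can run as a software initiator on an ordinary 10 GbE NIC; on Linux this is handled by the open-fcoe (fcoe-utils) package. As a rough sketch of what enabling it involves (the interface name eth2 is a placeholder, and exact file names can vary by distribution), the per-interface configuration file looks something like this:

```shell
# /etc/fcoe/cfg-eth2 -- per-interface settings for the fcoe-utils
# software FCoE initiator ("eth2" is a placeholder interface name)

FCOE_ENABLE="yes"    # start the software FCoE initiator on this port
DCB_REQUIRED="yes"   # require DCB negotiation (lossless Ethernet) before FCoE login
```

With a configuration like this in place, the lldpad and fcoe services negotiate DCB on the link and then bring up the FCoE instance, trading some host CPU cycles for the cost savings of a plain NIC.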
When it comes to host adapters, it would not be surprising to see native, converged 10 GbE/FCoE ports on server motherboards in the future; the NIC and HBA vendors are scrambling to get their designs adopted by the server vendors. For those using 10 Gb iSCSI storage, implementation is a bit simpler with DCB-capable infrastructure (Figure 3 shows the infrastructure design), and the only other consideration is how much offload function is desired in the server adapter.
The discussion of offload functions in adapters becomes more interesting in virtualized server environments. Traditional FC HBAs have always had hardware offload, so FCoE offload is something traditional FC storage administrators are used to and probably expect. On the TCP/IP side, full offload of the TCP/IP stack and offload of iSCSI functions have been available in the past, but less common. As this new DCB infrastructure matures and virtualized servers become more widely deployed, it may be worthwhile to investigate offload functions for the TCP/IP portion of the infrastructure as well.
About the author: Dennis Martin is the founder and President of Demartek, a computer industry analyst organization with its own on-site test lab. Demartek focuses on lab validation testing and performance testing of storage and related hardware and software products. Dennis has been working in the Information Technology industry since 1980, primarily involved in software development and project management in mainframe, UNIX, and Windows environments. These include a variety of large and small end-user customers, and engineering and marketing positions for storage vendors such as StorageTek.