Data center-class switches: A buyer's guide
A collection of articles that takes you from defining technology needs to purchasing options
The best data center-class switch technologies can provide enterprise organizations with significant advantages in performance, availability and ease of management. But wading through the differences among vendor offerings can be a challenge. This guide walks through common scenarios and the categories that distinguish the leading data center-class switch products, so you can identify which option best fits your needs.
Although data center-class switches are currently used primarily by large enterprises, any company shopping for them should consider six major criteria:
- Physical requirements
- Converging networks
- Throughput capacity
- Power and cooling requirements
- Hypervisor virtual switch and SDN support
- Single vendor support
Each vendor addresses these criteria in its own way, although the differences are usually subtle. Most of them show up in the following areas, so we'll take a deeper dive into each:
- Physical port density and throughput
- Converged network and Fibre Channel (FC) support
- Virtual server integration and switch support
- End-to-end strategy and future proofing
Physical port density and throughput
Data center-class switches come in all shapes and sizes in terms of physical port density. But different architectures dictate how many physical ports of a particular transport type are possible. Many data center-class switches are based on blade chassis, which accept various line cards with different port types and densities.
Among the largest are switches from Cisco, Juniper Networks and Arista Networks. The 16-slot Cisco Nexus 9500, for example, can house up to 2,304 10 Gigabit Ethernet (GbE) ports or 576 40 GbE ports. At one time, this density was unrivaled, but Juniper's QFX10016 can now match those numbers, and the Arista 7300 series comes close. Additionally, both Juniper and Arista boast line card throughputs of around 6 Tbps, while Cisco's line cards sport throughputs of just under 4 Tbps. The Arista 7328 switches also offer 25 GbE and 50 GbE interfaces, as do Juniper's QFX5200 switches.
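These per-line-card throughput figures are easy to sanity-check with simple arithmetic. The sketch below assumes a hypothetical 36-port 40 GbE line card (576 ports spread evenly across 16 slots -- an assumption, not a vendor spec) and counts both directions of a full-duplex link:

```python
# Sanity-check line card throughput. 36 x 40 GbE per card is an assumption
# derived from 576 ports spread across 16 chassis slots.
ports_per_card = 36
port_speed_gbps = 40

# Full duplex counts transmit + receive.
card_throughput_tbps = ports_per_card * port_speed_gbps * 2 / 1000
print(card_throughput_tbps)    # 2.88

# A hypothetical 36-port 100 GbE card by the same math:
print(36 * 100 * 2 / 1000)     # 7.2 -- denser cards land in the multi-Tbps
                               # range the vendors quote
```

The same arithmetic makes it easy to compare line cards across vendors on a like-for-like basis, rather than relying on headline figures alone.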
Achieving maximum bandwidth depends on factors including available fabric modules and the type of supervisor module used. Latency also plays a key role, as it governs one of the most important characteristics of any data center switch -- how long it takes to switch frames and packets. That's where companies like Arista and Extreme Networks seem to shine. Arista is very popular in the high-frequency trading industry, as its ultra-low latency switches can switch frames and packets in around 250 nanoseconds (ns). Extreme's X770 switch is engineered for latency of less than 600 ns when combined in a fabric anchored by the vendor's BlackDiamond X8 core switches; other data center-class switch products offer latency in the microsecond range. Cisco spotted this lucrative niche market and began producing specialty Nexus switches with Algo Boost technology that now rival Arista's and Extreme's offerings. If low latency is a crucial part of your data center strategy, it would be wise to look into these niche products from Cisco, Arista and Extreme.
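To put those nanosecond figures in perspective, compare them with the time a minimum-size Ethernet frame spends on the wire. A 64-byte frame plus its standard 8-byte preamble and 12-byte inter-frame gap occupies 672 bit-times, and the arithmetic is straightforward:

```python
# Serialization delay of a minimum-size Ethernet frame vs. quoted switch latencies.
# Standard overhead: 64-byte frame + 8-byte preamble + 12-byte inter-frame gap.
WIRE_BYTES = 64 + 8 + 12          # 84 bytes on the wire
bits = WIRE_BYTES * 8             # 672 bits

for gbps in (10, 40):
    ns = bits / gbps              # bits / (Gbit/s) works out to nanoseconds
    print(f"{gbps} GbE: {ns:.1f} ns per minimum-size frame")

# At 10 GbE a frame serializes in 67.2 ns, so a 250 ns switch adds only a few
# frame-times of delay, while a multi-microsecond switch adds dozens.
```

That gap of one or two orders of magnitude is exactly what the high-frequency trading buyers are paying for.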
If physical port density and performance are the most important requirements, Arista, Cisco and Juniper are great choices. These are the top vendors that support the highest port density with per-line card performance above 3 Tbps.
Converged networks and Fibre Channel support
Legacy storage networks commonly were deployed as separate physical networks. The storage area network (SAN) used Fibre Channel protocols and switching hardware, while local area network (LAN) traffic used standard Ethernet protocols and hardware. Recent advancements in protocols and hardware now allow for SAN and LAN traffic to use the same physical cabling and switch equipment.
Fibre Channel connectivity can come in native form or through the Fibre Channel over Ethernet (FCoE) interface. If you need one or both, keep in mind that not all switch vendors and models fully support the protocols. Arista, for example, has no Fibre Channel support. So, if you need Fibre Channel in any capacity, you should look elsewhere. Other companies like Cisco, HP, Huawei (a major vendor in Europe and Asia) and Juniper support both Fibre Channel types on some of their data center-class switches.
For example, the Juniper QFX3500 and QFX10000 series, as well as the Huawei CloudEngine 6800 series switches, support Fibre Channel ports known as FCoE-FC gateways. This means the physical interface can speak either native Fibre Channel or Ethernet-encapsulated FCoE. With the Cisco Nexus 5000 series, small form-factor pluggable interfaces can be either standard Ethernet or FCoE, and separate, dedicated Fibre Channel modules can be added to the switch as required. In May 2015, Cisco bolstered its support of converged networks with a 96-port 16 Gbps Fibre Channel switch and 40 GbE support across its Nexus 7700 and 7000 switches.
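Under the hood, what makes this convergence possible is that FCoE traffic is just Ethernet with a distinct EtherType: 0x8906 for FCoE data frames and 0x8914 for the FCoE Initialization Protocol (FIP). A converged switch classifies frames on that field. The sketch below is purely illustrative (not any vendor's API) and ignores 802.1Q VLAN tags for simplicity:

```python
import struct

ETH_P_FCOE = 0x8906   # FCoE data frames (T11 FC-BB-5)
ETH_P_FIP = 0x8914    # FCoE Initialization Protocol

def ethertype(frame: bytes) -> int:
    """Read the EtherType from an untagged Ethernet header (bytes 12-13)."""
    (etype,) = struct.unpack_from("!H", frame, 12)
    return etype

# Build a skeletal untagged frame: zeroed dst and src MACs, then the EtherType.
# The encapsulated Fibre Channel payload is elided.
frame = bytes(6) + bytes(6) + struct.pack("!H", ETH_P_FCOE)
print(ethertype(frame) == ETH_P_FCOE)   # True -> steered to the FC fabric
```

Because the storage traffic is identified per frame, SAN and LAN flows can share cabling and switch hardware while still being handled by separate forwarding logic.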
Alternatively, for users heavily invested in native Fibre Channel and planning to keep it, Brocade is the clear winner. Brocade offers just about every implementation of Fibre Channel -- including support for 16 Gbps Gen 5 native Fibre Channel -- and also reportedly has a 32 Gbps Gen 6 offering in the works. Choosing the right vendor could be crucial if you are planning to converge your LAN and SAN networks in the foreseeable future.
Virtual server integration
Server virtualization platforms like VMware vSphere, Microsoft Hyper-V and KVM each have their own form of virtual switch, which performs switching functions between virtual machines that reside on the same physical host. For some organizations, the built-in virtual switch tools are sufficient. But enterprise-class companies that implement more granular controls on their switch ports often want to manage virtual switches the same way they manage their physical data center-class switches. Newer data center designs also call for a Layer 2 network overlay that spans multiple data center switches. One very popular method is VMware's NSX platform, and many data center switch vendors now support NSX overlays to varying degrees.
In terms of virtual switch support, some vendors, including Brocade and Huawei, only allow administrators to use the built-in capabilities found in the base hypervisor software package. Cisco, however, offers its Nexus 1000v virtual switch software. This software essentially replaces the virtual switching systems in either VMware vSphere or Microsoft Hyper-V settings. Hewlett Packard Enterprise attempts to overlay VMware's vSwitch software with its own proprietary software. This allows administrators to manage the switch as if it were a virtual HPE device as opposed to a VMware virtual switch.
Newer technologies such as Cisco's Application Centric Infrastructure (ACI) and VMware's NSX represent the next generation of data center capabilities and the two dominant software-defined networking (SDN) architectures. With NSX, you build a Layer 2 overlay across Layer 3 data center switches to flatten networks and decrease complexity. The use of Virtual Extensible LANs (VXLANs) is prevalent in these architectures. A wide range of vendors -- including Arista, Dell, HPE and Juniper -- have data center switches that support the NSX platform and VXLAN implementations.
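VXLAN itself is a simple encapsulation, defined in RFC 7348: an 8-byte header carried over UDP port 4789, whose 24-bit VXLAN network identifier (VNI) supports roughly 16 million segments versus the 4,096 of traditional VLANs. A minimal sketch of the header layout:

```python
import struct

VXLAN_UDP_PORT = 4789       # IANA-assigned UDP port for VXLAN (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte with the I bit set (0x08),
    24 reserved bits, the 24-bit VNI, and a final reserved byte."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit field"
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5000)
print(len(hdr))         # 8
print(hdr.hex())        # 0800000000138800
```

Everything after this header is the original Layer 2 frame, which is why VXLAN lets a virtual Layer 2 network ride transparently over a routed Layer 3 fabric.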
Cisco is paving its own way with ACI. ACI primarily differs from NSX in the fact that ACI doesn't create a virtual Layer 2 network over the top of a physical Layer 3 network. Instead, ACI works on a per-application basis -- with network resources allocated accordingly, rather than carving up a network with virtual switches.
Virtual switch integration, and now SDN, are both becoming incredibly important in today's data centers. If you've not yet jumped onto the SDN bandwagon, then Cisco's 1000v offers the most flexibility from a virtual switch support standpoint. The NSX versus ACI battlefront is still unfolding, but VMware, due to its ability to virtualize data centers equipped with hardware from multiple vendors, may have an early advantage.
End-to-end strategy and future proofing
The final aspect to consider is the value of an end-to-end approach: Unified design and ease of troubleshooting are distinct benefits of a single-vendor strategy. Practically speaking, only Cisco, HPE and Huawei have hardware and software portfolios that span the entire data center. Of these, Cisco has perhaps the most well-thought-out, long-term, end-to-end strategy.
One thing to keep in mind here is the movement toward software-defined networking architectures. The core concept of SDN is to speed up and simplify the deployment of data center applications by centralizing and automating various infrastructure policies -- from switching and routing to access control and storage allocation. An SDN implementation will only be successful if all network infrastructure components speak the same underlying protocol. Many are banking on an open source SDN platform to reign supreme, while others are betting that a mixture of quasi-proprietary and open source technologies will prove more advantageous.
Either way, you have to look at the various strategies of the vendors you're implementing in your data center to ensure you are future proofing your investments when SDN becomes the standard. The majority of data center hardware/software vendors (including Arista, Dell, Extreme, HPE and Juniper) are building their SDN strategies based on the popular OpenFlow protocol. By using the same underlying protocol with strict standards, other OpenFlow native software and devices should be interoperable. But as we all know, interoperability sometimes doesn't work as advertised.
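That interoperability hinges on a shared wire format. As a minimal illustration (taken from the OpenFlow specification, not any vendor's SDK), every OpenFlow message begins with the same 8-byte big-endian header; the sketch below builds a version 1.3 HELLO message:

```python
import struct

OFP_VERSION_1_3 = 0x04      # wire version number for OpenFlow 1.3
OFPT_HELLO = 0              # HELLO is message type 0 in every OpenFlow version

def ofp_header(msg_type: int, xid: int, body: bytes = b"") -> bytes:
    """Every OpenFlow message starts with this 8-byte header:
    version, type, total length, transaction ID (all big-endian)."""
    return struct.pack("!BBHI", OFP_VERSION_1_3, msg_type, 8 + len(body), xid) + body

hello = ofp_header(OFPT_HELLO, xid=1)
print(len(hello))       # 8
print(hello.hex())      # 0400000800000001
```

Because any controller and any switch that implement this format can parse each other's messages, mixing OpenFlow-native gear from different vendors is possible in principle -- which is precisely the promise, and the risk, discussed above.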
In contrast, Cisco released OpFlex, a protocol it designed to be open and standards-based. While OpFlex is technically not proprietary, competitors are already heavily invested in OpenFlow, and that commitment likely won't change in the near future.
For now, Cisco stands alone in basing its SDN platform on a different standard. Some could argue that VMware's NSX is also a true competitor to Cisco's ACI, but many see it as a stopgap until OpenFlow matures. Cisco offers a fairly robust application programming interface (API) through which third parties can tap into the infrastructure to add functionality tailored to specific business needs. While the company was slow to join the SDN bandwagon, it appears to have spent a great deal of time figuring out and explaining what SDN in a Cisco network will look like. The question you have to answer when choosing your next-generation network infrastructure is whether to go with the world's leading network supplier and risk vendor lock-in, or go with another supplier and risk interoperability issues and unfinished SDN roadmaps.
Plenty of different data center-class switch vendors and models are available for just about any data center. For most tasks, they operate similarly. But there are distinct differences in capacity, performance, convergence, end-to-end operability and future proofing that should be addressed before choosing a particular system. When making your decision, consider your current and future needs and determine which differentiators are most important to you.