
The switch to 400 GbE may be closer than you think

Several dynamics are driving the need for 400 GbE infrastructures, including the increase in server density, private cloud applications and network edge data.

Large hyperscale cloud providers initially championed 400 Gigabit Ethernet because of their endless thirst for networking bandwidth. Like so many other technologies that start at the highest end with the most demanding customers, the technology will eventually find its way into regular enterprise data centers.

Most enterprise networks primarily use 100 GbE for their backbone and leaf-spine infrastructure, with 10 GbE and 25 GbE switches further down the stack. Because these are production environments, customers are hesitant to change anything, either because the equipment has not fully depreciated yet or because applications are not hitting bandwidth limits.

On the surface, if customers are not saturating their 100 GbE infrastructures today, there would seem to be little demand for a fourfold increase in bandwidth. But, for other reasons, 400 GbE may still be headed toward enterprise data centers in the future.

What's driving the need for 400 GbE?

Some large changes in enterprise computing are driving the need to consider 400 GbE infrastructure. For instance, consider the following:

  • Servers are more packed in than ever. Whether it is hyper-converged, blade, modular or even just dense rack servers, the density is increasing, and nearly every server now features dual 10 Gb, or even 25 Gb, network interface cards (see the sketch after this list).
  • Network storage is moving away from Fibre Channel and toward Ethernet, increasing the demand for high-bandwidth Ethernet capabilities.
  • The increase in private cloud applications and virtual desktop infrastructure puts additional demands on networks as more compute is happening at the server level instead of at the distributed endpoints.
  • Finally, IoT and massive data accumulation at the edge are increasing bandwidth requirements for the network.
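To get a feel for how quickly these drivers add up, here is a minimal sketch of the rack-level arithmetic. The server count and NIC speeds are illustrative assumptions, not figures from the article.

    # Rough estimate of server-facing bandwidth in one rack.
    # Server count and NIC speeds are illustrative assumptions.
    SERVERS_PER_RACK = 40   # dense 1U rack servers (assumed)
    NICS_PER_SERVER = 2     # dual-port NICs for redundancy
    NIC_SPEED_GB = 25       # 25 GbE ports; use 10 for 10 GbE NICs

    potential_gb = SERVERS_PER_RACK * NICS_PER_SERVER * NIC_SPEED_GB
    print(f"Potential server-facing traffic per rack: {potential_gb} Gb")
    # 40 servers x 2 ports x 25 Gb = 2,000 Gb of potential demand on the
    # top-of-rack switch, which is what pushes uplinks toward 400 GbE.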

All these dynamics point to the need for the greater network bandwidth that 400 GbE can provide. But businesses won't start ripping out their existing core infrastructure and rewiring their data centers anytime soon. More likely, we'll see a phase-in for 400 GbE in the leaf and spine where more bandwidth density can help relieve crowded aggregation networks.


How 400 GbE simplifies cabling

400 GbE can be split via a multiplexer into smaller increments, with the most popular options being 2 x 200 Gb, 4 x 100 Gb or 8 x 50 Gb. As these higher-speed connections raise the bandwidth available per port at the aggregation layer, port density requirements drop and cabling becomes simpler.
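As a minimal sketch of that breakout arithmetic, the snippet below enumerates the three split options mentioned above; the dictionary and function names are hypothetical and not vendor configuration syntax.

    # Common 400 GbE breakout options: each splits one physical 400 Gb
    # port into several lower-speed logical ports.
    BREAKOUT_MODES = {
        "2x200G": (2, 200),
        "4x100G": (4, 100),
        "8x50G": (8, 50),
    }

    def describe_breakout(mode: str) -> str:
        ports, speed_gb = BREAKOUT_MODES[mode]
        return f"{mode}: {ports} logical ports at {speed_gb} Gb ({ports * speed_gb} Gb total)"

    for mode in BREAKOUT_MODES:
        print(describe_breakout(mode))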

As an example, one of the most common leaf switches in leaf-spine deployments today is the Cisco Nexus 9300 Series, which features six 100 Gb uplinks for 600 Gb of aggregate upstream bandwidth. A deployment with two top-of-rack switches for redundancy therefore requires 12 upstream 100 Gb links -- or only three 400 Gb links.
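The uplink math can be checked with a short calculation; the 1,200 Gb aggregate comes directly from the two switches with six 100 Gb uplinks each described above.

    import math

    def uplinks_needed(aggregate_gb: int, link_speed_gb: int) -> int:
        """Minimum number of uplinks of a given speed to carry the aggregate."""
        return math.ceil(aggregate_gb / link_speed_gb)

    aggregate_gb = 2 * 6 * 100                 # two ToR switches x six 100 Gb uplinks
    print(uplinks_needed(aggregate_gb, 100))   # 12 links at 100 GbE
    print(uplinks_needed(aggregate_gb, 400))   # 3 links at 400 GbE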

Though the top-of-rack switches at the leaf level may stay the same, a 4-to-1 reduction in connections to the spine will reduce port counts and leave more room for future expansion. Most importantly, it will bring new breathing room to infrastructures that are starting to feel the pinch, whether from space or port availability.

Double down on density

While port density at the aggregation level may be an important driver for 400 GbE, there is another area where density matters as well. The Quad Small Form Factor Pluggable (QSFP) Ethernet transceivers that customers use -- like QSFP+, QSFP28 or QSFP56 -- will not support 400 GbE bandwidth.

New Quad Small Form Factor Pluggable Double Density (QSFP-DD) transceivers are now being introduced and are backward-compatible with existing QSFP cages. When plugged into older systems, they will only engage with one set of contacts, giving four I/O lanes, but will provide eight lanes and 400 Gb in newer systems.
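One way to see where the extra capacity comes from is lane arithmetic. The per-lane rates below reflect common implementations (for example, 400 GbE carried as eight 50 Gb lanes) and are stated here as assumptions rather than the only configurations the standards allow.

    # Lane arithmetic behind "double density": form factor -> (lanes, Gb per lane).
    TRANSCEIVERS = {
        "QSFP28": (4, 25),    # 4 lanes x 25 Gb = 100 GbE
        "QSFP56": (4, 50),    # 4 lanes x 50 Gb = 200 GbE
        "QSFP-DD": (8, 50),   # 8 lanes x 50 Gb = 400 GbE
    }

    for name, (lanes, per_lane_gb) in TRANSCEIVERS.items():
        print(f"{name}: {lanes} lanes x {per_lane_gb} Gb = {lanes * per_lane_gb} GbE")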

The other primary transceiver is the new Octal Small Form Factor Pluggable (OSFP), but those devices will not be compatible with existing QSFP infrastructure.

The need for more bandwidth

In summary, as technology marches forward, networking always moves to the next generation of higher bandwidth, and 400 GbE is the next stop on that journey.

Because new server form factors and data center applications demand more bandwidth, this new technology will make its way from cloud data centers -- where it was adopted rapidly -- to the enterprise data centers that tend to be more conservative.

The backward-compatibility and multiplexing capabilities of the transceivers will enable businesses to introduce 400 GbE into the data center as needed, initially at the spine level and eventually down to the leaf level. All of this points to 400 GbE making its way into the data center soon, so it's time to start thinking about where it makes sense for your business.
