Will we ever need 400 Gigabit Ethernet enterprise networks?

With 40 and 100 Gigabit Ethernet so new, what could possibly drive the need for 400 Gigabit Ethernet in the enterprise? Hint: It’s not just the WAN.


Though 10 Gigabit Ethernet (GbE) is just at the beginning of widespread uptake, there’s already plenty of talk in the market about the need for 40 and 100 GbE in the enterprise. In the middle of all this, do we need to consider the potential for 400 Gigabit Ethernet in the WAN and LAN? The answer is, plainly, yes.

The IEEE has just formed a working group to determine whether there will be a need for 400 GbE or Terabit Ethernet development. But there are already looming use cases, even in the enterprise LAN, that will drive the need for 400 Gigabit Ethernet.

In general, high-speed standards like 400 GbE or even Terabit Ethernet are applied deeper in the network where traffic has been aggregated. For now, computer and storage interface standards define transfer rates well below 100 GbE, so that won’t cause demand. But since the edge of the network is getting faster, it would be sensible to assume that this higher edge traffic would force up speeds in the metro/core aggregation network as well.

Modern networks, whether they span the globe or only the data center, are built on a hierarchy of devices, with “deeper” switches or routers networking the edge devices together for full connectivity. This has encouraged the use of successively faster trunks as port speeds have increased. The trend makes it logical to assume that future LAN switches or WAN switch/routers will need 400 GbE or faster. More traffic, more capacity. But traffic doesn’t impact the WAN or LAN uniformly—and therefore the needs may be vastly different in the two network types.

In the WAN, think faster Ethernet for optical switching

In the WAN, the largest source of additional traffic in the future network will likely be content. That content is increasingly cached in content delivery networks (CDNs) or in content farms in each metro area. This means that while the growth of access bandwidth is likely to continue, and this increased edge capacity will demand greater capacity in the metro area for aggregation, the majority of content traffic will stop in the metro cache and never reach the core at all. In that case, within a metro there's less traffic to aggregate, which means there's less pressure on Ethernet performance.

It’s likely that there will be greater need for faster Ethernet interfaces on optical switches and optical point-to-point paths in metro networking. Almost all traffic in a metro network goes one place—the metro concentration point, where it’s connected to a cache, a data center or the core network. Ethernet routing or switching based on destination address is hardly useful when all the traffic is headed in the same direction.

Even when traffic does reach the core, which would be the case for business traffic and especially video conferencing, that traffic may not drive the kind of changes everyone expects at the equipment level. Operators have been looking for alternatives to adding more layers of switch/router aggregation, including the optical transport network (OTN) or the routerless core. In both of these alternative network architectures, metro areas are connected to each other in a mesh rather than to deeper-level electrical devices like routers (hence the term routerless). Because traffic on each connection is limited to what flows between a pair of metro areas, it's less likely these connections would demand a radical increase in Ethernet speeds. However, even when traffic growth does drive faster Ethernet interfaces, they're again more likely to be used on optical devices than on Ethernet switches or routers.

Surprise, surprise: 400 Gigabit Ethernet needed for the LAN

In the LAN, it’s harder to discard the notion that uniform connectivity is needed. Cloud computing encourages the creation of dense, highly connected data centers. Cloud applications are often more componentized and horizontally integrated than traditional applications, which makes traditional multi-tiered LAN switching performance more problematic. In a cloud data center, even 10 GbE server/storage interfaces connected in a four- or five-layer structure might drive switch interface speeds to 400 GbE or more in the deeper layers.
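The arithmetic behind that claim is easy to sketch. The model below is a rough back-of-the-envelope illustration, not figures from the article: the port counts and oversubscription ratio are hypothetical, and it simply multiplies the fan-in at each switching tier to estimate how much aggregate traffic the deeper layers must carry.

```python
# Back-of-the-envelope aggregation math for a multi-tier LAN.
# All port counts and oversubscription ratios are hypothetical
# illustrations, chosen only to show how quickly fan-in compounds.

def uplink_demand(edge_gbps, downlinks_per_switch, oversubscription, tiers):
    """Estimate the aggregate traffic a switch at each successive tier
    must carry upward, given an edge port speed (Gbps), the number of
    downlinks feeding each switch, and an allowed oversubscription
    ratio per tier. Returns a list of (tier, Gbps) pairs."""
    demand = edge_gbps
    results = []
    for tier in range(1, tiers + 1):
        demand = demand * downlinks_per_switch / oversubscription
        results.append((tier, demand))
    return results

# Example: 10 GbE servers, 16 downlinks per switch, 3:1 oversubscription.
for tier, gbps in uplink_demand(10, 16, 3, 4):
    print(f"tier {tier}: ~{gbps:.0f} Gbps of aggregate uplink")
```

Even with generous 3:1 oversubscription at every tier, the uplink demand passes 400 Gbps by the third tier of this hypothetical hierarchy, which is the intuition behind expecting 400 GbE in the deeper layers.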

While intra-data-center connectivity seems to be migrating to fabric switching and away from traditional stacks of LAN switches, that may not preclude the need for faster Ethernet such as 400 GbE. Fabric switches today either have no internal trunk connections or base their connections on standards other than Ethernet. But when clouds are created by linking multiple data centers over fiber, a super-Ethernet connection is almost inevitable.

In that case, we need faster Ethernet. Faster Ethernet will connect cloud data centers and will support optical metro aggregation and OTN-based cloud core networks. While these missions are more limited than that of 1 GbE, for example, they’ll be of paramount importance to network design in the future, and so it is very likely that the race for faster Ethernet will continue.

Next Steps

IEEE explores new standards for ultra-fast Ethernet

40 Gigabit Ethernet in the data center: Migration best practices

Research institute deploys 100 Gigabit Ethernet from core to closet

This was last published in November 2012


Join the conversation



What does high-capacity mobile (smartphones and tablets) that supports 1-way and 2-way video and constant cloud syncing do to your model? Does the WAN become "supra-metro"?
Seems the same question was asked in the mid 1990s when the idea of Gigabit Ethernet seemed so... huge! What happens is that parallel communications of larger-bandwidth apps fill any spaces that develop. If there is bandwidth, guaranteed the applications will take advantage of it, to the point where 400 Gb will seem pokey by comparison. Give it time, it will happen ;).