The use cases for 400 Gigabit Ethernet are, for the moment, confined to the extreme high end. As one might expect, the early adopters will be cloud-scale providers. The standard -- slated for final approval in 2017 -- is based on multiple 50 Gbps lanes grouped together.
1. The increase in bandwidth -- potentially over existing fiber plants -- reduces the cost per gigabit shipped. This is an important factor in calculating how many users or tenants a given infrastructure can support. These savings will ultimately be passed on to the customer, although perhaps not immediately. (A back-of-envelope sketch of the cost-per-gigabit arithmetic follows this list.)
2. The increased density these subterabit connections deliver will improve data center efficiency. Because fewer discrete fiber pairs are required to support the same bandwidth, there is less hardware to cool. This may mean a reduction in cooling requirements -- less energy is required to cool the same infrastructure -- but, more likely, it will allow data center planners to design denser facilities.
3. The reduction in contention caused by oversubscription may make public clouds more dependable and less susceptible to the gremlins associated with running on someone else's computer. This will further validate the cloud and hybrid cloud model, and the performance and latency gap between cloud and on-premises systems will continue to narrow. (The oversubscription arithmetic is sketched after this list.)
4. The potential reduction in latency offered by these subterabit connections will appeal to those working in high-frequency trading and other time-critical environments. Indeed, 400 Gigabit Ethernet (GbE) implementations will reduce the time in flight for a given transaction. But this increase in speed comes with its own caveats: such high data rates increase the need for forward error correction, and FEC adds latency of its own, eroding some -- or all -- of the improvement. For now, FEC appears to be optional; the individual implementation will determine whether it is necessary or enabled. (A rough serialization-delay sketch follows this list.)
5. The biggest -- and possibly most obvious -- advantage of a production-ready implementation of the 400 GbE standard will be the halo effect. Existing 100 GbE and 40 GbE implementations will cease to be state of the art, which will inevitably push their costs further down the infrastructure food chain. This cost pressure will allow enterprises to weed out bottlenecks that would otherwise be too costly to address.
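To make the cost-per-gigabit point in item 1 concrete, here is a minimal back-of-envelope sketch in Python. The port prices are hypothetical placeholders chosen only to illustrate the division; they are not vendor figures, and the real ratio will depend on optics, switch silicon and the fiber plant.

```python
# Back-of-envelope cost per gigabit: total port cost divided by total
# bandwidth delivered. All prices below are hypothetical placeholders.

def cost_per_gbps(total_cost_usd: float, total_gbps: float) -> float:
    """Cost of the links divided by the bandwidth they deliver."""
    return total_cost_usd / total_gbps

# Four discrete 100 GbE ports versus one 400 GbE port (assumed prices).
four_by_100g = cost_per_gbps(total_cost_usd=4 * 2_000, total_gbps=4 * 100)
one_by_400g = cost_per_gbps(total_cost_usd=1 * 6_000, total_gbps=1 * 400)

print(f"4 x 100 GbE: ${four_by_100g:.2f} per Gbps")  # $20.00 per Gbps
print(f"1 x 400 GbE: ${one_by_400g:.2f} per Gbps")   # $15.00 per Gbps
```

As long as the single fat pipe costs less than four times the thin ones -- the usual pattern as a standard matures -- the cost per gigabit shipped falls, and that is the multiplier that determines how many tenants the same plant can carry.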
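Item 3's contention argument ultimately comes down to the oversubscription ratio: the aggregate bandwidth servers can offer a leaf switch versus the uplink bandwidth that leaf has toward the spine. A minimal sketch follows, assuming a hypothetical 48-port leaf switch; the port counts are illustrative, not a reference design.

```python
# Oversubscription ratio: aggregate downlink bandwidth divided by
# aggregate uplink bandwidth. Port counts below are hypothetical.

def oversubscription_ratio(downlinks: int, downlink_gbps: float,
                           uplinks: int, uplink_gbps: float) -> float:
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# 48 x 25 GbE server ports with 4 x 100 GbE uplinks: 3:1 oversubscribed.
print(oversubscription_ratio(48, 25, 4, 100))  # 3.0
# The same leaf with 4 x 400 GbE uplinks: 0.75:1, i.e. no contention.
print(oversubscription_ratio(48, 25, 4, 400))  # 0.75
```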
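The trade-off in item 4 can also be put in rough numbers. Serialization delay -- the time needed to clock a frame onto the wire -- scales inversely with line rate, while FEC adds a roughly fixed per-hop cost. The FEC latency below is an assumed round figure for scale only; real values vary by PHY implementation.

```python
# Serialization delay per frame at different line rates, plus an assumed
# flat FEC latency. A Gbps is one bit per nanosecond, so bits / Gbps = ns.

FRAME_BYTES = 1500            # a full-size Ethernet payload
ASSUMED_FEC_LATENCY_NS = 100  # hypothetical per-hop FEC cost, for scale

def serialization_ns(frame_bytes: int, rate_gbps: float) -> float:
    """Nanoseconds to clock one frame onto the wire."""
    return (frame_bytes * 8) / rate_gbps

for rate in (100, 400):
    base = serialization_ns(FRAME_BYTES, rate)
    print(f"{rate} GbE: {base:.0f} ns on the wire, "
          f"{base + ASSUMED_FEC_LATENCY_NS:.0f} ns with FEC")
# 100 GbE: 120 ns on the wire, 220 ns with FEC
# 400 GbE: 30 ns on the wire, 130 ns with FEC
```

Under these assumptions, the jump from 100 GbE saves 90 ns of serialization per full-size frame, while the assumed FEC cost is larger than that saving -- which is why whether FEC is required or enabled matters so much to latency-sensitive deployments.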
For now, of course, 400 Gigabit Ethernet does not really exist beyond proof-of-concept lab and test equipment. However, its destiny is already set, along with the path it will carve through the industry. The first customers are no doubt already waiting, checkbook in hand. What is less certain is what lies beyond 400 GbE and how, from an engineering perspective, we get there.