Upgrades to 40 Gigabit Ethernet are now underway. For now, early adoption is concentrated in clouds and in large service-provider, corporate, educational and government data centers, but widespread adoption is sure to follow. Just as advances in IT have created the demand for each earlier Ethernet standard, continuing improvements will inevitably drive 40 GbE upgrades in smaller data centers and network backbones.
Increases in server CPU clock rates, along with chips containing multiple processor cores, add up to a continuing increase in server compute capability. Virtualization, in addition, raises the fraction of server capacity that can be used effectively. Together, these factors increase the network traffic generated by each server. The result is that newly installed servers are being configured with 10 GbE interfaces instead of the 1 GbE interfaces found on many existing servers.
Server and workstation 40 GbE network interfaces are available, but they have not been widely installed. Usage to date has been limited to research or other facilities that work with extremely large data sets.
Ripple effect fueled by network traffic and application structure
Ballooning server network traffic causes a ripple effect throughout the network. Installing servers with 10 GbE interfaces means access layer switch interfaces must also be upgraded to 10 GbE. Upstream links to the core and to other aggregation devices must then be upgraded to 40 GbE or, in some networks, to 100 GbE.
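To see why the ripple propagates upward, consider the oversubscription ratio at a top-of-rack switch. The sketch below is a hypothetical back-of-envelope calculation; the server counts, per-port rates and uplink counts are illustrative assumptions, not figures from this article:

```python
# Illustrative back-of-envelope: top-of-rack uplink oversubscription.
# All counts and rates below are hypothetical assumptions.

def oversubscription(servers, server_gbps, uplinks, uplink_gbps):
    """Ratio of worst-case downstream demand to total uplink capacity."""
    return (servers * server_gbps) / (uplinks * uplink_gbps)

# A rack of 40 servers, each upgraded to a 10 GbE interface.
# With four 10 GbE uplinks, the ratio is 10:1 -- heavily oversubscribed.
print(oversubscription(40, 10, 4, 10))   # 10.0
# Replacing the uplinks with four 40 GbE links brings it to 2.5:1.
print(oversubscription(40, 10, 4, 40))   # 2.5
```

The same arithmetic repeats at the aggregation layer, which is why upgrading server ports pushes upstream links toward 40 GbE or 100 GbE.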
Changes in application structure and virtualization also require increased network capacity between racks within the data center. In the past, a single server usually handled an application's execution. In that model, traffic flowed from the end user across the network backbone, into the data center core and then to the server on which the application executed. There was little need for traffic to move between racks.
Today, applications often comprise multiple components. Each component may execute on any server in the data center, so traffic among components often must travel between racks, creating the need for greater bandwidth within the data center. As networks are updated to support this traffic, 40 GbE links will be required to support current and future requirements.
Virtualization, big data add stress to network traffic flows
Virtualization, meanwhile, adds load of its own. A virtual machine (VM) can move quickly from an overloaded server to another, less heavily loaded one. Because the destination server may sit in a different rack, moving the VM means pushing the entire image through the aggregation layer and possibly through the core. And application components that ran in VMs within the same rack before the move may be spread across racks afterward, turning what was intra-rack traffic into inter-rack traffic.
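The cost of such a move is easy to approximate. This sketch estimates the raw wire time to ship a VM memory image across the network; the 16 GB image size is a hypothetical assumption, and protocol overhead and dirty-page retransmission are ignored:

```python
# Illustrative: seconds of wire time to move a VM memory image.
# The 16 GB image size is a hypothetical assumption; overhead and
# dirty-page retransmission during live migration are ignored.

def transfer_seconds(image_gigabytes, link_gbps):
    bits = image_gigabytes * 8 * 10**9   # decimal gigabytes to bits
    return bits / (link_gbps * 10**9)

image_gb = 16
print(transfer_seconds(image_gb, 10))   # 12.8 s on a 10 GbE path
print(transfer_seconds(image_gb, 40))   # 3.2 s on a 40 GbE path
```

A migration that monopolizes a 10 GbE uplink for many seconds is exactly the kind of burst that argues for 40 GbE in the aggregation layer.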
Big data adds more pressure. Technologies such as Hadoop split processing across multiple VMs, each of which crunches a portion of the data. As each cycle completes, the resulting data is moved across the network for further processing and reduction. The total quantity of data, the load it places on the network and its repeated movement all add to the demand for higher-capacity links -- and to the pressure to upgrade to 40 GbE.
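The traffic pattern behind that pressure can be sketched in a few lines. This toy word count is not Hadoop code; it only illustrates the map, shuffle and reduce cycle, in which the shuffle step is where every intermediate record crosses the network between nodes:

```python
# Toy MapReduce-style word count. Not actual Hadoop code -- the
# shuffle step stands in for the cross-rack data movement described
# in the text.
from collections import defaultdict

def map_phase(lines):
    # Each mapper emits (word, 1) pairs from its slice of the input.
    return [(word, 1) for line in lines for word in line.split()]

def shuffle(pairs):
    # In a real cluster, every pair crosses the network here,
    # grouped by key and delivered to its reducer.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    return {key: sum(values) for key, values in grouped.items()}

counts = reduce_phase(shuffle(map_phase(["forty gbe", "forty links"])))
print(counts)   # {'forty': 2, 'gbe': 1, 'links': 1}
```

Because the shuffle repeats on every processing cycle, the same data can traverse the data center network many times over the life of a job.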
Want more proof? Consider the millions of smartphones and tablets connected via 802.11n, 802.11ac or 4G networks. These devices are capable of generating very high data volumes. All of this data must travel across the backbone and into the data center. As a result, 40 GbE and even 100 GbE upgrades are beginning to take place in large campus, corporate and government networks.
Converged networks create an additional load
Network and storage data have, in the past, moved over separate networks. Fibre Channel over Ethernet (FCoE) was developed to cut the cost of maintaining two networks -- and the attendant requirement for separate cabling and switches -- by letting storage data share the Ethernet with ordinary network traffic. Simply combining traffic that previously flowed over separate networks would itself necessitate a higher-capacity network, but storage traffic places an additional demand.
Storage data requires a lossless network, but Ethernet delivers packets on a best-effort basis. The IEEE Data Center Bridging standards -- Priority-based Flow Control (802.1Qbb), Enhanced Transmission Selection (802.1Qaz) and congestion notification (802.1Qau) -- prioritize traffic and allocate dedicated network capacity to storage data, ensuring that storage frames are not dropped. Dedicating bandwidth to storage reduces the amount available for other network data, further ratcheting up the need for greater overall capacity.
Still early in the upgrade process
The need for increased bandwidth notwithstanding, 40 GbE upgrades have been slowed by the fact that in many cases, existing cables must be replaced to support the higher rate. Rather than simply replace cables, organizations are proceeding carefully, adding connections to support projected traffic growth even as they begin to lay the groundwork for the inevitable rollout of 100 GbE networks.
In the meantime, 40 GbE deployments will continue. Just as the original 10 Mbps Ethernet networks were adequate for their time, and just as 1 GbE is now giving way to 10 GbE in rack-mount servers, the next step up to 40 GbE will surely follow.
About the author:
David B. Jacobs of The Jacobs Group has more than 20 years of networking industry experience. He has managed leading-edge software development projects and has provided consulting services to Fortune 500 companies, as well as software startups.