Once upon a time, in a faraway data center, network engineers wondered how they could ever fill a 10 gigabit link. They shrugged, they laughed, they ordered more 10/100 Mbps ports, and they all lived happily ever after.
That is, until virtualization and cloud computing gave rise to new networking challenges in hyperscale data centers that became home to tens or sometimes hundreds of thousands of servers, run by giants like Google or Microsoft. One of those challenges was the insatiable hunger for more bandwidth, and Ethernet largely kept up with demand. The 40 and 100 Gigabit Ethernet (GbE) standards were ratified a few years ago by the IEEE, which also launched a study group last year to work on a 400 GbE standard.
But the requirements for cloud providers and other large-scale data center operators are more nuanced than a simple need for raw capacity. Their top-of-rack switches, which typically comprise the largest number of connections in data centers, are rapidly outgrowing 10 GbE. And due to its design, the next step up, 40 GbE, isn't cost-effective or power-efficient in top-of-rack switching for cloud providers and others that operate at a similar scale.
It's against this backdrop that five IT companies -- Arista, Broadcom, Google, Mellanox Technologies and Microsoft -- formed the 25 Gigabit Ethernet Consortium in July to develop specifications for 25 GbE and 50 GbE, designed so that 25 GbE can be delivered at roughly the same cost as 10 GbE. The consortium sought to move as quickly as possible, even if that meant bypassing standards bodies. Its founders expect 25 and 50 GbE products to ship in 18 to 24 months. That's an aggressive timetable compared to the formal process for ratifying a new Ethernet standard, which can take several years -- on top of the years it then takes for commercial products to hit the market. Cloud providers and large-scale data center operators simply don't have that kind of time.
The need for 25 GbE is there. The solution is within grasp. And the rewards are big.
"It's easy to use that much bandwidth, especially if you're doing virtualization on a very large scale," says Nick Buraglio, an Illinois-based network engineer at a national research network. He doesn't work for a cloud provider, but like many networking pros, he is watching the developments with interest. "You can sell more services out of a single host -- assuming you pack it with CPU and memory to the gills," Buraglio adds, "which means you can buy fewer router and switch ports, which are the expensive pieces, especially the router ports."
The effort has gained momentum quickly and hit little -- if any -- opposition or technical barriers. Since its launch in July, the consortium's ranks have grown to include big names in networking like Cisco, Dell, Juniper Networks and Brocade. Its technical committee meets weekly to work on the new specs. And the IEEE has since come on board, forming a study group that met for the first time in mid-September to begin its work on a 25 GbE standard.
"It's a brilliant idea," says Greg Ferro, a U.K.-based freelance network engineer and co-host of the Packet Pushers podcast. "There's no downside to this, really."
Single-lane design makes 25 Gigabit shine
Two words represent what makes 25 GbE a game-changer: single lane. The phrase refers to the electrical signaling on the chip that powers an Ethernet port. Because the design of 40 GbE was based on 10 GbE, a 40 Gigabit chip is manufactured with four pins -- which create four channels, also known as lanes -- of 10 Gbps signaling. Originally, 100 GbE had a similar heritage, with its initial design in 2010 using ten lanes of 10 Gbps. As standards bodies sought to improve the efficiency of 100 GbE in the years that followed, its second generation was built on an entirely new signaling scheme: four individual lanes of 25 Gbps. Reducing the number of pins on the chip makes it less expensive to produce and less power-hungry.
And so the seed was planted for single-lane 25 GbE, which is effectively being pulled out of the 100 GbE standard. The consortium's specification for 50 GbE uses two lanes of 25 Gbps.
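The lane arithmetic behind these standards can be summarized in a short sketch. The figures below are the illustrative lane counts and per-lane signaling rates described above; the Gbps-per-lane column is what drives the pin-count, cost and power argument for 25 Gbps signaling.

```python
# Lane counts and per-lane signaling rates for the Ethernet flavors
# discussed in the article: total throughput = lanes x rate per lane.
flavors = {
    "10 GbE":          (1, 10),   # (lanes, Gbps per lane)
    "40 GbE":          (4, 10),   # four lanes of 10G signaling
    "100 GbE (gen 1)": (10, 10),  # original 2010 design
    "100 GbE (gen 2)": (4, 25),   # newer 25G signaling scheme
    "25 GbE":          (1, 25),   # single lane pulled from 100 GbE gen 2
    "50 GbE":          (2, 25),   # consortium spec: two lanes of 25G
}

for name, (lanes, rate) in flavors.items():
    # Fewer lanes per bit means fewer pins on the chip, which is
    # the core of the cost and power case for 25 Gbps signaling.
    print(f"{name:>16}: {lanes:>2} lane(s) x {rate} Gbps = {lanes * rate} Gbps")
```

A 25 GbE port carries 2.5 times the traffic of a 10 GbE port over the same number of pins, which is why the consortium expects the two to end up at comparable cost.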
It's worth noting, however, that few expect 25 GbE to be a wholesale replacement for 40 GbE. The older standard enjoys a sizeable installed base, and Infonetics Research predicts the number of 40 Gigabit ports shipped will triple this year, hitting 1.5 million. Moreover, in some circumstances, there are benefits to breaking out a 40 Gbps port into four lanes of 10 Gbps traffic.
But with the groundwork in place for 25 GbE, the path to a fully ratified standard -- what the IEEE is working on -- is expected to be unusually short and sweet, according to Mark Nowell, chairman of the IEEE's 802.3 25 Gb/s Ethernet study group. The biggest step left for the IEEE is defining a media access control (MAC) layer for 25 GbE, a process he says "is very simple."
"There are no technical challenges to doing this project at all, because it's all kind of done in the industry already," says Nowell, who is also a senior director of engineering and CTO at Cisco. "[Our work] is really just putting all the pieces together in a way that guarantees interoperability."
Because of the way the IEEE operates, however, it must still conduct the standards-development process openly and build enough consensus among its members for a majority or supermajority vote to ratify new standards. In fact, low attendance at an IEEE event in Beijing in March prompted the meeting's chairperson to withdraw a call-for-interest request -- the first step in the standards process -- for work on a 25 Gbps standard, Nowell says.
"He didn't feel that there were enough people there to really make it a proper representation of what the group would want to do," he explains.
Cloud giants like Facebook and Microsoft, both Arista customers, are eager to find a more economical answer to their mounting server and storage input/output (I/O) demands. That urgency has made them willing to buy products built on specifications that aren't yet standardized, according to Anshul Sadana, senior vice president of customer engineering at Arista, which organized the consortium after the IEEE's inaction in March.
"Two years ago, this was not possible. And if we wait two more years, it will be too little too late," says Sadana, who expects 25 GbE to replace 10 GbE in top-of-rack switches while 50 GbE ousts 40 GbE in storage networking.
But why have those top-of-rack links become so saturated? It's due to the massive amount of data and workloads pulsing through faster, more powerful servers, according to Ferro. And the next generation of microprocessors is only going to make the need to add capacity in the aggregation layer more intense.
"If you're a cloud provider, you're looking at the future: the Intel Grantley class of CPUs will ship early next year," Ferro says. "They're all going to be able to transfer more data than 10 gigabits per second, and they need faster NICs. That's fundamentally your driver -- x86 servers are faster than 10 Gbps."
Driving down top-of-rack costs
The single-lane design will give 25 GbE a lower cost per bit than 10 GbE, enabling cloud providers and large-scale data center operators to deploy fewer switches while still scaling their network infrastructure to meet demand.
"This is magical -- it's like music to the ears of the cloud providers because they can get lower-cost I/O," Sadana says.
In a presentation used in the IEEE's newly formed 25 GbE study group, engineers calculated how many top-of-rack switches would be required in a data center with 100,000 servers and 100% port utilization. It would take 3,572 four-lane 40 Gbps top-of-rack switches to support that kind of environment. Single-lane 25 GbE would require less than a third of that: 1,042 top-of-rack switches.
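The arithmetic behind those totals can be sketched as follows. The server-facing port counts per switch used here (28 downlinks on a four-lane 40 GbE switch, 96 on a single-lane 25 GbE switch) are illustrative assumptions chosen to reproduce the presentation's figures, not numbers taken from the study group itself.

```python
import math

SERVERS = 100_000  # data center size used in the IEEE presentation

def switches_needed(servers: int, ports_per_switch: int) -> int:
    """Top-of-rack switches required at 100% port utilization."""
    return math.ceil(servers / ports_per_switch)

# Assumed server-facing (downlink) port counts per top-of-rack switch.
# These are hypothetical values chosen to match the cited totals.
PORTS_40GBE = 28  # four-lane 40 GbE switch
PORTS_25GBE = 96  # single-lane 25 GbE switch

print(switches_needed(SERVERS, PORTS_40GBE))  # 3572
print(switches_needed(SERVERS, PORTS_25GBE))  # 1042
```

Because each single-lane port consumes a quarter of the chip pins of a four-lane port, a switch ASIC of the same size can expose far more server-facing ports, which is where the roughly 3-to-1 reduction in switch count comes from.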
"You can reduce the number of switches in your data center and the actual cost of that interconnect," Nowell says. "That cable has only one copper cable in it rather than four -- that's where all your volume is, and that's where all your [expense] is within your data center."
When compared to 40 GbE, the 25 GbE specification is also expected to lower power and cooling costs significantly in two ways: a single-lane port draws less power than a four-lane one, and fewer switches need to be deployed overall.
"The hard part of data centers is power," says Buraglio, who also maintains The Forwarding Plane, a blog about networking. "The rest of it is easy. It's not hard to find a dark-fiber provider, and the rest of it can be solved with money. Power can be mostly solved with money, but it takes more of it. So the fewer things you have to run, the more efficiently you can operate and the higher your profit margins are going to be."
It would be an oversimplification to say the cost savings in 25 and 50 GbE will have a direct effect on cloud pricing, but networking pros and industry experts are hopeful the efficiencies in the new specifications could eventually trickle down to enterprise customers.
"My interest in it personally is to see cloud technologies become cheaper and more resilient, and I think the greater bandwidth introduced by 25 Gigabit will facilitate that," says Nathan Roberts, MIS director at CareerSource Florida, a workforce development network based in Tallahassee. "We're moving a lot to the cloud. We've just engaged Salesforce for a statewide rollout, in fact. Seeing enhancements in data centers for a big cloud company like Salesforce could enhance our operations with improved speed and reliability, so [I hope there] is an indirect benefit."
Andrew Lerner, a research director at Gartner, expects the cost savings in the new specifications will be just one of many weapons cloud providers use to wage ongoing price wars with each other.
"If they can save 1%, 3% or even 5% of costs in a large-scale environment, it directly impacts their margins, and currently in the cloud space, that stuff does get passed onto customers," Lerner says. "But I don't think [25 GbE alone] has a material impact that a customer would see."
The final piece
The one wildcard in all of this activity, however, could determine the success of 25 GbE as a mechanism for top-of-rack connectivity.
"Whether it becomes widely used as a server access technology comes down to the server vendors' support in the NICs," says Gartner's Lerner. "[Server] margins are slim, so there's not a lot of advantages to going out and introducing that technology early -- before the mainstream needs it -- so they tend to wait."
If those vendors wait too long, 25 GbE will become predominantly a switch-to-switch technology, rather than a server-to-switch one, he adds, especially if server vendors decide to use 40 GbE NICs. Intel has done just that with its upcoming XL710 NICs, codenamed Fortville, which the company announced last month will support 40 Gbps. Although all of the efficiencies and cost savings of 25 GbE would be just as valuable in other parts of the network, that would leave the top of rack in the lurch.
Dell has signed on to the 25 Gigabit Ethernet Consortium, but notably absent are Intel and Sun/Oracle. The consortium has hinted that it will soon announce new members, but it's been tight-lipped on any details.
"It gives me pause when I see that some of those big players aren't yet engaged," says CareerSource's Roberts. "Why aren't they engaged? And why aren't they excited about this? How come they haven't seen the benefit and want to jump on board?"
To understand the potential adoption curve for 25 GbE, vendors only need to look at its 10 GbE predecessor, Buraglio explains.
"If the box comes with it, people will use it. That's one of the ways 10 gig got adopted. It was around forever, but then you could buy a server that had an on-board 10 gig NIC on it," he says. "Basically, the stars have to align -- those stars are the chip manufacturers, the hardware manufacturers on the network side and the hardware manufacturers on the server side. They all need to adopt it. If that happens, people will just use it."