Hyperscale data center operators want more bandwidth for their dollars, particularly at the server-access layer. In response, a consortium of vendors and data center operators is developing 25 and 50 Gigabit Ethernet (GbE). The Institute of Electrical and Electronics Engineers (IEEE), the governing body of Ethernet standards, is only partially on board.
Arista Networks, Broadcom, Google, Microsoft and Mellanox Technologies formed the 25G Ethernet Consortium this month to develop top-of-rack switches with 25 and 50 GbE technology. Switch makers Brocade and Dell and silicon maker Cavium Networks have formally signed on as adopters of the technology.
Arista, Mellanox and other member companies formed the 25G Ethernet Consortium to goad the IEEE into taking action, said Brad Casemore, research director for IDC.
"The Ethernet Alliance [and the IEEE] are very concerned about a potential rupture in the industry, where the hyperscale folks get really impatient because they want higher speeds and the enterprise isn't there yet," Casemore said. "The vendor community is divided. If most of your customers are enterprise customers, then you really don't care what the hyperscale guys are doing. Hyperscale is going to do it one way or another, and it could lead to standards mayhem."
The establishment of the consortium followed a failed attempt in March to form an IEEE study group for the new Ethernet speeds. However, the IEEE reconsidered after the consortium's formation and voted a couple of weeks later to explore a 25 GbE standard. Although the IEEE is not on board with 50 GbE, the consortium's members are going ahead with both new Ethernet speeds, said Anshul Sadana, senior vice president of customer engineering for Arista.
The straightforward path to 25 (and 50) GbE
The basic building blocks for 25 GbE are already in place, thanks to the industry's work on 100 GbE, said Mark Nowell, chair of the newly created IEEE 802.3 25 Gbps Ethernet Study Group. Nowell is also a senior director of engineering for Cisco.
The second generation of 100 GbE technology is based on four lanes of 25 Gbps electrical signaling within the silicon and optics, which makes it relatively simple to repurpose a 100 GbE network chipset for a 25 GbE switch. Each 25 Gbps lane on a chip can drive a 25 GbE port.
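The lane arithmetic behind this reuse is simple enough to sketch. The lane counts below follow the article's description; the table itself is illustrative, not drawn from any vendor datasheet.

```python
# Illustrative sketch: how 25 Gbps serdes lanes compose into Ethernet speeds.
LANE_RATE_GBPS = 25

# Lane counts per port speed, as described in the article.
speeds = {
    "25 GbE": 1,   # a single lane drives one port
    "50 GbE": 2,   # two lanes bonded together
    "100 GbE": 4,  # second-generation 100 GbE: 4 x 25 Gbps
}

for name, lanes in speeds.items():
    print(f"{name}: {lanes} lane(s) x {LANE_RATE_GBPS} Gbps = "
          f"{lanes * LANE_RATE_GBPS} Gbps")
```

The same 4-lane chipset can therefore be carved up as four 25 GbE ports, two 50 GbE ports, or one 100 GbE port.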
"There's not a lot to be worked out [with 25 GbE]," Nowell said. "It's taking technology that is already fairly well specified and implemented, and creating a specific lane that everyone can work with. I think the silicon availability drives all this, and the chips and silicon are becoming [more] available today with 25 Gbps lanes."
Given the technology foundation already in place, the IEEE's 25 GbE study group shouldn't take long to complete its work, Nowell said. And if the study group says yes to 25 GbE, an IEEE task force will form to standardize the technology, a process that could take as little as 18 months.
The same technology that makes 25 GbE a simple exercise also makes 50 GbE easy to navigate, Arista's Sadana said. Given that some very large data center operators want both speeds, Arista and others are moving forward with both, regardless of the IEEE's actions.
"On the storage side and in big data, a lot of I/O from these nodes are to the point where 40 GbE can be saturated today," Sadana said. And a 50 GbE port will probably be about half the cost of a 100 GbE port, and give more flexibility to cloud and Web companies that are saturating 40 GbE links.
Although the IEEE has declined to explore 50 GbE, the consortium remains open to working with the organization on the technology. "If they standardized 50 GbE, we will adhere to that as well," Sadana said.
25 GbE and 50 GbE offer better top-of-rack economics than 10 and 40 GbE
Extremely large data center operators, including 25G Ethernet Consortium members Google and Microsoft, have found that today's server CPUs are increasingly saturating 10 GbE top-of-rack switch ports. This problem is unheard of in the mainstream enterprise market, where data centers are still making the transition from Gigabit Ethernet to 10 GbE at the server access layer.
Given this link saturation, larger data centers want to upgrade, but 40 GbE is far too expensive. 25 GbE offers a bandwidth boost at a lower cost.
In a presentation to an IEEE 802 Plenary meeting this month, Microsoft principal engineer Brad Booth demonstrated that 25 GbE server I/O is much more economical than 10 GbE and 40 GbE server I/O. In a data center of 100,000 servers using 3.2 Tbps top-of-rack switches, a data center operator would need 2,084 switches to support minimal 2x10 Gbps I/O to each server. If those switches supported 25 GbE ports, the data center operator would need only 1,042 switches, leading to lower capital expenses and significantly reduced power and cooling costs.
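Booth's switch counts can be reproduced under one plausible set of assumptions: a 3.2 Tbps top-of-rack ASIC exposing 96 server-facing ports, with each server attached by either dual 10 GbE links or a single 25 GbE link. The 96-port figure is an inference that makes the numbers work out, not a detail from the presentation itself.

```python
import math

SERVERS = 100_000  # data center size from Booth's example

def switches_needed(ports_per_switch, ports_per_server):
    """Top-of-rack switches required to attach every server."""
    servers_per_switch = ports_per_switch // ports_per_server
    return math.ceil(SERVERS / servers_per_switch)

# Hypothetical: a 3.2 Tbps ASIC exposing 96 server-facing ports.
PORTS = 96

print(switches_needed(PORTS, 2))  # 2 x 10 GbE per server -> 2084 switches
print(switches_needed(PORTS, 1))  # 1 x 25 GbE per server -> 1042 switches
```

A single 25 GbE link carries more bandwidth than two 10 GbE links, so each server consumes one port instead of two and the switch count halves, which is where the capex, power and cooling savings come from.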
Building 25 GbE gear also won't be much more expensive than building 10 GbE gear, Arista's Sadana said. Like 10 GbE, 25 GbE is a single-lane technology, rather than the four lanes that are bundled together in 40 and 100 GbE switches.
"You can think of 25 GbE as nothing but 10 GbE, but clocked 2.5 times faster," he said.
With the 25G Ethernet Consortium moving ahead regardless of what the IEEE does, both 25 GbE and 50 GbE switches could reach the market by the beginning of 2017.
"We haven't announced a timeline yet," Sadana said. "But we expect these products [both 25 GbE and 50 GbE] to come to market in the next 18 to 24 months depending on the [server] adapter side of the industry."