Facebook's Open Compute Project unveiled four major contributions that it will likely incorporate into its effort to design an open, bare-metal, top-of-rack switch.
During a media roundtable on the Open Compute switch initiative this week, Facebook shared bare-metal switch designs submitted by Intel Corp., Broadcom Corp. and Mellanox Technologies. Facebook also revealed Cumulus Networks' contribution of a boot loader for a bare-metal switch. In a subsequent blog post, Facebook's Frank Frankovsky, chairman and president of the Open Compute Project Foundation, wrote that the project would likely accept all four contributions. Overall, Open Compute has received 30 contributions to the switch project.
"We're on pace to demo this stuff in January at the [Open Compute Annual] Summit," said Najam Ahmad, director of technology operations at the social networking giant. "The pace has been faster than expected," he said. Facebook already has switches based on these designs in its labs, he added.
Open Compute switch: Three designs and a boot loader
Broadcom's specification for an open switch is based on its Trident II silicon, which most networking vendors use in their top-of-rack data center switches today. Intel's proposal is based on its Open Network Platform reference design for a 48-port 10/40 Gigabit Ethernet switch built on Intel's Ethernet FM6764 silicon; Intel originally specified the platform to run a software stack based on its Wind River Linux distribution. Mellanox contributed a specification for an x86-based switch built on its SwitchX-2 silicon, featuring 48 SFP+ ports and 12 QSFP ports. An Israeli company best known for its InfiniBand switching, Mellanox "open sourced" its software and hardware last March, announcing that it would make its Ethernet switches available as bare-metal devices. With its contribution to Open Compute, the company now gives original design manufacturers (ODMs) the option to build switches based on its framework.
Cumulus' boot loader software, dubbed the Open Network Install Environment (ONIE), is a runtime install environment that lets a bare-metal switch load any one of multiple network operating systems. ONIE gives network operators the ability to reboot a bare-metal switch into a new operating system, much as admins reboot x86 servers to run Windows or Linux.
"We have one customer that does this using ONIE today," said J.R. Rivers, founder and CEO of Cumulus. "They use our system for traditional workloads, and then when they need to support OpenFlow-type workloads, they can flip the switches back and forth from one operating system to another."
Facebook's mission with the Open Compute switch is to promote the "disaggregation" of hardware and software in data center network gear, Ahmad said. This approach can increase the programmability and flexibility of network infrastructure and reduce costs because data center operators can buy their hardware from low-cost ODMs such as Accton, Quanta and Penguin Computing.
An open switch gives data center operators the ability to run third-party or home-brewed software directly on switches that can automate or enhance the network. "Closed platforms don't allow that at all," Ahmad said.
"A lot of people also want transparency [in network gear]," Rivers said. "They want to better understand how things work so that when chaos happens they can reduce the side effects."
Open Compute switch: Mainstream adoption still a long way off
Open Compute's switch project holds a lot of promise for hyperscale data centers like the ones that Facebook operates. Some enterprises, particularly in the financial industry, are also interested in bare-metal switches. "Goldman Sachs, Fidelity and others are reviewing these specs and building and providing feedback," Ahmad said.
But the hard reality is that mainstream enterprise network engineers have neither the expertise nor the resources to tackle bare-metal switches today. It will be quite some time before they are ready.
"[Open Compute's switch] makes sense for a set of consumers of network hardware who want cost control and manageability," said Eric Hanselman, chief analyst for New York-based 451 Research. "But with great power comes great responsibility. You have to be able to own the platform. You have to be able to take responsibility for the maintenance and integration. Eventually we'll have more pulled-together capabilities out there. Cumulus and Pica8 are making strong efforts in that environment, and we'll see other players around that. But it does mean there is a certain amount of DIY that comes with these capabilities. If you are a Facebook or a Google, you are already down that road. You have network teams and hardware teams that can handle all the integration involved in building and deploying these switches for data centers. You have to have the staff that can cope with all those extra duties beyond just creating the boxes."