The Transparent Interconnection of Lots of Links (TRILL) protocol has yet to emerge from its IETF standardization process, but many networking pros anticipate that the technology will allow them to dump the spanning tree protocol from data center network designs.
Of course, there are also naysayers who think that spanning tree does the job and that alternatives like TRILL invite faulty engineering.
Spanning tree protocol has worked well in enterprise networks for a couple of decades, but it isn't efficient enough for some of the demands placed on data center networks today: live migration of virtual machines, bandwidth-intensive real-time media applications, and converged network fabrics via data center bridging.
In fact, some believe the problem with spanning tree lies in its basic structure. Since Layer 2 Ethernet is essentially a broadcast technology, it naturally sends frames out every available network link, and when multiple paths connect one network device to another, frames can circulate in an infinite loop, overloading the network. Spanning tree eliminates the looping problem by permitting only one path from one point on a network to another; all other possible paths are blocked. This can be inefficient in a data center network with a virtualized environment, which performs better with unblocked connections between servers.
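The blocking behavior described above can be illustrated with a minimal sketch (this is a toy graph computation, not the actual STP bridge-protocol-data-unit exchange): given a looped switch topology, keeping only a spanning tree from a root switch leaves exactly one path to each switch and blocks the rest.

```python
from collections import deque

# Hypothetical four-switch topology with a loop: A-B-C-A, plus C-D.
links = {("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")}

def spanning_tree(root, links):
    """BFS from the root keeps one path to every switch; every
    remaining link is 'blocked', mimicking how STP breaks loops."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    tree, seen, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nxt in sorted(neighbors[node]):
            if nxt not in seen:
                seen.add(nxt)
                tree.add(frozenset((node, nxt)))
                queue.append(nxt)
    blocked = {frozenset(link) for link in links} - tree
    return tree, blocked

tree, blocked = spanning_tree("A", links)
print(len(tree), len(blocked))  # 3 links forwarding, 1 link blocked
```

The blocked link (B-C here) is perfectly good bandwidth that sits idle, which is exactly the inefficiency TRILL proponents object to.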
The TRILL protocol uses devices known as RBridges (routing bridges) to apply the IS-IS Layer 3 routing protocol to Layer 2 forwarding. This essentially allows Layer 2 devices to route Ethernet frames, eliminating the need for spanning tree protocol to close off links.
"In TRILL, all the paths are equally valid, instead of a data center looking like a train track from point A to point B," said blogger and freelance network architect Greg Ferro. "This allows you to use all your bandwidth, and all your interfaces become part of a fabric. It's similar to a service provider's backbone."
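Ferro's "all the paths are equally valid" point can be sketched as a shortest-path computation (a toy stand-in for what a link-state protocol like IS-IS does, with a hypothetical two-leaf, two-spine fabric): routing retains every equal-cost path between two switches instead of blocking all but one.

```python
from collections import deque

# Hypothetical fabric: two leaf switches, each wired to two spines.
links = [("leaf1", "spine1"), ("leaf1", "spine2"),
         ("leaf2", "spine1"), ("leaf2", "spine2")]

def shortest_paths(src, dst, links):
    """Return every shortest path between src and dst, as a
    link-state protocol would for equal-cost multipath forwarding."""
    nbrs = {}
    for a, b in links:
        nbrs.setdefault(a, []).append(b)
        nbrs.setdefault(b, []).append(a)
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue  # longer than a path already found
        if path[-1] == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in nbrs[path[-1]]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

print(len(shortest_paths("leaf1", "leaf2", links)))  # 2 equal-cost paths
```

Under spanning tree, one of those two spine links would be blocked; with routed Layer 2, traffic can be balanced across both.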
By getting rid of spanning tree protocol and freeing up more Layer 2 paths, enterprises will be better able to migrate virtual machines (VMs) across the data center network. There will also be more bandwidth available for intensive applications like real-time communications and for the transport of storage traffic across the Ethernet network with Fibre Channel over Ethernet (FCoE) and iSCSI.
Because TRILL will allow enterprises to use more of the links in their data center network designs, it will make switches more cost effective, said Andre Kindness, senior analyst with Forrester Research. TRILL will also allow switches to load balance traffic over multiple Layer 2 links, something that is impossible with spanning tree, he added.
Brocade, a member of the IETF's TRILL working group, pushed the TRILL protocol forward last week with the announcement of its Virtual Cluster Switching (VCS) technology, which allows enterprises to build converged data center network switching fabrics that are multi-pathing and resilient with a fully distributed control plane. Brocade said that VCS will be based, in part, on TRILL. VCS-enabled switches will ship in the fourth quarter of this year and early 2011.
Other vendors have attempted to work around the spanning tree problem with proprietary technology, some of which has been standardized but not broadly adopted by the industry. Cisco Systems has PVST+ (Per-VLAN Spanning Tree Plus), which runs a separate spanning tree instance per VLAN so that different VLANs can forward over different links. Nortel, now owned by Avaya, has Split Multi-Link Trunking, which allows multiple links between network switches to be treated as one logical link so that traffic can be load balanced across them.
"But all of those are ultimately hacks on top of the problem, which is that Ethernet is a broadcast technology that can loop infinitely," Ferro said. "TRILL says, 'I'm going to take an Ethernet frame, look at the destination address and route it across the backbone.' So effectively it's doing Layer 2 routing."
Is TRILL protocol just more hype?
Not everyone is convinced that spanning tree protocol has to go. Curtis Hinson, network manager at Associated Credit Union of Texas, said that his organization is 95% virtualized and spanning tree works just fine for him.
"I haven't had to effect any changes to spanning tree," Hinson said. "I use spanning tree protocol for convenience's sake. It speeds things up. If I have a new switch and I open up four VMs [virtual machines], spanning tree makes that process a bit faster and a bit more transparent."
Blogger Ivan Pepelnjak, who is chief technology adviser to the European telecom consulting firm NIL Data Communications, is also skeptical of TRILL. He thinks it can alleviate some of spanning tree protocol's limitations, but he worries that it will lead to poor data center network design techniques. On his blog, he wrote: "Unfortunately, I've seen a very similar technology being heavily misused in the past (resulting in some fantastic failures) and remain skeptical about the deployment of TRILL. My worst-case scenario: TRILL will make it too simple to deploy plug-and-pray bridged (vendors will call them 'switched') networks with no underlying design that will grow beyond control and implode."
Those who doubt the TRILL protocol's future as a spanning tree replacement should consider that Dr. Radia Perlman, a fellow at Sun Labs and the inventor of spanning tree, is the driving force behind TRILL.
"Who do you think we're going to listen to?" asked Ferro.
"Over the years we've seen several Layer 2 routing protocols come and go. You get deeply cynical about all the flaws, but I think this time the things that have prevented TRILL and DCB [data center bridging] from coming have now been solved," Ferro said. "Memory, for instance. We haven't had the technology up until now to forward Ethernet frames at wirespeed because we haven't had systems with enough memory and fast enough CPUs to forward all those MAC addresses. We've had switches that have been able to forward at frame level, but they weren't routing. Now we have silicon forwarding. It's ubiquitous and cheap."
Let us know what you think about the story; email: Shamus McGillicuddy, News Editor