Networking managers are taking varied paths toward architecting high performance networks. For some, the answer lies in a flat Layer 2 design, which can be difficult to manage. That's where virtual cluster switching comes into play. Using virtual chassis technology, networking teams can manage multiple switches as one.
The UCLA Laboratory of Neuro Imaging (LONI) is the very definition of a high-performance computing data center. The lab maps the human brain and continually adds scanned data to the nearly petabyte-sized imaging database, which is constantly accessed by about 1,000 researchers trying to understand and cure diseases like Alzheimer’s and schizophrenia. Hundreds of researchers are active at any given time, working with datasets from 20 MB to hundreds of gigabytes.
Like many enterprises that rely on high-performance computing, LONI implemented a non-blocking, flat Layer-2 network with all the racks of servers directly connected. But David Hasson, LONI director of information technology, found that he needed more manageability than a simple flat network could offer -- and the solution to that problem was in virtual cluster switching. Virtual cluster switching is a technology that enables groups of top-of-rack switches to act and be managed as one through a virtual chassis.
“The [flat] network design worked and was manageable, but the problem was that traffic from a single rack of servers could negatively impact the performance of the rest of the data center; and simply figuring out where things were among the racks was a challenge,” Hasson explained.
So Hasson invested in Juniper’s EX series switches, citing their virtual chassis feature as a big selling point.
“With the virtual chassis, we have the appearance of a flat network, while operating at Layer 3 behind the scenes. Our network topology is actually more complex than in the past. But by managing the rack switches as one or two logical chassis, it is actually easier to manage,” Hasson said.
Almost every major networking vendor now offers similar technology under different names, but all with the same aim: to merge disparate switches into a single Ethernet fabric through a virtual backplane. Some vendors supply devices that use existing 10 GbE ports to create this virtual backplane, while others use a dedicated virtual chassis port.
Data center and virtualization demand flat networks and virtual chassis
The data center network and the rise of virtualization place new demands on high-performance Ethernet. In the past, network engineers turned to modular chassis switches for high-performance Ethernet switching because they could consolidate many ports in one device. Low latency and high throughput were possible because traffic simply had to traverse the high-performance backplane within the modular chassis. These modular switches were also comparatively easy to administer, with a single management console for the myriad ports.
However, cabling hundreds of server network adapters to a modular switch in its own rack is expensive, so fixed-form-factor, top-of-rack switches have become an easier choice. But this layered approach doesn’t lend itself well to a dynamic, virtualized environment.
“An organization could have 90% of its workloads virtualized and still operate on the existing network topologies if it limits the movement of those virtual machines. To see the benefits of virtualization beyond just server consolidation, however, workloads will have to be highly mobile and will need a new type of network,” said Zeus Kerravala, senior vice president and distinguished research fellow at Yankee Group. That new network will be a flat, Layer-2 network.
Virtual cluster switching eases management
Within that changed network, one significant benefit of virtual cluster switching is enabling this extra layer of switches to act as one or a few modular virtual chassis switches. The typical enterprise data center has racks of servers, with a pair of top-of-rack switches providing redundancy for each of the server’s network connections. Five racks of servers equates to 10 fixed-form-factor switches that each need to be administered and managed. Even if network administrators apply network automation tools to that gear, there is still a fair amount of administration work for each device. With virtual chassis technology, the 10 units could be grouped into two logical switches, cutting management work five-fold.
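As a rough illustration of how that grouping works on Juniper's EX series (the platform LONI uses), the sketch below shows the kind of Junos CLI steps involved in joining two top-of-rack switches into one logical chassis. The member numbers, priority values, and hostnames are illustrative assumptions, not LONI's actual configuration; consult Juniper's virtual chassis documentation for the procedure on a specific EX model.

```
# Illustrative sketch: merging two EX top-of-rack switches into one
# virtual chassis. Member IDs and priorities are assumed values.

user@switch> configure
# Give member 0 the highest priority so it acts as the master
# routing engine for the logical chassis
user@switch# set virtual-chassis member 0 mastership-priority 255
user@switch# set virtual-chassis member 1 mastership-priority 128
user@switch# commit and-quit

# On models without dedicated virtual chassis ports, a 10 GbE uplink
# can be converted into a virtual chassis port (slot/port assumed):
user@switch> request virtual-chassis vc-port set pic-slot 1 port 0

# Confirm both members now appear in the single logical switch
user@switch> show virtual-chassis status
```

Once the members join, the pair is managed through one console and one configuration, which is the source of the five-fold reduction in administration described above.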
Beyond high-performance data centers, virtual chassis will also play a central role in enabling management across the enterprise network as glass-housed corporate data centers split apart and become geographically dispersed.
“The fundamentals of network design in an era of emerging data centers are changing,” said Rohit Mehra, director of enterprise communications infrastructure for Framingham, Mass.-based International Data Corporation (IDC). “Virtual chassis is a response to dealing with managing infrastructure that is no longer located closely together,” Mehra said.
Interoperability concerns remain with virtual cluster switching
The downside to most of the cluster switching or virtual chassis solutions available today is the fuzziness of interoperability.
Vendors such as Brocade and Cisco are already releasing virtual cluster-ready switches based on the pre-standard Transparent Interconnection of Lots of Links (TRILL) protocol. But other companies like Juniper Networks Inc. have released their own technology that is not based on the standard.
All vendors, even those building TRILL into their gear, concede that data center products using virtual chassis perform best in a homogeneous network. While a “forklift upgrade” of the data center infrastructure may not be required in every case, networking professionals should be prepared to scour piles of support documents and implementation guides to make their virtual chassis a reality.
LONI's Hasson and his team have an all-Juniper environment, so interoperability isn’t a problem. Yet transitioning to virtual chassis management meant re-educating the networking team on how they viewed the very structure and role of the network. “There was one member of our team who didn’t fully understand how the topology worked when we switched, but the fundamentals were sound and I think he finally agreed it was a good way to go,” Hasson explained.
Still, the combination of an entire data center network forklift along with teaching brand new network fundamentals might not be an easy transition for many users.