
New Brocade VDX chassis adds scale to data center fabric

With a new Brocade VDX chassis, enterprises can build larger Layer 2 data center fabrics using the company's TRILL-based Virtual Cluster Switching.

Brocade Communications is boosting scalability in its Virtual Cluster Switching data center fabric with a new line of Brocade VDX modular switches that ship with 4 Tbps of backplane capacity per slot.

Brocade also announced new line cards for its MLXe core router series that increase the platform's density to 768 ports of 10 Gigabit Ethernet (GbE), as well as a new multi-tenancy feature on its ADX series of application delivery controllers (ADCs).

Brocade VDX chassis: Increasing data center fabric density

Brocade's TRILL-based Virtual Cluster Switching (VCS) data center fabric is designed for building large Layer 2 network domains. It debuted two years ago with Brocade's series of fixed-form factor VDX 6700 switches, which are top-of-rack devices that cluster together in self-forming fabrics.

The VDX 8770 series will ship in 4- and 8-slot form factors with an initial top port density of 384x10 GbE or 96x40 GbE. That capacity will increase the maximum scale of a VCS fabric to 8,000 server-facing ports, according to Sanjib HomChaudhuri, director of product management for Brocade. Prior to the new platform, the ceiling of a VCS-based fabric built with VDX 6700s was 960 server ports. The switches will initially ship with Gigabit Ethernet, 10 GbE and 40 GbE line cards, but the 4 Tbps per-slot backplane capacity is aimed at eventually supporting high-density 100 GbE ports, he said.
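The jump from 960 to 8,000 server-facing ports can be made concrete with a back-of-envelope sketch. The leaf counts, per-leaf port counts and uplink counts below are hypothetical illustrations, not Brocade's published sizing math; only the 8,000-port ceiling comes from the article.

```python
# Illustrative sketch only: how chassis-based spines translate into
# server-facing fabric scale in a leaf-spine layout. The specific
# leaf/uplink numbers are assumptions for the arithmetic, not Brocade specs.

def fabric_server_ports(leaf_count, ports_per_leaf, uplinks_per_leaf):
    """Server-facing ports = per-leaf ports minus ports burned on fabric uplinks."""
    return leaf_count * (ports_per_leaf - uplinks_per_leaf)

# Hypothetical fabric: 160 top-of-rack leaves, 60 ports each,
# 10 of which face the spine as fabric uplinks.
print(fabric_server_ports(160, 60, 10))  # → 8000
```

The same function shows why a fixed-form-factor-only fabric tops out much lower: fewer leaves can interconnect without a chassis spine to aggregate them.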

More on Brocade VDX and other data center fabric options

Brocade VDX 6700s were the first data center fabric switches to ship.

Data center fabric overview: A vendor comparison

Alcatel-Lucent's data center fabric approach.

Juniper offers QFabric options for smaller data center fabrics.

Brocade said the VDX 8770 features 3.5-microsecond port-to-port latency and a highly scalable MAC address table, capable of supporting up to 384,000 virtual machines in a single switch.

"VDX and the VCS fabric were a great initial start, but it was a single switch, so the density wasn't very high," said Eric Hanselman, research director for 451 Research. "The VDX 8770 gets them into higher densities so that now large enterprises and service providers have devices they can leverage."

Customer plans for Brocade VDX chassis

Michael Harrison, a senior network engineer at Physicians Sales and Service, a Fortune 500 medical products distributor based in Jacksonville, Fla., plans to install a pair of VDX 8770s in his data center. The chassis will expand his current VCS fabric built on VDX 6700s and also serve as the core of the rest of his data center network, which features a variety of top-of-rack devices from Cisco, Dell, Juniper and IBM. Today he uses a pair of Cisco Catalyst 6500s in his core.

"What I'm really looking to do is replace the 6500s with a higher-performing chassis so that issues of oversubscription of 10 Gigabit are no longer a factor," he said.

Harrison initially invested in the VDX 6700 series to deploy affordable 10 GbE server-facing switch ports, but the VCS fabric has also enabled Layer 2 multipathing, which allowed him to eliminate spanning tree. The VDX switches also operate as a single virtual switch, which he found simple to configure. The only thing his VCS fabric lacked was the scalability to eliminate his Catalyst 6500s.

"The biggest thing for me investing in this fabric is you have [chassis-like] redundancy in a single virtual switch," he said. "You don’t have to worry about design considerations like spanning tree."

Designing a data center fabric with Brocade's VDX switches takes simple errors out of the equation, he said. "You can have minor misconfigurations [with spanning tree] that can cause outages. It's really easy to miss stuff."

Brocade VDX switches add multiple Layer 3 gateway capability

The VDX 8770 will also have a new feature that allows customers to establish multiple load-balanced Layer 3 gateways within a Layer 2 VCS fabric, allowing them to increase the bandwidth available between Layer 2 domains.

"Once you've built [Layer 2 VCS domains] large enough, you now have to deal with multiple Layer 3 gateways within that fabric," said 451 Research's Hanselman. "If you have multiple Layer 3 paths between domains, now you can have greater capacity and resilient capacity between those Layer 2 domains. This avoids a bottleneck."
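The load balancing Hanselman describes can be sketched in principle with flow hashing, as in equal-cost multipath (ECMP) routing: each flow is pinned to one gateway so packets stay in order, while distinct flows spread across all gateways. This is an illustrative model only, not Brocade's implementation; the gateway names and flow tuple are hypothetical.

```python
import hashlib

# Conceptual sketch of spreading traffic across multiple Layer 3 gateways.
# Hashing the flow's 5-tuple keeps every packet of a given flow on the same
# gateway (preserving ordering) while distributing different flows evenly.

def pick_gateway(flow, gateways):
    """Deterministically map a (src, dst, proto, sport, dport) flow to a gateway."""
    key = "|".join(map(str, flow)).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return gateways[digest % len(gateways)]

gateways = ["gw-1", "gw-2", "gw-3", "gw-4"]          # hypothetical gateways
flow = ("10.1.1.5", "10.2.9.8", "tcp", 49152, 443)   # hypothetical flow

# Same flow always lands on the same gateway:
assert pick_gateway(flow, gateways) == pick_gateway(flow, gateways)
```

With a single gateway, every inter-domain flow funnels through one device; hashing across several gateways is what turns the extra paths into usable, resilient capacity.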

Brocade is taking orders for the VDX 8770 chassis now with a starting list price of $65,000.

Application delivery controller multi-tenancy, MLXe density

To support high-performance networks, Brocade has upgraded its ADX ADCs to include a multi-tenancy feature that will allow enterprises and cloud service providers to slice up the resources of the appliances and assign virtual instances of an ADC to specific applications and services.

F5 Networks announced its own multi-tenancy feature, Virtual Clustered Multiprocessing (vCMP), for its modular Viprion series of BIG-IP ADCs in 2011. vCMP allows enterprises to slice an ADC for tenants within its Traffic Management Operating System (TMOS), running multiple virtual instances of the ADC on a single chassis.

"There's a little more flexibility [with Brocade's multi-tenancy]," Hanselman said. "vCMP in F5 gives you dedicated access to a [CPU] core. You wind up tying specific services to specific cores within Viprion."

The ADX multi-tenancy doesn't slice resources by individual CPU core. It assigns tenants to processor subsystems, giving enterprises the ability to "mix and match capacity without having to determine where those processor hosts are in the system," said Ananda Rajagopal, Brocade's senior director of product management and strategy for the service provider business unit.
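The contrast Rajagopal draws can be modeled in a few lines: rather than pinning a tenant to a specific core, the scheduler places each tenant on whichever processor subsystem has the most headroom. Everything here is a conceptual sketch; the subsystem names, capacity units and placement policy are assumptions, not the ADX API.

```python
from dataclasses import dataclass

# Conceptual model of pooled multi-tenancy (not Brocade's actual scheduler):
# tenants are assigned to processor subsystems by available headroom, so the
# operator never has to know which physical processor hosts them.

@dataclass
class Subsystem:
    name: str
    capacity: int  # abstract capacity units (hypothetical)
    used: int = 0

    def free(self):
        return self.capacity - self.used

def assign_tenant(subsystems, demand):
    """Place a tenant on the subsystem with the most free capacity."""
    best = max(subsystems, key=Subsystem.free)
    if best.free() < demand:
        raise RuntimeError("no subsystem has enough headroom")
    best.used += demand
    return best.name

pools = [Subsystem("ss-1", 10), Subsystem("ss-2", 10)]
print(assign_tenant(pools, 4))  # → ss-1
print(assign_tenant(pools, 5))  # → ss-2 (now has more headroom than ss-1)
```

Per-core pinning, by comparison, would hard-code the tenant-to-processor mapping up front, losing the ability to "mix and match capacity" as demand shifts.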

Finally, Brocade added 10 GbE density to its MLXe core router chassis, introducing a 24-port 10 GbE module. These line cards effectively triple the port density of the MLXe. Brocade is aiming them at service providers that are looking for more core capacity.

Let us know what you think about the story; email: Shamus McGillicuddy, News Director


Join the conversation


Will you embrace Layer 2 data center fabrics?
I note a minor typo in the last paragraph:

"These line cards effectively triple the "pot" density of the MLXe."

So, pot -> Higher density, I presume ;-)
I prefer Cisco Nexus family.