Arista upgrades 7500 for massive leaf-and-spine network architecture

With its new, high-density 10/40/100 GbE 7500E switch, Arista Networks enables customers to build massive leaf-and-spine networks.

Arista Networks updated its modular 7500 switch to support a leaf-and-spine data center network that can connect more than 100,000 servers and more than 1 million virtual machines with a flat topology.

Altogether, Arista announced four new line cards -- including one with triple-speed port support -- plus a new supervisor engine and fabric backplane that increase the scalability and total bandwidth of the Arista 7500. Although the new components upgrade the company's existing 7500 chassis, Arista dubbed the upgraded switch the 7500E. The updates push the switch to 30 Tbps and 14 billion packets per second of non-blocking Layer 2 and Layer 3 throughput. Customers can outfit the switch to support 1,152x10 Gigabit Ethernet (GbE) ports, 288x40 GbE ports, 96x100 GbE ports or any combination of the three.

The updated switch will enable customers to build massive leaf-and-spine networks. Using Multi-Chassis Link Aggregation (MLAG) or Equal Cost Multi-Path (ECMP) routing to interconnect multiple 7500E switches, customers can create spines ranging from two to 32 switches. A 32-switch spine can support a network with 110,952 10 GbE server ports, 1.1 million virtual machines and 1 million MAC addresses. The switches scale out laterally, meaning customers can add more 7500Es as more leaf switches are added at the access layer. And given the any-to-any connectivity via ECMP, port-to-port latency remains under 5 microseconds.
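
To put that scale in perspective, the rough sizing sketch below (in Python) shows how a two-tier ECMP fabric's server-port count follows from the spine's port density. The leaf configuration used here -- 96x10 GbE server ports per leaf and one 10 GbE uplink to each spine switch -- is an illustrative assumption, not an Arista-published design.

    # Back-of-envelope sizing for a two-tier leaf-and-spine fabric.
    # The leaf configuration (96 x 10 GbE server ports, one 10 GbE uplink
    # to each spine switch) is a hypothetical assumption for illustration.

    def fabric_size(spine_switches, spine_ports_per_switch,
                    uplinks_per_spine, leaf_server_ports):
        """Estimate server-facing ports in an ECMP leaf-and-spine fabric."""
        # Each leaf consumes `uplinks_per_spine` ports on every spine switch,
        # so the spine's port count caps how many leaves can attach.
        leaves = spine_ports_per_switch // uplinks_per_spine
        server_ports = leaves * leaf_server_ports
        # Leaf oversubscription: server ports vs. total uplinks
        # (all ports assumed to run at the same speed).
        oversub = leaf_server_ports / (spine_switches * uplinks_per_spine)
        return leaves, server_ports, oversub

    # A 32-switch 7500E spine with 1,152 x 10 GbE ports per chassis:
    leaves, ports, ratio = fabric_size(32, 1152, 1, 96)
    print(leaves, ports, ratio)   # 1152 leaves, 110592 server ports, 3.0:1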

One of the four new line cards for the 7500E features triple-speed ports, an industry first. The ports on the 12x100 GbE module can be software-configured to run at 10 GbE or 40 GbE speeds. To deliver this density and flexibility, Arista integrated the optics into the physical interface of the line card, which also reduces the cost of cabling the card by eliminating the need to buy external transceivers.

The new components for the 7500E will be generally available this quarter, Arista said.

"Having 10, 40 and 100 Gigabit on one chassis slot is amazing," said Spartaco Cicerchia, director of enterprise systems at the Howard Hughes Medical Institute in Chevy Chase, Md. "I can upgrade at my own pace in the same chassis footprint. I have latitude in the type of connectivity I use and the chassis itself has greater connectivity than the old version."

HHMI is installing the 7500E in its high-performance computing (HPC) environment, where the infrastructure team is migrating from InfiniBand to Ethernet. Cicerchia explored both Gnodal and Cisco as options for Ethernet-based HPC, but neither vendor offered a low-latency spine switch in Arista's class, he said. He plans to deploy 100 GbE on the 7500E as soon as Arista delivers 100 GbE uplinks on its top-of-rack switches.

"We are in an environment where we produce 5 TB per day, so we use a pipe to carry data back and forth from acquisition to storage and from storage to HPC and so on," he said. "Forty gigabit is just a short-term fix."

Massive leaf-and-spine networks will apply to multiple environments

With the 7500E, Arista is looking to break out of its sweet spot of high-frequency trading houses that have bought its low-latency top-of-rack switches. The 7500E gives Arista the scalability to appeal to large enterprise data centers, cloud providers, big data networks, large IP storage systems and Web 2.0 data center networks, analysts said.

"[Arista is] signaling that they want to become data center competitors to Cisco, and not just in the early markets where they've made inroads, but right across the board with cloud," said Brad Casemore, research director with IDC.

A massive leaf-and-spine network built with ECMP is comparable to the flat networks that Juniper Networks, Brocade, Cisco and others are trying to deliver with fabric-based solutions, Casemore said. "There is a general trend among all vendors, in terms of dealing with more east-west server traffic, to make these networks flatter. We're seeing that there's nothing wrong with either approach. Each vendor is approaching this in a different way."

"When you look at the combination of being able to support much larger numbers of devices at much higher speeds, this spine-leaf architecture can connect those networks rather simply," said Eric Hanselman, research director at 451 Research. "You don't have to go to a TRILL-based [Transparent Interconnects of Lots of Links] architecture to achieve the interconnect density these network operators are moving toward. [Arista] can integrate well with a whole range of equipment without having to use proprietary interconnects."

San Diego Supercomputer Center connects campus labs to HPC environment with 7500E

The San Diego Supercomputer Center (SDSC) ordered an Arista 7504E with two of the new line cards, to be installed as the core switch in a network that interconnects more than 15 laboratories on the University of California San Diego campus. The switch will then link that lab network to SDSC's high-performance computing environment via a 120 Gbps link.

The 7504E will be populated with two of the new line cards: a 36x40 GbE port module and a 48x10 GbE port module that includes two additional 100 GbE ports.

"Every [lab] will be connected to [the 7500E] at least two times, either with 2x10 GbE or 2x40 GbE," said Philip Papadopoulos, director of computing systems at SDSC. "The data sets of these labs range from 2 TB to 100 TB and 500 TB, and they have desires at times to move all those data sets from one side of campus to the other. We want to make it trivial for people to be able to share those large data products. In some sense there is so much capacity for aggregation inside these Arista switches that all we have to do is light fiber from one particular lab into the switch and any sort of artificial bottlenecks you might put into a network design made out of discrete components just goes away. It's like a cluster interconnect, but we're using it at campus scale."

Papadopoulos said Arista's support of high-density, non-blocking Layer 2 and Layer 3 port aggregation, at a good price, was a key part of his decision to install the 7500E. Arista's price-per-port numbers are $550 per 10 GbE port, $2,200 per 40 GbE port and $10,000 per 100 GbE port.
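
Applying those list prices to the two line cards SDSC ordered gives a back-of-envelope figure for the cards alone; chassis, supervisor, optics and any discounting are excluded, so this is an illustration rather than a quote from either company.

    # Back-of-envelope list price for the two SDSC line cards, using the
    # per-port figures quoted above (cards only; chassis and optics excluded).
    PRICE_PER_PORT = {"10GbE": 550, "40GbE": 2_200, "100GbE": 10_000}

    line_cards = {
        "36x40 GbE module": {"40GbE": 36},
        "48x10 GbE + 2x100 GbE module": {"10GbE": 48, "100GbE": 2},
    }

    total = 0
    for name, ports in line_cards.items():
        cost = sum(PRICE_PER_PORT[speed] * count for speed, count in ports.items())
        total += cost
        print(f"{name}: ${cost:,}")         # $79,200 and $46,400
    print(f"Both cards: about ${total:,}")  # about $125,600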

