Cloud computing network switch vendor Arista Networks is a startup with a lot of star power. Arista, which specializes in high-density 1 rack unit (RU) 10 Gigabit Ethernet switches, was founded by Andy Bechtolsheim, who also founded Sun Microsystems, and Stanford University Professor David Cheriton, who co-founded Granite Systems with Bechtolsheim. Last year, Arista added to its top-name talent by installing Jayshree Ullal as president and CEO. Ullal had previously been a senior vice president at Cisco in charge of the company's data center, switching and services unit.
SearchNetworking.com met with some of Arista's executives at Interop Las Vegas and talked to them about their vision for the company.
There are a lot of switch vendors on the market today. Why did Arista's founders believe the time was right for a new 10 Gigabit Ethernet (GbE) specialist?
Arista CEO Jayshree Ullal: Our inspiration was [that] we saw a request for information from one of our potential customers that said, "We want to build a cloud networking 10,000-node cluster for 1 GbE and in the future 10 GbE, and it needs to be in the $100 to $200 per-port range." And nobody could satisfy that. And we said: How is it possible that we can achieve those requirements? So building that purpose-built, modern operating system combined with achieving the price performance and latency became our inspiration, driven largely by customer requirements.
Why would an enterprise invest in Arista cloud computing network switches as opposed to something from a better-known vendor like Cisco?
Ullal: I think enterprises will continue to invest in Cisco if they're not trying to break an old habit. If you have traditional applications and a traditional enterprise design, then the existing vendor is fine.
What Arista is trying to do is really provide that breakthrough cloud networking technology for enterprises to build their private platforms. That's not for everybody, but it's for storage applications, virtualization, low-latency market data, high-speed data analytics, seismic analysis, oil and gas, healthcare research. So we're really focused on high-bandwidth, high-data applications. Not kilobit applications, but megabit and gigabit applications. Both our product and our focus are not mainstream enterprise. We almost always connect to a Cisco or Juniper core. And we're almost always a complement for an existing enterprise.
What kind of solutions are you bringing to the market to fill that need?
Anshul Sadana, director of systems engineering for Arista: Arista is making high-density 10 GbE switches today for the data center. The current shipping products are 1 RU. They are high-density: 48 ports of nonblocking 10 GbE in 1 RU. That's 960 gigabits per second, or close to a terabit of bandwidth in one rack unit. Prior to these products, to get that kind of capacity, you had to buy a high-end product like a [Cisco Catalyst] 6500. But even a fully loaded Catalyst 6513 will give you 800 gigabits of bandwidth.
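The throughput figure quoted above can be checked with simple arithmetic. This sketch uses the port count and per-port rate from the interview; the doubling reflects the common convention of quoting switch capacity full-duplex (both directions counted):

```python
# Nonblocking capacity of a 48-port 10 GbE switch, quoted
# full-duplex as switch capacity figures usually are.
ports = 48
rate_gbps = 10                      # per port, each direction
capacity = ports * rate_gbps * 2    # full-duplex doubling
print(capacity)  # 960 Gbps, i.e. close to a terabit per rack unit
```

The same arithmetic puts a fully loaded chassis switch in perspective: 800 Gbps of fabric bandwidth across 13 slots is far less per rack unit than 960 Gbps in a single RU.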
In a cloud data center or in a new data center, where people are trying to get nonblocking connectivity from any server to any server -- or from a server to a database or a cache or a storage -- they're looking for high throughput, low latency. And of course they're very price sensitive. So our solutions are sometimes one-fifth to one-tenth of the price. They are sometimes one-fifth to one-tenth of the power consumption. And they are 10 times the performance.
Andy Bechtolsheim, chairman and chief development officer, Arista: We are actually starting to displace Catalyst 6500s in actual customer sites because people want to eliminate those large devices and replace them with these very small 1 RU boxes. These are data centers of banks, enterprises.
Where in the data center are you displacing Catalyst 6500s?
Sadana: Today, it's either access or aggregation. That is the focus of our product. We don't intend to go to other pieces of the data center. We don't intend to be in the core at all. The core has feature sets and routing tables and so on that we don't want to support right now.
Arista has no intention of developing a core switch? Why is that?
Ullal: It's been well done by existing vendors, and we're trying to solve problems that haven't been solved, not ones that have been solved.
So what kind of data center network architecture are you trying to facilitate with your technology?
Sadana: Let's say you log in to Facebook. When you log in, you get a lot of data that is dynamic. You send one friend request to the Facebook website, and you get a response. But to get that response, internally the servers have to do a lot of compute and queries to the database to tell you which friends have recently joined the network, which friends have updated their profile, something that's happening in your community. All that data has to come back really quickly in real time. To be able to do that, you need a high-performance interconnect within the data center.
That's sort of a different mindset of how people are designing networks. In the classic enterprise space, you had the access layer, the distribution layer and the core layer. As you scale things out, you just add more devices and more ports. But in most cases, within a rack, you're just going through the access switch. If you're going from one rack to another rack, you're going from the access layer to distribution to the core and then back to distribution and back to access. The mindset we have is [that] in a cloud design it's not the size that matters. It's the principle that matters. If you want good performance, you can have a two-tiered design where you have your leaf layer and spine layer. Most of your traffic is in these two layers. There is a core device, but it is mainly for your traffic going out of your data center.
Bechtolsheim: What people want is this much more flat, two-layer network where you can get the same bandwidth from any server to any server. And you don't get the bottleneck from the traditional 6500 switch. You get a massive amount of bandwidth across thousands and, in some cases, tens of thousands of servers.
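The leaf-spine trade-off described above can be sketched numerically. In a two-tier design, each leaf splits its ports between server-facing downlinks and uplinks toward the spine; the ratio of the two determines oversubscription. The port counts below are illustrative, not a specific Arista configuration:

```python
# Sketch of the leaf-layer trade-off in a two-tier leaf-spine fabric.
# Each leaf switch splits its ports between server downlinks and
# spine uplinks; oversubscription is downlink vs. uplink bandwidth
# (all ports assumed to run at the same rate, e.g. 10 GbE).
def leaf_oversubscription(leaf_ports, uplinks):
    """Return (servers per leaf, oversubscription ratio)."""
    downlinks = leaf_ports - uplinks
    # 1.0 means nonblocking: any server can talk to any server
    # at full line rate through the spine.
    return downlinks, downlinks / uplinks

# A 48-port leaf with 24 uplinks is nonblocking (1:1).
print(leaf_oversubscription(48, 24))  # (24, 1.0)
# Only 8 uplinks connects more servers per leaf, at 5:1 oversubscription.
print(leaf_oversubscription(48, 8))   # (40, 5.0)
```

The nonblocking case is what the "same bandwidth from any server to any server" claim requires; real designs often accept some oversubscription to connect more servers per leaf.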
There are other vendors trying to bring forth the same architecture, right?
Bechtolsheim: It comes down to two things. One is the size of the core switch, which determines how much throughput you get. What we have here are one-unit boxes, and we're working on bigger things [for] the future. But the other is whether you can do protocols that enable the traffic to spread out across multiple core switches.
Sadana: There are hardware speeds and feeds; there's price, power and performance. But there's also the software that runs on these devices. In many cases, that's the piece that people miss out on. Once you buy the equipment, then over the next five to eight years, all you have to live with is the software.
If you look at how switching has evolved, most companies talk about modular operating systems, live patching. They talk about a lot of things. But the reality is [that] to deliver all of this today means a modular switch with two supervisors or some of these features that are available only on something like an IOS XR, which is the highest-end operating system shipping on a CRS-1 [router]. We can do those features on our box today. Our customers come to us and say all vendors come and give a presentation on how their operating system is so good and modern. But for 10 years people have been promising that their operating system is modular and they can do live patching and so on. We were the first ones to deliver that.
If you look at our operating system, we run a Linux kernel under the hood. We open up the platform so you can work directly on the switch. You are pretty much running a Linux shell on the switch. You can use all your Linux tools: tcpdump, or ifconfig to look at your interfaces. You can write your own scripts to automate the things you want to do.
If you plan to stay out of the core, how do you engage customers who want to use one vendor throughout their data center network?
Sadana: They're more than welcome to do that. They'll pay a lot for it. When we design hardware, a lot of times we have a dual-vendor strategy. You never want to be stuck with a single vendor for a single critical part -- then you have no leverage. Our customers do the same thing, where they have a multi-vendor strategy. Our customers come from the non-Cisco market. These are customers that are very price sensitive. They buy a lot of commodity hardware. And they want to buy switches that are good value for money -- not switches from a company that has an 80% margin built into its business. When customers buy a server or a new CPU, the price comes down every year. They ask: Why doesn't that happen on my switches?
If you look at the Cisco model, they want to maintain that margin, not give it back to the customer. Ours is a customer base that is interested in lower prices and better performance. Not cheap. They understand that technology is expensive. But if you look at the Web companies, if you look at high-performance computing, they're very price sensitive.
So what do you want attendees to learn about Arista at Interop?
Ullal: I think Arista's innovation is important to understand, because sometimes Interop becomes all about the buzz. But really understanding that it's possible to innovate, build the lowest latency, get the best core density and have a modern network for the data center and the cloud are some of Arista's key value propositions. And I'd like … the attendees to know that innovation is not dead.
There is a lot of talk about cloud computing at Interop this year. Do you get the sense that enterprises understand what the cloud is and can see through the hype?
Ullal: It's hard to paint with a broad brush. I think the more classic enterprises don't have to deal with the cloud because perhaps their scale and their performance and applications don't require it.
I think every enterprise is starting to think about not just how their enterprise looks now but how it needs to be architected in the future. And they start to think: Do we outsource to a public cloud or start thinking about how we build a private cloud? I think that for almost every one of them, that's a very important and thoughtful discussion to have. And almost every one of them realizes they need to start by building a private cloud and have some private hybrid combination in the future interfacing into a public cloud. Many of them are in that architectural stage where it's not a wholesale migration of what they have, but an important piece of where they are and where they want to be.