It's been a long time since packet sniffing was considered a groundbreaking technology. It's the information you can extract from that raw data -- the analysis that solves problems or even improves revenue -- that determines the value of a network monitoring or application performance management tool.
ExtraHop Networks Inc., winner of SearchNetworking's September Network Innovation Award for its EH8000 appliance, takes this notion one step further with what it calls an operational intelligence platform. The Seattle-based vendor performs in-depth, real-time wire data analysis across Layers 2 through 7, and it upped its game earlier this year with the release of the EH8000, a 20 Gbps appliance that processes data at line-rate speeds.
SearchNetworking features writer Jessica Scarpati spoke with ExtraHop CEO and co-founder Jesse Rothstein about the EH8000.
ExtraHop has several other operational intelligence platforms. What's different about the EH8000?
Jesse Rothstein: The latest platform really pushes the envelope of scalability. It is capable of doing 20 Gbps of line-rate analysis of wire data. When the traffic is encrypted, we're actually capable of decrypting it -- if loaded properly with the certificate and private key to do so -- at line rate as well, and up to 35,000 [SSL] handshakes per second with 2,048-bit encryption keys. We've actually benchmarked this system in our lab and were able to process 400,000 HTTP transactions per second, so these are not just theoretical specifications. This is what our system is actually able to do, and it … can also monitor up to 4,000 servers individually with 30 days of look-back. That's a massive amount of analytical power in just one appliance, and of course our largest customers will deploy a number of these appliances across multiple data centers.
Tell us about your approach to on-demand packet capture, a feature that was also part of the upgraded appliance.
Rothstein: What's really exciting about on-demand packet capture is that it actually changes the game around packet sniffers and protocol analyzers a bit. Packet sniffers have been around for a very long time … and they've been a very valuable diagnostic tool. But as networks have become faster and the amount of data traversing the network has grown by orders and orders of magnitude, it's become much harder for these packet sniffers to keep up.
Commercial packet sniffers have needed to include more and more storage in order to perform continuous packet capture. In fact, nowadays it's not uncommon to see commercial packet sniffers with hundreds of terabytes of storage, and even though that's an almost unimaginable amount of data [to have to store] … these continuous packet capture systems can be filled up in less than 24 hours, because a couple of 10-Gigabit pipes can generate that much data in that time.
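The arithmetic behind that claim is easy to check. Assuming two fully saturated 10 Gbps links writing continuous captures to disk (a simplification, since real links rarely run at 100% utilization):

```python
# Back-of-the-envelope check of continuous-capture storage growth.
# Assumes two fully saturated 10 Gbps links -- an upper bound.
link_gbps = 10                    # per-link line rate, gigabits per second
links = 2
seconds_per_day = 24 * 60 * 60    # 86,400

bytes_per_day = links * link_gbps * 1e9 / 8 * seconds_per_day
terabytes_per_day = bytes_per_day / 1e12

print(f"{terabytes_per_day:.0f} TB per day")  # 216 TB per day
```

At roughly 216 TB per day at full line rate, even a few hundred terabytes of capture storage rolls over in a day or two, which is consistent with the "less than 24 hours" figure for busier deployments.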
Looking for a problem in a packet dump of that size can take a lifetime and is a lot like looking for a snowflake in an avalanche ... but what we introduced with on-demand packet capture is the ability to take very, very surgical packet captures of exactly what you want to see, and trigger it off of application-level events. We can take a packet capture of a single flow or a single transaction that's observed to be slow or observed to have resulted in an error … [which is] good for forensic proof of a problem and can be provided to a vendor in order to troubleshoot something.
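The mechanism described above can be sketched in a few lines of Python. Everything here -- the callback names, the buffer size, the latency threshold -- is a hypothetical illustration of event-triggered capture, not ExtraHop's actual API:

```python
from collections import defaultdict, deque

# Sketch of "on-demand" packet capture: buffer a short rolling window of
# packets per flow, and persist a flow's buffer only when an
# application-level trigger fires (e.g., a slow HTTP transaction).
SLOW_MS = 500                                       # trigger threshold (assumed)
buffers = defaultdict(lambda: deque(maxlen=1024))   # per-flow ring buffers

def on_packet(flow_id, packet):
    """Keep only the most recent packets for each observed flow."""
    buffers[flow_id].append(packet)

def on_transaction(flow_id, latency_ms):
    """Application-level event: surgically capture the one offending flow."""
    if latency_ms > SLOW_MS:
        return list(buffers[flow_id])   # e.g., write out as a pcap
    return None
```

The key design point is that storage cost scales with the number of *interesting* events rather than with total traffic volume, which is what makes the approach viable on fast links.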
Why did you decide to go up to 20 Gbps?
Rothstein: A few years ago, most servers that were connected to an enterprise network core were using 1-Gigabit Ethernet links. Nowadays, the price per port for 10-Gigabit links has gone down so much that these links are becoming extremely common as interconnects for high-powered servers and applications.
Organizations are producing so much more data than they ever have before, and every application is a networked application -- and all of that data traverses the wire. So, the scale, complexity and dynamism have increased many times over in enterprise environments. We need better visibility in these environments. When I talk about dynamism, I'm really thinking of things like server virtualization -- where VMs [virtual machines] can spin up and spin down and vMotion across the data center -- and that's a level of dynamism that didn't even exist five years ago. [Another trend is] Agile development, where organizations roll out new versions of the application not every year or two, but maybe every week or two -- and some organizations are even doing continuous development, where all the time new versions of the application are being rolled out.
What challenges did ExtraHop face building a product that can perform real-time analysis at 20 Gbps, and how did you overcome them?
Rothstein: It was very hard to do. One of the things is, of course, it's taken a lot of good engineering. Our engineering team has decades of experience in high-speed packet processing and in application awareness and application fluency.… We've applied some of that same domain expertise around high-speed packet processing, application awareness and application fluency to this new space of operational intelligence. In order to accomplish this, we built our platform from the ground up to leverage multicore [processors].
How has that helped?
Rothstein: Even though we're seeing many more transistors in processors, clock rates are not really increasing. So, what's happening is multiple processing cores are being stamped out onto the same size of silicon -- the same die size. We're able to leverage that, but it's not something where you can take a legacy product or legacy architecture and just wake up one morning and say, 'I'd like to light up some more cores now and make it go faster.' In computer science, this is all related to Amdahl's Law … and it has to do with how parallelizable your task is. So, we built our platform from the ground up to leverage some of the inherent areas of network traffic that are parallelizable.
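Two ideas in that answer can be made concrete. Amdahl's Law bounds the speedup from adding cores, and per-flow dispatch is one inherently parallelizable property of network traffic. The sketch below is my own illustration, not ExtraHop's implementation:

```python
# Illustrative only: Amdahl's Law, plus per-flow dispatch as an example
# of an inherently parallelizable property of network traffic.
NUM_CORES = 8  # assumed core count for the example

def amdahl_speedup(p, n):
    """Amdahl's Law: speedup on n cores when fraction p of the work is
    parallelizable and (1 - p) remains serial."""
    return 1.0 / ((1.0 - p) + p / n)

def dispatch(src_ip, dst_ip, src_port, dst_port, proto):
    """Hash the flow 5-tuple to pick a worker core. Every packet of a
    given flow lands on the same core, so per-flow state needs no
    cross-core locking."""
    return hash((src_ip, dst_ip, src_port, dst_port, proto)) % NUM_CORES
```

Even with 95% of the work parallelizable, eight cores yield only about a 5.9x speedup, which is why the serial fraction has to be engineered down from the start rather than bolted onto a legacy architecture.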
And, finally, we have absolutely been leveraging those recent gains in processing power and storage capacity…. It would've been much, much harder or a more expensive undertaking [to build this platform] five or 10 years ago with older processing technology.
At VMworld this year, you announced a partnership with Arista Networks. What will you be offering as part of that alliance?
Rothstein: We're partnering with Arista [Networks] around their new DANZ [Data ANalyZer] module.
The use case we were demoing at VMworld was what we call 'persistent mobile monitoring.' What that means is that the ability to monitor a virtual machine needs to move with the virtual machine itself. So, if you vMotion a virtual machine from one part of the data center to the other, and you are monitoring or analyzing its transactions for operational intelligence, you need that monitoring to persist when the virtual machine moves.
Typically, you might tap some traffic, and if the virtual machine moves, you no longer see it. With persistent mobile monitoring, when the virtual machine vMotions across the data center, the ability to analyze its transactions and network traffic moves along with it. Arista … can tag the traffic in a certain way such that we can track when these vMotion events occur and when the virtual machine has actually moved to a different port or a different location in the data center.
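In outline, the analyzer's side of this might look like the following. The event names and data structures are illustrative assumptions, not the Arista DANZ or ExtraHop API:

```python
# Hypothetical sketch of persistent mobile monitoring: keep analysis
# state keyed by VM identity, and update the VM's location whenever a
# tagged vMotion event is observed in the mirrored traffic.
vm_locations = {}   # vm_id -> (switch, port)
vm_metrics = {}     # vm_id -> accumulated per-VM measurements

def on_vmotion_tag(vm_id, switch, port):
    """Tagged event from the switch fabric: the VM moved, but its
    monitoring state persists under the same vm_id."""
    vm_locations[vm_id] = (switch, port)

def on_flow(vm_id, latency_ms):
    """Attribute measurements to the VM, wherever it currently runs."""
    vm_metrics.setdefault(vm_id, []).append(latency_ms)
```

Because the state is keyed by VM identity rather than by tap port, a vMotion event only updates the location mapping; the accumulated analysis is unaffected.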
This was first published in September 2013.