Gigamon gets NIA award for network traffic visibility platform

Gigamon wins this month's SearchNetworking Network Innovation Award for its HC2 modular fabric node for network traffic visibility.

In 2014, Gigamon introduced the GigaVue-HC2 modular fabric node for network visibility. HC2 was engineered in part to let organizations extend their network monitoring and visibility infrastructure to the various security and management tools that sit on top of a visibility fabric. The node has a combined throughput exceeding 1 Tbps and can accommodate non-blocking port speeds from 1 Gbps to 40 Gbps. In recognition of its efforts to help enterprises gain better traffic visibility across their networks, and to exploit these tools to bolster operations such as security, Gigamon is this month's winner of SearchNetworking's Network Innovation Award. SearchNetworking spoke with Ananda Rajagopal, vice president of product management.

What sparked the development of GigaVue-HC2?

Ananda Rajagopal: First, there is the obfuscation created by new, complex networks. Whether it's the use of virtual extensible LANs (VXLAN) or the increasing use of encryption inside networks, customers' ability to get visibility into key types of traffic has been compromised. When we set out to build HC2, one of the things we found is that customers wanted more than one-off projects. They were becoming more strategic in how they looked at visibility. Instead of looking at monitoring projects in isolation, they wanted a common visibility framework so they could use their network traffic to understand and solve serious needs of the enterprise.

What kind of network traffic visibility needs were they referring to?

Rajagopal: These needs could be managing the performance of applications; managing performance of the network; being able to cater to multiple types of security appliances with a common framework. Ultimately, they wanted to be more efficient and to become far more secure from the point of view of how to conduct their operations.

As we looked at our product portfolio, a few things stood out. The market was clearly screaming for what we call a multipurpose visibility node. This would be a single node that could be expanded as required, but one that could be scaled to meet the needs of the modern enterprise. Most important, the personality of that node should be customizable according to the specific needs of the primary buyer within the organization. If, for example, the security buyer is the primary buyer, then the capabilities he or she is looking for are quite different from the set of capabilities a network buyer would want.

Is it the need to track applications or to support business operations like security that drove the technology?

Rajagopal: I would say security was the primary driver. To meet that demand, there were a couple of things we did. First, we broke the problem down to the software capabilities required to meet those use cases. Second, we looked at it from the point of view of the hardware. On the software side, there was a key set of capabilities we introduced around the same time as HC2, and they play a critical role in taking the value proposition of HC2 further.

The very nature of visibility means that once a particular problem is solved, it's time to look for the next blind spot.
Ananda Rajagopal, vice president of product management, Gigamon

What were those capabilities?

Rajagopal: The first was the introduction of what we call adaptive packet filtering. It is really content-based inspection that can improve the efficacy and efficiency of the operational and security tools that process the traffic. How? Typically, a tool only cares about a certain type of content, so if the visibility fabric can filter out everything the tool does not care about, it's a win-win for everyone. The customer gains by increasing the efficiency and utilization of the overall infrastructure, because their valuable operational and security tools can focus their processing power on what they do best. Adaptive packet filtering is basically a content inspection engine that can look anywhere inside the packet for specified patterns of interest.
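The idea described above can be illustrated with a minimal sketch. This is not Gigamon's implementation; the filter function, patterns and sample packets are all hypothetical, and real fabric nodes do this pattern matching in hardware at line rate. The sketch only shows the concept: a tool receives a packet only if a pattern of interest appears somewhere in its contents.

```python
import re

def make_filter(patterns):
    """Compile byte patterns of interest into a single payload matcher."""
    compiled = [re.compile(p) for p in patterns]

    def matches(payload: bytes) -> bool:
        # Look anywhere inside the packet, not just at fixed header offsets.
        return any(c.search(payload) for c in compiled)

    return matches

# Hypothetical policy: this tool only cares about HTTP requests for one host.
http_filter = make_filter([rb"Host: internal\.example\.com"])

packets = [
    b"GET / HTTP/1.1\r\nHost: internal.example.com\r\n\r\n",
    b"GET / HTTP/1.1\r\nHost: other.example.org\r\n\r\n",
]

# Only matching packets are forwarded; everything else never reaches the tool.
forwarded = [p for p in packets if http_filter(p)]
```

Because the non-matching traffic is dropped before it reaches the tool, the tool's processing budget is spent entirely on traffic it actually cares about, which is the efficiency gain the answer describes.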

The second is the inclusion of a NetFlow generation engine. You might say that NetFlow has been around for 20 years and that most physical switches and routers support that, so why bother? But we noticed a couple of things.

If a customer has a multivendor network, they still need a consistent way to generate NetFlow records; not all vendors support it, and even newer systems tend to rely on sampling. As a result, the fidelity of the instrumentation customers got from NetFlow was decreasing even as network speeds climbed higher and higher. So we came up with a way to generate a high-fidelity NetFlow record for every packet we see. It's done through HC2, on a high-performance compute module that can be included as part of the HC2's customization.

This means that instead of deploying a dispersed set of NetFlow generator appliances, or other approaches that add cost, customers can consolidate their NetFlow generation: regardless of the application infrastructure, they can use this common framework to generate high-fidelity NetFlow for instrumentation purposes.
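The contrast with sampled NetFlow can be sketched as follows. This is an illustrative toy, not Gigamon's engine: every packet (not a 1-in-N sample) updates a flow record keyed by its 5-tuple, so the resulting report reflects the full traffic mix. The field names and sample traffic are assumptions for the example.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FlowRecord:
    packets: int = 0
    bytes: int = 0

def build_flow_table(packets):
    """packets: iterable of (src_ip, dst_ip, src_port, dst_port, proto, size).

    Every packet updates its flow's record -- unsampled, so the counts
    are exact rather than statistical estimates.
    """
    table = defaultdict(FlowRecord)
    for src, dst, sport, dport, proto, size in packets:
        rec = table[(src, dst, sport, dport, proto)]
        rec.packets += 1
        rec.bytes += size
    return table

traffic = [
    ("10.0.0.1", "10.0.0.2", 12345, 80, "tcp", 1500),
    ("10.0.0.1", "10.0.0.2", 12345, 80, "tcp", 400),
    ("10.0.0.3", "10.0.0.2", 23456, 443, "tcp", 900),
]
flows = build_flow_table(traffic)
```

With sampling, short flows like the second one above can be missed entirely; accounting for every packet is what the interview means by "high-fidelity" records.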

Can you talk about HC2's inline bypass capabilities?

Rajagopal: This solves a very big pain point within organizations. Many security teams want to put the security appliances inline, but they get resistance from the network team because the performance of the security appliance doesn't match the performance of the network.

That is where we come in. We provide load balancing across multiple security appliances and let customers be precise about what kind of traffic travels through each appliance. That's useful because not all network traffic needs to be inspected by every security appliance. The result is a common architecture in which both inline and out-of-band security appliances can be supported by a common, multitiered security framework.
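Two of the ideas in that answer can be sketched briefly: steering only the traffic that needs inspection through the inline appliances, and hashing each flow to one appliance in a pool so its packets stay on a consistent path. The policy, appliance names and hashing choice below are illustrative assumptions, not Gigamon's implementation.

```python
import zlib

APPLIANCES = ["ips-1", "ips-2", "ips-3"]  # hypothetical inline appliance pool

def needs_inspection(dst_port: int) -> bool:
    # Example policy: only web traffic is sent through the inline appliances.
    return dst_port in (80, 443)

def pick_appliance(src_ip, dst_ip, src_port, dst_port):
    # Hash the flow tuple so all packets of a flow hit the same appliance.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return APPLIANCES[zlib.crc32(key) % len(APPLIANCES)]

def route(src_ip, dst_ip, src_port, dst_port):
    if not needs_inspection(dst_port):
        return "bypass"  # forwarded directly; the appliances never see it
    return pick_appliance(src_ip, dst_ip, src_port, dst_port)
```

Because each appliance only sees a fraction of the relevant traffic, a pool of appliances can sit inline on a network faster than any single appliance could handle on its own, which is the pain point the answer addresses.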

Finally, there is the introduction of Secure Sockets Layer (SSL) decryption, which we introduced last November. We have received a fantastic reception for that, because there is no visibility appliance today that has all the capabilities we have spoken about and can also perform SSL decryption.

So, from an administration perspective, through a common node, an organization can generate NetFlow reports, look deeper into SSL traffic, inspect for malware and do deep packet inspection. And if it wants to terminate traffic coming from a virtual node, it can do that. That is what we mean by a multipurpose visibility appliance: one that isn't pigeonholed into a single use case, but can span the gamut of what an enterprise needs.

How have network traffic visibility demands changed over the past few years?

Rajagopal: They have changed quite significantly. The very nature of visibility means that once a particular problem is solved, it's time to look for the next blind spot. Network performance management (NPM) was very big a few years back, but the focus has since morphed into application performance management (APM). Today, APM is still important, but the network's impact on security has become a very big factor.

If you examine the plethora of network security firms that have done well, it's because of the realization that looking at network traffic provides an instantaneous way of looking at what threats could be emerging in the enterprise.

How has the increase in virtualized applications affected network visibility?

Rajagopal: Let's look at APM. The kinds of apps being virtualized are becoming far more mission-critical. And that means it's no longer one monolithic application per server that's being monitored. Rather, it's virtualized, distributed applications built on multiple tiers, and those bring a different set of capabilities and requirements that will be expected from the visibility infrastructure.

What are the challenges associated with the migration to higher speeds within the data center?

Rajagopal: A visibility infrastructure must be put into place when an organization undergoes a network upgrade. Organizations realized that many of the management tools they were using couldn't keep up when they transitioned from 1 Gigabit Ethernet to 10 GbE. The same thing is playing out today as they move from 10 GbE to 40 GbE or 100 GbE; the tools can't keep up. Again, it's not necessary to look at all the traffic inside a network. Rather than tightly coupling how you manage and operate the network with how you design it, you can create hierarchy and separation of layers. With this intermediate layer -- the visibility layer -- you can upgrade each side independently. The network can be made as fast as the network team decides, and the operations team can still get the visibility it needs. We help bridge the gap between the two.

Next Steps

Improving visibility across the WAN

Using flow analysis to detect threats

NetFlow and packet analysis beef up visibility
