It may be early days for 100 Gigabit Ethernet, but many service providers have deployed the technology, and now a small but growing number of enterprises are proceeding with it as well. As soon as 100 GbE is in live production, network and security pros will need to manage and monitor these networks. Herein lies the challenge—some management tools are ready for this next level of ultra-speed, but some clearly are not.
Of 15 network management and monitoring vendors contacted for recent research, very few had concrete products in place or even a near-term roadmap. Yet most are aware of the emerging need for 100 GbE monitoring and management and are exploring how to accommodate customers.
The difficulty of monitoring 100 GbE networks varies with the type of network management tool or strategy. Some strategies adapt readily to ultra-high-speed networks, while others must be heavily transformed. The following is an assessment of existing network management strategies and whether they're ready to handle 100 GbE networks.
Fault/availability management: Since 100 GbE is delivered via network devices, the first objective is to make sure that tools for the network operations center (NOC) are ready to discover and recognize these devices. That means recognizing the new interfaces they employ, their topology placement and their ongoing health. These basic capabilities are already present in most fault/availability monitoring platforms today, particularly among products deployed in service provider environments (e.g., ISPs, mobile operators and hosting providers) where 100 GbE is already in place. These tools may need to be scaled down for an enterprise network environment.
Statistics-based performance management: Things get a bit stickier with performance management platforms that gather statistics by regularly polling network devices. While many metrics will not be directly affected, traffic counters can wrap between polls at 100 GbE rates, undercounting total activity unless they are harvested far more often than polling engines are typically configured to support. This will not be a problem in the early days of 100 GbE, when total traffic volumes will be far below capacity, but eventually these tools will need to scale.
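The counter problem is easy to quantify. Here is a minimal back-of-envelope sketch (plain Python, no SNMP library) of how quickly an interface octet counter wraps at various line rates, contrasting legacy 32-bit counters with the 64-bit high-capacity counters defined in RFC 2863:

```python
# Sketch: how often an SNMP octet counter wraps at a given line rate.
# Illustrates why 32-bit counters (ifInOctets) force very fast polling at
# 100 GbE, while 64-bit counters (ifHCInOctets, RFC 2863) effectively never wrap.

def wrap_seconds(counter_bits: int, rate_bps: float) -> float:
    """Seconds until a byte counter of the given width wraps at rate_bps."""
    max_bytes = 2 ** counter_bits        # the counter counts octets
    return max_bytes * 8 / rate_bps      # convert to bits, divide by line rate

for rate_gbps in (10, 40, 100):
    rate = rate_gbps * 1e9
    years = wrap_seconds(64, rate) / (86400 * 365)
    print(f"{rate_gbps:>3} GbE: 32-bit wraps every {wrap_seconds(32, rate):5.2f} s, "
          f"64-bit every {years:,.0f} years")
```

At 100 GbE a 32-bit counter wraps in roughly a third of a second, so any tool still polling 32-bit counters at typical five-minute intervals would miss hundreds of wraps per cycle; 64-bit counters remove the pressure entirely.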
Flow-based security and performance management: Tools that use technologies such as NetFlow or IPFIX to gather data on the composition and volume of traffic should be essentially agnostic to 100 GbE, and will be stressed only if traffic volumes rise to a level where flow record volumes exceed a tool's capacity to collect and analyze them. Again, that's not likely to occur in the near term, as flow-based tools designed for today's enterprise and service provider environments are already equipped to handle very high volumes of flow record data.
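Whether a given collector keeps up is a simple capacity-planning calculation. A hedged sketch follows; the flows-per-gigabit figure is a hypothetical planning number for illustration, not a measured value, since real traffic mixes vary widely:

```python
# Back-of-envelope sketch: estimated flow-record export rate for a link,
# to compare against a collector's rated records/second. The flows_per_gbps
# input is an assumed planning figure, not a measurement.

def export_rate(link_gbps: float, flows_per_gbps: float, sampling: int = 1) -> float:
    """Estimated flow records/second exported, with optional 1:N flow sampling."""
    return link_gbps * flows_per_gbps / sampling

# A hypothetical 100 GbE link generating 1,000 new flows per Gbps:
records = export_rate(link_gbps=100, flows_per_gbps=1000)
print(f"unsampled: ~{records:,.0f} records/s")
print(f"1:10 sampled: ~{export_rate(100, 1000, sampling=10):,.0f} records/s")
```

If the unsampled rate exceeds the collector's rating, sampling (supported by most flow exporters) brings the record volume back within reach at the cost of granularity.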
Packet-based security and performance management: Here's where we run into a major problem. Packet inspection tools are the heavy lifters of the management tools sector, and they face the greatest technical challenge in accommodating increased network speeds. There are three essential parts to this challenge: getting to the packets, distributing the packets for analysis, and analyzing the packets. No packet analysis tools available today can directly support 40 GbE, let alone 100 GbE, at full line rate. Many have certified full 10 GbE support and a few offer 40 GbE interfaces, but none have addressed 100 GbE at this time.
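The scale of the analysis problem becomes clear from worst-case packet rates. A short sketch using the standard Ethernet figures (64-byte minimum frame plus 8 bytes of preamble and a 12-byte inter-frame gap, or 84 bytes on the wire per frame):

```python
# Sketch of why full line-rate packet analysis is hard: worst-case packet
# rates at the Ethernet minimum frame size. Each 64-byte frame occupies
# 84 bytes on the wire once preamble and inter-frame gap are counted.

WIRE_OVERHEAD = 8 + 12  # preamble + inter-frame gap, in bytes

def max_pps(rate_bps: float, frame_bytes: int = 64) -> float:
    """Worst-case packets/second a link can deliver at this frame size."""
    return rate_bps / ((frame_bytes + WIRE_OVERHEAD) * 8)

for gbps in (10, 40, 100):
    print(f"{gbps:>3} GbE: {max_pps(gbps * 1e9) / 1e6:6.1f} million packets/s")
```

A tool that inspects every packet at 10 GbE must sustain roughly 14.9 million packets per second; at 100 GbE that worst case grows tenfold, to nearly 149 million packets per second.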
Scaling packet inspection tools for 100 GbE networks
The only viable approach to 100 GbE network management today is to adapt existing tools that are currently rated only for 10 GbE. This requires a two-part process.
The first step is gathering packets at 100 GbE. Two techniques are primarily used for gathering packets for analysis: taps and SPAN (a.k.a. port mirroring). SPAN is a function of the network device itself, but taps must be upgraded to support 100 GbE. The good news here is that taps are already available for 100 GbE. NetOptics released the 100G Flex Tap back in September, and the company claims deployments are already underway in mobile telco, ISP and hosting provider shops. ONPATH (which is being acquired by NetScout) is also shipping 100G-rated, Layer 1 access devices in the form of the Optical Blade for its 3900 switch family, mostly to testing environments. Other Layer 1 vendors have this on their roadmaps, so expect more options in 2013.
The second element of deep packet inspection is distributing packets at 100 GbE. This is the job of network monitoring switches (a.k.a. network packet brokers), which adapt traffic from 100 GbE interfaces down to 10 GbE interfaces and manage the torrent of packets via load balancing, slicing, filtering and so on. The idea is to prevent analysis tools from being overwhelmed. While 100 GbE is on the roadmap of some vendors (some say they'll have product in early 2013), Endace was the first to productize and ship a solution. The EndaceAccess 100 provides Layer 1 adaptation and load balancing so that 10G-rated packet analysis tools can be deployed to monitor 100 GbE links. The solution is deployed and in production use by Endace customers today.
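The key requirement in that load balancing is flow affinity: every packet of a flow, in both directions, must reach the same downstream tool. A minimal sketch of the idea follows; the hash choice, field encoding and port count here are illustrative assumptions, not any vendor's actual algorithm:

```python
# Minimal sketch of packet-broker load balancing: hash each packet's flow
# 5-tuple to one of N 10 GbE tool ports, so all packets of a flow (in both
# directions) land on the same analysis tool. Illustrative only.

import zlib

def tool_port(src_ip: str, dst_ip: str, proto: int,
              src_port: int, dst_port: int, n_ports: int = 10) -> int:
    # Sort the two endpoints so both directions of a flow hash identically
    # (a "symmetric" hash), then map the digest onto the tool ports.
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}{b}{proto}".encode()
    return zlib.crc32(key) % n_ports

# Both directions of a TCP connection land on the same tool port:
fwd = tool_port("10.0.0.1", "10.0.0.2", 6, 12345, 443)
rev = tool_port("10.0.0.2", "10.0.0.1", 6, 443, 12345)
assert fwd == rev
print(f"flow assigned to tool port {fwd}")
```

With ten 10 GbE tool ports behind a 100 GbE link, a scheme like this spreads the aggregate load while keeping each flow intact for stateful analysis.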
Other vendors are working on 100 GbE network monitoring switching as well. For example, on November 1, 2012, Gigamon announced it would be demonstrating 100G aggregation and connections later in the month. Expect to see others making announcements late this year and early next.
For many, 100 GbE is far off in the future, so there is no immediate need to worry about how to manage it. But for those at this bleeding edge, core fault/availability tools are ready now, and recent introductions by companies such as NetOptics, ONPATH and Endace place packet-based security and performance monitoring within reach.
This was first published in November 2012