The need for more speed and capacity never goes away. Most enterprises are still rolling out 10 Gigabit Ethernet links in their networks, but already network architects and engineers are preparing for the future because data traffic never stops growing. Early adopters are testing and deploying the first generation of 40 Gigabit Ethernet (GbE) switches and routers to get a step ahead of this continuous onslaught of traffic.
“We are seeing incidental peaks of traffic nearing 10 Gbps,” said Jeroen van Ingen, a network engineer at the University of Twente in the Dutch city of Enschede. “Given the traffic growth over the years, we expect to need more capacity within 12 to 24 months. That’s why we decided that new core equipment should support 40 GbE.”
Van Ingen is not alone in predicting the need for 40 GbE and beyond. Research from the Dell’Oro Group forecasts the overall Layer 2 and Layer 3 Ethernet switch market will reach $25 billion in 2016, with 40 and 100 GbE technology approaching $3 billion in sales. Meanwhile, Infonetics Research shows sales of 40 GbE technology grew 50% in both Q1 and Q2 of this year. The expansion is largely driven by bigger data centers and the move to the cloud, according to researchers.
The university recently upgraded the core of the residential portion of its campus network with two of Cisco Systems’ Catalyst 6500-E switches. Today, those core switches are interconnected with multiple 10 GbE links, but traffic is spiking upward and the university will need 40 GbE interconnections soon. Van Ingen is testing a 40 GbE interconnect between the two Catalyst 6500s and plans to put two 40 GbE interconnections into production soon.
The demand for this bandwidth is driven by three factors: The university is increasing its use of cloud services and cloud storage, video traffic to the student dormitories is growing, and various open source projects that are mirrored on the university’s campus are extremely popular.
“If we don’t keep up with upgrades, the network experience will get worse and worse,” van Ingen said. “If the links actually get saturated before we upgrade, the increased packet loss will lead to error messages in their applications, actions that can’t be completed and lost productivity in general.”
Verisign Inc., the Reston, Va.-based Internet infrastructure provider, which manages two of the Internet’s 13 root servers, will likely transition to 40 GbE within two years, according to Mike Gibbs, Verisign’s network architect. Gibbs and his team beta-tested the 40 GbE capabilities on Cisco’s Catalyst 6500 Supervisor Engine 2T. “The first and most obvious place that I suspect a lot of people will initially install 40 gigabit is in their QA [quality assurance] and lab environments,” he said. “It’s more convenient to have testing tools and load generation tools and interconnects that are single links of large capacity, as opposed to many aggregated links.”
Verisign will soon have to replace aggregated 10 GbE links with 40 GbE links, particularly to network service devices such as load balancers and firewalls. Today, 40 GbE-capable load balancer and firewall appliances are scarce on the market, but the migration will happen. “As capacity for our infrastructure has to grow, as DNS grows, as our denial of service product grows, we’re going to need greater capacity between devices,” Gibbs said.
40 Gigabit Ethernet: When link aggregation is no longer sufficient
Link aggregation has been an essential tool for network engineers for years, whether aggregating multiple Gigabit Ethernet links or 10 GbE links. There always comes a time when aggregation is no longer practical, however, and a migration to the next generation of Ethernet is necessary. Enterprises whose businesses are based on network performance and reliability will reach a point where link aggregation breaks down, and they will have no choice but to upgrade to 40 GbE.
“There’s a limit to how many single 10-gigabit links we’ll be able to wrap together into an aggregate link,” Gibbs said. “With link aggregation, you have a hashing mechanism to send traffic across them as equally as possible. Not every vendor does that equally well. Because of that, when you start having vendors interoperate with each other, you can’t guarantee an equal spread across all those links. So, a single 40-gigabit link ends up being much more preferred.”
Link aggregation is based on the IEEE 802.1AX standard, but the hashing algorithm each vendor uses to load balance traffic across multiple links is unique, according to Dhritiman Dasgupta, senior director of product marketing at Juniper Networks.
“With 40 Gigabit Ethernet you get guaranteed and dedicated 40-gigabit bandwidth, so hashing algorithms don’t play a part anymore,” Dasgupta said. “[When] predictability is their business, [networking pros] are moving to 40 gigabit. They can’t take a chance on hashing algorithms playing a part in how long transactions take.”
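The hashing behavior Gibbs and Dasgupta describe can be sketched in a few lines. This is an illustrative model, not any vendor's actual algorithm: a LAG hashes flow header fields so every packet of a flow stays on one member link (preserving packet order), which means a few heavy flows can pile onto the same link while others sit idle.

```python
import hashlib

def pick_member_link(src_ip, dst_ip, src_port, dst_port, proto, num_links):
    """Map a flow's 5-tuple to one member link of a LAG (illustrative hash)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# Eight flows across a 4-link aggregate: the per-flow hash rarely yields a
# perfectly even [2, 2, 2, 2] spread, which is the imbalance the article
# describes. A single native 40 GbE link has no such hashing step.
flows = [("10.0.0.1", "10.0.1.1", 40000 + i, 443, 6) for i in range(8)]
counts = [0] * 4
for f in flows:
    counts[pick_member_link(*f, 4)] += 1
print(counts)
```

Because the hash is deterministic, one large flow can never be split across links; its throughput is capped at a single member link's speed, another reason aggregation eventually stops scaling.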
Transitioning to 40 Gigabit Ethernet
As is the case with every next-generation Ethernet standard, the transition from 10 GbE to 40 GbE will not be disruptive, Dasgupta said. From a protocol standpoint, TCP, UDP and Layer 2 and Layer 3 protocols remain unchanged.
“The things that need to change are the cabling and the optics at both ends,” he said.
Specifically, enterprises will need to deploy new multimode fiber, OM3 or OM4, for link lengths of up to 100 meters (OM4 extends reach to 150 meters), and duplex single-mode fiber (SMF) for links of up to 10 kilometers. New transceivers will also become necessary: SR4 for multimode or LR4 for single-mode.
Many enterprises are already thinking about the path to 40 GbE as they install 10 GbE today, and are rolling out ribbon fiber that can support the higher bandwidth.
While these enterprises install 40 GbE-ready infrastructure during a 10 GbE upgrade to save time and money, those savings will only happen if the networking team maintains the cabling plant properly. Signal loss tolerance for a 40 GbE network is extremely low in comparison to Gigabit Ethernet and 10 GbE, and it can be very challenging to achieve an acceptable rate of loss.
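A back-of-the-envelope loss-budget check shows why the tolerance is so tight. The figures below are illustrative, drawn from commonly cited guidance for 40GBASE-SR4 (roughly a 1.9 dB total channel insertion loss budget over 100 m of OM3, about 3.5 dB/km fiber attenuation at 850 nm, and a typical 0.5 dB per mated connector pair); actual budgets depend on the optics in use.

```python
# Illustrative 40GBASE-SR4 loss-budget check (assumed, typical figures).
FIBER_LOSS_DB_PER_KM = 3.5   # multimode attenuation at 850 nm
CONNECTOR_LOSS_DB = 0.5      # per mated connector pair
CHANNEL_BUDGET_DB = 1.9      # total channel insertion loss, 100 m OM3

def link_loss_db(length_m, connector_pairs):
    """Total estimated insertion loss for a multimode channel."""
    return (length_m / 1000.0) * FIBER_LOSS_DB_PER_KM + connector_pairs * CONNECTOR_LOSS_DB

# An 80 m run: fiber itself contributes only ~0.28 dB, so nearly the whole
# budget goes to connectors. A fourth patch-panel pass, or one dirty end
# face adding a few tenths of a dB, blows the budget.
for pairs in (2, 3, 4):
    loss = link_loss_db(80, pairs)
    print(pairs, round(loss, 2), "OK" if loss <= CHANNEL_BUDGET_DB else "over budget")
```

The same arithmetic with Gigabit Ethernet's far larger budget explains why sloppy cabling that "worked fine" at 1 Gbps fails at 40 Gbps.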
“People will connect fiber up and never have cleaned it and never inspected it,” said Ed Gastle, product line manager for JDS Uniphase Corp. (JDSU), a Milpitas, Calif.-based optical test and measurement company. “The [40 GbE] links will come up and they will work, but you’ll be taking errors because your data is getting corrupted.”
Poor fiber maintenance became an issue during Interop Las Vegas in 2011, he added. JDSU provided the testing and measurement for InteropNet that year, when the network was upgraded from Gigabit Ethernet to 10 GbE.
“They had a whole bunch of fiber out, but they hadn’t treated it very well,” he said. “They hadn’t been cleaning and inspecting it. And now they were trying to run 10 gigabit. Well, guess what? It doesn’t work. Now you have to go through a recovery effort to get your end faces to good enough loss [levels] to support 10 gigabit. We were able to get a few of them back by aggressively cleaning them, but it takes a lot of time. And if you have to re-terminate [the fiber], it takes a lot of money.”
The migration to 40 GbE is even more sensitive to poorly maintained fiber. Enterprises need to inspect and clean the end faces on those fiber links because the next generation of Ethernet will have a much lower loss budget.
“We hear more and more that as these speeds go up, systems are not coming up at all or are coming up and having problems,” Gastle said. “Then we start to troubleshoot and we find that the problem is caused by a poorly maintained end face. They’ve been connected up so many times that dirt and debris has been ground into it and you … have to re-terminate that fiber.”
Connecting dirty fiber to active equipment can also spread damage, he said. Anything short of pristine fiber can threaten sensitive optics, for instance.
Securing and monitoring 40 Gigabit Ethernet links
As network engineers deploy 40 GbE switches and routers, eventually they’ll need Layer 4-7 appliances, such as application delivery controllers, firewalls or WAN optimization controllers, that can handle 40 GbE traffic. They also need to evaluate how these devices monitor the network, whether for security or performance.
“It’s not the link speed that you have to be concerned about,” said Verisign’s Gibbs, who is evaluating 40 Gbps firewalls. “It’s whether or not the systems can actually do the processing of the traffic between them, especially firewalls, where every packet is going to be at least briefly reviewed at a minimum of Layer 3, if not Layer 4, all the way up to Layer 7. That takes a huge amount of computational power.”
Some organizations will want line rate visibility right at the 40 GbE link. The University of Texas at Austin has deployed high-availability (HA) pairs of Cisco Nexus 7000 switches in two data centers, with 40 GbE links between the HA pairs and between the data centers, according to Brent Boggan, regional sales manager at Ixia, a network monitoring vendor based in Calabasas, Calif., which has helped the school instrument the network for visibility.
Boggan said the university has 40 GbE links in each of two Austin data centers. Ixia put a physical, passive optical 40 Gbps tap on each of those connections. The monitoring ports coming out of those taps then flow into a network monitoring switch, the Anue Net Tool Optimizer (NTO) 5288, which can process up to 40 Gbps of data at full duplex.
The NTO 5288 then filters, replicates and load balances that 40 Gbps flow across multiple 10 Gbps tools, including multiple intrusion detection appliances such as Sourcefire’s Real-time Network Awareness tool.
Still, early adopters will have to deal with limitations of existing technology. Many organizations will rely on NetFlow and SNMP for network visibility because line-rate packet analysis will remain a challenge.
The University of Twente will use open source SNMP counters and NetFlow analyzers to maintain visibility on its 40 GbE links, van Ingen said. For packet visibility, he performs full packet captures on the 10 GbE links that are aggregated by the Catalyst 6500.
“If we really need to do full packet analysis on a 40 Gbps link, we’ll probably configure a high-end server with a multiport 10 GbE card and see how far that gets us,” he said.
Verisign relies on NetFlow and SNMP to monitor its 10 GbE network today and will probably continue that approach as it migrates to 40 GbE, Gibbs said.
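The SNMP approach van Ingen and Gibbs describe boils down to polling a 64-bit interface counter and averaging over the polling interval. The sketch below uses made-up counter values; a real poller would fetch `ifHCInOctets` with an SNMP library such as pysnmp, but the arithmetic is the same.

```python
COUNTER_MAX = 2**64  # ifHCInOctets is a 64-bit counter that wraps to zero

def utilization_pct(octets_t0, octets_t1, interval_s, link_bps):
    """Average link utilization between two counter readings."""
    delta = (octets_t1 - octets_t0) % COUNTER_MAX  # modulo handles wrap
    bits_per_second = delta * 8 / interval_s
    return 100.0 * bits_per_second / link_bps

# Two polls 30 seconds apart on a 40 Gbps link (values are illustrative):
print(utilization_pct(10_000_000_000, 40_000_000_000, 30, 40e9))  # → 20.0
```

The limitation Gibbs points out falls directly out of this math: a 30-second average of 20% can hide sub-second bursts that briefly saturate the link, which is why counter-based monitoring needs to be backed by downstream inspection devices.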
But the process is imperfect, Gibbs acknowledged: some events will always slip through. These protocols won’t always reveal short bursts of traffic, and on the security side, something malicious can slip past very easily.
“You’re looking at patterns to figure things out instead of looking for something inside of a packet,” he said. “It also means any [probing attack] with a single or couple of packets you might miss because it’s outside of your sampling point. It’s a known risk everyone has to deal with, but it is the side effect of more bandwidth and more speed.”
Mitigating that risk involves a multi-tiered solution for monitoring and security, Gibbs said. Network engineers need to continue watching what’s going on in NetFlow and other sampling technologies, but then have another tier of defense with inspection devices downstream in the network.
“Don’t just look at these [core] routers,” he said. “If you miss it at the core routers, you should catch it in a net further down in the chain.”