InteropNet blends cutting-edge technologies with network needs

To meet the needs of Interop Las Vegas 2011 attendees, speakers and vendors, InteropNet must strike a balance between practicality and the year's cutting-edge technologies.

[Image: Glenn Evans, Lead Architect for Interop]

Building a network for one of the world's largest tech conferences has its demands. Glenn Evans, lead architect of the vendor-sponsored Interop network, InteropNet, must showcase the year's cutting-edge technologies while supporting the needs of more than 13,000 attendees, 350 vendors and 250 speakers. SearchNetworking.com spoke with Evans about the technologies InteropNet is embracing in 2011 and how he manages the traffic demands of conference end users. Find out how InteropNet addressed challenges with cloud computing and a new DNS in part 1 of this Q&A, or continue reading below to learn how InteropNet blends the cutting-edge technologies of today with practicality.

What types of cutting-edge technologies are you using this year?

Glenn Evans: Being a production network, we really try not to operate right at the cutting edge. Because our aim is to provide guaranteed service, we try to use some tried-and-true methods.

One of the things we’re using this year is 10 Gigabit Ethernet (10 GbE), which has been around for a while, but our implementation aggregates multiple 10 Gbps links into trunks that total up to 160 Gbps.

IPv6 is also a big push for us this year. One of the takeaways we’d like our attendees to have is that IPv6 is something everybody should be looking at adopting. We gave back a large slice of IPv4 address space last year with the express intention of migrating the InteropNet to IPv6. Now we're fully IPv6-aware and compliant, operationally.
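As a side note for readers, here is a minimal sketch, using only Python's standard library, of the kind of basic dual-stack check this implies: it tests whether a host (the hypothetical www.example.com below) answers over both IPv4 and IPv6. It is an illustration only, not part of the InteropNet tooling.

```python
import socket

def reachable(host, family, port=80, timeout=3.0):
    """Return True if a TCP connection to host succeeds for the given address family."""
    try:
        addrs = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False                      # no address of that family published for the host
    for fam, socktype, proto, _name, sockaddr in addrs:
        try:
            with socket.socket(fam, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True
        except OSError:
            continue                      # try the next resolved address, if any
    return False

host = "www.example.com"                  # hypothetical host to test
print("Reachable over IPv4:", reachable(host, socket.AF_INET))
print("Reachable over IPv6:", reachable(host, socket.AF_INET6))
```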

What cutting-edge technologies did you forgo in order to keep the network as operational as possible?

Evans: We would have liked to have done 100 GbE, but it’s just not ready for prime time yet; it’s still in the experimental phases. The overall network design is a collapsed-core type of [network] … but even within the constraints of being conservative, we’re still able to innovate and show new products and new technologies from some of the vendors.

HP is an example: It has its IRF system in place where essentially all the HP switches and routers are seen as one logical unit from a management perspective. Cisco is bringing its CleanAir technology to help better manage the wireless experience for the attendees in their particular area. Xirrus is providing wireless services in other parts of the event area. Also, from a management perspective, rather than having a box just alert us when something’s wrong, we’re using some of the flow-based technologies to get better statistics and better reporting out of all our equipment.

What management tools do you rely on to make sure InteropNet services stay up and running?

Evans: [We use] Syslog for messages, SNMP for polling and tracking, and sFlow and NetFlow for operational statistics. But we also have packet analysis, so if something does go wrong, we can dive a little bit deeper into the systems and try and work out what’s wrong. We’re also focusing on power monitoring this time around. We have constant monitoring of our power usage, down to the equipment level. [This way, we know] how much power each piece of equipment in the network is drawing, and then we associate that with bandwidth and traffic statistics, so we can actually get a feel for how much power we’re using to push a certain amount of traffic.
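To illustrate the watts-per-traffic reporting Evans describes, here is a minimal sketch; the device names and sample values are hypothetical, not InteropNet data, and the real figures would come from the power monitoring gear and from SNMP, sFlow or NetFlow counters.

```python
# Hypothetical per-device samples over one polling interval.
samples = [
    {"device": "core-sw-1", "watts": 1450.0, "gbps": 42.3},
    {"device": "edge-sw-7", "watts": 310.0,  "gbps": 3.8},
    {"device": "wlan-ctrl", "watts": 220.0,  "gbps": 1.1},
]

def watts_per_gbps(sample):
    """Power efficiency: watts drawn per gigabit per second of traffic pushed."""
    return sample["watts"] / sample["gbps"] if sample["gbps"] else float("inf")

for s in sorted(samples, key=watts_per_gbps):
    print(f'{s["device"]:>10}: {watts_per_gbps(s):6.1f} W/Gbps '
          f'({s["watts"]:.0f} W for {s["gbps"]:.1f} Gbps)')
```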

What are you using for endpoint power management?

Evans: We have a number of tools for that. Our management platforms all take those statistics, and we’re able to generate reports out of all our management platforms. We’re using two major management platforms this year: one from ScienceLogic and one from ScriptLogic, or Quest Software.

How does network traffic compare during the show as opposed to before or after the show?

Evans: Before the show ... [traffic] is actually pretty minor with our particular networks. There’s a certain amount of noise generated from people doing security scanning and some other funny things out on the Internet. Once we bring the show network up, we obviously start to generate traffic ourselves, rather than receive traffic. We have 2 Gbps pipes out to the Internet. In general, we see probably 3-4 Mbps of traffic with peaks of up to about 600 or 700 Mbps.
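For a rough sense of the headroom behind those numbers, here is a trivial back-of-the-envelope calculation, assuming the quoted figure means roughly 2 Gbps of total Internet capacity:

```python
# Figures quoted above (assumption: "2 Gbps pipes" ~ 2 Gbps of total Internet capacity).
capacity_mbps = 2 * 1000
typical_mbps = 4
peak_mbps = 700

print(f"Typical utilization: {typical_mbps / capacity_mbps:.1%}")  # ~0.2%
print(f"Peak utilization:    {peak_mbps / capacity_mbps:.0%}")     # ~35%
```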

How do you plan to manage those traffic peaks?

Evans: As a vendor-sponsored network … we haven’t really had to focus on bandwidth management. But I think that as time goes on and people start to use the network for more intensive applications, like video and so on, we’re going to have to start looking at bandwidth management and controlling the flow of traffic through the network.
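Bandwidth management of the kind Evans anticipates is often described in terms of a token bucket: traffic may burst up to a fixed allowance and is otherwise held to a sustained rate. The sketch below is a conceptual Python illustration of that idea, not a description of how InteropNet would implement it; the rates and sizes are made up.

```python
import time

class TokenBucket:
    """Conceptual token bucket: allow bursts up to `burst_bytes`, refill tokens
    at `rate_bytes_per_s`, and refuse (or queue) traffic that exceeds the budget."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last packet, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

# Shape a hypothetical flow to ~1 MB/s sustained with a 64 KB burst allowance.
bucket = TokenBucket(rate_bytes_per_s=1_000_000, burst_bytes=64_000)
for i in range(5):
    verdict = "sent" if bucket.allow(30_000) else "held back"
    print(f"packet {i}: {verdict}")
    time.sleep(0.01)
```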

How will you prepare InteropNet for the cutting-edge technologies of next year?

Evans: There's a general expectation within the industry that video and mobile content will start to play a bigger part in big networks, in general. And the traffic going across your network is going to be less text-based data; it's going to be more multimedia or video-based. That's going to change the way we think about and design networks. This is something we'll be starting to look at for the next couple of years.

In the last five or six years, there has been a paradigm shift in the way networks are designed and built and how they interact with the applications that run on top of them. Historically, the network drove what happened at the application level. These days, we’re seeing the application side driving what happens at the network level. The Facebooks of the world and the Twitters and YouTubes, etc., are actually determining how we think of and design our networks. The network is becoming more of a utility, like power and phones, where it just has to work, and [we have to figure out] how to make the network perform better or become a utility-type service for the applications. The network enables all these media applications to operate on top of it.

We’re starting to do this with the InteropNet OpenFlow lab. OpenFlow is a technology developed in the research space to let researchers reuse equipment and, rather than building dedicated research networks, slice out portions of production networks for research purposes. OpenFlow centralizes control away from the transit network, and it lends itself to the cloud-based data center and cloud-based application delivery that [allows] you [to] put your application where your users are. If your users move or migrate around the world, you can actually move your applications to where your users are, and that's where the innovation is happening in the networking space.
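As a rough illustration of the split OpenFlow introduces between a centralized controller and the switches' flow tables, here is a small conceptual Python sketch. It models the idea only (reactive rule installation, with a per-VLAN "slice" policy held by the controller); it is not the OpenFlow protocol or any real controller API, and all names and values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Match:
    """A simplified flow match: which traffic a rule applies to."""
    dst_ip: str
    vlan: int                                  # a VLAN tag stands in for a network "slice"

@dataclass
class Switch:
    name: str
    flow_table: dict = field(default_factory=dict)   # Match -> action string

    def handle_packet(self, match, controller):
        """Forward using an installed rule, or ask the controller (the OpenFlow idea)."""
        if match not in self.flow_table:
            self.flow_table[match] = controller.decide(self, match)
        return self.flow_table[match]

class Controller:
    """Centralized control plane: policy lives here, not in each switch."""
    def __init__(self, slice_policy):
        self.slice_policy = slice_policy       # vlan -> action

    def decide(self, switch, match):
        action = self.slice_policy.get(match.vlan, "drop")
        print(f"controller: install on {switch.name}: {match} -> {action}")
        return action

# A production slice and a research slice share the same switch.
controller = Controller({100: "forward:uplink", 200: "forward:research-lab"})
sw = Switch("core-sw-1")

print(sw.handle_packet(Match("203.0.113.10", vlan=100), controller))  # production traffic
print(sw.handle_packet(Match("198.51.100.7", vlan=200), controller))  # research slice
print(sw.handle_packet(Match("203.0.113.10", vlan=100), controller))  # cached rule: no controller hop
```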

Learn how InteropNet could deploy IPv6 or view our Interop Las Vegas 2011 special news coverage
