Network upgrade yields facilities automation, active-active data center

With an HP-based network upgrade, the University of North Florida can now support facilities automation and an active-active data center.

With a network upgrade powered by HP Networking, the University of North Florida has enabled networked facilities automation and gigabit connectivity to the desktop. It has also paved the way for an active-active data center configuration.

Steve Lyon, assistant director for network engineering at the university, set out to design a flexible architecture that would do Layer 2 switching at the edge by default, with the option of routing at the edge in areas where the university’s colleges wanted a more distributed local network. He also wanted to provide Gigabit Ethernet (GbE) to the edge with 10 GbE at the core.

The university evaluated multiple vendors for its network upgrade, including Cisco Systems, Juniper Networks and Force10 Networks. Ultimately it chose HP Networking, partly due to the savings the vendor could offer -- lifetime warranties on stackable access-layer switches promised more than $100,000 in savings on product maintenance costs.

Extreme makeover: Network upgrade means no more spanning tree

Lyon’s core-to-edge network upgrade started with the installation of 395 stackable E5500 edge switches across the university’s 1,500-acre campus. These stacked closet switches backhaul over air-blown fiber optic cable to the central data center, where traffic is aggregated by a pair of A5820 24-port 10 GbE switches connected via HP’s network virtualization technology, Intelligent Resilient Framework (IRF). By using link aggregation protocols at the campus edge and IRF at the aggregation and core layers, Lyon has been able to eliminate the Spanning Tree Protocol (STP) from his network.
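IRF makes the paired aggregation switches appear to each closet stack as a single device, so a stack’s two diversely pathed uplinks can be bundled into one Link Aggregation Control Protocol (LACP) group rather than having one leg blocked by spanning tree. The sketch below, using the netmiko library’s hp_comware driver, shows roughly what that edge-side change could look like; the addresses, credentials, interface names and exact Comware syntax are illustrative assumptions, not the university’s actual configuration.

```python
# Hedged sketch: bundle a closet stack's dual uplinks into one dynamic (LACP)
# aggregation group and disable spanning tree, assuming a Comware 5-era E5500
# reachable via netmiko's "hp_comware" device type. All names, addresses and
# credentials below are hypothetical.
from netmiko import ConnectHandler

EDGE_STACK = {
    "device_type": "hp_comware",
    "host": "10.1.10.2",      # hypothetical stack management address
    "username": "netadmin",
    "password": "secret",
}

# Both uplinks terminate on the same IRF-virtualized aggregation pair, which
# LACP sees as one partner switch -- no loop, hence no need for STP.
COMMANDS = [
    "interface Bridge-Aggregation 1",
    "link-aggregation mode dynamic",      # run LACP on the bundle
    "quit",
    "interface GigabitEthernet 1/0/49",   # uplink from stack member 1
    "port link-aggregation group 1",
    "quit",
    "interface GigabitEthernet 2/0/49",   # uplink from stack member 2
    "port link-aggregation group 1",
    "quit",
    "undo stp enable",                    # syntax varies by Comware release
]

with ConnectHandler(**EDGE_STACK) as conn:
    print(conn.send_config_set(COMMANDS))
```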

“The question is,” Lyon asked, “do you want to live with spanning tree when you have convergence and network topology changes going on?

“There’s a level of instability on the network that IRF, Juniper’s Virtual Chassis technology or Cisco’s vPC and VSS eliminate. When these virtualized chassis technologies came along, it just took one of the major problems of Layer 2 off the table. All those technologies attempt to allow diversely pathed connections to go to two different boxes which are acting like one physical box. To us, IRF is everything, in the sense that we don’t have to deal with spanning tree.”

The university’s concurrent data center network upgrade consisted of two IRF-linked A12508 core chassis and three pairs of IRF-linked A5820 switches -- including the pair that aggregates the campus edge. The other two pairs of A5820 switches provide a mix of Gigabit Ethernet and 10 GbE access to racks of Dell servers and what Lyon describes as “high-volume hosts” -- servers that host high-transaction systems such as multimedia pedagogical applications and on-demand desktop reimaging.

The network upgrade allowed Lyon to repurpose his legacy core Cisco Catalyst 6506 chassis as service appliances running firewalls and Wireless Services Module (WiSM) blades for his Cisco-based 802.11n wireless LAN. The upgrade to two A12508 chassis in IRF mode also introduced a level of resiliency the university had never had before; each Catalyst 6506 had been a single point of failure on Lyon’s network.

“Five years ago our core was a series of 6506s,” Lyon said. “We had one in the [data center] for servers, one for LAN routing and one for the Internet. But they were also very expensive for my university, so we suffered in terms of redundancy.

“However, as we have grown as a university [from 4,000 students ten years ago to 17,000 today], it’s become critical that things stay up seven-by-twenty-four. So the sacrifices we made of redundancy before in our Cisco environment we remedied with the purchase of the HP environment.”

Redundancy crucial for networked facilities automation

Redundancy and high availability became even more critical as the university implemented facilities automation on its HP network. The university has connected and automated energy management, plumbing and sprinkler systems, surveillance cameras and other systems across its network, Lyon said.

“So if we have a flow control valve break or a sprinkler zone go out, we know about it,” he said. “It’s not [just] that these things break. They break all the time. What costs money is when you don’t know about it because you weren’t notified. That requirement kind of snuck up on us, but we’re saving the university a ton of money by keeping these things up and running.”
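The notification piece Lyon describes is conventionally handled with SNMP traps: a controller pushes an alert to the management station the moment a monitored point fails, rather than waiting to be discovered. As a minimal, generic illustration of that pattern -- not the university’s IMC configuration -- the following pysnmp listener prints whatever traps arrive on the standard port:

```python
# Minimal sketch of an SNMP trap listener, the mechanism a management station
# uses to hear about failures (a broken flow-control valve, a dead sprinkler
# zone) from networked facilities controllers. Community string, port and
# addresses are the conventional defaults, not the university's settings.
from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity.rfc3413 import ntfrcv

snmp_engine = engine.SnmpEngine()

# Listen for SNMPv1/v2c traps on the standard trap port.
config.addTransport(
    snmp_engine,
    udp.domainName + (1,),
    udp.UdpTransport().openServerMode(("0.0.0.0", 162)),
)
config.addV1System(snmp_engine, "facilities-area", "public")

def on_trap(snmp_engine, state_ref, context_engine_id, context_name,
            var_binds, cb_ctx):
    # A production system would map these OIDs to alarm types and page staff;
    # here we simply log what the device reported.
    for oid, value in var_binds:
        print(f"{oid.prettyPrint()} = {value.prettyPrint()}")

ntfrcv.NotificationReceiver(snmp_engine, on_trap)
snmp_engine.transportDispatcher.jobStarted(1)
try:
    snmp_engine.transportDispatcher.runDispatcher()
finally:
    snmp_engine.transportDispatcher.closeDispatcher()
```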

Splitting IRF pairs across an active-active data center configuration

The next step is for the university to establish a new primary data center in a recently purchased office building located a half-mile from its existing data center, Lyon said. He intends to deploy the two data centers in active-active mode by splitting his IRF-paired A12508 and A5820 switches across the two sites.

“I ran a 144-strand single mode bundle between my building and that [other] building because it had a commercial data center in it,” Lyon said. “Now I’m really going to enjoy the benefits of IRF because over the holidays I’m going to split the network in half. That means that any [server] rack will be dual attached, one to the IRF side in the [old] building and [the other to] the IRF side in the [new] building. I’m shooting for an active-active data center environment. It’s not going to start out that way because I don’t have enough storage facilities … but I’m going to have two functioning data centers where the networking hardware is IRFed across a half-mile single-mode link.”

Managing a multivendor network with HP IMC

Lyon manages his entire network via HP’s Intelligent Management Center (IMC), a network management system known for its ability to manage third-party devices, including thousands of Cisco products. Lyon uses it to manage his HP network as well as his legacy Cisco switches and other IP-enabled devices within his infrastructure.

“IMC has a modular MIB [Management Information Base] organizational structure, not hierarchical,” Lyon said. “So you can pull in other MIBs.”

This modularity has allowed Lyon to improve monitoring of battery levels in the uninterruptible power supply (UPS) units in his wiring closets.

“You can put Ethernet cards into APC UPSes in the closets,” he said. “The worst thing about having UPSes out in the closets is that the batteries die. If you don’t have SNMP running on those devices, you’re not getting feedback from them on how those batteries are doing. If you don’t have that information, then when you have buildings go down with an outage, [the wiring closets] don’t recover. I can take MIBs out of the APC UPS environment and report battery utilization back to IMC.”
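As a concrete illustration of what Lyon describes -- pulling a vendor MIB object into the monitoring system -- the sketch below polls an APC management card for battery charge using pysnmp. It assumes the widely published PowerNet MIB object upsAdvBatteryCapacity (OID 1.3.6.1.4.1.318.1.1.1.2.2.1.0); the address, community string and OID should be verified against the card’s firmware rather than taken as the university’s setup.

```python
# Hedged sketch: read battery charge (percent) from an APC UPS network
# management card over SNMPv2c. The OID is the PowerNet MIB's
# upsAdvBatteryCapacity as commonly documented; verify before relying on it.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

BATTERY_CAPACITY_OID = "1.3.6.1.4.1.318.1.1.1.2.2.1.0"  # assumed APC OID

def poll_ups_battery(host: str, community: str = "public") -> int:
    """Return the UPS battery charge as a percentage."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),           # SNMPv2c
            UdpTransportTarget((host, 161), timeout=2, retries=1),
            ContextData(),
            ObjectType(ObjectIdentity(BATTERY_CAPACITY_OID)),
        )
    )
    if error_indication or error_status:
        raise RuntimeError(f"SNMP poll failed: {error_indication or error_status}")
    return int(var_binds[0][1])

if __name__ == "__main__":
    # Hypothetical closet UPS address; substitute a real management card IP.
    print(poll_ups_battery("10.1.50.21"))
```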

Let us know what you think about the story; email: Shamus McGillicuddy, News Director.