Network management and monitoring: The evolution of network control
I recently hosted a webcast on managing data center networks with an expert from SolarWinds and a Cisco product manager. We had a good crowd, and there were a ton of questions, including: “What are the best ways to monitor virtual SANs?” “How does Cisco’s UCS fit into my data center strategy?” and “Can I use public cloud services with my data center disaster recovery planning?” The questions left me with one major thought: data centers are very, very different today than they used to be.
For the most part, the difference can be summed up in one word – density. Data centers are denser than they’ve ever been before. Let’s take a simple, single, 7-foot-tall, 19-inch-wide rack, for instance. Traditionally, we’d probably put three or four servers in that rack. Those servers would have local storage within their chassis, and they’d have one (at most two) 1 Gbps connection per server. At the bottom of the rack, you might find an uninterruptible power supply (UPS).
Now we would more likely replace those three or four servers with high-performance VMware servers. In that scenario, the storage would no longer be in the chassis; it would sit on a SAN, reached across either a Fibre Channel or high-speed Ethernet connection. The servers themselves would probably be smaller, so let’s say eight in the rack this time. Each of those eight servers would be running 15 virtual servers, so now you’d have 120 servers in that rack -- and at some point, you’d realize that you still have only a total of 4-8 Gbps of bandwidth going to that rack for all of that computing power (not counting the SAN connections). Likewise, while you wouldn’t be using 120 servers’ worth of electricity, the power requirements would certainly have gone up from when you had only four servers hanging there.
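To make that density crunch concrete, here is a quick back-of-the-envelope sketch of the per-VM bandwidth in the scenario above. The figures (eight hosts, 15 VMs per host, a 4-8 Gbps rack uplink) come straight from the example; the function name and the even-sharing assumption are mine, purely for illustration.

```python
def per_vm_bandwidth_mbps(hosts, vms_per_host, rack_uplink_gbps):
    """Average bandwidth per VM, assuming the rack uplink is shared evenly."""
    total_vms = hosts * vms_per_host
    return (rack_uplink_gbps * 1000) / total_vms  # Gbps -> Mbps

total_vms = 8 * 15                             # 120 VMs in the rack
low = per_vm_bandwidth_mbps(8, 15, 4)          # worst case: 4 Gbps uplink
high = per_vm_bandwidth_mbps(8, 15, 8)         # best case: 8 Gbps uplink
print(f"{total_vms} VMs share the uplink: {low:.1f}-{high:.1f} Mbps each")
```

In other words, each virtual server gets only a few dozen megabits of uplink on average -- a far cry from the dedicated gigabit link each physical server used to enjoy.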
Now that we have your head spinning, let’s add a few more wrinkles. We’re not only virtualizing application servers, now we’re virtualizing database servers, desktops, and network devices. Virtual switches, firewalls, load balancers and network analyzers are commonplace. In some cases we’re even virtualizing the four walls around us by putting things in the cloud.
So, this ain’t your daddy’s data center. But it’s way cooler – if you know how to manage it. Here are a few data center network management tips:
1. Leverage purpose-built solutions for data center computing needs. The days of installing VMware on one of our old Exchange servers and deploying app servers as VMs on it are over. Best-in-class solutions like Cisco’s UCS were designed specifically for this purpose and offer several advantages in terms of performance, capacity, and efficiency.
2. Look out for game-changing features. VMware’s vMotion is just plain cool. You’re going to want to use it, and you may even want to “motion” VMs between data centers. To make that work, you’re going to have to do some careful planning.
3. Start getting cozy with the cloud. I’m not making a business recommendation here – that’s something every organization has to decide on a case-by-case basis. However, as a technologist, you need to cozy up to the cloud and ensure that your skill set includes some cloud savvy.
Eat your heart out, dad. My data center is cooler than your data center…
About the author:
Josh Stephens is head geek and VP of technology at SolarWinds.