Power and cooling considerations for data center network design

The network architect involved in data center design needs to be aware of issues that factor into the data center beyond the network itself. This tip introduces the physical facilities considerations of data center design, such as power consumption and heating/cooling, which can have a strong impact on your data center network design.

If you work as a network architect for long enough, sooner or later you may be asked to help design a new data center network. Although this probably doesn't sound like a big deal at first, I can assure you that it is not a task to be taken lightly. The way your data center is designed can directly limit the scale, topology or even functionality of your network. Physical facilities considerations, such as power consumption and heating and cooling, will have a strong impact on your network design.

Electrical capacity and the data center network

Your data center network cannot function without electricity. When you combine that with the fact that electrical power is one of a data center's primary ongoing expenses, it makes sense to take a good look at the data center's electrical capacity up front.

Many years ago, I was doing some after-hours consulting projects. I received a phone call one day from a small company that had recently moved into a new building. They were having some problems with a new UPS they were using.

I had never been to the company's new facility, and I hadn't been involved in the move, so I didn't really know what to expect. When I got there, the server room looked something like the scene from the movie "A Christmas Story" where the father had a zillion plugs feeding into a single outlet. Whoever had set up the room had daisy-chained a bunch of power strips. This meant that every server in the room was plugged into a single 20-amp outlet.

Miraculously, the outlet was able to sustain the workload, but the company bought a single UPS and connected it between the wall and the first power strip in the chain. They couldn't understand why this low-end, consumer-grade UPS could not support all those computers and monitors.

The point is that if you are setting up a data center, it is important to figure out what your power consumption is going to be ahead of time so you can make sure the facility has adequate electricity.

Each piece of equipment should have a label that will help you to estimate its power consumption. Once you have made a list of each machine's power consumption, I recommend tacking on 20% as a safety margin. Of course, in a startup data center you may not yet have actually purchased or received the equipment. If that is the case, then I recommend visiting the manufacturer's website for each piece of equipment you are considering. Most hardware manufacturers will post the equipment specs on their website, and this typically includes information about power consumption.

You also have to account for other sources of power consumption. For instance, larger data centers generally have their own cooling systems, which of course consume electricity, as do the lights. You should also plan for future growth by making sure there is plenty of power left over for servers that you might purchase in the future.
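
To make that arithmetic concrete, here is a minimal sketch of such a power budget in Python. Every figure in it (the equipment list, the wattages, the cooling ratio, the lighting load and the growth allowance) is a hypothetical placeholder; substitute the numbers from your own equipment labels or the manufacturers' spec sheets.

```python
# Rough power-budget sketch. All wattages and ratios are assumed,
# illustrative values, not recommendations for any specific hardware.

# Hypothetical equipment list: (description, quantity, watts per unit)
equipment = [
    ("rack server",        10, 450),
    ("top-of-rack switch",   2, 150),
    ("storage array",        1, 800),
]

it_load_w = sum(qty * watts for _, qty, watts in equipment)

# Non-IT loads and headroom, all assumed for illustration.
cooling_w     = it_load_w * 0.5   # assume cooling draws ~50% of the IT load
lighting_w    = 500               # assumed lighting load
growth_factor = 1.25              # leave ~25% of capacity for future servers
safety_margin = 1.20              # the 20% safety margin described above

total_w = (it_load_w + cooling_w + lighting_w) * growth_factor * safety_margin

print(f"IT load:         {it_load_w:,.0f} W")
print(f"Estimated total: {total_w:,.0f} W ({total_w / 1000:.1f} kW)")
```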

If you are moving into an existing facility, then I strongly recommend having an electrician inspect the portion of the facility that you plan to use as a data center.

Typically, you are going to need several dedicated electrical circuits in your data center (more or less, depending on the size of the data center). An electrician can verify that the existing circuit breaker box has the capacity to add any extra dedicated circuits that may be necessary.
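
As a rough illustration of why those dedicated circuits matter, the sketch below estimates how many 20-amp circuits a given load would need. It assumes 120-volt circuits and the common rule of thumb that a breaker should not carry more than 80% of its rating as a continuous load; both are assumptions for illustration, and an electrician should confirm the real requirements for your facility.

```python
import math

def circuits_needed(total_watts, volts=120.0, breaker_amps=20.0, max_load=0.80):
    """Estimate how many dedicated circuits a continuous load requires.

    Assumes each breaker should carry no more than max_load (80% here)
    of its rated amperage when the load runs continuously.
    """
    usable_amps_per_circuit = breaker_amps * max_load   # e.g. 16 A usable
    total_amps = total_watts / volts                    # I = P / V
    return math.ceil(total_amps / usable_amps_per_circuit)

# Example: a hypothetical 10 kW load
print(circuits_needed(10_000))   # -> 6 dedicated 20 A / 120 V circuits
```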

Another reason why such an inspection is important is that you need to find out whether the outlets that are currently in the area are on the same circuit as anything else. For example, you may find that the planned data center is on the same circuit as the entire next room. I ran into a situation about 10 years ago in which a nursing home's servers kept going offline for no apparent reason. Examining the logs, I noticed a trend in the timing of the failures, so I stuck around late at night to see what was happening. It turned out that the servers were plugged into an outlet that was wired to a light switch. When the last person left, he would turn off the lights, and the servers would go down.

Granted, this was a small facility without a dedicated data center. You would never expect that sort of thing to happen in a "real" data center. Even so, this is a perfect example of why an electrical inspection is so important. Until you get the facility inspected, you have absolutely no idea how it is wired. That outlet that you plan on plugging a server into may be connected to the light switch, or it may be on the same circuit as the copy machine in the next room. You just don't know until you have an inspection done.

When you add up the power consumption needs of your equipment and factor in a safety margin and the needs of future servers, you may be shocked at how much power you are going to need. There is a bit of good news, though: Power consumption in the data center is actually on the decline.

Server virtualization plays a big part in this decline. In my own data center, for instance, I have consolidated 12 servers into two. That really cut down on power consumption, both in the electricity the servers draw and in the power my air conditioners use. Fewer servers produce less heat, which means the air conditioners don't have to work as hard.
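
As a back-of-the-envelope illustration of why consolidation helps twice, once at the server and once at the air conditioner, here is a small sketch. The wattages and the cooling ratio are assumed for illustration; they are not measured figures from my own data center.

```python
# Hypothetical consolidation savings. All figures are assumed.
old_servers, old_watts_each = 12, 350
new_servers, new_watts_each = 2, 500   # virtualization hosts tend to be bigger boxes

old_it = old_servers * old_watts_each
new_it = new_servers * new_watts_each

# Every watt a server draws ends up as heat the air conditioner must remove;
# assume the cooling system spends roughly 0.5 W for every watt of IT load.
cooling_ratio = 0.5

old_total = old_it * (1 + cooling_ratio)
new_total = new_it * (1 + cooling_ratio)

print(f"Before: {old_total:,.0f} W   After: {new_total:,.0f} W   "
      f"Savings: {100 * (1 - new_total / old_total):.0f}%")
```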

Another reason why power consumption is on the decline is that the equipment itself is becoming more efficient. This is especially true of hard drives. As you gradually replace older servers with new ones, you will generally find that your power consumption goes down.

Cooling the data center network

Excessive heat can dramatically shorten the lifespan of servers and other types of networking equipment. Network administrators are usually under a tremendous amount of pressure to keep the network running reliably and to make the most of the IT budget. Having equipment fail because it overheats can result in downtime and expensive repairs.

It actually takes quite a bit more cooling power than you might think to keep a data center cool. As an extreme example, when I first started doing freelance writing full time, I kept a rack of 14 desktop machines that I used as servers in a large room. The heat those machines put out was unbearable, even with the air conditioning running at full blast. The air conditioner simply wasn't designed to keep up with that kind of heat.

Modern servers don't give off nearly as much heat as the machines I was using back then, but temperature control can still be an issue, especially when you have a lot of different pieces of equipment in one room. My recommendation would be to contact a heating and air conditioning specialist who routinely deals with data centers. A knowledgeable person should be able to help you figure out how many BTUs of cooling power it is going to take to keep your data center at the desired temperature.
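
One number the specialist will ask about is the heat load in BTUs per hour. Practically every watt of electricity your equipment draws ends up as heat, and one watt of heat works out to roughly 3.41 BTU per hour. The short sketch below converts an assumed electrical load into BTU/hr and into "tons" of cooling, the 12,000 BTU/hr unit that air conditioning capacity is often quoted in.

```python
WATTS_TO_BTU_PER_HR = 3.412      # 1 W of heat is roughly 3.412 BTU/hr
BTU_PER_HR_PER_TON  = 12_000     # 1 "ton" of cooling = 12,000 BTU/hr

def cooling_requirement(it_load_watts):
    """Convert an electrical load in watts into BTU/hr and tons of cooling.

    Assumes essentially all of the electrical load becomes heat that the
    air-conditioning system has to remove.
    """
    btu_per_hr = it_load_watts * WATTS_TO_BTU_PER_HR
    tons = btu_per_hr / BTU_PER_HR_PER_TON
    return btu_per_hr, tons

# Example: an assumed 8 kW of equipment
btu, tons = cooling_requirement(8_000)
print(f"{btu:,.0f} BTU/hr  (~{tons:.1f} tons of cooling)")
```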

Another thing that you can do to help with heat dissipation is to arrange your data center in a way that helps cool air to flow into the servers and other network components. One common approach is to use a design technique called hot row/cool row or hot/cold aisle for your data center floor plan.

The basic idea behind this technique is that any piece of hardware that uses cooling fans has an air intake and an air exhaust port. The cooler the air entering the intake port, the cooler the server will run. The hot row/cool row design physically arranges networking components with regard to the location of their air intakes and exhaust ports.

Assume, for instance, that all of your servers have an air intake port on the front, and an exhaust port on the back. If you were to have multiple rows of servers arranged so that all of the servers were facing the same direction, then the front row of servers would be sucking in cool air and expelling hot air. The second row of servers would end up sucking in air that had already been heated by the first row of servers, and then expelling even hotter air.

If the hot row/cool row design were applied to this situation, then server racks would be arranged so that the servers were back to back, with the exhaust ports facing each other. That way, the front side of the servers is left exposed to cooler air, rather than air that has already been heated by another server.

Most larger organizations pump cool air through a raised floor in the data center. Strategically placed perforated tiles allow the chilled air to escape in certain areas. I recommend placing these tiles in front of each server rack. That way, there is a source of cool air directly in front of the server's intake port, which will help the servers to run much cooler.

It is important to remember that some network equipment (especially rack-mounted switches) is designed so that air flows in one side and out the other rather than following the traditional front-to-back airflow design. If such equipment is in use, it is important to arrange the racks so that you can avoid sucking the heated exhaust air from one device into another.

Some organizations get around this problem by staggering the racks, while others simply turn the racks at 90-degree angles. Whichever approach you use, keep in mind that you need to leave enough room around the rack to allow you to perform any necessary maintenance on the units within it. In the case of switches, you must also ensure that the layout that you use does not interfere with your cable management technique.

Power and cooling are only some of the issues that you need to consider when designing your data center network. Another important consideration is how your floor space will be used. In Part 2, I will show you some techniques for saving both space and energy.

About the author:
Brien M. Posey, MCSE, is a Microsoft Most Valuable Professional for his work with Windows 2000 Server and IIS. He has served as CIO for a nationwide chain of hospitals and was once in charge of IT security for Fort Knox. As a freelance technical writer, Brien has written for Microsoft, CNET, ZDNet, TechTarget, MSD2D, Relevant Technologies and other technology companies. You can visit his personal website at www.brienposey.com.


This was first published in February 2009
