First and foremost, network managers need to understand more than server CPU utilization before starting virtualization plans.
One of the best investments a company can make is in power supplies and metering equipment that monitor power usage over time. When planning virtualization, bear in mind that, on average, power is half of a data center's total operating costs.
One mistake companies make is assuming that virtualizing all their low-utilization servers will solve their problems. In reality, many businesses run servers whose applications execute only on certain days of the month; the rest of the time, those machines sit powered up doing nothing. Think, for instance, of machines that do front-end processing on the 15th and the last day of the month. Moving such workloads onto a virtualized machine may save less power than simply placing them under power-cycle management so that they run only when needed.
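The power-cycle idea can be sketched in a few lines. This is a minimal, hypothetical example: it only computes the intended power state for the twice-a-month front-end job described above; a real deployment would feed that state to the PDU's own management interface rather than print it.

```python
import calendar
from datetime import date

def is_run_day(d: date) -> bool:
    """True on the 15th and the last day of the month -- the only
    days this hypothetical front-end job needs its server running."""
    last_day = calendar.monthrange(d.year, d.month)[1]
    return d.day in (15, last_day)

def desired_power_state(d: date) -> str:
    # A real script would call the PDU vendor's API here; this
    # sketch only decides what the state should be.
    return "on" if is_run_day(d) else "off"

print(desired_power_state(date(2024, 2, 29)))  # "on" (last day of Feb)
print(desired_power_state(date(2024, 2, 14)))  # "off"
```

Run daily from a scheduler, a decision function like this keeps the machine dark for the roughly 28 days a month it would otherwise idle.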
Some products, such as Raritan's Metered Power Distribution Unit (PDU) and Raritan's Switched PDU, trend power usage over time, show spikes, and allow remote and automated port shutdowns. These tools enable networking teams to better control power usage.
Further, when you look at actual power utilization (not the power supply's rated capacity, which is only half the information needed), you can move the highest-draw servers' applications to a virtualized machine for the greatest power impact.
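Picking candidates then becomes a simple ranking exercise. The server names and wattages below are invented for illustration; the point is that the list is ordered by measured draw from a metered PDU, not by nameplate rating.

```python
# Hypothetical average measured draw (watts) per server, taken from
# a metered PDU over a full billing cycle -- not nameplate ratings.
measured_watts = {
    "db01": 450,
    "web03": 180,
    "batch07": 320,
    "legacy02": 610,
}

# Virtualize the biggest measured consumers first for the largest savings.
candidates = sorted(measured_watts, key=measured_watts.get, reverse=True)
print(candidates)  # ['legacy02', 'db01', 'batch07', 'web03']
```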
Another common mistake is not monitoring network utilization through an entire month's cycle (preferably through a full quarter). You must plan for the peaks, which you may miss if you sample utilization only every 10 days, for example. When virtualizing machines, allow enough resources for all applications to operate during peak utilization periods.
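The sizing rule is worth making concrete. The sample values below are hypothetical utilization percentages collected across a month; the sketch simply shows why the host must be sized for the peak rather than the comfortable-looking average.

```python
# Hypothetical per-interval utilization samples (%) across a full month.
samples = [12, 18, 9, 22, 95, 88, 15, 11, 91, 14]

average = sum(samples) / len(samples)  # 37.5 -- looks harmless
peak = max(samples)                    # 95 -- what you must plan for

# Sampling only every 10 days could easily land between the 90%+
# spikes and suggest a host far too small for month-end load.
print(f"average={average:.1f}%, plan for peak={peak}%")
```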
Growth of the applications in both size and resources is another important consideration. Most software companies should be able to tell you how requirements have grown over the past several years. A savvy manager will want to allow for that percentage of growth so that applications that have been virtualized won't need to be de-virtualized in the short term.
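That growth allowance is just compound interest. The figures below (200 GB today, 15% annual growth, a three-year horizon) are assumptions for illustration; in practice you would plug in the growth rate your software vendor reports.

```python
def headroom(current_need: float, annual_growth: float, years: int) -> float:
    """Capacity to reserve so a virtualized app won't need to be
    de-virtualized: compound the vendor-reported annual growth rate
    over the planning horizon."""
    return current_need * (1 + annual_growth) ** years

# e.g. an app using 200 GB today, growing 15%/year, planned for 3 years:
print(round(headroom(200, 0.15, 3), 1))  # 304.2 (GB)
```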
Bandwidth should be high on the list as well. When planning virtualization, it is time to consider increasing your bandwidth. People who look only at port utilization are really missing the mark on network bandwidth. Saying a port is only 20% utilized is like saying, "I'm in the car only 20 minutes a day, so I only need a car that goes 20 miles an hour." It isn't how often a port is used that matters, but the speed and quality of the connection. Even if you bump up to a 10 GbE link (which will be available on copper 10GBASE-T within the next few months), using half that bandwidth is still five times faster than a 1 GbE link. This gives you room to grow as your applications' network bandwidth needs grow.
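Because consolidated applications share one uplink, their peak needs add up. The per-application figures below are invented for illustration; the check itself is the point.

```python
# Hypothetical peak bandwidth needs (Gbps) of four applications being
# consolidated onto one virtualization host with a single uplink.
app_peaks_gbps = [0.4, 0.9, 1.2, 0.6]

total = sum(app_peaks_gbps)   # 3.1 Gbps combined peak
fits_1gbe = total <= 1.0      # False: a 1 GbE uplink is saturated 3x over
fits_10gbe = total <= 10.0    # True, with roughly 7 Gbps of growth room
print(total, fits_1gbe, fits_10gbe)
```

Each application looked modest on its own 1 GbE port; together they demand a 10 GbE uplink, which is exactly the headroom argument above.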
Pick an operating system (OS) platform that you know you can support, and by all means get some training. Whether you go for certification or not, knowledge is power and will save a lot of headaches when multiple applications are relying on a fix. There is a move these days to consolidate platforms to keep training costs down, and application vendors have become much better about supporting a variety of platforms. A recent study presented at the Data Center Decisions shows (the Data Center Decisions 2009 PI results) found that the growth in UNIX/Linux platforms is pronounced and that they are overtaking Windows servers. That said, there are several varieties, and deciding on the right one may take some investigation.
Lastly, think about your hardware, especially for new purchases. Some companies purchase hardware from certain vendors based on history, when in fact today's choices offer a wide variety of features in both hardware and software. For instance, one vendor may allow field upgrading of power supplies while another may require the purchase of a new chassis to move to more energy-efficient power supplies. One may provide more hot-swap capability than another, and some may offer better monitoring than others. It is important for companies to do due diligence before blindly sticking with a historical favorite, or they may lose out on some great new technology and advantages.
About the author:
Carrie Higbie has been involved in the computing and networking industries for over 28 years. Carrie currently works with The Siemon Company as the global director for Data Center Solutions and Services in support of end users and active electronics manufacturers. Carrie is an expert on TechTarget's SearchNetworking.com, SearchUnifiedCommunications.com and SearchDataCenter.com websites and is on each site's board of advisors. She contributes articles to these sites as well as SearchMobileComputing.com, SearchSecurity.com and SearchCIO.com, covering topics including technology planning, business continuity, budgeting, personnel management, certifications and security, to name a few.