Network monitoring authority Paessler, maker of PRTG, has seen the good, the bad and the ugly of network designs that prevent monitoring a system, which is why its product line covers everything from network management to server performance.
To get an idea of what they’ve seen, PR rep Michael Krems got the company talking about the top five most common mistakes network administrators and IT systems managers make that cause their virtualization deployment to fail.
Here’s what Paessler says will mess up your enterprise’s virtualization:
- Virtualizing systems without knowing their usual CPU/memory load, disk usage and network usage: You must monitor a system prior to virtualization in order to know how much load it will put on your VM host servers. Systems with high load may not be suitable for virtualization at all.
- Running too many VMs on a host, causing overloads: the performance of all the virtualized systems on that host suffers.
- Running too few VMs on a host: spending too much money buying more host servers than you need.
- Failing to compare mid-/long-term monitoring results from before and after virtualization, to ensure quality of service doesn’t suffer.
- Ignoring that one virtual system running amok, or hitting a performance/load peak, usually degrades the performance of every virtual system on the host. Without monitoring, such events often go undiscovered.
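The first three mistakes boil down to a capacity question: do the measured baseline loads of the candidate systems actually fit on the host, with headroom to spare? Here is a minimal back-of-the-envelope sketch of that check. It is my own illustration, not Paessler's method; the system names, the resource figures and the 80% headroom threshold are all assumptions for the example.

```python
# Illustrative capacity check (an assumed sketch, not PRTG's logic):
# given pre-virtualization baseline loads, estimate whether a set of
# systems fits on one host while leaving headroom for load peaks.
from dataclasses import dataclass

@dataclass
class Baseline:
    name: str
    cpu_cores: float  # average CPU cores consumed (from monitoring)
    mem_gb: float     # average memory footprint in GB

def fits_on_host(systems, host_cores, host_mem_gb, headroom=0.8):
    """True if combined load stays under `headroom` of host capacity."""
    total_cpu = sum(s.cpu_cores for s in systems)
    total_mem = sum(s.mem_gb for s in systems)
    return (total_cpu <= headroom * host_cores and
            total_mem <= headroom * host_mem_gb)

# Hypothetical baselines gathered by monitoring before virtualizing:
candidates = [
    Baseline("mail", 1.5, 4.0),
    Baseline("web", 2.0, 8.0),
    Baseline("db", 4.0, 16.0),
]

# A 16-core / 64 GB host holds all three with room to spare...
print(fits_on_host(candidates, host_cores=16, host_mem_gb=64))  # True
# ...but an 8-core / 32 GB host would be overloaded (mistake #2).
print(fits_on_host(candidates, host_cores=8, host_mem_gb=32))   # False
```

A real sizing exercise would of course use peak rather than average loads and include disk and network throughput, but the principle is the same: you can only do this arithmetic if you monitored the systems first.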
It seems like applying common sense would fix a lot of the issues surrounding virtualization. I wouldn’t place a lot of blame on the network manager, though. Much of what goes wrong with a virtual deployment simply comes down to the capabilities of the technology. Take these issues, for instance:
- Server virtualization impacts network latency.
- Server virtualization creates a network configuration burden.
- Virtual machines present dynamic environment issues for network pros.
As with any new technology, much of the struggle also comes from not having the right information. In the end, do you feel like enough conversation surrounds the impact virtualization has on your network?