Imagine being able to do what Amazon Web Services can do within your own private data center: create a new disk or server, or start new instances of an application in seconds with a few clicks, choosing from hundreds of available templates that span a range of operating systems. That, in a nutshell, is what you could have by building a private cloud.
The phrase "private cloud" is a marketing term for the ability to host applications, servers and storage in a private data center and provide access to these resources on demand. It means you can deploy new applications, servers and storage (in the form of logical disks) in minutes, instead of following the old model of spending weeks or months scoping the requirements, getting the budget approved, selecting the best offer and waiting for the server to be delivered and deployed.
What's so important about building the private cloud?
You can gain significant cost reductions by migrating your existing applications and server infrastructure to virtualized servers in a private cloud. Today's high-end blade servers offer much better price/performance ratios at significantly lower power consumption than yesterday's standalone servers. You'll also use the space in your data center more efficiently (sometimes replacing a whole row of rack-mounted servers with a few blade servers in a single chassis occupying less than half a rack), allowing you to deploy more servers in the same space or to shrink the data center footprint and the cooling costs.
Elements of building a private cloud
The foundation of the private cloud is server virtualization: the ability to run numerous virtual machines on the same physical server, usually on Intel's x86 architecture with hypervisor software from VMware, Citrix or Microsoft. Server virtualization increases CPU and RAM utilization, as numerous virtual machines share physical resources that would otherwise sit idle. High-speed multi-core servers with extended memory architecture, for example, Cisco's UCS blade servers with up to 32 cores and 384 GB of memory, can run up to 100 virtual machines, each of them using 4 GB of memory (a typical memory footprint for a Windows or Linux server).
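To put those numbers in perspective, a back-of-the-envelope calculation shows how many 4 GB guests fit in a 384 GB blade (the 16 GB hypervisor overhead below is an illustrative assumption, not a vendor figure):

```python
def vms_per_blade(blade_ram_gb, vm_ram_gb, hypervisor_overhead_gb=16):
    """Rough count of VMs that fit in a blade's RAM.

    hypervisor_overhead_gb is an illustrative planning assumption;
    check your hypervisor's documentation for real figures.
    """
    return (blade_ram_gb - hypervisor_overhead_gb) // vm_ram_gb

# A UCS-class blade with 384 GB of RAM hosting 4 GB guests:
print(vms_per_blade(384, 4))  # → 92
```

Hypervisors can also overcommit memory (for example, by sharing identical pages between guests), which is how the marketing figures edge closer to 100.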
Apart from optimizing resource utilization, server virtualization lets you create new servers almost instantly. All you need is a virtual machine definition specifying the amount of RAM, CPU and external resources a server needs. Ideally you'd also have a library of predefined templates from which to create new machines, as it doesn't make sense to provision each Windows or Linux server from a CD/DVD ISO image.
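A virtual machine definition plus a template library can be sketched in a few lines of Python (the template names and resource sizes below are made up for illustration, not taken from any product):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VMSpec:
    """Virtual machine definition: the resources a new server needs."""
    name: str
    vcpus: int
    ram_gb: int
    disk_gb: int
    os: str

# A small library of predefined templates (illustrative values)
TEMPLATES = {
    "linux-small": VMSpec("template", 1, 4, 20, "Linux"),
    "windows-std": VMSpec("template", 2, 4, 40, "Windows"),
}

def new_vm(template: str, name: str) -> VMSpec:
    """Clone a template instead of installing from an ISO image."""
    return replace(TEMPLATES[template], name=name)

vm = new_vm("linux-small", "web01")
print(vm)  # a ready-to-deploy server definition in one call
```

The point of the sketch is the workflow: every new server request becomes a cheap clone of a vetted template rather than a fresh installation.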
The second element of the private cloud architecture is virtualized storage: the ability to create virtual disks on demand and attach them to virtual machines as needed. Virtual disks are usually stored as files in a large shared file system; all you need to deploy them is server virtualization software (vSphere from VMware or Hyper-V from Microsoft) and a large enough storage array.
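The "virtual disk as a file" idea is easy to demonstrate: a sparse file behaves like a thin-provisioned virtual disk, reporting its full size while consuming almost no physical blocks until data is written. This is only a minimal sketch; real formats such as VMDK or VHD layer metadata on top of the raw file.

```python
import os
import tempfile

def create_virtual_disk(path, size_gb):
    """Create a sparse file: a thin-provisioned virtual disk in miniature.

    The file reports its full size but consumes blocks only as data
    is written (on filesystems that support sparse files).
    """
    with open(path, "wb") as f:
        f.truncate(size_gb * 1024**3)

with tempfile.TemporaryDirectory() as d:
    disk = os.path.join(d, "vm01.img")
    create_virtual_disk(disk, 20)
    st = os.stat(disk)
    print(st.st_size)          # apparent size: 20 GiB
    print(st.st_blocks * 512)  # actual space used: near zero
```

Thin provisioning is also why spare-capacity monitoring matters: the array can be oversubscribed long before any single disk looks full.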
Last but definitely not least, you need a self-service portal that allows your users (individual departments or application teams) to create new virtual machines and disks themselves. In smaller organizations the GUI provided by the virtualization software (for example, vCenter from VMware) will do. Larger organizations will need more advanced software that provides isolation between users and role-based access control; the newly released vCloud Director from VMware or UCS Manager from Cisco could easily fit the bill.
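Stripped to its core, the role-based access control a portal provides is a mapping from roles to permitted actions. The role and permission names below are illustrative, not from any vendor's API:

```python
# Minimal role-based access control sketch for a self-service portal.
# Role and action names are illustrative assumptions.
ROLE_PERMISSIONS = {
    "admin":      {"create_vm", "delete_vm", "create_disk", "manage_users"},
    "department": {"create_vm", "create_disk"},
    "viewer":     set(),
}

def allowed(role: str, action: str) -> bool:
    """Unknown roles get no permissions (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(allowed("department", "create_vm"))     # True
print(allowed("department", "manage_users"))  # False
```

Real portals add tenancy on top of this (a department sees only its own virtual machines), but the deny-by-default role check is the part you cannot skip.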
Building the private cloud in phases
Private cloud infrastructure is usually created in three phases:
- A pilot project;
- Deployment of larger server and storage infrastructure and migration of existing servers into the virtualized environment;
- True private cloud with self-service portal and (if needed) charge-on-demand mechanisms.
The journey through these phases can take months or even years, and it's best to go in measured steps:
- Research the market and select the server and virtualization platform. My personal choices would be UCS from Cisco and vSphere from VMware. Others may choose a chassis from IBM or HP, or a hypervisor from Microsoft or the Xen family, depending on what they already have deployed.
- In most cases you'll already have a storage array; just make sure it has enough spare capacity. You usually have to give each virtual machine several gigabytes of disk space (before counting the actual user data it stores), so virtual disks quickly consume terabytes.
- After deploying the new infrastructure and getting familiar with it, build a library of virtual machine templates (typical Windows and Linux servers, for example) and start fulfilling all new server requests with virtual machines to score a few easy wins. When everyone is convinced virtualization is the right way to go, start migrating existing applications to the virtualized environment to improve their performance and reduce data center costs.
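The spare-capacity check from the steps above is simple arithmetic; the per-VM figure below is a planning assumption covering the OS image plus some user data, not vendor guidance:

```python
def storage_needed_tb(vm_count, gb_per_vm=40):
    """Rough storage array sizing.

    gb_per_vm is a planning assumption (OS image plus some user
    data); adjust it to your own template sizes and growth estimates.
    """
    return vm_count * gb_per_vm / 1024

# Migrating 100 existing servers at ~40 GB each:
print(round(storage_needed_tb(100), 1))  # → 3.9 (TB)
```

Run the same arithmetic against the array's actual spare capacity before committing to a migration wave, and leave headroom for snapshots and growth.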
What else do you need to know?
Server/disk virtualization and the private cloud concept are excellent tools ... but not more than that. You have to use them wisely or you'll waste a lot of time and money. Most IT organizations will have to change their organizational structure and processes to tightly integrate the networking, server and storage teams, as they have to work very closely together to create this environment.
About the author: Ivan Pepelnjak, CCIE No. 1354, is a 25-year veteran of the networking industry. He has more than 10 years of experience in designing, installing, troubleshooting and operating large service provider and enterprise WAN and LAN networks and is currently chief technology advisor at NIL Data Communications, focusing on advanced IP-based networks and Web technologies. His books include MPLS and VPN Architectures and EIGRP Network Design. Check out his IOS Hints blog.