Deploying internal cloud computing can help enterprises achieve efficiency and agility, but many IT teams are concerned they don't have the compute resources or budget to implement the technology. The answer may be to better utilize an enterprise's existing compute resources by implementing automated provisioning that can assign -- or roll back -- resources based on actual need.
Sound expensive? Not necessarily. IT managers can combine existing provisioning tools and a range of free software to automatically provision and de-provision compute resources, eliminating idle servers.
Provisioning resources from server pools based on need
In a typical tiered and siloed data center, administrators provision server resources based on projected peak needs, even if those servers will be vastly underutilized or idle most of the time. If servers are instead assigned out of a pool based on near-term needs and then reclaimed into the pool when no longer required, they are free for other uses the rest of the time (and can even be powered down if not needed for anything). A compute-resource pool can thus use host resources more efficiently, especially where virtualization and other techniques provide "lightness."
"Lightness" comes as resources are allocated to systems quickly, as and when needed, and then reclaimed by the pool as soon as possible. Using automated provisioning of virtual or real servers out of a pool lets IT bring a new system online in minutes rather than days or weeks. Adding resources to the pool is thereby decoupled from provisioning individual systems quickly.
Challenges to automated server provisioning
Ideally, end users of a cloud simply request resources and then wait briefly for those resources to be provisioned. Because of the complexity of what must happen, and the speed with which it must occur, this requires thorough automation that IT can trust. That requirement introduces three challenges: cost, trust and expertise.
Buying a full-blown automation tool like Sun's Q-layer can be pricey. And IT is still hesitant to trust fully automated resource allocation; IT managers and administrators still prefer to press a go/no-go button. In addition, IT teams need time to build expertise: beyond understanding the provisioning tools themselves, they must work with the platforms and management tools required to build a pool of compute resources out of a collection of servers, or a single storage pool out of multiple arrays.
Internal cloud computing on the cheap: Reuse, repurpose, recycle
If budgets are tight, IT staffs can still build a cloud -- or something cloud-like -- by leveraging tools they already have, along with free software where possible. They must also be selective about what they spend money to add.
The first step? Reuse, repurpose, recycle. This comes naturally to those who work in perpetually resource-starved environments, such as community colleges and charities. Any IT shop that has tools to help with provisioning servers, network connections and storage probably has the ability to script them -- if in no other way, then by punching commands through to a command line interface (CLI).
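Scripting an existing tool by "punching commands through" its CLI can be as simple as wrapping the command in a small function. Below is a hedged sketch using Python's standard `subprocess` module; the provisioning command named in the comment is hypothetical, and a harmless stand-in command is used for the demonstration.

```python
# Sketch: driving an existing provisioning tool's CLI from a script.
import subprocess

def run_cli(args):
    """Run one CLI command, capture its output, and fail loudly on error."""
    result = subprocess.run(args, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"{args[0]} failed: {result.stderr.strip()}")
    return result.stdout.strip()

# In a real shop this might be something like:
#   run_cli(["provision-vm", "--template", "web"])   # hypothetical CLI
# Demonstrated here with a harmless stand-in command:
print(run_cli(["echo", "vm provisioned"]))
```

Chaining a few such calls -- provision, configure, register with monitoring -- is all most shops need to turn a manual runbook into a repeatable script.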
Mapping your existing provisioning capabilities
The IT staff should make a thorough map of what its current tools can accomplish. A project manager should create a flow chart or unified modeling language (UML) diagram to help map provisioning and de-provisioning processes and associate them with actual devices, software packages and staff positions. Perl, PowerShell, Bash or other available scripting languages can provide the connecting glue. This again underscores the importance of staff knowing how to use the scripting languages available to them.
The goal is to make sure the processes are standardized, modular, and require minimal human intervention after the "go." An organization with an internal IT Web presence can put up a "request resources" page and route requests from it to people who can assess the availability of resources and click "go," or justify not doing so.
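The modular, minimal-intervention flow described above -- a request form routed to a capacity check and a single human go/no-go decision -- can be sketched as a few small functions. This is an assumption-laden illustration: the step bodies are placeholders standing in for calls to real provisioning tools or scripts.

```python
# Sketch of a modular request pipeline with a human go/no-go gate.
# All names and thresholds are hypothetical placeholders.

def check_capacity(request):
    # Real version: query monitoring data (utilization, headroom).
    return request["cpus"] <= 8

def provision(request):
    # Real version: drive the provisioning tool's CLI or API.
    return f"assigned {request['cpus']} CPUs to {request['owner']}"

def handle_request(request, approved):
    """Route a 'request resources' submission through the pipeline."""
    if not check_capacity(request):
        return "denied: insufficient capacity"
    if not approved:                 # the human go/no-go button
        return "pending approval"
    return provision(request)

print(handle_request({"owner": "dev-team", "cpus": 4}, approved=True))
```

Because each step is its own function, a manual step can later be swapped for a freeware or commercial tool without disturbing the rest of the flow -- the modularity the article recommends.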
Freeware: The answer to internal cloud computing?
If IT does not have the tools it needs, it can fill many gaps with free software, either permanently or simply as a starting point while budgets are tight. It's important that IT keep its processes modular so that using a freeware tool to replace a manual process, or a commercial tool to replace a free one, will be straightforward. Beyond the LAMP stack (Linux, Apache, MySQL, PHP/Python/Perl), free packages can fill many gaps in a data center looking to gear up for more cloud-like operations. Examples include:
- Monitoring tools such as Hyperic, Zenoss, OpenNMS, Cacti, and Nagios can provide visibility into server and storage utilization. SolarWinds supplies a free VM Monitor tool.
- KVM, Xen, and VMware ESXi are free hypervisors; Citrix XenServer is a multi-machine resource pool manager; Microsoft Hyper-V is inexpensive or already included in many organizations' Microsoft licenses.
- jManage handles Java applications in application servers.
- FastSCP helps move data between physical and virtual environments.
- Openfiler and Veritas Storage Foundation Basic allow consolidation and allocation of storage. StarWind provides a free version of its iSCSI target software.
- Eucalyptus is a cloud-management tool that ties together other systems to provide automation.
Many of these tools have commercial versions and support available if IT requires them or regains enough budget to afford them.
If IT sticks to modular principles in putting together a starter cloud on the cheap, it should be able to scale the parts as needed, swap in commercial solutions, or even integrate the use of provider resource clouds, such as those from Amazon, Terremark or AT&T.
About the author
John Burke is a principal research analyst with Nemertes Research, where he focuses on software-oriented architectures and management.
This was first published in August 2009