A private cloud architecture represents a major shift not only in how enterprise data centers are built, but ultimately in how IT services are delivered to the end user. Compute, storage and networking form a collective pool of resources that can be tapped directly by the enterprise user. To support this agility, a private cloud needs to be a self-monitoring, self-healing, cohesive infrastructure.
Private cloud orchestration is like a modern, fly-by-wire aircraft, according to Andre Kindness, senior analyst for enterprise networks at Forrester Research. The plane's flight control system collates thousands of readings from throughout the aircraft, and then it automatically adjusts systems to keep the plane flying. The flight crew steering the aircraft is simply one more set of inputs to the overall system.
“Private cloud orchestration needs to operate in the same vein, monitoring the virtualized environment to optimize user-driven workload requests,” said Kindness.
Service catalogs and private cloud orchestration work together
A service catalog is the end user’s interface to the private cloud architecture. Typically presented to the user through a Web interface, the service catalog provides a list of services available, from something as simple as creating a new user account to spinning up a virtualized application environment to run a workload for the user.
The orchestration software that drives the service catalog understands the processes required for each service listed in the catalog, as well as what resources are available in the private cloud architecture to support it. When the user selects a service from the catalog, the orchestration system takes over, launching a series of automated tasks to allocate the server, storage and networking resources for the service. Conversely, when a user no longer needs that service, the orchestration software can release those resources back into the private cloud pool.
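The allocate-then-release cycle described above can be sketched in a few lines of Python. This is a minimal illustration, not the API of any real orchestration product; the `ResourcePool` class, the catalog entries and all names are invented for this example.

```python
# Hypothetical sketch of the catalog-to-orchestration flow: a user picks a
# catalog entry, the orchestrator allocates from the shared pool, and the
# resources return to the pool when the service is retired.

class ResourcePool:
    """Tracks the shared compute, storage and network capacity."""
    def __init__(self, vcpus, storage_gb, vlans):
        self.vcpus = vcpus
        self.storage_gb = storage_gb
        self.vlans = set(vlans)

    def allocate(self, vcpus, storage_gb):
        if vcpus > self.vcpus or storage_gb > self.storage_gb or not self.vlans:
            raise RuntimeError("insufficient capacity in the pool")
        self.vcpus -= vcpus
        self.storage_gb -= storage_gb
        return {"vcpus": vcpus, "storage_gb": storage_gb, "vlan": self.vlans.pop()}

    def release(self, allocation):
        """Return a retired service's resources to the pool."""
        self.vcpus += allocation["vcpus"]
        self.storage_gb += allocation["storage_gb"]
        self.vlans.add(allocation["vlan"])

# The service catalog maps user-facing entries to resource requirements.
CATALOG = {
    "small-app-env": {"vcpus": 2, "storage_gb": 50},
    "qa-test-env":   {"vcpus": 8, "storage_gb": 200},
}

def provision(pool, service_name):
    """Run the automated allocation steps for a catalog selection."""
    spec = CATALOG[service_name]
    return pool.allocate(spec["vcpus"], spec["storage_gb"])

pool = ResourcePool(vcpus=32, storage_gb=1000, vlans=[101, 102, 103])
env = provision(pool, "qa-test-env")   # user selects a service from the catalog
pool.release(env)                      # service retired; capacity returns to the pool
```

The point of the sketch is that no administrator appears anywhere in the loop: selection, allocation and release are all mechanical steps the orchestrator can run on its own.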
Elimination of manual processes key to private cloud orchestration
To allow users to self-provision services in a private cloud, all of the resources within the virtualized data center must be highly automated and self-healing. Network automation and scripting provide plenty of opportunities to integrate the data center network with private cloud orchestration.
Network administrators typically spend about 70% of their time on IT maintenance tasks, according to data collected in Cisco Systems' own data centers. Handing these tasks over to the private cloud orchestration solution not only prevents the network from becoming a barrier to full orchestration, but also frees network administrators' time for more complex and productive tasks, such as new deployments and supporting new systems. Cisco's own automation efforts dropped time spent on maintenance to 40% of an administrator's time, leaving over half of it for technology innovation work.
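A concrete example of the kind of repetitive maintenance task that can be scripted and handed to the orchestrator is generating the per-port VLAN configuration for a newly provisioned workload. The sketch below builds a Cisco IOS-style change set; the port names, VLAN number and workload label are placeholders, and how the change is pushed to the device is left out.

```python
# Illustrative sketch: turn a workload's port list into one reviewable,
# repeatable configuration change instead of hand-typed CLI on each switch.

def vlan_config(port, vlan_id, description):
    """Build the interface configuration stanza for one access port."""
    return [
        f"interface {port}",
        f" description {description}",
        " switchport mode access",
        f" switchport access vlan {vlan_id}",
        " no shutdown",
    ]

def build_change(ports, vlan_id, workload):
    """Assemble one change set covering every port the workload touches."""
    lines = [f"vlan {vlan_id}", f" name {workload}"]
    for port in ports:
        lines += vlan_config(port, vlan_id, f"auto: {workload}")
    return "\n".join(lines)

change = build_change(["GigabitEthernet1/0/1", "GigabitEthernet1/0/2"],
                      vlan_id=210, workload="qa-test-env")
print(change)
```

Because the script is deterministic, the orchestrator can rerun it for every new workload, which is exactly the class of work the Cisco figures above suggest moving off the administrator's plate.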
Data center networks also need to be designed from the ground up to support private cloud deployments. "Design for failure," suggested Andi Mann, vice president of Strategic Solutions for CA Technologies and co-author of Visible Ops Private Cloud. Because virtual machines are highly mobile in a private cloud, they can migrate to new host servers and away from hardware trouble. Data center networks need to react to these migrations and maintain a high level of resiliency by quickly rerouting traffic around problem areas.
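The reaction to a migration can be sketched as a simple event handler: when a VM moves to a new host, the network layer updates its mapping so traffic follows the workload. The event shape, host names and uplink identifiers below are invented for illustration and do not correspond to any specific product.

```python
# Toy sketch of the "design for failure" behavior: reroute traffic for a
# migrated VM without an administrator touching the network.

vm_location = {"vm-42": "host-a"}   # current VM-to-host mapping
host_uplink = {"host-a": "switchport-3", "host-b": "switchport-7"}

def handle_migration(event, locations, uplinks):
    """Repoint traffic for a migrated VM to its new host's uplink."""
    vm, new_host = event["vm"], event["to_host"]
    locations[vm] = new_host
    return {"vm": vm, "forward_via": uplinks[new_host]}

# host-a develops a hardware fault; the VM migrates and traffic reroutes.
route = handle_migration({"vm": "vm-42", "to_host": "host-b"},
                         vm_location, host_uplink)
```

In a real deployment this logic lives in the orchestration layer and acts on events from the hypervisor, but the principle is the same: the network's view of the workload is data that automation keeps current.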
In a private cloud environment, this new orchestration layer must coordinate the network with the compute and storage environments. Only when all three work in concert can an orchestrated private cloud deployment operate without constant manual intervention from administrators.
Private cloud orchestration in action
IT director Rick Parker has turned to a private cloud architecture, with orchestration provided by Platform Computing’s Platform ISF, to achieve a high level of resource utilization and agility at Fetch Technologies, a California-based company that collects unstructured data from the Web and delivers it in a structured format for its clients.
Fetch’s quality assurance team uses a Web-based service catalog for the company's testing environment. The team is continually spinning up, tearing down or completely refreshing up to 80 virtual machines within Fetch’s virtual environment, all through a Web interface. Without involving any IT administrators, the QA team can quickly build its own virtual data center to emulate whatever conditions it needs to test.
“I check in with the QA team every month or so, just to see if they are OK," said Parker, who finds that help desk calls from the QA team are nearly non-existent.
As he expands his private cloud to Fetch’s production environment, Parker already sees the role of IT changing within his organization. “IT will be out of the server management business and in the data center business,” he said.
While the difference may be subtle on the surface, it has a direct impact on how IT operates going forward. Rather than handling specific components, such as switches or servers, administrators will focus on making sure the data center as a whole runs at peak utilization and efficiency. In fact, even Parker's approach to hiring administrators has changed. “I look for candidates that are interested in technology, not just one element of it. Going forward, I am looking for cloud engineers that are willing to take on any part of the infrastructure, versus network engineers that just want to be really good at networking.”