Enterprises are trying to make the best use of computing resources wherever they may reside, which ultimately is what the promise of hybrid cloud -- notwithstanding the sea of confusion over its true definition -- is all about.
Things only got murkier last year when VMware abandoned its vCloud Hybrid Service branding, renaming the service vCloud Air. This compounded the fact that many organizations, including companies that sell cloud services, still mistake simple hosting for cloud.
Yet despite the market's inability to settle on a consistent definition of hybrid cloud -- which the U.S. National Institute of Standards and Technology identifies as an environment with two or more interconnected cloud infrastructures -- its goal is clear cut: enabling enterprises to get the best of both worlds from public and private clouds.
Historically, however, networking in hybrid clouds has been tricky at best. Now that cloud adoption is becoming the norm, new challenges are emerging around ensuring application delivery and performance over hybrid cloud networks.
Jason Cornell is the manager of cloud and infrastructure automation at Cox Automotive, an Atlanta-based subsidiary of Cox Enterprises that sells digital marketing, software and other services to the automotive industry. He wants to treat the cloud as a platform -- not another data center. To accomplish that, his IT department aims to develop applications that are cloud-aware, meaning they can benefit from cloud resources and capabilities as needed. And if those applications need resources in his data centers, Cox Automotive's IT team taps into APIs to access those resources.
Cornell and his team are familiar with the challenges around networking and application delivery between their public and private clouds. As a result, they have a relatively straightforward approach: Avoid the hybrid model and all its headaches.
"We attempt to not couple on-premises and cloud solutions together where possible," Cornell says.
Apps should be 'cloud native'
The key to success in cloud application delivery -- be it public, private or hybrid -- is first understanding that cloud isn't just hosting; it's about agility and applications that are decoupled from physical infrastructure. Enabling an application to run in either a public or private cloud is fundamentally about making it what is sometimes called cloud native: able to scale up or down on demand and designed to run in a cloud environment.
"When we refer to 'cloud-native,' it's about how you build your app and think about how your app is laid out in a cloud environment, rather than just the packaging and shipping," says Lachlan Evenson, team lead for cloud platform engineering at Lithium Technologies, a social software company based in San Francisco.
Lithium maintains two data centers: one in North America and one in the Europe, Middle East and Africa region (EMEA). The company also uses Amazon Web Services (AWS) in both the United States and EMEA.
"We're just starting the transformation of bringing our applications into OpenStack," adds Joseph Sandoval, director of cloud engineering at Lithium.
The OpenStack cloud platform is being deployed in Lithium's data center to help enable the cloud-native approach to application delivery. Sandoval says that migrating applications from bare-metal servers to a cloud-native deployment isn't just a "lift and shift" from one type of infrastructure to another.
"You're best off just first automating your application monolith before you even start thinking about moving to cloud," Sandoval says.
Given that the cloud is elastic, a cloud-native application also needs to be elastic and scale across nodes. When determining which workloads should run on premises and which should run in a public cloud, there are several factors to consider. For example, some workloads have data that must reside in a private data center for security, privacy or regulatory reasons.
"We're very specific about which workloads run where," Sandoval says.
Containers gain favor in hybrid clouds
To truly be cloud-native, an application needs to be built with the assumption that it will live in the cloud, according to Lithium's Evenson. Additionally, the use of Docker containers is now becoming a core element of many cloud-native applications.
Prior to Lithium's Docker use, the company used and maintained deployment templates for both AWS and OpenStack for cloud application delivery, in addition to an orchestration system on top of them. By using Docker containers, Lithium is refining the deployment process for its resources in both Amazon's public cloud and its own private OpenStack cloud.
With containers, an organization can build, package and ship a cloud-native application that can run in any Docker-ready cloud. And by overlaying Kubernetes, the open source container orchestration system originally developed by Google, on top of Docker, an organization can run the same code in two different places in the same way.
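That pattern -- the same containerized application deployed identically on premises and in a public cloud -- can be illustrated with a minimal Kubernetes Deployment manifest. The sketch below is generic, not Lithium's actual configuration; the app name, image tag and registry are invented for illustration.

```python
import json

def deployment_manifest(app, image, replicas):
    """Build a minimal Kubernetes Deployment manifest (illustrative names).

    Because the container image abstracts away the underlying
    infrastructure, the same manifest can be applied unchanged to an
    on-premises cluster and to a cluster running in a public cloud.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": app}},
            "template": {
                "metadata": {"labels": {"app": app}},
                "spec": {
                    "containers": [{"name": app, "image": image}]
                },
            },
        },
    }

# One manifest, two destinations -- conceptually, the equivalent of applying
# the identical file with kubectl against two different cluster contexts.
manifest = deployment_manifest("web-frontend", "registry.example.com/web:1.4.2", 3)
print(json.dumps(manifest, indent=2))
```

The point is that nothing in the manifest names the infrastructure underneath it; the decision of which cluster receives it is made entirely outside the application definition.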
"One of the things we're doing with hybrid cloud is using it to enable another availability zone," Evenson says. "We have on-premises OpenStack and then we can augment a second running version of the same application in AWS, where we have [a dedicated network connection via] Direct Connect."
The ability to run the same application in both the public cloud and the on-premises OpenStack cloud was enabled by Lithium's abstraction of applications into Docker containers. Lithium's IT team makes an app-by-app policy decision on where things will run, based on data issues, service-level agreements (SLAs) and availability.
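An app-by-app placement decision like that can be thought of as a simple policy function. The sketch below is hypothetical -- the field names and the 50-millisecond threshold are assumptions, not Lithium's actual rules -- but it captures the three criteria mentioned: data residency, SLAs and availability.

```python
def choose_venue(workload):
    """Pick a venue for a workload.

    Field names and the SLA threshold are illustrative assumptions,
    not any company's actual placement policy.
    """
    # Data bound to a private data center (security, privacy or
    # regulatory requirements) always wins.
    if workload.get("data_must_stay_onprem"):
        return "onprem-openstack"
    # Strict real-time SLAs favor infrastructure whose network you control.
    if workload.get("sla_ms", float("inf")) < 50:
        return "onprem-openstack"
    # Everything else can treat the public cloud as an extra
    # availability zone.
    return "aws"

print(choose_venue({"data_must_stay_onprem": True}))  # onprem-openstack
print(choose_venue({"sla_ms": 200}))                  # aws
```

Encoding the policy as code, rather than as tribal knowledge, is what lets the same decision be applied consistently every time a new application is onboarded.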
Deploying Docker with Kubernetes also provides a way to manage the hybrid cloud in scalable fashion. In the pre-cloud era, Evenson explains, developers always wanted to be able to tunnel into their applications via Secure Shell (SSH). But in an environment with a thousand servers, for instance, it's just not feasible to tunnel into them all. With Kubernetes, an administrator gets a single place to view how a thousand containers are running, including logging information.
Analytics software company FICO, based in San Jose, Calif., is also a big believer in a hybrid cloud enabled by Docker containers. Nicholas Gerasimatos, senior director of cloud services engineering at FICO, says the company has a very broad and diverse architecture that includes what Gerasimatos refers to as "big and blocky" infrastructure, such as EMC storage and IBM mainframes. The next-generation architecture that FICO is building is all based on OpenStack and Docker containers.
"FICO, traditionally, has always been an on-premises company, so we would acquire hardware, load our software on it and then ship it to a customer and do integration," Gerasimatos says. "We realized that model only worked for large organizations."
FICO's smaller customers didn't want hardware deployed, which is where the FICO Analytic Cloud comes into play. It's a customer-facing, multi-tenant cloud service that can reside in a FICO data center, a public cloud or a combination of the two.
"So we might do our own data center and then we might have something for high-availability in AWS," Gerasimatos says.
The main driving factor is availability: where a deployment can legally and practically run.
"Some countries won't let you take data across borders, and there isn't a cloud in every single country," Gerasimatos says.
Like the IT team at Lithium, Gerasimatos says FICO now uses Docker containers as much as possible.
"Part of the reason we do that is because we can ship the containers to any location and it doesn't matter from an underlying hardware perspective," Gerasimatos says.
To help manage application availability across both public and private cloud deployments, Gerasimatos uses Amazon's Route 53 managed DNS service as well as some homegrown technology.
"We have something we have built internally that validates that services are available from an external perspective," Gerasimatos says.
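A homegrown external availability check of the kind Gerasimatos describes might, in its simplest form, look like the sketch below. FICO's actual tooling is not public; the function name, the health-check URL and the injectable `fetch` parameter are all assumptions for illustration.

```python
from urllib.request import urlopen
from urllib.error import URLError

def service_available(url, fetch=urlopen, timeout=5):
    """Return True if the service answers with an HTTP 2xx from outside.

    `fetch` is injectable so the check can be exercised without a live
    endpoint; in real use it defaults to urllib's urlopen.
    """
    try:
        with fetch(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (URLError, OSError):
        return False

# Demonstration with a stubbed response instead of a real network call.
class _OkResponse:
    status = 200
    def __enter__(self): return self
    def __exit__(self, *exc): return False

print(service_available("https://example.com/health",
                        fetch=lambda url, timeout: _OkResponse()))  # True
```

In practice such a probe would run from outside the data center -- the whole point is to see the service the way a customer does -- and its results would feed the DNS failover that Route 53 performs.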
FICO's IT team keeps tabs on its clouds with a mix of commercial and open source monitoring tools, including Nagios, Splunk and Zabbix, along with the open source search engine Elasticsearch. It then ties all those sources of data together to automate cloud application delivery. Additionally, FICO uses Red Hat's CloudForms technology to get a single pane of glass for application deployment that bridges its Amazon, OpenStack and VMware environments.
"Our developers don't know where the apps are running and they shouldn't care," Gerasimatos says. "We want to make it simple for them to deploy, and it doesn't matter if it's OpenStack, VMware or Amazon; it just runs and the app is getting the performance it needs."
SDN takes on hybrid cloud security
The network also has a role to play in securing hybrid clouds. Lithium's Evenson says software-defined networking (SDN) is a key element in providing network-level isolation so that each application interface is on a different network.
Lithium needs to provide the same level of security, if not a better one, inside of AWS as it does with its on-premises cloud.
"There is a common baseline for all our infrastructure that is enforced," Evenson says. "We are providing secure multi-tenancy using SDN and are able to provide the same security in AWS as we do on premises."
Going a step further, Lithium also now encrypts everything in its own data centers as well as its Amazon deployment. In the past, however, Lithium didn't always fully encrypt the data inside its own data centers. Essentially, the best practices that grew out of securing the public cloud are now being pulled back into the private, on-premises deployment to improve security everywhere.
Cloud application delivery: It's all about location
When deciding where an application should run, there are a number of considerations. At Lithium, real-time services have strict SLAs that need to be adhered to. With the public cloud, however, Evenson says it's not always possible to guarantee performance across a network.
"In a bare-metal [server] deployment, you take the networking for granted, while in cloud you can't," Evenson says. "You could be sharing with others, so you need to make better decisions on how to service SLAs."
Ensuring the highest levels of performance is not just about putting the most important SLA-bound workloads on bare-metal servers. In cases where there is a cloud data center closer to a customer than a Lithium data center, Evenson has found the public cloud can actually provide better performance.
"I don't like to think about [our architecture as] hybrid, because it's just something we do," Sandoval says. "I don't care whether the infrastructure is public or private; I want a data center fabric that gives me the best of both and can run my workloads anywhere."