Five challenges in private cloud networking

The basic goal of cloud networking is to provide users with connectivity even when applications and their components are hosted dynamically across a number of resource pools in dispersed private and public data centers.

This will enable a Software as a Service (SaaS) model that makes applications appear to users exactly as they did when they lived on-premises. To do this, enterprise IT organizations must build private and hybrid cloud networks that dynamically map two sets of independent variables, but this creates a set of special network challenges.

Resource address mapping in a dynamic environment

Harmonizing resource addressing is the most obvious challenge in private cloud networking. Users typically know applications by a URL and, through that, an IP address. The more dynamic a resource pool—meaning, the faster resource assignments change—the more difficult it is to harmonize resource flexibility with application access.

Cloud director technology, including cloud extensions to load-balancing server switches, can learn resource addresses from the cloud provisioning process. That means these elements will link the user to the correct resource. Where the private cloud network is built without specific cloud director features, or where the whole resource pool is linked to a VPN, it may be necessary to manage application directories to keep them synchronized with the resource locations. This would apply to DNS, UDDI and potentially LDAP directories. Where application links change with resource reassignment, those changes have to be reflected in the directories that point to the applications.
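The directory-synchronization step described above can be sketched as a hook that the provisioning process calls whenever an application moves. This is a minimal illustration; the names (AppDirectory, on_resource_reassigned) are hypothetical stand-ins for a real DNS, UDDI or LDAP update mechanism, not an actual API.

```python
class AppDirectory:
    """Stands in for a DNS/UDDI/LDAP directory mapping app names to addresses."""
    def __init__(self):
        self._records = {}

    def register(self, app_name, address):
        self._records[app_name] = address

    def resolve(self, app_name):
        return self._records[app_name]

def on_resource_reassigned(directory, app_name, new_address):
    # Hook invoked by the cloud provisioning process whenever an
    # application is moved to a different member of the resource pool.
    directory.register(app_name, new_address)

directory = AppDirectory()
directory.register("crm.example.com", "10.0.1.15")                 # initial placement
on_resource_reassigned(directory, "crm.example.com", "10.0.2.40")  # app migrated
print(directory.resolve("crm.example.com"))  # prints 10.0.2.40, the new host
```

In a real deployment the hook would issue a dynamic DNS update or an LDAP modify rather than write to an in-memory table, but the shape is the same: the provisioning event, not the user, drives the directory.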

Tackling access and hybrid public apps in private cloud networking

Cloud computing promotes flexibility, and that flexibility complicates access control for both applications and resources. A move to a private cloud should prompt consideration of a centralized security solution with single sign-on and application rights assigned on a per-user basis; this secures the access side of the cloud. On the resource side, a migration to cloud computing would not normally create a greater server security burden than virtualization would, and the same solutions apply, except for public cloud resources hybridized with the private cloud network. For public or hybridized apps, it may be necessary to install a cloud-hostable firewall and other security software as part of the public cloud machine images.
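The per-user application rights mentioned above reduce, at their core, to a centrally held rights table consulted at sign-on. The sketch below is illustrative only; the rights table and may_access function are hypothetical, not part of any real SSO product.

```python
# Centralized per-user application rights, as a single sign-on system
# might hold them. Users and application names are made-up examples.
rights = {
    "alice": {"crm", "billing"},
    "bob": {"crm"},
}

def may_access(user, app):
    # Central policy check: unknown users get no rights.
    return app in rights.get(user, set())

print(may_access("alice", "billing"))  # prints True
print(may_access("bob", "billing"))    # prints False
```

The point of centralizing the table is that when an application moves between resource pools, the rights follow the user and the application name, not the server address.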

Application acceleration in cloud networks

Private cloud deployment poses new challenges for application acceleration and application performance-management solutions. Many current acceleration and performance-management solutions are based on pair-linked hardware elements. In a private cloud, the server side of these hardware devices has to be able to reach all of the resources on which the application can run, rather than the single server a dedicated application would normally occupy. In hybrid applications, the problem of installing a hardware device in the public cloud arises again. There, the best solution may be to focus on acceleration and performance-management tools that are cloud-hostable as software. The trend toward software routers for the cloud may also help, since these virtual devices often have acceleration clients available as an option.


Load balancing and failover in private cloud networking

The dynamic nature of the private cloud makes load balancing and failover more difficult. There is a difference between a cloud application that manages variable resource assignment to optimize resource usage and performance, and one that is expected to change resources in response to traffic growth or the failure of a component. Load-balancing switches with cloud awareness can provide both failover and elastic resource assignment, but traffic has to be routed through these devices, which often increases latency. Directory-based failover and load balancing are available even as a service, but they may not act quickly enough to satisfy users. Where possible, employ application-based tools such as distributed transaction processing, and ensure that the network supports the way these tools work.
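Directory-based failover, at its simplest, means resolving an application name to the first backend that passes a health check. The sketch below stubs the health results with a table; a real deployment would probe the servers, and all the addresses and names here are illustrative.

```python
def pick_backend(backends, is_healthy):
    # Return the first backend that passes the health check, in
    # priority order; fail loudly if the whole pool is down.
    for b in backends:
        if is_healthy(b):
            return b
    raise RuntimeError("no healthy backend")

# Illustrative pool: the primary has failed, so failover selects the
# next healthy address.
backends = ["10.0.1.15", "10.0.2.40", "10.0.3.7"]
health = {"10.0.1.15": False, "10.0.2.40": True, "10.0.3.7": True}
print(pick_backend(backends, health.get))  # prints 10.0.2.40
```

The speed limitation the text notes comes from how often the health table is refreshed and how long clients cache the old answer; both delays sit outside this selection logic.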

Optimizing performance in private cloud networking

Latency is often an insidious problem in the private cloud, because it’s almost an automatic attribute of adding anything new to a network. Access congestion and network congestion both contribute to latency, as does the number of hops a packet must take across the network. A check of network service performance will usually identify where latency is accumulating, and dealing with those points first is essential to maximize cost/benefit relationships for the cloud overall.
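The check described above amounts to measuring latency per network segment and attacking the worst contributor first. The sketch below uses made-up measurements purely to show the bookkeeping; segment names and millisecond values are illustrative, not real probe output.

```python
# Per-segment latency along an application path, in milliseconds.
# The numbers are illustrative, standing in for real measurements.
segments = [
    ("access link", 4.0),      # user to campus edge
    ("WAN/VPN", 22.0),         # site to data center
    ("cloud director", 3.0),   # resource-mapping element
    ("server switch", 1.5),    # final hop to the resource pool
]

total = sum(ms for _, ms in segments)
worst = max(segments, key=lambda s: s[1])
print(f"total latency {total:.1f} ms; worst segment: {worst[0]} ({worst[1]} ms)")
```

Ranking segments this way makes the cost/benefit argument concrete: fixing the worst segment first buys the largest reduction per change made.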


Network management planning can start the same way latency and loss planning does. When components are added for security, application acceleration, or load balancing and failover, they have to be managed in harmony with the rest of the network. That means ensuring that the management tools in use support the new devices and that management practices and problem-determination procedures are updated to reflect the risks these components introduce. Remember, the private cloud is an ecosystem the network creates and sustains, and the network's behavior as a whole will determine the success of a private cloud investment.

The best way to manage performance in a private cloud network is to adopt a “problem budget” perspective. Any connection, service or network element introduces performance issues in the form of latency or dropped packets. Any application has a “budget” for problems, and if things are added that increase the problems of latency or loss, the budget has to be restored. It’s important to minimize the latency and loss of new elements, but it may also be necessary to renegotiate SLAs for WAN services like VPNs to compensate for cloud components whose impact can’t be further minimized.
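The problem-budget arithmetic above is simple enough to show directly. In this sketch the budget, the existing path latency, and the per-element costs are all illustrative numbers; the point is the bookkeeping, not the values.

```python
# "Problem budget" bookkeeping: an application tolerates a fixed
# latency budget; each added cloud element spends part of it, and the
# remainder is what is left to negotiate in WAN SLAs.
APP_BUDGET_MS = 150.0          # illustrative end-to-end tolerance
existing_path_ms = 120.0       # illustrative latency before cloud changes

added_elements = {             # latency cost of each new element (ms)
    "load-balancing switch": 2.0,
    "cloud firewall": 1.0,
    "extra routing hop": 5.0,
}

spent = existing_path_ms + sum(added_elements.values())
headroom = APP_BUDGET_MS - spent
print(f"remaining budget: {headroom:.1f} ms")  # prints remaining budget: 22.0 ms
```

When headroom goes negative, the budget has to be restored somewhere: either the new elements are slimmed down, or, as the text suggests, the WAN SLA is renegotiated to give some latency back.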

Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982. He is the publisher of Netwatcher, a journal addressing advanced telecommunications strategy issues. Check out his SearchTelecom.com networking blog, Uncommon Wisdom.

This was first published in July 2012
