Editor’s Note: Part two of our cloud computing tutorial investigates how the private cloud model affects your network. We examine how it affects network utilization, performance and applications, and how the network needs to be adapted to accommodate private clouds. We also look at how desktop virtualization fits into a private cloud strategy and examine its related impact on the network.
A private cloud -- also known as an internal cloud -- is the model in which an organization builds its own data center using cloud computing principles. The private cloud is really virtualization technology realizing its full potential: a data center designed as a private cloud looks like a public cloud from the perspective of applications and services. Compute resources are pooled to create on-demand access to applications and services. The only difference is that those resource pools are served from the organization’s own data center rather than over the public Internet. This means the compute resources must be virtual and portable, able to move across the network in real time.
Private clouds are highly disruptive to the data center network and break most traditional network design best practices. Traditional designs follow a common three-layer topology -- an access layer, an aggregation layer and a core -- and many networks are built at Layer 3 (the IP layer) because it offers the most flexibility. This architecture, while well suited to conventional data centers, will not allow organizations to migrate to a private cloud model.
In the three-tier architecture, servers typically connect to a top-of-rack switch, which is aggregated by an end-of-row switch and connected back to a core switch. In a private cloud, compute resources need to be fluid, moving across a rack, across a data center or even between data centers in as little time as possible. Each additional hop a virtual workload must traverse adds latency that degrades the performance of the private cloud.
Instead, network architects have been building much flatter Layer 2 networks using horizontal stacking, a relatively new technology. The top-of-rack switches connect directly with one another and then back to the core switch, eliminating the end-of-row switch and removing one hop from the architecture. Keeping the traffic at Layer 2 minimizes the latency involved in moving a cloud resource, such as a virtual server, across the network. This puts a premium on low-latency 10 Gigabit Ethernet and keeps us watching for true 40 Gigabit (not multiplexed) and 100 Gigabit Ethernet.
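The hop reduction described above can be sketched with a simple model. The switch names and the per-hop latency figure below are illustrative assumptions, not measurements from any vendor's hardware; the point is only that removing the end-of-row tier shortens the path a workload's traffic must traverse between racks.

```python
# Illustrative per-hop switching latency in microseconds (assumed value,
# not a vendor measurement).
HOP_LATENCY_US = 5

# Switches traversed between servers in two different racks under each design.
THREE_TIER = ["ToR-A", "EoR-A", "core", "EoR-B", "ToR-B"]  # access -> aggregation -> core
FLAT_L2 = ["ToR-A", "core", "ToR-B"]  # top-of-rack switches uplink straight to the core

def path_latency(switches, per_hop_us=HOP_LATENCY_US):
    """Total switching latency: one hop per switch traversed."""
    return len(switches) * per_hop_us

print("three-tier:", len(THREE_TIER), "switch hops,", path_latency(THREE_TIER), "us")
print("flat L2:   ", len(FLAT_L2), "switch hops,", path_latency(FLAT_L2), "us")
```

With these assumed numbers the flat design crosses two fewer switches on every inter-rack path, which is exactly the latency a live-migrating virtual server avoids.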
Another disruptive force in private clouds is the increased use of desktop virtualization. In a virtual desktop environment, the user’s session is run on a centralized server located in the data center and then streamed to the user’s desktop and run in a secure, sandboxed environment. All of the applications are run in the data center, and the endpoint simply displays the desktop or applications. Anyone who has seen a demo of this realizes that it has the power to change the way users work. One could be working at the office, turn off the PC, go to an Internet kiosk at an airport and log in, and the session would be restored exactly as it was left.
When the worker shuts down, nothing is left on the endpoint. The endpoint itself really doesn’t matter: a worker could use a standard Windows PC, Mac, laptop, netbook or smartphone and have the same experience (at least, as far as the device permits).
In many ways, desktop virtualization is the rebirth of the mainframe, except instead of seeing a green screen, the worker has a multimedia experience wherever they are connected. It’s easy to confuse this with the old “thin client” technology, but desktop virtualization is almost indistinguishable from traditional computing and has full audio and video capabilities.
However, the performance of desktop virtualization depends on the organization’s network architecture and configuration. Many organizations do not treat the tiers of the network equally, and this needs to change. The edge of the network, or wiring closet, often contains lower-end devices -- some of which cannot even be managed -- while the quality and reliability of the infrastructure increase the closer it sits to the data center. As a result, features available in the data center may not be available in the wiring closet. With desktop virtualization, the entire network becomes the delivery mechanism for the desktop, so a consistent feature set is required from the data center, through the aggregation layer, out to the wiring closet to ensure a consistent user experience. Also, placing desktop virtualization traffic in its own virtual LAN or quality of service (QoS) queue ensures that the additional traffic will not interfere with other mission-critical traffic.
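As a minimal sketch of the VLAN-plus-QoS approach, the fragment below uses Cisco IOS-style modular QoS syntax, which is common in enterprise wiring closets. The VLAN number, class and policy names, DSCP marking and bandwidth percentage are all hypothetical choices for illustration; the right values depend on the virtual desktop product in use and the organization's existing QoS scheme.

```
! Dedicated VLAN for virtual desktop traffic (VLAN ID is illustrative)
vlan 200
 name VDI
!
! Classify desktop virtualization traffic by its DSCP marking (assumed af41)
class-map match-any VDI-TRAFFIC
 match dscp af41
!
! Reserve a share of uplink bandwidth for that class (percentage is illustrative)
policy-map UPLINK-QOS
 class VDI-TRAFFIC
  bandwidth percent 30
!
interface GigabitEthernet1/0/1
 switchport access vlan 200
 service-policy output UPLINK-QOS
```

For this to deliver the consistent experience described above, the same classification and queuing policy must be applied at every tier, not just in the data center.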
Next: Hybrid cloud deployment