Desktop virtualization storage strategies: Easing bandwidth and access

Without the right desktop virtualization storage strategies, implementing the technology could mean a great strain on storage area networks and the core. But with the right approach, enterprises could benefit from desktop virtualization and actually improve their storage systems.

Desktop virtualization is experiencing rapid enterprise uptake, but the technology poses special challenges to the network. In the first two parts of this series, learn about the challenges and network requirements involved with implementing desktop virtualization.

There is an unavoidable truth about desktop virtualization: Storage and user are separated across the network. This places demands on the network that differ by type of desktop virtualization and back-end storage strategy. Done poorly, desktop virtualization will consume inordinate amounts of storage and network resource and result in a lousy end-user experience: a lose-lose-lose proposition. The trick is to figure out a desktop virtualization storage strategy that makes it a win for each team.

As it stands now, storage is growing at 20% to 40% annually, according to Nemertes Research's 2009 benchmark, and that's putting the brakes on some virtualization plans. Server-centric desktop virtualization shifts what was once local desktop storage into the data center. What's more, while IT shops know that virtualization affects storage, they often don't know exactly how much additional storage virtual servers and desktops will consume.

Overcoming these challenges is crucial, considering the storage benefits desktop virtualization can deliver. Beyond decreased management and maintenance costs and improved security, the technology also has significant business continuity and disaster recovery benefits. What happens, for example, when H1N1 spreads through the office and everyone must work from home? And how many desktops are actually fully backed up in the event of a hard disk crash? Storage consolidation is, in fact, a potential benefit of desktop virtualization.

SANs a natural fit for desktop virtualization

Storage area networks (SANs) are still the dominant storage choice for virtualization, though a significant percentage of organizations also rely on network attached storage (NAS). Most go with SAN because it is the dominant technology for data centers and it is a block storage protocol (which reads and writes data in blocks), providing greater performance than NAS for real-time virtualization operations.

Beyond desktop virtualization storage capacity: Bandwidth and access

Both classes of desktop virtualization (server-centric and desktop-centric) place demands on data center storage beyond just capacity. Each desktop virtual machine (DVM) also requires storage bandwidth and storage access (input/outputs per second, IOPS).

Take, for example, 10,000 virtual desktops, each with a 20 GB OS disk. On the capacity front, moving the desktop OS into the data center could eat 200 TB of disk storage. Add applications and user data, and storage could jump to over 300 TB. The good news is that this is a worst-case scenario. Use of data de-duplication and flexible cloning technology drops the OS load down to under 250 GB and the user and application load down to under 10 TB.
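The arithmetic above is easy to sketch. In the snippet below, the per-desktop applications/user-data figure and the per-desktop delta size are illustrative assumptions chosen to match the ballpark numbers in this scenario, not measured values:

```python
# Capacity sketch for 10,000 virtual desktops. The apps/user-data and
# per-desktop delta sizes are assumed figures for illustration only.
GB, TB = 1, 1024

num_desktops = 10_000
os_disk = 20 * GB          # per-desktop OS image (from the scenario above)
apps_user = 12 * GB        # assumed apps + user data per desktop

# Worst case: every desktop stores a full, independent copy.
naive_total_tb = num_desktops * (os_disk + apps_user) / TB
print(f"Worst case: {naive_total_tb:.0f} TB")          # over 300 TB

# With de-duplication/flexible cloning: one golden OS image plus a
# small per-desktop delta of unique blocks.
golden_image = 20 * GB
delta = 0.02 * GB          # ~20 MB of unique blocks per desktop (assumed)
dedup_os_gb = golden_image + num_desktops * delta
print(f"De-duplicated OS load: {dedup_os_gb:.0f} GB")  # 220 GB
```

With those assumed inputs, the worst case lands above 300 TB while the de-duplicated OS load stays under 250 GB, consistent with the figures above.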

Understanding drive performance in desktop virtualization storage strategies

But we still need to worry about drive performance in addition to total storage. In general, server-centric virtualization is far more demanding on storage I/O than desktop-centric virtualization. Mileage varies, but based on the number of IOPS each drive type supports, reasonable estimates suggest a Fibre Channel drive may support about 30 DVMs and a SAS drive 10 to 15 -- with far fewer per SATA drive.

A 10,000-user desktop virtualization environment may require anywhere from 333 to 3,000 drives, depending on the type of drive, the speed of the storage network, the number of virtual machines and the number of simultaneous sessions. You may, for example, have 10,000 server-centric virtual machines but only 5,000 active simultaneously, each running basic office applications. In another scenario, you could have 5,000 server-centric virtual machines, all simultaneously running high-performance database applications. You must balance scale (how many virtual machines) against scope (how demanding each machine's workload is). Unfortunately, there is no hard-and-fast rule for determining the optimum balance.
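A back-of-the-envelope estimator for this sizing exercise might look like the following. The DVMs-per-drive figures echo the rough estimates above (the SATA figure is an assumption), and in practice they should be replaced with numbers from real I/O profiling:

```python
import math

# Rough drives-needed estimator. DVMs-per-drive values are coarse
# planning figures, not vendor specifications; the SATA value is assumed.
DVMS_PER_DRIVE = {"fibre_channel": 30, "sas": 12, "sata": 4}

def drives_needed(active_dvms: int, drive_type: str) -> int:
    """Drives required to serve the given number of simultaneously active DVMs."""
    return math.ceil(active_dvms / DVMS_PER_DRIVE[drive_type])

# 10,000 provisioned desktops, but only 5,000 active at once, on FC drives:
print(drives_needed(5_000, "fibre_channel"))   # 167
# All 10,000 active at once on SATA:
print(drives_needed(10_000, "sata"))           # 2500
```

Note that only *active* sessions drive the count, which is why the same 10,000-desktop deployment can land anywhere in the 333-to-3,000-drive range.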

Improving throughput in server-centric desktop virtualization storage strategies

As with most networking discussions, the main focus is throughput. Latency is a factor, but not nearly to the degree it is for the real-time, user-facing side of desktop virtualization. In the data center, managing throughput in the storage network is challenging because virtualization decouples the physical from the logical. Each physical server has one or more host bus adapters (HBAs) that communicate with the SAN.

To fully implement virtual machines on the SAN, IT must use virtual host bus adapters (vHBAs), allocating one worldwide name (WWN) per virtual machine. This provides a number of advantages over lumping all VM traffic together under a single identifier (which makes it look like one machine to the SAN). First, IT can see each VM's storage traffic individually. Second, as a VM moves from host server to host server, its WWN moves with it, which helps maintain consistent storage throughput during server load balancing or even major events such as site failover for disaster recovery.
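The one-WWN-per-VM idea can be illustrated with a toy sketch. The prefix bytes and naming scheme below are invented for the example and are not real NPIV or vendor configuration:

```python
# Toy sketch: derive a unique 8-byte worldwide port name per VM so the
# SAN can distinguish each VM's storage traffic. The fixed prefix bytes
# here are made up for illustration, not a real vendor OUI assignment.
def vm_wwpn(vm_index: int) -> str:
    suffix = f"{vm_index:08x}"                      # 4 bytes derived from the VM index
    pairs = [suffix[i:i + 2] for i in range(0, 8, 2)]
    return "20:00:50:56:" + ":".join(pairs)         # fixed 4-byte prefix + 4-byte suffix

wwpns = {f"dvm-{i}": vm_wwpn(i) for i in range(10_000)}
assert len(set(wwpns.values())) == 10_000           # every VM is distinct on the SAN
print(wwpns["dvm-42"])                              # 20:00:50:56:00:00:00:2a
```

Because the identifier follows the VM rather than the physical HBA, zoning and monitoring keyed to that WWPN keep working after a live migration or failover.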

Desktop-centric virtualization hits the network at boot-up

Unlike server-centric virtualization, desktop-centric virtualization -- running a virtual machine on local desktop hardware rather than on a server in the data center -- hits the network hard at startup, while a VM image is downloading, but then uses the network only for storage read/writes. This can mean sending 3 GB to 10 GB across the network at startup. On a 1 Gbps network, this takes 30 to 100 seconds.
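The raw transfer time is straightforward to estimate; real-world times run somewhat higher once protocol overhead is included, which is roughly where the 30-to-100-second figure above comes from:

```python
# Ideal (overhead-free) time to push a VM image across a shared link.
# A boot storm effectively divides the link among all booting clients.
def transfer_seconds(image_gb: float, link_gbps: float, clients: int = 1) -> float:
    bits = image_gb * 8e9                       # GB -> bits (decimal units)
    return bits * clients / (link_gbps * 1e9)   # total time to serve all clients

print(f"{transfer_seconds(3, 1.0):.0f} s")      # 24 s: 3 GB image, 1 Gbps link
print(f"{transfer_seconds(10, 1.0):.0f} s")     # 80 s: 10 GB image
# 100 desktops booting at once, 5 GB each, sharing the same 1 Gbps link:
print(f"{transfer_seconds(5, 1.0, clients=100) / 60:.0f} min")  # ~67 min
```

The last line previews the boot-storm problem discussed next: the per-client time scales linearly with the number of simultaneous boots on a shared link.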

Though 30 seconds doesn't sound too bad (after all, my laptop takes minutes to boot), VDI users have experienced wait times of more than 30 minutes owing to "boot storms." A boot storm is the classic busy-hour situation in which everyone comes in at 8:00 a.m. and turns on desktop client machines at once. A forced desktop restart to apply an urgent patch can trigger the same scenario, on an even larger scale.

Intelligent caching and cloning: Battling the "boot storm" in desktop-centric virtualization

Three tools mitigate boot storms: intelligent caching of the desktop image, desktop cloning and desktop streaming acceleration. Rather than creating hundreds of multi-gigabyte clones, intelligent caching and cloning distribute the load by sending only the nominal differential for each booting desktop over the network. Desktop streaming acceleration prioritizes desktop traffic. The end result is desktop-centric virtualization loading in seconds, largely independent of the number of simultaneous loads.
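A quick comparison shows why shipping deltas beats shipping full clones. The base-image and per-desktop delta sizes below are assumptions for illustration:

```python
# Network traffic: full clones vs. base-plus-delta cloning. Sizes are
# assumed figures for illustration, not measurements.
base_image_gb = 5.0      # shared golden image, sent (or cached) once
delta_gb = 0.05          # unique blocks per booting desktop (assumed)
desktops = 500

full_clone_traffic = desktops * base_image_gb             # 2500 GB
diff_clone_traffic = base_image_gb + desktops * delta_gb  # 30 GB

print(f"Full clones:   {full_clone_traffic:,.0f} GB over the wire")
print(f"Base + deltas: {diff_clone_traffic:,.0f} GB over the wire")
```

With the base image cached near the clients, only the small deltas cross the network at boot, which is why load time stays flat as the number of simultaneous boots grows.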

The bottom line is that both server- and desktop-centric virtualization require high-performance storage. For server-centric, the emphasis is on SAN or NAS throughput and IOPS. For desktop-centric, the emphasis is on caching, intelligent cloning and network optimization between data center and desktop. Through these functions, desktop- and server-centric virtualization can optimize data center storage, minimize network load and deliver a good user experience.

Ted Ritter

About the author: Ted Ritter is a senior research analyst with Nemertes Research, where he conducts research, advises clients, and delivers strategic seminars. A Certified Information Systems Security Professional (CISSP), Mr. Ritter leads Nemertes' research on information stewardship, which includes compliance, as well as the management, access, storage and back up of data.

Mr. Ritter has designed, implemented and supported telecom and information security solutions for commercial, federal and international clients. He holds a master's degree in telecommunications management from The George Washington University and a bachelor's degree in neuroscience from Oberlin College.

This was last published in December 2009
