A virtual desktop infrastructure is a cross-domain technology; as a result, deployment requires input from across each enterprise IT team. Yet too often, networking pros are left out of the implementation picture and called in only when it's time for monitoring and troubleshooting.
If network engineers, managers and operators are going to be held responsible for the ongoing success of virtual desktop infrastructure (VDI) deployments, they need to be involved well before implementation, not after.
In a recent study of 158 IT pros in enterprises with more than 250 employees, the vast majority of respondents recognized the need for server, storage, desktop and applications groups to work together on VDI planning and implementation. But according to the study (conducted by Enterprise Management Associates (EMA) for the "VDI Impact on Networks and Network Management" report), many had not brought the network team into these projects until after implementation.
That's a bit late, considering 54% of respondents cited the network operations group as the one most involved in monitoring and troubleshooting VDI in production. This was even the case in 35% of organizations that had a dedicated VDI team. What's more, the majority of respondents said the network operations team was held primarily responsible for monitoring VDI performance and availability.
So how can networking pros best help an IT organization prepare for VDI implementation? In analyzing the responses by companies with the most successful VDI deployments, we have identified four VDI networking best practices:
Understand your VDI latency budget. A mere 35% of the organizations surveyed in the EMA study had defined targets for maximum acceptable VDI session latency as part of their planning efforts. Yet among respondents experiencing the greatest levels of VDI success, 53% had previously defined latency targets. Meanwhile, 100% of those having the least success with their VDI deployment had not defined latency objectives.
While network latency will be only one part of the total latency budget, it must be understood and minimized wherever possible, particularly where VDI users connect to servers over the WAN or the Internet. The best practices group gave varying answers here, but responses clustered around 100 milliseconds of total/aggregate latency within the VDI infrastructure.
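To make the budgeting exercise concrete, here is a minimal sketch of tallying per-component latencies against a 100-millisecond aggregate target. The component names and values below are hypothetical illustrations, not figures from the EMA study:

```python
# Minimal sketch: checking component latencies against a total VDI
# latency budget. Component names and values are hypothetical examples.

BUDGET_MS = 100  # aggregate target cited by the best-practices group

def within_budget(components_ms, budget_ms=BUDGET_MS):
    """Return (total, ok) for a dict of per-component latencies in ms."""
    total = sum(components_ms.values())
    return total, total <= budget_ms

# Hypothetical breakdown for a branch-office user reaching VDI over the WAN
session = {
    "client_to_branch_lan": 2,
    "wan_round_trip": 45,
    "datacenter_lan": 1,
    "virtualization_stack": 30,
}

total, ok = within_budget(session)
print(f"{total} ms of {BUDGET_MS} ms budget -> {'OK' if ok else 'OVER'}")
```

The point of the exercise is that the network is only one line item: once the virtualization stack and storage consume their share, the WAN round trip that remains may be far smaller than expected.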
Understand VDI networking bandwidth requirements. Even less common than creating a latency budget is defining a minimum expected bandwidth per VDI session; only 28% of all respondents had taken the time to do so. This number can vary widely depending on the VDI products chosen, the protocols used for communication and the applications delivered. VDI vendors claim that in the simplest of cases, you may need only 50 Kbps per session, but our research showed that users are more likely to need 250 to 300 Kbps per session on average. For high-end, graphics-intensive applications, you may need to allot as much as 1 Mbps per session/user. The best way to figure this out is to use an assessment tool such as Liquidware Labs' Stratusphere FIT.
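Once a per-session figure is in hand, the aggregate sizing is simple multiplication. A quick sketch, using an assumed 200 concurrent sessions (the session count and rates are illustrative, not from the survey):

```python
# Minimal sketch of aggregate VDI bandwidth sizing.
# Session counts below are illustrative assumptions.

def required_mbps(sessions, kbps_per_session):
    """Aggregate bandwidth in Mbps for a given number of VDI sessions."""
    return sessions * kbps_per_session / 1000

# 200 concurrent sessions at the ~300 Kbps average the research suggests
typical = required_mbps(200, 300)      # 60.0 Mbps
# Same head count if every user ran a graphics-heavy app at 1 Mbps
worst_case = required_mbps(200, 1000)  # 200.0 Mbps

print(f"typical: {typical:.1f} Mbps, worst case: {worst_case:.1f} Mbps")
```

The spread between the typical and worst-case numbers is exactly why an assessment tool beats back-of-the-envelope math: the answer depends on which applications your actual users run.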
Bring on the optimizations. The organizations we surveyed were, on average, using two or three of six network optimization categories to improve VDI deployment success, mostly leaning on WAN optimization, storage network optimization and managed or optimized Internet services. But the most successful shops reported using three to four methods, with additional emphasis on network Quality of Service (QoS) policies and application delivery controllers. In fact, 68% of that successful group adopted network QoS specifically because of their VDI project.
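For QoS policies to prioritize VDI display traffic, that traffic has to be classified, and one common approach is DSCP marking at the endpoint. As a minimal sketch (not any vendor's implementation), an application can mark its own packets by setting the IP TOS byte on its socket; whether routers actually honor the mark depends entirely on the QoS policy configured in the network:

```python
import socket

# Minimal sketch: marking traffic with a DSCP value so network QoS
# policies can classify and prioritize it. DSCP EF (46) is a common
# choice for latency-sensitive traffic. This only sets the mark;
# enforcement happens in the network's QoS configuration.

DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Datagrams sent on this socket now carry DSCP EF in their IP headers
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

In practice, VDI display protocols set their own markings or are classified at the network edge by port or application signature; the sketch just shows what the mark is and where it lives in the packet.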
Pay attention to the LAN, not just the WAN. When it comes to root causes of VDI performance issues, the survey results reflected the cross-domain nature of VDI. Respondents reported issues in just about every corner of the infrastructure, from the server tier to the storage tier to end devices. But overall, the most common cause was network traffic congestion, followed by network latency. The catch is that this latency was cited just as often in the LAN as in the WAN or Internet.
About the author: Jim Frey has 24 years of experience in the computing industry, developing, deploying, managing and marketing software and hardware products. The last 18 of those years were spent in network management, straddling both enterprise and service provider sectors. At Enterprise Management Associates, Frey is managing research director and is responsible for the Network Management practice area.
This was first published in September 2012