
Centralize Exchange without impacting users

Exchange over a WAN is often painfully slow. Administrators often respond by adding bandwidth, but there are more effective options to consider.

Dr. Steve McCanne

Most everyone would agree that messaging and calendaring have become mission-critical resources in modern business operations, so it's no surprise that Microsoft's Exchange messaging platform has gained a foothold in many large, distributed enterprises. While Exchange has been an extremely successful product, its performance over the WAN is widely known to be problematic.

While most CIOs and IT architects prefer to centralize IT resources for lower cost and easier management, for certain applications like Exchange, centralization comes at a big cost to employee productivity. When an end user accesses an Exchange server located in a data center thousands of miles away, the Outlook client can grind to a halt, frustrating the user and stifling productivity.

Messaging architects and other IT professionals often respond to a performance problem like this by adding bandwidth to wide area circuits. When that doesn't solve the performance issue, as is all too often the case, the measure of last resort is to give up the benefits of centralization and move additional Exchange servers into remote offices. Of course, remote Exchange servers bring additional IT infrastructure and management costs and increase security, backup and compliance risks.

Why adding bandwidth (or data compression) doesn't help Exchange
The first thing many CIOs try when faced with performance complaints is to provision more bandwidth to remote offices or, equivalently, to install data compression appliances to do more with the bandwidth they already have. In most cases, that won't improve the performance of Exchange over the WAN, because bandwidth isn't the only thing limiting the application. The application's communication protocol, in this case MAPI (the Messaging Application Programming Interface), also constrains throughput, no matter how much bandwidth is available.

MAPI, like many other application protocols, is extremely "chatty": most operations require an extensive, serialized back-and-forth exchange of messages between the client and server. For example, in Exchange 2000, the data payload for each client-server interaction is typically between 8 and 16 kilobytes, so moving a large attachment, say a 3 MB PowerPoint file, can require hundreds of round trips. Even with limitless bandwidth, hundreds of round trips over a WAN add up to tens of seconds, if not several minutes, because each trip is bounded by the speed of light.
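A quick back-of-the-envelope calculation shows why latency, not bandwidth, dominates here. The figures below (a 12 KB average payload within the cited 8-16 KB range, and a 100 ms WAN round-trip time) are illustrative assumptions, not measured values:

```python
# Estimate the latency cost of a serialized, "chatty" transfer.
attachment_bytes = 3 * 1024 * 1024   # 3 MB PowerPoint file
payload_bytes = 12 * 1024            # assumed average per-interaction payload
rtt_seconds = 0.100                  # assumed 100 ms WAN round-trip time

round_trips = attachment_bytes // payload_bytes   # serialized client-server exchanges
latency_cost = round_trips * rtt_seconds          # time spent waiting on the wire

print(round_trips)    # 256 round trips
print(latency_cost)   # 25.6 seconds of pure latency, regardless of bandwidth
```

No amount of extra bandwidth shrinks the `latency_cost` term; only reducing the number of round trips, or the distance, does.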

On a LAN, you don't notice all the trips because the round trip time, or latency, is nearly zero. Since most enterprise applications were developed for LANs, the effects of high latency over distance were often not considered.

What's wrong with deploying more Exchange servers?
When throwing bandwidth at the problem doesn't work, all too often IT is forced to give in and deploy Exchange servers at each remote office. In fact, many companies follow a pre-emptive strategy whereby servers are moved into a remote office as a matter of standard policy once that office hits a pre-determined critical threshold, usually about 20 people. Since an Exchange 2000 server can handle between 5,400 and 16,500 users, an office of 25 users with a dedicated Exchange server would run at well under 1% utilization. Needless to say, deploying dozens of Exchange servers at less than 1% utilization is not exactly economical.
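The utilization arithmetic is stark even at the conservative end of the cited capacity range:

```python
# Utilization of a dedicated branch-office Exchange server,
# using the capacity figures cited above.
office_users = 25
server_capacity = 5400   # low end of the cited 5,400-16,500 user range

utilization = office_users / server_capacity
print(f"{utilization:.2%}")   # 0.46% -- well under 1%
```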

While distributed servers will quell the complaints of poor performance from remote users, they introduce a host of additional costs, risks and complexity associated with backup, compliance, email storage, patch maintenance, systems administration, security, virus protection and so forth.

A better approach: transaction acceleration
There is a new approach, known as transaction acceleration, that enables messaging architects to centralize their Exchange servers without impacting end-user performance. Transaction acceleration addresses the interrelated problems of constrained bandwidth, high network latency and chatty application protocols, and can improve application throughput by up to 100 times on WANs.

Transaction acceleration involves two key optimizations: scalable data referencing (SDR) and transaction prediction, which together tackle the problem of chatty application performance over the WAN. Transaction acceleration is deployed in self-contained IT appliances that are situated in the network infrastructure, with one device deployed in the remote office and one device deployed in the data center near the application servers.

Scalable Data Referencing
To optimize application performance over the WAN, the first step is to ensure that application performance isn't negatively affected by limited bandwidth. To do so, SDR represents data in a universal format that is application- and protocol-independent. This contrasts with the traditional approach taken by a cache, which stores data in an application-specific fashion.

SDR breaks down the data that makes up the files crossing the WAN and creates a reusable application-independent data pool. Thereafter, any data, regardless of application, that contains the same patterns of bytes will not be resent over the WAN again. Since the SDR architecture is disk-based, the time window over which data can be compared is very long, in the range of days to months, depending on the workload.
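The core idea can be sketched in a few lines. This toy version uses fixed-size segments and an in-memory dictionary purely for illustration; production SDR implementations use content-based, variable-size segmentation and a disk-backed store, and the class and method names here are invented for the sketch:

```python
import hashlib

class SegmentStore:
    """Toy sketch of scalable data referencing: replace previously
    seen byte segments with short references so repeated data never
    crosses the WAN twice."""

    CHUNK = 4096  # fixed segment size (an assumption for this sketch)

    def __init__(self):
        self.store = {}  # digest -> raw segment bytes

    def encode(self, data):
        """Split data into segments; emit ('raw', bytes) the first time
        a segment is seen and ('ref', digest) every time after."""
        tokens = []
        for i in range(0, len(data), self.CHUNK):
            seg = data[i:i + self.CHUNK]
            digest = hashlib.sha256(seg).digest()
            if digest in self.store:
                tokens.append(("ref", digest))   # seen before: reference only
            else:
                self.store[digest] = seg
                tokens.append(("raw", seg))      # first sighting: send the bytes
        return tokens

    def decode(self, tokens):
        """Reassemble the original byte stream on the far appliance,
        learning new segments as they arrive."""
        out = bytearray()
        for kind, value in tokens:
            if kind == "ref":
                out += self.store[value]
            else:
                self.store[hashlib.sha256(value).digest()] = value
                out += value
        return bytes(out)

# One appliance on each side of the WAN.
sender, receiver = SegmentStore(), SegmentStore()
attachment = b"slide" * 2000                 # ~10 KB of sample data
first = sender.encode(attachment)            # first transfer: raw segments
restored = receiver.decode(first)
second = sender.encode(attachment)           # resend: references only
```

Resending the same attachment the second time produces only references, which is why, as described above, repeated byte patterns are never sent over the WAN again.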

Transaction prediction
Once the bandwidth bottleneck is eliminated with the SDR, the next step is to deal with the chattiness of application protocols that dramatically affect latency. Transaction prediction exploits the fact that protocols are often highly predictable in the way they handle common interactions. For example, a drag-and-drop file copy in Windows may generate 3,000 client-server interactions, yet the sequence of such interactions turns out to be highly predictable.

By knowing what a specific protocol is about to do, or by predicting it, the appliance is able to complete transactions before they are generated by a client. This is done without interrupting the normal client-server semantics and is thus completely transparent to the application.
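A minimal illustration of the idea, not the vendor's actual algorithm: an appliance can learn which request typically follows each request and pre-issue the likely next one, so its response is already in flight when the client asks. Everything below (class name, request strings) is invented for the sketch:

```python
from collections import Counter, defaultdict

class TransactionPredictor:
    """Toy first-order predictor: learn which request usually follows
    each request in a protocol trace, and predict the next one."""

    def __init__(self):
        self.followers = defaultdict(Counter)  # request -> Counter of next requests
        self.prev = None

    def observe(self, request):
        """Record an observed request; return the predicted next request
        (or None if this request has never been seen before)."""
        if self.prev is not None:
            self.followers[self.prev][request] += 1
        self.prev = request
        nxt = self.followers[request]
        return nxt.most_common(1)[0][0] if nxt else None

predictor = TransactionPredictor()
trace = ["open", "read", "read", "close", "open"]
predictions = [predictor.observe(req) for req in trace]
```

After seeing "open" followed by "read" once, the final observation of "open" predicts "read", and a real appliance would issue that read speculatively, hiding its round trip from the client while preserving normal client-server semantics.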

Transaction acceleration can improve throughput of applications on WANs by up to 100 times. In a typical deployment with a broad range of workloads and application traffic patterns, customers on average see a tenfold or better performance improvement. With this level of improvement, IT architects can offer dramatic performance benefits to under-provisioned remote offices and consolidate expensive distributed Exchange server deployments in the data center, achieving significant cost savings and reducing risk and complexity without adversely impacting performance for the end user.


About the author:
Dr. Steve McCanne is the co-founder and CTO of Riverbed Technology, Inc. Prior to founding Riverbed, he was the CTO of Inktomi Corporation and the co-founder and CTO of FastForward Networks. Dr. McCanne was an assistant professor of computer science at the University of California at Berkeley before entering the business world. He also spent a number of years as a staff scientist at the Lawrence Berkeley National Laboratory.

This was last published in April 2004
