What's behind your application performance problems

In this two-part series, Glen Kemp illuminates some of the issues that can lead to poor application performance.

Editor's note: In part one of this two-part series, Glen Kemp discusses some of the tools and techniques that can be used to address poor application performance in the enterprise.

A common cry from users: "The application is not fast enough!"

This complaint may be one of the least useful grumbles users have about IT, but it's also one that can be incredibly difficult to pin down and solve.

The first step to addressing application performance problems is to understand the root cause or causes so that efforts (and budgets) can be channeled appropriately. This stage is often overlooked; instead, assumptions tend to be made and money tends to be thrown at the problem before any proper analysis is conducted.

Fancy tools like those from Compuware Corp. and ExtraHop Networks Inc. can shine a light deep into the application layer and provide better visibility over application performance. However, much can also be learned from a simple Wireshark trace of the affected system or application. I stand by the adage that "tcpdump never lies."

In fact, many simple tools -- including tcpdump, found in most Unixes -- are often overlooked. The Resource Monitor built into Windows 7 (see Figure 1), for example, gives a per-application breakdown of latency on the wire, offering real-time insight into where application performance problems may actually lie. It's worth getting to know the tools that are freely available before getting one's wallet out.

Figure 1. Per-application latency in Resource Monitor.
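
Before reaching for a commercial suite, it is also worth putting rough numbers on the problem yourself. The sketch below is a minimal illustration only (the host names and ports are placeholders, not systems mentioned in this article): it times a plain TCP connect to each server an application depends on, and because a connect costs one SYN/SYN-ACK round trip, the elapsed time is a fair first approximation of on-the-wire latency.

    # latency_check.py -- rough per-host TCP connect timing (illustrative sketch)
    import socket
    import time

    # Placeholder targets; substitute the servers your application actually talks to.
    TARGETS = [("intranet.example.com", 443), ("crm.example.com", 443)]

    for host, port in TARGETS:
        start = time.perf_counter()
        try:
            # One TCP connect costs one round trip (SYN / SYN-ACK), so the elapsed
            # time approximates network latency to that host. DNS lookup time is
            # included here, and DNS is itself a frequent culprit.
            with socket.create_connection((host, port), timeout=5):
                elapsed_ms = (time.perf_counter() - start) * 1000
                print(f"{host}:{port}  ~{elapsed_ms:.1f} ms")
        except OSError as err:
            print(f"{host}:{port}  unreachable ({err})")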

Inevitably, there are multiple factors that can affect application performance, and it is perfectly possible for relatively minor issues to combine into a compound effect. In the next few sections, I'll walk through some key performance crunch points and the tools that can address them.

Client

The client is often the most difficult area to address. In a corporate environment, client devices are likely to be of a fixed specification, almost obsolete and not due for a refresh anytime soon. In environments with mobile or Web application users, one has even less control. Addressing disk I/O is perhaps the only sure-fire way of improving client workstation performance. The time applications (and therefore users) spend waiting for "spinning rust," or magnetic drives, is often overlooked; the assumption is that application performance is tied exclusively to CPU speed and that everything else must be the network. Replacing a legacy 4200-RPM drive with a flash-memory solid-state drive (SSD) can extend a client workstation's lifespan significantly. The price gap between magnetic and flash drives has narrowed dramatically in 2014, and SSDs are now an economical option for the enterprise administrator.
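
If spinning rust is the suspect, a crude read-timing test can confirm it before any drives are replaced. The following is purely an illustrative sketch (the file size, block size and read count are arbitrary assumptions); run it on a magnetic drive and then on an SSD and compare the per-read figures. Note that the operating system's page cache can mask true disk speed, so a test file larger than RAM, or a freshly rebooted machine, gives a fairer result.

    # disk_probe.py -- crude random-read timing of local storage (illustrative sketch)
    import os
    import random
    import time

    PROBE_FILE = "io_probe.tmp"        # created in the current directory
    FILE_SIZE = 256 * 1024 * 1024      # 256 MB test file (assumption; larger is fairer)
    BLOCK = 4096                       # 4 KB reads, a typical random I/O size
    READS = 500

    # Build the test file in 4 MB chunks to avoid holding it all in memory.
    with open(PROBE_FILE, "wb") as f:
        for _ in range(FILE_SIZE // (4 * 1024 * 1024)):
            f.write(os.urandom(4 * 1024 * 1024))

    # Time a burst of small reads at random offsets; seek-heavy workloads like
    # this are where magnetic drives fall furthest behind SSDs.
    start = time.perf_counter()
    with open(PROBE_FILE, "rb") as f:
        for _ in range(READS):
            f.seek(random.randrange(0, FILE_SIZE - BLOCK))
            f.read(BLOCK)
    elapsed = time.perf_counter() - start

    print(f"{READS} random {BLOCK}-byte reads in {elapsed:.2f} s "
          f"({elapsed / READS * 1000:.2f} ms per read)")
    os.remove(PROBE_FILE)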

Latency

Latency, expressed in milliseconds (ms), is the time it takes for a packet leaving the local area network interface to arrive at the destination. This could be a server on the other side of the room or the other side of the planet.

Congestion and packet encapsulation can negatively affect latency, but even under perfect conditions, network performance is limited by the time it takes for a signal (electrical, optical or radio/microwave, depending on the transmission medium) to make its way from client to server. Light in fiber covers roughly 200 kilometers per millisecond, so a 6,000 km transatlantic path adds around 30 ms in each direction before any queuing or processing is counted. Over LAN distances, on-the-wire latency is barely measurable, let alone perceptible. When dealing with cross-country or intercontinental WAN links, however, latency's effects are cumulative.

Moving traffic over any distance incurs greater latency, and few tools are able to mitigate it. TCP applications such as HTTP(S), Microsoft Exchange and Remote Desktop Protocol are especially affected because of the inherent chattiness of TCP. Furthermore, the round trip must be considered: for every request, the reply must make the same journey back.
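
The arithmetic behind that cumulative effect is easy to make concrete. The figures in the snippet below are illustrative assumptions rather than measurements, but they show how the same chatty exchange that is invisible on a LAN becomes a long wait across a WAN, however much bandwidth is available.

    # chatty_app_cost.py -- back-of-envelope cost of sequential round trips
    # All figures are illustrative assumptions, not measurements.

    round_trips = 400        # e.g. a chatty HTTP or SMB exchange
    lan_rtt_ms = 0.5         # same-site LAN
    wan_rtt_ms = 80.0        # intercontinental WAN link

    for label, rtt in (("LAN", lan_rtt_ms), ("WAN", wan_rtt_ms)):
        wait_s = round_trips * rtt / 1000
        print(f"{label}: {round_trips} round trips x {rtt} ms = {wait_s:.1f} s of pure waiting")

    # The LAN case works out at about 0.2 s; the WAN case at 32 s. The protocol
    # hasn't changed -- only the distance has.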

For clients separated by a wide area network (WAN), it is possible to deploy WAN optimization technology from vendors such as Riverbed Technology and Silver Peak Systems. These platforms act as transparent proxies at both the client and server ends, and they play two major roles: caching, to reduce the load on the WAN, and protocol optimization, to mitigate the "ping-pong" chatter of TCP applications such as HTTP and Server Message Block (SMB). Taken together, these measures reduce the impact of latency on the WAN. The golden rule of these technologies is that enterprises reap the maximum benefit at latencies greater than 100 ms; in other words, the higher the latency, typically, the better the return on investment.
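
Of those two roles, caching is the easier one to picture. The sketch below is not how any vendor's appliance works internally; it is a conceptual illustration only (a plain in-memory dictionary stands in for the branch-side cache, and the URL is a placeholder), showing how a repeated request can be answered locally instead of crossing the WAN a second time.

    # wan_cache_sketch.py -- the caching idea behind WAN optimization, illustrated
    # with a trivial in-memory store. Conceptual only; real appliances cache at
    # the byte/segment level and pair caching with protocol optimization.
    import urllib.request

    cache = {}

    def fetch(url):
        """Return the body for url, crossing the 'WAN' only on a cache miss."""
        if url in cache:
            print(f"cache hit  : {url}")
            return cache[url]
        print(f"cache miss : {url} (fetching across the WAN)")
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
        cache[url] = body
        return body

    if __name__ == "__main__":
        # Placeholder URL; the second call is served locally, with no WAN round trip.
        fetch("https://example.com/")
        fetch("https://example.com/")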

For mobile users, it is possible to deploy a client at the endpoint that performs a subset of these WAN optimization features in software. Riverbed, Juniper Networks and others have this kind of technology available. Pushing a Java, ActiveX or other platform-dependent app to the endpoint, however, may not be desirable or practical for standard Web applications.

What is possible will be covered in part two.

This was first published in May 2014
