Editor's note: In the concluding part of his series on diagnosing application performance issues, Glen Kemp discusses how application delivery controllers, servers and the applications themselves are part of the analysis puzzle.
Within the data center, there still may be several hops between the user and the application. Even with an infinite number of servers with unlimited capacity, an intelligent decision is still required to determine where traffic should be directed and thus reduce application performance problems. This is the task for load balancers (and their slightly pretentious cousins, application delivery controllers).
Load balancers provide a couple of very useful functions. They can transparently balance traffic across multiple servers to ensure each request is served by the most available server -- the one best able to respond. Even relatively basic load balancers have a major impact on performance by offloading tasks such as Transmission Control Protocol (TCP) handshaking and Secure Sockets Layer (SSL)/Transport Layer Security (TLS) encryption from the application servers. This matters because it moves CPU load off the target server and onto a dedicated, lightweight system optimized for high throughput. Load balancers equipped with dedicated silicon can offload processor-intensive tasks further still and yield lower latency than software or virtualized platforms. If you are not counting every millisecond of latency, virtualized load balancers can offer greater flexibility, since you can spin them up and down alongside the virtual machines they support.
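The balancing decision itself can be as simple as sending each request to the least-loaded back end. A minimal sketch of a least-connections strategy in Python, with hypothetical server names and connection counts (real load balancers weigh in health checks, response times and more):

```python
# Minimal least-connections selection. Server names and connection
# counts are illustrative, not taken from any real deployment.

def pick_server(servers):
    """Return the server currently holding the fewest active connections."""
    return min(servers, key=lambda s: s["active_connections"])

pool = [
    {"name": "web-1", "active_connections": 42},
    {"name": "web-2", "active_connections": 17},
    {"name": "web-3", "active_connections": 63},
]

best = pick_server(pool)
print(best["name"])  # web-2
```

In practice the pool is filtered by health-check state first, so a saturated-but-alive server is still preferred over a dead one with zero connections.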
Application delivery controllers (ADCs) differ from load balancers in the same way sports cars differ from ordinary cars: the extra performance and features are irrelevant unless they are driven (deployed) optimally. The key difference between an ADC and a load balancer centers on advanced Layer 7 controls. Web applications suffer particularly badly from the effects of latency, and ADCs use several techniques to address this. For example, HTTP compression and caching optimization can save precious bytes on every HTML object and further reduce load on the servers. Optimizing how objects are loaded on end users' devices also improves the user experience, especially in low-bandwidth or high-latency scenarios.
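To see why HTTP compression saves those bytes, here is a small Python sketch using the standard library's gzip module on a hypothetical, repetitive HTML payload. The payload and the resulting ratio are illustrative only; real savings depend on the content, and markup typically compresses very well:

```python
import gzip

# Hypothetical HTML fragment; the repetition mimics how real markup
# (repeated tags, class names) compresses far smaller than its raw size.
html = b"<div class='item'><span>value</span></div>" * 200

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({ratio:.0%} of original)")
```

An ADC applies this transparently per response (typically negotiated via the `Accept-Encoding` request header), so the servers never spend CPU cycles on it.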
Maximizing server performance helps diagnose application performance problems
Many factors can cause poor server performance. Issues with CPU and RAM use, network interface bandwidth, disk latency and queue lengths are all relatively easy to diagnose and even easier to fix. (The answer is usually "throw money at it.") Additionally, servers that appear idle may still display the symptoms of poor application performance; many small details that are not visible to the naked eye can impact performance.
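Diagnosing the easy cases boils down to comparing sampled metrics against baselines. A sketch of that check, with illustrative thresholds and a hypothetical sample from a monitoring agent (real baselines vary by workload):

```python
# Illustrative thresholds -- sensible limits differ per workload and platform.
THRESHOLDS = {
    "cpu_pct": 85.0,        # CPU utilization
    "ram_pct": 90.0,        # memory utilization
    "disk_latency_ms": 20.0,  # average disk I/O latency
    "nic_util_pct": 70.0,   # network interface utilization
}

def flag_hotspots(sample):
    """Return only the metrics in a sample that exceed their threshold."""
    return {k: v for k, v in sample.items() if v > THRESHOLDS.get(k, float("inf"))}

# Hypothetical sample: the server looks idle on CPU but is starved for RAM.
sample = {"cpu_pct": 12.0, "ram_pct": 95.5, "disk_latency_ms": 4.0, "nic_util_pct": 30.0}
print(flag_hotspots(sample))  # {'ram_pct': 95.5}
```

The point of the example is the shape of the problem: a server can pass three of four checks and still be the bottleneck, which is why single-metric dashboards mislead.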
When the application itself causes the problem
For most network administrators, the responsibility for the application stops at the TCP socket; this makes application issues difficult to diagnose. Tools from ExtraHop Networks and SolarWinds, among other vendors, can shine a light into most applications, but the reality is you may need to roll up your sleeves and start digging to find the root cause. Assuming there are genuinely no resource issues and the host network is apparently healthy, it's time to sit down with the application team to understand what is happening when a user hits "submit."
The number of traffic flows that are spawned by an apparently simple application will amaze you. Furthermore, you may reach the moment where you'll need to acknowledge that the application is as well optimized as a dog on roller skates. Depending on what you find, you may have to shift some network components around to improve the user experience. Moving databases to the correct side of WAN links, optimizing firewall policies to support long sessions and increasing the capacity of uplinks may all have an impact, even when none of the other changes has.
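Getting a handle on those flows usually starts with tallying them by destination. A sketch that groups synthetic connection records, standing in for what you might collect from a tool such as `ss -tn` or a flow collector while a user hits "submit" (hosts and ports here are hypothetical):

```python
from collections import Counter

# Synthetic (source, destination, port) records captured during one
# user transaction -- real data would come from ss, netstat or NetFlow.
flows = [
    ("app-server", "db-server", 5432),
    ("app-server", "db-server", 5432),
    ("app-server", "auth-service", 443),
    ("app-server", "cache", 6379),
    ("app-server", "db-server", 5432),
]

by_destination = Counter((dst, port) for _, dst, port in flows)
for (dst, port), n in by_destination.most_common():
    print(f"{dst}:{port} -> {n} flows")
```

Even this toy transaction fans out to three back-end services; a chatty destination on the wrong side of a WAN link jumps straight out of a table like this.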
Ultimately, your response to application performance problems will differ according to the root cause. For any resolution to be effective, you must fully understand what's going on. By following this strategy, when money does need to be spent, it can be spent wisely.
After all is said and done, poor performance may well be a series of small issues that have combined to create a bigger one. In addition, fixing one issue may reveal another; removing a CPU bottleneck may immediately reveal another problem on the disk I/O subsystem.
The moral of this story is that managing application performance is a continuous process, not another black box that you plug into your network. To get the best from your applications and to positively impact the user experience, be cognizant of each end-to-end process involved in your business applications. In my experience -- outside of very latency-sensitive operations like high-frequency market trading -- very few organizations truly realize how their applications are screwed together and where the pitfalls may lie. I've addressed some common issues that can occur, but there are plenty of recessed corners in which other culprits may lurk.
Glen Kemp asks: What's the most unusual application performance problem you've solved?