This article can also be found in the Premium Editorial Download "Network Evolution: Adopting an application-centric architecture."
The bitter truth of application performance management is that mitigating performance problems can be easier than identifying them. Many organizations deploy application delivery optimization (ADO) tools, such as compression or application acceleration, to improve application performance only after users complain (read: “Help! Our file shares don’t work!”).
But the ease of applying WAN-level fixes can mask the harder underlying task of proactively identifying such problems. And, of course, if a performance problem is rooted deeper in the data center, WAN-level fixes will not help. Being proactive requires a layered set of application performance monitoring tools that starts at storage I/O and provides visibility at every step of the network out to the remote end user.
Application performance monitoring in the SAN: What you need to watch
In the SAN, systems both read and write blocks of data from and to bulk storage. IT professionals should be watching a few basic metrics related to application performance, chiefly I/O operations per second (IOPS) and I/O response time.
Both ends of the connection matter. On the storage end, SAN monitoring can tell you how the storage array is performing; whether there are concentrations of traffic on specific logical unit numbers (LUNs); whether slow storage (with long response times) is being used to house frequently requested -- or hot -- data; and so on. On the host side, SAN monitoring should tell you the IOPS per physical or virtual server, and the response time each server is getting from the storage it uses. Monitoring can also identify the top listeners (reading stored data) and top talkers (writing data to storage) among all the physical and virtual servers.
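As a concrete illustration, both headline metrics can be reduced from raw I/O samples in a few lines. The sketch below is hypothetical -- the sample record format and LUN names are invented for illustration -- and simply computes IOPS and mean response time per LUN over one polling window:

```python
from dataclasses import dataclass

@dataclass
class IoSample:
    """One sampled I/O operation (hypothetical record format)."""
    lun: str            # logical unit number the I/O targeted
    is_write: bool      # True for writes (talkers), False for reads (listeners)
    latency_ms: float   # response time observed by the host

def summarize(samples, window_seconds):
    """Reduce a window of I/O samples to the two headline SAN metrics,
    IOPS and mean response time, broken out per LUN."""
    per_lun = {}
    for s in samples:
        per_lun.setdefault(s.lun, []).append(s.latency_ms)
    return {
        lun: {
            "iops": len(latencies) / window_seconds,
            "avg_response_ms": sum(latencies) / len(latencies),
        }
        for lun, latencies in per_lun.items()
    }

# A "hot" LUN shows up as high IOPS; slow storage as high response time.
samples = [IoSample("lun0", False, 2.0), IoSample("lun0", True, 4.0),
           IoSample("lun1", False, 25.0)]
print(summarize(samples, window_seconds=1.0))
```

Real tools collect these samples continuously from the array and host bus adapters; the point here is only how the two metrics relate to the raw data.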
In addition to the big storage vendors such as EMC, HP, IBM and NetApp, many smaller management vendors have storage monitoring tools, including SolarWinds, ManageEngine, and Virtual Instruments.
Application performance monitoring in physical and virtual servers
Application performance problems can crop up in the server layer as well. Most IT shops are reasonably well instrumented for judging server performance thanks to the broad array of tools available directly from server vendors as well as third-party and open source monitoring tools. Admins can watch the I/O, CPU and memory performance of applications and component services, setting up alarms at performance thresholds as needed.
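A minimal sketch of the threshold-alarm pattern described above, assuming invented metric names and threshold values (real tools pull these figures from the OS, SNMP, WMI or a vendor agent):

```python
# Hypothetical thresholds; real values depend on the workload and the tool.
THRESHOLDS = {"cpu_pct": 85.0, "mem_pct": 90.0, "disk_io_wait_pct": 20.0}

def check_thresholds(metrics, thresholds=THRESHOLDS):
    """Return an alarm message for every metric over its threshold."""
    return [
        f"ALARM: {name}={value:.1f} exceeds {thresholds[name]:.1f}"
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    ]

# In practice the metrics dict would be filled by polling each server;
# the literal values here are illustrative.
alarms = check_thresholds({"cpu_pct": 92.3, "mem_pct": 41.0})
print(alarms)  # one alarm, for CPU
```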
Given that 97% of companies now use virtual servers, and that about half of enterprise workloads are now virtualized, using tools that let you see both the physical and the virtual worlds is critical. The most important new capability from a performance standpoint is the ability to map virtual servers to physical ones. IT has to be able to see whether the behavior of one virtual machine is affecting the performance of others on the same host, or whether a failing or ailing host server (say, one with a bad NIC or HBA) is hurting the performance of its hosted virtual machines.
Data center LAN-based application performance monitoring
At the LAN level in the data center it is useful to be able to look at traffic among servers to monitor the performance of multi-tier and SOA-based applications. This can be managed from the server side: Traditional transaction performance tools use agents on database, application and Web servers to collect data and correlate the results. A LAN-based solution watches and analyzes the traffic flowing among the nodes.
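The core of what a LAN-based tool does can be sketched as aggregating flow records into the busiest server-to-server conversations, which exposes traffic among application tiers. The flow tuples and server names below are invented for illustration:

```python
from collections import Counter

def top_conversations(flows, n=3):
    """Aggregate flow records (source, destination, bytes) into the
    busiest server-to-server conversations, e.g. web tier -> app tier."""
    totals = Counter()
    for src, dst, nbytes in flows:
        totals[(src, dst)] += nbytes
    return totals.most_common(n)

# Hypothetical flow records for a three-tier application.
flows = [("web1", "app1", 5_000), ("app1", "db1", 120_000),
         ("web1", "app1", 7_000), ("app1", "db1", 90_000)]
print(top_conversations(flows))
```

A commercial tool adds protocol decoding and transaction correlation on top, but the conversation-level view is the starting point for spotting a slow tier.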
Application-aware network monitoring tool vendors include ExtraHop, Fluke Networks, Lancope, InfoVista and SolarWinds.
Application-aware visibility and element management in the WAN
The WAN offers another opportunity to create -- and to monitor and resolve -- performance problems. The majority of users access enterprise applications in the data center remotely over the WAN. Unfortunately, the WAN tends to be more prone to packet loss and is unavoidably subject to high latency. In addition to element management tools that can report, for example, packet loss per interface, IT needs to layer on tools that provide application-aware visibility into network performance.
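For example, an element manager typically derives per-interface loss from successive polls of standard IF-MIB counters. A minimal sketch, assuming two hypothetical poll snapshots of `ifInUcastPkts` and `ifInDiscards`:

```python
def drop_pct(prev, curr):
    """Percentage of inbound packets discarded between two polls of
    IF-MIB style interface counters (ifInUcastPkts / ifInDiscards)."""
    pkts = curr["ifInUcastPkts"] - prev["ifInUcastPkts"]
    drops = curr["ifInDiscards"] - prev["ifInDiscards"]
    return 100.0 * drops / pkts if pkts else 0.0

# Illustrative counter snapshots from two consecutive polls.
prev = {"ifInUcastPkts": 1_000_000, "ifInDiscards": 100}
curr = {"ifInUcastPkts": 1_050_000, "ifInDiscards": 600}
print(f"{drop_pct(prev, curr):.1f}% dropped")  # 500 drops out of 50,000 packets
```

Production pollers also have to handle counter wraparound and reboots, which this sketch ignores.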
Some WAN optimization tools, including those from Exinda, Ipanema and Streamcore, place great emphasis on performance monitoring and reporting. Many tools that work for LAN monitoring, such as ExtraHop, Fluke Networks, NetScout and NetQoS, also report on WAN performance.
Monitoring application performance at the desktop
The last link in the performance chain is, of course, the endpoint -- the PC (or Mac or Linux box) from which most application use originates. Some performance monitoring solutions install an agent (permanent, temporary or downloaded ad hoc) on the end-user device in order to get true user-level performance views. Other solutions mirror transaction traffic to an analytics engine. Still others simulate the user with synthetic transactions or transaction record/playback to build a consistent set of measurements that can be run automatically and from any location.
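The synthetic-transaction approach amounts to timing repeated runs of a scripted transaction. In the sketch below, the `transaction` callable is a placeholder of my own; a real agent would drive the application UI or replay captured protocol traffic:

```python
import time
from statistics import mean

def run_synthetic(transaction, repeats=5):
    """Replay a scripted transaction several times and record per-run
    response times, as a synthetic-monitoring agent would."""
    timings_ms = []
    for _ in range(repeats):
        start = time.perf_counter()
        transaction()            # e.g. log in, open a file share, log out
        timings_ms.append((time.perf_counter() - start) * 1000.0)
    return {"runs": repeats, "avg_ms": mean(timings_ms), "max_ms": max(timings_ms)}

# Placeholder transaction standing in for a scripted user session.
stats = run_synthetic(lambda: time.sleep(0.01))
print(stats)
```

Because the same script runs identically every time and from any location, run-to-run changes in the numbers point at the infrastructure rather than at user behavior.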
User-centered performance monitoring providers include the big vendors such as HP, CA and BMC, as well as smaller vendors such as Precise, Symphonic and Knoa.
And of course, in any real-world deployment, it is important to remember that problems in the endpoint device itself can affect enterprise application performance as perceived by the user. So it is important to be able to drill into client computer performance just as though the client were a server. This can often be accomplished with the same application performance monitoring tools, which can make SNMP data requests or drill in using Windows Management Instrumentation (WMI). Many open source and free tools also exist in this category, and specialty vendors in this area include NEXThink and Centrisoft.
About the author: John Burke is a principal research analyst with Nemertes Research, where he focuses on software-oriented architectures and management. As an analyst, John draws on his experiences as a practitioner and director of IT to better understand the needs of IT executives and the challenges facing vendors trying to sell to them. A frequent speaker, his career began at The Johns Hopkins University, where he supported the engineering faculty in its use of computers in research and teaching. He moved on to systems and network administration at The College of St. Catherine, in St. Paul, Minn., and then to directing staff in voice, data, desktop and systems management at the University of St. Thomas, also in St. Paul.
This was first published in January 2011