As networking becomes more application-centric, what are some of the implications for a network performance manager as networks expand beyond legacy server and storage management?
A network performance manager's job is becoming more difficult as it becomes harder to understand how legacy metrics translate into the end-user's experience.
Application performance is closely associated with the end-user experience, the overall performance of business processes and the extent to which service-level agreements are met. While application performance metrics are related to and affected by legacy server, storage and network performance and associated metrics, they characterize a higher level of abstraction and more directly represent how well the network is serving the business. Thus, the tools and techniques used by network performance managers in overseeing applications need to provide more direct information to executives tasked with critical business decisions.
Legacy performance metrics tend to reflect network building blocks such as router and server components. They might be monotonically increasing counts, e.g., packets or errors -- easily sampled, divided by time and converted into rates -- or they might be gauges that can increase or decrease with time, such as CPU use, bytes of storage or temperature. These same categories apply to application performance metrics, but, with their intimate relationship to the user experience, the latter represent higher-level abstractions like transaction response time or seconds of application downtime. This is not to say that legacy metrics are becoming unimportant. They are simply not of direct concern to executives and are often the substance of drill-downs from application management reports.
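As an illustration of the counter category above, here is a minimal sketch of turning sampled, monotonically increasing counts into rates. The sampling interval and packet counts are invented, and the handling of counter resets (e.g., after a device reboot) is one common convention, not a standard:

```python
def counter_to_rates(samples):
    """Convert (timestamp_seconds, cumulative_count) pairs into per-second rates.

    A decreasing count suggests a counter reset or wrap, so that
    interval yields None rather than a misleading negative rate.
    """
    rates = []
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        if c1 < c0 or t1 <= t0:  # counter reset/wrap, or a bad clock
            rates.append((t1, None))
        else:
            rates.append((t1, (c1 - c0) / (t1 - t0)))
    return rates

# Hypothetical router interface packet counter, sampled every 60 seconds;
# the final sample reflects a counter reset.
samples = [(0, 1000), (60, 7000), (120, 13000), (180, 500)]
print(counter_to_rates(samples))  # → [(60, 100.0), (120, 100.0), (180, None)]
```

Gauges such as CPU use need no such conversion; their sampled values are meaningful as-is, which is one reason the two categories are handled differently by monitoring tools.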
Indeed, it becomes the task of the network performance manager to relate the higher-level application metrics to the lower-level legacy metrics in order to understand resource use and how it affects application performance. This is getting more difficult, owing to two evolutionary trends:
- Applications are becoming more complex and distributed, touching more physical components and complicating the gathering and correlation of metrics at any level.
- Virtualization technology is increasing in popularity, offering such structures as multiple virtual servers resident on a single physical server. This further removes the end user from the physical network and its components and complicates the relationship between application performance metrics and legacy metrics.
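One simple way to begin relating a higher-level application metric to a lower-level legacy metric is to correlate time-aligned samples of the two. The sketch below computes a Pearson correlation coefficient from first principles; the transaction response times and CPU utilization figures are invented for illustration, and in practice many resources would be examined, not one:

```python
import statistics as st

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = st.mean(xs), st.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical time-aligned samples: application transaction response
# time (ms) versus CPU utilization (%) on one server in the path.
response_ms = [120, 135, 180, 240, 310]
cpu_pct = [35, 40, 55, 70, 85]
r = pearson(response_ms, cpu_pct)
print(round(r, 3))  # a value near 1.0 suggests CPU is worth a drill-down
```

A strong correlation does not prove causation, of course, but it tells the performance manager which legacy metric merits a drill-down from the application management report.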
Thus, the job of the network performance manager is becoming more difficult as it becomes harder to understand how legacy metrics translate into the end-user's experience. And it is perhaps also becoming a bit more stressful, as the information delivered by management tools places the network performance manager that much closer to the executive suite.
About the author:
Daniel Seligman has worked within the Massachusetts Rte. 128 technology complex since the 1970s and is currently a self-employed network consultant. He has published technical papers in several areas of computer networking technology, including those covering network design, performance analysis and network management. He holds BS, M.Phil. and Ph.D. degrees in physics from Yale University.