Let's consider what is really needed when merging application management with the management of your network.
Of course, application performance is the key to everything that matters. And yet network performance is where it all starts and stops. You can't have an optimally performing application without considering the network. It's like trying to drive a Ferrari Enzo on a dirt road full of potholes!
So at what point does application performance management include the network? The key is to start from the application perspective. Network management systems don't offer that perspective. Alternatives to the classical NMS are described in the white paper "Network Performance Infrastructures: Not your father's NMS."
An effective NPI can be built on either a top-down or a bottom-up approach:
BOTTOM-UP: You can start with the network, characterize it end-to-end in terms of a range of performance indicators using either passive- or active-probing technologies, and then interpret the measures on an application-specific basis. This allows an effective Layer 3 approach to do the bulk of the application performance work, eliminating the need for nasty merging and integration. In some cases, supplementing the approach with application behaviors (e.g. Web, VoIP, or data transfer) is all that is needed to generate a complete picture.
Pros: Very flexible, can be addressed with as few as one technology or product, addresses the network directly, catches the majority of application performance issues, can isolate faults to specific hosts or mid-path devices, can see remote clients and end-stations, and the measured network path is identical to the application's path.
Cons: Limited view of an application's behavior on a particular host, limited view of Layers 4-7 (i.e. host-specific issues such as disk I/O bottlenecks are not easily identified), and passive-probing approaches can't scale to high-speed networks and often require hardware (active probing has trouble scaling to 10G but is keeping up so far).
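To make the bottom-up idea concrete, here is a minimal active-probing sketch. It is purely illustrative and not any vendor's tooling: it uses TCP connect time to a host and port as a stand-in for path round-trip latency, then summarizes min/avg/max and jitter -- the kind of Layer 3 indicators a bottom-up NPI would then interpret on an application-specific basis. The function names are my own.

```python
# Illustrative active-probing sketch (assumption: names and approach are mine,
# not a specific product). TCP connect time stands in for path round-trip time.
import socket
import statistics
import time


def probe_rtt(host, port, samples=5, timeout=2.0):
    """Return TCP connect times in milliseconds for up to `samples` probes."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            pass  # lost probe; a real probe would count this toward a loss rate
    return rtts


def summarize(rtts):
    """Reduce raw samples to the indicators a bottom-up view reports."""
    return {
        "min_ms": min(rtts),
        "avg_ms": statistics.mean(rtts),
        "max_ms": max(rtts),
        "jitter_ms": statistics.pstdev(rtts),  # std deviation as a jitter proxy
    }
```

A real probe would add packet-loss accounting and per-hop isolation, but even this much yields end-to-end indicators without touching the application hosts themselves.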
TOP-DOWN: You identify the critical applications that you need to manage, implement either passive- or active-probing technologies that emulate typical application traffic, and monitor the critical hosts where the network-based services are resident.
Pros: Host-specific and application-specific, the view is identical to the application's with respect to packet behavior and the layers involved, and new application types can be developed as needed.
Cons: Cannot separate the Layer 3/network from the OS/application layers, typically can't see the specific path out to remote clients and end-stations, and doesn't adapt to new application types without development.
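The top-down approach can likewise be sketched with a synthetic transaction. This is an assumption-laden illustration of the idea, not a vendor's implementation: it emulates typical application traffic -- here a single HTTP GET against a critical host -- and records the response time, status, and payload size that a top-down monitor would trend over time.

```python
# Illustrative synthetic-transaction sketch (assumption: names are mine, not a
# specific product). One HTTP GET emulates typical application traffic.
import time
import urllib.request


def synthetic_http_check(url, timeout=5.0):
    """Fetch `url` and report response time, status, and payload size."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
            return {
                "ok": 200 <= resp.status < 300,
                "status": resp.status,
                "response_ms": (time.perf_counter() - start) * 1000.0,
                "bytes": len(body),
            }
    except OSError as exc:
        # A failed transaction is itself a measurement in a top-down view.
        return {
            "ok": False,
            "error": str(exc),
            "response_ms": (time.perf_counter() - start) * 1000.0,
        }
```

Note the trade-off the pros and cons above describe: this check sees exactly what the application sees, but a slow result cannot by itself tell you whether the network path or the host's own layers are at fault.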
So what do you do?
Look at the criteria of adequacy in the white paper and start shopping among the new offerings from the various companies providing critical views of network and application performance.
About the author:
Chief Scientist for Apparent Networks Loki Jorgenson, Ph.D., has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. He also serves as Adjunct Professor of Mathematics at Simon Fraser University, where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects, from high-performance computing to digital publishing, working closely with private-sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2. www.apparentnetworks.com