In this tip, we'll examine some advantages and disadvantages of using historical statistics to make network management decisions for future planning, along with some alternatives.
There is a sharp division between types of network management systems. On one hand, there are the historical, trend analysis-based approaches that rely on temporal information (i.e. data gathered over time). On the other, there is the so-called "real-time" approach that produces comprehensive views (not just packet counts, connectivity or jitter probes) of the end-to-end network path.
One tries to tell you where you should be now, and where you are going, based on where you have been. The other tells you specifically where you are but without predicting where you will be or indicating where you have come from.
Is one better or worse than the other? Of course not. A balanced combination of the two approaches offers the most information and the best basis for future planning. A good network management solution should offer both a continuous monitoring infrastructure with temporal analysis as well as direct feedback on your network's usage and behavior.
What do you gain/lose when you have only one or the other? Let's consider each separately.
Statistical monitoring and trend analysis
In this approach, regularly scheduled queries are made of specific network devices, typically via SNMP.
A very good example of this kind of tool is the popular Multi Router Traffic Grapher (MRTG) tool (http://www.mrtg.org). It queries routers and then graphs the results in a simple yet very effective fashion. Few commercial systems offer significantly greater practical value (IMHO) – MRTG is used widely by network operators at all scales.
Other types of systems utilize various "active probing" techniques, sending queries of various sorts and gathering data on the nature of the responses. The data recovered is typically sparser and less substantial than what SNMP provides; however, it imposes fewer access requirements.
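The polling model described above boils down to turning successive counter samples into rates. A minimal sketch of the arithmetic behind an MRTG-style graph follows (real deployments would query a device's ifInOctets/ifOutOctets counters via an SNMP library; the sample values here are illustrative):

```python
def rate_bps(prev_octets, curr_octets, interval_s, counter_bits=32):
    """Convert two successive octet-counter samples into bits per second.

    SNMP interface counters are monotonically increasing and wrap at
    2**32 (or 2**64 for high-capacity counters), so the delta must
    account for wraparound.
    """
    wrap = 2 ** counter_bits
    delta = (curr_octets - prev_octets) % wrap  # handles counter wrap
    return delta * 8 / interval_s

# Two samples taken 300 seconds apart (MRTG's default polling interval).
sample_1 = 1_200_000          # octets at t=0
sample_2 = 1_950_000          # octets at t=300
print(rate_bps(sample_1, sample_2, 300))  # 20000.0 bits/sec
```

Graphing a series of such rates over hours, days and months is essentially what MRTG does with the results of its queries.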
The results can simply be graphed, but the addition of automated analysis and notification considerably increases the benefit to the network manager who needs critical feedback. Some very sophisticated systems also offer predictive capabilities delivered via trend analysis techniques – very sweet icing, but not always well implemented (watch for snake oil).
This approach has some notable limitations:
- Network devices require ownership (or administrative access) to query
- Individual devices like routers offer no end-to-end view
- If based on end-to-end connectivity checks, it usually yields no performance data
- Coarse-grained data obscures transient effects
Overall, if used primarily for planning and forecasting, this approach is probably the most effective. If used more for monitoring, prediction, alerts and troubleshooting, you will likely run up against its limitations sooner.
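The predictive side of trend analysis mentioned above can be sketched, at its simplest, as linear extrapolation of utilization samples. This is a minimal illustration only (the sample figures are hypothetical, and real systems layer seasonality, smoothing and confidence bounds on top):

```python
def linear_forecast(samples, steps_ahead):
    """Fit y = a + b*x by ordinary least squares over equally spaced
    samples and extrapolate `steps_ahead` intervals into the future."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a + b * (n - 1 + steps_ahead)

# Hypothetical daily peak link utilization (%) over one week, trending up.
utilization = [52, 54, 55, 57, 58, 60, 61]
print(round(linear_forecast(utilization, 30), 1))  # 106.2 – past capacity
```

A forecast that crosses 100% utilization is exactly the kind of early warning that makes this approach valuable for capacity planning – and exactly the kind of output that should be sanity-checked rather than trusted blindly.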
Real-time performance analysis
This approach assesses the actual end-to-end conditions in real time. It samples specified end-to-end paths from the same point of view as an application. The sampling can be at the network layer (IP), the OS layer (TCP) or even the application layer (e.g. HTTP, SMTP, FTP). Like statistical approaches, some implementations are too basic to be truly useful for planning – the venerable tool Ping is the best-known and probably simplest example.
However, sophisticated performance analysis systems can generate detailed, hop-by-hop views of end-to-end paths that offer real-time measures of the capacities of the links, the traffic in the path, and problem analysis, identification and isolation. Some systems simulate application traffic (e.g. VoIP or database transactions) in order to develop an application-specific view. Others define a demarcation point between network and OS/application (Layers 3/4) and utilize IP, UDP or TCP packets to isolate purely network behaviors.
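A tiny taste of measuring from the application's point of view, as described above: timing a full TCP handshake to a service port. Unlike an ICMP ping, this exercises the same transport path and port an application would use. This is a minimal sketch, not a hop-by-hop analysis of the kind commercial systems perform:

```python
import socket
import time

def tcp_connect_time(host, port, timeout=2.0):
    """Time a TCP three-way handshake to (host, port), in milliseconds.

    Measuring at the transport layer, from the probing host's own
    vantage point, approximates the latency an application would see
    when opening a connection to that service.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        elapsed = time.perf_counter() - start
    return elapsed * 1000.0
```

In practice you would take several samples and report the minimum or median, since any single handshake can be skewed by transient queuing.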
Overall, real-time performance analysis provides a definite advantage for planning when there aren't sufficient resources available to instrument a network, or time to gather data. It also sidesteps issues of ownership when part of the network involves an ISP. As a bonus, it provides immediate feedback at the staging and post-deployment steps of a network build-out, and it delivers rapid isolation of performance bottlenecks – by extension, powerful diagnostic capabilities. In the long term, though, when monitoring and historical trends are a major part of network operation, this approach falls short.
Loki Jorgenson is a Research Director with Apparent Networks, Inc. and holds a Ph.D. in computational physics from McGill University. He has been active in computation, physics and mathematics, visualization, and simulation for over 16 years, and has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. Loki is an Adjunct Professor of Mathematics at Simon Fraser University, where he was previously the Research Manager of the Center for Experimental and Constructive Mathematics. He has headed research in numerous academic projects, from high-performance computing to digital publishing, working closely with private sector partners and government. He has also worked as an e-business consultant with Idea Integration in the San Francisco Bay area. At Apparent Networks, Inc., Dr. Jorgenson is responsible for leading research efforts and for creating practical collaborative efforts with academics and other thought leaders.
This was first published in September 2003