Network measurement and diagnostics expert Loki Jorgenson was recently asked a question about best practices for monitoring network performance. The question raised some thoughts on how networking and its best practices have changed in recent years.
"Best practices" have a short half-life in our industry. The biggest problem I see with today's management techniques and tools is that they grew up in the client/server era and haven't evolved to serve today's realities.
In the old world...
- You owned or controlled most of the networks your key applications depended on.
- You could predict where critical traffic would flow.
- Complex, long-deployment, agent-based systems were approved for deployment because the problems client/server created were new, acute and very scary.
- You dealt with a dog's breakfast of vendor-specific protocols.
In today's world...
- You depend on networks you do not own or control. (ISP, ASP, customer, supplier, etc.)
- You can't predict where tomorrow's traffic will flow or what will break next.
- There is less -- or no -- time and money for deployment or maintenance of big, complex network management systems.
- It's "IP everything/everywhere."
So what should "best practices" look like?
- Be able to see end-to-end, from the application's viewpoint.
- Be able to deploy "just-in-time" network management infrastructure -- rapidly, where needed, when needed, on demand.
- Be able to see into and through networks that you don't own.
- Employ monitoring technologies that provide thorough network awareness on an ongoing basis, not piecemeal views.
- Rely less on trend analysis and more on real-time assessment.
- Emphasize "effective" over "absolute" -- implement management solutions that resolve your most common, most expensive problems most quickly.
- Focus on application performance after the fundamental networking performance aspects have been addressed.
- Use methodologies and technologies that fit your network and needs, not the other way around.
The approach to this might be laid out in two steps:
- Continuous monitoring of performance (not just availability) as an essential starting point, ideally at the Layer 3 or 4 demarcation point (at a minimum), so you can quickly separate network performance issues from application ones. This has to be end-to-end along all critical paths, and most others of interest, with constant updating.
- Rapid response to performance problems that slip through the cracks. That requires a real-time measurement/assessment/problem diagnosis capability that delivers quickly, without pre-deployed infrastructure, and can be used remotely.
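The distinction between monitoring performance and monitoring availability can be made concrete. A minimal sketch, assuming a simple TCP-connect probe (the function name, sample count, and thresholds here are illustrative choices, not part of any specific product or the author's tooling):

```python
# Sketch of "performance, not just availability" monitoring: instead of a
# binary up/down check, each probe records connection latency so that
# slowdowns are visible before they become outright failures.
import socket
import statistics
import time

def probe_endpoint(host, port, samples=5, timeout=2.0):
    """TCP-connect probe: returns availability ratio and latency stats (ms)."""
    latencies = []
    failures = 0
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                latencies.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            failures += 1
    return {
        "availability": (samples - failures) / samples,
        "median_ms": statistics.median(latencies) if latencies else None,
        "max_ms": max(latencies) if latencies else None,
    }
```

In practice, a probe like this would run continuously against each critical path, with alerting on latency degradation and not merely on connection failure.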
About the author:
Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. He also serves as Adjunct Professor of Mathematics at Simon Fraser University, where he co-founded the Centre for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2.