Throwing bandwidth at a network performance problem rarely solves anything. But why did we think it would?
Networks and their related protocols are designed to support traffic. They are designed to tolerate congestion and work properly until they run out of capacity. At that point, throwing bandwidth at the network makes sense — as you'd expect, more capacity unsnarls traffic jams and allows networks to flow freely.
So why doesn't more bandwidth solve your typical network performance problems? In 2002, NASA researchers estimated that 75% of all network performance problems were caused by duplex conflicts. And Gartner Research predicted that in 2005, 33% of organizations would experience network latency issues and incorrectly diagnose the problem as insufficient bandwidth.
The reflex is to think in terms of functional networks: everything is working as designed, so when performance begins to suffer, we conclude that the network is congested and more bandwidth is needed. Typical applications make exactly the same assumption, treating packet loss as a congestion signal and reducing demand when loss appears. After all, on a well-functioning network path, congestion is really the only possible problem.
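That congestion assumption can be sketched as the additive-increase/multiplicative-decrease (AIMD) loop TCP-style applications follow: grow the sending rate while all is well, and cut it when loss appears. The function and parameter names below are illustrative, not taken from any real network stack.

```python
# Minimal AIMD sketch: loss is always interpreted as congestion,
# so the sender halves its rate whenever loss is observed.

def aimd(rounds, loss_rounds, increase=1.0, decrease=0.5, start=1.0):
    """Return the sending-rate trajectory over `rounds` rounds.

    `loss_rounds` is the set of round indices where loss is observed.
    """
    rate = start
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            rate *= decrease   # multiplicative decrease on loss
        else:
            rate += increase   # additive increase while all is well
        history.append(rate)
    return history

trajectory = aimd(rounds=6, loss_rounds={3})
# Rounds 0-2 grow the rate; the loss at round 3 halves it; growth resumes.
```

The point of the sketch: the sender has no way to tell whether that loss came from congestion or from a duplex conflict, a bad cable, or a flaky driver. It backs off either way, and throughput suffers even when bandwidth is plentiful.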
But networks don't always work the way you expect, or even as designed. As our mothers kept telling us, nothing good comes from sticky fingers, and networks are no exception to the importance of good hygiene. Often, with the best of intentions, problems are created -- problems that no one is anticipating and no one is looking for -- problems that are simple but often very hard to find.
Think about the last three network problems you encountered; how many were completely avoidable? Maybe a duplex conflict? Or poor NIC drivers? Bad cables? More than 70% of the time, system bottlenecks are related to a configuration or environmental issue rather than an application defect, according to Mercury Interactive research.
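The first item on that checklist, the duplex conflict, is mechanically detectable: a link is mismatched when its two ends report different duplex modes. The sketch below compares the duplex setting reported by each link partner, assuming ethtool-style `Duplex: Full` lines in the text; the sample strings and function names are illustrative, not the output of any particular device.

```python
# Hedged sketch: flag a duplex conflict by comparing the duplex mode
# each end of a link reports (e.g. from ethtool-style text output).

def parse_duplex(status_text):
    """Extract the value of a 'Duplex:' line from status text, or None."""
    for line in status_text.splitlines():
        line = line.strip()
        if line.startswith("Duplex:"):
            return line.split(":", 1)[1].strip()
    return None

def duplex_conflict(side_a, side_b):
    """True when both link partners report a duplex mode and they differ."""
    a, b = parse_duplex(side_a), parse_duplex(side_b)
    return a is not None and b is not None and a != b

# Illustrative sample output: a switch port forced to full duplex
# facing a host NIC that fell back to half duplex.
switch_port = "Speed: 100Mb/s\nDuplex: Full"
host_nic = "Speed: 100Mb/s\nDuplex: Half"
```

With real gear you would feed in the actual status output from each end (for example from `ethtool` on a Linux host and the switch's port status), but the comparison logic is this simple: the classic mismatch is one side full, the other half.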
So if more bandwidth isn't the solution, what is?
The very best networks are the products of highly disciplined network engineers who "wash their hands before every meal." But after the fact, the best remediation strategy is to view performance issues as a product of the unexpected -- look for things that are not working "as advertised" -- think in terms of network dysfunction. Think outside the functional box.
In that light, more bandwidth is never the answer. Never has been.
Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. He is also Adjunct Professor of Mathematics at Simon Fraser University, where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2. www.apparentnetworks.com