On an unhealthy or broken network, quality of service (QoS) is a rudder on a sinking boat -- it fixes nothing and cannot resolve the underlying degradation. On a healthy, well-provisioned network, QoS is unnecessary -- every application already receives the resources it needs to perform properly. So when and why would you ever use it?
Quality of service is a lot like politics -- it only becomes important when resources are scarce and demand is high. And in such cases, it has become increasingly typical to engineer a solution that puts certain services at a premium by devaluing others. Typically, the premium user pays through the nose. Make no mistake -- this kind of solution is hardly aimed at functional optimization. It is all about economics.
Consider that most (if not all) QoS mechanisms effectively limit bandwidth (or, more properly, data transfer capacity). They certainly never increase it. They limit how much cross-traffic certain applications see -- at the expense of other applications. At best, a so-called premium application sees the network path as it would have appeared with no QoS and no competing traffic. This is a zero-sum game: anything gained for one application is lost elsewhere by others.
If you take this perspective properly, and start to see QoS for what it is, you will recognize the value of the "scavenger" approach developed by Internet2. There, QoS is more appropriately defined as a "reduced priority" mechanism -- one that reduces access to resources for those applications that do not urgently need them. Instead of attempting to elevate special classes of service, it demotes non-essential ones. Using the Differentiated Services Code Point (DSCP) field in the IP header, packets can be marked as lower priority, moving them out of the way of the rest of the traffic.
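As a rough sketch of what that marking looks like in practice: Internet2's scavenger service uses DSCP CS1 (decimal 8), and since the DSCP occupies the upper six bits of the IP TOS byte, an application on a POSIX-style system might mark its own outgoing traffic like this (illustrative only -- a real deployment would typically mark and police traffic at routers, not in the application):

```python
import socket

# Scavenger class is DSCP CS1 = 8 (per Internet2's QBone Scavenger Service).
# The DSCP sits in the upper six bits of the IP TOS byte, so the byte
# actually written into the header is 8 << 2 = 0x20.
SCAVENGER_DSCP = 8
TOS_BYTE = SCAVENGER_DSCP << 2  # 0x20

# Mark all outgoing packets on this UDP socket as scavenger class.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

# Verify the option took effect (should read back 32, i.e. 0x20).
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

Any router along the path that honors the marking can then serve these packets only from leftover capacity; routers that ignore DSCP simply forward them normally, which is exactly the no-penalty property discussed below.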
This is, in effect, what QoS does anyway -- it only decides whose traffic waits. Making the demotion explicit just makes sense.
Further, it instantly solves one of the toughest problems facing QoS -- end-to-end robustness. The scavenger approach can be deployed opportunistically: there is no penalty for any system that doesn't support it, individual network segments can subscribe to the mechanism as they are ready, and the overall network improves incrementally.
It's amazing how changing your perspective can suddenly make the notion of QoS functional.
Alas -- this is not likely to bring about cheers and joyous rioting in the network industry. As the current political debate about "network neutrality" attests, the point of networks is not optimal functionality but rather optimal profits. Businesses, many of them struggling to redefine themselves in a rapidly changing world, are looking for ways to attach dollars to bytes. And that is exactly where the current approach to QoS receives its support.
QoS provides a means for making money. Premium services come with a premium cost. And, given its nature, QoS can only be offered at the expense of regular services. QoS, in that view, can really only do two things -- degrade overall network performance and cost the user more money.
Here, "overall network performance" means the effective use of the totality of the network resources with respect to the total work output -- efficiency, in other words. And reduced efficiency is the result in most cases where QoS is deployed.
This really matters where resources are critically scarce and specific applications must be guaranteed a certain level of performance -- VoIP immediately comes to mind. But note the key phrase "where resources are critically scarce." The first and best option for alleviating the scarcity is to add bandwidth (but make sure you do it on a healthy network -- see Bandwidth scam). In an increasingly optical world, even to the residence, there is declining need to think in terms of QoS. Opening the bandwidth floodgates is rapidly becoming the simpler and more reliable means of meeting performance requirements.
Only when increasing bandwidth is not an option, technically or economically, should you turn to QoS for help. Even then, it won't be easy or trivially reliable, and it will inevitably cost you elsewhere.
About the author: Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. He is also Adjunct Professor of Mathematics at Simon Fraser University, where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects, from high-performance computing to digital publishing, working closely with private-sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP, and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2. www.apparentnetworks.com