
How much bandwidth is enough?


This tip series explains how to calculate and project how much bandwidth is enough for your enterprise WAN. The introduction below separates bandwidth fact from fiction and discusses how bandwidth affects network performance.

 Introduction to how much bandwidth is enough
 Part 1: How much throughput is enough?
 Part 2: The real-world network
 Part 3: Multiple effects of multiple causes
 Part 4: NASA space probes and throughput
 Part 5: Understanding error probability
 Part 6: Rate delay product
 Part 7: Troublesome TCP behavior

"Bandwidth" is a meaningless term for network managers, architects and technicians, and using it instead of "data rate," "link capacity" or even "speed" simply shows that we're a bit ignorant of networking, at least at Layer 1 (of the 7-Layer OSI model). Many people may use "bandwidth" to mean bits per second, but doing so reveals an ignorance of signaling theory and basic physical communications. A review of the following terms reveals that their physical properties are not even commensurate:

Bandwidth: measured in Hertz (Hz) -- the spectral width of a signal or a channel carrying signals (historically: cycles per second).

Data rate: measured in bits per second (or maybe terabytes per fortnight).

"Bandwidth" is often sloppily applied in the wrong context or used for some perceived geeky marketability. Too bad, because new people in networking are then misled rather than educated.

Here's a modest explanation to work from. Consider Claude Shannon's work: "Bandwidth" is like a farm field. The way in which it is farmed yields a particular harvest of data rate.

Many old heroes, such as Dennis Hayes, expended much mental effort approaching Shannon's postulated limit for turning raw bandwidth into bits per second through a modem. Their cagey, intelligent choices of signaling symbols (FSK, SQPSK…) show how a good data rate is harvested from any given channel's bandwidth.
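To make the farm-field analogy concrete, here is a minimal sketch of Shannon's channel-capacity formula, C = B · log2(1 + S/N), applied to an ordinary voice-grade phone line. The figures (roughly 3100 Hz of usable bandwidth and a 30 dB signal-to-noise ratio) are illustrative assumptions, not measurements:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley limit: the maximum error-free data rate (bits/sec)
    a channel with the given bandwidth and signal-to-noise ratio can carry."""
    snr_linear = 10 ** (snr_db / 10)          # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# A voice-grade phone line: ~3100 Hz bandwidth, ~30 dB SNR (assumed figures)
capacity = shannon_capacity_bps(3100, 30)
print(f"{capacity:.0f} bits/sec")  # roughly 31 kbit/s
```

Note that the result lands in the same neighborhood as what late dial-up modems such as V.34 actually delivered over analog lines: the bandwidth (Hz) stayed fixed; clever symbol coding pushed the harvested data rate toward the limit.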

Some European researchers have defined coding schemes that get very close to the Shannon limit. But in no case is bandwidth the same as data rate. Bandwidth is instead an opportunity demanding intelligent exploitation via a wise choice of transmission symbols. Even Napoleon's network designer knew this 200 years ago, when he built an optical network across Europe, using a 20-symbol code, that communicated the emperor's conquests back to Paris within 15 minutes. The Swedes, with their own 512-symbol optical net, also knew it 200 years ago. NASA certainly knew it when planning to talk to the Voyagers.

So how much "x" per second is needed between net nodes y and z? It depends.

Of most concern to network managers, engineers or technicians are probably the complaints they hear from bosses, heads of service departments, business partners and, eventually, users. Every network manager knows the litany of "whine-one-one" calls: "It's slow! I can't get to server ABC! System Q kicked me off! The printer is slow! The network is slow today!"

Which of these problems corresponds to a parameter that a network architect, manager or technician can actually correct? That depends, because what gets the job done for systems and users is throughput: the number of good data bits or bytes delivered completely and in order, per second, from sender to receiver.
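That definition can be sketched as a simple calculation. This is a hypothetical illustration with made-up numbers, not a measurement tool: count only the payload bytes that arrived complete and in order, and divide by elapsed time. Retransmissions, protocol headers and discarded out-of-order data do not count.

```python
def goodput_bps(payload_bytes_delivered: int, elapsed_seconds: float) -> float:
    """Throughput ("goodput"): useful payload bits delivered per second,
    excluding retransmissions, headers and data discarded as out of order."""
    return payload_bytes_delivered * 8 / elapsed_seconds

# Hypothetical transfer: a 10 MB file arrives in 12 seconds over a
# "100 Mbit/s" link -- headers, ACKs and retransmissions account for the gap.
rate = goodput_bps(10 * 1024 * 1024, 12.0)
print(f"{rate / 1e6:.1f} Mbit/s of actual throughput")  # ~7 Mbit/s, not 100
```

The gap between the link's advertised data rate and the throughput a user experiences is exactly what the rest of this series explores.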


  How much throughput is enough?

So, the real question we should be asking is: How much throughput is enough? At every layer of the OSI model, this is what the application designer must specify, the architect must design for, the manager must maintain, and the technician must measure and troubleshoot. This is what systems and users actually experience every minute of every day.

Latency and connectivity are two key measurable network properties whose effects on throughput are what a serviced system or user experiences. Latency is the time it takes for the response to a request to arrive; connectivity simply defines whether one node is visible to another. These contribute directly to what is perceived as network "performance," and the reality is that the physical network is not always the source of poor performance. It can be a poorly constructed database or a poorly configured server. It can be a poorly configured application, router or firewall. It can even be a poorly configured data-transport protocol at peer, client or server. This is the network-troubleshooting conundrum, and it is exacerbated by the complexity of today's interconnected systems and applications.
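Latency's grip on throughput, and the role of a poorly configured transport protocol, can be illustrated with the classic single-connection TCP bound (the numbers below are assumptions for illustration): a sender can have at most one receive window of data in flight per round trip, so throughput can never exceed window size divided by round-trip time, no matter how fast the link is.

```python
def tcp_throughput_ceiling_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on one TCP connection's throughput: at most one receive
    window of data can be unacknowledged per round trip (window / RTT)."""
    return window_bytes * 8 / rtt_seconds

# A classic 64 KB window over an assumed 100 ms cross-country round trip:
ceiling = tcp_throughput_ceiling_bps(64 * 1024, 0.100)
print(f"{ceiling / 1e6:.2f} Mbit/s")  # ~5.24 Mbit/s, even on a gigabit link
```

Halving the round-trip time doubles the ceiling; so does doubling the window. The link's raw data rate appears nowhere in the formula, which is why "buy more bandwidth" so often fails to fix a "slow network" complaint.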

If the network has been architected properly, technicians can adjust parameters such as link data rates, server processors or memory, database partitioning, and so on to improve behavior. Most of these should be flagged automatically to the people running the network by good network- and system-management tools.

⇒ Continue on to part 2: Real-world network performance.

About the author: 
Alexander B. Cannara, PhD, is an electrical engineer, a software and networking consultant, and an educator. He has 18 years of experience in the computer-networking field, including 11 years in managing, developing and delivering technical training. He is experienced in many computer languages and network protocols and is a member of IEEE, the Computer Society, and the AAAS. Alex lives with his wife and son in Menlo Park, California.

This was last published in February 2009
