
Rate delay product

This tip series delves deep into the bandwidth myth. Part 5 discusses the rate delay product, an efficiency parameter that describes how much data a sender can dump into the network path before it will have any expectation of seeing a response.

Many IT engineers hold misconceptions about bandwidth and network throughput. To get a better understanding of these topics, IT pros should understand the rate delay product, an efficiency parameter that describes how much data a sender can dump into a network path before it will have any expectation of seeing a response. Continue learning about the rate delay product below or, to read the previous articles in this How much bandwidth is enough? tip series, go back to part 4: Space probes, throughput and probability.

Rate delay product

An important, but derived, path parameter is the rate delay product (colloquially, the "bandwidth delay product"). This is the limiting data rate of the path (e.g., T1) multiplied by the RTT for the path. Checking units, we see this is bytes (or bits) per second times seconds, giving a quantity of bytes (or bits). It tells us how much data the sender can dump into the network path before it will have any expectation of seeing a response (i.e., an ACK) from the receiver. This is an efficiency -- and therefore a throughput -- parameter: the more bytes we can send out at the path's limiting rate before we must await a response, the more throughput we'll see by the end of the transaction.
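The unit arithmetic can be sketched in a few lines (a minimal illustration, assuming a T1 line rate of 1.544 Mbps and the 60-msec RTT from the example below):

```python
# Rate delay product: limiting data rate multiplied by round-trip time.
T1_BPS = 1.544e6   # T1 line rate, bits per second
RTT_S = 0.060      # round-trip time, seconds

bdp_bits = T1_BPS * RTT_S   # data "in flight" before any ACK can arrive
bdp_bytes = bdp_bits / 8

print(f"Rate delay product: {bdp_bits:.0f} bits = {bdp_bytes:.0f} bytes")
```

On this path, the sender can have roughly 11.6 KB outstanding before the first acknowledgment can possibly return.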

Calculating this parameter tells us how to set the sender's transmit window, in packets or bytes. The idea of the rate delay product is to tell how many packets or bytes will fill the network path completely, so that no time is wasted not sending data. Most people don't appreciate this, yet it's important, as the example below shows.
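Turning the rate delay product into a window size is then a one-line calculation (a sketch only: counting payload bytes alone gives about eight full-MTU packets, and different accountings of per-packet overhead shift the figure somewhat):

```python
import math

# Sizing the transmit window from the rate delay product (payload bytes only).
T1_BPS = 1.544e6   # limiting data rate, bits per second
RTT_S = 0.060      # round-trip time, seconds
PAYLOAD = 1460     # full-MTU TCP payload, bytes

bdp_bytes = T1_BPS * RTT_S / 8
window_packets = math.ceil(bdp_bytes / PAYLOAD)   # packets needed to fill the path

print(f"Window: {bdp_bytes:.0f} bytes, or about {window_packets} full-MTU packets")
```

A transmit window smaller than this leaves the path idle between bursts; that idle time is exactly the throughput loss seen in the example below.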

A very frustrated network technician found that the same application, running on the same user's laptop at two different company locations, was about 50% faster at one than at the other. The network had many T1-T3 lines, one of which was particularly congested. The distances between the server and client laptops ranged from 350 to 450 miles -- the technician drove one laptop around the state! The paths from these locations differed, except where they met at one router that fed a T3 link to the server's location, but RTTs were all about 60 msec. The application was simply downloading a file of several MB (over TCP/IP) that was used often (and updated) by sales staff, half of whom were complaining about low throughput (long delays). It took about two minutes to load 3.3 MB at the slow locations:

[Figure: Network throughput graph for the 3.3 MB transfer]

Note how steady the throughput graph above appears, but also note the small deviations of the accumulated packet-data delivery (red) from the fitted line (black). Throughput is only 217 Kbps, yet no link in the paths is slower than T1 -- almost a 7:1 throughput penalty. Why? Analysis of packet captures at a slow client and at the server showed:

  1. About 0.25% of packets (2.5 per 1000) were being lost in the path common to slow locations.
  2. Retransmission delays associated with these losses were larger than typical for TCP stacks -- about 2.5 seconds each.
  3. The loads on one or two shared T1 links were heavy and bursty, causing congestion delay.
  4. The file was sent using TCP, which fragmented SMB blocks of about 31 KB into 21 full-MTU (1460-byte payload) packets. Because the packet count per block is odd, the receiver enters its delayed-ACK time-out and so delays final block acknowledgment by 150 msec every 31 KB.
  5. The server's transmit window was only six packets and did not grow even when there was no packet loss, despite the rate delay product's being T1 × 60 msec ≈ 8.5 full-MTU (1460-byte) packets. Thus, only about 70% of the allowable transmit window was used by the server.
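A rough tally of the factors above, using the observed constants, suggests how the two-minute transfer time accumulates (illustrative only -- congestion delay from the third factor and protocol overhead account for much of the remainder):

```python
# Back-of-the-envelope contributions of the observed factors to the 3.3 MB transfer.
# All constants come from the observations above; the arithmetic is illustrative.
FILE_BYTES = 3.3e6        # file size
BLOCK_BYTES = 31e3        # SMB block size
PAYLOAD = 1460            # full-MTU TCP payload, bytes
LOSS_RATE = 0.0025        # ~2.5 packets lost per 1000 (factor 1)
RETX_TIMEOUT_S = 2.5      # observed retransmission delay per loss (factor 2)
DELAYED_ACK_S = 0.150     # delayed-ACK timeout per block (factor 4)
T1_BPS = 1.544e6          # T1 line rate, bits per second

serialize_s = FILE_BYTES * 8 / T1_BPS                         # raw send time at T1
retx_s = (FILE_BYTES / PAYLOAD) * LOSS_RATE * RETX_TIMEOUT_S  # loss x timeout
ack_s = (FILE_BYTES / BLOCK_BYTES) * DELAYED_ACK_S            # one timeout per block

print(f"serialization ≈ {serialize_s:.0f} s, "
      f"retransmission timeouts ≈ {retx_s:.0f} s, "
      f"delayed ACKs ≈ {ack_s:.0f} s")
```

Roughly 17 s of raw send time picks up about 14 s of retransmission timeouts and 16 s of delayed-ACK waits before congestion delay is even counted.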

Each of the above factors added delay to client data transfers. The first two had different sources (physical loss vs. protocol-stack behavior), but they worked together to create large chunks of the delays seen by users. The third observation contributed the kind of delay variability seen earlier and could have added some packet losses caused by router overloads, thus interacting with the second issue. The fourth delay source is strictly a result of TCP stack configuration at the receiver and lowers throughput independently, by adding 150 msec to every 31 KB block transfer -- a transfer that should itself have taken just 165 msec (including protocol overhead) over a T1 link. The last factor is again strictly due to TCP stack configuration and lowers throughput by theoretically wasting 20 msec per block, but that waste is hidden by the receiver-forced 150 msec ACK delay at the end of each block.
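The per-block timing above can be checked with a short sketch (the 58 bytes of per-packet overhead is an assumption covering rough Ethernet/IP/TCP headers):

```python
# Per-block timing: one ~31 KB SMB block sent as 21 full-MTU packets over a T1.
PACKETS_PER_BLOCK = 21
FRAME_BYTES = 1460 + 58   # payload plus assumed Ethernet/IP/TCP overhead
T1_BPS = 1.544e6          # T1 line rate, bits per second
DELAYED_ACK_S = 0.150     # receiver's delayed-ACK timeout

send_s = PACKETS_PER_BLOCK * FRAME_BYTES * 8 / T1_BPS
total_s = send_s + DELAYED_ACK_S   # forced ACK delay appended to every block

print(f"send ≈ {send_s * 1e3:.0f} ms per block, "
      f"≈ {total_s * 1e3:.0f} ms with the forced ACK delay")
```

The 150-msec ACK wait nearly doubles each block's 165-msec send time, which is consistent with the large throughput penalty observed.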

⇒ Continue reading part 6: Troublesome TCP behavior.

About the author:
Alexander B. Cannara, PhD, is an electrical engineer, a software and networking consultant, and an educator. He has 18 years of experience in the computer-networking field, including 11 years in managing, developing and delivering technical training. He is experienced in many computer languages and network protocols and is a member of IEEE, the Computer Society, and the AAAS. Alex lives with his wife and son in Menlo Park, California.
