
# Rate delay product

## This tip series delves deep into the bandwidth myth. Part 5 discusses the rate delay product, an efficiency parameter that describes how much data a sender can dump into the network path before it will have any expectation of seeing a response.

Many IT engineers hold misconceptions about bandwidth and network throughput. To better understand these topics, IT pros should understand the rate delay product, an efficiency parameter that describes how much data a sender can dump into a network path before it has any expectation of seeing a response. Continue learning about the rate delay product below or, to read the previous articles in this How much bandwidth is enough? tip series, go back to part 4: Space probes, throughput and probability.

### Rate delay product

An important, but derived, path parameter is the rate delay product (colloquially, the "bandwidth delay product"). This is the path's limiting data rate (e.g., a T1's 1.544 Mbps) multiplied by the round-trip time (RTT) for the path. Checking units, we see this is bytes (or bits) per second times seconds, giving a quantity of bytes (or bits). It tells us how much data the sender can dump into the network path before it has any expectation of seeing a response (i.e., an ACK) from the receiver. This is an efficiency -- and therefore a throughput -- parameter: the more bytes we can send at the path's limiting rate before we must await a response, the more throughput we'll see by the end of the transaction.
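As a quick sketch (not from the original article), the arithmetic for a T1 path like the one in this series' example works out as follows, assuming a 1.544 Mbps line rate and a 60 msec RTT:

```python
# Back-of-the-envelope rate delay product for a T1 path (values assumed
# from the article's example: 1.544 Mbps line rate, ~60 msec RTT).
T1_BPS = 1.544e6   # T1 line rate, bits per second
RTT_S = 0.060      # path round-trip time, seconds

bdp_bits = T1_BPS * RTT_S   # data "in flight" before any ACK can arrive
bdp_bytes = bdp_bits / 8

print(f"rate delay product: {bdp_bits:.0f} bits = {bdp_bytes:.0f} bytes")
# ~92,640 bits, i.e., ~11.6 KB in flight on this path
```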

Calculating this parameter tells us how to set the sender's transmit window, in packets or bytes. The idea of the rate delay product is to tell how many packets/bytes will completely fill the network path, so no time is wasted when data could be sent. Few people appreciate this, yet it's important, as the example below shows.
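To keep the path full, the transmit window should be at least the rate delay product. A minimal sketch of that sizing calculation, assuming a 1460-byte MSS as in the example that follows:

```python
import math

# Sizing a transmit window from the rate delay product (a sketch; the
# 1460-byte MSS matches the full-MTU payload used later in the article).
T1_BPS = 1.544e6   # limiting link rate, bits/s
RTT_S = 0.060      # round-trip time, s
MSS = 1460         # TCP payload bytes per full-MTU packet

bdp_bytes = T1_BPS * RTT_S / 8
window_packets = math.ceil(bdp_bytes / MSS)   # smallest window that fills the path

print(f"window >= {bdp_bytes:.0f} bytes, i.e., {window_packets} full-MTU packets")
```

This raw payload arithmetic gives about 7.9 packets, rounding up to 8; the article's later 8.5-packet figure presumably also counts per-packet protocol overhead or a slightly longer measured RTT.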

A very frustrated network technician found that the same application, running on the same user's laptop at two different company locations, was about 50% faster at one than at the other. The network had many T1-T3 lines, one of which was particularly congested. The distances between the server and client laptops ranged from 350 to 450 miles -- the technician drove one laptop around the state! The paths from these locations differed, except where they met at one router that fed a T3 link to the server's location, but RTTs were all about 60 msec. The application was simply downloading a file (over TCP/IP) of several MB that was used often (and updated) by sales staff, half of whom were complaining about low throughput (long delays). It took about two minutes to load 3.3 MB at the slow locations:

[Graph: accumulated packet-data delivery over time at a slow location -- measured data (red) with a fitted trend line (black).]

Note how steady the throughput graph above appears, but note the small deviations of the accumulated packet-data delivery (red) from the fitted line (black). Throughput is only about 217 Kbps, yet no link in the paths is slower than a T1 -- almost a 7:1 throughput penalty. Why? Analysis of packet captures at a slow client and at the server showed:
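That penalty can be sanity-checked from the figures already given (3.3 MB in about two minutes) with a quick back-of-the-envelope calculation:

```python
# Sanity check of the observed throughput against the T1 line rate,
# using the article's figures: 3.3 MB loaded in roughly two minutes.
FILE_BITS = 3.3e6 * 8   # file size in bits
ELAPSED_S = 120         # ~2 minutes
T1_BPS = 1.544e6        # T1 line rate

throughput_bps = FILE_BITS / ELAPSED_S
penalty = T1_BPS / throughput_bps

print(f"~{throughput_bps / 1000:.0f} Kbps observed, ~{penalty:.1f}:1 below the T1 rate")
# ~220 Kbps -- roughly a 7:1 penalty versus the T1's 1.544 Mbps
```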

1. About 0.25% of packets (2.5 per 1000) were being lost in the path common to slow locations.
2. Retransmission delays associated with these losses were larger than typical for TCP stacks -- about 2.5 seconds each.
3. The loads on one or two shared T1 links were heavy and bursty, causing congestion delay.
4. The file was sent using TCP, which fragmented SMB blocks of about 31 KB into 21 full-MTU (1460-byte payload) packets. Because the packet count per block is odd, the receiver enters its delayed-ACK time-out and so delays final block acknowledgment by 150 msec for every 31 KB.
5. The server's transmit window was only six packets and did not grow when there was no packet loss, despite a rate delay product of T1 × 60 msec ≈ 8.5 full-MTU (1460-byte) packets. Thus, the server used only about 70% of the allowable transmit window.
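Factors 4 and 5 can be checked with a little per-block arithmetic. A sketch using the article's numbers (21 full-MTU packets per ~31 KB block); the 40-byte TCP/IP header is an assumption, and link-layer framing is omitted:

```python
# Per-block timing for the delayed-ACK factor: how long one ~31 KB SMB
# block takes to clock onto a T1, versus the 150 ms ACK stall after it.
T1_BPS = 1.544e6
MSS = 1460            # payload bytes per full-MTU packet
HDR = 40              # assumed TCP/IP header bytes per packet (no options)
PKTS_PER_BLOCK = 21   # full-MTU packets per ~31 KB SMB block

wire_bits = PKTS_PER_BLOCK * (MSS + HDR) * 8
send_ms = wire_bits / T1_BPS * 1000   # time to clock one block onto the T1
ack_stall_ms = 150                    # receiver's delayed-ACK time-out

print(f"~{send_ms:.0f} ms to send each block, then a {ack_stall_ms} ms ACK stall")
```

This gives roughly 163 msec per block; the article's 165 msec figure also counts link-layer framing overhead. Either way, the forced 150 msec stall nearly doubles each block's transfer time.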

Each of the above factors added delay to client data transfers. The first two had differing sources (physical vs. protocol stack), but together they created large chunks of the delays users saw. The third contributed the kind of delay variability seen earlier and could have added packet losses from router overloads, thus interacting with the second issue. The fourth delay source is strictly a result of TCP stack configuration at the receiver and lowers throughput independently, adding 150 msec to every 31 KB block transfer that itself should have taken just 165 msec (including protocol overhead) to send over a T1 link. The last factor is again strictly due to TCP stack configuration and lowers throughput by theoretically wasting 20 msec per block -- though that waste is hidden by the receiver-forced 150 msec ACK delay at the end of each block.
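As a deliberately rough, illustrative model, the delay sources above can be totaled for the whole 3.3 MB transfer. All inputs are the article's figures except the ~3% header overhead, which is an assumption:

```python
# Rough totals for the 3.3 MB transfer at a slow location. Illustrative
# only: real losses and queuing delays are bursty, not evenly spread.
FILE_BYTES = 3.3e6
MSS = 1460
BLOCK_BYTES = 31_000
T1_BPS = 1.544e6

packets = FILE_BYTES / MSS                 # ~2260 data packets
blocks = FILE_BYTES / BLOCK_BYTES          # ~106 SMB blocks

loss_s = packets * 0.0025 * 2.5            # 0.25% loss at ~2.5 s per retransmission
ack_s = blocks * 0.150                     # 150 ms delayed ACK per block
send_s = FILE_BYTES * 8 / T1_BPS * 1.03    # wire time with assumed ~3% overhead

print(f"losses ~{loss_s:.0f} s, delayed ACKs ~{ack_s:.0f} s, raw send ~{send_s:.0f} s")
```

Together these three terms account for roughly 48 seconds; the rest of the observed two minutes is consistent with the congestion queuing and undersized-window effects described above.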

⇒ Continue reading part 6: Troublesome TCP behavior.

Alexander B. Cannara, PhD, is an electrical engineer, a software and networking consultant, and an educator. He has 18 years of experience in the computer-networking field, including 11 years in managing, developing and delivering technical training. He is experienced in many computer languages and network protocols and is a member of IEEE, the Computer Society, and the AAAS. Alex lives with his wife and son in Menlo Park, California.

This was last published in September 2006
