Several typical assumptions can be made to produce the simplest estimate of a steady-state transfer rate:
- There is no loss.
- There is no cross traffic (WAN link is empty).
- A single TCP connection is involved.
- A single request transfers all images over a period longer than the TCP slow-start phase.
- Transfer is largely one-way (apart from control traffic).
- Network is the bottleneck.
- The WAN connection is T1 or better.
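Under these assumptions, the estimate reduces to total payload divided by link rate. A minimal sketch of that arithmetic (the image count and size are illustrative assumptions, not figures from your system):

```python
# Naive steady-state estimate: with no loss, no cross traffic, and the
# network as the bottleneck, transfer time ~= total bits / link rate.

T1_BPS = 1.544e6          # T1 line rate in bits per second
NUM_IMAGES = 20           # illustrative assumption
IMAGE_BYTES = 500_000     # illustrative assumption: 500 kB per image

total_bits = NUM_IMAGES * IMAGE_BYTES * 8
transfer_seconds = total_bits / T1_BPS
print(f"{transfer_seconds:.1f} s")  # ~51.8 s for this example
```

Anything that violates the assumptions above (loss, cross traffic, per-file connections) pushes the real number higher than this lower bound.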
However, you have already mentioned that this is a Web-based system, which suggests that many of these assumptions don't hold. Web transfers are often one connection per file, so a transfer never reaches steady state beyond TCP slow start: each file is handled separately, the connection ramps up until the file is done, and the next file starts the ramp-up over again. There is also per-connection setup overhead, instead of the negligible overhead of opening a single connection for all files. On the other hand, the Web client may use a persistent (keep-alive) connection, an option supported in current HTTP implementations. Either behavior can affect the estimate significantly.
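The slow-start penalty can be sketched with a simplified model: the congestion window starts at one segment and doubles every round trip, with no loss and no receiver-window limit (all illustrative assumptions, including the RTT and file size chosen here):

```python
# Sketch of why per-file transfers hurt: each new connection restarts
# TCP slow start, so short transfers never reach the link rate.
# Simplified model: cwnd starts at 1 MSS and doubles every RTT.

MSS = 1460          # bytes per TCP segment (typical Ethernet MSS)
RTT = 0.2           # seconds; a longer-latency WAN path (assumption)

def slow_start_time(file_bytes, mss=MSS, rtt=RTT):
    """Seconds needed to push file_bytes, doubling cwnd each round trip."""
    sent, cwnd, rounds = 0, 1, 0
    while sent < file_bytes:
        sent += cwnd * mss   # segments delivered this round trip
        cwnd *= 2            # exponential window growth in slow start
        rounds += 1
    return rounds * rtt

t = slow_start_time(100_000)                  # one 100 kB image
throughput_kbps = 100_000 * 8 / t / 1000      # effective rate for that file
```

With these numbers a 100 kB file takes seven round trips, and every additional file on a fresh connection pays the setup and ramp-up cost again, which is what a persistent connection avoids.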
Unless the WAN connection is very low bandwidth, I would guess you probably won't see better than around 100 kbps on relatively low-latency connections, and perhaps as little as 5-10 kbps on higher-latency ones.
However, that's based on very limited information. For a deeper treatment, I'd suggest looking into bandwidth-delay product approximations of TCP performance, such as those described in a recent paper by Dovrolis et al.
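The core of the bandwidth-delay product argument is that a TCP connection's throughput is capped at window size divided by round-trip time, regardless of link capacity. A minimal sketch of that bound (the 64 kB window is the classic unscaled maximum; the sample RTTs are assumptions):

```python
# Bandwidth-delay product bound: throughput <= window_bytes / RTT.
# A connection can have at most one window of data in flight per RTT.

WINDOW_BYTES = 65_535   # maximum TCP receive window without window scaling

def max_throughput_kbps(rtt_seconds, window=WINDOW_BYTES):
    """Upper bound on single-connection TCP throughput in kbps."""
    return window * 8 / rtt_seconds / 1000

for rtt in (0.05, 0.2, 0.5):
    print(f"RTT {rtt*1000:.0f} ms -> {max_throughput_kbps(rtt):.0f} kbps cap")
```

This is only the window-imposed ceiling; loss, slow start, and per-file connections (as discussed above) pull the achievable rate well below it.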
This was first published in July 2004.