Q

If I only know some information measured at a client, such as the speed of one transaction between this client and a server, how can I obtain delay and loss statistics between these two computers during the transaction period? Are there equations that express the relation between delay, loss and speed? If not, what else do I need to know to obtain delay and loss statistics? Please don't tell me to measure delay and loss directly, because I don't want to introduce any extra traffic. Even passive measurement could introduce a small amount of extra traffic if a client wants to know the route performance between itself and the server.
This approach to application performance is applied by certain technologies, such as the high-end I3 system from Precise. It characterizes the performance of the network by observing everything that the server/application is doing and then inferring that everything else is about the network. If you are constrained as you describe, unable to sniff or probe, this is likely your best approach.

However, it is a difficult science without sufficient information. Here is the simplest characterization:

Let's assume that you know the (mean) transaction time for a particular network application, T1, and the processing time at the host, T2 (where T2 < T1). Then the network time is T3 = T1 – T2.
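As a concrete illustration, here is a minimal Python sketch of that decomposition (the timing values are invented for the example):

# Decompose the mean transaction time into host and network components.
# T1 and T2 are assumed to come from your own measurements; the numbers
# below are illustrative only.
T1 = 0.840   # mean total transaction time in seconds, measured at the client
T2 = 0.120   # mean processing time at the host

T3 = T1 - T2  # time attributed to the network
print("network time T3 = %.3f s" % T3)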

If you don't know T2, or can't even assume that it is small compared to T1, then you are already stuck.

So now you have a cumulative value T3 for the time spent in the network. Let's assume that this application is TCP-based and governed by a typical Reno or Tahoe stack implementation, that a single TCP connection is used, and that the transaction is a simple connection, a request, and a response delivered as a series of data packets.

Now that's assuming a lot already. But this is the simple case.

But there is more!! Let's assume that the amount of data transferred is negligible – in other words, we aren't going to model this transaction in terms of the TCP steady-state formula:

Throughput = 0.7 * MSS / (RTT * sqrt(loss))

(see The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm by Matthew Mathis, Jeffrey Semke, Jamshid Mahdavi)

If you did, you would still only know the delay (RTT) and loss as a product and couldn't separate them. You would need to sniff the RTT of a single SYN-ACK sequence to pull out a reliable RTT. Needless to say, this needs to be done statistically over a large number of transactions.
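To make that concrete, here is a hedged Python sketch (the MSS, throughput and RTT values are invented) showing that the steady-state formula only yields the product RTT * sqrt(loss), and that an independently obtained RTT is needed to factor loss out:

# Steady-state model quoted above: throughput = 0.7 * MSS / (RTT * sqrt(loss))
# Rearranged, a throughput measurement gives only the product RTT * sqrt(loss).
MSS = 1460.0           # bytes per segment (typical Ethernet MSS)
throughput = 250000.0  # bytes per second, measured -- illustrative value

product = 0.7 * MSS / throughput   # equals RTT * sqrt(loss); can't be split further
print("RTT * sqrt(loss) = %.6f" % product)

# Only with an RTT measured some other way (e.g. sniffing one SYN/SYN-ACK
# exchange) can delay and loss be separated:
rtt = 0.050                        # seconds, assumed known independently
loss = (product / rtt) ** 2
print("implied loss rate = %.4f" % loss)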

So without any significant slow-start or steady-state effects (assuming a very small amount of data transferred), you would assume a certain number N of SYN-ACK/DATA-ACK exchanges, take the network part of the transaction time to be made up of those N RTTs, and assume that, over a statistical collection of transactions, the variations in timing represent any losses and the time to retransmit.
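Again only as a sketch under those same assumptions (N round-trip exchanges per transaction, sample data invented), the statistics could be pulled apart roughly like this:

import statistics

# Per-transaction network times T3 (seconds) collected over many transactions.
# Most samples cluster near N * RTT; a few are inflated by retransmissions.
t3_samples = [0.21, 0.20, 0.22, 0.21, 1.05, 0.20, 0.23, 0.98, 0.21, 0.22]
N = 4  # assumed number of SYN-ACK/DATA-ACK exchanges per transaction

baseline = statistics.median(t3_samples)  # robust against the inflated samples
rtt_estimate = baseline / N
print("estimated RTT ~= %.0f ms" % (rtt_estimate * 1000))

# Transactions well above the baseline are taken to contain at least one
# retransmission; their fraction serves as a crude loss indicator.
suspect = [t for t in t3_samples if t > 2 * baseline]
print("transactions with probable loss: %d of %d" % (len(suspect), len(t3_samples)))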

I'll stop now. My point? In order to get down to a single equation of any real meaning, a large number of fairly specific assumptions need to be made. Or more information is needed.

I hope this isn't discouraging you. Depending on what you actually have as data, you may well be able to characterize the network behavior reasonably well. I only have what you described in your question to go on. There just isn't a generic catch-all expression.

This was last published in March 2004
