I'm confused about ICMP ping tests. Can you explain whether a continuous ping test affects bandwidth and data flow, and which tool is best for monitoring packet loss?
As for the first part of your question, whether a continuous ping test affects bandwidth, data flow and so on depends on three things:
- A. The size of the ping packet;
- B. The frequency with which you are pinging (in your case "continuous"); and
- C. The number of hosts you are pinging.
The following formula will help you determine the effective bandwidth being generated when using pings:
(packet size (bytes) x 8 (converts to bits)) x 2 (because there is also a reply) x number of pings x number of hosts = total bits generated.
| Calculation input | Value |
| --- | --- |
| Bits per ping, including replies (37 x 8 x 2) | 592 |
| # of pings | 100 |
| # of hosts | 1 |
| Total kbits ((592 x 100 x 1) / 1000) | 59.2 |
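The formula can be sketched as a small Python function. This is purely illustrative; the function name and parameters are mine, not part of any tool:

```python
def ping_bandwidth_kbits(packet_size_bytes: int, num_pings: int, num_hosts: int) -> float:
    """Total traffic generated by a batch of pings, in kilobits.

    Each ping sends a request and receives a reply, so the packet
    size is counted twice.
    """
    bits_per_ping = packet_size_bytes * 8 * 2  # bytes -> bits, request + reply
    total_bits = bits_per_ping * num_pings * num_hosts
    return total_bits / 1000  # bits -> kilobits

# 100 pings of 37-byte packets to 1 host:
print(ping_bandwidth_kbits(37, 100, 1))  # 59.2
```

Plugging in larger packet sizes, ping counts, or host counts shows how quickly the traffic scales.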
To confirm the above calculation, I ran a test on my corporate network with the following results. The test consisted of one hundred (100) 37-byte ICMP pings, run one after the other, from my Windows XP desktop computer (ping -n 100 espn.com). I'm sure ESPN will forgive me the extra traffic.
The table below shows the results of the test and displays the number of packets and the kbits of traffic for both the REQUESTS (outbound) and the REPLIES (inbound). As you can see, there were 100 packets in each direction (since we received a reply for each request). The 100 pings generated 59.2 kbits of traffic. The entire test took 100 seconds, for an effective throughput of 0.592 kbits/sec. If you had an application that allowed you to do the same test on 100 different hosts, you would effectively fill up a 56k link (0.592 x 100 = 59.2 kbps).
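The throughput arithmetic can be checked the same way; the numbers come from the test above, while the variable names are mine:

```python
total_kbits = 59.2   # traffic generated by the 100-ping test
duration_s = 100     # one ping per second for 100 seconds

throughput_kbps = total_kbits / duration_s   # per-host throughput, kbits/sec
print(throughput_kbps)

# Running the same test against 100 hosts simultaneously multiplies
# the load, enough to saturate a 56k link:
print(throughput_kbps * 100)
```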
The effect on other network users depends on how latency-sensitive your applications are, and how much bandwidth is available between the users of those applications.
As for the second part of your question regarding the best tool for monitoring the packet loss, let me state upfront that I have a personal bias because I am currently working for a company (Network Physics) that has a product which passively measures real-time packet loss by monitoring the actual TCP retransmissions on the network. A TCP retransmission is the byproduct of packets being lost or arriving too late at their destinations (just as bad as being lost). This retransmission represents the true impact of packet loss on TCP-based applications and users. Products like these infer that for each retransmitted packet, there has been a packet lost on the network. Given concerns about adding load to the network, I believe this passive methodology is superior, especially across large WANs.
That said, under the right circumstances, pings can be used to create a barometer for packet loss on the network. Its effectiveness as a barometer will depend on how often you sample for packet loss across the various network paths taken to reach your users. The more frequently you measure, the better your barometer. Of course, this means introducing more load on the network, especially when you have a large WAN and therefore need to test many paths.
I added the caveat "under the right circumstances" above. This is because there are certain risks in interpreting failures to receive a reply to a ping. Those risks are the following:
- Interpreting where the packet was lost. Because you have no idea whether the request ever made it to the host, or the reply got lost on the return, it is hard to pinpoint where on the network the packet loss occurred. You only know that the ping failed to get a response.
- Network queueing. ICMP pings usually have the lowest priority across a network end-to-end. Therefore, if any router along the path is saturated with packets, then the ICMP ping is likely to be the first to get dropped by that router. Regular applications may exhibit no packet loss; yet, the ping tests will show packet loss.
- ICMP filtering. Pings are often used to launch denial of service (DoS) attacks or to discover a network topology. For this reason, many firewalls are configured to block ICMP pings. Additionally, many desktop operating systems and routers have mechanisms for limiting the number of ICMP pings they will respond to in a given time frame. Therefore, it's possible to get failures when, in fact, there is connectivity, leading to false positives with regard to packet loss.
Related Q&A from Dwight Barker