Very seldom will you see an application achieve 100M bit/sec. throughput. Here are some of the factors that affect it.
1. Bandwidth vs. Latency
100M bit/sec. defines the capacity of a link and the clock rate at which data is inserted onto the wire. A 100M link allows multiple users to transmit and receive files simultaneously, provided they do not over-utilize the link. Latency is defined as the time taken between a request for information and its subsequent reply. If you have a 100M link with 100 milliseconds of latency, then the bandwidth-delay product (bandwidth x latency) is 10M bits: that is how much data must be "in flight" to keep the link full, and a sender that cannot keep that much data unacknowledged will see far less than 100M bit/sec. of throughput. And don't forget to check peak utilization on the wire. Processing and queuing delays at store-and-forward devices impact both bandwidth and latency.
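The arithmetic above can be sketched in a few lines. The 100M bit/sec. link and 100 ms latency are the article's example; the 64 KB window is an illustrative assumption (the classic TCP window limit without window scaling):

```python
# Sketch: bandwidth-delay product and window-limited throughput.
bandwidth_bps = 100e6      # link capacity: 100 Mbit/s
rtt_s = 0.100              # round-trip latency: 100 ms

# Data that must be "in flight" to keep the pipe full:
bdp_bits = bandwidth_bps * rtt_s
print(f"bandwidth-delay product: {bdp_bits / 1e6:.0f} Mbit")

# If the sender's window is smaller than the BDP, throughput is
# capped at window / RTT, no matter how fast the link is.
window_bytes = 64 * 1024   # assumed 64 KB TCP window
throughput_bps = (window_bytes * 8) / rtt_s
print(f"window-limited throughput: {throughput_bps / 1e6:.1f} Mbit/s")
```

With these assumed numbers the 100M link delivers only about 5 Mbit/sec. to a single sender, which is why latency matters as much as capacity.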
2. Protocol overhead
Application data is exchanged between end-user stations on a peer-layer basis (the OSI model). Some applications report throughput based on the time taken to move a certain amount of data, but the application has no knowledge of the overhead associated with the lower-layer protocols (session, transport, network, data link). Each layer adds a protocol header on transmit and strips it off on the receive side.
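A rough sketch of how those stacked headers eat into each frame. The header sizes below are the common minimums (Ethernet II plus IPv4 and TCP with no options), assumed here for illustration:

```python
# Sketch: each layer's header shrinks the share of the frame that
# actually carries application data.
ETH_HEADER_FCS = 14 + 4   # Ethernet II header + frame check sequence
IP_HEADER = 20            # IPv4 header, no options
TCP_HEADER = 20           # TCP header, no options

def efficiency(app_bytes):
    """Fraction of the frame occupied by application data."""
    frame = app_bytes + TCP_HEADER + IP_HEADER + ETH_HEADER_FCS
    return app_bytes / frame

for payload in (64, 512, 1460):
    print(f"{payload:5d} bytes of data -> "
          f"{efficiency(payload) * 100:5.1f}% of the frame is data")
```

A full 1460-byte payload fills over 96% of the frame, while a 64-byte payload wastes nearly half the frame on headers.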
3. TCP windowing
Connection-oriented transport protocols such as TCP provide mechanisms for flow and congestion control. TCP uses a window parameter to determine how many packets can be sent before waiting on an acknowledgement (ACK) from the receive side. If the window size is set to send one packet at a time and wait for the acknowledgement, then throughput will be less than optimal. When the window size is set to send multiple packets, throughput increases. The other consideration is how fast the client and server are processing data and sending acknowledgements.
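The effect of window size can be sketched numerically. All figures here are illustrative assumptions (a 100M bit/sec. link, 50 ms round trip, 1460-byte segments):

```python
# Sketch: how the TCP window caps throughput.
link_bps = 100e6   # assumed link capacity
rtt_s = 0.050      # assumed round-trip time
mss = 1460         # assumed bytes of payload per segment

def throughput_bps(segments_per_window):
    """Throughput when the sender pauses for an ACK every window."""
    window_bits = segments_per_window * mss * 8
    return min(link_bps, window_bits / rtt_s)

for w in (1, 8, 64, 512):
    print(f"window = {w:3d} segments -> "
          f"{throughput_bps(w) / 1e6:7.3f} Mbit/s")
```

With a one-segment window (send, then wait for the ACK), the link crawls along well under 1 Mbit/sec.; only a large window lets the sender fill the pipe.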
4. MTU (Maximum Transmission Unit)
Depending on the topology, throughput increases when data is exchanged at the MTU allowed by the physical topology. For Ethernet the MTU is 1500 bytes of payload (1518 bytes per frame with the Ethernet overhead included). If an application sends much less than the MTU per packet, throughput suffers.
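A sketch of the MTU effect on a 100M bit/sec. Ethernet link. Besides the 18 bytes of header and FCS inside the 1518-byte maximum frame, each frame also costs an 8-byte preamble and a 12-byte inter-frame gap on the wire (standard Ethernet figures, added here for completeness):

```python
# Sketch: goodput on 100 Mbit/s Ethernet vs. payload size.
LINK_BPS = 100e6
FRAME_OVERHEAD = 18        # Ethernet header + FCS
WIRE_OVERHEAD = 8 + 12     # preamble + inter-frame gap

def goodput_mbps(payload_bytes):
    """Application-data rate after per-frame overhead, in Mbit/s."""
    wire_bytes = payload_bytes + FRAME_OVERHEAD + WIRE_OVERHEAD
    return LINK_BPS * payload_bytes / wire_bytes / 1e6

print(f"{goodput_mbps(46):.1f} Mbit/s with minimum-size frames")
print(f"{goodput_mbps(1500):.1f} Mbit/s with full-size frames")
```

Full-size frames deliver roughly 97% of the line rate as useful data; minimum-size frames deliver barely half.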
From the Networking Challenge Discussion Forum: September 2002 by rgptallc
A network card takes the parallel data coming off the bus and converts it into a serial bit stream. The speed of the card is fixed, let's say 10M bit/sec. or 100M bit/sec. In this case, we'll use 100M bit/sec.
LAN wiring is a little trickier. For example, Category 5 cabling is rated for transmitting 100M bit/sec. It can handle 10M bit/sec. easily. This does not mean it can't carry signals faster than 100M bit/sec., just that the signal becomes unreliable: a higher-speed signal may cause crosstalk and other problems.
Now, we take two computers, use a crossover cable (a cable that links transmit to receive on each end) to connect them, and transmit data. As we send data, whether it is a single packet or a large database file, the data travels the wire at 100M bit/sec.
To give you an analogy, I can drive my car down an empty highway at 60 mph. I can also drive in very heavy traffic at 60 mph. There are more cars (packets) traveling on the highway, entering and exiting, but the speed is still 60 mph.
We now have to discuss what happens in real life. Most LANs use Ethernet, which specifies how the data is placed on the wire, how the data is broken down into 1s and 0s, etc. The amount of information that flows across the LAN can vary greatly due to packet size. The smallest Ethernet frame is 64 bytes; the largest is 1518 bytes (carrying up to 1500 bytes of data). Since each packet has a header and a checksum included with the data, it is more efficient to send the data in large packets. Going back to the highway analogy, you can get 70 people down the highway faster if you use a bus as opposed to each person driving their own car.
You also have the CSMA/CD technology used in Ethernet, where the machines "listen" to the wire to see if anyone else is transmitting. If two machines transmit at the same time, there is a collision. Both machines back off and try to transmit again. High collisions = low throughput. Since switches isolate collision domains, they are very important for sustaining data throughput on a LAN. There is also the matter of full duplex versus half duplex. Full duplex allows machines to transmit and receive at the same time, giving you 100M bit/sec. in both directions. If you run half duplex, the 100M bit/sec. must be shared by the transmit and receive functions.
So what does this all mean for LAN speeds? If you have many users sending many small packets on a shared medium like a hub, the network BANDWIDTH is still 100M bit/sec., but the THROUGHPUT may be as low as 10M bit/sec. or less! Also, keep in mind that the speed is 100M bit/sec., while files are measured in bytes (8 bits = 1 byte).
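That bits-versus-bytes distinction is worth a quick calculation. The 700 MB file size below is an assumed example, not from the article:

```python
# Sketch: why a "100 Mbit/s" link never moves 100 "MB" per second.
link_bps = 100e6
bytes_per_sec = link_bps / 8       # dividing by 8 converts bits to bytes
print(f"{bytes_per_sec / 1e6:.1f} MB/s at best")

# Time to move an (assumed) 700 MB file at the raw line rate,
# before any protocol overhead or contention:
file_bytes = 700e6
print(f"{file_bytes / bytes_per_sec:.0f} seconds")
```

Even with zero overhead, the best case is 12.5 MB/s; real transfers take longer once headers, ACKs, and contention are added.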
From the Networking Challenge Discussion Forum: September 2002 by Steve Olson