Duplex mismatch: Stop the conflicts, part 2

Today, the majority of modern cabling supports full-duplex, and there is a significantly diminished need for half-duplex -- or at least, that is what one would like to think. In this column, Loki Jorgenson examines extenuating circumstances where half-duplex may be appropriate and offers best practices where its use may still be needed.

Despite improvements in network technologies, duplex conflicts refuse to go away. They have subtly evolved as Ethernet has evolved. In many cases, the so-called cures have only obscured the problem rather than actually fixing anything. Part 1 explored the nature of the duplex conflict itself. This part looks at extenuating circumstances, possible futures, and best practices.

Today, the majority of modern cabling supports full-duplex, and the need for half-duplex is significantly diminished -- or at least, that is what one would like to think. Duplex conflict should go away simply because the use of half-duplex should stop -- or, at any rate, half-duplex should become a mode that has to be invoked deliberately, reserved for special circumstances where full-duplex is not possible or appropriate (e.g., wireless). Half-duplex was originally a means of sharing media (i.e., a single pair of wires) for both transmission and reception.

However, legacy interfaces (those with only half-duplex), low-cost NICs (found in many simple Ethernet devices), and the general intention to support both modes have predominated. Instead of moving beyond half-duplex, Ethernet continues to wear it like a bad haircut. Half-duplex even appears in the Gigabit Ethernet spec, IEEE 802.3ab (even though that standard also requires all four pairs of conductors). Thankfully, the industry has largely chosen not to support it. Half-duplex finally disappeared with the release of 10 Gigabit Ethernet (IEEE 802.3an for UTP cabling).

More significantly, in modern NICs designed to support multiple rates (e.g., 10/100/1000 Mbps), the slower rates must still support half-duplex. Note that prior to Gigabit Ethernet, duplex conflict was easily the most common degradation problem around; a 2001 NASA study suggested that 75% of network issues were related to duplex conflict. It seemed likely that Gigabit would help to relieve this problem by making half-duplex moot. The opposite effect has been observed, however.

In order to make the new mixed-rate GigE NICs consumer friendly, vendors have been playing with configuration interfaces to ensure that the NICs always tend to connect at some speed through the use of auto-sensing in specific combination with auto-negotiation. In other words, connecting a GigE NIC to a 100 Mbps switch (or a 10 Mbps hub) should almost always work. But in doing so, they have invited the duplex conflict back into the fold because auto-negotiation often causes problems when it isn't applied uniformly. So it would seem that we are simply stuck with duplex conflicts for the near (and possibly far) future.
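To see what auto-negotiation has actually settled on, most operating systems can report the negotiated link state directly. As an illustrative sketch, on a Linux host the ethtool utility shows the result (the interface name "eth0" is an assumption; substitute your own, and note that exact output fields vary by driver):

```shell
# Query the link settings this NIC has actually negotiated.
# (Interface name "eth0" is illustrative only.)
ethtool eth0
# Typical fields in the output include:
#   Speed: 100Mb/s
#   Duplex: Full
#   Auto-negotiation: on
# If this end reports "Duplex: Full" while the switch port it connects
# to reports half-duplex (or vice versa), the link has a duplex mismatch.
```

Checking both ends of a suspect link this way is usually the fastest confirmation that a mismatch exists.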

Possible solutions and best practices

What can you do about it? Well, forget about QoS fixing the situation -- in most cases where duplex conflict takes place (i.e., the LAN), QoS is typically applied at Layer 3 where packets are queued and handled according to TOS or DSCP settings in the IP header. Duplex conflicts effectively occur at Layer 2, long before any inspection of the IP headers takes place. Frames collide and packets are corrupted and discarded before QoS can have any effect. Even VLANs and other segregation technologies do not protect traffic from the effects of mismatches. VoIP, for example, cannot be protected from data flows triggering loss from a duplex conflict. It is important to remember that, in general, QoS does not fix broken networks. It only changes the way packets are handled under normal conditions.

In search of effective solutions, vendors continue to work on improved implementations of half-duplex and auto-negotiation. Some implementations have simply been broken. Fixing these has reduced the incidence of auto-negotiation faults. In other cases, vendors have introduced custom variations of the CSMA/CD (collision detection) behavior that reduce levels of loss in the case of conflict or improve the likelihood of correct duplex matching. A popular router manufacturer offers a form of three-quarters duplex that is robust to loss under mismatch conditions but, as a consequence, diminishes the two-way data transfer capacity to roughly 75% of its true peak.

In general, though, the challenge for administrators is to deal with conflicts in light of heterogeneous mixtures of equipment, NICs and drivers, application environments, and other factors unique to their networks. There is simply no single solution. Overall, best practices are still the most valuable tool in dealing with duplex conflicts. The goal is to prevent them from happening wherever possible -- and to discover and correct them as quickly as possible when they do.

Discovering and localizing a duplex mismatch is hard. Duplex conflicts are notoriously difficult to find and can waste a tremendous amount of time. A few technologies are available that can diagnose duplex issues automatically, but as with most pests, it is far easier to take preventive measures that effectively eliminate their presence than it is to chase after individual instances.
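Short of dedicated diagnostic tools, the classic fingerprint of a mismatch is lopsided error counters: late collisions and carrier-sense errors on the half-duplex side, FCS/CRC and runt-frame errors on the full-duplex side. A hedged sketch of where to look on a Linux host (interface and counter names vary by driver):

```shell
# Per-interface error statistics. (Interface name "eth0" is illustrative.)
ip -s link show dev eth0          # RX/TX errors, drops, collisions

# Driver-level counters; grep for the mismatch-related ones.
ethtool -S eth0 | grep -Ei 'collision|crc|align|carrier'
```

Counters that climb steadily under load -- rather than sitting near zero -- on an otherwise healthy link are a strong hint that the two ends disagree about duplex.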

Best practices primarily revolve around how auto-negotiation is utilized. Several years ago, it was regularly contested whether auto-negotiation should be trusted at all -- the alternative was to manually set duplex modes on all interfaces (e.g., set each interface to 100 Mbps full-duplex and configure anything else on a special-case basis). In very large networks, the manual hard-set approach was not very efficient -- it simply didn't scale. However, the approach was, in general, very reliable when applied uniformly and was preferred by many effective network engineers.

The major problem with auto-negotiation is inconsistent deployment. Mixing manual duplex hard-sets with auto-negotiation is a sure way to generate conflicts. So, best practices were really about discipline in interface configuration management: use manual hard-set or auto-negotiation, but do not mix them, and apply the preferred choice consistently. Consistency turned out to be the key.
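In concrete terms, the two acceptable states for any given link look like this on a Linux host (ethtool syntax; the interface name is illustrative, and the switch port at the other end must be configured the same way):

```shell
# Option 1: both ends auto-negotiate.
ethtool -s eth0 autoneg on

# Option 2: both ends hard-set to the same speed and duplex.
ethtool -s eth0 autoneg off speed 100 duplex full

# Never mix the two: hard-setting one end while the other auto-negotiates
# typically drives the auto-negotiating end to fall back to half-duplex,
# producing exactly the conflict described above.
```

Whichever option is chosen, the point is that it is chosen once, documented, and applied uniformly across the network.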

Recently, best practices have become somewhat more complicated. As vendors strive to make their hardware more reliable, they sometimes create new problems. In one example, certain NIC chipsets do not work at all with a popular brand of router unless both interfaces are set to auto-negotiate. Where manual hard-set was previously in use, enterprises have thus been forced to change their management policies. In addition, many GigE NICs do not support hard-setting 1000 Mbps (1000BASE-T in fact requires auto-negotiation to resolve master/slave clocking), which likewise forces the use of auto-negotiation.

One recent version of best practices applies auto-negotiate in the LAN and at the edge while using manual hard-set in the core. It seems to work well with most enterprise networks and provides an optimal mix of control, scalability and reliability. To back it up, well-trained support people armed with diagnostic tools and procedures are recommended. Of course, the key remains consistency, discipline and good network hygiene -- these are the hallmarks of good network engineering.

Apparent Networks white paper
IEEE 802.3ab
IEEE 802.3ak
IEEE 802.3an

Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. Also, he acts as Adjunct Professor of Mathematics at Simon Fraser University where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE and Internet2.
This was last published in August 2006
