IP networks have lately been under tremendous pressure to evolve. Compared to the last few decades, the last few years have seen unprecedented change in IP networking. Although the job of handling packets seems relatively straightforward, complexity arises from increasingly demanding application requirements, the need for enhanced security, and complications introduced by new technologies. What used to fit handily into the OSI Layer model (if anything ever did) now sits in a tangled heap on most network diagrams.
So what will the future bring?
Some expert futurists anticipate another 10 years of highly dynamic evolution in networking. Certainly we have all become familiar with the process of incremental improvements (or "not-so improvements" in some cases) and occasional disruptive discontinuities in technological progress. But the jumps are few and far between. And sometimes, as with the shift to everything-over-IP and Ethernet-as-king, the changes don't reflect any new technologies -- just a new market attitude.
Looking over the recent past, we can see how IP networking has been complicated (or "improved" if you prefer) by a number of "new" developments:
- IPv6
- Gigabit Ethernet
- wireless
- TCP stacks
- performance metrics for IP networks
- edge controls
- NIC re/design
For the benefit of improved foresight, let's consider how each of these has impacted IP networks.
IPv6
Spurred by the dire threat of inadequate supplies of IP addresses, the rush to implement IPv6 has faltered somewhat. Even though the Internet2 working group for IPv6 has largely concluded its work, there is little sign that IPv6 will do much more than continue to loom on the horizon. NAT has provided breathing room that may be viewed either as a blessing or a curse.
Gradual IPv6 adoption has led to a variety of implementations that either tunnel IPv4 through IPv6 or, more likely, the converse. While this is primarily in core networks and thus well handled by smart engineers, the mixing of IP versions inevitably contributes to the complexity of Layer 3 interfaces that must accommodate both. Reports of problems with mixing are rare -- however, this is simply one more complication introduced in recent years.
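To make the mixing concrete: a dual-stack server listening on an IPv6 socket typically sees IPv4 clients as IPv4-mapped IPv6 addresses -- one more representation that Layer 3-aware code must accommodate. A minimal sketch using Python's standard ipaddress module (the helper name is my own):

```python
import ipaddress

def unwrap_mapped(addr: str) -> str:
    """Return the embedded IPv4 address when `addr` is an
    IPv4-mapped IPv6 address (::ffff:a.b.c.d); otherwise
    return the address unchanged."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6 and ip.ipv4_mapped is not None:
        return str(ip.ipv4_mapped)
    return str(ip)
```

An IPv4 client connecting to a dual-stack listener shows up as "::ffff:192.0.2.1"; logging or access-control code that compares raw address strings will miss it unless it unwraps the mapped form first.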
An anticipated complication will be the changes in Layer 3 behaviors in IPv6, particularly as they relate to the best-effort diagnosis and visibility provided by ICMP. On paper, IPv6 offers a number of fixes and improvements over the original IPv4 implementations -- surely we have learned from our mistakes and also learned to deal with new challenges like security. However, as IPv6 rolls out, it seems possible (if not likely) that the idiosyncrasies of new algorithms for path MTU discovery and for IPv4-to-IPv6 translation will introduce some unanticipated behaviors. In many instances, IPv4 messages are silently dropped when passing through IPv6 -- ostensibly with no impact. Only time will tell.
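The feedback loop that path MTU discovery depends on can be sketched in a few lines. This is a toy simulation of the RFC 1191-style mechanism, not real socket code: the hop-reports-its-MTU shorthand stands in for the ICMP "fragmentation needed" / "packet too big" errors. If those messages are silently dropped, the loop below would simply never terminate -- the well-known black-hole failure mode.

```python
def discover_path_mtu(link_mtus, start=9000):
    """Toy RFC 1191-style path MTU discovery.  The sender probes
    with don't-fragment packets; the first hop whose MTU is smaller
    than the probe drops it and reports its own MTU (in reality,
    via an ICMP error); the sender lowers its estimate and retries."""
    size, probes = start, 0
    while True:
        probes += 1
        bottleneck = next((m for m in link_mtus if m < size), None)
        if bottleneck is None:       # probe traversed every hop
            return size, probes
        size = bottleneck            # the ICMP error carried this MTU
```

For a path with hop MTUs [9000, 1500, 9000], the probe converges to 1500 on the second attempt -- but only because the bottleneck's report made it back to the sender.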
Gigabit Ethernet
Gigabit Ethernet might be viewed as "more, faster" -- however, a number of changes (or lack thereof) relative to Ethernet and Fast Ethernet (10 and 100 Mbps) have made IP ever so slightly more complex. The obvious instances are speed/duplex auto-negotiation and MTU.
Just when we might have leapt fearlessly into a future without half-duplex, we found ourselves handed a (seemingly) half-baked standard that allows 1000 Mbps half-duplex. It is hard to imagine what the good reasons for this were -- but surely they exist. Few vendors appear to support it. However, there it is in the standard. Needless to say, vendors must continue to design NICs that can ably negotiate the correct data rate (10, 100 or 1000) and figure out the duplex as well.
Anyone watching the progress of NIC drivers won't have missed notable companies like Intel regularly releasing new drivers for their desktop gigE cards. Close inspection shows that they have been challenged to find an effective means of implementing auto-negotiation. For example, the Intel PRO/1000 XT Server adapter (for Windows) has offered a variety of settings to the user over the last few years:
Date        Version  Supported User Settings
6/20/2002   …        10 Mbps Half, 10 Mbps Full, 100 Mbps Half, 100 Mbps Full, Auto-detect
3/11/2003   …        10 Mbps Half, 10 Mbps Full, 100 Mbps Half, 100 Mbps Full, 1000 Mbps Full, Auto-detect
3/12/2004   …        10 Mbps Half, 10 Mbps Full, 100 Mbps Half, 100 Mbps Full, Auto-Detect, Auto-negotiate 1000 Mbps
An educated guess for the reason behind these constant changes is that users were getting into trouble hooking gigE cards up to 10/100 switches, not seeing a connection, and generating undesirable support calls. Unfortunately, the tried-and-true method of avoiding duplex conflicts (hand-setting the NIC to a specific speed and duplex) is not well supported by the majority of the vendors in current driver versions.
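The negotiation logic those drivers wrestle with can be modeled crudely. The sketch below is a simplification of IEEE 802.3 Clause 28 behavior (the real priority table distinguishes more PHY types, and 1000BASE-T requires auto-negotiation): when both ends negotiate, the highest-priority common mode wins; when one end is hand-set at 10/100, the negotiating end's parallel detection can sense the speed but not the duplex, so it falls back to half duplex.

```python
# Simplified 802.3 auto-negotiation priority, best mode first.
PRIORITY = ["1000/full", "1000/half", "100/full", "100/half",
            "10/full", "10/half"]

def resolve_link(a_modes, b_modes):
    """Both ends auto-negotiate: the highest-priority mode that
    both advertise wins; None means no common mode (no link)."""
    for mode in PRIORITY:
        if mode in a_modes and mode in b_modes:
            return mode
    return None

def parallel_detect(forced_mode):
    """One end hand-set at 10/100, the other auto-negotiating:
    parallel detection senses the speed but not the duplex, so the
    auto end comes up half duplex -- a mismatch whenever the forced
    end was set to full."""
    speed, _duplex = forced_mode.split("/")
    return speed + "/half"
```

A server NIC hand-set to 100/full facing an auto-negotiating switch port leaves the switch side at "100/half" -- precisely the duplex mismatch the hand-setting was meant to avoid, and one reason the old remedy serves gigE users poorly.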
The other loose cannon on the deck is the unbinding of Ethernet from any convention for MTU. Naturally gigE continues to work at 1500 bytes, the standard for 10/100 Mbps. However, it is also possible to seek performance advantages (and significant ones at that) using larger "jumbo" packets. The only problem is that there is no fixed upper limit for gigE. Many groups, including CANARIE and Internet2, quickly settled on 9000 bytes as a "preferred" size, and vendors are slowly following suit. However, different interfaces will allow different sizes.
If IP worked perfectly -- for example, if RFC 1191 for pMTU discovery wasn't widely broken -- then this wouldn't be much of a problem. However, mixing jumbo and non-jumbo packets can be a significant issue for applications, and in some cases, for hardware -- for example, some switches leak packets across ports and may cause non-jumbo interfaces to react poorly to the apparently "corrupt" jumbo packets.
Techniques for reliably mixing jumbo and non-jumbo packets are still not well defined -- the preferred method is to use Layer 3 devices (routers) at all MTU boundaries. However due to the related expense, switches are more likely to be used. Currently VLANs and other virtual methods are required to deploy 1500 and jumbo in parallel.
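The rule of thumb above reduces to a simple invariant, sketched here with hypothetical helper names: every interface on a single Layer 2 segment must agree on MTU, because a switch cannot translate frame sizes, while a routed boundary between segments merely lowers the end-to-end MTU to the smallest segment's value.

```python
def check_segment(mtus):
    """All interfaces on one switched (Layer 2) segment must share
    an MTU; otherwise jumbo frames from one port arrive as oversize
    'giants' on another."""
    return len(set(mtus)) == 1

def path_mtu(segments):
    """Across routed (Layer 3) boundaries, the usable end-to-end
    MTU is the smallest segment MTU -- the router at each boundary
    can fragment or signal via path MTU discovery."""
    for seg in segments:
        if not check_segment(seg):
            raise ValueError("mixed MTUs on one L2 segment: %r" % (seg,))
    return min(seg[0] for seg in segments)
```

A jumbo (9000-byte) server VLAN joined to a standard 1500-byte VLAN through a router yields an end-to-end MTU of 1500; putting both MTUs on one switched segment violates the invariant, which is exactly why VLANs are currently needed to run the two sizes in parallel.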
Wireless
The impact of wireless on IP network complexity hasn't yet been fully felt. Some of the noticeable areas relate to security and performance. Arguably, neither of these aspects is the domain of IP. Security should be a middleware issue, and network performance degradation is often a consequence of inappropriate TCP stack response to the lossiness of the medium.
However, efforts to secure wireless and to make it perform better have had perceptible impacts on network deployment. Many network administrators reluctantly approach WLAN and end up devising contorted means of partitioning wireless off from the rest of their secure wired infrastructure. And the challenges of coordinating multiple access points within a large organization can add overhead to the typical logical network diagram. Again, there is nothing new here -- just another layer of complexity for IP.
Yet to be fully appreciated, though, are the effects of VoIP and VoWiFi -- any attempt to guarantee WLAN performance through QoS mechanisms will likely result in spectacular contortions in the handling of packets -- and the complexities of mobility as 802.11-to-cellular hand-offs become de rigueur.
TCP stacks
Again, TCP is ostensibly Layer 4 and not an IP consideration. But the tangled heap of IP complexity includes threads from the TCP stack. In particular, the underlying assumption of most common stacks is that packet loss is due to congestion. Unfortunately, IP provides little insight into the actual cause of packet loss. The regular temptation has been to reinforce IP or TCP with some additional means of distinguishing congestion-induced loss from random loss. Developments are underway to advance the state of the art in TCP, and they already appear in networks with special requirements.
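The congestion assumption can be made concrete with a toy model. The sketch below is a crude Reno-style AIMD simulation (the parameters and helper name are my own, not any particular stack's): because the sender halves its window on every loss, random non-congestive loss -- wireless bit errors, say -- depresses throughput just as effectively as real congestion does.

```python
import random

def aimd_mean_window(loss_rate, rounds=10_000, seed=1):
    """Crude Reno-style AIMD: the congestion window grows by one
    segment per round and is halved on any loss.  The sender cannot
    tell random loss from congestion, so either kind keeps the
    window -- and hence throughput -- small."""
    rng = random.Random(seed)
    cwnd, total = 1.0, 0.0
    for _ in range(rounds):
        total += cwnd
        if rng.random() < loss_rate:
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease
        else:
            cwnd += 1.0                 # additive increase
    return total / rounds               # mean window, segments/round
```

With zero loss the window grows without bound; at a 1 percent random loss rate the mean window stays small -- exactly the behavior that motivates loss-differentiating TCP variants for lossy media.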
For the most part, these new TCP implementations are likely to respect the usual boundaries between Layer 3 and 4. In some cases though, new forms of interface virtualization (such as recently announced by Level5 Networks) seem to dissolve the boundaries entirely. Although the value of the OSI Layer model is variable, it at least tends to simplify network design. In cases like these though, that simplicity is lost in the trade-off for performance.
In my next column, we'll continue the laundry list of dirty socks that are complicating IP networks.
Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. Also, he acts as Adjunct Professor of Mathematics at Simon Fraser University where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2. www.apparentnetworks.com