
Fix before breaking

The tried-and-true standard of TCP/IP isn't able to meet today's high performance demands and we must turn to new schemes for network transport. Here we look at some alternatives.

The cry has gone out -- "The Internet is dead! Long live the Internet!"

Apparently, they say, the tried-and-true standard of TCP/IP isn't able to meet today's high performance demands and we must turn to new schemes for network transport. Worse, it is pointed out, issues of security, authorization, and authentication, along with the movement toward corporate federation, are poorly supported on the current Internet. Best effort isn't always good enough (witness VoIP and data recovery). And so the foundations of the Internet must change.

But into what? And how? There are numerous solutions being offered -- some piecemeal and some complete overhauls. The choices are difficult to assess without the benefit of some larger perspective -- so let's reduce the noise to something clearly identifiable.

At 50,000 feet, three distinct approaches to re-inventing the Internet resolve from the hype:

  • Infranet -- promoted by a coalition of industries led by Juniper and including the likes of Cisco, AOL and Siemens. Roughly speaking, implementing an Infranet solution involves pushing the network up the stack so that mid-path devices offer greater support for the functionalities that underlie effective application performance. In particular, devices like edge routers will offer services to the LAN for adaptive routing, application-specific performance and security. End-hosts will be relieved of some of their current Layer 4-6 duties and local area domain reliance on firewalls will be reduced.

  • Overlay networks -- driven by a partnership between Hewlett Packard and Intel, and based on academic work such as the experimental PlanetLab project. Overlay networks supplement the existing IP networks with an additional layer of middleware functionality, allowing network-based applications to implement effective new Layer 4-6 protocols without significant changes at either end-hosts or mid-path devices. Mid-path servers will mediate the interactions and provide access to critical network services that enable end-to-end applications.

  • Web Services -- rapidly being developed by standards groups such as W3C, and touted as a next-generation Web by a range of companies offering HTTP/XML-based solutions such as BEA, IBM and Microsoft. A logical extension of the last decade of Web development, Web Services require little change to IP networks or deployment of special middleware servers. Instead, applications are augmented through standardized interfaces by other broadly accessible, distributed applications in the form of services.
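To make the Web Services idea concrete, here is a minimal sketch of the kind of standardized XML request/reply exchange the approach relies on, using only Python's standard library. The service name, element names, and the "pathMetrics" interface are hypothetical, invented for illustration -- they do not correspond to any W3C schema or vendor product.

```python
import xml.etree.ElementTree as ET

def build_request(src, dst):
    # Client side: construct a standardized XML request document.
    # "pathMetrics", "source", "destination", "metric" are illustrative names.
    env = ET.Element("request", {"service": "pathMetrics"})
    ET.SubElement(env, "source").text = src
    ET.SubElement(env, "destination").text = dst
    ET.SubElement(env, "metric").text = "rtt"
    return ET.tostring(env, encoding="unicode")

def handle_request(xml_text):
    # Server side: parse the request and answer in the same XML vocabulary.
    # A real service would measure the path; this returns a fixed placeholder.
    req = ET.fromstring(xml_text)
    reply = ET.Element("reply", {"service": req.get("service")})
    ET.SubElement(reply, "rtt-ms").text = "42"
    return ET.tostring(reply, encoding="unicode")

request = build_request("hostA.example.com", "hostB.example.com")
reply = handle_request(request)
rtt = ET.fromstring(reply).findtext("rtt-ms")
print(rtt)
```

The point is architectural: neither side needs new network hardware or middleware servers -- only agreement on the interface, which is exactly why this approach is the fastest to deploy.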

All three approaches offer essentially the same functionality and try to "fix" the Internet problem with the same solution -- middle-layer capabilities in the form of services. Each has advantages and disadvantages. And the players promoting the solutions have a clear stake in having their approach shine through -- network vendors want to extend the network upward; Web middleware manufacturers want to extend the Web downward; and server vendors want to fill the gap with a new brand of network infrastructure.
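The common thread -- a mediating middle layer between end-hosts -- can be sketched in a few lines of Python. This toy relay stands in for an overlay node (or an Infranet edge device): two end-hosts exchange bytes over ordinary TCP/IP, but all traffic passes through a mid-path process where new services could be added without touching either end. The ports, payloads, and uppercasing "service" are purely illustrative.

```python
import socket
import threading

def make_listener():
    # Bind to port 0 so the OS picks a free port (avoids collisions).
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    return srv, srv.getsockname()[1]

def relay(srv, dest):
    # Mid-path node: accepts one connection and mediates it end to end.
    # A real overlay node would add Layer 4-6 services here (adaptive
    # routing, retransmission, encryption) transparently to both hosts.
    conn, _ = srv.accept()
    data = conn.recv(4096)
    with socket.create_connection(dest) as out:
        out.sendall(data)
        conn.sendall(out.recv(4096))
    conn.close()
    srv.close()

def echo_upper(srv):
    # Stand-in for the far end-host: replies with the payload uppercased.
    conn, _ = srv.accept()
    conn.sendall(conn.recv(4096).upper())
    conn.close()
    srv.close()

relay_srv, relay_port = make_listener()
echo_srv, echo_port = make_listener()
t1 = threading.Thread(target=echo_upper, args=(echo_srv,))
t2 = threading.Thread(target=relay, args=(relay_srv, ("127.0.0.1", echo_port)))
t1.start()
t2.start()

# End-host A talks only to the mid-path node, never directly to B.
with socket.create_connection(("127.0.0.1", relay_port)) as c:
    c.sendall(b"hello overlay")
    result = c.recv(4096)
t1.join()
t2.join()
print(result.decode())
```

Where the three approaches differ is in who operates that middle box: an edge router (Infranet), a middleware server (overlay), or no box at all -- just a standardized application interface (Web Services).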

But what's the right answer? It depends on who you are and how much stomach you have for adopting new technologies. The Web services approach is fastest to build and easiest to deploy but scales most poorly and is likely the most fragile. Infranet would be powerful and robust but has "forklift upgrade" written all over it. Overlay might be a reasonable compromise between those two, with most of the advantages and disadvantages of both. Realistically, a hybrid solution of all three is likely. Broken or not, it looks like we will be given a "solution" to the Internet -- sooner or later -- whether we need it or not.

Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. He is also an Adjunct Professor of Mathematics at Simon Fraser University, where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2.
This was last published in May 2006
