
Network virtualization -- the next big thing?

Network virtualization is an inevitable consequence of the rising complexity of networks. Currently, the resources offered by a network are still coupled to the underlying physical infrastructure and, therefore, relatively inflexible.

Just as servers and storage have become virtualized to support increasingly adaptable and scalable functionality on the application side, so too will the communications conduits that connect servers together. However, networks are advancing more slowly, owing to several key factors, and are only lately showing signs of becoming thoroughly virtualized.

The term "network virtualization" has been used to describe many things: from high-level network management systems, to the use of networks to enable virtualized storage and grid computing. However, these references are, for the most part, misleading -- networks today are still relatively unchanged from Layer 3 downward and most of these initiatives simply use network paths to support abstractions at Layer 4 and higher. Only a few examples, such as HP's PlanetLab project or the National Lambda Rail, actually virtualize the network itself.

So what does "network virtualization" really mean? Roughly, the phrase refers to the pooling of resources below a layer of abstraction such that the pool can be accessed without regard for its physical constituents.

Consider the following: The "new layer model" (see Network intelligence for dumb applications) re-interprets the standard OSI model to abstract it into just three simple layers:

  • End-user behaviors
  • Application behaviors
  • Network behaviors

Each is relatively well contained within the various technological embodiments that correspond to them — the end-user, the server, and the end-to-end network, respectively. While each is a complex system in its own right composed of many elements and influences, the boundaries between each are relatively well defined, as are the interfaces that inter-connect them.

Server virtualization has shown that there is significant advantage to decoupling a set of functionalities associated with the application from the underlying infrastructure. Decoupling allows for greater flexibility, scalability, and robustness in the application as its performance is not tied to the intrinsic limits of the server itself. Additional resources (e.g. CPU power, memory, disk space, response time) can be allocated to (and de-allocated from) the application on-demand.

Following on, network virtualization allows for the same sort of decoupling of network functionalities from the underlying infrastructure. These functionalities include any attributes of interest to an application using the network:

  • Bulk transfer capacity
  • Characterizations related to quality of service (e.g. latency, MOS)
  • Route and domain
  • Task-specific service resolution (e.g. where to find DNS)
  • Network-specific services (e.g. DNS)

The framework providing for this virtualized interface with the network needs to be able to manage individual elements, remediate and provision on-demand, monitor and assess end-to-end paths, and otherwise coordinate the resources within Layers 1, 2 and 3.
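
To make that idea concrete, here is a rough, purely hypothetical sketch (in Python) of the kind of interface such a framework might present to an application. None of these class or method names correspond to any real product or standard; a real implementation would have to coordinate many distributed elements behind each call.

    # Hypothetical sketch of an application-facing interface to a
    # virtualized network. All names here are illustrative only.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PathMetrics:
        capacity_mbps: float   # bulk transfer capacity
        latency_ms: float      # round-trip delay
        mos: float             # mean opinion score estimate (voice quality)

    class VirtualNetwork:
        """Abstraction over the pooled Layer 1-3 resources of a network."""

        def assess_path(self, src: str, dst: str) -> PathMetrics:
            """Monitor and assess an end-to-end path without exposing
            the individual elements along it."""
            raise NotImplementedError

        def route(self, src: str, dst: str) -> List[str]:
            """Return the route and administrative domains traversed."""
            raise NotImplementedError

        def resolve_service(self, name: str) -> str:
            """Task-specific service resolution, e.g. where to find DNS."""
            raise NotImplementedError

        def provision(self, src: str, dst: str, min_mbps: float) -> None:
            """Allocate (or remediate) capacity on demand between two endpoints."""
            raise NotImplementedError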

There are several reasons why IP networks have not yet been as fully virtualized as servers and storage. In contrast to servers, an end-to-end network path…

  • Is necessarily composed of many distributed elements
  • Offers limited means for sharing state between network elements
  • Is constrained by security and trust issues as the elements are not all single-proprietor and may be publicly accessible
  • May support multiple network applications with differing requirements
  • Exhibits end-to-end performance that is more (or less) than the sum of its parts

Servers face comparable issues, but it has been possible to build a layer of abstraction on top of the operating system that masks these limits. Doing the same for network elements will be more challenging.

It should be noted that, by design, TCP/IP networks already offer a form of virtualization. A typical TCP connection provides a tremendously simplified interaction with network resources, including the ability to packetize data transparently without regard for content, to deliver it reliably over best-effort routing, and even to sense (crudely) the availability of network resources through TCP's fairness and congestion-control mechanisms. Networks today can be torn down, switched over seamlessly, and re-provisioned without significant interruption in service. By humans, of course.
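
As a simple illustration of the virtualization TCP already provides, the sketch below (Python, with a placeholder endpoint) hands a byte stream to the standard sockets API; packetization, ordering, retransmission and congestion control all happen below this interface without the application's involvement.

    # The application writes a byte stream; TCP segments, paces, and
    # retransmits it as needed, and delivers the reply reliably and in order.
    import socket

    HOST, PORT = "example.com", 80  # placeholder endpoint

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        reply = sock.recv(4096)
        print(reply.decode(errors="replace").splitlines()[0])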

However, networks still fall short in areas of:

  • Shareable end-to-end performance assessment
  • Automated diagnosis and troubleshooting
  • Federated access and authentication to network elements
  • Rich interfaces for applications to network resources (e.g. QoS; see the sketch following this list)
  • Scalable provisioning
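
To illustrate the fourth point above: about the richest QoS interface an ordinary application has today is to mark its own packets and hope the network honours the marking. The sketch below (Python, Linux-specific, with placeholder addresses) sets the Expedited Forwarding DSCP on a UDP socket; it is a long way from a genuine negotiation of service quality with the network.

    # Crude example of today's "rich interface" to QoS: set the DSCP bits
    # on a socket (Linux). Whether routers along the path honour the marking
    # is another matter entirely -- which is precisely the shortfall above.
    import socket

    EF_DSCP = 46                  # Expedited Forwarding code point
    TOS_VALUE = EF_DSCP << 2      # DSCP occupies the upper six bits of the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
    sock.sendto(b"probe", ("192.0.2.1", 5004))  # placeholder address and port
    sock.close()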

In simpler terms, network virtualization requires automated remediation and provisioning. Networks are not sufficiently advanced to scale flexibly and transparently in terms of performance criteria. Until this key requirement can be addressed, networks will continue to be limited in their "virtualization."
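
Standards efforts such as NETCONF (see the references below) point in this direction. As a hedged sketch only: with a NETCONF client library such as Python's ncclient (which postdates this article), automated provisioning of a single element might look roughly like the following. The host, credentials and configuration payload are placeholders, and the actual payload would depend on the device's data models.

    # Rough sketch of programmatic provisioning of one network element over
    # NETCONF using the ncclient library. Assumes the device supports the
    # candidate datastore; all connection details are placeholders.
    from ncclient import manager

    CONFIG = """
    <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <!-- device-specific interface or QoS configuration goes here -->
    </config>
    """

    with manager.connect(host="192.0.2.10", port=830,
                         username="admin", password="secret",
                         hostkey_verify=False) as conn:
        conn.edit_config(target="candidate", config=CONFIG)
        conn.commit()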

On the bright side, though, there are many indications that virtualization is on its way. A few notable examples:

References:
HP's PlanetLab
Internet2 HOPI project
National Lambda Rail
Sun Microsystems' N1 project
Cisco SONA
NETCONF

 


Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. Also, he acts as Adjunct Professor of Mathematics at Simon Fraser University where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2. www.apparentnetworks.com
 

This was first published in January 2006
