Network virtualization is an inevitable consequence of the rising complexity of networks. Currently the resources offered by a network are still coupled to the underlying physical infrastructure and, therefore, relatively inflexible.
Just as servers and storage have become virtualized to support increasingly adaptable and scalable functionalities at the application-side, so will the communications conduits that connect servers together. However, networks are advancing more slowly due to several key factors and are only lately showing signs of becoming thoroughly virtualized.
The term "network virtualization" has been used to describe many things: from high-level network management systems, to the use of networks to enable virtualized storage and grid computing. However, these references are, for the most part, misleading -- networks today are still relatively unchanged from Layer 3 downward, and most of these initiatives simply use network paths to support abstractions at Layer 4 and higher. Only a few examples, such as the PlanetLab project or the National Lambda Rail, actually virtualize the network itself.
So what does "network virtualization" really mean? Roughly, the phrase refers to the pooling of resources below a layer of abstraction such that the pool can be accessed without regard for its physical constituents.
Consider the following: The "new layer model" (see Network intelligence for dumb applications) re-interprets the standard OSI model, abstracting it into just three simple layers:
- End-user behaviors
- Application behaviors
- Network behaviors
Each is relatively well contained within its corresponding technological embodiment: the end-user, the server, and the end-to-end network, respectively. While each is a complex system in its own right, composed of many elements and influences, the boundaries between them are relatively well defined, as are the interfaces that interconnect them.
Server virtualization has shown that there is significant advantage to decoupling a set of functionalities associated with the application from the underlying infrastructure. Decoupling allows for greater flexibility, scalability, and robustness in the application as its performance is not tied to the intrinsic limits of the server itself. Additional resources (e.g. CPU power, memory, disk space, response time) can be allocated to (and de-allocated from) the application on-demand.
Following on, network virtualization allows for the same sort of decoupling of network functionalities from the underlying infrastructure. These functionalities include any attributes of interest to an application using the network:
- Bulk transfer capacity
- Characterizations related to quality of service (e.g. latency, MOS)
- Route and domain
- Task-specific service resolution (e.g. where to find DNS)
- Network-specific services (e.g. DNS)
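To make the idea concrete, here is a minimal sketch, in Python, of how an application might declare such attributes to a virtualization layer, which would then be responsible for mapping them onto physical resources. All names here are hypothetical illustrations; no such API is described in the text:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NetworkRequirements:
    """Attributes an application might request from a virtualized network.

    Hypothetical sketch -- field names are illustrative, not a real API.
    """
    min_throughput_mbps: float                    # bulk transfer capacity
    max_latency_ms: Optional[float] = None        # quality-of-service bound
    min_mos: Optional[float] = None               # e.g. a target MOS for voice
    route_domain: Optional[str] = None            # route and domain constraint
    services: list = field(default_factory=list)  # e.g. ["dns"]

    def is_satisfied_by(self, measured: dict) -> bool:
        """Check measured path characteristics against the request."""
        if measured.get("throughput_mbps", 0.0) < self.min_throughput_mbps:
            return False
        if (self.max_latency_ms is not None
                and measured.get("latency_ms", float("inf")) > self.max_latency_ms):
            return False
        if self.min_mos is not None and measured.get("mos", 0.0) < self.min_mos:
            return False
        return True

req = NetworkRequirements(min_throughput_mbps=10.0, max_latency_ms=50.0,
                          services=["dns"])
print(req.is_satisfied_by({"throughput_mbps": 40.0, "latency_ms": 22.0}))  # True
```

The point of the sketch is the direction of the interface: the application states what it needs in its own terms, and never names a switch, a link, or a route.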
The framework providing for this virtualized interface with the network needs to be able to manage individual elements, remediate and provision on-demand, monitor and assess end-to-end paths, and otherwise coordinate the resources within Layers 1, 2 and 3.
There are several reasons why IP networks have not yet been as fully virtualized as servers and storage. In contrast to servers, an end-to-end network path…
- Is necessarily composed of many distributed elements
- Offers limited means for sharing state between network elements
- Is constrained by security and trust issues as the elements are not all single-proprietor and may be publicly accessible
- May support multiple network applications with differing requirements
- Delivers performance that is more (or less) than the sum of its parts
Servers have comparable issues, but it has been possible to build a layer of abstraction on top of the operating system that obviates these limits. Doing the same for network elements will be more challenging.
It should be noted that, by design, TCP/IP networks already offer a form of virtualization. The typical TCP connection provides for tremendously simplified interaction with network resources including the ability to transparently packetize data without regard for content, deliver reliably along best-effort routing, and even crudely sense the availability of network resources (through the TCP fair-use mechanisms). Networks today can be torn down, switched over seamlessly, and re-provisioned without significant interruption in service. By humans, of course.
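That existing layer of abstraction is visible in a few lines of Python: the sender hands the stack an opaque byte stream over a loopback connection, and packetization, ordering, and reliable delivery all happen invisibly to both endpoints:

```python
import socket
import threading

def run_echo_once(payload: bytes) -> bytes:
    """Send a byte stream over a local TCP connection and return what arrived.

    Neither side sees packets, retransmissions, or routing -- only bytes.
    """
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))          # kernel picks a free port
    server.listen(1)
    port = server.getsockname()[1]
    received = bytearray()

    def serve():
        conn, _ = server.accept()
        while chunk := conn.recv(4096):    # read until the sender closes
            received.extend(chunk)
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    client = socket.create_connection(("127.0.0.1", port))
    client.sendall(payload)                # content-agnostic byte stream
    client.close()
    t.join()
    server.close()
    return bytes(received)

data = b"any content at all" * 1000
assert run_echo_once(data) == data         # delivered intact and in order
```

The 18 KB payload is silently split into segments, acknowledged, and reassembled; none of that machinery appears in the code, which is exactly the virtualization the paragraph describes.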
However, networks still fall short in areas of:
- Shareable end-to-end performance assessment
- Automated diagnosis and troubleshooting
- Federated access and authentication to network elements
- Rich interfaces for applications to network resources (e.g. QoS)
- Scalable provisioning
In simpler terms, network virtualization requires automated remediation and provisioning. Networks are not sufficiently advanced to scale flexibly and transparently in terms of performance criteria. Until this key requirement can be addressed, networks will continue to be limited in their "virtualization."
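At its simplest, such automated remediation is a closed control loop. The sketch below is a Python illustration under stated assumptions: measure_path and reprovision are hypothetical stand-ins for real instrumentation and provisioning systems, and the policy thresholds are invented:

```python
# Invented policy thresholds for illustration only.
POLICY = {"max_latency_ms": 50.0, "min_throughput_mbps": 10.0}

def violations(measurement: dict, policy: dict) -> list:
    """Return the list of policy clauses the measured path fails."""
    failed = []
    if measurement["latency_ms"] > policy["max_latency_ms"]:
        failed.append("latency")
    if measurement["throughput_mbps"] < policy["min_throughput_mbps"]:
        failed.append("throughput")
    return failed

def control_loop(measure_path, reprovision, policy=POLICY, cycles=3):
    """Assess, diagnose, remediate -- with no human in the loop."""
    for _ in range(cycles):
        failed = violations(measure_path(), policy)
        if not failed:
            return True                    # path meets policy
        reprovision(failed)                # e.g. re-route, add capacity
    return False                           # escalate after repeated failure

# Simulated run: the path violates the latency bound until one
# reprovisioning action (a pretend re-route) brings it into policy.
state = {"latency_ms": 80.0, "throughput_mbps": 20.0}
def measure_path():
    return dict(state)
def reprovision(failed):
    if "latency" in failed:
        state["latency_ms"] = 30.0
print(control_loop(measure_path, reprovision))  # True
```

The hard part, of course, is not the loop but the two stubbed functions: shareable end-to-end measurement and trustworthy, federated provisioning across elements that no single party owns.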
On the bright side though, there are many indications that virtualization is on its way. A few notable examples:
- Overlay networks – abstracted routing and transport layers
- Sun Microsystems' N1 project – converged server, storage and networks
- Cisco's SONA initiative (including AON)
- VLANs – local area virtualized routing and domains
- NetConf v1.0 – IETF standard for querying of network elements
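For a sense of what standardized element querying looks like, NETCONF (RFC 4741) wraps operations in XML-encoded RPCs. A <get> request with a subtree filter is sketched below; the <interfaces/> subtree is purely illustrative, since the actual data models are vendor- or standards-body-specific:

```xml
<rpc message-id="101"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get>
    <filter type="subtree">
      <!-- illustrative subtree; real data models vary by vendor -->
      <interfaces/>
    </filter>
  </get>
</rpc>
```

A uniform, machine-readable query interface like this is one prerequisite for the automated assessment and provisioning described above.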
Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. He also serves as Adjunct Professor of Mathematics at Simon Fraser University, where he co-founded the Centre for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects, from high-performance computing to digital publishing, working closely with private-sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high-performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2. www.apparentnetworks.com