Phrases like "application awareness" and the related term "network awareness" appear with increasing frequency in marketing hype. The concept of awareness reflects a critical step forward in network infrastructures -- they may finally be evolving past the reactive/assisted stage (step 2 in the Gartner IT process maturity model) toward something capable of adaptive behavior.
Two views on awareness matter here: one is a piecemeal view that corresponds closely to what present-day network offerings actually deliver; the other is a big-picture analysis that relates awareness to a more coherent capability yet to be realized.
Both are valuable. Part 1 covers the view represented by "network awareness" and "application awareness." Part 2 will take up the prospects for a fully coherent application-centric network architecture and the potential of "network intelligence."
Network-aware applications are not new. Most applications that utilize TCP are already provided with a crude form of network awareness. Common TCP stacks (e.g., Reno, Tahoe, and Vegas) gather statistics on end-to-end network performance, such as loss and delay variation. The TCP stack then modifies the application's use of the network to achieve specific objectives -- in this case, maximizing access to network resources while maintaining the principle of fair use.
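The adaptive logic inside a TCP stack can be sketched as an additive-increase/multiplicative-decrease (AIMD) loop. This is a toy model for illustration only -- the constants and simulation below are hypothetical, not taken from any production stack:

```python
# Toy AIMD (additive-increase, multiplicative-decrease) model of how a
# TCP-like stack adapts its congestion window to observed loss.
# Constants are illustrative, not drawn from any real implementation.

def aimd_step(cwnd: float, loss_detected: bool,
              increase: float = 1.0, decrease: float = 0.5) -> float:
    """Return the next congestion window given one round trip's feedback."""
    if loss_detected:
        return max(1.0, cwnd * decrease)   # back off multiplicatively
    return cwnd + increase                 # probe for bandwidth additively

# Simulate 10 round trips with a single loss on the 6th.
cwnd = 10.0
for rtt in range(10):
    cwnd = aimd_step(cwnd, loss_detected=(rtt == 5))
print(cwnd)
```

The point is the shape of the behavior, not the numbers: the stack grows its share of the path until the network pushes back, then retreats -- a generic policy applied identically to every application.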
Non-TCP applications such as VoIP (which carry real-time flows over UDP) are instead buffered against their sensitivities. VoIP and video applications derive no inherent feedback from UDP; their codecs and playout buffers filter out the experience of prevailing network conditions -- up to a point. Beyond the buffer's ability to mitigate network influences, applications like VoIP and video simply break down and their performance degrades (see MOS: A love-hate relationship).
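The buffer's "up to a point" limit can be sketched in a few lines. This is a deliberately minimal model of a playout buffer -- the 60 ms depth and the per-packet jitter values are hypothetical:

```python
# Minimal playout-buffer sketch: packets whose delay variation fits
# within the buffer's depth are played out; later packets are dropped,
# which is where quality begins to degrade. Illustrative only.

BUFFER_MS = 60  # hypothetical buffer depth in milliseconds

def playable(arrival_jitter_ms: float, buffer_ms: float = BUFFER_MS) -> bool:
    """A packet is playable if its delay variation fits in the buffer."""
    return arrival_jitter_ms <= buffer_ms

jitters = [5, 20, 55, 80, 10]          # per-packet delay variation (ms)
dropped = [j for j in jitters if not playable(j)]
print(dropped)                          # packets beyond the buffer's reach
```

Everything inside the buffer's reach is invisible to the application; everything beyond it is simply lost -- the application has no graduated sense of conditions in between.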
Current references to network awareness mean something different, though. They reflect a growing interest in achieving specific levels of application performance. Because many applications depend critically on the network, making them directly "aware" of instantaneous end-to-end conditions, in real time and on demand, provides a critical advantage. Instead of relying on generic TCP behaviors, a network-aware application can adapt its behavior to its specific needs.
For example, an enterprise backup application on a converged network may need to achieve optimal throughput in order to complete on a specific schedule. When backups are incomplete, data is at risk. Conversely, other applications requiring network resources may be significantly impacted if the backup extends beyond its schedule. With network awareness, the backup can determine whether it will achieve its objectives given the available network conditions and, if not, adapt to those conditions and/or alert the network management system.
Note -- most backup systems can't do this today.
In fact, there are few good examples today of "network-aware applications" -- most applications run blind for lack of means. One recent development is the RTCP/XR standard for VoIP end devices. Arming an IP phone with this capability gives the telephony application access to information regarding the current network path. RTCP/XR instantaneously translates IP behaviors into both listening quality and conversational quality MOS. That said, the phone is not likely to consume this information or make changes to its behavior. The VoIP application is "aware" -- but passive. The information is consumed instead by performance monitoring systems.
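The translation RTCP/XR enables -- IP behaviors in, MOS out -- can be illustrated with a much-simplified version of the ITU-T G.107 E-model. The R-to-MOS mapping below is the standard G.107 formula; the delay and loss impairment coefficients are rough illustrations of my own, not the full model:

```python
# Much-simplified E-model (after ITU-T G.107): network delay and loss
# in, estimated MOS out. The R-to-MOS mapping is standard; the
# impairment coefficients below are crude illustrative assumptions.

def r_to_mos(r: float) -> float:
    """Standard G.107 mapping from transmission rating R to MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

def estimate_mos(one_way_delay_ms: float, loss_pct: float) -> float:
    r = 93.2                          # default base R-factor
    r -= 0.024 * one_way_delay_ms     # crude delay impairment (assumed)
    r -= 2.5 * loss_pct               # crude loss impairment (assumed)
    return round(r_to_mos(r), 2)

print(estimate_mos(50, 0.0))   # clean path
print(estimate_mos(250, 3.0))  # congested path
```

Real RTCP/XR implementations are considerably more elaborate -- distinguishing listening from conversational quality, for instance -- but the principle is the same: a running conversion of path behavior into a quality score the application could, in principle, act on.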
For networks to be more proactive and adaptive in supporting specific applications, devices and management systems need to have some sense of:
- what applications are present;
- what they need to achieve;
- and how their performance is affected by IP.
QoS mechanisms are an existing means by which networks can support (or at least differentiate between) certain applications. For example, the tagging in IP headers associated with TOS-style implementations indicates to network devices that particular packets are associated with classes of performance and priority.
This is a rudimentary mechanism for application awareness, comparable to TCP's. Today, QoS settings are only weakly correlated with a specific application's needs and remain a matter of artful approximation. In general, network devices have limited insight into the overall network and into their own contribution to a specific application's end-to-end experience.
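From the application side, TOS-style tagging amounts to setting a few bits in the IP header. The sketch below marks a UDP socket's packets with the DSCP EF (Expedited Forwarding) code point commonly used for voice -- whether devices along the path honor the mark is a separate policy question, which is exactly the weak correlation described above:

```python
# Sketch of TOS-style tagging from the application side: marking a UDP
# socket with DSCP EF (Expedited Forwarding), the code point commonly
# used for voice traffic. Linux-style IP_TOS socket option assumed.

import socket

DSCP_EF = 46           # Expedited Forwarding code point
tos = DSCP_EF << 2     # DSCP occupies the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

Note that nothing here tells the network what the application actually needs -- only which pre-agreed class its packets claim to belong to.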
Application awareness within any single device involves embedding some model of application behavior and requirements into the device. Thus augmented, it can either generate application-specific measures of the instantaneous IP behaviors or configure its handling of packets on some preferred basis. Modeling application behaviors, on a proprietary basis or with recently published standards (see Making the triple play), is increasingly common and allows IP devices to estimate quality of experience (QoE) on the basis of detected IP behaviors.
VoIP switches and gateways offer some aspects of this awareness. They support monitoring of the MOS levels of individual calls, or overall flows, as an estimation of application quality. Eventually these may become part of increasingly sophisticated end-to-end QoS mechanisms that validate the performance of the QoS implementation.
Other, more intriguing examples appear in the offerings of large network vendors like Cisco and Juniper. In the context of Web Services and SOA (service-oriented architecture), they include content-specific behaviors at the network level that change the network's response to a broad range of XML-encoded communications.
Stay tuned for part 2 -- A new layer model
Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. Also, he acts as Adjunct Professor of Mathematics at Simon Fraser University where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2. www.apparentnetworks.com