
The death of SLAs

The SLA is essentially an empty agreement. Its intention is to assure that an enterprise's critical applications and network-dependent business processes operate optimally. But its nature bears little relationship to its intention -- and neither the carrier nor the enterprise is likely to benefit from enforcing it. In this featured column Loki looks at some alternatives.


The problem with the typical service level agreement (SLA) is that it is a product of its history and technological context. Applied at or near Layer 3 (IP or something equivalent), the SLA is constructed in terms of the measures available at the demarcation point between the enterprise and the carrier -- simple attributes based on counters or measures such as connectivity, loss, latency and perhaps jitter. The carrier doesn't have much more to work with, and the enterprise has little choice but to frame its requirements in these terms.

However, the correlation between any combination of these measures and the application performance and/or user quality of experience (QoE) is weak at best. Enterprises don't really care about these measures -- they are looking at user and process productivity as it affects their bottom lines. Neither side can really be sure that the SLA translates into customer value.

Both are consequently bound to an agreement that offers limited accountability and benefit, supporting an uneasy relationship at best and a lose-lose outcome at worst. Carriers want their customers to succeed but can't guarantee any of the outcomes; getting money back when network-dependent applications fail does little to appease the frustrated customer. Both sides want to make this relationship work. But they are married to an approach that is divorced from reality.

Fortunately, there are alternatives. At one extreme, emerging network technologies offer capabilities that can redefine the basis of SLAs. They offer unique metrics that are substantial and shareable between enterprise and vendor. However, it probably means too big a change of mindset, particularly for the carrier.

More moderately, these same technologies can provide a unique form of SLA verification that can make existing SLA implementations more valuable. Consider the following approach to assuring application performance:

  • Start from application/user QoE (such as the models developed for VoIP)
  • Map the QoE requirements (using recently developed standards) to IP performance
  • Apply the resulting IP performance specifications as the basis for the SLA
  • Verify SLA conformance via end-to-end IP performance measurement, now available at the application level
  • Modify the SLA as required when demands or applications change
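The steps above can be sketched in a few lines of Python. To be clear, the function names and every threshold value below are illustrative assumptions for this column, not figures drawn from any standard:

```python
# Illustrative sketch of the QoE-driven SLA workflow described above.
# All names and threshold values are hypothetical, not standardized figures.

def map_qoe_to_ip_spec(target_mos):
    """Map a target MOS to illustrative IP performance bounds.
    A real mapping would come from standards work such as ITU-T Y.1541."""
    if target_mos >= 4.0:  # e.g. "toll quality" voice
        return {"one_way_delay_ms": 150, "jitter_ms": 30, "loss_pct": 1.0}
    return {"one_way_delay_ms": 400, "jitter_ms": 50, "loss_pct": 3.0}

def verify_sla(spec, measured):
    """Check measured end-to-end IP metrics against the derived spec."""
    return (measured["one_way_delay_ms"] <= spec["one_way_delay_ms"]
            and measured["jitter_ms"] <= spec["jitter_ms"]
            and measured["loss_pct"] <= spec["loss_pct"])

spec = map_qoe_to_ip_spec(target_mos=4.0)
measured = {"one_way_delay_ms": 120, "jitter_ms": 18, "loss_pct": 0.4}
print("SLA conforms:", verify_sla(spec, measured))
```

The point of the sketch is the direction of the derivation: the SLA thresholds are outputs of a QoE target, not inputs picked because they happen to be measurable at the demarcation point.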

VoIP has driven much of the technology development required to realize this approach (see VoIP: The face of the new network police). It has simultaneously raised the profile of IP performance and provided effective models and metrics for defining QoE with respect to that performance. For example, ITU-T G.107 (the E-model) has been developed into an effective model, with ITU-T P.800's mean opinion score (MOS) as the now-standard measure of subjective user QoE. And while MOS is a relatively paltry measure (see MOS: A love-hate relationship), it represents a critical advantage -- IP performance can be directly linked to user-perceived outcomes.
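The E-model's chain from transmission impairments to user experience ends in a simple closed-form conversion from the transmission rating factor R to an estimated MOS. A minimal sketch (the conversion formula is the one given in ITU-T G.107; the sample R value is illustrative):

```python
def r_to_mos(r):
    """Convert an E-model R-factor to estimated MOS per ITU-T G.107."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# A default, unimpaired narrowband call scores roughly R = 93.2,
# which maps to about MOS 4.4 -- the practical ceiling for G.711 voice.
print(round(r_to_mos(93.2), 2))  # 4.41
```

Because R is itself computed from measurable impairments (delay, loss, codec distortion), this conversion is the hinge that lets IP performance numbers be expressed as a user-experience score.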

However, the story isn't yet complete -- the relationship needs to be fully defined in both directions to meet the needs of an effective SLA (or QoS) with a supporting verification mechanism. A number of organizations are actively defining those relationships.

A recent example is found in the ITU-T's work on Y.1541, in which six distinct classes of application are mapped from QoE objectives to specific criteria for IP performance. The original intention is to provide quality of service (QoS) mechanisms with application-dependent specifications. But the mapping process is just as critical for supporting any relationship between an application and underlying Layer 3 behavior, including the development of new frameworks for SLAs.
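To make the mapping concrete, the Y.1541 class objectives can be expressed as a simple lookup of upper bounds on mean transfer delay (IPTD), delay variation (IPDV) and loss ratio (IPLR). The figures below reflect my reading of the Y.1541 class table; treat them as approximate and check the standard itself before relying on them:

```python
# Approximate ITU-T Y.1541 QoS class objectives (upper bounds).
# None means the objective is unspecified for that class.
Y1541_CLASSES = {
    0: {"IPTD_ms": 100,  "IPDV_ms": 50,   "IPLR": 1e-3},  # real time, jitter sensitive (e.g. VoIP)
    1: {"IPTD_ms": 400,  "IPDV_ms": 50,   "IPLR": 1e-3},  # as above, longer paths
    2: {"IPTD_ms": 100,  "IPDV_ms": None, "IPLR": 1e-3},  # transaction data, highly interactive
    3: {"IPTD_ms": 400,  "IPDV_ms": None, "IPLR": 1e-3},  # transaction data, interactive
    4: {"IPTD_ms": 1000, "IPDV_ms": None, "IPLR": 1e-3},  # low loss only (bulk transfer)
    5: {"IPTD_ms": None, "IPDV_ms": None, "IPLR": None},  # unspecified (best effort)
}

def meets_class(measured, cls):
    """True if measured metrics satisfy every specified bound for a class."""
    return all(bound is None or measured[name] <= bound
               for name, bound in Y1541_CLASSES[cls].items())

voip_path = {"IPTD_ms": 80, "IPDV_ms": 12, "IPLR": 5e-4}
print([c for c in Y1541_CLASSES if meets_class(voip_path, c)])
```

An application's class then doubles as an SLA template: the enterprise names the class its application needs, and the carrier commits to measurable Layer 3 bounds that are already tied to a QoE objective.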

The last piece of the puzzle lies in the end-to-end verification process. Once again, VoIP has been the driver for the development of a range of assessment technologies. Only a few years ago, the phrase "pre-deployment assessment" was unheard of. Now, no self-respecting performance management vendor would step out the door without offering that capability in some form or another (not all are created equal, of course). Consequently, a variety of assessment technologies can be employed to support this part of the verification process.
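One building block most such assessment tools share is the RTP interarrival jitter estimator from RFC 3550 (section 6.4.1), a running average of the change in one-way transit time between successive packets. A sketch, with illustrative sample data:

```python
def rtp_interarrival_jitter(transit_times_ms):
    """Running jitter estimate per RFC 3550 (section 6.4.1): J += (|D| - J)/16,
    where D is the change in one-way transit time between successive packets."""
    j = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0
    return j

# One-way transit times (ms) for a stream of probe packets; values are illustrative.
samples = [40.0, 42.5, 39.8, 41.1, 45.0, 40.2]
print(round(rtp_interarrival_jitter(samples), 3))
```

Feeding estimates like this, along with measured loss and delay, back through the E-model closes the loop: the same probe data that verifies the SLA also predicts the MOS the user will experience.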

So, there we have it -- a new kind of relationship, rooted in proven technologies, that functionally works. Of course, it is up to carriers and their customers to choose change. And we know how old married couples just can't stop beating dead horses.

Eventually change comes. It will probably be for the better. Someone will profit from it.

Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. Also, he acts as Adjunct Professor of Mathematics at Simon Fraser University where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2.

Loki will be speaking at these upcoming conferences:

  • IPComm in Las Vegas -- Oct. 17-19 -- Why QOS doesn't work; The dark side about the true idiosyncrasies and degradation that plague your network; and, Is my data network ready to handle VoIP?
  • Internet Telephony in Los Angeles -- Oct. 25 -- Triple Play: The implications of convergence
This was last published in October 2005.

