Tip

Look before you leap: Assessing pre-deployment assessment

Pre-deployment assessment, or "look before you leap," has never made more sense. Yet the industry is only now starting to figure out what it means in practice, prodded along by particularly sensitive applications like VoIP and video.

Metcalfe's Network Law states that the "potential value" of a network grows approximately as the square of the number of end-hosts n. The antithesis of Metcalfe's Law, call it Murphy's Network Law, might be stated as: the number of performance degradation problems on a network grows at the same rate as the number of unique paths (with n end-hosts there are n(n-1)/2 unique host pairs), that is, also as n². Or, more bluntly: if you have a network, you have problems.

Consider that each path in your network represents another unique opportunity for problems to appear; each path offers a unique combination of hardware, interfaces, and terminal hosts that may manifest degradation: congestion bottlenecks, MTU constrictions, duplex conflicts, broken QoS mechanisms. Each path thus contributes to the cumulative degradation of overall network performance, and with it the network's value. It starts to sound frightening.

Fortunately, it isn't quite that simple (or that frightening).

Researchers Andrew Odlyzko and Benjamin Tilly recently argued that Metcalfe's model is too aggressive; they find that value is more likely to grow at a rate of n*log(n). Similarly, performance degradation likely doesn't grow as n² either.

The reasoning is very similar to Odlyzko's:
♦ some end-hosts are more preferred than others (like servers);
♦ many unique paths at least share common segments;
♦ not all hosts experience degradation in the network equally;
♦ not every form of degradation affects a network's usage equally.
This points to much slower growth, and n*log(n) can be rationalized from a number of intuitive arguments.

However, even at n*log(n), it is impossible to ignore the implications – big networks have a lot of potential for performance issues. Thus, securing the Metcalfe value of your network means foiling the Murphy effect. And that's what is at the root of the recent interest in pre-deployment assessment.
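
To make the numbers concrete, here is a minimal sketch in Python (the host counts are arbitrary illustrations, not data from this article) comparing the full-mesh path count n(n-1)/2 with the n² and n*log(n) growth models:

    import math

    # Compare how path count and the two growth models scale with network size n.
    # The host counts below are arbitrary illustrations.
    for n in (10, 100, 1000, 10000):
        unique_paths = n * (n - 1) // 2   # full-mesh host pairs, Theta(n^2)
        metcalfe = n ** 2                 # Metcalfe-style growth
        odlyzko = n * math.log(n)         # Odlyzko-Tilly-style growth
        print(f"n={n:>6}  paths={unique_paths:>12,}  n^2={metcalfe:>12,}  n*log(n)={odlyzko:>12,.0f}")

Even the gentler curve is sobering: at 10,000 hosts, n*log(n) still works out to roughly 92,000, so the potential for trouble scales far faster than the staff available to chase it.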

The challenge with assessing ahead of deployment is that it takes considerable preparation: time, skill, knowledge and effective technology to carry it off. In a less-than-perfect IT world where all of those are in short supply, best practices offer critical support to guide the process.

The keys to effective pre-deployment assessment are, in simplest terms, summed up by a helpful little mnemonic, CAP:

  • Clean
  • Assess
  • Plan

CAP the growth of problems, CAP the loss of value, CAP the likelihood of failure.

Consider each step as it applies to your next deployment:

1) Clean – as Murphy's Network Law suggests, the number of problems increases with network size, undermining value as the network grows. If you have a network, you have problems. Start with that assumption and clean house. Choose an approach to network assessment that fits your network and translates well into the future (see 3). Be sure that your method of "cleaning up" specifically identifies and resolves the types of problems that affect your application (a minimal sketch of one such path check follows the checklist below).

  1. Take inventory of your network
  2. Measure and confirm its performance on every vital path (if not every path)
  3. Identify its problems and limits
  4. Resolve all problems (especially those that directly impact your application)
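
As a concrete starting point for item 2 above, here is a minimal sketch that probes a handful of vital paths with the standard ping utility and flags anything showing loss or high latency. It assumes a Unix-like system with a Linux-style ping on the PATH; the host names and the 150 ms ceiling are hypothetical placeholders, and a real clean-up would also cover MTU, duplex, and QoS checks with a proper assessment tool.

    import re
    import subprocess

    # Hypothetical endpoints standing in for the vital paths found in step 1.
    VITAL_HOSTS = ["voip-gw.example.com", "file-srv.example.com", "branch-rtr.example.com"]

    def probe(host, count=5):
        """Ping one host and return (loss_percent, avg_rtt_ms), or None if unparsable.

        Assumes Linux-style ping output: '... 0% packet loss ...' and
        'rtt min/avg/max/mdev = a/b/c/d ms'.
        """
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True).stdout
        loss = re.search(r"([\d.]+)% packet loss", out)
        rtt = re.search(r"= [\d.]+/([\d.]+)/", out)
        if not loss or not rtt:
            return None
        return float(loss.group(1)), float(rtt.group(1))

    for host in VITAL_HOSTS:
        result = probe(host)
        if result is None:
            print(f"{host:<28} unreachable or unparsable -- investigate")
            continue
        loss_pct, avg_ms = result
        status = "OK" if loss_pct == 0 and avg_ms < 150 else "CHECK"  # 150 ms is an illustrative ceiling
        print(f"{host:<28} loss={loss_pct:.1f}%  avg_rtt={avg_ms:.1f} ms  [{status}]")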

2) Assess – consider your network, the application(s), and the needs of your organization. Establish the metric that defines success and ensure that you can actually apply it. Make sure that you have included all the key requirements (a sketch of one such metric check follows the checklist below).

  1. Define the needs of network, application, and users
    1. Application requirements
    2. Network requirements
    3. Power requirements
    4. Security overhead
    5. Maintenance and upkeep requirements
  2. Define the key metric(s) for success
  3. Identify an effective testing methodology for that metric
  4. Verify the network's success against that metric for the planned application
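
To illustrate items 2 through 4 for a VoIP deployment, the sketch below turns measured one-way delay and packet loss into an estimated R-factor and MOS using the widely cited Cole-Rosenbluth simplification of the ITU-T E-model. The loss coefficients, the measured values, and the MOS >= 4.0 target are all illustrative assumptions, not a prescribed methodology.

    import math

    def r_factor(one_way_delay_ms, loss_fraction):
        """Estimate the E-model R-factor from one-way delay (ms) and loss (0..1).

        Cole-Rosenbluth simplification; the loss coefficients are codec-dependent
        fitted values, used here purely as a G.711-like illustration.
        """
        d = one_way_delay_ms
        delay_impairment = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
        loss_impairment = 30 * math.log(1 + 15 * loss_fraction)
        return 94.2 - delay_impairment - loss_impairment

    def estimated_mos(r):
        """Map an R-factor to an estimated mean opinion score (MOS)."""
        if r < 0:
            return 1.0
        if r > 100:
            return 4.5
        return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

    # Hypothetical measurements gathered during the Clean step.
    r = r_factor(one_way_delay_ms=120, loss_fraction=0.01)
    verdict = "meets" if estimated_mos(r) >= 4.0 else "misses"
    print(f"R = {r:.1f}, MOS ~= {estimated_mos(r):.2f}, {verdict} a MOS >= 4.0 target")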

3) Plan – consider that the problems you cleaned up in 1) will likely recur; traffic levels will vary and probably increase; and the network, along with the scale of problems it generates, will likely get bigger and more complex. Be ready for that. Build in the appropriate scalability, fail-over, and excess capacity. Also define a process that includes regular review of 1) and 2): a clean-up and re-assessment process that is easy to re-apply, takes advantage of a historical perspective, and grows with your network (a simple capacity projection follows the checklist below).

  1. Identify how the network will grow
  2. Plan for the anticipated growth and recurrence of problems
  3. Provide the proper support for that growth
    1. Regular clean-ups
    2. Regular re-assessments
  4. Review and adapt
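
As one simple way to act on items 1 and 2 above, the sketch below projects when a link will run out of headroom under steady compound traffic growth. The capacity, current load, growth rate, and 80% headroom rule are all hypothetical planning assumptions; real planning would start from the measurements gathered in the Clean and Assess steps.

    import math

    def months_until_exhausted(current_mbps, capacity_mbps, monthly_growth, headroom=0.8):
        """Months until traffic exceeds headroom * capacity, assuming compound growth."""
        target = capacity_mbps * headroom
        if current_mbps >= target:
            return 0.0
        return math.log(target / current_mbps) / math.log(1 + monthly_growth)

    # Hypothetical figures: a 1 Gbps uplink carrying 300 Mbps today, growing 5% per month.
    m = months_until_exhausted(current_mbps=300, capacity_mbps=1000, monthly_growth=0.05)
    print(f"Re-assess or upgrade in roughly {m:.0f} months")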

So before you leap into that next application void, CAP your network.

None of this is new or revolutionary, of course. This perspective is straight out of every network life cycle model around. For example, consider the Brix approach:

Evaluate, Build, Operate, Assure
or Apparent Networks':
Business Case, Requirements, RFP, Planning, Staging, Deployment, Operation, Review
But like all forms of wisdom, you can never repeat it too often.

References:
Odlyzko paper
ZDnet news on Metcalfe's Law
Network Life Cycle


Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. Also, he acts as Adjunct Professor of Mathematics at Simon Fraser University where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2. www.apparentnetworks.com

This was first published in May 2006
