One important job that network designers, architects, and administrators must deal with regularly (yet often leave simmering on the back burner) is planning for the future. Whether you call it capacity planning, growth planning, or predictive modeling, real-time crises often reduce this activity to simple multiplication of current needs, leaving no time for rigorous planning and modeling to determine where capacity is headed, or to explore what-if scenarios.
All too often, capacity planning boils down to simple formulas such as "we're experiencing peak usage of 562 MB per second and demand has been growing by 10% per month. Therefore, we need roughly 1.8 GB per second one year from now." (Note that 10% monthly growth compounds to a bit more than a threefold increase over twelve months, not the 120% that simple multiplication suggests.)
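Even this back-of-the-envelope style of planning is easy to get wrong if the compounding is ignored. The arithmetic above can be checked with a one-liner (the 562 MB/s starting figure is just the example value from the formula):

```shell
# 562 MB/s today, growing 10% per month, compounded over 12 months
awk 'BEGIN { printf "%.0f MB/s\n", 562 * 1.10^12 }'
# prints "1764 MB/s" -- about 1.8 GB/s, versus 1236 MB/s if you
# naively multiply 10% by 12 months without compounding
```

The gap between the compounded and non-compounded figures is exactly the kind of error that rigorous modeling is meant to catch.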
But sometimes, more rigor is needed -- particularly when planning to deploy Web- or other Internet-based delivery of services or applications. That's because the resources involved can be quite expensive, and because no responsible organization wants to get caught short on resources or long on outlays for them. Likewise, the technical staff members involved in such matters do not want to be left holding the bag, whether because insufficient resources are available to meet demand (which inflicts opportunity costs for lost business) or because more resources are available than demand can consume (which puts a direct and palpable drag on the bottom line).
Believe it or not, the National Institute of Standards and Technology (NIST) may be able to come to the rescue here. The organization offers a free Linux-based modeling tool known as NIST Net that operates at the IP level to emulate all kinds of potential wide-area networking behaviors and technologies, helping you model application or service performance and plan related resource needs.
The basic technology works on most Linux distributions with the X Window System installed and running (its developers recommend the Slackware distribution). It's designed to let a PC-based router emulate various complex performance scenarios, including:
- Packet loss distributions, which model various levels of packet losses arising from errors of all kinds
- Congestion losses, which model various levels of packet losses arising from varying levels of inbound and outbound traffic that exceed carrying capacity on one or more links involved
- Packet reordering/duplication, which models the reordering that IP fragmentation from MTU mismatches can sometimes cause, or cases where excessive timeouts lead to duplicated traffic
- Asymmetric bandwidth situations, as when DSL or cable modem connections deliver different upload/download bandwidths
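In practice, scenarios like those above are configured through NIST Net's command-line front end, cnistnet (an X-based GUI, xnistnet, is also provided). The sketch below is based on the tool's documented usage; the addresses are hypothetical, and flags and units may vary by version, so consult the online User's Guide before relying on it:

```shell
# Bring the emulator up (requires the NIST Net kernel module to be loaded)
cnistnet -u

# Add an emulator entry for traffic from 192.168.1.10 to 192.168.2.20:
# 100 ms delay, 2% packet loss, 1% duplication, and a bandwidth cap
cnistnet -a 192.168.1.10 192.168.2.20 --delay 100 --drop 2 --dup 1 --bandwidth 128000

# Show statistics for the entry, then remove it and shut the emulator down
cnistnet -s 192.168.1.10 192.168.2.20
cnistnet -r 192.168.1.10 192.168.2.20
cnistnet -d
```

Because each entry applies to one direction of traffic, asymmetric links of the DSL/cable variety can be emulated by adding a second entry in the reverse direction with different bandwidth parameters.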
The tool can model such behaviors more or less abstractly (through descriptive data provided when setting up test runs) or it can play back traces of actual network behavior.
The pros are that it's free and powerful, and that it covers a reasonable range of behaviors and scenarios. The primary cons are that it requires strong working knowledge of Linux and of related IP-based packet drivers and software, and that considerable elbow grease will be needed to turn NIST Net's raw capability into something usable and useful. That said, it's definitely worth looking into. Installation instructions, a FAQ and a User's Guide are all available online.
Ed Tittel is a full-time freelance writer, trainer and consultant who specializes in matters related to information security, markup languages, and networking technologies. He's a regular contributor to numerous TechTarget Web sites and technology editor for Certification Magazine, and writes an e-mail newsletter for CramSession called "Must Know News."