Many organizations can clearly identify their needs but have a hard time mapping those needs into requirements. This can be a recipe for failure when evaluating a provider's response to an RFP. I have already discussed the dos and don'ts of soliciting providers with the appropriate information. Granted, many contractual issues arise when developing an RFP, but those are somewhat unique to each organization, so I have not touched on them.
The key to evaluation is to understand which requirements are most critical to your organization. For some organizations this will be cost; for others it will be the technical aspects. Regardless of the areas of focus, it is necessary to rank these areas in order of importance and to assign a weight to each. This should be done by soliciting input from the decision makers of the organization and, at a minimum, should include the network infrastructure group, the operations group and the security group. These groups should sit down and discuss the key areas of focus and their importance to the organization for delivering the desired services. The developers of the RFP should present the key areas of focus that have been developed for evaluation purposes, and the group as a whole should assign weighted values to them. These weighted values are used to differentiate between the areas of focus when scoring a vendor's response. You can use any scale you want (1-50, 1-100, etc.), but the key point is that you get consensus from all groups. Below is a sample of weighted priorities:
Cost: 45
Technical Features: 35
Operational Support: 20
The RFP should solicit input in each of the key areas of focus. The response for each area can then be scored to differentiate each vendor's offering for the particular feature being evaluated. In the past I have used a simple scoring system to evaluate the vendor responses to each key feature or requirement. The scoring system I have used is as follows:
Does not support the feature: 0
Supports the feature: 1
Supports the feature, but worse than the competitor: 2
Supports the feature better than the competitor: 3
This scale assumes a two-vendor comparison; if the number of vendors increases, the "supports worse" and "supports better" values have to be expanded accordingly.
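The two-vendor scale above could be made mechanical with a small helper function. This is an illustrative sketch only; the function name and arguments are my own, not from the article.

```python
def pairwise_scores(a_supports, b_supports, advantage=None):
    """Assign 0-3 scores to two vendors (A, B) for one feature.

    advantage: "a" or "b" when both support the feature but one does it
    better; None when support is equal (or absent).
    """
    if not a_supports and not b_supports:
        return (0, 0)          # neither supports the feature
    if a_supports and not b_supports:
        return (1, 0)          # only A supports it
    if b_supports and not a_supports:
        return (0, 1)          # only B supports it
    # Both support the feature.
    if advantage == "a":
        return (3, 2)          # A supports better, B supports worse
    if advantage == "b":
        return (2, 3)
    return (1, 1)              # equal support
```

For example, `pairwise_scores(True, True, advantage="a")` returns `(3, 2)`, matching the multicast example that follows.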
For example, let's say you are looking for multicast support from a vendor. If one vendor supports the feature and the other doesn't, you assign one vendor a 0 (doesn't support) and the other a 1 (does support). If they both support the feature, but one has an advantage over the other, assign one vendor a 3 (supports better) and the other a 2 (supports worse). If they both support it equally, they both get a 1. This allows you to differentiate between the two vendors' responses. Once you have scored all areas of focus, you can apply the weights to determine which vendor is the right choice: total the scores for each section, then multiply by the weights to get an overall score. Multicast is a technical feature, so the scores for the technical features would be multiplied by the weight assigned to technical features. This allows you to differentiate based on the importance of each area of focus. If one vendor is better on the technical side but worse on the cost side, this will be reflected in the overall score.
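The arithmetic described above can be sketched in a few lines of Python. The vendor names, per-feature scores and weights below are illustrative assumptions, not data from the article.

```python
# Weights agreed on by the stakeholder groups (sample values from above).
WEIGHTS = {"cost": 45, "technical": 35, "operations": 20}

# Per-feature scores for each vendor, grouped by focus area, using the
# 0-3 scale (0 = not supported, 1 = supported, 2 = worse, 3 = better).
# All figures here are made up for illustration.
vendor_scores = {
    "Vendor A": {"cost": [1, 3], "technical": [3, 1, 0], "operations": [2]},
    "Vendor B": {"cost": [1, 2], "technical": [2, 1, 1], "operations": [3]},
}

def overall_score(areas):
    """Total the scores within each focus area, multiply by that area's
    weight, and sum across areas for a single overall score."""
    return sum(WEIGHTS[area] * sum(scores) for area, scores in areas.items())

for vendor, areas in vendor_scores.items():
    print(vendor, overall_score(areas))
```

With these sample numbers, Vendor A scores 360 and Vendor B scores 335, even though Vendor B is stronger on operational support: the heavier cost weight dominates, which is exactly the differentiation the weights are meant to provide.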
The key to this approach is that it lets you score each feature, function and capability on its own merits; the weight assigned to each area of focus then differentiates the results based on your organization's individual needs. It is a systematic, repeatable approach to evaluating vendor responses.
Robbie Harrell (CCIE#3873) is the National Practice Lead for Advanced Infrastructure Solutions for SBC Communications. He has over 10 years of experience providing strategic, business, and technical consulting services to clients. Robbie resides in Atlanta, and is a graduate of Clemson University. His background includes positions as a Principal Architect at International Network Services, Lucent, Frontway and Callisma.
This was first published in November 2004