Variable-length subnet masking
Variable length subnet masking (VLSM) means implementing more than one subnet mask within the same major classful network. It allows for more efficient use of IP address space, in terms of both hosts and subnets. On a network that does not have an abundance of IP address space it can be essential. Implementing different masks within the same major network requires a routing protocol that supports VLSM. Such routing protocols are called classless routing protocols: they carry the mask information along with the route advertisements, thereby allowing more than one mask to be supported.
Examples of classless routing protocols include OSPF, RIP version 2, Cisco's EIGRP, BGP and IS-IS. Consider an example that employs VLSM. Assume that the Class B network 172.16.0.0 is being subnetted with more than one mask.
Similarly, the 172.16.202.x/30 range produces 64 more subnets suitable for point-to-point links. Each range up to and including 172.16.207.x/30 could be used, providing enough subnet address space for 400 serial links. This means that the addressing requirements were met and there is still a considerable amount of address space free. Aim to use contiguous subnets where possible: although it is not essential, it certainly makes very good sense to choose a contiguous range of addresses and apply a particular mask to them. As will be highlighted in the next section, when route summarization is discussed, efficient allocation of IP addresses is not done merely for the sake of neatness; it is often essential for good network design.
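The arithmetic behind this part of the example can be checked with Python's standard-library `ipaddress` module. This is a sketch, not part of the original design exercise: it shows how one /24-sized block of the Class B network, re-masked at /30, yields 64 point-to-point subnets, and how seven such blocks comfortably cover 400 serial links.

```python
import ipaddress

# Subdividing one /24 block of the Class B network with a /30 mask
# yields 2^(30-24) = 64 point-to-point subnets, each with two
# usable host addresses.
block = ipaddress.ip_network("172.16.202.0/24")
p2p_subnets = list(block.subnets(new_prefix=30))

print(len(p2p_subnets))               # 64 subnets per /24 block
print(p2p_subnets[0])                 # 172.16.202.0/30
print(len(list(p2p_subnets[0].hosts())))  # 2 usable addresses per link

# Seven contiguous blocks treated this way cover 7 * 64 = 448
# point-to-point links, enough for the 400 serial links required.
print(7 * len(p2p_subnets))           # 448
```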
Route summarization means summarizing a group of routes into a single route advertisement. The net result of route summarization, and its most obvious benefit, is a reduction in the size of routing tables on the network. This in turn reduces the latency associated with each router hop, since the average speed of a routing table lookup increases with a reduced number of entries. The routing protocol overhead can also be significantly reduced, since fewer routing entries are being advertised. This can become critical as the overall network (and hence the number of subnets) grows.
Apart from reducing routing table sizes, route summarization can also improve the stability of the network by containing the propagation of routing traffic after a network link goes down. If a router is only advertising a summary route to the next downstream router, then it will not advertise changes relating to specific subnets contained within the summarized range. For example, if a router only advertises the summary route 172.16.0.0/16 to its adjacent neighbor, then it will not update that neighbor if it detects a failure on the 172.16.10.0/24 LAN segment. This principle can significantly reduce unnecessary routing updates following a network topology change. Essentially this speeds up convergence, resulting in a more stable network.
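The containment relationship that makes this work can be verified directly with the `ipaddress` module. A minimal sketch: because the failed LAN segment falls inside the advertised summary, the summary route itself is unchanged and nothing new needs to be advertised.

```python
import ipaddress

summary = ipaddress.ip_network("172.16.0.0/16")      # route advertised upstream
failed_lan = ipaddress.ip_network("172.16.10.0/24")  # segment that goes down

# The failed segment is contained within the summary, so the
# advertised route is unaffected and no update is triggered.
print(failed_lan.subnet_of(summary))  # True
```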
In order to implement route summarization that can be arbitrarily configured, a classless routing protocol is required; however, that in itself is not enough. It is imperative to plan the IP addressing scheme such that non-conflicting summarization can be performed at strategic points in the network. These ranges are called contiguous address blocks. For example, a router that connects a group of branch offices to the head office could summarize all of the subnets used by those branch offices into a single route advertisement. If the subnets all fell within the range 172.16.16.0/24 to 172.16.31.0/24, then the range could be summarized as 172.16.16.0/20. This is a contiguous range that also coincides with a perfect bit boundary, thus ensuring that the address range can be summarized in a single statement. Clearly, to maximize the benefits of route summarization, careful address planning is essential.
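The bit-boundary claim can be checked with `ipaddress.collapse_addresses`, which merges contiguous networks into the smallest possible set of supernets. A sketch of the branch-office example above:

```python
import ipaddress

# The sixteen branch-office subnets 172.16.16.0/24 .. 172.16.31.0/24.
branch_subnets = [
    ipaddress.ip_network(f"172.16.{third}.0/24") for third in range(16, 32)
]

# Because the range starts on a /20 bit boundary (16 = 0b00010000)
# and spans exactly 16 subnets, it collapses to a single supernet.
summary = list(ipaddress.collapse_addresses(branch_subnets))
print(summary)  # [IPv4Network('172.16.16.0/20')]
```

Had the range started at, say, 172.16.17.0/24, no single prefix could cover it exactly, which is why address planning on bit boundaries matters.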
Choosing the routing protocol
The importance of selecting the correct IP routing protocol has already been alluded to. Now I will deal with the specific issue of evaluating a routing protocol. Let us examine the characteristics against which a routing protocol is judged.
The routing protocol must exhibit stability against routing loops, which can crash a network as a result of spurious routing information being advertised immediately following a topology change. Hold-down timers are used by the less sophisticated protocols, such as RIP, in order to increase stability. If a subnet goes down, all routers will ignore any further updates about that subnet for the duration of the hold-down timer. Effectively the routing protocol is adopting a 'wait and see' approach following a topology change, to allow the network to stabilize. The downside to using a hold-down timer is slow convergence; however, it can be a necessary evil, since protocols such as RIP do not maintain enough information about the network to converge quickly and reliably.
When a topology change occurs, such as the loss or addition of a subnet, there is a time lapse before every router on the network is aware of this change. During this time interval, which is called the convergence time, some routers are operating off inconsistent information. Hence the convergence time can also be thought of as the time lag from a topology change occurring to the point where all routers in the network have consistent routing information in relation to the affected subnet.
The speed of convergence can vary dramatically on a network depending on a number of factors not least of which are the operational characteristics of the routing protocol itself. Sophisticated link state routing protocols such as Open Shortest Path First (OSPF) maintain a link state database of all subnets on the network detailing what routers are attached to them. If a link goes down the directly attached router will send an immediate Link State Advertisement (LSA) to its neighbor routers and this information floods through the network. Each router, upon receiving the LSA, can consult its database and independently re-calculate the routing table following the topology change. Convergence is fast and reliable as a consequence of OSPF maintaining extensive network topology information above and beyond a routing table. This is distinct from simpler protocols such as RIP, which, as already discussed, require the use of a hold down timer following a topology change in order to ensure a loop-free convergence.
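The re-calculation OSPF performs over its link-state database is the shortest-path-first (Dijkstra) algorithm. A minimal sketch follows, using a hypothetical four-router topology with made-up link costs; the database format (`{router: {neighbor: cost}}`) is an illustrative simplification, not OSPF's actual LSA encoding.

```python
import heapq

def spf(lsdb, source):
    """Minimal shortest-path-first (Dijkstra) run over a link-state
    database given as {router: {neighbor: link_cost}}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, router = heapq.heappop(heap)
        if cost > dist.get(router, float("inf")):
            continue  # stale heap entry
        for neighbor, link_cost in lsdb[router].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Hypothetical four-router topology with OSPF-style link costs.
lsdb = {
    "R1": {"R2": 64, "R3": 10},
    "R2": {"R1": 64, "R4": 10},
    "R3": {"R1": 10, "R4": 64},
    "R4": {"R2": 10, "R3": 64},
}
dist = spf(lsdb, "R1")
print(dist["R3"])  # 10: direct low-cost link
print(dist["R4"])  # 74: two equal-cost paths (10 + 64 and 64 + 10)
```

Because every router holds the same database, each can run this calculation independently after an LSA arrives, rather than waiting out hold-down timers.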
A router that learns multiple paths to a particular destination network (via a routing protocol) will choose the path with the best metric and place that in its routing table. If more than one path shares the best metric, then each of these least-cost paths will be placed in the routing table, and equal-cost load balancing will be performed. Different routing protocols use different metrics; in other words, various routing protocols each have their own way of deciding the best path to a destination. The metric should be sufficiently sophisticated to ensure that the routing protocol's interpretation of the best path is a realistic one. RIP uses hop count as its metric, and this is yet another limitation of that particular routing protocol. For example, if a router had two paths to a destination where one path was a 56k link and the other a T-1, RIP would see each path as equal cost if the number of router hops were equal. Thus, RIP would load balance even though one path is roughly 27 times faster than the other. OSPF uses an administrative cost metric that can be configured arbitrarily. On Cisco routers it is automatically calculated to be inversely proportional to the bandwidth of the link. Nortel takes an alternative approach, keeping the OSPF cost equal by default on all links. The network administrator then configures the value on the router interface to relate inversely to the speed of the link.
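Cisco's default calculation divides a 100 Mbps reference bandwidth by the interface bandwidth (with a minimum cost of 1). A sketch of that formula shows why the 56k link and the T-1 above are no longer treated as equals once the metric reflects bandwidth:

```python
# Cisco's default OSPF cost: a 100 Mbps reference bandwidth divided
# by the interface bandwidth in bits per second, with a floor of 1.
REFERENCE_BW = 100_000_000

def ospf_cost(bandwidth_bps):
    return max(1, REFERENCE_BW // bandwidth_bps)

# RIP would count one hop either way; OSPF distinguishes the speeds.
print(ospf_cost(56_000))     # 56k link -> cost 1785
print(ospf_cost(1_544_000))  # T-1 link -> cost 64
```

With these costs, only the T-1 path enters the routing table; no traffic is misdirected onto the slow link.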
The significance of VLSM has already been demonstrated. Classless routing protocols support VLSM since they carry the mask in the routing updates. Standardized classless IP routing protocols include OSPF and RIP version 2. RIP version 1 is considered a classful routing protocol since it does not include the subnet mask within the routing update.
A routing protocol should support configurable route summarization. The significance of being able to configure route summarization at strategic points in the network has already been described. Apart from configurable route summarization, some protocols exhibit automatic route summarization. This feature is not necessarily as good as it sounds, and in some cases it can be decidedly problematic. Classful routing protocols such as RIP v1 automatically summarize based on class when advertising across a major network boundary. For example, subnets of 172.16.0.0 would be advertised as a single route to the 172.16.0.0/16 Class B network if the router were advertising across a link that was part of anything other than this particular Class B network. This is necessary with classful routing protocols: because they do not advertise the mask, the downstream router has no way of deducing the subnet mask if it does not have interfaces in that major network.
Hence it must be assumed (usually incorrectly) that no subnetting is taking place. Automatic route summarization can potentially cause problems if summarization occurs at more than one point in the network, since the summarized routes may be in conflict. This scenario occurs when a router receives identical summary routes from opposite directions, and is commonly referred to as a discontiguous network. You can think of discontiguous as meaning 'broken up' by another network. If a major network such as 172.16.0.0 were discontiguous, then routers in the intermediate network (addressed as part of some other major network) would receive 172.16.0.0/16 summary routes from opposite directions. These routers would attempt to load share across the two routes. In actual fact there would be serious connectivity problems: TCP-based applications would require re-transmissions for every wrong routing choice, and UDP applications simply wouldn't work!
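The classful assumption a protocol like RIP v1 must make can be sketched in a few lines: with no mask in the update, the receiving router can only fall back on the address class implied by the first octet. The helper function below is illustrative, not part of any real RIP implementation.

```python
import ipaddress

def classful_network(address):
    """Derive the classful (Class A/B/C) network implied by an
    address, as a classful protocol must when no mask is carried
    in the routing update."""
    first_octet = int(address.split(".")[0])
    if first_octet < 128:
        prefix = 8    # Class A
    elif first_octet < 192:
        prefix = 16   # Class B
    else:
        prefix = 24   # Class C
    return ipaddress.ip_network(f"{address}/{prefix}", strict=False)

# Any subnet of 172.16.0.0 advertised across a foreign major network
# boundary is summarized to the Class B network itself.
print(classful_network("172.16.10.0"))  # 172.16.0.0/16
```

When two separated parts of 172.16.0.0 each produce this same /16 advertisement, the intermediate routers have no way to tell which subnets lie in which direction; this is the discontiguous-network problem described above.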
The difference between a classful and classless routing protocol is very simple. Classless protocols include the mask in the update while classful protocols do not. The preceding discussion however should have highlighted the fact that the consequences of this simple difference are far-reaching. Classful protocols such as RIP version 1 do not support VLSM, discontiguous networks or configurable route summarization, and are therefore unsuitable for modern networks.
The question of scalability relates to the ability of the routing protocol to adequately support network operation as the network grows with the addition of more IP subnets. Issues such as convergence speed and support for VLSM and configurable route summarization ultimately determine the scalability of the routing protocol. The efficiency with which routing information is exchanged is also relevant. Distance vector protocols such as RIP periodically broadcast the entire routing table to neighbor routers. The more sophisticated protocols only advertise event-driven topology changes once the initial routing information has been exchanged, clearly a more efficient mechanism.
Open Shortest Path First (OSPF)
OSPF is a very complex IP routing protocol and a full explanation of its operation is beyond the scope of this article. However it is worth summarizing the advantages that it provides over distance vector routing protocols such as RIP. If one word were needed to justify the employment of OSPF, it would be scalability. There are a number of reasons why OSPF is suitable for large and growing networks and they are all in some way inter-related.
Hierarchical Structure: OSPF supports the ability to divide the network into multiple areas that have a certain degree of autonomy from each other. In such a structure there is a backbone area (which is always designated as Area 0) and a number of other areas that, barring exceptional cases, must directly attach to Area 0. A consequence of a well-planned hierarchical design is that each area's routes can be summarized into contiguous blocks. OSPF also supports the ability to summarize routes that are redistributed from another routing protocol.
Speed of Convergence: Each router running OSPF maintains a database of the logical topology of the network. The database details every link, LAN segment and router on the network. This increased intelligence of OSPF means that it can converge faster without having to resort to the crude convergence methods of distance vector protocols.
Efficient update processing: Incremental updates are sent when there is a network topology change rather than using periodic updates. OSPF also uses well-known multicast addresses rather than broadcasts to transfer routing information.
VLSM: Since it is a classless protocol, OSPF supports VLSM, allowing for an efficient use of IP address space.
Okay, so I have now alluded to all of OSPF's advantages. However almost every networking protocol is a double-edged sword to at least some extent and OSPF is no different. There are two potential disadvantages of OSPF that deserve consideration:
Resource Utilization: OSPF increases router memory requirements due to the fact that each OSPF router maintains a topological database of the network. The routing table is calculated from this database, which consumes more memory than the routing table itself.
Running OSPF also increases the average router CPU utilization. In order to recalculate the routing table following a topology change, the Shortest Path First (SPF) algorithm is run. This is a processor-intensive activity that could potentially strain the performance of low-end routers.
Design restrictions: For a large network that also needs to incorporate scope for growth, multiple OSPF areas should normally be used. There are certain rules governing how traffic moves between these areas, and this can impose some design restrictions.
OSPF provides a facility whereby a network can be segregated into multiple areas. The whole idea behind this concept is to reduce the memory and CPU overhead associated with running the protocol. A router running OSPF in a multi-area implementation retains the database for its local area rather than for the entire network. This reduces memory consumption, and it exploits the fact that, on a well-designed network, it is usually unnecessary for a router to have full details of sections of the network that are very remote. For this same reason, updates are flooded only within the local area after a topology change, thus reducing routing traffic and the CPU consumption associated with frequent and often unnecessary route re-calculations.
This was first published in January 2006