In my last article I discussed the issues and roadblocks in interconnecting one provider's MPLS cloud with another's to give a customer VPN endpoints on each cloud. Those issues centered on management and SLA adherence rather than the technical aspects of interconnection. Unlike most service delivery issues, the problem here isn't the technology itself but delivery of the service over that technology. The technology for interconnecting provider backbones so a VPN can span multiple clouds is known as Interprovider, or Inter-Autonomous System, support.
Interprovider support allows VPNs to span multiple service provider backbones. The providers accomplish this by exchanging IPv4 reachability information in the form of VPN-IPv4 addresses. The VPN-IPv4 routes are exchanged via EBGP between the providers' Autonomous System Border Routers (ASBRs). Once the VPN-IPv4 routes have been handed off to an autonomous system, they are propagated to that system's PE routers via internal BGP, with an interior gateway protocol such as OSPF or IS-IS providing reachability to the BGP next hops. The EBGP border routers are used primarily to distribute the routes between the autonomous systems. In addition, they handle distribution of the label switching information associated with each route: the border router rewrites the next hop and the MPLS label of each route it receives from the neighboring autonomous system.
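The ASBR's rewrite step can be sketched in a few lines. This is a hypothetical illustration, not a vendor API: the dict-based route format, the function name asbr_rewrite, and the router names are all invented for the example.

```python
# Hypothetical sketch of the ASBR rewrite step; the route format and
# function name are illustrative, not any real router API.

def asbr_rewrite(route, asbr_name, new_label):
    """Re-advertise a VPN-IPv4 route across an AS boundary: the ASBR
    sets itself as the next hop and binds a freshly allocated label."""
    rewritten = dict(route)
    rewritten["next_hop"] = asbr_name  # next hop becomes the ASBR itself
    rewritten["label"] = new_label     # new label allocated by this ASBR
    return rewritten

# A route received from the neighboring autonomous system:
route_from_peer = {"prefix": "10.10.10.0/24", "next_hop": "ASBR1", "label": 20}
readvertised = asbr_rewrite(route_from_peer, "ASBR2", 30)
print(readvertised)
# {'prefix': '10.10.10.0/24', 'next_hop': 'ASBR2', 'label': 30}
```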
Let's assume we have two providers, P1 and P2, that share a customer (C1) with a site on each backbone, CE1 and CE2. CE1 wants to advertise a route to CE2 and vice versa. The two providers are interconnected via ASBR1 and ASBR2, respectively.
Note: there may be any number of MPLS P routers between the PE routers and the ASBR routers.
Below is an example of how the routes and labels are advertised between the autonomous systems. For this example I will trace the advertisement of a route from CE1 on the P1 backbone to CE2 on the P2 backbone.
Provider 1 Route Advertisement
CE1 – PE1: Advertise 10.10.10.0 (normal advertisement via IGP), next hop CE interface
PE1 – ASBR1: Advertise 10.10.10.0 as VPN-IPv4 with label 10, next hop PE1
ASBR1 – ASBR2: Advertise 10.10.10.0 as VPN-IPv4 with label 20, next hop ASBR1
ASBR2 – PE2: Advertise 10.10.10.0 as VPN-IPv4 with label 30, next hop ASBR2
PE2 – CE2: Advertise 10.10.10.0 to CE2, next hop PE2
So in this case, CE2 sees the next hop to 10.10.10.0 as PE2.
PE2 sees the next hop to 10.10.10.0 as ASBR2, regardless of how many P routers sit between PE2 and ASBR2; PE2 uses label switching to forward traffic to ASBR2.
ASBR2 sees the next hop to 10.10.10.0 as ASBR1.
ASBR1 sees the next hop to 10.10.10.0 as PE1.
PE1 sees the next hop as CE1.
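The per-router views above chain together into the full forwarding path. Here is a minimal sketch; the router names mirror the example, and the dict is a stand-in for each router's view of the route, not a real routing table.

```python
# Each router's next hop toward 10.10.10.0, taken from the example above.
NEXT_HOP = {
    "CE2": "PE2",
    "PE2": "ASBR2",   # reached via label switching across any P routers
    "ASBR2": "ASBR1",
    "ASBR1": "PE1",
    "PE1": "CE1",
}

def forwarding_path(start, dest):
    """Follow next-hop entries until the destination router is reached."""
    path = [start]
    while path[-1] != dest:
        path.append(NEXT_HOP[path[-1]])
    return path

print(forwarding_path("CE2", "CE1"))
# ['CE2', 'PE2', 'ASBR2', 'ASBR1', 'PE1', 'CE1']
```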
In essence, the ASBR routers act as gateways of last resort for all routes requiring reachability in the other autonomous system. Any traffic on the provider 1 backbone destined for provider 2 is forwarded to ASBR1, which has all of the provider 2 routes in its routing table and has performed the label bindings needed to forward labeled packets properly.
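In the data plane, this shows up as a chain of label swaps. Below is a hedged sketch of the VPN-label operations implied by the advertisements above; transport labels used to cross each provider's core are omitted, and the table format is invented for illustration.

```python
# (current router, incoming VPN label) -> (next router, outgoing label).
# Labels 30, 20, and 10 are the ones bound during route advertisement.
SWAP = {
    ("PE2", None): ("ASBR2", 30),  # PE2 imposes the label learned from ASBR2
    ("ASBR2", 30): ("ASBR1", 20),  # swap to the label learned from ASBR1
    ("ASBR1", 20): ("PE1", 10),    # swap to the label learned from PE1
    ("PE1", 10): ("CE1", None),    # PE1 pops and forwards natively to CE1
}

hop, label = "PE2", None  # an unlabeled packet from CE2 arrives at PE2
trace = []
while (hop, label) in SWAP:
    hop, label = SWAP[(hop, label)]
    trace.append((hop, label))
print(trace)
# [('ASBR2', 30), ('ASBR1', 20), ('PE1', 10), ('CE1', None)]
```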
As this example shows, setting up the network to exchange VPN routing information between autonomous systems is not especially hard. The difficult part is the actual enforcement of, and adherence to, the SLA and QoS parameters that can be offered across the two backbones.
Robbie Harrell (CCIE#3873) is the National Practice Lead for Advanced Infrastructure Solutions for SBC Communications. He has over 10 years of experience providing strategic, business, and technical consulting services to clients. Robbie resides in Atlanta, and is a graduate of Clemson University. His background includes positions as a Principal Architect at International Network Services, Lucent, Frontway and Callisma.