MPLS Working Group                                       N. Leymann, Ed.
Internet-Draft                                       Deutsche Telekom AG
Intended status: Informational                               B. Decraene
Expires: April 25, 2013                                   France Telecom
                                                             C. Filsfils
                                                      M. Konstantynowicz
                                                           Cisco Systems
                                                            D. Steinberg
                                                    Steinberg Consulting
                                                        October 22, 2012

                       Seamless MPLS Architecture
                   draft-ietf-mpls-seamless-mpls-02
Abstract

This document describes an architecture which can be used to extend
MPLS networks to integrate access and aggregation networks into a
single MPLS domain ("Seamless MPLS").  The Seamless MPLS approach is
based on existing and well-known protocols.  It provides a highly
flexible and scalable architecture and supports the integration of
hundreds of thousands of nodes.  The separation of the service and
transport plane is one of the key elements; Seamless MPLS provides
end to end service
[...]
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 25, 2013.
Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
[...]
     5.1.1.  Overview . . . . . . . . . . . . . . . . . . . . . . . 17
     5.1.2.  General Network Topology . . . . . . . . . . . . . . . 17
     5.1.3.  Hierarchy  . . . . . . . . . . . . . . . . . . . . . . 18
     5.1.4.  Intra-Area Routing . . . . . . . . . . . . . . . . . . 19
       5.1.4.1.  Core . . . . . . . . . . . . . . . . . . . . . . . 19
       5.1.4.2.  Aggregation  . . . . . . . . . . . . . . . . . . . 19
     5.1.5.  Access . . . . . . . . . . . . . . . . . . . . . . . . 19
       5.1.5.1.  LDP Downstream-on-Demand (DoD) . . . . . . . . . . 20
     5.1.6.  Inter-Area Routing . . . . . . . . . . . . . . . . . . 21
     5.1.7.  Labeled iBGP next-hop handling . . . . . . . . . . . . 22
     5.1.8.  Network Availability . . . . . . . . . . . . . . . . . 23
       5.1.8.1.  IGP Convergence  . . . . . . . . . . . . . . . . . 23
       5.1.8.2.  Per-Prefix LFA FRR . . . . . . . . . . . . . . . . 24
       5.1.8.3.  Hierarchical Dataplane and BGP Prefix
                 Independent Convergence  . . . . . . . . . . . . . 24
       5.1.8.4.  BGP Egress Node FRR  . . . . . . . . . . . . . . . 25
       5.1.8.5.  Assessing loss of connectivity upon any failure  . 25
       5.1.8.6.  Network Resiliency and Simplicity  . . . . . . . . 29
       5.1.8.7.  Conclusion . . . . . . . . . . . . . . . . . . . . 30
     5.1.9.  BGP Next-Hop Redundancy  . . . . . . . . . . . . . . . 30
   5.2.  Scalability Analysis . . . . . . . . . . . . . . . . . . . 31
     5.2.1.  Control and Data Plane State for Deployment
             Scenario #1  . . . . . . . . . . . . . . . . . . . . . 31
       5.2.1.1.  Introduction . . . . . . . . . . . . . . . . . . . 31
       5.2.1.2.  Core Domain  . . . . . . . . . . . . . . . . . . . 32
       5.2.1.3.  Aggregation Domain . . . . . . . . . . . . . . . . 33
       5.2.1.4.  Summary  . . . . . . . . . . . . . . . . . . . . . 34
       5.2.1.5.  Numerical application for use case #1  . . . . . . 35
       5.2.1.6.  Numerical application for use case #2  . . . . . . 35
   6.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 36
   7.  IANA Considerations  . . . . . . . . . . . . . . . . . . . . 36
   8.  Security Considerations . . . . . . . . . . . . . . . . . . 37
     8.1.  Access Network Security  . . . . . . . . . . . . . . . . 37
     8.2.  Data Plane Security  . . . . . . . . . . . . . . . . . . 37
     8.3.  Control Plane Security . . . . . . . . . . . . . . . . . 38
   9.  References . . . . . . . . . . . . . . . . . . . . . . . . . 39
     9.1.  Normative References . . . . . . . . . . . . . . . . . . 39
     9.2.  Informative References . . . . . . . . . . . . . . . . . 39
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 41
1.  Introduction

MPLS, as a mature and well-known technology, is widely deployed in
today's core and aggregation/metro area networks.  Many metro area
networks are already based on MPLS, delivering Ethernet services to
residential and business customers.  Until now those deployments have
usually been done in separate domains; e.g. core and metro area
networks are handled as separate MPLS domains.
[...]
the overall Seamless MPLS architecture since it creates the required
hierarchy and enables the hiding of all aggregation and access
addresses behind the ABRs from an IGP point of view.  Leaking of
aggregation ISIS L1 loopback addresses into ISIS L2 is not necessary
and MUST NOT be allowed.

The resulting hierarchical inter-domain MPLS routing structure is
similar to the one described in [RFC4364] section 10c, except that we
use one AS with route reflection instead of using multiple ASes.
5.1.8.  Network Availability

The Seamless MPLS architecture guarantees sub-second loss of
connectivity upon any link or node failure.  Furthermore, in the
vast majority of cases, the loss of connectivity is limited to
sub-50msec.

These network availability properties are provided without any
degradation of scale or simplicity.  This is a key achievement of
the design.

In the remainder of this section, we first introduce the different
network availability technologies and then review their applicability
for each possible failure scenario.

5.1.8.1.  IGP Convergence
[...]
Per-Prefix LFA FRR is generally assessed as a simple technology for
the operator [I-D.filsfils-rtgwg-lfa-applicability].  It certainly is
in the context of deployment case study 1, as the designer enforced
triangle and full-mesh topologies in the aggregation network as well
as a dual-plane core network.
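The loop-free condition that per-prefix LFA evaluates can be illustrated with a small sketch.  This is not part of the draft: it is a minimal illustration, assuming the standard LFA inequality (a neighbor N of source S is a loop-free alternate for destination D when dist(N,D) < dist(N,S) + dist(S,D)), applied to a triangle topology like the one enforced in the aggregation network of case study 1.  Node names and metrics are hypothetical.

```python
# Illustrative sketch (hypothetical topology): per-prefix LFA candidate
# selection using the loop-free inequality
#   dist(N, D) < dist(N, S) + dist(S, D)
# i.e. neighbor N reaches destination D without looping back through S.
import heapq

def spf(graph, src):
    """Plain Dijkstra: shortest-path distance from src to every node."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def per_prefix_lfas(graph, s):
    """For each destination, list neighbors of s satisfying the inequality.
    (A real implementation would also exclude the primary next hop.)"""
    d_s = spf(graph, s)
    d_n = {n: spf(graph, n) for n in graph[s]}
    return {dest: [n for n in graph[s]
                   if d_n[n][dest] < d_n[n][s] + d_s[dest]]
            for dest in graph if dest != s}

# Triangle topology, as enforced in the aggregation network of case study 1.
topo = {
    "AGN1": {"AGN2": 10, "ABR": 10},
    "AGN2": {"AGN1": 10, "ABR": 10},
    "ABR":  {"AGN1": 10, "ABR": 10} if False else {"AGN1": 10, "AGN2": 10},
}
print(per_prefix_lfas(topo, "AGN1"))
```

In a triangle or full mesh every destination ends up with at least one loop-free candidate, which is why the case study 1 design achieves full LFA coverage without extra mechanisms.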
5.1.8.3.  Hierarchical Dataplane and BGP Prefix Independent Convergence

In a hierarchical dataplane, the FIB used by the packet processing
engine reflects the recursions between the routes.  For example, a
BGP route B recursing on IGP route I, whose best path is via
interface O, is encoded as a hierarchy: a FIB entry B pointing to a
FIB entry I pointing to a FIB entry O.
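The benefit of this indirection can be sketched in a few lines.  This is an illustration only, with hypothetical entry names: many BGP routes share one IGP entry, so rewriting that single entry after IGP convergence redirects all of them at once.

```python
# Illustrative sketch (hypothetical names): hierarchical FIB where BGP
# entries B recurse on an IGP entry I, which resolves to an interface O.
fib_ifaces = {"I-entry": "O-eth0"}       # IGP route -> outgoing interface
fib_bgp = {                              # BGP route -> IGP route it recurses on
    "B-prefix-1": "I-entry",
    "B-prefix-2": "I-entry",
}

def forward(bgp_prefix):
    """Resolve a BGP route through the FIB hierarchy to an interface."""
    return fib_ifaces[fib_bgp[bgp_prefix]]

print(forward("B-prefix-1"))             # resolves via O-eth0

# IGP convergence: the best path for I now leaves via another interface.
# One in-place update; no per-BGP-prefix FIB rewrite is needed.
fib_ifaces["I-entry"] = "O-eth1"
print(forward("B-prefix-1"), forward("B-prefix-2"))
```

This is the property the text calls out: upon IGP convergence, recursive BGP routes immediately follow the updated IGP path through the dataplane indirection.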
BGP Prefix Independent Convergence [BGP-PIC] extends the hierarchical
dataplane with the concept of a BGP Path-List.  A BGP path-list may
be abstracted as a set of primary multipath nhops and a backup nhop.
When the primary set is empty, packets destined to the BGP
destinations are rerouted via the backup nhop.

For a complete description of the BGP-PIC technology and its
applicability, please refer to [BGP-PIC].

Hierarchical data plane and BGP-PIC are very simple technologies to
operate.  Their applicability to any topology, any routing policy and
any BGP unicast address family allows router vendors to enable this
behavior by default.

5.1.8.4.  BGP Egress Node FRR

BGP egress node FRR is a Fast ReRoute solution and hence relies on
local protection and on the precomputation and preinstallation of the
backup path in the FIB.  BGP egress node FRR relies on a transit LSR
(Point of Local Repair, PLR) adjacent to the failed protected BGP
router to detect the failure and re-route the traffic to the backup
BGP router.  A number of BGP egress node FRR schemes are being
investigated: [PE-FRR], [ABR-FRR],
[I-D.draft-minto-2547-egress-node-fast-protection-00].

Differences between these schemes relate to the way the backup and
protected BGP routers get associated, how the protected router's BGP
state is signalled to the backup BGP router(s), and whether any other
state is required on the protected, backup and PLR routers.  The
schemes also differ in their compatibility with IPFRR and TE FRR
schemes to enable the PLR to switch traffic towards the backup BGP
router in case of protected BGP router failure.

In the Seamless MPLS design, BGP egress node FRR schemes can protect
against the failures of PE, AGN and ABR nodes with no requirements on
ingress routers.
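The prefix-independent property of the BGP path-list abstraction can be sketched as follows.  This is an illustrative model only (router names and counts are hypothetical): many prefixes share one path-list object, so enabling the precomputed backup is a single in-place update, independent of the number of BGP prefixes.

```python
# Illustrative sketch (hypothetical values): a shared BGP Path-List holding
# a primary multipath nhop set and a precomputed backup nhop.  100k BGP
# prefixes reference the same path-list, so the switch to the backup is one
# update regardless of prefix count ("prefix independent convergence").
class PathList:
    def __init__(self, primary, backup):
        self.primary = set(primary)   # primary multipath nhops
        self.backup = backup          # precomputed second-best nhop

    def active_nhop(self):
        # When the primary set is empty, reroute via the backup nhop.
        return next(iter(self.primary)) if self.primary else self.backup

pl = PathList(primary={"ABR1"}, backup="ABR2")
prefixes = {f"prefix-{i}": pl for i in range(100_000)}

pl.primary.discard("ABR1")            # failure of remote ABR1 learned via IGP
assert all(p.active_nhop() == "ABR2" for p in prefixes.values())
print("all 100k prefixes rerouted via", pl.active_nhop())
```

The single `discard` on the shared object is the dataplane analogue of updating one path-list entry rather than 100k per-prefix FIB entries.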
5.1.8.5.  Assessing loss of connectivity upon any failure

We select two typical traffic flows and analyze the loss of
connectivity (LoC) upon each possible failure in the Seamless MPLS
design in deployment scenario #1.

o  Flow F1 starts from an AN1 in a left aggregation region and ends
   on an AN2 in a right aggregation region.  Each AN is dual-homed to
   two AGNs.

o  Flow F2 starts from a CE1 homed on L3VPN PE1 connected to the core
   LSRs and ends at CE2 dual-homed to L3VPN PE2 and PE3, both
   connected to the core LSRs.

Note that due to the symmetric network topology in case study 1,
unidirectional flows F1' and F2', associated with F1 and F2 and
forwarded in the reverse direction (AN2 to AN1 right-to-left and PE2
to PE1, respectively), take advantage of the same failure restoration
mechanisms as F1 and F2.
5.1.8.5.1.  AN1-AGN link failure or AGN node failure

F1 is impacted, but LoC <50msec is possible assuming fast BFD
detection and a fast-switchover implementation on the AN.  F2 is not
impacted.
5.1.8.5.2.  Link or node failure within the left aggregation region

F1 is impacted but LoC <50msec thanks to LFA FRR.  No uloop will
[...]
flow F1.

Note: remember that the left region receives the routes to all the
remote ABRs and that the labelled BGP routes are reflected from the
core to the left region with next-hop unchanged.  This ensures that
the loss of the (local) ABR between the left region and the core is
seen as an IGP route impact and hence can be addressed by LFA.

Note: if LFA is not available (topology other than case study 1) or
if LFA is not enabled, then the LoC would be sub-second, as the
number of impacted important IGP routes in a seamless architecture is
much smaller than 2960 routes.

F2 is not impacted.
5.1.8.5.4.  Link or node failure within the core region

F1 and F2 are impacted, but LoC <50msec thanks to LFA FRR.

This is specific to the particular core topology used in deployment
case study 1.  The core topology has been optimized
[I-D.filsfils-rtgwg-lfa-applicability] for LFA applicability.

As explained in [I-D.filsfils-rtgwg-lfa-applicability], another
alternative to provide <50msec in this case consists in using an
MPLS-TE full mesh and MPLS-TE FRR.  This is required when the
designer is not able or does not want to optimize the topology for
LFA applicability but still wants to achieve <50msec protection.

Alternatively, simple IGP convergence would ensure a LoC of less than
one second, as the number of impacted important IGP routes in a
seamless architecture is much smaller than 2960 routes.
5.1.8.5.5.  PE2 failure

F1 is not impacted.

F2 is impacted and the LoC is sub-300msec thanks to IGP convergence
and BGP PIC.

The detection of the primary nhop failure (PE2 down) is performed by
a single-area IGP convergence.
In this specific case, the convergence should be much faster than one
second, as very few prefixes are impacted upon an edge node failure.

Reusing the introduction on IGP convergence presented in an earlier
section and assuming 2 important impacted prefixes (two loopbacks per
edge node), one would expect that PE2's failure is detected in
260msec + 2*0.250msec.
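As a rough illustration of this convergence model, the following
sketch (illustrative only; the 260msec base budget for detection,
flooding and SPF, and the 0.250msec per-prefix FIB update figure are
the ones used in this section, while the function name is invented)
computes the estimate:

```python
def igp_convergence_estimate_msec(num_impacted_prefixes,
                                  base_msec=260.0,
                                  per_prefix_msec=0.250):
    """Rough single-area IGP convergence estimate.

    base_msec covers failure detection, LSP/LSA flooding and SPF;
    per_prefix_msec covers the FIB update of each impacted prefix.
    """
    return base_msec + num_impacted_prefixes * per_prefix_msec

# PE2 failure: two impacted loopbacks per edge node
print(igp_convergence_estimate_msec(2))     # -> 260.5
# 2960 impacted routes (the reference figure used in the text)
print(igp_convergence_estimate_msec(2960))  # -> 1000.0
```

The second call shows why keeping the number of important IGP routes
far below 2960 keeps the LoC well under one second.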
If BGP PIC is used on the ingress PE (PE1), then the LoC is the same
as for IGP convergence.  The LoC for BGP/L3VPN traffic upon PE2
failure is thus expected to be <300msec.
Provided that all the deployment considerations have been met, LoC is
sub-50msec with BGP egress node FRR.

5.1.8.5.6.  PE2's PE-CE link failure

F1 is not impacted.

F2 is impacted and the LoC is sub-50msec thanks to local interface
failure detection and local forwarding to the backup PE.  Forwarding
to the backup PE is achieved with a hierarchical data plane and
local-repair of the BGP egress link, providing fast re-route to the
backup BGP nhop PE.
5.1.8.5.7.  ABR node failure between right region and the core

F2 is not impacted.

F1 is impacted.  We analyze the LoC for F1 for both BGP PIC and BGP
egress node FRR.

LoC is sub-600msec thanks to BGP PIC.

The detection of the primary nhop failure (ABR down) is performed by
a multi-area IGP convergence.

First, the two (local) ABRs between the left and core regions must
complete the core IGP convergence.  The analysis is similar to the
loss of PE2.  We would thus expect that the core convergence
completes in ~260msec.

Second, the IGP convergence in the left region will cause all AGN1
routers to detect the loss of the remote ABR.  This second IGP
convergence is very similar to the first one (2 important prefixes to
remove) and hence should also complete in ~260msec.
Once an AGN1 has detected the loss of the remote ABR, thanks to BGP
PIC, in-place modification of the shared BGP path-list and pre-
computation of the BGP backup nhop, the AGN1 reroutes flow F1 via the
alternate remote ABR in a few msec [BGP-PIC].

As a consequence, the LoC for F1 upon remote ABR failure is thus
expected to be <600msec.
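The shared path-list behaviour described above can be sketched as
follows (an illustrative model, not router code; all class and
variable names are invented): many BGP prefixes point at one shared
path-list, so activating the pre-computed backup nhop is a single
in-place update rather than a per-prefix FIB rewrite:

```python
class SharedPathList:
    """Toy model of a shared BGP path-list with a pre-computed backup."""
    def __init__(self, primary_nhop, backup_nhop):
        self.primary_nhop = primary_nhop   # e.g. the remote ABR1
        self.backup_nhop = backup_nhop     # pre-computed alternate ABR2
        self.active = primary_nhop

    def primary_failed(self):
        # Triggered when the IGP withdraws the primary nhop's loopback.
        self.active = self.backup_nhop

path_list = SharedPathList("ABR1", "ABR2")
# 1000 prefixes all resolve via the same shared path-list object.
prefixes = {f"10.0.{i}.0/24": path_list for i in range(1000)}

path_list.primary_failed()  # one update re-routes all 1000 prefixes
assert all(pl.active == "ABR2" for pl in prefixes.values())
```

This is why the reroute completes in a few msec independently of the
number of BGP prefixes resolving via the failed next hop.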
Provided that all the deployment considerations have been met, LoC is
sub-50msec with BGP egress node FRR.
5.1.8.5.8.  Link or node failure within the right aggregation region

F1 is impacted but LoC <50msec thanks to LFA FRR.  No uloop will
occur during the IGP convergence following the LFA protection.

Note: if LFA is not available (a topology other than case study 1) or
if LFA is not enabled, then the LoC would be less than one second, as
the number of impacted important IGP routes in a seamless
architecture is much smaller than 2960 routes.
interoperability testing.
More specifically, [I-D.filsfils-rtgwg-lfa-applicability] plays a key
role in the Seamless MPLS architecture as it describes simple design
guidelines which deterministically ensure LFA coverage for any link
and node in the aggregation regions of the network.  This is key as
it provides for a simple <50msec protection for the vast majority of
the node and link failures (>90% of the IGP/BGP3107 footprint at
least).
If the guidelines cannot be met, then the designer will rely on
either (1) augmenting native LFA coverage with remote LFA
[I-D.draft-ietf-rtgwg-remote-lfa-00], (2) augmenting native LFA
coverage with RSVP, (3) a full-mesh TE FRR model, or (4) IGP
convergence.  The first option provides automatic and fairly simple
sub-50msec protection, like LFA, without introducing any additional
protocols.  The second option provides the same sub-50msec protection
as LFA, but introduces additional RSVP LSPs.  The third option
optimizes for sub-50msec protection, but implies a more complex
operational model.  The fourth option optimizes for simple operation
but only provides <1 sec protection.  It is up to each designer to
arbitrate between these four options versus the possibility to
engineer the topology for native LFA protection.
A similar choice involves protection against ABR node failure and
L3VPN PE node failure.  The designer can either use BGP PIC or BGP
egress node FRR.  It is up to each designer to assess the trade-off
between the value of sub-50msec instead of sub-1sec restoration
versus the additional operational considerations related to BGP
egress node FRR.
5.1.8.7.  Conclusion

The Seamless MPLS architecture illustrated in deployment case study 1
guarantees sub-50msec for the majority of link and node failures by
using LFA FRR, except ABR and L3PE node failures, and PE-CE link
failure.

L3VPN PE-CE link failure can be protected with sub-50msec
restoration, by using a hierarchical data plane and local-repair
fast-reroute to the backup BGP nhop PE.

ABR and L3PE node failure can be protected with sub-50msec
restoration, by using BGP egress node FRR.

Alternatively, ABR and L3PE node failure can be protected with
sub-1sec restoration using BGP PIC.
5.1.9.  BGP Next-Hop Redundancy

An aggregation domain is connected to the core network using two
redundant area border routers, and MPLS hierarchy is applied on these
ABRs.  MPLS hierarchy helps scale the FIB but introduces additional
complexity for the rerouting in case of ABR failure.
Indeed, ABR failure requires a BGP convergence to update the inner
MPLS hierarchy, in addition to the IGP convergence to update the
outer MPLS hierarchy.  This is also expected to take more time, as
BGP convergence is performed after the IGP convergence and because
the number of prefixes to update in the FIB can be significant.  This
is clearly a drawback, but the architecture allows for two "local
protection" solutions which restore the traffic before the BGP
convergence takes place.
BGP PIC would be required on all edge LSRs involved in the inner
(BGP) MPLS hierarchy, namely all routers except the ANs, which are
not involved in the inner MPLS hierarchy.  It involves pre-computing
and pre-installing in the FIB the BGP backup path.  Such a backup
path is activated when the IGP advertises the failure of the primary
path.  For a specification see [BGP-PIC].
BGP egress node FRR would be required on the egress LSRs involved in
the inner (BGP) MPLS hierarchy, namely the AGNs, ABRs and L3VPN PEs.
For specifications see [PE-FRR], [ABR-FRR] and
[I-D.draft-bashandy-bgp-edge-node-frr-03].
Both approaches have their pros and cons, and the choice is left to
each Service Provider or deployment based on the different
requirements.  The key point is that the Seamless MPLS architecture
can handle fast restoration times, even for ABR failures.
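As a hypothetical illustration of the egress fast-reroute idea (a
simplified model of the context-label mechanism referenced above;
every name, label value and next hop below is invented for the
sketch), the backup ABR resolves labels allocated by the failed
nominal ABR in a separate contextual FIB rather than in its own label
space:

```python
class BackupABR:
    """Toy model of an egress-FRR backup ABR with a contextual MPLS FIB.

    ABR1 (nominal) and ABR2 (backup) share an anycast loopback.  When
    ABR1 fails, the point of local repair forwards ABR1-labelled
    traffic to ABR2, which must look each label up in a context FIB
    mirroring ABR1's label allocations, not in its own FIB.
    """
    def __init__(self):
        self.own_fib = {}      # labels ABR2 allocated itself
        self.context_fib = {}  # labels the nominal ABR1 allocated

    def learn_nominal_label(self, label, next_hop):
        # Populated in advance from ABR1's label bindings.
        self.context_fib[label] = next_hop

    def forward(self, label, via_anycast=False):
        # Rerouted traffic arrives on the anycast loopback and is
        # resolved in ABR1's label space.
        fib = self.context_fib if via_anycast else self.own_fib
        return fib[label]

abr2 = BackupABR()
abr2.learn_nominal_label(3001, "AGN-right")  # mirrored from ABR1
print(abr2.forward(3001, via_anycast=True))  # -> AGN-right
```

The pre-populated context FIB is what allows restoration before any
BGP convergence takes place.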
5.2.  Scalability Analysis

5.2.1.  Control and Data Plane State for Deployment Scenario #1

5.2.1.1.  Introduction

Let's call:

o  #AN the number of Access Node (AN) in the seamless MPLS domain
o  TN MPLS LFIB 150

o  RR BGP NLRI 40 000

o  RR BGP paths 80 000

6.  Acknowledgements
Many people contributed to this document.  The authors would like to
thank Wim Henderickx, Robert Raszuk, Thomas Beckhaus, Wilfried Maas,
Roger Wenner, Kireeti Kompella, Yakov Rekhter, Mark Tinka and Simon
DeLord for their suggestions and review.
7.  IANA Considerations

This memo includes no request to IANA.
access locations with lower physical security, the designer could
also consider using:

o  different crypto keys for use in authentication procedures for
   these locations.

o  stricter network protection mechanisms including DoS protection,
   interface and session flap dampening.
9.  References

9.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.

9.2.  Informative References
[ABR-FRR]  Rekhter, Y., "Local Protection for LSP tail-end node
           failure", MPLS World Congress 2009.

[ACM01]    "Achieving sub-second IGP convergence in large IP
           networks", ACM SIGCOMM Computer Communication Review, v.35
           n.3, July 2005.

[BGP-PIC]  "BGP PIC", Technical Report, November 2007.
[I-D.draft-bashandy-bgp-edge-node-frr-03]
"".
[I-D.draft-bashandy-bgp-frr-mirror-table-00]
"".
[I-D.draft-ietf-rtgwg-remote-lfa-00]
"".
[I-D.draft-minto-2547-egress-node-fast-protection-00]
"".
[I-D.filsfils-rtgwg-lfa-applicability]
           Filsfils, C., Francois, P., Shand, M., Decraene, B.,
           Uttaro, J., Leymann, N., and M. Horneffer, "LFA
           applicability in SP networks",
           draft-filsfils-rtgwg-lfa-applicability-00 (work in
           progress), March 2010.

[I-D.ietf-bfd-v4v6-1hop]
           Katz, D. and D. Ward, "BFD for IPv4 and IPv6 (Single
           Hop)", draft-ietf-bfd-v4v6-1hop-11 (work in progress),
           January 2010.
[I-D.ietf-mpls-ldp-dod]
           Beckhaus, T., Decraene, B., Tiruveedhula, K.,
           Konstantynowicz, M., and L. Martini, "LDP Downstream-on-
           Demand in Seamless MPLS", draft-ietf-mpls-ldp-dod-03 (work
           in progress), August 2012.
[I-D.kothari-henderickx-l2vpn-vpls-multihoming]
           Kothari, B., Kompella, K., Henderickx, W., and F. Balus,
           "BGP based Multi-homing in Virtual Private LAN Service",
           draft-kothari-henderickx-l2vpn-vpls-multihoming-01 (work
           in progress), July 2009.

[I-D.narten-iana-considerations-rfc2434bis]
           Narten, T. and H. Alvestrand, "Guidelines for Writing an
           IANA Considerations Section in RFCs",
           Aggarwal, R., Isaac, A., Uttaro, J., Henderickx, W., and
           F. Balus, "BGP MPLS Based MAC VPN",
           draft-raggarwa-mac-vpn-01 (work in progress), June 2010.

[I-D.sajassi-l2vpn-rvpls-bgp]
           Sajassi, A., Patel, K., Mohapatra, P., Filsfils, C., and
           S. Boutros, "Routed VPLS using BGP",
           draft-sajassi-l2vpn-rvpls-bgp-01 (work in progress),
           July 2010.
[PE-FRR]   Le Roux, J., Decraene, B., and Z. Ahmad, "Fast Reroute in
           MPLS L3VPN Networks - Towards CE-to-CE Protection", MPLS
           2006 Conference.

[RFC2629]  Rose, M., "Writing I-Ds and RFCs using XML", RFC 2629,
           June 1999.

[RFC3031]  Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
           Label Switching Architecture", RFC 3031, January 2001.

[RFC3107]  Rekhter, Y. and E. Rosen, "Carrying Label Information in
Brussels,
Belgium

Phone:
Fax:
Email: cfilsfil@cisco.com
URI:

Maciek Konstantynowicz
Cisco Systems
London,
United Kingdom

Phone:
Fax:
Email: maciek@cisco.com
URI:
Dirk Steinberg
Steinberg Consulting
Ringstrasse 2
Buchholz 53567