--- 1/draft-ietf-mpls-seamless-mpls-01.txt	2012-10-23 01:14:29.905343246 +0200
+++ 2/draft-ietf-mpls-seamless-mpls-02.txt	2012-10-23 01:14:29.977343140 +0200
@@ -1,24 +1,24 @@
 MPLS Working Group                                      N. Leymann, Ed.
 Internet-Draft                                      Deutsche Telekom AG
 Intended status: Informational                              B. Decraene
-Expires: September 13, 2012                              France Telecom
+Expires: April 25, 2013                                  France Telecom
                                                             C. Filsfils
                                                      M. Konstantynowicz
                                                           Cisco Systems
                                                            D. Steinberg
                                                    Steinberg Consulting
-                                                         March 12, 2012
+                                                        October 22, 2012

                        Seamless MPLS Architecture
-                   draft-ietf-mpls-seamless-mpls-01
+                   draft-ietf-mpls-seamless-mpls-02

 Abstract

    This document describes an architecture which can be used to extend
    MPLS networks to integrate access and aggregation networks into a
    single MPLS domain ("Seamless MPLS").  The Seamless MPLS approach is
    based on existing and well-known protocols.  It provides a highly
    flexible and scalable architecture and the possibility to integrate
    100.000s of nodes.  The separation of the service and transport
    plane is one of the key elements; Seamless MPLS provides end to end
    service

@@ -38,21 +38,21 @@

    Internet-Drafts are working documents of the Internet Engineering
    Task Force (IETF).  Note that other groups may also distribute
    working documents as Internet-Drafts.  The list of current Internet-
    Drafts is at http://datatracker.ietf.org/drafts/current/.

    Internet-Drafts are draft documents valid for a maximum of six
    months and may be updated, replaced, or obsoleted by other documents
    at any time.  It is inappropriate to use Internet-Drafts as
    reference material or to cite them other than as "work in progress."

-   This Internet-Draft will expire on September 13, 2012.
+   This Internet-Draft will expire on April 25, 2013.

 Copyright Notice

    Copyright (c) 2012 IETF Trust and the persons identified as the
    document authors.  All rights reserved.

    This document is subject to BCP 78 and the IETF Trust's Legal
    Provisions Relating to IETF Documents
    (http://trustee.ietf.org/license-info) in effect on the date of
    publication of this document.  Please review these documents

@@ -96,49 +96,49 @@
       5.1.1.   Overview . . . . . . . . . . . . . . . . . . . . . .  17
       5.1.2.   General Network Topology . . . . . . . . . . . . . .  17
       5.1.3.   Hierarchy  . . . . . . . . . . . . . . . . . . . . .  18
       5.1.4.   Intra-Area Routing . . . . . . . . . . . . . . . . .  19
         5.1.4.1.  Core . . . . . . . . . . . . . . . . . . . . . .  19
         5.1.4.2.  Aggregation  . . . . . . . . . . . . . . . . . .  19
       5.1.5.   Access . . . . . . . . . . . . . . . . . . . . . . .  19
         5.1.5.1.  LDP Downstream-on-Demand (DoD) . . . . . . . . .  20
       5.1.6.   Inter-Area Routing . . . . . . . . . . . . . . . . .  21
       5.1.7.   Labeled iBGP next-hop handling . . . . . . . . . . .  22
-      5.1.8.   Network Availability and Simplicity  . . . . . . . .  23
+      5.1.8.   Network Availability . . . . . . . . . . . . . . . .  23
         5.1.8.1.  IGP Convergence  . . . . . . . . . . . . . . . .  23
         5.1.8.2.  Per-Prefix LFA FRR . . . . . . . . . . . . . . .  24
         5.1.8.3.  Hierarchical Dataplane and BGP Prefix
                   Independent Convergence  . . . . . . . . . . . .  24
-        5.1.8.4.  Local Protection using Anycast BGP . . . . . . .  25
-        5.1.8.5.  Assessing loss of connectivity upon any failure .  30
-        5.1.8.6.  Network Resiliency and Simplicity  . . . . . . .  35
-        5.1.8.7.  Conclusion . . . . . . . . . . . . . . . . . . .  36
-      5.1.9.   Next-Hop Redundancy  . . . . . . . . . . . . . . . .  36
-    5.2.  Scalability Analysis . . . . . . . . . . . . . . . . . .  37
+        5.1.8.4.  BGP Egress Node FRR  . . . . . . . . . . . . . .  25
+        5.1.8.5.  Assessing loss of connectivity upon any failure .  25
+        5.1.8.6.  Network Resiliency and Simplicity  . . . . . . .  29
+        5.1.8.7.  Conclusion . . . . . . . . . . . . . . . . . . .  30
+      5.1.9.   BGP Next-Hop Redundancy  . . . . . . . . . . . . . .  30
+    5.2.  Scalability Analysis . . . . . . . . . . . . . . . . . .  31
       5.2.1.  Control and Data Plane State for Deployment
-             Scenario #1  . . . . . . . . . . . . . . . . . . . .  37
-        5.2.1.1.  Introduction . . . . . . . . . . . . . . . . . .  37
-        5.2.1.2.  Core Domain  . . . . . . . . . . . . . . . . . .  38
-        5.2.1.3.  Aggregation Domain . . . . . . . . . . . . . . .  39
-        5.2.1.4.  Summary  . . . . . . . . . . . . . . . . . . . .  40
-        5.2.1.5.  Numerical application for use case #1  . . . . .  41
-        5.2.1.6.  Numerical application for use case #2  . . . . .  41
-  6.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . .  42
-  7.  IANA Considerations  . . . . . . . . . . . . . . . . . . . .  42
-  8.  Security Considerations  . . . . . . . . . . . . . . . . . .  42
-    8.1.  Access Network Security  . . . . . . . . . . . . . . . .  43
-    8.2.  Data Plane Security  . . . . . . . . . . . . . . . . . .  43
-    8.3.  Control Plane Security . . . . . . . . . . . . . . . . .  44
-  9.  References . . . . . . . . . . . . . . . . . . . . . . . . .  44
-    9.1.  Normative References . . . . . . . . . . . . . . . . . .  44
-    9.2.  Informative References . . . . . . . . . . . . . . . . .  45
-  Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . .  47
+             Scenario #1  . . . . . . . . . . . . . . . . . . . .  31
+        5.2.1.1.  Introduction . . . . . . . . . . . . . . . . . .  31
+        5.2.1.2.  Core Domain  . . . . . . . . . . . . . . . . . .  32
+        5.2.1.3.  Aggregation Domain . . . . . . . . . . . . . . .  33
+        5.2.1.4.  Summary  . . . . . . . . . . . . . . . . . . . .  34
+        5.2.1.5.  Numerical application for use case #1  . . . . .  35
+        5.2.1.6.  Numerical application for use case #2  . . . . .  35
+  6.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . .  36
+  7.  IANA Considerations  . . . . . . . . . . . . . . . . . . . .  36
+  8.  Security Considerations  . . . . . . . . . . . . . . . . . .  37
+    8.1.  Access Network Security  . . . . . . . . . . . . . . . .  37
+    8.2.  Data Plane Security  . . . . . . . . . . . . . . . . . .  37
+    8.3.  Control Plane Security . . . . . . . . . . . . . . . . .  38
+  9.  References . . . . . . . . . . . . . . . . . . . . . . . . .  39
+    9.1.  Normative References . . . . . . . . . . . . . . . . . .  39
+    9.2.  Informative References . . . . . . . . . . . . . . . . .  39
+  Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . .  41

 1.  Introduction

    MPLS, as a mature and well-known technology, is widely deployed in
    today's core and aggregation/metro area networks.  Many metro area
    networks are already based on MPLS, delivering Ethernet services to
    residential and business customers.  Until now, those deployments
    have usually been done in different domains; e.g. core and metro
    area networks are handled as separate MPLS domains.

@@ -1017,26 +1017,26 @@
    the overall seamless MPLS architecture since it creates the required
    hierarchy and enables the hiding of all aggregation and access
    addresses behind the ABRs from an IGP point of view.  Leaking of
    aggregation ISIS L1 loopback addresses into ISIS L2 is not necessary
    and MUST NOT be allowed.

    The resulting hierarchical inter-domain MPLS routing structure is
    similar to the one described in [RFC4364] section 10c, only that we
    use one AS with route reflection instead of using multiple ASes.
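
   This hierarchy can be made concrete with a short Python sketch.  It
   is a toy model under invented assumptions (node names, label values
   and the two-level resolution below are illustrative, not taken from
   the draft): a labelled iBGP route to a remote loopback recurses on
   an intra-area IGP/LDP route to the local ABR, so the IGP only has to
   carry region-local state.

      # Illustrative only: two-level route resolution at a Seamless
      # MPLS node (hypothetical names and labels).
      from dataclasses import dataclass

      @dataclass
      class LdpRoute:              # intra-area transport (IGP + LDP)
          out_label: int
          out_interface: str

      @dataclass
      class BgpLuRoute:            # inter-area transport (labelled iBGP)
          lu_label: int            # label advertised with the BGP route
          next_hop: str            # the local ABR (next-hop-self)

      def label_stack(dest, bgp_lu, ldp):
          """Resolve a remote loopback: BGP-LU recursing on IGP/LDP."""
          b = bgp_lu[dest]         # inter-area route, hidden from the IGP
          l = ldp[b.next_hop]      # intra-area route to the local ABR
          return [l.out_label, b.lu_label]   # outer transport, inner LU

      ldp = {"ABR1": LdpRoute(out_label=17, out_interface="ge-0/0/0")}
      bgp_lu = {"AN2": BgpLuRoute(lu_label=24005, next_hop="ABR1")}
      print(label_stack("AN2", bgp_lu, ldp))     # -> [17, 24005]
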
-5.1.8.  Network Availability and Simplicity
+5.1.8.  Network Availability

-   The seamless mpls architecture illustrated in deployment case study 1
-   guarantees a sub-second loss of connectivity upon any link or node
-   failures.  Furthermore, in the vast majority of cases, the loss of
-   connectivity is limited to sub-50msec.
+   The seamless mpls architecture guarantees a sub-second loss of
+   connectivity upon any link or node failure.  Furthermore, in the
+   vast majority of cases, the loss of connectivity is limited to
+   sub-50msec.

    These network availability properties are provided without any
    degradation of scale or simplicity.  This is a key achievement of
    the design.

    In the remainder of this section, we first introduce the different
    network availability technologies and then review their
    applicability for each possible failure scenario.

 5.1.8.1.  IGP Convergence

@@ -1087,299 +1087,83 @@
    Per-Prefix LFA FRR is generally assessed as a simple technology for
    the operator [I-D.filsfils-rtgwg-lfa-applicability].  It certainly is
    in the context of deployment case study 1 as the designer enforced
    triangle and full-mesh topologies in the aggregation network as well
    as a dual-plane core network.

 5.1.8.3.  Hierarchical Dataplane and BGP Prefix Independent Convergence

    In a hierarchical dataplane, the FIB used by the packet processing
-   engine reflects the recursions between routes.  For example, a BGP
+   engine reflects recursions between the routes.  For example, a BGP
    route B recursing on IGP route I whose best path is via interface O
-   is encoded as a FIB entry B pointing to a FIB entry I pointing to a
-   FIB entry 0.
-
-   Hierarchical FIB [BGPPIC] extends the hierarchical dataplane with the
-   concept of a BGP Path-List.  A BGP path-list may be abstracted as a
-   set of primary multipath nhops and a backup nhop.  When the primary
-   set is empty, packets destined to the BGP destinations are rerouted
-   via the backup nhop.
-
-   With hierarchical FIB and hierarchical dataplane, a FIB entry
-   representing a BGP route points to a FIB entry representing a BGP
-   Path-List.  This entry may either point again to another BGP Path
-   list entry (BGP over BGP recursion) or more likely points to a FIB
-   entry representing an IGP route.
-
-   A BGP Path-list may be computed automatically by the router and does
-   not require any operator involvement.  Specifically, the automated
-   computation adapts to any routing policy (this is key to understand
-   the simplicity of hierarchical FIB and the ability to enable it as a
-   default router behavior).  There is no constraint at all on the
-   operator design.  Any policy is supported (multipath, primary/backup
-   between neighboring domains or via alternate domains).
-
-   The BGP backup nhop is computed in advance of any failure (ie. a
-   second bestpath computation after excluding the primary nhops).
-
-   Hierarchical dataplane and hierarchical FIB provide two important
-   routing availability properties.
-
-   First, upon IGP convergence, recursive BGP routes immediately benefit
-   from the updated IGP paths thanks to the dataplane indirection.  This
-   is key as most of the traffic is destined to BGP routes, not to IGP
-   routes.
-
-   Second, upon loss of the primary BGP nhop, the dataplane can
-   immediately reroute the packets towards the pre-computed backup nhop.
-   This redirection is said to be prefix independent as the only entries
-   that need to be modified are the BGP path-lists.  These entries are
-   shared across all the BGP prefixes with the same primary and backup
-   next-hops.  This scale independence is key.  In the context of
-   deployment model 1, while there might be 100k BGP routes, we only
-   expect on the order of 200 BGP path-lists.  Assuming 10usec in-place
-   modification per BGP path-list, we see that the router can enable the
-   backup path for 100k BGP destinations in less than 2msec (less than
-   200 * 10usec).
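
   The prefix-independence property lends itself to a short sketch.
   The following Python toy model is illustrative only (the 100k/200
   figures come from the paragraph above; the data structures are
   invented and no router implements exactly this): switching to the
   backup nhop touches the shared path-lists, never the 100k per-prefix
   entries.

      # Illustrative toy model of a hierarchical FIB with shared
      # BGP path-lists (hypothetical names and numbers).
      from dataclasses import dataclass, field

      @dataclass
      class PathList:
          primary: set             # primary multipath nhops
          backup: str              # precomputed backup nhop

          def active_nhops(self):
              return self.primary if self.primary else {self.backup}

      @dataclass
      class Fib:
          path_lists: dict = field(default_factory=dict)
          prefixes: dict = field(default_factory=dict)  # prefix -> key

          def add(self, prefix, primary, backup):
              key = (frozenset(primary), backup)
              self.path_lists.setdefault(key, PathList(set(primary), backup))
              self.prefixes[prefix] = key   # many prefixes, few path-lists

          def nhop_down(self, nhop):
              # Prefix-independent repair: iterate the (few) shared
              # path-lists only, never the per-prefix entries.
              for pl in self.path_lists.values():
                  pl.primary.discard(nhop)

      fib = Fib()
      for i in range(100_000):     # 100k BGP routes, 1 shared path-list
          fib.add("prefix-%d" % i, {"ABR1"}, "ABR2")
      fib.nhop_down("ABR1")        # one in-place update repairs them all
      print(fib.path_lists[fib.prefixes["prefix-0"]].active_nhops())
      # -> {'ABR2'}
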
-
-   The detection of the loss of the primary BGP nhop (and hence the need
-   to enable the pre-computed backup BGP nhop) can be local (a local
-   link failing between an edge device and a single-hop eBGP peer) or
-   involves an IGP convergence (a remote border router goes down).
-
-   These hierarchical FIB properties benefit to any BGP routes:
-   Internet, L3VPN, 3107, IPv4 or IPv6.  Future evolution of VPLS will
-   also benefit from such properties [I-D.raggarwa-mac-vpn],
-   [I-D.sajassi-l2vpn-rvpls-bgp]
-
-   Hierarchical forwarding and hierarchical FIB are very simple
-   technology to operate.  Their ability to adapt to any topology, any
-   routing policy and any BGP address family allows router vendors to
-   enable this behavior by default.
-
-5.1.8.4.  Local Protection using Anycast BGP
-
-5.1.8.4.1.  Anycast BGP applied to ABR node failure
-
-   In this section we described a mechanism that provides local
-   protection for area border router (ABR) failures.  To illustrate this
-   mechanism consider an example shown in Figure 6.
-
-                                +-------+
-                                |       |
-                             vl0+ ABR 1 |
-                               /|       |
-   +----------+    +-------+  / +-------+
-   |          |    |       | /
-   | PE / LER +-..-+  PLR  |
-   |          |    |       | \
-   +----------+    +-------+  \ +-------+
-                               \|       |
-                             vl0+ ABR 2 |
-                                |       |
-                                +-------+
-
-    +-------+     +-------+     +-------+
-    | LDP-L +-----+ LDP-L +-----+ LDP-L |
-    +-------+     +-------+     +-------+
-    | BGP-L +-------------------+ BGP-L |
-    +-------+                   +-------+
-
-    --------------- traffic ---------------->
-    <----- routing + label distribution -----
-
-                   Figure 6: Routing and Traffic Flow
-
-   The core router adjacent to ABR1 and ABR2 acts as a point of local
-   repair (PLR).  When the PLR detects ABR1 failure, the PLR re-routes
-   to ABR2 the traffic that the PLR used to forward to ABR1, with ABR2
-   providing the subsequent forwarding for this traffic.  To accomplish
-   this ABR1, ABR2, and the PLR employ the following procedures.
-
-   ABR1, in addition to its own loopback, is provisioned with another IP
-   address (vl0).  This IP address is used to identify the forwarding
-   state/context on ABR1 that is the subject to the local protection
-   mechanism outlined in this section.  We refer to this IP address,
-   vl0, as the "context identifier".  ABR1 advertises its context
-   identifier in ISIS and LDP.  As ABR1 re-advertises to its core peers
-   the BGP routes it receives from its peers in the aggregation
-   domain(s), ABR1 sets the BGP Next Hop on these routes to its context
-   identifier (this creates an association between the forwarding
-   state/context created by these routes and the context identifier).
-
-   ABR2, acting as a protector for ABR1, is configured with the ABR1's
-   context identifier.  ABR2 advertises this context identifier into LDP
-   and ISIS.  The LDP advertisement is done with no PHP and a non-null
-   label, and the ISIS advertisement is done with a very high metric.
-   As a result, the PLR would have an LFA route/LSP to this context
-   identifier with ABR2 as the next hop.  When the PLR detects ABR1's
-   failure, the LFA procedures on the PLR would result in sending to
-   ABR2 the traffic that the PLR used to forward to ABR1.  Moreover,
-   since ABR2 advertises into LDP a non-null label for the ABR1's
-   context identifier, this label would enable ABR2 to identify such
-   traffic (as we'll see further down the ability to identify such
-   traffic is essential in order for ABR2 to correctly forward this
-   traffic).
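
   The PLR behaviour just described can be sketched in a few lines of
   Python.  Metric and label values are invented for illustration; the
   deliberately high metric on the protector's advertisement is the
   mechanism described in the paragraph above.

      # Illustrative: PLR route selection for the anycast FEC vl0.
      ISIS_METRIC = {"ABR1": 10,       # protected ABR, normal metric
                     "ABR2": 10000}    # protector, very high metric
      LDP_LABEL = {"ABR1": 17,
                   "ABR2": 42}         # non-null label, no PHP

      def plr_route(failed=frozenset()):
          """Best live advertiser of vl0; ABR2 wins only if ABR1 fails."""
          live = {n: m for n, m in ISIS_METRIC.items() if n not in failed}
          best = min(live, key=live.get)
          return best, LDP_LABEL[best]

      print(plr_route())                  # -> ('ABR1', 17)  primary path
      print(plr_route(failed={"ABR1"}))   # -> ('ABR2', 42)  LFA backup
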
-
-        +-----------------+-----------+-----------+
-        | FEC 10.0.1.1/32 | Label 200 | NH AGN2-1 |
-        +-----------------+-----------+-----------+
-        | FEC 10.0.1.2/32 | Label 233 | NH AGN2-1 |  ABR1
-        +-----------------+-----------+-----------+
-        | FEC 10.0.1.3/32 | Label 313 | NH AGN2-1 |
-        +-----------------+-----------+-----------+
-
-            +------+    +-------+
-            |      |    |       |    +------------------+
-         vl0+ ABR1 +----+ AGN21 +----+ AGN11:10.0.1.1/32|
-           /|      |    |       |\  /+------------------+
-          / +------+\  /+-------+ \/
-   +----+  +-----+/   \/    \   /\   +------------------+
-   | PE +---+ PLR |   /\     X  X+ AGN12:10.0.1.2/32|
-   +----+  +-----+\   / \   /   \/   +------------------+
-          \ +------+/   \  +-------+ /\
-           \|      |    | \|       |/  \+------------------+
-         vl0+ ABR2 +----+ AGN22 +----+ AGN13:10.0.1.3/32|
-            |      |    |       |    +------------------+
-            +------+    +-------+
-
-        +----------------------------------------+
-        |        native forwarding context       |
-        +-----------------+-----------+----------+
-        | FEC 10.0.1.1/32 | Label 100 | NH AGN21 |
-        +-----------------+-----------+----------+
-        | FEC 10.0.1.2/32 | Label 107 | NH AGN21 |  ABR2
-        +-----------------+-----------+----------+
-        | FEC 10.0.1.3/32 | Label 152 | NH AGN21 |
-        +-----------------+-----------+----------+
-                 |             |            |
-                 V             V            V
-        +----------------------------------------+
-        |        backup forwarding context       |
-        +-----------------+-----------+----------+
-        | FEC 10.0.1.1/32 | Label 200 | NH AGN21 |
-        +-----------------+-----------+----------+
-        | FEC 10.0.1.2/32 | Label 233 | NH AGN21 |  ABR2
-        +-----------------+-----------+----------+
-        | FEC 10.0.1.3/32 | Label 313 | NH AGN21 |
-        +-----------------+-----------+----------+
-            (ABR2 acting as backup for ABR1)
-
-                  Figure 7: ABR Failure Scenarios
-
-   ABR2, acting as a protector for the forwarding context of ABR1, has
-   to have the <FEC, label> mapping for the FECs present in that
-   forwarding context, and should use this mapping to create the
-   forwarding state it would use when forwarding the traffic received
-   from the PLR.  Figure 7 shows the <FEC, label> mapping on ABR1 and
-   ABR2.  Note that the backup forwarding context on ABR2 is a mirror
-   image of the forwarding context on ABR1.  This backup forwarding
-   context is populated using the routes that have been re-advertised by
-   ABR1 to its core peers (as ABR2 is a BGP core peer of ABR1).  The
-   label that ABR2 advertises into LDP for ABR1's context identifier
-   points to the backup context.  This way, ABR2 forwards all the
-   traffic received with this label using not its native forwarding
-   context, but the backup forwarding context.
-
-   Note that whether the PLR could rely on the basic LFA to re-route to
-   ABR2 the traffic that the PLR used to forward to ABR1 depends on the
-   LFA coverage.  Since the basic LFA does not guarantee 100% coverage
-   in all topologies, relying on basic LFA may not be sufficient, in
-   which case the basic LFA would need to be augmented to provide 100%
-   coverage.
-
-   The procedures outlined above provide local protection upon ABR node
-   failure.  By virtue of being local protection, the actions required
-   to restore connectivity upon the failure detection are fully
-   localized to the router closest to the failure - the router directly
-   connected to the failed ABR.  This enables to deliver under 50msec
-   connectivity recovery time in the presence of ABR failure.  These
-   actions do not depend on propagating failure information in ISIS,
-   thus providing connectivity recovery time that is independent of the
-   ISIS routing convergence time.  In contrast, a combination of
-   hierarchical FIB organization and ISIS routing convergence, being a
-   global protection mechanism, does rely on the ISIS routing
-   convergence time, as the prefix-independent switch-over on the pre-
-   computed backup next hop occurs upon IGP convergence (deletion of the
-   IGP route to the remote ABR), and thus would have several 100s msec
-   connectivity recovery time.
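
   The protector-side behaviour of Figure 7 can also be sketched as a
   toy model.  Only the table selection is shown; the label values are
   the ones from Figure 7, everything else (function names, the label
   stacks) is invented for illustration.

      # Illustrative: ABR2 lookup with a native and a backup context.
      NATIVE_CTX = {100: ("10.0.1.1/32", "AGN21"),   # ABR2's own labels
                    107: ("10.0.1.2/32", "AGN21"),
                    152: ("10.0.1.3/32", "AGN21")}
      BACKUP_CTX = {200: ("10.0.1.1/32", "AGN21"),   # mirror image of
                    233: ("10.0.1.2/32", "AGN21"),   # ABR1's bindings
                    313: ("10.0.1.3/32", "AGN21")}
      CONTEXT_LABEL = 42   # advertised by ABR2 for vl0 (non-null, no PHP)

      def abr2_lookup(stack):
          """Pop the top label; the context label switches the table."""
          top = stack.pop(0)
          if top == CONTEXT_LABEL:       # repaired traffic from the PLR:
              return BACKUP_CTX[stack.pop(0)], stack  # ABR1's label space
          return NATIVE_CTX[top], stack               # ABR2's label space

      print(abr2_lookup([42, 233, 999]))  # (('10.0.1.2/32', 'AGN21'), [999])
      print(abr2_lookup([107, 999]))      # (('10.0.1.2/32', 'AGN21'), [999])
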
-5.1.8.4.2.  Extensions to support ABR's connected to different
-            aggregation regions
+   is encoded as a hierarchy of FIB entry B pointing to a FIB entry I
+   pointing to a FIB entry O.

-   Note that for the purpose of identifying the forwarding context
-   ABR1's forwarding state could be partitioned, with each partition
-   being assigned its own IP address (its own context identifier).  ABR1
-   would advertise all these identifiers into ISIS and LDP.  This may be
-   useful in the scenario where ABR1 is connected to more than one
-   aggregation domain (more than one L1 area), in which case each
-   context identifier would identify the ABR1's forwarding state
-   associated with a single aggregation domain.
+   BGP Prefix Independent Convergence [BGP-PIC] extends the hierarchical
+   dataplane with the concept of a BGP Path-List.  A BGP path-list may
+   be abstracted as a set of primary multipath nhops and a backup nhop.
+   When the primary set is empty, packets destined to the BGP
+   destinations are rerouted via the backup nhop.

-   One could further refine the above scheme by implementing protector
-   functionality that would allow a single protector to protect multiple
-   forwarding contexts, with each forwarding context being associated
-   with all the forwarding state maintained by a given (protected) ABR.
-   Such functionality could be implemented either on a separate router,
-   or could be co-located with an existing ABR.  Details of this are
-   outside the scope of this document.
+   For a complete description of the BGP-PIC technology and its
+   applicability, please refer to [BGP-PIC].

-5.1.8.4.3.  Anycast BGP applied to a L3VPN PE
+   Hierarchical data plane and BGP-PIC are very simple technologies to
+   operate.  Their applicability to any topology, any routing policy and
+   any BGP unicast address family allows router vendors to enable this
+   behavior by default.

-   BGP Anycast is also used to protect against L3VPN PE failures.  In
-   general a given VPN site can be multi-homed (connected to several
-   L3VPN PEs).  Moreover, multi-homed sites may be non-congruent with
-   each other - different multi-homed sites connected to a given PE may
-   have their other connection(s) to different other PEs.  BGP Anycast
-   scheme, utilizing the construct of Protector PE, provides forwarding
-   context protection for multiple egress PEs in the presence of non-
-   congruent multi-homed sites.
+5.1.8.4.  BGP Egress Node FRR

-   Protector PE function is enhanced from the basic BGP Anycast 1:1
-   mirroring procedures described for ABR protection, by supporting
-   multiple backup forwarding contexts, one per protected egress PE.
-   Each backup forwarding context on the Protector PE is identified by
-   the context identifier of the associated protected egress PE.
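
   The same idea extends from 1:1 to 1:N protection: a protector PE
   keeps one backup forwarding context per protected egress PE and
   selects it by the context identifier of the LSP on which repaired
   traffic arrives.  A hypothetical sketch (all identifiers and labels
   invented; the overlapping label value shows why per-PE contexts are
   needed):

      # Illustrative: one protector, one backup context per egress PE.
      BACKUP_CONTEXTS = {
          "PE1-ctx": {3001: "PE4"},   # mirrors PE1's VPN label space
          "PE2-ctx": {3001: "PE5"},   # same value, different meaning
      }

      def protector_lookup(ctx_id, vpn_label):
          """Resolve a protected PE's VPN label in that PE's context."""
          return BACKUP_CONTEXTS[ctx_id][vpn_label]

      print(protector_lookup("PE1-ctx", 3001))   # -> 'PE4'
      print(protector_lookup("PE2-ctx", 3001))   # -> 'PE5'
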
+   BGP egress node FRR is a Fast ReRoute solution and hence relies on
+   local protection and on the precomputation and preinstallation of
+   the backup path in the FIB.  BGP egress node FRR relies on a transit
+   LSR (Point of Local Repair, PLR) adjacent to the failed protected
+   BGP router to detect the failure and re-route the traffic to the
+   backup BGP router.  A number of BGP egress node FRR schemes are
+   being investigated: [PE-FRR], [ABR-FRR],
+   [I-D.draft-minto-2547-egress-node-fast-protection-00].

-   Protector PE advertises these context identifiers into IGP with a
-   large metric and into LDP with no PHP and a non-null label.  This
-   results in PLR of each egress PE having an LFA route/LSP (or bypass
-   LSP if no native LFA coverage for specific topology) to the
-   associated context identifier with Protector PE as the next hop.
-   Protector PE creates a backup forwarding context per protected egress
-   PE based on BGP advertisements from this egress PE and other egress
-   PEs with the same multi-homed customer networks.
+   Differences between these schemes relate to the way backup and
+   protected BGP routers get associated, how the protected router's BGP
+   state is signalled to the backup BGP router(s), and whether any other
+   state is required on the protected, backup and PLR routers.  The
+   schemes also differ in their compatibility with the IPFRR and TE FRR
+   schemes that enable the PLR to switch traffic towards the backup BGP
+   router in case of protected BGP router failure.

-   Similarly to the ABR case described earlier, in case of specific
-   protected egress PE failure, PLR will follow standard LFA procedure
-   (or local protection to bypass LSP) and forward affected flows to
-   Protector PE.  Those flows will arrive to Protector PE on the LSP
-   associated with the context identifier for the failed egress PE, the
-   backup forwarding context will be identified by this LSP, and flows
-   will be switched to alternative egress PE(s).
+   In the Seamless MPLS design, BGP egress node FRR schemes can protect
+   against the failures of PE, AGN and ABR nodes with no requirements on
+   ingress routers.

 5.1.8.5.  Assessing loss of connectivity upon any failure

    We select two typical traffic flows and analyze the loss of
-   connectivity (LoC) upon each possible failure.
+   connectivity (LoC) upon each possible failure in the Seamless MPLS
+   design in deployment scenario #1.

-   Flow F1 starts from an AN1 in a left aggregation region and ends
+   o  Flow F1 starts from an AN1 in a left aggregation region and ends
       on an AN2 in a right aggregation region.  Each AN is dual-homed
       to two AGNs.

-   Flow F2 starts from an L3VPN PE1 in the core and ends at an L3VPN
-   PE2 in the core.
+   o  Flow F2 starts from a CE1 homed on L3VPN PE1 connected to the
+      core LSRs and ends at CE2 dual-homed to L3VPN PE2 and PE3, both
+      connected to the core LSRs.

    Note that due to the symmetric network topology in case study 1,
    uni-directional flows F1' and F2', associated with F1 and F2 and
    forwarded in the reverse direction (AN2 to AN1 right-to-left and
    PE2 to PE1, respectively), take advantage of the same failure
-   restoration mechanisms as F1 and F2. .
+   restoration mechanisms as F1 and F2.

 5.1.8.5.1.  AN1-AGN link failure or AGN node failure

    F1 is impacted but LoC <50msec is possible assuming fast BFD
    detection and fast-switchover implementation on the AN.

    F2 is not impacted.

 5.1.8.5.2.  Link or node failure within the left aggregation region

    F1 is impacted but LoC <50msec thanks to LFA FRR.  No uloop will

@@ -1401,122 +1185,109 @@
    flow F1.

    Note: remember that the left region receives the routes to all the
    remote ABRs and that the labelled BGP routes are reflected from the
    core to the left region with next-hop unchanged.  This ensures that
    the loss of the (local) ABR between the left region and the core is
    seen as an IGP route impact and hence can be addressed by LFA.

    Note: if LFA is not available (other topology than case study one)
    or if LFA is not enabled, then the LoC would be < second as the
    number
-   of impacted important IGP route in a seamless architecture is much
-   smaller than 2960.
+   of impacted important IGP routes in a seamless architecture is much
+   smaller than 2960 routes.

    F2 is not impacted.

 5.1.8.5.4.  Link or node failure within the core region

    F1 and F2 are impacted but LoC <50msec thanks to LFA FRR.  This is
    specific to the particular core topology used in deployment case
    study 1.  The core topology has been optimized
    [I-D.filsfils-rtgwg-lfa-applicability] for LFA applicability.

    As explained in [I-D.filsfils-rtgwg-lfa-applicability], another
    alternative to provide <50msec in this case consists in using an
    MPLS-TE full-mesh and MPLS-TE FRR.  This is required when the
    designer is not able or does not want to optimize the topology for
    LFA applicability and still wants to achieve <50msec protection.

    Alternatively, simple IGP convergence would ensure a LoC < second as
-   the number of impacted important IGP route in a seamless architecture
-   is much smaller than 2960.
+   the number of impacted important IGP routes in a seamless
+   architecture is much smaller than 2960 routes.

 5.1.8.5.5.  PE2 failure

    F1 is not impacted.

    F2 is impacted and the LoC is sub-300msec thanks to IGP convergence
-   and hierarchical FIB.
+   and BGP PIC.

    The detection of the primary nhop failure (PE2 down) is performed
    by a single-area IGP convergence.  In this specific case, the
    convergence should be much faster than 90% of the IGP/BGP3107
    footprint at least).

    If the guidelines cannot be met, then either the designer will rely
-   on (1) augmenting native LFA coverage with RSVP, or (2) a full-mesh
-   TE FRR model, or (3) IGP convergence.  The first option provides the
-   same sub-50msec protection as LFA, but introduces additional RSVP
-   LSPs.  The second option optimizes for sub-50msec protection, but
-   implies a more complex operational model.  The third option optimizes
-   for simple operation but only provides