Network Working Group                                   A. Bashandy, Ed.
Internet Draft                                               C. Filsfils
Intended status: Informational                             Cisco Systems
Expires: May 2017                                            P. Mohapatra
                                                         Sproute Networks
                                                        November 22, 2016

                   BGP Prefix Independent Convergence
                     draft-ietf-rtgwg-bgp-pic-03.txt
Abstract

In a network comprising thousands of iBGP peers exchanging millions
of routes, many routes are reachable via more than one next-hop.
Given the large scaling targets, it is desirable to restore traffic
after failure in a time period that does not depend on the number of
BGP prefixes. In this document we propose an architecture by which
traffic can be re-routed to ECMP or pre-calculated backup paths in a
timeframe that does not depend on the number of BGP prefixes. The
skipping to change at page 2, line 17
documents at any time. It is inappropriate to use Internet-Drafts
as reference material or to cite them other than as "work in
progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html
This Internet-Draft will expire on May 22, 2017.
Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents

skipping to change at page 2, line 39

respect to this document. Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.
Table of Contents

   1. Introduction...................................................3
      1.1. Conventions used in this document.........................4
      1.2. Terminology...............................................4
   2. Overview.......................................................6
      2.1. Dependency................................................6
         2.1.1. Hierarchical Hardware FIB............................6
         2.1.2. Availability of more than one primary or secondary
         BGP next-hops...............................................7
      2.2. BGP-PIC Illustration......................................7
   3. Constructing the Shared Hierarchical Forwarding Chain..........9
      3.1. Constructing the BGP-PIC forwarding Chain.................9
      3.2. Example: Primary-Backup Path Scenario....................10
   4. Forwarding Behavior...........................................11
   5. Handling Platforms with Limited Levels of Hierarchy...........12
      5.1. Flattening the Forwarding Chain..........................12
      5.2. Example: Flattening a forwarding chain...................14
   6. Forwarding Chain Adjustment at a Failure......................21
      6.1. BGP-PIC core.............................................22
      6.2. BGP-PIC edge.............................................23
         6.2.1. Adjusting forwarding Chain in egress node failure...23
         6.2.2. Adjusting Forwarding Chain on PE-CE link Failure....23
      6.3. Handling Failures for Flattened Forwarding Chains........24
   7. Properties....................................................25
      7.1. Coverage.................................................25
         7.1.1. A remote failure on the path to a BGP next-hop......25
         7.1.2. A local failure on the path to a BGP next-hop.......25
         7.1.3. A remote iBGP next-hop fails........................26
         7.1.4. A local eBGP next-hop fails.........................26
      7.2. Performance..............................................26
      7.3. Automated................................................27
      7.4. Incremental Deployment...................................27
   8. Security Considerations.......................................27
   9. IANA Considerations...........................................27
   10. Conclusions..................................................27
   11. References...................................................28
      11.1. Normative References....................................28
      11.2. Informative References..................................28
   12. Acknowledgments..............................................29
   Appendix A. Perspective..........................................30
1. Introduction
As a path vector protocol, BGP propagates reachability serially.
Hence BGP convergence speed is limited by the time taken to
serially propagate reachability information from the point of
failure to the device that must re-converge. BGP speakers exchange
reachability information about prefixes [2][3] and, for labeled
address families, namely AFI/SAFI 1/4, 2/4, 1/128, and 2/128, an
edge router assigns local labels to prefixes and associates the
local label with each advertised prefix such as L3VPN [8], 6PE
[9], and Softwire [7] using the BGP labeled unicast technique [4].
A BGP speaker then applies the path selection steps to choose the
best path. In modern networks, it is not uncommon to have a prefix
reachable via multiple edge routers. In addition to proprietary
techniques, multiple techniques have been proposed to allow BGP to
advertise more than one path for a given prefix [6][11][12],
whether in the form of equal cost multipath or primary-backup.
Another common and widely deployed scenario is L3VPN with
multi-homed VPN sites with unique Route Distinguishers. It is
advantageous to utilize the commonality among paths used by NLRIs
to significantly improve convergence in case of topology
modifications.
This document proposes a hierarchical and shared forwarding chain
organization that allows traffic to be restored to a pre-calculated
alternative equal-cost primary path or backup path in a time
period that does not depend on the number of BGP prefixes. The
technique relies on internal router behavior that is completely
transparent to the operator and can be incrementally deployed and
enabled with zero operator intervention.
1.1. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
in this document are to be interpreted as described in RFC-2119
[1].

In this document, these words will appear with that interpretation
only when in ALL CAPS. Lower case uses of these words are not to
be interpreted as carrying RFC-2119 significance.
1.2. Terminology

This section defines the terms used in this document. For ease of
use, we will use terms similar to those used by L3VPN [8].
o BGP prefix: A prefix P/m (of any AFI/SAFI) that a BGP speaker
  has a path for.

o IGP prefix: A prefix P/m (of any AFI/SAFI) for which the path is
  learnt via an Interior Gateway Protocol, such as OSPF or IS-IS.
  The prefix may be learnt directly through the IGP or
  redistributed from other protocol(s)

o CE: An external router through which an egress PE can reach a
  prefix P/m.

o Ingress PE, "iPE": A BGP speaker that learns about a prefix
  through an iBGP peer and chooses an egress PE as the next-hop
  for the prefix.

o Path: The next-hop in a sequence of nodes starting from the
  current node and ending with the destination node or network
  identified by the prefix. The nodes may not be directly
  connected.

o Recursive path: A path consisting only of the IP address of the
  next-hop without the outgoing interface. Subsequent lookups are
  necessary to determine the outgoing interface and a directly
  connected next-hop

o Non-recursive path: A path consisting of the IP address of a
  directly connected next-hop and the outgoing interface

o Primary path: A recursive or non-recursive path that can be
  used all the time as long as a walk starting from this path can
  end at an adjacency. A prefix can have more than one primary
  path

o Backup path: A recursive or non-recursive path that can be used
  only after some or all primary paths become unreachable

o Leaf: A container data structure for a prefix or local label.
  Alternatively, it is the data structure that contains prefix
  specific information.

o IP leaf: The leaf corresponding to an IPv4 or IPv6 prefix

o Label leaf: The leaf corresponding to a locally allocated label
  such as the VPN label on an egress PE [8].

o Pathlist: An array of paths used by one or more prefixes to
  forward traffic to destination(s) covered by an IP prefix. Each
  path in the pathlist carries a "path-index" that identifies its
  position in the array of paths. In general, the value of the
  "path-index" stored in a path does not necessarily equal the
  location of the path in the pathlist. For example, the 3rd path
  may carry a path-index value of 1.

o A pathlist may contain a mix of primary and backup paths

o OutLabel-List: Each labeled prefix is associated with an
  OutLabel-List. The OutLabel-List is an array of one or more
  outgoing labels and/or label actions where each label or label
  action has a 1-to-1 correspondence to a path in the pathlist.
  Label actions are: push the label, pop the label, swap the
  incoming label with the label in the OutLabel-List entry, or
  don't push anything at all in the case of "unlabeled". The
  prefix may be an IGP or BGP prefix

o Adjacency: The layer 2 encapsulation leading to the layer 3
  directly connected next-hop

o Dependency: An object X is said to be a dependent or child of
  object Y if there is at least one forwarding chain where the
  forwarding engine must visit the object X before visiting the
  object Y in order to forward a packet. Note that if object X is
  a child of object Y, then Y cannot be deleted unless object X
  is no longer a dependent/child of object Y

o Route: A prefix with one or more paths associated with it.
  Hence the minimum set of objects needed to construct a route is
  a leaf and a pathlist.
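The following non-normative sketch shows one possible in-memory
model of the objects defined above. It is purely illustrative; the
Python class and field names are hypothetical and nothing in this
document mandates this layout.

   # Illustrative sketch only: one possible in-memory model of the
   # objects defined in this section.  All names are hypothetical.

   from dataclasses import dataclass, field
   from typing import List, Optional, Union

   @dataclass
   class Adjacency:                  # layer 2 encapsulation + interface
       l2_header: bytes
       interface: str

   @dataclass
   class Path:                       # one entry of a pathlist
       path_index: int               # index into the leaf's OutLabel-List
       parent: Union["Leaf", Adjacency]  # Leaf => recursive path,
                                         # Adjacency => non-recursive path
       is_backup: bool = False

   @dataclass
   class Pathlist:                   # may be shared by many leaves
       paths: List[Path]
       dependents: List["Leaf"] = field(default_factory=list)
       out_labels: Optional[List[Optional[str]]] = None  # only used by
                                                         # flattened pathlists (Section 5)

   @dataclass
   class Leaf:                       # IP leaf or label leaf
       key: str                      # prefix "P/m" or local label
       pathlist: Pathlist
       out_labels: Optional[List[Optional[str]]] = None  # OutLabel-List,
                                                         # addressed by path-index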
2. Overview

The idea of BGP-PIC is based on two pillars:
o A shared hierarchical forwarding chain: It is not uncommon for
  multiple destinations to be reachable via the same list of
  next-hops. Instead of having a separate list of next-hops for
  each destination, all destinations sharing the same list of
  next-hops can point to a single copy of this list, thereby
  allowing fast convergence by making changes to a single shared
  list of next-hops rather than to a possibly large number of
  destinations. Because paths in a pathlist may be recursive, a
  hierarchy is formed between the pathlist and the resolving
  prefix whereby the pathlist depends on the resolving prefix.

o A forwarding plane that supports multiple levels of indirection:
  A forwarding entry that starts with a destination and ends with
  an outgoing interface is not a simple flat structure. Instead a
  forwarding entry is constructed via multiple levels of
  dependency. A BGP NLRI uses a recursive next-hop, which in turn
  resolves via an IGP next-hop, which in turn resolves via an
  adjacency consisting of one or more outgoing interface(s) and
  next-hop(s).
Designing a forwarding plane that constructs multi-level forwarding
chains with maximal sharing of forwarding objects allows rerouting a
large number of destinations by modifying a small number of objects,
thereby achieving convergence in a time frame that does not depend
on the number of destinations. For example, if the IGP prefix that
resolves a recursive next-hop is updated, there is no need to update
the possibly large number of BGP NLRIs that use this recursive
next-hop.
2.1. Dependency

This section describes the functionality required in the forwarding
and control planes to support the BGP-PIC behavior described in this
document.

2.1.1. Hierarchical Hardware FIB

BGP PIC requires hierarchical hardware FIB support: for each BGP
forwarded packet, a BGP leaf is looked up, then a BGP pathlist is
consulted, then an IGP pathlist, then an adjacency.

An alternative method consists of "flattening" the dependencies when
programming the BGP destinations into the HW FIB, potentially
eliminating both the BGP pathlist and IGP pathlist consultation.
Such an approach decreases the number of memory lookups per
forwarding operation at the expense of HW FIB memory increase
(flattening means less sharing, hence duplication), loss of ECMP
properties (flattening means less pathlist entropy) and loss of BGP
PIC properties.
2.1.2. Availability of more than one primary or secondary BGP next-hops
When the primary BGP next-hop fails, BGP PIC depends on the
availability of a pre-computed and pre-installed secondary BGP next-
hop in the BGP Pathlist.
The existence of a secondary next-hop is clear for the following
reason: a service caring for network availability will require two
disjoint network connections hence two BGP next-hops.
The BGP distribution of the secondary next-hop is available thanks
to the following BGP mechanisms: Add-Path [11], BGP Best-External
[6], diverse path [12], and the frequent use in VPN deployments of
different VPN RDs per PE. It is noteworthy to mention that the
availability of another BGP path does not mean that all failure
scenarios can be covered by simply forwarding traffic to the
available secondary path. The discussion of how to cover various
failure scenarios is beyond the scope of this document.
2.2. BGP-PIC Illustration
To illustrate the two pillars above as well as the platform
dependency, we will use an example of a simple multihomed L3VPN [8]
prefix in a BGP-free core running LDP [5] or segment routing over
MPLS forwarding plane [14].
        +--------------------------------+
        |                                |
        |                             ePE2 (IGP-IP1 192.0.2.1, Loopback)
        |                              |  \
        |                              |   \
        |                              |    \
   iPE  |                              |     CE....VRF "Blue", ASnum 65000
        |                              |    /      (VPN-IP1 11.1.1.0/24)
        |                              |   /       (VPN-IP2 11.1.2.0/24)
        |  LDP/Segment-Routing Core    |  /
        |                             ePE1 (IGP-IP2 192.0.2.2, Loopback)
        |                                |
        +--------------------------------+
Figure 1 : VPN prefix reachable via multiple PEs
Referring to Figure 1, suppose the iPE (the ingress PE) receives
NLRIs for the VPN prefixes VPN-IP1 and VPN-IP2 from two egress PEs,
ePE1 and ePE2, with next-hops BGP-NH1 and BGP-NH2, respectively.
Assume that ePE1 advertises the VPN labels VPN-L11 and VPN-L12 while
ePE2 advertises the VPN labels VPN-L21 and VPN-L22 for VPN-IP1 and
VPN-IP2, respectively. Suppose that BGP-NH1 and BGP-NH2 are resolved
via the IGP prefixes IGP-IP1 and IGP-IP2, where each happens to have
2 ECMP paths with IGP-NH1 and IGP-NH2 reachable via the interfaces
I1 and I2, respectively. Suppose that the local labels (whether LDP
[5] or segment routing [14]) on the downstream LSRs for IGP-IP1 are
IGP-L11 and IGP-L12 while those for IGP-IP2 are IGP-L21 and IGP-L22.
As such, the routing table at iPE is as follows:
   65000:11.1.1.0/24
       via ePE1 (192.0.2.1), VPN Label: VPN-L11
       via ePE2 (192.0.2.2), VPN Label: VPN-L21
   65000:11.1.2.0/24
       via ePE1 (192.0.2.1), VPN Label: VPN-L12
       via ePE2 (192.0.2.2), VPN Label: VPN-L22
   192.0.2.1/32
       via Core, Label: IGP-L11
       via Core, Label: IGP-L12
   192.0.2.2/32
       via Core, Label: IGP-L21
       via Core, Label: IGP-L22

Based on the above routing table, a hierarchical forwarding chain
can be constructed as shown in Figure 2.
 IP Leaf:   Pathlist:
 --------   +-------+                      +----------+
 VPN-IP1--->|BGP-NH1|-->IGP-IP1(BGP NH1)-->|IGP NH1,I1|--->Adjacency1
    |       |BGP-NH2|-->....       |       |IGP NH2,I2|--->Adjacency2
    |       +-------+              |       +----------+
    |                              |
    |                              |
    v                              v
 OutLabel-List:                 OutLabel-List:
 +----------------------+       +----------------------+
 |VPN-L11 (VPN-IP1, NH1)|       |IGP-L11 (IGP-IP1, NH1)|
 |VPN-L12 (VPN-IP1, NH2)|       |IGP-L12 (IGP-IP1, NH2)|
 +----------------------+       +----------------------+
Figure 2 : Shared Hierarchical Forwarding Chain at iPE
The forwarding chain depicted in Figure 2 illustrates the first
pillar, which is sharing and hierarchy. We can see that the BGP
pathlist consisting of BGP-NH1 and BGP-NH2 is shared by all NLRIs
skipping to change at page 9, line 22
prefixes sharing the IGP pathlist nor the BGP NLRIs using the IGP
prefixes for resolution need to be modified.
Figure 2 can also be used to illustrate the second BGP-PIC pillar.
Having a deep forwarding chain such as the one illustrated in Figure
2 requires a forwarding plane that is capable of accessing multiple
levels of indirection in order to calculate the outgoing
interface(s) and next-hop(s). While a deeper forwarding chain
minimizes the re-convergence time on topology change, there will
always exist platforms with limited capabilities that impose a
limit on the depth of the forwarding chain. Section 5 describes how
to gracefully trade off convergence speed with the number of
hierarchical levels to support platforms with different
capabilities.
3. Constructing the Shared Hierarchical Forwarding Chain
Constructing the forwarding chain is an application of the two
pillars described in Section 2. This section describes how to
construct the forwarding chain in a hierarchical, shared manner.

3.1. Constructing the BGP-PIC forwarding Chain
The whole process starts when BGP downloads a prefix to FIB. The
prefix contains one or more outgoing paths. For certain labeled
prefixes, such as VPN [8] prefixes, each path may be associated with
an outgoing label and the prefix itself may be assigned a local
label. The list of outgoing paths defines a pathlist. If such a
pathlist does not already exist, then FIB creates a new pathlist,
otherwise the existing pathlist is used. The BGP prefix is added as
a dependent of the pathlist.
skipping to change at page 10, line 7
paths of the pathlist. A BGP path usually consists of a next-hop.
The next-hop is resolved by finding a matching IGP prefix.

The end result is a hierarchical shared forwarding chain where the
BGP pathlist is shared by all BGP prefixes that use the same list of
paths and the IGP prefix is shared by all pathlists that have a path
resolving via that IGP prefix. It is noteworthy to mention that the
forwarding chain is constructed without any operator intervention at
all.
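As a non-normative illustration of this construction (all function
names here, such as download_bgp_prefix() and lookup_igp_leaf(), are
hypothetical, and the data types are the ones sketched in Section
1.2), a FIB manager could share pathlists by keying them on the list
of next-hops:

   # Hypothetical FIB-manager logic: all BGP prefixes that carry the
   # same list of next-hops share one Pathlist object.

   pathlist_table = {}      # tuple of next-hops -> shared Pathlist
   fib_leaves = {}          # prefix or local label -> Leaf

   def download_bgp_prefix(prefix, next_hops, out_labels):
       key = tuple(next_hops)
       pl = pathlist_table.get(key)
       if pl is None:
           # Each BGP path resolves recursively via the IGP leaf of
           # its next-hop (lookup_igp_leaf() is an assumed helper).
           paths = [Path(path_index=i, parent=lookup_igp_leaf(nh))
                    for i, nh in enumerate(next_hops)]
           pl = Pathlist(paths=paths)
           pathlist_table[key] = pl
       leaf = Leaf(key=prefix, pathlist=pl, out_labels=out_labels)
       pl.dependents.append(leaf)  # the prefix becomes a dependent
       fib_leaves[prefix] = leaf
       return leaf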
The remainder of this section goes over an example to illustrate the
applicability of BGP-PIC in a primary-backup path scenario.
3.2. Example: Primary-Backup Path Scenario
Consider the egress PE ePE1 in the case of the multi-homed VPN
prefixes in the BGP-free core depicted in Figure 1. Suppose ePE1
determines that the primary path is the external path but the backup
path is the iBGP path to the other PE ePE2 with next-hop BGP-NH2.
ePE1 constructs the forwarding chain depicted in Figure 3. We are
only showing a single VPN prefix for simplicity, but all prefixes
that are multihomed to ePE1 and ePE2 share the BGP pathlist.
                         BGP OutLabel Array
       VPN-L11           +---------+
       (Label-leaf)---+->|Unlabeled|
                      |  +---------+
                      |  | VPN-L21 |
                      |  | (swap)  |
                      |  +---------+
                      |
                      |   BGP Pathlist
                      |   +------------+      Connected route
                      |   |   CE-NH    |----->(to the CE)
                      |   |path-index=0|
                      |   +------------+
                      |   |  VPN-NH2   |
       VPN-IP1 -------+-->| (backup)   |----->IGP Leaf
       (IP prefix leaf)|  |path-index=1|      (Towards ePE2)
                       |  +------------+
                       |
                       |  BGP OutLabel Array
                       |  +---------+
                       +->|Unlabeled|
                          +---------+
                          | VPN-L21 |
                          | (push)  |
                          +---------+
Figure 3 : VPN Prefix Forwarding Chain with eiBGP paths on egress PE
The example depicted in Figure 3 differs from the example in Figure
2 in two main aspects. First, as long as the primary path towards
the CE (external path) is usable, it will be the only path used for
forwarding, while the OutLabel-List contains both the unlabeled
entry (primary path) and the VPN label (backup path) advertised by
the backup PE ePE2. The second aspect is the presence of the label
leaf corresponding to the VPN prefix. This label leaf is used to
match VPN traffic arriving from the core. Note that the label leaf
shares the pathlist with the IP prefix.
4. Forwarding Behavior
This section explains how the forwarding plane uses the hierarchical
shared forwarding chain to forward a packet.
When a packet arrives at a router, it matches a leaf. A labeled
packet matches a label leaf while an IP packet matches an IP prefix
leaf. The forwarding engine walks the forwarding chain starting
from the leaf until the walk terminates on an adjacency. Thus when a
packet arrives, the chain is walked as follows:
1. Lookup the leaf based on the destination address or the label at
the top of the packet
2. Retrieve the parent pathlist of the leaf
3. Pick the outgoing path "Pi" from the list of resolved paths in
the pathlist. The method by which the outgoing path is picked is
beyond the scope of this document (e.g. flow-preserving hash
exploiting entropy within the MPLS stack and IP header). Let the
"path-index" of the outgoing path "Pi" be "j".
4. If the prefix is labeled, use the "path-index" "j" to retrieve
the jth label "Lj" stored in the jth entry of the OutLabel-List and
apply the label action of that label to the packet (e.g. for the
VPN label on the ingress PE, the label action is "push"). As
mentioned in Section 1.2, the value of the "path-index" stored
in a path may not necessarily be the same as the location of
the path in the pathlist.
5. Move to the parent of the chosen path "Pi"
6. If the chosen path "Pi" is recursive, move to its parent prefix
and go to step 2
7. If the chosen path is non-recursive, move to its parent adjacency
and go to the next step.
8. Encapsulate the packet in the layer 2 string specified by the
adjacency and send the packet out.
Let's apply the above forwarding steps to the forwarding chain
depicted in Figure 2 in Section 2. Suppose a packet arrives at
ingress PE iPE from an external neighbor. Assume the packet matches
the VPN prefix VPN-IP1. While walking the forwarding chain, the
forwarding engine applies a hashing algorithm to choose the path and
the hashing at the BGP level yields path 0 while the hashing at the
IGP level yields path 1. In that case, the packet will be sent out
of interface I2 with the label stack "IGP-L12,VPN-L11".
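The walk described in steps 1 through 8 can be summarized by the
following non-normative sketch. It reuses the hypothetical types
sketched in Section 1.2; hash_select(), apply_label_action() and
encapsulate_and_send() are placeholders for platform-specific
behavior and are not defined by this document.

   # Sketch of the forwarding walk; step 1 (matching the leaf) is
   # assumed to have happened before forward() is called.

   def forward(packet, leaf):
       while True:
           pathlist = leaf.pathlist                      # step 2
           path = hash_select(packet, pathlist.paths)    # step 3
           if leaf.out_labels is not None:               # step 4
               apply_label_action(packet,
                                  leaf.out_labels[path.path_index])
           parent = path.parent                          # step 5
           if isinstance(parent, Leaf):                  # step 6:
               leaf = parent                             # recursive path
               continue
           encapsulate_and_send(packet, parent)          # steps 7-8:
           return                                        # adjacency reached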
5. Handling Platforms with Limited Levels of Hierarchy
This section describes the construction of the forwarding chain if a
platform does not support the number of recursion levels required to
resolve the NLRIs. There are two main design objectives
o Being able to reduce the number of hierarchical levels from any
arbitrary value to a smaller arbitrary value that can be
supported by the forwarding engine
o Minimal modifications to the forwarding algorithm due to such
reduction.
5.1. Flattening the Forwarding Chain
Let's consider a pathlist associated with the leaf "R1" consisting
of the list of paths <P1, P2,..., Pn>. Assume that the leaf "R1" has
an Outlabel-list <L1, L2,..., Ln>. Suppose the path Pi is a
recursive path that resolves via a prefix represented by the leaf
"R2". The leaf "R2" itself is pointing to a pathlist consisting of
the paths <Q1, Q2,..., Qm>
If the platform supports the number of hierarchy levels of the
forwarding chain, then a packet that uses the path "Pi" will be
forwarded as follows:
1. The forwarding engine is now at leaf "R1"
2. So it moves to its parent pathlist, which contains the list <P1,
P2,..., Pn>.
3. The forwarding engine applies a hashing algorithm and picks the
path "Pi". So now the forwarding engine is at the path "Pi"
4. The forwarding engine retrieves the label "Li" from the outlabel-
list attached to the leaf "R1" and applies the label action
5. The path "Pi" uses the leaf "R2"
6. The forwarding engine walks forward to the leaf "R2" for
resolution
7. The forwarding plane performs a hash to pick a path among the
pathlist of the leaf "R2", which is <Q1, Q2,..., Qm>
8. Suppose the forwarding engine picks the path "Qj"
9. Now the forwarding engine continues the walk to the parent of
"Qj"
Suppose the platform cannot support the number of hierarchy levels
in the forwarding chain. FIB needs to reduce the number of hierarchy
levels. The idea of reducing the number of hierarchy levels is to
"flatten" two chain levels into a single level. The "flattening"
steps are as follows
1. FIB wants to reduce the number of levels used by "Pi" by 1
2. FIB walks to the parent of "Pi", which is the leaf "R2"
3. FIB extracts the parent pathlist of the leaf "R2", which is <Q1,
Q2,..., Qm>
4. FIB also extracts the OutLabel-list associated with the leaf
"R2". Denote this list by OutLabel-list(R2) = <M1, M2,..., Mm>
5. FIB replaces the path "Pi" with the list of paths <Q1, Q2,...,
Qm>
6. Hence the pathlist <P1, P2,..., Pn> now becomes <P1, P2,...,
Pi-1, Q1, Q2,..., Qm, Pi+1,..., Pn>
7. The path-index stored inside the locations "Q1", "Q2", ..., "Qm"
must all be "i" because the index "i" refers to the label "Li"
associated with leaf "R1"
8. FIB attaches an OutLabel-list to the new pathlist as follows:
<Unlabeled,..., Unlabeled, M1, M2,..., Mm, Unlabeled, ...,
Unlabeled>. The size of the label list associated with the
flattened pathlist equals the size of the pathlist. Hence there
is a 1-1 mapping between every path in the "flattened" pathlist
and the OutLabel-list associated with it.
It is noteworthy to mention that the labels in the outlabel-list
associated with the "flattened" pathlist may be stored in the same
memory location as the path itself to avoid additional memory
access. But that is an implementation detail that is beyond the
scope of this document.
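The flattening steps above can be expressed as a short non-normative
sketch (hypothetical names, reusing the types of Section 1.2). It
splices the paths of the resolving leaf "R2" into a copy of R1's
pathlist in place of path "Pi" and builds the OutLabel-list attached
to the flattened pathlist; the additional forwarding step introduced
below would then consult that attached list.

   # Sketch of steps 1-8: replace path "Pi" (at position i) in a
   # copy of R1's pathlist with the paths of R2.

   def flatten_one_level(flat_paths, flat_labels, i, r2):
       # flat_paths starts as a copy of <P1,...,Pn>; flat_labels
       # starts as ["Unlabeled"] * len(flat_paths).
       spliced_paths, spliced_labels = [], []
       for k, q in enumerate(r2.pathlist.paths):
           # step 7: every spliced path keeps path-index "i" so the
           # label "Li" of leaf "R1" is still chosen correctly.
           spliced_paths.append(Path(path_index=i, parent=q.parent))
           spliced_labels.append(r2.out_labels[k]
                                 if r2.out_labels else "Unlabeled")
       flat_paths[i:i+1]  = spliced_paths     # steps 5 and 6
       flat_labels[i:i+1] = spliced_labels    # step 8
       return flat_paths, flat_labels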
The same steps can be applied to all paths in the pathlist <P1,
P2,..., Pn> so that all paths are "flattened" thereby reducing the
number of hierarchical levels by one. Note that "flattening" a
pathlist pulls in all paths of the parent paths, a desired feature
to utilize all ECMP/UCMP paths at all levels. A platform that has a
limit on the number of paths in a pathlist for any given leaf may
choose to reduce the number of paths using methods that are beyond the
scope of this document.
The steps can be recursively applied to other paths at the same
levels or other levels to recursively reduce the number of
hierarchical levels to an arbitrary value so as to accommodate the
capability of the forwarding engine.
Because a flattened pathlist may have an associated OutLabel-list
the forwarding behavior has to be slightly modified. The
modification is done by adding the following step right after step 4
in Section 4.
5. If there is an OutLabel-list associated with the pathlist, then
if the path "Pi" is chosen by the hashing algorithm, retrieve the
label at location "i" in that OutLabel-list and apply the label
action of that label on the packet
In the next subsection, we apply the steps in this subsection to a
sample scenario.
5.2. Example: Flattening a forwarding chain
This example uses a case of inter-AS option C [8] where there are 3
levels of hierarchy. Figure 4 illustrates the sample topology. To
force 3 levels of hierarchy, the ASBRs on the ingress domain (domain
1) advertise the core routers of the egress domain (domain 2) to the
ingress PE (iPE) via BGP-LU [4] instead of redistributing them into
the IGP of domain 1. The end result is that the ingress PE (iPE) has
2 levels of recursion for the VPN prefixes VPN-IP1 and VPN-IP2.
        Domain 1                    Domain 2
   +-------------+             +-------------+
   |             |             |             |
   | LDP/SR Core |             | LDP/SR core |
   |             |             |             |
   | (192.0.1.1) |             |             |
   | ASBR11---------ASBR21........ePE1(192.0.2.1)
   |    |  \    /   |      .  .       |\
   |    |   \  /    |      .  .       | \
   |    |    \/     |      .  .       |  \
   |    |    /\     |      ..         |   \VPN-IP1 (11.1.1.0/24)
   |    |   /  \    |      . .        |   /VRF "Blue" ASn: 65000
   |    |  /    \   |      .  .       |  /
   |    | /      \  |      .  .       | /
   |    |/        \ |      .  .       |/
  iPE ASBR12---------ASBR22........ePE2 (192.0.2.2)
   | (192.0.1.2)    |                 |\
   |    |           |                 | \
   |    |           |                 |  \VRF "Blue" ASn: 65000
   |    |           |                 |  /VPN-IP2 (11.1.2.0/24)
   |    |           |                 | /
   |    |           |                 |/
   | ASBR13---------ASBR23........ePE3(192.0.2.3)
   | (192.0.1.3)    |                 |
   |    |           |                 |
   |    |           |                 |
   +-------------+             +-------------+
    <============    <=========   <============
    Advertise ePEx   Advertise    Redistribute
    Using iBGP-LU    ePEx Using   IGP into
                     eBGP-LU      BGP

Figure 4 : Sample 3-level hierarchy topology
We will make the following assumptions about connectivity

o In "domain 2", both ASBR21 and ASBR22 can reach both ePE1 and
ePE2 using the same distance

o In "domain 2", only ASBR23 can reach ePE3

o In "domain 1", iPE (the ingress PE) can reach ASBR11, ASBR12, and
ASBR13 via IGP using the same distance.

We will make the following assumptions about the labels

o The VPN labels advertised by ePE1 and ePE2 for prefix VPN-IP1 are
VPN-L11 and VPN-L21, respectively

o The VPN labels advertised by ePE2 and ePE3 for prefix VPN-IP2 are
VPN-L22 and VPN-L32, respectively

o The labels advertised by ASBR11 to iPE using BGP-LU [4] for the
egress PEs ePE1 and ePE2 are LASBR11(ePE1) and LASBR11(ePE2),
respectively.
skipping to change at page 16, line 25
egress PEs ePE1 and ePE2 are LASBR12(ePE1) and LASBR12(ePE2),
respectively

o The label advertised by ASBR13 to iPE using BGP-LU [4] for the
egress PE ePE3 is LASBR13(ePE3)

o The IGP labels advertised by the next hops directly connected to
iPE towards ASBR11, ASBR12, and ASBR13 in the core of domain 1
are IGP-L11, IGP-L12, and IGP-L13, respectively.
Based on these connectivity assumptions and the topology in Figure
4, the routing table on iPE is

   65000:11.1.1.0/24
       via ePE1 (192.0.2.1), VPN Label: VPN-L11
       via ePE2 (192.0.2.2), VPN Label: VPN-L21
   65000:11.1.2.0/24
       via ePE2 (192.0.2.2), VPN Label: VPN-L22
       via ePE3 (192.0.2.3), VPN Label: VPN-L32
   192.0.2.1/32 (ePE1)
       via ASBR11, BGP-LU Label: LASBR11(ePE1)
       via ASBR12, BGP-LU Label: LASBR12(ePE1)
   192.0.2.2/32 (ePE2)
       via ASBR11, BGP-LU Label: LASBR11(ePE2)
       via ASBR12, BGP-LU Label: LASBR12(ePE2)
   192.0.2.3/32 (ePE3)
       via ASBR13, BGP-LU Label: LASBR13(ePE3)
   192.0.1.1/32 (ASBR11)
       via Core, Label: IGP-L11
   192.0.1.2/32 (ASBR12)
       via Core, Label: IGP-L12
   192.0.1.3/32 (ASBR13)
       via Core, Label: IGP-L13
The diagram in Figure 5 illustrates the forwarding chain in iPE
assuming that the forwarding hardware in iPE supports 3 levels of
hierarchy. The leaves corresponding to the ASBRs in domain 1
(ASBR11, ASBR12, and ASBR13) are at the bottom of the hierarchy.
There are a few important points:

o Because the hardware supports the required depth of hierarchy,
the size of a pathlist equals the size of the label list
associated with the leaves using this pathlist

o The index inside the pathlist entry indicates the label that will
be picked from the Outlabel-List associated with the child leaf
if that path is chosen by the forwarding engine hashing function.
 Outlabel-List                               Outlabel-List
  For VPN-IP1                                 For VPN-IP2
 +------------+   +--------+    +-------+    +------------+
 |  VPN-L11   |<--| VPN-IP1|    |VPN-IP2|--->|  VPN-L22   |
 +------------+   +---+----+    +---+---+    +------------+
 |  VPN-L21   |       |             |        |  VPN-L32   |
 +------------+       |             |        +------------+
                      |             |
                      V             V
skipping to change at page 19, line 7
 |  +------+    +------+  |    +------+  |
 v             v              v
 +-------+     +-------+     +-------+
 |IGP-L11|     |IGP-L12|     |IGP-L13|
 +-------+     +-------+     +-------+
Figure 5 : Forwarding Chain for hardware supporting 3 Levels
Now suppose the hardware on iPE (the ingress PE) supports 2 levels
of hierarchy only. In that case, the 3-level forwarding chain in
Figure 5 needs to be "flattened" into 2 levels only.
 Outlabel-List                               Outlabel-List
  For VPN-IP1                                 For VPN-IP2
 +------------+   +-------+    +--------+    +------------+
 |  VPN-L11   |<--|VPN-IP1|    | VPN-IP2|--->|  VPN-L22   |
 +------------+   +---+---+    +---+----+    +------------+
 |  VPN-L21   |       |            |         |  VPN-L32   |
 +------------+       |            |         +------------+
                      |            |
                      |            |
skipping to change at page 20, line 5
 v             v              v
 +-------+     +-------+     +-------+
 |IGP-L11|     |IGP-L12|     |IGP-L13|
 +-------+     +-------+     +-------+
Figure 6 : Flattening 3 levels to 2 levels of Hierarchy on iPE
Figure 6 represents one way to "flatten" a 3-level hierarchy into
two levels. There are a few important points:
o As mentioned in Section 5.1, a flattened pathlist may have a
label list associated with it. The size of the label list
associated with a flattened pathlist equals the size of the
pathlist. Hence it is possible that an implementation includes
these label lists in the flattened pathlist itself

o Again as mentioned in Section 5.1, the size of a flattened
pathlist may not be equal to the size of the OutLabel-lists of
leaves using the flattened pathlist. So the indices inside a
flattened pathlist still indicate the label index in the
OutLabel-Lists of the leaves using that pathlist. Because the
size of the flattened pathlist may be different from the size of
the OutLabel-lists of the leaves, the indices may be repeated.
o Let's take a look at the flattened pathlist used by the prefix
"VPN-IP2". The pathlist associated with the prefix "VPN-IP2" has
three entries.

   o The first and second entries have index "0". This is because
   both entries correspond to ePE2. Hence when the hashing
   performed by the forwarding engine results in using the first
   or the second entry in the pathlist, the forwarding engine will
   pick the correct VPN label "VPN-L22", which is the label
   advertised by ePE2 for the prefix "VPN-IP2"

   o The third entry has the index "1". This is because the third
   entry corresponds to ePE3. Hence when the hashing performed by
   the forwarding engine results in using the third entry in the
   flattened pathlist, the forwarding engine will pick the correct
   VPN label "VPN-L32", which is the label advertised by "ePE3"
   for the prefix "VPN-IP2"
Now let's try and apply the forwarding steps in Section 4 together
with the additional step in Section 5.1 to the flattened forwarding
chain illustrated in Figure 6.
o Suppose a packet arrives at "iPE" and matches the VPN prefix o Suppose a packet arrives at "iPE" and matches the VPN prefix
"VPN-IP2" "VPN-IP2"
o The forwarding engine walks to the parent of the "VPN_P2", which o The forwarding engine walks to the parent of the "VPN-IP2", which
is the flattened pathlist and applies a hashing algorithm to pick is the flattened pathlist and applies a hashing algorithm to pick
a path a path
o Suppose the hashing by the forwarding engine picks the second o Suppose the hashing by the forwarding engine picks the second
entry in the flattened pathlist associated with the leaf "VPN- path in the flattened pathlist associated with the leaf "VPN-
IP2". IP2".
o Because the second entry has the index "0", the label "VPN-L22" o Because the second path has the index "0", the label "VPN-L22" is
is pushed on the packet pushed on the packet
o At the same time, the forwarding engine picks the second label o Next the forwarding engine picks the second label from the
from the Outlabel-Array associated with the flattened pathlist. Outlabel-Array associated with the flattened pathlist. Hence the
Hence the next label that is pushed is "LASBR12(ePE2)" next label that is pushed is "LASBR12(ePE2)"
o The forwarding engine now moves to the parent of the flattened o The forwarding engine now moves to the parent of the flattened
pathlist corresponding to the second entry. The parent is the IGP pathlist corresponding to the second path. The parent is the IGP
label leaf corresponding to "ASBR12" label leaf corresponding to "ASBR12"
o So the packet is forwarded towards the ASBR "ASBR12" and the IGP o So the packet is forwarded towards the ASBR "ASBR12" and the IGP
label at the top will be "L12" label at the top will be "L12"
Based on the above steps, a packet arriving at iPE and destined to
the prefix VPN-IP2 reaches its destination as follows
o iPE sends the packet along the shortest path towards ASBR12 with
  the following label stack starting from the top: {L12,
  LASBR12(ePE2), VPN-L22}
o ASBR12 swaps the label "LASBR12(ePE2)" with the label
  "LASBR22(ePE2)" advertised by its external peer ASBR22 and
  forwards the packet to ASBR22
o Hence ASBR22 swaps "LASBR22(ePE2)" with the IGP label for ePE2
  advertised by the next-hop towards ePE2 in domain 2, and sends the
  packet along the shortest path towards ePE2
o The penultimate hop of ePE2 pops the top label. Hence ePE2
  receives the packet with the label VPN-L22 at the top
o ePE2 pops "VPN-L22" and sends the packet as a pure IP packet
  towards the destination VPN-IP2.
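A flattened pathlist can be modeled along the same lines. The
FlatEntry fields below are assumptions of this illustration; they
simply bundle, per entry, the BGP-level path-index (used against the
leaf's OutLabel-List), the label taken from the Outlabel-Array, and
the parent IGP label leaf, which is exactly the information used in
the walkthrough above.

   from collections import namedtuple

   # One entry of a hypothetical flattened pathlist.
   FlatEntry = namedtuple("FlatEntry", ["path_index", "out_label", "igp_parent"])

   def forward_flattened(vpn_out_labels, flat_pathlist, entry_index):
       entry = flat_pathlist[entry_index]
       labels = [entry.out_label,                    # e.g. LASBR12(ePE2)
                 vpn_out_labels[entry.path_index]]   # e.g. VPN-L22 for path-index 0
       return entry.igp_parent, labels               # walk continues at the IGP leaf

For the second entry of the walkthrough above, forward_flattened()
would return the IGP label leaf of ASBR12 together with the partial
stack {LASBR12(ePE2), VPN-L22}, after which the IGP label L12 is
added on top.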
5. Forwarding Chain Adjustment at a Failure
The hierarchical and shared structure of the forwarding chain
explained in Section 2 allows modifying a small number of forwarding
chain objects to re-route traffic to a pre-calculated equal-cost or
backup path without the need to modify the possibly very large
number of BGP prefixes. In this section, we go over various core and
edge failure scenarios to illustrate how the FIB manager can utilize
the forwarding chain structure to achieve BGP prefix independent
convergence.
5.1. BGP-PIC core
This section describes the adjustments to the forwarding chain when
a core link or node fails but the BGP next-hop remains reachable.
There are two cases: remote link failure and attached link failure.
Node failures are treated as link failures.
When a remote link or node fails, IGP on the ingress PE receives
advertisement indicating a topology change so IGP re-converges to
either find a new next-hop and/or outgoing interface or remove the
impacted path(s) from the IGP pathlists using them. When a local
link fails, the failure is detected
immediately. The FIB manager marks the impacted path(s) as unusable
so that only usable paths are used to forward packets. Hence only
IGP pathlists with paths using the failed local link need to be
modified. All other pathlists are not impacted. Note that in this
particular case there is actually no need even to backwalk to IGP
leaves to adjust the OutLabel-Lists because FIB can rely on the
path-index stored in the usable paths in the pathlist to pick the
right label.
It is noteworthy to mention that because the FIB manager modifies
the forwarding chain starting from the IGP leaves only, BGP
pathlists and leaves are not modified. Hence traffic restoration
occurs within the time frame of IGP convergence and, for local link
failure, assuming a backup path has been precomputed, within the
timeframe of local detection (e.g. 50ms). Examples of solutions that
pre-compute backup paths are IP FRR [16], remote LFA [17], Ti-LFA
[15], MRT [18], and an eBGP path with a precomputed backup path
[10].
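The core-failure handling can be sketched as follows, again using a
hypothetical object model rather than any particular implementation.
The point of the sketch is that the FIB manager only flips a flag on
the impacted paths of the shared IGP pathlists and that the stored
path-index keeps selecting the correct label, so neither BGP
pathlists nor BGP leaves are rewritten.

   from dataclasses import dataclass, field

   @dataclass
   class IgpPath:
       path_index: int              # index into the IGP leaf's OutLabel-List
       out_interface: str
       usable: bool = True

   @dataclass
   class IgpPathlist:
       paths: list = field(default_factory=list)

   def handle_local_link_failure(igp_pathlists, failed_interface):
       """Mark every path over the failed interface unusable; nothing else moves."""
       for pathlist in igp_pathlists:
           for path in pathlist.paths:
               if path.out_interface == failed_interface:
                   path.usable = False

   def pick_usable(pathlist, flow_hash):
       """Forwarding picks among usable paths; path-index still selects the label."""
       usable = [p for p in pathlist.paths if p.usable]
       path = usable[flow_hash % len(usable)]
       return path.out_interface, path.path_index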
Let's apply this procedure to the forwarding chain depicted in
Figure 2. Suppose a remote link failure occurs and impacts the first
ECMP IGP path to the remote BGP next-hop. Upon IGP convergence, the
IGP pathlist used by the BGP next-hop is updated to reflect the new
topology (one path instead of two). As soon as the IGP convergence
is effective for the BGP next-hop entry, the new forwarding state is
immediately available to all dependent BGP prefixes. The same
behavior would occur if the failure was local, such as an interface
going down. As soon as the IGP convergence is complete for the BGP
next-hop IGP route, all its depending BGP routes benefit from the
new path. In fact, upon local failure, if LFA protection is enabled
for the IGP route to the BGP next-hop and a backup path has been
pre-computed and installed in the pathlist, the LFA backup path is
immediately activated upon the local interface failure (e.g. sub-
50msec) and thus protection benefits all the depending BGP traffic
through the hierarchical forwarding dependency between the routes.
5.2. BGP-PIC edge
This section describes the adjustments to the forwarding chains as a
result of edge node or edge link failure.
5.2.1. Adjusting Forwarding Chain on Egress Node Failure
When an edge node fails, the IGP on the neighboring core nodes sends
route updates indicating that the edge node is no longer reachable.
IGP running on the iBGP peers instructs FIB to remove the IP and
label leaves corresponding to the failed edge node from FIB. So the
FIB manager performs the following steps:
o FIB manager deletes the IGP leaf corresponding to the failed edge
  node
o FIB manager backwalks to all dependent BGP pathlists and marks
  the path using the deleted IGP leaf as unresolved
o Note that there is no need to modify the possibly large number of
  BGP leaves because each path in the pathlist carries its path
  index and hence the correct outgoing label will be picked.
  Consider for example the forwarding chain depicted in Figure 2.
  If the 1st BGP path becomes unresolved, then the forwarding
  engine will only use the second path for forwarding. Yet the path
  index of that single resolved path will still be 1 and hence the
  label VPN-L12 will be pushed.
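The steps above can be sketched as follows. The per-leaf "dependents"
set is an assumption of this illustration; the only requirement
implied by the text is that the FIB manager can backwalk from a
deleted IGP leaf to the BGP pathlists that resolve on it.

   # Hypothetical sketch of the backwalk on egress node failure, reusing the
   # object model of the earlier sketches plus a per-leaf 'dependents' set.
   def handle_egress_node_failure(igp_leaves, failed_next_hop):
       igp_leaf = igp_leaves.pop(failed_next_hop)    # delete the IGP leaf
       for bgp_pathlist in igp_leaf.dependents:      # backwalk to BGP pathlists
           for path in bgp_pathlist.paths:
               if path.parent is igp_leaf:
                   path.resolved = False             # mark the path unresolved
       # BGP leaves are not touched: surviving paths keep their original
       # path-index, so the correct outgoing label is still selected.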
5.2.2. Adjusting Forwarding Chain on PE-CE Link Failure
Suppose the link between an edge router and its external peer fails.
There are two scenarios: (1) the edge node attached to the failed
link performs next-hop self and (2) the edge node attached to the
failure advertises the IP address of the failed link as the next-hop
attribute to its iBGP peers.
In the first case, the rest of the iBGP peers will remain unaware of
the link failure and will continue to forward traffic to the edge
node until the edge node attached to the failed link withdraws the
impacted BGP prefixes. Until the withdrawal takes effect, the edge
node attached to the failed link redirects traffic to the
precomputed backup path. For unlabeled traffic, packets are simply
redirected towards the backup egress PE.
In the second case where the edge router uses the IP address of the
failed link as the BGP next-hop, the edge router will still perform
the previous steps. But, unlike the case of next-hop self, IGP on
the failed edge node informs the rest of the iBGP peers that the IP
address of the failed link is no longer reachable. Hence the FIB
manager on the iBGP peers will delete the IGP leaf corresponding to
the IP prefix of the failed link. The behavior of the iBGP peers
will be identical to the case of edge node failure outlined in
Section 5.2.1.
It is noteworthy to mention that because the edge link failure is
local to the edge router, sub-50 msec convergence can be achieved as
described in [10].
Let's try to apply the case of next-hop self to the forwarding chain
depicted in Figure 3. After failure of the link between ePE1 and CE,
the forwarding engine will route traffic arriving from the core
towards VPN-NH2 with path-index=1. A packet arriving from the core
will contain the label VPN-L11 at the top. The label VPN-L11 is
swapped with the label VPN-L21 and the packet is forwarded towards
ePE2.
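Under next-hop self, the repair at ePE1 thus amounts to a label swap
plus redirection to the backup next-hop. The following minimal sketch
reproduces the Figure 3 behavior; the labels and next-hop names are
taken from the example above, while the table structure itself is an
assumption of this illustration.

   # Hypothetical backup table on ePE1: local VPN label -> (backup label,
   # backup BGP next-hop).
   BACKUP = {"VPN-L11": ("VPN-L21", "VPN-NH2")}

   def protect(label_stack):
       """Swap the top VPN label and return the backup next-hop to use."""
       backup_label, backup_nh = BACKUP[label_stack[0]]
       return [backup_label] + label_stack[1:], backup_nh

   # protect(["VPN-L11"]) returns (["VPN-L21"], "VPN-NH2"); the packet is
   # then forwarded towards ePE2.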
5.3. Handling Failures for Flattened Forwarding Chains
As explained in the example in Section 3.2, if the number of
hierarchy levels supported by a platform cannot accommodate the
native number of hierarchy levels of a recursive forwarding chain,
the instantiated forwarding chain is constructed by flattening two
or more levels. Hence the 3-level chain in Figure 5 is flattened
into the 2-level chain in Figure 6.
While reducing the benefits of BGP-PIC, flattening one hierarchy
into a shallower hierarchy does not always result in a complete loss
of the benefits of BGP-PIC. To illustrate this fact, suppose ASBR12
is no longer reachable in domain 1. If the platform supports the
full hierarchy depth, the forwarding chain is the one depicted in
Figure 5 and hence the FIB manager needs to backwalk one level to
the pathlist shared by "ePE1" and "ePE2" and adjust it. If the
platform supports 2 levels of hierarchy, then a usable forwarding
chain is the one depicted in Figure 6. In that case, if ASBR12 is no
longer reachable, the FIB manager has to backwalk to the two
flattened pathlists and update both of them.
The main observation is that the loss of convergence speed due to
the loss of hierarchy depth depends on the structure of the
forwarding chain itself. To illustrate this fact, let's take two
extremes. Suppose the forwarding objects in level i+1 depend on the
forwarding objects in level i. If every object in level i+1 depends
on a separate object in level i, then flattening level i into level
i+1 will not result in loss of convergence speed. Now let's take the
other extreme. Suppose "n" objects in level i+1 depend on 1 object
in level i. Now suppose FIB flattens level i into level i+1. If a
topology change results in modifying the single object in level i,
then FIB has to backwalk and modify "n" objects in the flattened
level, thereby losing all the benefit of BGP-PIC. Experience shows
that flattening forwarding chains usually results in moderate loss
of BGP-PIC benefits. Further analysis is needed to corroborate and
quantify this statement.
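The second extreme can be made concrete with a small count. The
number of dependents used below is an assumption chosen only to show
the scale of the effect.

   # Objects the FIB manager must touch when the single shared level-i
   # object changes, before and after flattening level i into level i+1.
   N_DEPENDENTS = 1_000_000              # e.g. leaves sharing one pathlist

   updates_hierarchical = 1              # only the shared level-i object
   updates_flattened = N_DEPENDENTS      # every flattened level-(i+1) object

   print(updates_hierarchical, updates_flattened)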
6. Properties
6.1. Coverage
All the possible failures, except CE node failure, are covered,
whether they impact a local or remote IGP path or a local or remote
BGP next-hop, as described in Section 5. This section provides
details for each failure and how the hierarchical and shared FIB
structure proposed in this document allows recovery that does not
depend on the number of BGP prefixes.
6.1.1. A remote failure on the path to a BGP next-hop
Upon IGP convergence, the IGP leaf for the BGP next-hop is updated
and all the depending BGP routes leverage the new IGP forwarding
state immediately. Details of this behavior can be found in Section
5.1.
This BGP resiliency property only depends on IGP convergence and is
independent of the number of BGP prefixes impacted.
6.1.2. A local failure on the path to a BGP next-hop
Upon LFA protection, the IGP leaf for the BGP next-hop is updated to
use the precomputed LFA backup path and all the depending BGP routes
leverage this LFA protection. Details of this behavior can be found
in Section 5.1.
This BGP resiliency property only depends on LFA protection and is
independent of the number of BGP prefixes impacted.
6.1.3. A remote iBGP next-hop fails
Upon IGP convergence, the IGP leaf for the BGP next-hop is deleted
and all the depending BGP pathlists are updated to either use the
remaining ECMP BGP best-paths or, if none remains available, to
activate precomputed backups. Details about this behavior can be
found in Section 5.2.1.
This BGP resiliency property only depends on IGP convergence and is
independent of the number of BGP prefixes impacted.
6.1.4. A local eBGP next-hop fails
Upon local link failure detection, the adjacency to the BGP next-hop
is deleted and all the depending BGP pathlists are updated to either
use the remaining ECMP BGP best-paths or, if none remains available,
to activate precomputed backups. Details about this behavior can be
found in Section 5.2.2.
This BGP resiliency property only depends on local link failure
detection and is independent of the number of BGP prefixes impacted.
6.2. Performance
When the failure is local (a local IGP next-hop failure or a local
eBGP next-hop failure), a pre-computed and pre-installed backup is
activated by a local-protection mechanism that does not depend on
the number of BGP destinations impacted by the failure. Sub-50msec
recovery is thus possible even if millions of BGP routes are
impacted.
When the failure is remote (a remote IGP failure not impacting the
BGP next-hop or a remote BGP next-hop failure), an alternate path is
activated upon IGP convergence. All the impacted BGP destinations
benefit from a working alternate path as soon as the IGP convergence
occurs for their impacted BGP next-hop, even if millions of BGP
routes are impacted.
6.2.1. Perspective
The following table puts the BGP PIC benefits in perspective
assuming
o 1M impacted BGP prefixes
o IGP convergence ~ 500 msec
o local protection ~ 50msec
o FIB Update per BGP destination ~ 100usec conservative,
~ 10usec optimistic
o BGP Convergence per BGP destination ~ 200usec conservative,
~ 100usec optimistic
                         Without PIC       With PIC
   Local IGP Failure     10 to 100 sec     50 msec
   Local BGP Failure     100 to 200 sec    50 msec
   Remote IGP Failure    10 to 100 sec     500 msec
   Remote BGP Failure    100 to 200 sec    500 msec
Upon local IGP next-hop failure or remote IGP next-hop failure, the
existing primary BGP next-hop is intact and usable; hence the
resiliency only depends on the ability of the FIB mechanism to
reflect the new path to the BGP next-hop to the depending BGP
destinations. Without BGP PIC, a conservative back-of-the-envelope
estimation for this FIB update is 100usec per BGP destination. An
optimistic estimation is 10usec per entry.
Upon local BGP next-hop failure or remote BGP next-hop failure,
without the BGP PIC mechanism, a new BGP Best-Path needs to be
recomputed and new updates need to be sent to peers. This depends on
BGP processing time that will be shared between best-path
computation, RIB update and peer update. A conservative back-of-the-
envelope estimation for this is 200usec per BGP destination. An
optimistic estimation is 100usec per entry.
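The orders of magnitude in the table follow directly from the
assumptions listed above, as the short illustrative calculation below
shows.

   PREFIXES = 1_000_000                       # impacted BGP prefixes

   def total_seconds(per_prefix_usec):
       return PREFIXES * per_prefix_usec / 1_000_000

   # Without PIC, per-prefix work dominates:
   print(total_seconds(10), total_seconds(100))    # FIB update: 10 to 100 sec
   print(total_seconds(100), total_seconds(200))   # BGP convergence: 100 to 200 sec

   # With PIC, restoration is prefix independent:
   #   local failure  -> local protection, ~0.05 sec
   #   remote failure -> IGP convergence,  ~0.5 sec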
6.3. Automated
The BGP PIC solution does not require any operator involvement. The
process is entirely automated as part of the FIB implementation.
The salient points enabling this automation are:
o Extension of the BGP Best Path computation to calculate more than
  one primary ([11] and [12]) or backup ([6] and [13]) BGP next-hop
o Sharing of BGP pathlists across BGP destinations with the same
  primary and backup BGP next-hops
o Hierarchical indirection and dependency between BGP pathlists and
  IGP pathlists
6.4. Incremental Deployment
As soon as a router supports the BGP PIC solution, it gains all of
the benefits described in this document without any requirement for
other routers to support BGP PIC.
7. Dependency
This section describes the functionality required in the forwarding
and control planes to support the BGP-PIC behavior described in this
document.
7.1. Hierarchical Hardware FIB
BGP PIC requires hierarchical hardware FIB support: for each BGP
forwarded packet, a BGP leaf is looked up, then a BGP Pathlist is
consulted, then an IGP Pathlist, then an Adjacency.
An alternative method consists of "flattening" the dependencies when
programming the BGP destinations into the hardware FIB, potentially
eliminating both the BGP pathlist and IGP pathlist consultation.
Such an approach decreases the number of memory lookups per
forwarding operation at the expense of an increase in hardware FIB
memory (flattening means less sharing, hence duplication), loss of
ECMP properties (flattening means less pathlist entropy), and loss
of BGP PIC properties.
7.2. Availability of more than one primary or secondary BGP next-hop
When the primary BGP next-hop fails, BGP PIC depends on the
availability of a pre-computed and pre-installed secondary BGP next-
hop in the BGP Pathlist.
The existence of a secondary next-hop is clear for the following
reason: a service that cares about network availability will require
two disjoint network connections and hence two BGP next-hops.
The BGP distribution of the secondary next-hop is available thanks
to the following BGP mechanisms: Add-Path [11], BGP Best-External
[6], diverse path [12], and the frequent use in VPN deployments of
different VPN RDs per PE. It is noteworthy to mention that the
availability of another BGP path does not mean that all failure
scenarios can be covered by simply forwarding traffic to the
available secondary path. The discussion of how to cover various
failure scenarios is beyond the scope of this document.
7.3. Pre-Computation of a secondary BGP next-hop
[13] describes how a secondary BGP next-hop can be precomputed on a
per BGP destination basis.
8. Security Considerations
The behavior described in this document is functionality internal to
a router that results in a significant improvement in convergence
time as well as a reduction in the CPU and memory used by FIB, while
not changing basic routing and forwarding functionality. As such, no
additional security risk is introduced by using the mechanisms
proposed in this document.
9. IANA Considerations
10. Conclusions
This document proposes a hierarchical and shared forwarding chain
structure that allows achieving BGP prefix independent convergence
and, in the case of locally detected failures, sub-50 msec
convergence. A router can construct the forwarding chains in a
completely transparent manner with zero operator intervention,
thereby supporting smooth and incremental deployment.
11. References
11.1. Normative References
[1] Bradner, S., "Key words for use in RFCs to Indicate Requirement
    Levels", BCP 14, RFC 2119, March 1997.
[2] Rekhter, Y., Li, T., and S. Hares, "A Border Gateway Protocol 4
    (BGP-4)", RFC 4271, January 2006.
[3] Bates, T., Chandra, R., Katz, D., and Y. Rekhter, "Multiprotocol
    Extensions for BGP-4", RFC 4760, January 2007.
[4] Rekhter, Y. and E. Rosen, "Carrying Label Information in BGP-4",
    RFC 3107, May 2001.
[5] Andersson, L., Minei, I., and B. Thomas, "LDP Specification",
    RFC 5036, October 2007.
11.2. Informative References
[6] Marques, P., Fernando, R., Chen, E., Mohapatra, P., and H.
    Gredler, "Advertisement of the best external route in BGP",
    draft-ietf-idr-best-external-05.txt, January 2012.
[7] Wu, J., Cui, Y., Metz, C., and E. Rosen, "Softwire Mesh
    Framework", RFC 5565, June 2009.
[8] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private Networks
    (VPNs)", RFC 4364, February 2006.
Authors' Addresses
   Ahmed Bashandy
   Cisco Systems
   Email: bashandy@cisco.com

   Clarence Filsfils
   Cisco Systems
   Brussels, Belgium
   Email: cfilsfil@cisco.com

   Prodosh Mohapatra
   Sproute Networks
   Email: mpradosh@yahoo.com