Internet Engineering Task Force (IETF)                  C. Filsfils, Ed.
Request for Comments: 8670                                    S. Previdi
Category: Informational                              Cisco Systems, Inc.
ISSN: 2070-1721                                                 G. Dawra
                                                                LinkedIn
                                                                E. Aries
                                                            Arrcus, Inc.
                                                             P. Lapukhov
                                                                Facebook
                                                           December 2019

             BGP Prefix Segment in Large-Scale Data Centers
Abstract

This document describes the motivation for, and benefits of, applying
Segment Routing (SR) in BGP-based large-scale data centers.  It
describes the design to deploy SR in those data centers for both the
MPLS and IPv6 data planes.
Status of This Memo

This document is not an Internet Standards Track specification; it is
published for informational purposes.

This document is a product of the Internet Engineering Task Force
(IETF).  It represents the consensus of the IETF community.  It has
received public review and has been approved for publication by the
Internet Engineering Steering Group (IESG).  Not all documents
approved by the IESG are candidates for any level of Internet
Standard; see Section 2 of RFC 7841.

Information about the current status of this document, any errata,
and how to provide feedback on it may be obtained at
https://www.rfc-editor.org/info/rfc8670.
Copyright Notice

Copyright (c) 2019 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

1.  Introduction
2.  Large-Scale Data-Center Network Design Summary
  2.1.  Reference Design
3.  Some Open Problems in Large Data-Center Networks
4.  Applying Segment Routing in the DC with MPLS Data Plane
  4.1.  BGP Prefix Segment (BGP Prefix-SID)
  4.2.  EBGP Labeled Unicast (RFC 8277)
    4.2.1.  Control Plane
    4.2.2.  Data Plane
    4.2.3.  Network Design Variation
    4.2.4.  Global BGP Prefix Segment through the Fabric
    4.2.5.  Incremental Deployments
  4.3.  IBGP Labeled Unicast (RFC 8277)
5.  Applying Segment Routing in the DC with IPv6 Data Plane
6.  Communicating Path Information to the Host
7.  Additional Benefits
  7.1.  MPLS Data Plane with Operational Simplicity
  7.2.  Minimizing the FIB Table
  7.3.  Egress Peer Engineering
  7.4.  Anycast
8.  Preferred SRGB Allocation
9.  IANA Considerations
10. Manageability Considerations
11. Security Considerations
12. References
  12.1.  Normative References
  12.2.  Informative References
Acknowledgements
Contributors
Authors' Addresses
1.  Introduction

Segment Routing (SR), as described in [RFC8402], leverages the
source-routing paradigm.  A node steers a packet through an ordered
list of instructions called "segments".  A segment can represent any
instruction, topological or service based.  A segment can have a
local semantic to an SR node or a global semantic within an SR
domain.  SR allows the enforcement of a flow through any topological
path while maintaining per-flow state only at the ingress node to the
SR domain.  SR can be applied to the MPLS and IPv6 data planes.

The use cases described in this document should be considered in the
context of the BGP-based large-scale data-center (DC) design
described in [RFC7938].  This document extends it by applying SR both
with IPv6 and MPLS data planes.
2.  Large-Scale Data-Center Network Design Summary

This section provides a brief summary of the Informational RFC
[RFC7938], which outlines a practical network design suitable for
data centers of various scales:

*  Data-center networks have highly symmetric topologies with
   multiple parallel paths between two server-attachment points.  The
   well-known Clos topology is most popular among the operators (as
   described in [RFC7938]).  In a Clos topology, the minimum number
   of parallel paths between two elements is determined by the
   "width" of the "Tier-1" stage.  See Figure 1 for an illustration
   of the concept.

*  Large-scale data centers commonly use a routing protocol, such as
   BGP-4 [RFC4271], in order to provide endpoint connectivity.
   Therefore, recovery after a network failure is driven either by
   local knowledge of directly available backup paths or by
   distributed signaling between the network devices.

*  Within data-center networks, traffic is load shared using the
   Equal Cost Multipath (ECMP) mechanism.  With ECMP, every network
   device implements a pseudorandom decision, mapping packets to one
   of the parallel paths by means of a hash function calculated over
   certain parts of the packet, typically a combination of various
   packet header fields.  A minimal sketch of such a per-flow hash
   decision is shown after this list.
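The following is an illustrative sketch, not part of the reference
design: it shows how a per-flow ECMP decision can be modeled by
hashing a 5-tuple and mapping the result onto one of the available
parallel next hops.  The hash function, field selection, and node
names are assumptions made for the example only.

   # Illustrative sketch: per-flow ECMP path selection.  A device hashes
   # selected header fields of each packet and maps the result onto one
   # of the parallel next hops, so all packets of a flow follow the same
   # path.

   import hashlib

   def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
       """Map a flow (5-tuple) onto one of the parallel next hops."""
       key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
       digest = hashlib.sha256(key).digest()      # stand-in for a hardware hash
       index = int.from_bytes(digest[:4], "big") % len(next_hops)
       return next_hops[index]

   # Example: Node1 load-shares towards Node3 and Node4 (Figure 1).
   print(ecmp_next_hop("10.0.0.1", "10.0.0.2", "tcp", 40000, 443,
                       ["Node3", "Node4"]))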
The following is a schematic of a five-stage Clos topology with four
devices in the "Tier-1" stage.  Notice that the number of paths
between Node1 and Node12 equals four; the paths have to cross all of
the Tier-1 devices.  At the same time, the number of paths between
Node1 and Node2 equals two, and the paths only cross Tier-2 devices.
Other topologies are possible, but for simplicity, only the
topologies that have a single path from Tier-1 to Tier-3 are
considered below.  The rest could be treated similarly, with a few
modifications to the logic.
2.1.  Reference Design

                                Tier-1
                               +-----+
                               |NODE |
                            +->|  5  |--+
                            |  +-----+  |
                    Tier-2  |           |   Tier-2
                   +-----+  |  +-----+  |  +-----+
     +------------>|NODE |--+->|NODE |--+--|NODE |-------------+
     |       +-----|  3  |--+  |  6  |  +--|  9  |-----+       |
     |       |     +-----+     +-----+     +-----+     |       |
     |       |                                         |       |
     |       |     +-----+     +-----+     +-----+     |       |
     | +-----+---->|NODE |--+  |NODE |  +--|NODE |-----+-----+ |
     | |     | +---|  4  |--+->|  7  |--+--| 10  |---+ |     | |
     | |     | |   +-----+  |  +-----+  |  +-----+   | |     | |
     | |     | |            |           |            | |     | |
   +-----+ +-----+          |  +-----+  |          +-----+ +-----+
   |NODE | |NODE |  Tier-3  +->|NODE |--+  Tier-3  |NODE | |NODE |
   |  1  | |  2  |             |  8  |             | 11  | | 12  |
   +-----+ +-----+             +-----+             +-----+ +-----+
     | |     | |                                     | |     | |
     A O     B O        <- Servers ->                Z O     O O

                    Figure 1: 5-Stage Clos Topology
In the reference topology illustrated in Figure 1, it is assumed:

*  Each node is its own autonomous system (AS) (Node X has AS X).
   4-byte AS numbers are recommended ([RFC6793]).

   -  For simple and efficient route propagation filtering, Node5,
      Node6, Node7, and Node8 use the same AS; Node3 and Node4 use
      the same AS; and Node9 and Node10 use the same AS.

   -  In the case in which 2-byte autonomous system numbers are used,
      for efficient usage of the scarce 2-byte Private Use AS pool,
      different Tier-3 nodes might use the same AS.

   -  Without loss of generality, these details will be simplified in
      this document.  It is to be assumed that each node has its own
      AS.

*  Each node peers with its neighbors with a BGP session.  If not
   specified, external BGP (EBGP) is assumed.  In a specific use
   case, internal BGP (IBGP) will be used, but this will be called
   out explicitly in that case.

*  Each node originates the IPv4 address of its loopback interface
   into BGP and announces it to its neighbors.

   -  The loopback of Node X is 192.0.2.x/32.

In this document, the Tier-1, Tier-2, and Tier-3 nodes are referred
to as "Spine", "Leaf", and "ToR" (top of rack) nodes, respectively.
When a ToR node acts as a gateway to the "outside world", it is
referred to as a "border node".
3.  Some Open Problems in Large Data-Center Networks

The data-center-network design summarized above provides means for
moving traffic between hosts with reasonable efficiency.  There are a
few open performance and reliability problems that arise in such a
design:

*  ECMP routing is most commonly realized per flow.  This means that
   large, long-lived "elephant" flows may affect the performance of
   smaller, short-lived "mouse" flows and may reduce the efficiency
   of per-flow load sharing.  In other words, per-flow ECMP does not
   perform efficiently when the flow-lifetime distribution is heavy
   tailed.  Furthermore, due to hash-function inefficiencies, it is
   possible to have frequent flow collisions where more flows get
   placed on one path over the others.

*  Shortest-path routing with ECMP implements an oblivious routing
   model that is not aware of the network imbalances.  If the network
   symmetry is broken, for example, due to link failures, utilization
   hotspots may appear.  For example, if a link fails between Tier-1
   and Tier-2 devices (e.g., Node5 and Node9), Tier-3 devices Node1
   and Node2 will not be aware of that since there are other paths
   available from the perspective of Node3.  They will continue
   sending roughly equal traffic to Node3 and Node4 as if the failure
   didn't exist, which may cause a traffic hotspot.

*  Isolating faults in the network with multiple parallel paths and
   ECMP-based routing is nontrivial due to the lack of determinism.
   Specifically, the connections from HostA to HostB may take a
   different path every time a new connection is formed, thus making
   consistent reproduction of a failure much more difficult.  This
   complexity scales linearly with the number of parallel paths in
   the network and stems from the random nature of path selection by
   the network devices.
4.  Applying Segment Routing in the DC with MPLS Data Plane
4.1.  BGP Prefix Segment (BGP Prefix-SID)

A BGP Prefix Segment is a segment associated with a BGP prefix.  A
BGP Prefix Segment is a network-wide instruction to forward the
packet along the ECMP-aware best path to the related prefix.

The BGP Prefix Segment is defined as the BGP Prefix-SID Attribute in
[RFC8669], which contains an index.  Throughout this document, the
BGP Prefix Segment Attribute is referred to as the "BGP Prefix-SID"
and the encoded index as the label index.

In this document, the network design decision has been made to assume
that all the nodes are allocated the same SRGB (Segment Routing
Global Block), e.g., [16000, 23999].  This provides operational
simplification as explained in Section 8, but this is not a
requirement.

For illustration purposes, when considering an MPLS data plane, it is
assumed that the label index allocated to prefix 192.0.2.x/32 is X.
As a result, a local label (16000+x) is allocated for prefix
192.0.2.x/32 by each node throughout the DC fabric.
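The following minimal sketch, which is not part of the design itself,
illustrates the label derivation just described: the local label of a
BGP Prefix-SID is the node's SRGB base plus the advertised label
index.  The SRGB [16000, 23999] and the "index X for 192.0.2.x/32"
convention are the example values used throughout this document.

   # Illustrative sketch: deriving the local MPLS label for a BGP
   # Prefix-SID from the node's SRGB and the advertised label index.

   SRGB_BASE = 16000
   SRGB_SIZE = 8000          # SRGB [16000, 23999]

   def local_label(label_index: int, srgb_base: int = SRGB_BASE,
                   srgb_size: int = SRGB_SIZE) -> int:
       """Return the label a node installs for a given Prefix-SID index."""
       if not 0 <= label_index < srgb_size:
           raise ValueError("label index outside the SRGB")
       return srgb_base + label_index

   # Prefix 192.0.2.11/32 carries label index 11, so every node in the
   # fabric installs label 16011 for it.
   assert local_label(11) == 16011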
When the IPv6 data plane is considered, it is assumed that Node X is
allocated IPv6 address (segment) 2001:DB8::X.
4.2.  EBGP Labeled Unicast (RFC 8277)

Referring to Figure 1 and [RFC7938], the following design
modifications are introduced:

*  Each node peers with its neighbors via an EBGP session with
   extensions defined in [RFC8277] (named "EBGP8277" throughout this
   document) and with the BGP Prefix-SID attribute extension as
   defined in [RFC8669].

*  The forwarding plane at Tier-2 and Tier-1 is MPLS.

*  The forwarding plane at Tier-3 is either IP2MPLS (if the host
   sends IP traffic) or MPLS2MPLS (if the host sends MPLS-
   encapsulated traffic).

Figure 2 zooms into a path from ServerA to ServerZ within the
topology of Figure 1.
                +-----+     +-----+     +-----+
     +--------->|NODE |     |NODE |     |NODE |
     |          |  4  |--+->|  7  |--+--| 10  |---+
     |          +-----+     +-----+     +-----+   |
     |                                            |
  +-----+                                      +-----+
  |NODE |                                      |NODE |
  |  1  |                                      | 11  |
  +-----+                                      +-----+
     |                                            |
     A             <- Servers ->                  Z

     Figure 2: Path from A to Z via Nodes 1, 4, 7, 10, and 11
Referring to Figures 1 and 2, and assuming the IP address, AS, and
label-index allocation previously described, the following sections
detail the control-plane operation and the data-plane states for the
prefix 192.0.2.11/32 (loopback of Node11).
4.2.1.  Control Plane

Node11 originates 192.0.2.11/32 in BGP and allocates to it a BGP
Prefix-SID with label index 11 [RFC8669].

Node11 sends the following EBGP8277 update to Node10:

   IP Prefix:       192.0.2.11/32
   Label:           Implicit NULL
   Next hop:        Node11's interface address on the link to Node10
   AS Path:         {11}
   BGP Prefix-SID:  Label-Index 11

Node10 receives the above update.  As it is SR capable, Node10 is
able to interpret the BGP Prefix-SID; therefore, it understands that
it should allocate the label from its own SRGB block, offset by the
label index received in the BGP Prefix-SID (16000+11, hence, 16011),
to the Network Layer Reachability Information (NLRI) instead of
allocating a nondeterministic label out of a dynamically allocated
portion of the local label space.  The implicit NULL label in the
NLRI tells Node10 that it is the penultimate hop and that it must pop
the top label on the stack before forwarding traffic for this prefix
to Node11.

Then, Node10 sends the following EBGP8277 update to Node7:

   IP Prefix:       192.0.2.11/32
   Label:           16011
   Next hop:        Node10's interface address on the link to Node7
   AS Path:         {10, 11}
   BGP Prefix-SID:  Label-Index 11

Node7 receives the above update.  As it is SR capable, Node7 is able
to interpret the BGP Prefix-SID; therefore, it allocates the local
(incoming) label 16011 (16000 + 11) to the NLRI (instead of
allocating a "dynamic" local label from its label manager).  Node7
uses the label in the received EBGP8277 NLRI as the outgoing label
(the index is only used to derive the local/incoming label).

Node7 sends the following EBGP8277 update to Node4:

   IP Prefix:       192.0.2.11/32
   Label:           16011
   Next hop:        Node7's interface address on the link to Node4
   AS Path:         {7, 10, 11}
   BGP Prefix-SID:  Label-Index 11

Node4 receives the above update.  As it is SR capable, Node4 is able
to interpret the BGP Prefix-SID; therefore, it allocates the local
(incoming) label 16011 to the NLRI (instead of allocating a "dynamic"
local label from its label manager).  Node4 uses the label in the
received EBGP8277 NLRI as an outgoing label (the index is only used
to derive the local/incoming label).

Node4 sends the following EBGP8277 update to Node1:

   IP Prefix:       192.0.2.11/32
   Label:           16011
   Next hop:        Node4's interface address on the link to Node1
   AS Path:         {4, 7, 10, 11}
   BGP Prefix-SID:  Label-Index 11

Node1 receives the above update.  As it is SR capable, Node1 is able
to interpret the BGP Prefix-SID; therefore, it allocates the local
(incoming) label 16011 to the NLRI (instead of allocating a "dynamic"
local label from its label manager).  Node1 uses the label in the
received EBGP8277 NLRI as an outgoing label (the index is only used
to derive the local/incoming label).
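The update walk above can be summarized by the following hypothetical
sketch; it is not router code, and the data structures are
illustrative only.  Each SID-capable node installs the incoming label
SRGB base + label index, uses the label carried in the received NLRI
as its outgoing label, prepends its own AS, and propagates the BGP
Prefix-SID attribute unmodified.

   # Hypothetical model of the EBGP8277 update propagation described above.
   SRGB_BASE = 16000

   def readvertise(update, local_as):
       """Return the update a SID-capable node sends to its upstream EBGP peer."""
       incoming = SRGB_BASE + update["prefix_sid_index"]   # local label, 16000 + 11
       outgoing = update["label"]                           # label from the received NLRI
       print(f"AS{local_as}: incoming label {incoming}, outgoing label {outgoing}")
       return {
           "prefix": update["prefix"],
           "label": incoming,                               # advertised in the NLRI
           "as_path": [local_as] + update["as_path"],
           "prefix_sid_index": update["prefix_sid_index"],  # propagated unmodified
       }

   # Node11 originates 192.0.2.11/32 with index 11 and Implicit NULL.
   update = {"prefix": "192.0.2.11/32", "label": "Implicit NULL",
             "as_path": [11], "prefix_sid_index": 11}

   for asn in (10, 7, 4, 1):          # updates received by Node10, 7, 4, then 1
       update = readvertise(update, asn)

   print(update["as_path"])           # [1, 4, 7, 10, 11]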
4.2.2.  Data Plane

Referring to Figure 1, and assuming all nodes apply the same
advertisement rules described above and all nodes have the same SRGB
(16000-23999), here are the IP/MPLS forwarding tables for prefix
192.0.2.11/32 at Node1, Node4, Node7, and Node10.

   +----------------------------------+----------------+------------+
   | Incoming Label or IP Destination | Outgoing Label |  Outgoing  |
   |                                  |                | Interface  |
   +----------------------------------+----------------+------------+
   | 16011                            | 16011          | ECMP{3, 4} |
   +----------------------------------+----------------+------------+
   | 192.0.2.11/32                    | 16011          | ECMP{3, 4} |
   +----------------------------------+----------------+------------+

                     Table 1: Node1 Forwarding Table

   +----------------------------------+----------------+------------+
   | Incoming Label or IP Destination | Outgoing Label |  Outgoing  |
   |                                  |                | Interface  |
   +----------------------------------+----------------+------------+
   | 16011                            | 16011          | ECMP{7, 8} |
   +----------------------------------+----------------+------------+
   | 192.0.2.11/32                    | 16011          | ECMP{7, 8} |
   +----------------------------------+----------------+------------+

                     Table 2: Node4 Forwarding Table

   +----------------------------------+----------------+------------+
   | Incoming Label or IP Destination | Outgoing Label |  Outgoing  |
   |                                  |                | Interface  |
   +----------------------------------+----------------+------------+
   | 16011                            | 16011          | 10         |
   +----------------------------------+----------------+------------+
   | 192.0.2.11/32                    | 16011          | 10         |
   +----------------------------------+----------------+------------+

                     Table 3: Node7 Forwarding Table

   +----------------------------------+----------------+------------+
   | Incoming Label or IP Destination | Outgoing Label |  Outgoing  |
   |                                  |                | Interface  |
   +----------------------------------+----------------+------------+
   | 16011                            | POP            | 11         |
   +----------------------------------+----------------+------------+
   | 192.0.2.11/32                    | N/A            | 11         |
   +----------------------------------+----------------+------------+

                    Table 4: Node10 Forwarding Table
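The following hypothetical walk-through uses the entries of Tables
1-4 (with one ECMP member picked arbitrarily for illustration) to
show how a packet carrying top label 16011 traverses the fabric: the
label is swapped to itself at every hop, since the SRGB is
homogeneous, and popped at the penultimate hop Node10.

   # Hypothetical walk-through of the MPLS forwarding state above.
   FIB = {
       "Node1":  {16011: ("swap", 16011, "Node4")},   # one of ECMP{3, 4}
       "Node4":  {16011: ("swap", 16011, "Node7")},   # one of ECMP{7, 8}
       "Node7":  {16011: ("swap", 16011, "Node10")},
       "Node10": {16011: ("pop",  None,  "Node11")},  # penultimate-hop pop
   }

   def forward(node, label):
       """Follow the MPLS FIB until the label is popped; return the hops taken."""
       hops = [node]
       while True:
           action, out_label, next_hop = FIB[node][label]
           hops.append(next_hop)
           if action == "pop":
               return hops
           label, node = out_label, next_hop

   print(forward("Node1", 16011))   # ['Node1', 'Node4', 'Node7', 'Node10', 'Node11']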
4.2.3.  Network Design Variation

A network design choice could consist of switching all the traffic
through Tier-1 and Tier-2 as MPLS traffic.  In this case, one could
filter away the IP entries at Node4, Node7, and Node10.  This might
be beneficial in order to optimize the forwarding table size.

A network design choice could consist of allowing the hosts to send
MPLS-encapsulated traffic based on the Egress Peer Engineering (EPE)
use case as defined in [SR-CENTRAL-EPE].  For example, applications
at HostA would send their Z-destined traffic to Node1 with an MPLS
label stack where the top label is 16011 and the next label is an EPE
peer segment ([SR-CENTRAL-EPE]) at Node11 directing the traffic to Z.
4.2.4.  Global BGP Prefix Segment through the Fabric

When the previous design is deployed, the operator enjoys global BGP
Prefix-SID and label allocation throughout the DC fabric.

A few examples follow:

*  Normal forwarding to Node11: A packet with top label 16011
   received by any node in the fabric will be forwarded along the
   ECMP-aware BGP best path towards Node11, and the label 16011 is
   penultimate popped at Node10 (or at Node9).

*  Traffic-engineered path to Node11: An application on a host behind
   Node1 might want to restrict its traffic to paths via the Spine
   node Node5.  The application achieves this by sending its packets
   with a label stack of {16005, 16011}, as sketched after this list.
   BGP Prefix-SID 16005 directs the packet up to Node5 along the path
   (Node1, Node3, Node5).  BGP Prefix-SID 16011 then directs the
   packet down to Node11 along the path (Node5, Node9, Node11).
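The sketch below is illustrative only; it assumes the homogeneous
SRGB and node indices used in this document and simply shows how the
two label stacks in the examples above are composed from global BGP
Prefix Segments.

   # Illustrative sketch: composing label stacks from global Prefix-SIDs.
   SRGB_BASE = 16000

   def prefix_sid(node_index: int) -> int:
       """Global label of the BGP Prefix Segment for Node<node_index>."""
       return SRGB_BASE + node_index

   def te_stack(*node_indices: int) -> list[int]:
       """Label stack (top first) visiting the given nodes in order."""
       return [prefix_sid(i) for i in node_indices]

   # ECMP-aware shortest path to Node11: a single segment.
   assert te_stack(11) == [16011]

   # Pin the flow through Spine node Node5, then on to Node11.
   assert te_stack(5, 11) == [16005, 16011]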
4.2.5.  Incremental Deployments

The design previously described can be deployed incrementally.  Let
us assume that Node7 does not support the BGP Prefix-SID, and let us
show how the fabric connectivity is preserved.

From a signaling viewpoint, nothing would change; even though Node7
does not support the BGP Prefix-SID, it does propagate the attribute
unmodified to its neighbors.

From a label-allocation viewpoint, the only difference is that Node7
would allocate a dynamic (random) label to the prefix 192.0.2.11/32
(e.g., 123456) instead of the "hinted" label as instructed by the BGP
Prefix-SID.  The neighbors of Node7 adapt automatically as they
always use the label in the BGP8277 NLRI as an outgoing label.

Node4 does understand the BGP Prefix-SID; therefore, it allocates the
indexed label in the SRGB (16011) for 192.0.2.11/32.

As a result, all the data-plane entries across the network would be
unchanged except the entries at Node7 and its neighbor Node4, as
shown in the tables below.

The key point is that the end-to-end Label Switched Path (LSP) is
preserved because the outgoing label is always derived from the
received label within the BGP8277 NLRI.  The index in the BGP Prefix-
SID is only used as a hint on how to allocate the local label (the
incoming label) but never for the outgoing label.
   +----------------------------------+----------------+------------+
   | Incoming Label or IP Destination | Outgoing Label |  Outgoing  |
   |                                  |                | Interface  |
   +----------------------------------+----------------+------------+
   | 123456                           | 16011          | 10         |
   +----------------------------------+----------------+------------+

                     Table 5: Node7 Forwarding Table

   +----------------------------------+----------------+------------+
   | Incoming Label or IP Destination | Outgoing Label |  Outgoing  |
   |                                  |                | Interface  |
   +----------------------------------+----------------+------------+
   | 16011                            | 123456         | 7          |
   +----------------------------------+----------------+------------+

                     Table 6: Node4 Forwarding Table
The BGP Prefix-SID can thus be deployed incrementally, i.e., one node
at a time.

When deployed together with a homogeneous SRGB (the same SRGB across
the fabric), the operator incrementally enjoys the global prefix
segment benefits as the deployment progresses through the fabric.
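The incremental-deployment property can be illustrated with the
following hypothetical sketch (node names, index 11, and the dynamic
label 123456 are the example values from this section): a non-SR
node picks a dynamic local label, yet the end-to-end LSP survives
because every node's outgoing label is the label received in the
BGP8277 NLRI, never the index.

   # Hypothetical sketch of label allocation with one legacy node (Node7).
   SRGB_BASE = 16000
   IMPLICIT_NULL = "Implicit NULL"

   def build_lsp(path, index, non_sr_nodes, dynamic_label=123456):
       """Return per-node (incoming, outgoing) labels along the update path."""
       lsp = {}
       received = IMPLICIT_NULL                 # advertised by the egress ToR
       for node in path:
           incoming = dynamic_label if node in non_sr_nodes else SRGB_BASE + index
           lsp[node] = (incoming, received)     # outgoing = received NLRI label
           received = incoming                  # label advertised upstream
       return lsp

   # Node7 does not support the BGP Prefix-SID; the LSP is still continuous.
   for node, (inc, out) in build_lsp(
           ["Node10", "Node7", "Node4", "Node1"], 11, {"Node7"}).items():
       print(node, "incoming:", inc, "outgoing:", out)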
4.3.  IBGP Labeled Unicast (RFC 8277)

The same exact design as EBGP8277 is used with the following
modifications:

*  All nodes use the same AS number.

*  Each node peers with its neighbors via an internal BGP session
   (IBGP) with extensions defined in [RFC8277] (named "IBGP8277"
   throughout this document).

*  Each node acts as a route reflector for each of its neighbors and
   with the next-hop-self option.  Next-hop-self is a well-known
   operational feature that consists of rewriting the next hop of a
   BGP update prior to sending it to the neighbor.  Usually, it's a
   common practice to apply next-hop-self behavior towards IBGP peers
   for EBGP-learned routes.  In the case outlined in this section, it
   is proposed to use the next-hop-self mechanism also for IBGP-
   learned routes.
                        Cluster-1
                      +-----------+
                      |  Tier-1   |
                      |  +-----+  |
                      |  |NODE |  |
                      |  |  5  |  |
          Cluster-2   |  +-----+  |   Cluster-3
         +---------+  |           |  +---------+
         | Tier-2  |  |           |  | Tier-2  |
         | +-----+ |  |  +-----+  |  | +-----+ |
         | |NODE | |  |  |NODE |  |  | |NODE | |
         | |  3  | |  |  |  6  |  |  | |  9  | |
         | +-----+ |  |  +-----+  |  | +-----+ |
         |         |  |           |  |         |
         | +-----+ |  |  +-----+  |  | +-----+ |
         | |NODE | |  |  |NODE |  |  | |NODE | |
         | |  4  | |  |  |  7  |  |  | | 10  | |
         | +-----+ |  |  +-----+  |  | +-----+ |
         +---------+  |           |  +---------+
                      |           |
                      |  +-----+  |
                      |  |NODE |  |
             Tier-3   |  |  8  |  |   Tier-3
   +-----+  +-----+   |  +-----+  |   +-----+  +-----+
   |NODE |  |NODE |   +-----------+   |NODE |  |NODE |
   |  1  |  |  2  |                   | 11  |  | 12  |
   +-----+  +-----+                   +-----+  +-----+

      Figure 3: IBGP Sessions with Reflection and Next-Hop-Self
*  For simple and efficient route propagation filtering and as
   illustrated in Figure 3:

   -  Node5, Node6, Node7, and Node8 use the same Cluster ID
      (Cluster-1).

   -  Node3 and Node4 use the same Cluster ID (Cluster-2).

   -  Node9 and Node10 use the same Cluster ID (Cluster-3).

*  The control-plane behavior is mostly the same as described in the
   previous section; the only difference is that the EBGP8277 path
   propagation is simply replaced by an IBGP8277 path reflection with
   next hop changed to self.

*  The data-plane tables are exactly the same.

5.  Applying Segment Routing in the DC with IPv6 Data Plane
The design described in [RFC7938] is reused with one single
modification.  It is highlighted using the example of the
reachability to Node11 via Spine node Node5.

Node5 originates 2001:DB8::5/128 with the attached BGP Prefix-SID for
IPv6 packets destined to segment 2001:DB8::5 ([RFC8402]).

Node11 originates 2001:DB8::11/128 with the attached BGP Prefix-SID
advertising the support of the Segment Routing Header (SRH) for IPv6
packets destined to segment 2001:DB8::11.

The control-plane and data-plane processing of all the other nodes in
the fabric is unchanged.  Specifically, the routes to 2001:DB8::5 and
2001:DB8::11 are installed in the FIB along the EBGP best path to
Node5 (Spine node) and Node11 (ToR node), respectively.

An application on HostA that needs to send traffic to HostZ via only
Node5 (Spine node) can do so by sending IPv6 packets with an SRH
[IPv6-SRH].  The destination address and active segment is set to
2001:DB8::5.  The next and last segment is set to 2001:DB8::11.

The application must only use IPv6 addresses that have been
advertised as capable of SRv6 segment processing (e.g., for which the
BGP Prefix Segment capability has been advertised).  How applications
learn this (e.g., centralized controller and orchestration) is
outside the scope of this document.
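A minimal sketch of the steering described above follows; it is not a
packet builder and only models how the SRH segment list and the IPv6
destination address evolve.  The addresses are the example segments
from this section, and the class and method names are illustrative
assumptions.

   # Illustrative model of an IPv6 header plus SRH for this example.
   from dataclasses import dataclass, field

   @dataclass
   class SRv6Packet:
       segments: list                      # SRH segment list, last segment first
       segments_left: int = field(init=False)
       dst: str = field(init=False)

       def __post_init__(self):
           self.segments_left = len(self.segments) - 1
           self.dst = self.segments[self.segments_left]   # active segment

       def advance(self):
           """Executed by the node owning the active segment (e.g., Node5)."""
           self.segments_left -= 1
           self.dst = self.segments[self.segments_left]

   # The SRH carries segments in reverse order: the last segment at index 0.
   pkt = SRv6Packet(segments=["2001:DB8::11", "2001:DB8::5"])
   print(pkt.dst)       # 2001:DB8::5  (active segment: the Spine node)
   pkt.advance()        # performed at Node5
   print(pkt.dst)       # 2001:DB8::11 (final segment: the ToR node)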
6.  Communicating Path Information to the Host

There are two general methods for communicating path information to
the end-hosts: "proactive" and "reactive", aka "push" and "pull"
models.  There are multiple ways to implement either of these
methods.  Here, it is noted that one way could be using a centralized
controller: the controller either tells the hosts of the prefix-to-
path mappings beforehand and updates them as needed (network event
driven push) or responds to the hosts making requests for a path to a
specific destination (host event driven pull).  It is also possible
to use a hybrid model, i.e., pushing some state from the controller
in response to particular network events, while the host pulls other
state on demand.
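The sketch below is a purely hypothetical illustration of the two
models named above; the class and method names, the prefix, and the
label stacks are assumptions made for the example and are not defined
by this document.

   # Hypothetical controller offering both push and pull dissemination.
   class PathController:
       def __init__(self, mappings, hosts):
           self.mappings = mappings        # prefix -> label stack (path)
           self.hosts = hosts              # hosts subscribed to push updates

       def push_on_event(self, prefix, new_stack):
           """Network-event-driven push: update every subscribed host."""
           self.mappings[prefix] = new_stack
           for host in self.hosts:
               host.install(prefix, new_stack)

       def pull(self, prefix):
           """Host-event-driven pull: answer a single host's request."""
           return self.mappings.get(prefix)

   class Host:
       def __init__(self):
           self.local_paths = {}

       def install(self, prefix, stack):
           self.local_paths[prefix] = stack

   host_a = Host()
   ctrl = PathController({"192.0.2.11/32": [16011]}, hosts=[host_a])
   ctrl.push_on_event("192.0.2.11/32", [16005, 16011])   # e.g., after a failure
   print(host_a.local_paths)                             # pushed state
   print(ctrl.pull("192.0.2.11/32"))                     # pulled on demand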
Note also that when disseminating network-related data to the end-
hosts, a trade-off is made to balance the amount of information vs.
the level of visibility in the network state.  This applies to both
push and pull models.  In the extreme case, the host would request
path information on every flow and keep no local state at all.  On
the other end of the spectrum, information for every prefix in the
network along with available paths could be pushed and continuously
updated on all hosts.
7.  Additional Benefits

7.1.  MPLS Data Plane with Operational Simplicity

As required by [RFC7938], no new signaling protocol is introduced.
The BGP Prefix-SID is a lightweight extension to BGP Labeled Unicast
[RFC8277].  It applies to either EBGP- or IBGP-based designs.

Specifically, LDP and RSVP-TE are not used.  These protocols would
drastically impact the operational complexity of the data center and
would not scale.  This is in line with the requirements expressed in
[RFC7938].

Provided the same SRGB is configured on all nodes, all nodes use the
same MPLS label for a given IP prefix.  This is simpler from an
operational standpoint, as discussed in Section 8.
7.2.  Minimizing the FIB Table

The designer may decide to switch all the traffic at Tier-1 and
Tier-2 based on MPLS, thereby drastically decreasing the IP table
size at these nodes.

This is easily accomplished by encapsulating the traffic either
directly at the host or at the source ToR node.  The encapsulation is
done by pushing the BGP Prefix-SID of the destination ToR for intra-
DC traffic, or by pushing the BGP Prefix-SID of the border node for
inter-DC or DC-to-outside-world traffic.
7.3.  Egress Peer Engineering

It is straightforward to combine the design illustrated in this
document with the Egress Peer Engineering (EPE) use case described in
[SR-CENTRAL-EPE].

In such a case, the operator is able to engineer its outbound traffic
on a per-host-flow basis, without incurring any additional state at
intermediate points in the DC fabric.

For example, the controller only needs to inject a per-flow state on
HostA to force it to send its traffic destined to a specific Internet
destination D via a selected border node (say Node12 in Figure 1
instead of another border node, Node11) and a specific egress peer of
Node12 (say peer AS 9999 of local PeerNode segment 9999 at Node12
instead of any other peer that provides a path to the destination D).
Any packet matching this state at HostA would be encapsulated with
the SR segment list (label stack) {16012, 9999}.  16012 would steer
the flow through the DC fabric, leveraging any ECMP, along the best
path to border node Node12.  Once the flow gets to border node
Node12, the active segment is 9999 (because of Penultimate Hop
Popping (PHP) on the upstream neighbor of Node12).  This EPE PeerNode
segment forces border node Node12 to forward the packet to peer AS
9999 without any IP lookup at the border node.  There is no per-flow
state for this engineered flow in the DC fabric.  A benefit of SR is
that the per-flow state is only required at the source.
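As an illustration only (the labels, segment values, and data
structures are assumptions taken from the example above), the
per-flow steering state a controller might install at HostA can be
sketched as follows.  Only the source holds this state; the fabric
remains stateless.

   # Illustrative sketch of the host-side EPE steering policy.
   SRGB_BASE = 16000

   def epe_stack(border_node_index: int, peer_node_segment: int) -> list[int]:
       """Label stack: BGP Prefix-SID of the border node, then the EPE segment."""
       return [SRGB_BASE + border_node_index, peer_node_segment]

   # Per-flow policy at HostA: destination D -> {16012, 9999}.
   host_a_policy = {"D": epe_stack(12, 9999)}
   assert host_a_policy["D"] == [16012, 9999]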
As well as allowing full traffic-engineering control, such a design
also offers FIB table-minimization benefits, as the Internet-scale
FIB at border node Node12 is not required if all FIB lookups are
avoided there by using EPE.
7.4.  Anycast

The design presented in this document preserves the availability and
load-balancing properties of the base design presented in [RFC8402].

For example, one could assign an anycast loopback 192.0.2.20/32 and
associate segment index 20 with it on the border nodes Node11 and
Node12 (in addition to their node-specific loopbacks).  Doing so, the
EPE controller could express a default "go-to-the-Internet via any
border node" policy as segment list {16020}.  Indeed, from any host
in the DC fabric or from any ToR node, 16020 steers the packet
towards border node Node11 or Node12, leveraging ECMP where available
along the best paths to these nodes.
8. Preferred SRGB Allocation 8. Preferred SRGB Allocation
In the MPLS case, it is recommend to use same SRGBs at each node. In the MPLS case, it is recommended to use the same SRGBs at each
node.
Different SRGBs in each node likely increase the complexity of the Different SRGBs in each node likely increase the complexity of the
solution both from an operational viewpoint and from a controller solution both from an operational viewpoint and from a controller
viewpoint. viewpoint.
From an operation viewpoint, it is much simpler to have the same From an operational viewpoint, it is much simpler to have the same
global label at every node for the same destination (the MPLS global label at every node for the same destination (the MPLS
troubleshooting is then similar to the IPv6 troubleshooting where troubleshooting is then similar to the IPv6 troubleshooting where
this global property is a given). this global property is a given).
From a controller viewpoint, this allows us to construct simple
policies applicable across the fabric.

Let us consider two applications, A and B, connected to Node1 and
Node2 (ToR nodes), respectively. Application A has two flows, FA1 and
FA2, destined to Z. Application B has two flows, FB1 and FB2,
destined to Z. The controller wants FA1 and FB1 to be load shared
across the fabric, while FA2 and FB2 must be steered via Node5 and
Node8, respectively.

Assuming a consistent, unique SRGB across the fabric, as described in
this document, the controller can do so simply by instructing A and B
to use {16011} for FA1 and FB1, respectively, and by instructing A
and B to use {16005 16011} and {16008 16011} for FA2 and FB2,
respectively.

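The sketch below (illustrative, not normative) shows how simple the
controller logic becomes with one fabric-wide SRGB: an index maps to
the same label everywhere, so each policy is computed once and is
valid for any host.

   # Illustrative sketch: with a single fabric-wide SRGB, one function
   # maps Prefix-SID indexes to labels, and the resulting policies are
   # identical regardless of which ToR the application sits behind.
   SRGB_BASE = 16000

   def label(index):
       return SRGB_BASE + index

   # Load-shared flows FA1 and FB1: one global segment to Node11.
   policy_fa1 = policy_fb1 = [label(11)]          # [16011]

   # Steered flows: FA2 via Node5 and FB2 via Node8, then to Node11.
   policy_fa2 = [label(5), label(11)]             # [16005, 16011]
   policy_fb2 = [label(8), label(11)]             # [16008, 16011]
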
Let us assume a design where the SRGB is different at every node and
where the SRGB of each node is advertised using the Originator SRGB
TLV of the BGP Prefix-SID as defined in [RFC8669]: the SRGB of Node K
starts at value K*1000, and the SRGB length is 1000 (e.g., Node1's
SRGB is [1000, 1999], Node2's SRGB is [2000, 2999], ...).

In this case, the controller would need to collect and store all of
these different SRGBs (e.g., through the Originator SRGB TLV of the
BGP Prefix-SID); furthermore, it would also need to adapt the policy
for each host. Indeed, the controller would instruct A to use {1011}
for FA1, while it would have to instruct B to use {2011} for FB1
(whereas, with the same SRGB, both policies would be the same:
{16011}).

Even worse, the controller would instruct A to use {1005, 5011} for
FA2, while it would instruct B to use {2008, 8011} for FB2 (whereas,
with the same SRGB, the second segment is the same across both
policies: 16011). When combining segments to create a policy, one
needs to carefully update the label of each segment. This is
obviously more error prone, more complex, and more difficult to
troubleshoot.

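For contrast, the rough sketch below models what the controller would
have to do with per-node SRGBs as assumed above (Node K's SRGB
starting at K*1000): every label must be resolved against the SRGB of
the node that performs the corresponding lookup, so the same logical
policy yields different label values for A and B.

   # Illustrative sketch: per-node SRGBs force the controller to
   # translate each segment against the SRGB of the node doing the
   # lookup (simplified model matching the example in the text, where
   # a node's name equals its Prefix-SID index).
   def srgb_base(node):
       return node * 1000        # Node K's SRGB starts at K*1000

   def label_stack(indexes, attach_node):
       # The attached ToR resolves the first label; each subsequent
       # label is resolved by the node reached by the previous
       # segment.
       stack, lookup_node = [], attach_node
       for index in indexes:
           stack.append(srgb_base(lookup_node) + index)
           lookup_node = index
       return stack

   print(label_stack([11], attach_node=1))     # FA1 from A: [1011]
   print(label_stack([11], attach_node=2))     # FB1 from B: [2011]
   print(label_stack([5, 11], attach_node=1))  # FA2 from A: [1005, 5011]
   print(label_stack([8, 11], attach_node=2))  # FB2 from B: [2008, 8011]
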
9. IANA Considerations

This document has no IANA actions.

10. Manageability Considerations

The design and deployment guidelines described in this document are
based on the network design described in [RFC7938].

The deployment model assumed in this document is based on a single
domain where the interconnected DCs are part of the same
administrative domain (which, of course, is split into different
autonomous systems). The operator has full control of the whole
domain, and the usual operational and management mechanisms and
procedures are used in order to prevent any information related to
internal prefixes and topology from being leaked outside the domain.

As recommended in [RFC8402], the same SRGB should be allocated in all
nodes in order to facilitate the design, deployment, and operations
of the domain.

When EPE ([SR-CENTRAL-EPE]) is used (as explained in Section 7.3),
the same operational model is assumed. EPE information is originated
and propagated throughout the domain towards an internal server, and
unless explicitly configured by the operator, no EPE information is
leaked outside the domain boundaries.

11. Security Considerations

This document proposes to apply SR to a well-known scalability
requirement expressed in [RFC7938] using the BGP Prefix-SID as
defined in [RFC8669].

It has to be noted, as described in Section 10, that the design
illustrated in [RFC7938] and in this document refers to a deployment
model where all nodes are under the same administration. In this
context, it is assumed that the operator does not want to leak
outside of the domain any information related to internal prefixes
and topology. The internal information includes Prefix-SID and EPE
information. In order to prevent such leaking, the standard BGP
mechanisms (filters) are applied on the boundary of the domain.

Therefore, the solution proposed in this document does not introduce
any additional security concerns beyond those expressed in [RFC7938]
and [RFC8669]. It is assumed that the security and confidentiality of
the prefix and topology information are preserved by outbound filters
at each peering point of the domain, as described in Section 10.

12. References

12.1. Normative References
[RFC4271] Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A
Border Gateway Protocol 4 (BGP-4)", RFC 4271,
DOI 10.17487/RFC4271, January 2006,
<https://www.rfc-editor.org/info/rfc4271>.
[RFC7938] Lapukhov, P., Premji, A., and J. Mitchell, Ed., "Use of
BGP for Routing in Large-Scale Data Centers", RFC 7938,
DOI 10.17487/RFC7938, August 2016,
<https://www.rfc-editor.org/info/rfc7938>.
[RFC8277] Rosen, E., "Using BGP to Bind MPLS Labels to Address
Prefixes", RFC 8277, DOI 10.17487/RFC8277, October 2017,
<https://www.rfc-editor.org/info/rfc8277>.
[RFC8402] Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L.,
Decraene, B., Litkowski, S., and R. Shakir, "Segment
Routing Architecture", RFC 8402, DOI 10.17487/RFC8402,
July 2018, <https://www.rfc-editor.org/info/rfc8402>.
[RFC8669] Previdi, S., Filsfils, C., Lindem, A., Ed., Sreekantiah,
A., and H. Gredler, "Segment Routing Prefix Segment
Identifier Extensions for BGP", RFC 8669,
DOI 10.17487/RFC8669, December 2019,
<https://www.rfc-editor.org/info/rfc8669>.
12.2. Informative References
[IPv6-SRH] Filsfils, C., Dukes, D., Previdi, S., Leddy, J.,
Matsushima, S., and D. Voyer, "IPv6 Segment Routing Header
(SRH)", Work in Progress, Internet-Draft, draft-ietf-6man-
segment-routing-header-26, 22 October 2019,
<https://tools.ietf.org/html/draft-ietf-6man-segment-
routing-header-26>.
[RFC6793] Vohra, Q. and E. Chen, "BGP Support for Four-Octet
Autonomous System (AS) Number Space", RFC 6793,
DOI 10.17487/RFC6793, December 2012,
<https://www.rfc-editor.org/info/rfc6793>.
[SR-CENTRAL-EPE]
Filsfils, C., Previdi, S., Dawra, G., Aries, E., and D.
Afanasiev, "Segment Routing Centralized BGP Egress Peer
Engineering", Work in Progress, Internet-Draft, draft-
ietf-spring-segment-routing-central-epe-10, 21 December
2017, <https://tools.ietf.org/html/draft-ietf-spring-
segment-routing-central-epe-10>.
Acknowledgements
The authors would like to thank Benjamin Black, Arjun Sreekantiah,
Keyur Patel, Acee Lindem, and Anoop Ghanwani for their comments and
review of this document.

Contributors

Gaya Nagarajan
Facebook
United States of America

Email: gaya@fb.com

Gaurav Dawra
Cisco Systems
United States of America

Email: gdawra.ietf@gmail.com

Dmitry Afanasiev
Yandex
Russian Federation

Email: fl0w@yandex-team.ru

Tim Laberge
Cisco
United States of America

Email: tlaberge@cisco.com

Edet Nkposong
Salesforce.com Inc.
United States of America

Email: enkposong@salesforce.com

Mohan Nanduri
Microsoft
United States of America

Email: mohan.nanduri@oracle.com

James Uttaro
ATT
United States of America

Email: ju1738@att.com

Saikat Ray
Unaffiliated
United States of America

Email: raysaikat@gmail.com

Jon Mitchell
Unaffiliated
United States of America

Email: jrmitche@puck.nether.net

Authors' Addresses

Clarence Filsfils (editor)
Cisco Systems, Inc.
Brussels
Belgium

Email: cfilsfil@cisco.com

Stefano Previdi
Cisco Systems, Inc.
Italy

Email: stefano@previdi.net

Gaurav Dawra
LinkedIn
United States of America

Email: gdawra.ietf@gmail.com

Ebben Aries
Arrcus, Inc.
2077 Gateway Place, Suite #400
San Jose, CA 95119
United States of America

Email: exa@arrcus.com

Petr Lapukhov
Facebook
United States of America

Email: petr@fb.com