Network Working Group                                        R. Papneja
Internet Draft                                                   Isocore
Intended Status: Informational                              S. Vapiwala
Expires: April 2, 2009                                        J. Karthik
                                                           Cisco Systems
                                                             S. Poretsky
                                                                   Allot
                                                                  S. Rao
                                                    Qwest Communications
                                                     Jean-Louis Le Roux
                                                          France Telecom
                                                        November 3, 2008

        Methodology for Benchmarking MPLS Protection Mechanisms
                 draft-ietf-bmwg-protection-meth-04.txt
Status of this Memo

By submitting this Internet-Draft, each author represents that
any applicable patent or other IPR claims of which he or she is
aware have been or will be disclosed, and any of which he or she
becomes aware will be disclosed, in accordance with Section 6 of
BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.

Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as
reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

This Internet-Draft will expire on April 3, 2009.

Copyright Notice

Copyright (C) The IETF Trust (2008).
Abstract

This draft describes the methodology for benchmarking MPLS
Protection mechanisms for link and node protection as defined in
[MPLS-FRR-EXT]. This document provides test methodologies and test
bed setup for measuring failover times while considering all
dependencies that might impact faster recovery of real-time services
bound to MPLS based traffic engineered tunnels.

The terms used in the procedures included in this document are
defined in [TERM-ID].
Table of Contents

1. Introduction
2. Document Scope
3. General reference sample topology
4. Existing definitions
5. Test Considerations
   5.1. Failover Events
   5.2. Failure Detection [TERM-ID]
   5.3. Use of Data Traffic for MPLS Protection benchmarking
   5.4. LSP and Route Scaling
   5.5. Selection of IGP
   5.6. Reversion [TERM-ID]
   5.7. Traffic Generation
   5.8. Motivation for Topologies
6. Reference Test Setup
   6.1. Link Protection with 1 hop primary (from PLR) and 1 hop backup TE tunnels
   6.2. Link Protection with 1 hop primary (from PLR) and 2 hop backup TE tunnels
   6.3. Link Protection with 2+ hop (from PLR) primary and 1 hop backup TE tunnels
   6.4. Link Protection with 2+ hop (from PLR) primary and 2 hop backup TE tunnels
   6.5. Node Protection with 2 hop primary (from PLR) and 1 hop backup TE tunnels
   6.6. Node Protection with 2 hop primary (from PLR) and 2 hop backup TE tunnels
   6.7. Node Protection with 3+ hop primary (from PLR) and 1 hop backup TE tunnels
   6.8. Node Protection with 3+ hop primary (from PLR) and 2 hop backup TE tunnels
7. Test Methodology
   7.1. Headend as PLR with link failure
   7.2. Mid-Point as PLR with link failure
   7.3. Headend as PLR with Node Failure
   7.4. Mid-Point as PLR with Node failure
   7.5. MPLS FRR Forwarding Performance Test cases
      7.5.1. PLR as Headend
      7.5.2. PLR as Mid-point
8. Reporting Format
   Benchmarks
9. Security Considerations
10. IANA Considerations
11. References
   11.1. Normative References
   11.2. Informative References
Authors' Addresses
Intellectual Property Statement
Disclaimer of Validity
Copyright Statement
12. Acknowledgments
Appendix A: Fast Reroute Scalability Table
Appendix B: Abbreviations
1. Introduction

This draft describes the methodology for benchmarking MPLS based
protection mechanisms. The new terminology that this document
introduces is defined in [TERM-ID].

MPLS based protection mechanisms provide fast recovery of real-time
services from planned or unplanned link or node failures. MPLS
protection mechanisms are generally deployed in a network
infrastructure where MPLS is used for provisioning of point-to-point
traffic engineered tunnels (tunnels). MPLS based protection
mechanisms promise to reduce the service disruption period by
minimizing the recovery time from the most common failures.

Generally there are two factors impacting service availability: one
is the frequency of failures, and the other is the duration for
which the failures last. Failures can be classified further into
two types: correlated and uncorrelated failures. A correlated
failure is the co-occurrence of two or more failures
simultaneously. A typical example is the failure of two or more
logical resources (e.g. layer-2 links) that rely on a common
physical resource (e.g. a common interface). Within the context of
MPLS protection mechanisms, failures that arise due to Shared Risk
Link Groups (SRLG) [MPLS-FRR-EXT] can be considered correlated
failures. Not all correlated failures are predictable in advance,
especially the ones caused by natural disasters.

Planned failures, on the other hand, are predictable.
Implementations should handle both types of failures and recover
gracefully within a time frame acceptable for service assurance.
Hence, failover recovery time is one of the most important
benchmarks that a service provider considers in choosing the
building blocks for its network infrastructure.

It is a known fact that network elements from different
manufacturers behave differently under network failures, which
impacts their ability to recover from the failures. It therefore
becomes imperative for network service providers to have a common
benchmark that can be followed to understand the performance
behavior of network elements.

Considering failover recovery an important parameter, the test
methodology presented in this document considers the factors that
may impact the failover times. To benchmark the failover times,
data plane traffic is used as defined in [IGP-METH].

All benchmarking test cases defined in this document apply to both
facility backup and local protection enabled in detour mode. The
test cases cover all possible failure scenarios and the associated
procedures benchmark the ability of the DUT to perform recovery from
failures within the target failover time.
2. Document Scope
This document provides detailed test cases along with different
topologies and scenarios that should be considered to effectively
benchmark MPLS protection mechanisms and failover times. Different
failure scenarios and scaling considerations are also provided in
this document, in addition to reporting formats for the observed
results.
Benchmarking of unexpected correlated failures is currently out of
scope of this document.
3. General reference sample topology

Figure 1 illustrates the basic reference testbed that is applicable
to all the test cases defined in this document. TG and TA represent
the Traffic Generator and Traffic Analyzer respectively. A tester is
connected to the DUT; it sends and receives IP traffic along the
working path and runs protocol emulations simulating real-world
peering scenarios.
        -------------------------
       |            -------------|---------------
       |           |             |               |
       |           |             |               |
    --------    --------     --------    --------    --------
TG--| R1   |----| R2   |-----| R3   |    | R4   |    | R5   |--TA
    |      |----|      |-----|      |----|      |----|      |
    --------    --------     --------    --------    --------
       |           |             |           |
       |           |             |           |
       |        --------         |           |
        --------| R6   |---------            |
                |      |----------------------
                --------

               Fig.1: Fast Reroute Topology.
The tester MUST record the number of lost, duplicate, and reordered
packets. It should further record arrival and departure times so
that Failover Time, Additive Latency, and Reversion Time can be
measured. The tester may be a single device or a test system
emulating all the different roles along a primary or backup path.

4. Existing definitions
For the sake of clarity and continuity this RFC adopts the template
for definitions set out in Section 2 of RFC 1242. Definitions are
indexed and grouped together in sections for ease of reference. The
terms used in this document are defined in detail in [TERM-ID].

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
this document are to be interpreted as described in RFC 2119.

The reader is assumed to be familiar with the commonly used MPLS
terminology, some of which is defined in [MPLS-FRR-EXT].
5. Test Considerations

This section discusses the fundamentals of MPLS Protection testing:

- The types of network events that cause failover
- Indications for failover
- The use of data traffic
- Traffic generation
- LSP scaling
- Reversion of LSP
- IGP selection
5.1. Failover Events [TERM-ID]

The failover to the backup tunnel is primarily triggered by either
link or node failures observed downstream of the Point of Local
Repair (PLR). Some of these failure events are listed below.

Link failure events
- Interface Shutdown on PLR side with POS Alarm
- Interface Shutdown on remote side with POS Alarm
- Interface Shutdown on PLR side with RSVP hello enabled
- Interface Shutdown on remote side with RSVP hello enabled
- Interface Shutdown on PLR side with BFD
- Interface Shutdown on remote side with BFD
- Fiber Pull on the PLR side (both TX & RX or just the TX)
- Fiber Pull on the remote side (both TX & RX or just the RX)
- Online insertion and removal (OIR) on PLR side
- OIR on remote side
- Sub-interface failure (e.g. shutting down of a VLAN)
- Parent interface shutdown (an interface bearing multiple
  sub-interfaces)

Node failure events

A system reload is initiated either by a graceful shutdown or by a
power failure. A system crash is referred to as a software failure
or an assert.

- Reload protected Node, when RSVP hello is enabled
- Crash protected Node, when RSVP hello is enabled
- Reload protected Node, when BFD is enabled
- Crash protected Node, when BFD is enabled
5.2. Failure Detection [TERM-ID]

Link failure detection time depends on the link type and the failure
detection protocols running. For SONET/SDH, the alarm type (such as
LOS, AIS, or RDI) can be used. Other link types have layer-2
alarms, but they may not provide a short enough failure detection
time. Ethernet based links do not have layer-2 failure indicators
and therefore rely on layer-3 signaling for failure detection.

MPLS has different failure detection techniques, such as BFD or the
use of RSVP hellos. These methods can provide the layer-3 failure
indicators required by Ethernet based links, or can be used on other
non-Ethernet based links to help improve failure detection time.

The test procedures in this document can be used for local or remote
failure scenarios for comprehensive benchmarking and to evaluate
failover performance independent of the failure detection
techniques.
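Because the measured failover time includes the failure detection
time, it can be useful to estimate the detection component
separately when interpreting results. The following non-normative
Python sketch estimates the BFD detection time from the negotiated
transmit interval and detect multiplier (per the BFD base
specification) and subtracts it from a measured failover time to
approximate the switchover component; the function names and example
values are illustrative assumptions, not part of this methodology.

    # Non-normative sketch: estimating the failure-detection component
    # of a measured failover time when BFD is the detection mechanism.
    def bfd_detection_time_ms(negotiated_tx_interval_ms: float,
                              detect_multiplier: int) -> float:
        """BFD declares the session down after 'detect_multiplier'
        consecutive missed control packets, so the worst-case detection
        time is roughly interval * multiplier (asynchronous mode)."""
        return negotiated_tx_interval_ms * detect_multiplier

    def switchover_component_ms(measured_failover_ms: float,
                                detection_ms: float) -> float:
        """Approximate FRR switchover time once the failure is detected."""
        return max(measured_failover_ms - detection_ms, 0.0)

    # Example (hypothetical values): 50 ms BFD interval, multiplier 3.
    detection = bfd_detection_time_ms(50.0, 3)        # 150 ms detection
    print(switchover_component_ms(230.0, detection))  # ~80 ms switchover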
5.3. Use of Data Traffic for MPLS Protection benchmarking

Currently end customers use packet loss as a key metric for failover
time. Packet loss is an externally observable event and has a direct
impact on customers' applications. MPLS protection mechanisms are
expected to minimize packet loss in the event of a failure. For
this reason it is important to develop a standard router
benchmarking methodology for measuring MPLS protection that uses
packet loss as a metric. At a known rate of forwarding, packet loss
can be measured and the failover time can be determined.
Measurement of control plane signaling to establish backup paths is
not enough to verify failover. Failover is best determined when
packets are actually traversing the backup path.

An additional benefit of using packet loss for calculation of
failover time is that it allows use of a black-box test environment.
Data traffic is offered at line-rate to the device under test (DUT),
an emulated network failure event is forced to occur, and packet
loss is externally measured to calculate the convergence time. This
setup is independent of the DUT architecture.

In addition, this methodology considers the packets in error and
duplicate packets that could have been generated during the failover
process. In scenarios where separate measurement of packets in
error and duplicate packets is difficult to obtain, these packets
should be attributed to lost packets.
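As an illustration of the loss-derived calculation described above,
the following non-normative Python sketch converts a measured
packet-loss count into a failover time at a known, constant offered
rate; packets in error and duplicate packets are folded into the
loss count as suggested. The function and variable names are
illustrative assumptions, not part of the methodology.

    # Non-normative sketch: deriving failover time from packet loss at
    # a known, constant offered rate (packet-loss-duration method).
    def failover_time_seconds(offered_rate_pps: float,
                              lost_packets: int,
                              errored_packets: int = 0,
                              duplicate_packets: int = 0) -> float:
        """Failover time = packets not delivered correctly divided by
        the constant offered rate. Errored and duplicate packets are
        attributed to loss when they cannot be measured separately."""
        if offered_rate_pps <= 0:
            raise ValueError("offered rate must be positive")
        total_loss = lost_packets + errored_packets + duplicate_packets
        return total_loss / offered_rate_pps

    # Example (hypothetical numbers): 100,000 pps offered and 4,200
    # packets lost during the failure event -> 42 ms failover time.
    print(failover_time_seconds(100000, 4200))  # 0.042 seconds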
5.4. LSP and Route Scaling

Failover time performance may vary with the number of established
primary and backup tunnel label switched paths (LSPs) and installed
routes. However, the procedure outlined here should be used for any
number of LSPs (L) and any number of routes protected by the PLR
(R). The values of L and R must be recorded.

5.5. Selection of IGP

The underlying IGP could be ISIS-TE or OSPF-TE for the methodology
proposed here.
5.6. Reversion [TERM-ID]

Fast Reroute provides a method to return or restore traffic from the
backup path to the original primary LSP upon recovery from the
failure. This is referred to as Reversion, which can be implemented
as Global Reversion or Local Reversion. In all test cases listed
here, Reversion should not produce any packet loss, out-of-order
packets, or duplicate packets. Each of the test cases in this
methodology document provides a check to confirm that there is no
packet loss.
5.7. Traffic Generation

It is suggested that there be one or more traffic streams as long as
there is a steady and constant rate of flow for all the streams. In
order to monitor the DUT performance for recovery times, a set of
route prefixes should be advertised before traffic is sent. The
traffic should be configured towards these routes.

A typical example would be configuring the traffic generator to send
traffic to the first, middle and last of the advertised routes
(first, middle and last being the numerically smallest, median and
largest of the advertised prefixes respectively). Generating traffic
to all of the prefixes reachable by the protected tunnel (for
example, in a Round-Robin fashion, where the traffic is destined to
all the prefixes but to one prefix at a time in a cyclic manner) is
not recommended. If there are many prefixes reachable through the
LSP, the time interval between two packets destined to one prefix
may be significantly high and may be comparable with the failover
time being measured, which does not aid in getting an accurate
failover measurement.
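A small, non-normative sketch of the stream-selection rule described
above follows; it picks the numerically smallest, median and largest
prefixes from the advertised set as the destinations of the
constant-rate streams. The helper name and the use of Python's
ipaddress module are illustrative assumptions.

    # Non-normative sketch: choosing the first, middle and last
    # advertised prefixes (numerically smallest, median, largest) as
    # the traffic destinations.
    import ipaddress

    def select_stream_destinations(advertised_prefixes):
        """Return the (first, middle, last) prefixes by numeric order."""
        nets = sorted(ipaddress.ip_network(p) for p in advertised_prefixes)
        if not nets:
            raise ValueError("no prefixes advertised")
        return nets[0], nets[len(nets) // 2], nets[-1]

    # Example with hypothetical advertised routes.
    first, middle, last = select_stream_destinations(
        ["100.1.%d.0/24" % i for i in range(0, 200)])
    print(first, middle, last)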
5.8. Motivation for Topologies

Given that the label stack depends on the following 3 entities, it
is recommended that the benchmarking of failover time be performed
on all the 8 topologies provided in section 6:

- Type of protection (Link vs. Node)
- # of remaining hops of the primary tunnel from the PLR
- # of remaining hops of the backup tunnel from the PLR
6. Reference Test Setup

In addition to the general reference topology shown in figure 1,
this section provides detailed insight into the various proposed
test setups that should be considered for comprehensively
benchmarking the failover time in different roles along the primary
tunnel.

This section proposes a set of topologies that covers all the
scenarios for local protection. All of these 8 topologies (figure 2
to figure 9) can be mapped to the reference topology shown in
figure 1. The topologies provided in sections 6.1 to 6.8 refer to
the test-bed required to benchmark failover time when the DUT is
configured as a PLR in either a headend or midpoint role. The label
stack provided with each topology is at the PLR.

The label stacks shown below each figure in sections 6.1 to 6.8
consider enabling of Penultimate Hop Popping (PHP).

Figures 2-9 use the following convention:

a) HE is Headend
b) TE is Tail-End
c) MID is Mid point
d) MP is Merge Point
e) PLR is Point of Local Repair
f) PRI is Primary
g) BKP denotes Backup Node
6.1. Link Protection with 1 hop primary (from PLR) and 1 hop backup TE
tunnels

    -------    --------  PRI   --------
    | R1  |    | R2   |        | R3   |
 TG-| HE  |----| MID  |--------| TE   |-TA
    |     |    | PLR  |--------|      |
    -------    --------  BKP   --------

    Figure 2: Represents the setup for section 6.1

    Traffic              No of Labels      No of labels
                         before failure    after failure
    IP TRAFFIC (P-P)           0                 0
    Layer3 VPN (PE-PE)         1                 1
    Layer3 VPN (PE-P)          2                 2
    Layer2 VC (PE-PE)          1                 1
    Layer2 VC (PE-P)           2                 2
    Mid-point LSPs             0                 0
6.2. Link Protection with 1 hop primary (from PLR) and 2 hop backup TE
tunnels

    -------    --------        --------
    | R1  |    | R2   | PRI    | R3   |
 TG-| HE  |    | MID  |--------| TE   |-TA
    |     |----| PLR  |        |      |
    -------    --------        --------
                  |BKP            |
                  |   --------    |
                  |   | R6   |    |
                  ----| BKP  |-----
                      | MID  |
                      --------

    Figure 3: Representing setup for section 6.2

    Traffic              No of Labels      No of labels
                         before failure    after failure
    IP TRAFFIC (P-P)           0                 1
    Layer3 VPN (PE-PE)         1                 2
    Layer3 VPN (PE-P)          2                 3
    Layer2 VC (PE-PE)          1                 2
    Layer2 VC (PE-P)           2                 3
    Mid-point LSPs             0                 1
6.3. Link Protection with 2+ hop (from PLR) primary and 1 hop backup TE
tunnels

    --------    --------  PRI  --------  PRI  --------
    | R1   |    | R2   |       | R3   |       | R4   |
 TG-| HE   |----| MID  |-------| MID  |-------| TE   |-TA
    |      |    | PLR  |-------|      |       |      |
    --------    --------  BKP  --------       --------

    Figure 4: Representing setup for section 6.3

    Traffic              No of Labels      No of labels
                         before failure    after failure
    IP TRAFFIC (P-P)           1                 1
    Layer3 VPN (PE-PE)         2                 2
    Layer3 VPN (PE-P)          3                 3
    Layer2 VC (PE-PE)          2                 2
    Layer2 VC (PE-P)           3                 3
    Mid-point LSPs             1                 1
6.4. Link Protection with 2+ hop (from PLR) primary and 2 hop backup TE
tunnels

    --------    --------  PRI  --------  PRI  --------
    | R1   |    | R2   |       | R3   |       | R4   |
 TG-| HE   |----| MID  |-------| MID  |-------| TE   |-TA
    |      |    | PLR  |       |      |       |      |
    --------    --------       --------       --------
                BKP|              |
                   |   --------   |
                   |   | R6   |   |
                   ----| BKP  |----
                       | MID  |
                       --------

    Figure 5: Representing the setup for section 6.4

    Traffic              No of Labels      No of labels
                         before failure    after failure
    IP TRAFFIC (P-P)           1                 2
    Layer3 VPN (PE-PE)         2                 3
    Layer3 VPN (PE-P)          3                 4
    Layer2 VC (PE-PE)          2                 3
    Layer2 VC (PE-P)           3                 4
    Mid-point LSPs             1                 2
6.5. Node Protection with 2 hop primary (from PLR) and 1 hop backup TE
tunnels

    --------    --------  PRI  --------  PRI  --------
    | R1   |    | R2   |       | R3   |       | R4   |
 TG-| HE   |----| MID  |-------| MID  |-------| TE   |-TA
    |      |    | PLR  |       |      |       |      |
    --------    --------       --------       --------
                   |BKP                          |
                   -------------------------------

    Figure 6: Representing the setup for section 6.5

    Traffic              No of Labels      No of labels
                         before failure    after failure
    IP TRAFFIC (P-P)           1                 0
    Layer3 VPN (PE-PE)         2                 1
    Layer3 VPN (PE-P)          3                 2
    Layer2 VC (PE-PE)          2                 1
    Layer2 VC (PE-P)           3                 2
    Mid-point LSPs             1                 0
6.6. Node Protection with 2 hop primary (from PLR) and 2 hop backup TE
tunnels

    --------    --------  PRI  --------  PRI  --------
    | R1   |    | R2   |       | R3   |       | R4   |
 TG-| HE   |----| MID  |-------| MID  |-------| TE   |-TA
    |      |    | PLR  |       |      |       |      |
    --------    --------       --------       --------
                   |                             |
                BKP|        --------             |
                   |        | R6   |             |
                   ---------| BKP  |--------------
                            | MID  |
                            --------

    Figure 7: Representing setup for section 6.6

    Traffic              No of Labels      No of labels
                         before failure    after failure
    IP TRAFFIC (P-P)           1                 1
    Layer3 VPN (PE-PE)         2                 2
    Layer3 VPN (PE-P)          3                 3
    Layer2 VC (PE-PE)          2                 2
    Layer2 VC (PE-P)           3                 3
    Mid-point LSPs             1                 1
6.7. Node Protection with 3+ hop primary (from PLR) and 1 hop backup TE
tunnels

    --------   --------  PRI --------  PRI --------  PRI --------
    | R1   |   | R2   |     | R3   |      | R4   |      | R5   |
 TG-| HE   |---| MID  |-----| MID  |------| MP   |------| TE   |-TA
    |      |   | PLR  |     |      |      |      |      |      |
    --------   --------     --------      --------      --------
                BKP|                         |
                   ---------------------------

    Figure 8: Representing setup for section 6.7

    Traffic              No of Labels      No of labels
                         before failure    after failure
    IP TRAFFIC (P-P)           1                 1
    Layer3 VPN (PE-PE)         2                 2
    Layer3 VPN (PE-P)          3                 3
    Layer2 VC (PE-PE)          2                 2
    Layer2 VC (PE-P)           3                 3
    Mid-point LSPs             1                 1
6.8. Node Protection with 3+ hop primary (from PLR) and 2 hop backup TE
tunnels

    --------   --------  PRI --------  PRI --------  PRI --------
    | R1   |   | R2   |     | R3   |      | R4   |      | R5   |
 TG-| HE   |---| MID  |-----| MID  |------| MP   |------| TE   |-TA
    |      |   | PLR  |     |      |      |      |      |      |
    --------   --------     --------      --------      --------
                BKP|                         |
                   |        --------         |
                   |        | R6   |         |
                   ---------| BKP  |----------
                            | MID  |
                            --------

    Figure 9: Representing setup for section 6.8

    Traffic              No of Labels      No of labels
                         before failure    after failure
    IP TRAFFIC (P-P)           1                 2
    Layer3 VPN (PE-PE)         2                 3
    Layer3 VPN (PE-P)          3                 4
    Layer2 VC (PE-PE)          2                 3
    Layer2 VC (PE-P)           3                 4
    Mid-point LSPs             1                 2
7. Test Methodology

The procedure described in this section can be applied to all the 8
base test cases and the associated topologies. The backup as well as
the primary tunnels are configured to be alike in terms of bandwidth
usage. In order to benchmark failover with all possible label stack
depths applicable as seen with current deployments, it is suggested
that the methodology include all the scenarios listed here.
7.1. Headend as PLR with link failure

Objective

To benchmark the MPLS failover time due to link failure events
described in section 5.1 experienced by the DUT, which is the point
of local repair (PLR).

Test Setup

- Select any one topology out of the 8 from section 6.
- Select an overlay technology for the FRR test, e.g. IGP, VPN, or VC.
- The DUT will also have 2 interfaces connected to the traffic
  generator/analyzer. (If the node downstream of the PLR is not a
  simulated node, then the ingress of the tunnel should have one
  link connected to the traffic generator, and the node downstream
  of the PLR or the egress of the tunnel should have a link
  connected to the traffic analyzer.)

Test Configuration

1. Configure the number of primaries on R2 and the backups on R2
   as required by the topology selected.
2. Advertise prefixes (as per the FRR Scalability table described
   in Appendix A) by the tail end.
Procedure

1. Establish the primary LSP on R2 required by the topology
   selected.
2. Establish the backup LSP on R2 required by the selected
   topology.
3. Verify primary and backup LSPs are up and that the primary is
   protected.
4. Verify Fast Reroute protection is enabled and ready.
5. Setup traffic streams as described in section 5.7.
6. Send IP traffic at maximum Forwarding Rate to the DUT.
7. Verify traffic is switched over the Primary LSP.
8. Trigger any choice of link failure as described in section 5.1.
9. Verify that the primary tunnel and prefixes get mapped to the
   backup tunnels.
10. Stop the traffic stream and measure the traffic loss.
11. Failover time is calculated as defined in section 8, Reporting
    Format (a sketch of this bookkeeping follows this procedure).
12. Start the traffic stream again to verify reversion when the
    protected interface comes up. Traffic loss should be 0 due to
    make-before-break or reversion.
13. Enable the protected interface that was down (Node in the case
    of NNHOP).
14. Verify the headend signals a new LSP and protection should be
    in place again.
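The following non-normative sketch ties steps 6-12 together. It
assumes a hypothetical tester/DUT API (the class and method names are
illustrative assumptions, not a vendor interface), offers traffic at
a constant known rate, triggers the chosen failure event, derives
the failover time from the measured loss as in section 8, and
confirms that reversion completes without loss.

    # Non-normative sketch of the measurement bookkeeping in steps 6-12.
    # 'tester' and 'dut' are hypothetical automation handles; the method
    # names below are illustrative assumptions only.
    def run_headend_link_failure_case(tester, dut, offered_rate_pps):
        tester.start_traffic(rate_pps=offered_rate_pps)      # step 6
        assert tester.traffic_on_primary()                   # step 7
        dut.trigger_link_failure()                            # step 8
        assert dut.prefixes_mapped_to_backup()                # step 9
        stats = tester.stop_traffic()                         # step 10
        lost = stats.lost + stats.errored + stats.duplicates
        failover_time = lost / offered_rate_pps               # step 11
        tester.start_traffic(rate_pps=offered_rate_pps)       # step 12
        dut.restore_protected_interface()                     # step 13
        reversion_stats = tester.stop_traffic()
        assert reversion_stats.lost == 0   # reversion must be loss-free
        return failover_time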
7.2. Mid-Point as PLR with link failure
Objective

To benchmark the MPLS failover time due to link failure events
described in section 5.1 experienced by the device under test, which
is the point of local repair (PLR).

Test Setup

- Select any one topology out of the 8 from section 6.
- Select the overlay technology for the FRR test as Mid-Point LSPs.
- The DUT will also have 2 interfaces connected to the traffic
  generator.

Test Configuration

1. Configure the number of primaries on R1 and the backups on R2
   as required by the topology selected.
2. Advertise prefixes (as per the FRR Scalability table described
   in Appendix A) by the tail end.
Procedure

1. Establish the primary LSP on R1 required by the topology
   selected.
2. Establish the backup LSP on R2 required by the selected
   topology.
3. Verify primary and backup LSPs are up and that the primary is
   protected.
4. Verify Fast Reroute protection.
5. Setup traffic streams as described in section 5.7.
6. Send IP traffic at maximum Forwarding Rate to the DUT.
7. Verify traffic is switched over the Primary LSP.
8. Trigger any choice of link failure as described in section 5.1.
9. Verify that the primary tunnel and prefixes get mapped to the
   backup tunnels.
10. Stop the traffic stream and measure the traffic loss.
11. Failover time is calculated as defined in section 8, Reporting
    Format.
12. Start the traffic stream again to verify reversion when the
    protected interface comes up. Traffic loss should be 0 due to
    make-before-break or reversion.
13. Enable the protected interface that was down (Node in the case
    of NNHOP).
14. Verify the headend signals a new LSP and protection should be
    in place again.
7.3. Headend as PLR with Node Failure

Objective

To benchmark the MPLS failover time due to node failure events
described in section 5.1 experienced by the device under test, which
is the point of local repair (PLR).

Test Setup

- Select any one topology from sections 6.5 to 6.8.
- Select an overlay technology for the FRR test, e.g. IGP, VPN, or VC.
- The DUT will also have 2 interfaces connected to the traffic
  generator.

Test Configuration

1. Configure the number of primaries on R2 and the backups on R2
   as required by the topology selected.
2. Advertise prefixes (as per the FRR Scalability table described
   in Appendix A) by the tail end.
Procedure

1. Establish the primary LSP on R2 required by the topology
   selected.
2. Establish the backup LSP on R2 required by the selected
   topology.
3. Verify primary and backup LSPs are up and that the primary is
   protected.
4. Verify Fast Reroute protection.
5. Setup traffic streams as described in section 5.7.
6. Send IP traffic at maximum Forwarding Rate to the DUT.
7. Verify traffic is switched over the Primary LSP.
8. Trigger any choice of node failure as described in section 5.1.
9. Verify that the primary tunnel and prefixes get mapped to the
   backup tunnels.
10. Stop the traffic stream and measure the traffic loss.
11. Failover time is calculated as defined in section 8, Reporting
    Format.
12. Start the traffic stream again to verify reversion when the
    protected node comes up. Traffic loss should be 0 due to
    make-before-break or reversion.
13. Boot the protected node that was down.
14. Verify the headend signals a new LSP and protection should be
    in place again.
7.4. Mid-Point as PLR with Node failure

Objective

To benchmark the MPLS failover time due to node failure events
described in section 5.1 experienced by the device under test, which
is the point of local repair (PLR).

Test Setup

- Select any one topology from sections 6.5 to 6.8.
- Select the overlay technology for the FRR test as Mid-Point LSPs.
- The DUT will also have 2 interfaces connected to the traffic
  generator.

Test Configuration

1. Configure the number of primaries on R1 and the backups on R2
   as required by the topology selected.
2. Advertise prefixes (as per the FRR Scalability table described
   in Appendix A) by the tail end.
Procedure

   1.  Establish the primary LSP on R1 as required by the topology
       selected.
   2.  Establish the backup LSP on R2 as required by the selected
       topology.
   3.  Verify that the primary and backup LSPs are up and that the
       primary is protected.
   4.  Verify Fast Reroute protection.
   5.  Set up traffic streams as described in section 3.7.
   6.  Send IP traffic at maximum Forwarding Rate to the DUT.
   7.  Verify that traffic is switched over the Primary LSP.
   8.  Trigger any choice of Node failure as described in section 3.1.
   9.  Verify that the primary tunnel and prefixes get mapped to the
       backup tunnels.
   10. Stop the traffic stream and measure the traffic loss.
   11. Failover time is calculated as defined in section 8, Reporting
       Format.
   12. Start the traffic stream again to verify reversion when the
       protected interface comes up.  Traffic loss should be 0 due to
       make-before-break or reversion (see the check sketched after
       this procedure).
   13. Boot the protected Node that was down.
   14. Verify that the headend signals a new LSP and that protection is
       in place again.
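   Step 12 expects zero traffic loss while the LSP reverts, because
   reversion uses make-before-break.  A minimal, non-normative sketch of
   that check is shown below; as before, the generator and analyzer
   counters are hypothetical inputs.

      def verify_lossless_reversion(tx_packets: int, rx_packets: int) -> None:
          """Make-before-break reversion (step 12) should not drop traffic:
          every packet offered during reversion must reach the analyzer."""
          dropped = tx_packets - rx_packets
          if dropped != 0:
              raise AssertionError(
                  f"reversion dropped {dropped} packets; expected 0")

      # Example: counters collected while the protected node comes back up.
      verify_lossless_reversion(tx_packets=500_000, rx_packets=500_000)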
7.5. MPLS FRR Forwarding Performance Test Cases

   For the following MPLS FRR forwarding performance benchmarking
   cases, test the maximum PPS rate allowed by the given hardware.  One
   may follow the procedure for determining MPLS forwarding performance
   defined in [MPLS-FORWARD].

7.5.1. PLR as Headend

Objective

   To benchmark the maximum rate (pps) on the PLR (as headend) over the
   primary FRR LSP and the backup LSP.

Test Setup

   - Select any one topology out of the eight in section 4.
   - Select an overlay technology for the FRR test, e.g., IGP, VPN, or
     VC.
   - The DUT will also have 2 interfaces connected to the traffic
     generator/analyzer.  (If the node downstream of the PLR is not a
     simulated node, then the ingress of the tunnel should have one
     link connected to the traffic generator, and the node downstream
     of the PLR or the egress of the tunnel should have a link
     connected to the traffic analyzer.)
Procedure

   1.  Establish the primary LSP on R2 as required by the topology
       selected.
   2.  Establish the backup LSP on R2 as required by the selected
       topology.
   3.  Verify that the primary and backup LSPs are up and that the
       primary is protected.
   4.  Verify Fast Reroute protection is enabled and ready.
   5.  Set up traffic streams as described in section 3.7.
   6.  Send IP traffic at the maximum forwarding rate (pps) that the
       device under test supports over the primary LSP.
   7.  Record the maximum PPS rate forwarded over the primary LSP.
   8.  Stop the traffic stream.
   9.  Trigger any choice of Link failure as described in section 3.1.
   10. Verify that the primary tunnel and prefixes get mapped to the
       backup tunnels.
   11. Send IP traffic at the maximum forwarding rate (pps) that the
       device under test supports over the backup LSP.
   12. Record the maximum PPS rate forwarded over the backup LSP (a
       simplified rate-search sketch follows this procedure).
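   Determining the maximum PPS rate in steps 6-7 and 11-12 should
   follow [MPLS-FORWARD]; purely as an illustration of the idea, the
   non-normative sketch below binary-searches for the highest loss-free
   rate.  The run_trial(rate_pps) helper is hypothetical: it stands in
   for one trial on the traffic generator and returns the packet loss
   observed at that offered rate.

      def max_lossless_pps(run_trial, low_pps: int, high_pps: int,
                           resolution_pps: int = 1_000) -> int:
          """Binary-search the highest offered rate (pps) forwarded
          without loss.  run_trial(rate) is a hypothetical helper that
          offers traffic at `rate` pps and returns the packets lost."""
          best = 0
          while high_pps - low_pps > resolution_pps:
              rate = (low_pps + high_pps) // 2
              if run_trial(rate) == 0:   # no loss: the DUT sustains this rate
                  best, low_pps = rate, rate
              else:                      # loss seen: search below this rate
                  high_pps = rate
          return best

      # Example with a fake DUT that starts dropping packets above 1.2 Mpps.
      print(max_lossless_pps(lambda r: max(0, r - 1_200_000), 0, 2_000_000))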
7.5.2. PLR as Mid-Point

Objective

   To benchmark the maximum rate (pps) on the PLR (as mid-point of the
   primary path and ingress of the backup path) over the primary FRR
   LSP and the backup LSP.

Test Setup

   - Select any one topology out of the eight in section 4.
   - Select the overlay technology for the FRR test as Mid-Point LSPs.
   - The DUT will also have 2 interfaces connected to the traffic
     generator.
Procedure

   1.  Establish the primary LSP on R1 as required by the topology
       selected.
   2.  Establish the backup LSP on R2 as required by the selected
       topology.
   3.  Verify that the primary and backup LSPs are up and that the
       primary is protected.
   4.  Verify Fast Reroute protection is enabled and ready.
   5.  Set up traffic streams as described in section 3.7.
   6.  Send IP traffic at the maximum forwarding rate (pps) that the
       device under test supports over the primary LSP.
   7.  Record the maximum PPS rate forwarded over the primary LSP.
   8.  Stop the traffic stream.
   9.  Trigger any choice of Link failure as described in section 3.1.
   10. Verify that the primary tunnel and prefixes get mapped to the
       backup tunnels.
   11. Send IP traffic at the maximum forwarding rate (pps) that the
       device under test supports over the backup LSP.
   12. Record the maximum PPS rate forwarded over the backup LSP.

8. Reporting Format
   For each test, it is recommended that the results be reported in the
   following format (an illustrative, non-normative record sketch
   follows the table).

   Parameter                              Units

   IGP used for the test                  ISIS-TE / OSPF-TE
   Interface types                        GigE, POS, ATM, VLAN, etc.
   Packet sizes offered to the DUT        Bytes
   Forwarding rate                        Number of packets per second
   IGP routes advertised                  Number of IGP routes
   RSVP hello timers configured           Milliseconds
     (if any)
   Number of FRR tunnels configured       Number of tunnels
   Number of VPN routes installed         Number of VPN routes
     on the headend
   Number of VC tunnels                   Number of VC tunnels
   Number of BGP routes                   Number of BGP routes
   Number of mid-point tunnels            Number of tunnels
   Number of prefixes protected by        Number of prefixes
     the primary
   Number of LSPs being protected         Number of LSPs
   Topology being used                    Section number and figure
                                            reference
   Failure event                          Event type

   Benchmarks

   Parameter                              Units

   Minimum failover time                  Milliseconds
   Mean failover time                     Milliseconds
   Maximum failover time                  Milliseconds
   Minimum reversion time                 Milliseconds
   Mean reversion time                    Milliseconds
   Maximum reversion time                 Milliseconds
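   Purely as an illustration, one result row of the format above could
   be captured in a structured record like the following; the field
   names and example values are hypothetical and carry no normative
   meaning.

      from dataclasses import dataclass

      @dataclass
      class FailoverResult:
          """One reported result following the format above (illustrative)."""
          igp_used: str                # ISIS-TE or OSPF-TE
          interface_type: str          # e.g. GigE, POS, ATM, VLAN
          packet_size_bytes: int
          forwarding_rate_pps: int
          igp_routes_advertised: int
          frr_tunnels_configured: int
          topology_used: str           # section number and figure reference
          failure_event: str           # event type from section 3.1
          min_failover_ms: float
          mean_failover_ms: float
          max_failover_ms: float
          min_reversion_ms: float
          mean_reversion_ms: float
          max_reversion_ms: float

      # Hypothetical example row.
      print(FailoverResult("ISIS-TE", "GigE", 64, 100_000, 1000, 1,
                           "section 4.5", "link failure",
                           12.0, 15.5, 21.0, 0.0, 0.0, 0.0))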
   The failover time suggested above is calculated using one of the
   following three methods (illustrative sketches follow the list):

   1. Packet-Based Loss Method (PBLM): (number of packets dropped /
      packets per second) * 1000 milliseconds.  This method could also
      be referred to as the Rate-Derived Method.

   2. Time-Based Loss Method (TBLM): This method relies on the ability
      of the traffic generators to provide statistics that reveal the
      duration of the failure in milliseconds based on when the packet
      loss occurred (interval between non-zero packet loss and zero
      loss).

   3. Timestamp-Based Method (TBM): This method of failover calculation
      is based on the timestamp that is transmitted as payload in the
      packets originated by the generator.  The traffic analyzer
      records the timestamp of the last packet received before the
      failover event and the first packet received after the failover,
      and derives the time from the difference between these two
      timestamps.  Note: the payload could also contain sequence
      numbers for out-of-order and duplicate packet calculations.
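   The Packet-Based Loss Method is sketched after the procedure in
   section 7.3; the other two methods reduce to simple time
   differences, sketched below.  The inputs are hypothetical values
   read from the traffic analyzer (loss-interval boundaries for TBLM,
   payload timestamps for TBM).

      def failover_time_tblm(loss_start_ms: float, loss_end_ms: float) -> float:
          """Time-Based Loss Method: length of the interval over which
          the analyzer reported non-zero packet loss."""
          return loss_end_ms - loss_start_ms

      def failover_time_tbm(last_rx_before_ms: float,
                            first_rx_after_ms: float) -> float:
          """Timestamp-Based Method: difference between the payload
          timestamp of the last packet received before the failover and
          that of the first packet received after it."""
          return first_rx_after_ms - last_rx_before_ms

      print(failover_time_tblm(1000.0, 1046.0))  # 46.0 ms of non-zero loss
      print(failover_time_tbm(999.8, 1045.3))    # 45.5 ms between packets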
   Note: If the primary is configured to be dynamic, and if the primary
   is to reroute, make-before-break should occur from the backup that
   is in use to a new alternate primary.  If any packet loss is seen,
   it should be added to the failover time.

9. Security Considerations

   During the course of the test, the test topology must be
   disconnected from devices that may forward the test traffic into a
   production environment.

   There are no specific security considerations within the scope of
   this document.

10. IANA Considerations

   There are no considerations for IANA at this time.
11. References

11.1. Normative References

   [MPLS-FRR-EXT]  Pan, P., Atlas, A., and G. Swallow, "Fast Reroute
                   Extensions to RSVP-TE for LSP Tunnels", RFC 4090.

11.2. Informative References

   [TERM-ID]       Poretsky, S., Papneja, R., Karthik, J., and S.
                   Vapiwala, "Benchmarking Terminology for Protection
                   Performance", draft-ietf-bmwg-protection-term-05,
                   work in progress.

   [IGP-METH]      Poretsky, S. and B. Imhoff, "Benchmarking
                   Methodology for IGP Data Plane Route Convergence",
                   draft-ietf-bmwg-igp-dataplane-conv-meth-16, work in
                   progress.

   [MPLS-FORWARD]  Akhter, A. and R. Asati, "MPLS Forwarding
                   Benchmarking Methodology",
                   draft-ietf-bmwg-mpls-forwarding-meth-00, work in
                   progress.
Authors' Addresses

   Rajiv Papneja
   Isocore
   12359 Sunrise Valley Drive, STE 100
   Reston, VA 20190
   USA
   Phone: +1 703 860 9273
   Email: rpapneja@isocore.com

   Samir Vapiwala
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA 01719
   USA
   Phone: +1 978 936 1484
   Email: svapiwal@cisco.com

   Jay Karthik
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA 01719
   USA
   Phone: +1 978 936 0533
   Email: jkarthik@cisco.com

   Scott Poretsky
   Allot Communications
   67 South Bedford Street, Suite 400
   Burlington, MA 01803
   USA
   Phone: +1 508 309 2179
   Email: sporetsky@allot.com

   Shankar Rao
   Qwest Communications
   950 17th Street, Suite 1900
   Denver, CO 80210
   USA
   Phone: +1 303 437 6643
   Email: shankar.rao@qwest.com

   Jean-Louis Le Roux
   France Telecom
   2 av Pierre Marzin
   22300 Lannion
   France
   Phone: 00 33 2 96 05 30 20
   Email: jeanlouis.leroux@orange-ft.com
Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Disclaimer

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Full Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

12. Acknowledgments

   We would like to thank Jean-Philippe Vasseur for his invaluable
   input to the document and Curtis Villamizar for his contribution in
   suggesting text on the definition of and need for benchmarking
   correlated failures.  Additionally, we would like to thank Arun
   Gandhi, Amrit Hanspal, and Karu Ratnam for their input to the
   document.
Appendix A: Fast Reroute Scalability Table

   This section provides the recommended numbers for evaluating the
   scalability of fast reroute implementations.  It also recommends the
   typical numbers for IGP/VPNv4 prefixes, LSP tunnels, and VC entries.
   Based on the features supported by the device under test,
   appropriate scaling limits can be used for the test bed.

A 1. FRR IGP Table

   No. of Headend TE Tunnels      IGP Prefixes

   1                              100
   1                              500
   1                              1000
   1                              2000
   1                              5000
   2 (Load Balance)               100
   2 (Load Balance)               500
   2 (Load Balance)               1000
   2 (Load Balance)               2000
   2 (Load Balance)               5000
   100                            100
   500                            500
   1000                           1000
   2000                           2000

A 2. FRR VPN Table

   No. of Headend TE Tunnels      VPNv4 Prefixes

   1                              100
   1                              500
   1                              1000
   1                              2000
   1                              5000
   1                              10000
   1                              20000
   1                              Max
   2 (Load Balance)               100
   2 (Load Balance)               500
   2 (Load Balance)               1000
   2 (Load Balance)               2000
   2 (Load Balance)               5000
   2 (Load Balance)               10000
   2 (Load Balance)               20000
   2 (Load Balance)               Max

A 3. FRR Mid-Point LSP Table

   The number of mid-point TE LSPs could be configured at the following
   recommended levels: 100, 500, 1000, 2000, or the maximum supported
   number.

A 4. FRR VC Table

   No. of Headend TE Tunnels      VC Entries

   1                              100
   1                              500
   1                              1000
   1                              2000
   1                              Max
   100                            100
   500                            500
   1000                           1000
   2000                           2000
Appendix B: Abbreviations

   BFD    - Bidirectional Forwarding Detection
   BGP    - Border Gateway Protocol
   CE     - Customer Edge
   DUT    - Device Under Test
   FRR    - Fast Reroute
   IGP    - Interior Gateway Protocol
   LSP    - Label Switched Path
   MP     - Merge Point
   MPLS   - Multi Protocol Label Switching
   N-Nhop - Next-Next Hop
   Nhop   - Next Hop
   OIR    - Online Insertion and Removal
   P      - Provider
   PE     - Provider Edge
   PHP    - Penultimate Hop Popping
   PLR    - Point of Local Repair
   RSVP   - Resource reSerVation Protocol
   SRLG   - Shared Risk Link Group
   TA     - Traffic Analyzer
   TE     - Traffic Engineering
   TG     - Traffic Generator
   VC     - Virtual Circuit
   VPN    - Virtual Private Network