Network Working Group                                         R. Papneja
Internet Draft                                                   Isocore
Intended Status: Informational                               S. Vapiwala
Expires: September 8, 2009                                    J. Karthik
                                                           Cisco Systems
                                                             S. Poretsky
                                                     Allot Communications
                                                                  S. Rao
                                                     Qwest Communications
                                                            J.L. Le Roux
                                                          France Telecom
                                                            March 8, 2009

        Methodology for benchmarking MPLS protection mechanisms
                 draft-ietf-bmwg-protection-meth-05.txt

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups.  Note that
other groups may also distribute working documents as Internet-
Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

This Internet-Draft will expire on September 8, 2009.
Copyright Notice
Copyright (c) 2009 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents in effect on the date of
publication of this document (http://trustee.ietf.org/license-info).
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document.
Abstract
This draft describes the methodology for benchmarking MPLS
Protection mechanisms for link and node protection as defined in
[MPLS-FRR-EXT].  This document provides test methodologies and
testbed setup for measuring failover times while considering all
dependencies that might impact faster recovery of real-time
applications bound to MPLS based traffic engineered tunnels.

The benchmarking terms used in this document are defined in
[TERM-ID].
Table of Contents

   1. Introduction...................................................3
   2. Document Scope.................................................4
   3. Existing definitions...........................................5
   4. General Reference Topology.....................................5
   5. Test Considerations............................................6
   5.1. Failover Events..............................................6
   5.2. Failure Detection............................................7
   5.3. Use of Data Traffic for MPLS Protection Benchmarking.........7
   5.4. LSP and Route Scaling........................................8
   5.5. Selection of IGP.............................................8
   5.6. Reversion....................................................8
   5.7. Offered Load.................................................8
   5.8. Tester Capabilities..........................................9
   6. Reference Test Setup...........................................9
   6.1. Link Protection..............................................9
   6.2. Node Protection.............................................13
   7. Test Methodology..............................................15
   7.1. MPLS FRR Forwarding Performance Test Cases..................15
   7.2. Headend PLR with Link Failure...............................17
   7.3. Mid-Point PLR with Link Failure.............................18
   7.4. Headend PLR with Node Failure...............................19
   7.5. Mid-Point PLR with Node Failure.............................21
   8. Reporting Format..............................................23
   9. Security Considerations.......................................24
   10. IANA Considerations..........................................24
   11. References...................................................24
   11.1. Normative References.......................................24
   11.2. Informative References.....................................24
   12. Acknowledgments..............................................24
   Author's Addresses...............................................25
   Appendix A: Fast Reroute Scalability Table.......................26
   Appendix B: Abbreviations........................................38
1. Introduction

This draft describes the methodology for benchmarking MPLS based
protection mechanisms.  The new terminology that this document
introduces is defined in [TERM-ID].

MPLS based protection mechanisms provide fast recovery of real-time
services from a planned or an unplanned link or node failure.  MPLS
protection mechanisms are generally deployed in a network
infrastructure where MPLS is used for provisioning of point-to-point
traffic engineered tunnels (tunnel).  MPLS based protection
mechanisms promise to improve the service disruption period by
minimizing recovery time from the most common failures.
Network elements from different manufacturers behave differently to
network failures, which impacts the network's ability and
performance for failure recovery.  It therefore becomes imperative
for service providers to have a common benchmark to understand the
performance behaviors of network elements.

There are two factors impacting service availability: the frequency
of failures and the duration for which the failures persist.
Failures can be classified further into two types: correlated and
uncorrelated.  Correlated and uncorrelated failures may be planned
or unplanned.

Planned failures are predictable.  Network implementations should be
able to handle both planned and unplanned failures and recover
gracefully within a time frame that maintains service assurance.
Hence, failover recovery time is one of the most important
benchmarks that a service provider considers in choosing the
building blocks for their network infrastructure.

A correlated failure is the simultaneous occurrence of two or more
failures.  A typical example is failure of a logical resource (e.g.
layer-2 links) due to a dependency on a common physical resource
(e.g. a common conduit) that fails.  Within the context of MPLS
protection mechanisms, failures that arise due to Shared Risk Link
Groups (SRLG) [MPLS-FRR-EXT] can be considered as correlated
failures.  Not all correlated failures are predictable in advance,
for example, those caused by natural disasters.
2. Document Scope

This document provides detailed test cases along with different
topologies and scenarios that should be considered to effectively
benchmark MPLS protection mechanisms and failover times on the Data
Plane.  Different Failover Events and scaling considerations are
also provided in this document.

All benchmarking test cases defined in this document apply to both
facility backup and local protection enabled in detour mode.  The
test cases cover all possible failure scenarios and the associated
procedures benchmark the performance of the Device Under Test (DUT)
to recover from failures.  Data plane traffic is used to benchmark
failover times.

Benchmarking of correlated failures is out of scope of this
document.  Protection from Bi-directional Forwarding Detection (BFD)
is outside the scope of this document.
3. Existing definitions
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in BCP 14, RFC 2119
[Br97]. RFC 2119 defines the use of these key words to help make the
intent of standards track documents as clear as possible. While this
document uses these keywords, this document is not a standards track
document.
The reader is assumed to be familiar with the commonly used MPLS
terminology, some of which is defined in [MPLS-FRR-EXT].
This document uses much of the terminology defined in
[TERM-ID]. This document also uses existing terminology defined
in other BMWG work. Examples include, but are not limited to:
Throughput [Ref.[Br91], section 3.17]
Device Under Test (DUT) [Ref.[Ma98], section 3.1.1]
System Under Test (SUT) [Ref.[Ma98], section 3.1.2]
Out-of-order Packet [Ref.[Po06], section 3.3.2]
Duplicate Packet [Ref.[Po06], section 3.3.3]
4. General Reference Topology
Figure 1 illustrates the basic reference testbed and is applicable
to all the test cases defined in this document.  The Tester is
comprised of a Traffic Generator (TG) and a Test Analyzer (TA).  A
Tester is directly connected to the DUT.  The Tester sends and
receives IP traffic to the tunnel ingress and performs signaling
protocol emulation to simulate real network scenarios in a lab
environment.  The Tester may also support MPLS-TE signaling to act
as the ingress node to the MPLS tunnel.
             ---------------------------
            |        ------------|---------------
            |       |            |               |
            |       |            |               |
         --------  --------   --------   --------   --------
     TG--|  R1  |--|  R2  |---|  R3  |   |  R4  |   |  R5  |
         |      |--|      |---|      |---|      |---|      |
         --------  --------   --------   --------   --------
            |       |            |           |         |
            |       |            |           |         |
            |      --------      |           |         TA
             ------|  R6  |------             |
                   |      |-------------------
                   --------

                   Fig.1: Fast Reroute Topology.
The Tester MUST record the number of lost, duplicate, and reordered
packets.  It should further record arrival and departure times so
that Failover Time, Additive Latency, and Reversion Time can be
measured.  The Tester may be a single device or a test system
emulating all the different roles along a primary or backup path.
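
The lost, duplicate, and reordered counters above can be derived
from per-packet sequence numbers carried in the test traffic.  The
following is a minimal, non-normative sketch (Python) assuming the
Tester embeds a monotonically increasing sequence number in each
transmitted packet; the helper name and interface are illustrative
only, not a required Tester feature.

   # Classify received packets into lost, duplicate, and reordered,
   # given the transmit count and the received sequence numbers in
   # arrival order.
   def classify(tx_count, rx_sequence_numbers):
       seen = set()
       duplicates = reordered = 0
       highest = -1
       for seq in rx_sequence_numbers:
           if seq in seen:
               duplicates += 1      # same sequence number seen again
               continue
           if seq < highest:
               reordered += 1       # arrived after a higher sequence
           seen.add(seq)
           highest = seq if seq > highest else highest
       lost = tx_count - len(seen)
       return lost, duplicates, reordered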
The label stack is dependent on the following 3 entities:

   - Type of protection (Link Vs Node)
   - # of remaining hops of the primary tunnel from the PLR
   - # of remaining hops of the backup tunnel from the PLR

Due to this dependency, it is RECOMMENDED that the benchmarking of
failover times be performed on all the topologies provided in
section 6.
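
These three entities span the eight reference topologies of section
6: two protection types, two primary-path lengths, and two
backup-path lengths, all as seen from the PLR.  A non-normative
sketch of that enumeration (the tuple labels are illustrative):

   # 2 protection types x 2 primary legs x 2 backup legs = 8
   # topologies.  For node protection the primary legs are the
   # "2 hop" and "3+ hop" variants of section 6.2.
   from itertools import product

   protection   = ("link", "node")
   primary_legs = ("short primary", "longer primary")
   backup_legs  = ("1 hop backup", "2 hop backup")

   topologies = list(product(protection, primary_legs, backup_legs))
   assert len(topologies) == 8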
5. Test Considerations

This section discusses the fundamentals of MPLS Protection testing:

   - The types of network events that cause failover
   - Indications for failover
   - The use of data traffic
   - Traffic generation
   - LSP scaling
   - Reversion of LSP
   - IGP selection
5.1. Failover Events [TERM-ID]

The failover to the backup tunnel is primarily triggered by either
link or node failures observed downstream of the Point of Local
Repair (PLR).  Some of these failure events are listed below.

Link failure events
   - Interface Shutdown on PLR side with POS Alarm
   - Interface Shutdown on remote side with POS Alarm
   - Interface Shutdown on PLR side with RSVP hello enabled
   - Interface Shutdown on remote side with RSVP hello enabled
   - Interface Shutdown on PLR side with BFD
   - Interface Shutdown on remote side with BFD
   - Fiber Pull on the PLR side (Both TX & RX or just the TX)
   - Fiber Pull on the remote side (Both TX & RX or just the RX)
   - Online insertion and removal (OIR) on PLR side
   - OIR on remote side
   - Sub-interface failure (e.g. shutting down of a VLAN)
   - Parent interface shutdown (an interface bearing multiple
     sub-interfaces)
Node failure events
   - A system reload initiated either by a graceful shutdown or by
     a power failure
   - A system crash due to a software failure or an assert
5.2. Failure Detection [TERM-ID]

Link failure detection time depends on the link type and the failure
detection protocols running.  For SONET/SDH, the alarm type (such as
LOS, AIS, or RDI) can be used.  Other link types have layer-two
alarms, but they may not provide a short enough failure detection
time.  Ethernet based links do not have layer 2 failure indicators,
and therefore rely on layer 3 signaling for failure detection.
Failure detection mechanisms such as BFD or RSVP hellos can provide
the layer 3 failure indicators required by Ethernet based links, or
for some other non-Ethernet based links, to help improve failure
detection time.
The test procedures in this document can be used for local failure
or remote failure scenarios for comprehensive benchmarking and to
evaluate failover performance independent of the failure detection
techniques.
5.3. Use of Data Traffic for MPLS Protection Benchmarking

Currently end customers use packet loss as a key metric for Failover
Time [TERM-ID].  Failover Packet Loss [TERM-ID] is an externally
observable event and has a direct impact on application performance.
MPLS protection is expected to minimize the packet loss in the event
of a failure.  For this reason it is important to develop a standard
router benchmarking methodology for measuring MPLS protection that
uses packet loss as a metric.  At a known rate of forwarding, packet
loss can be measured and the failover time can be determined.
Measurement of control plane signaling to establish backup paths is
not enough to verify failover.  Failover is best determined when
packets are actually traversing the backup path.
An additional benefit of using packet loss for calculation of
failover time is that it allows use of a black-box test environment.
Data traffic is offered at line-rate to the device under test (DUT),
an emulated network failure event is forced to occur, and packet
loss is externally measured to calculate the convergence time.  This
setup is independent of the DUT architecture.

In addition, this methodology considers the packets in error and
duplicate packets that could have been generated during the failover
process.  The methodologies consider lost, out-of-order, and
duplicate packets to be impaired packets that contribute to the
Failover Time.
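
As a worked example of deriving failover time from packet loss at a
known forwarding rate, the following non-normative sketch (Python)
implements the packet-loss based calculation; the function name and
arguments are illustrative, and [TERM-ID] holds the normative
definitions of the calculation methods.

   # At a constant offered load, every lost packet accounts for
   # 1/rate seconds of outage.
   def failover_time_from_loss(tx_packets, rx_packets,
                               offered_load_pps):
       lost = tx_packets - rx_packets
       if lost < 0:
           raise ValueError("more received than sent; count "
                            "duplicates separately")
       return lost / offered_load_pps   # seconds

   # Example: 100,000 pps offered, 4,230 packets lost -> 0.0423 s
   print(failover_time_from_loss(3_000_000, 2_995_770, 100_000.0))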
5.4. LSP and Route Scaling

Failover time performance may vary with the number of established
primary and backup tunnel label switched paths (LSP) and installed
routes.  However the procedure outlined here should be used for any
number of LSPs (L) and number of routes protected by the PLR (R).
The amount of L and R must be recorded.
5.5. Selection of IGP

The underlying IGP could be ISIS-TE or OSPF-TE for the methodology
proposed here.  See [IGP-METH] for IGP options to consider and
report.
5.6. Restoration and Reversion [TERM-ID]

Fast Reroute provides a method to return or restore an original
primary LSP upon recovery from the failure (Restoration) and to
switch traffic from the Backup Path to the restored Primary Path
(Reversion).  In MPLS-FRR, Reversion can be implemented as Global
Reversion or Local Reversion.  It is important to include
Restoration and Reversion as a step in each test case to measure the
amount of packet loss, out-of-order packets, or duplicate packets
produced.
5.7. Offered Load

It is suggested that there be one or more traffic streams as long as
there is a steady and constant rate of flow for all the streams.  In
order to monitor the DUT performance for recovery times, a set of
route prefixes should be advertised before traffic is sent.  The
traffic should be configured towards these routes.
A typical example would be configuring the traffic generator to send
the traffic to the first, middle, and last of the advertised routes.
(First, middle, and last could be decided by the numerically
smallest, median, and largest respectively of the advertised
prefixes.)  Generating traffic to all of the prefixes reachable by
the protected tunnel (probably in a Round-Robin fashion, where the
traffic is destined to all the prefixes but one prefix at a time in
a cyclic manner) is not recommended: if there are many prefixes
reachable through the LSP, the time interval between two packets
destined to one prefix may be significantly long and may be
comparable to the failover time being measured, which prevents an
accurate failover measurement.
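
A minimal, non-normative sketch (Python) of the first/middle/last
selection described above, assuming the advertised prefixes are
available to the test harness as strings:

   # Pick the numerically smallest, median, and largest advertised
   # prefixes as traffic destinations.
   import ipaddress

   def pick_destinations(prefixes):
       ordered = sorted(ipaddress.ip_network(p) for p in prefixes)
       return ordered[0], ordered[len(ordered) // 2], ordered[-1]

   first, middle, last = pick_destinations(
       ["10.3.0.0/24", "10.1.0.0/24", "10.2.0.0/24"])
   print(first, middle, last)   # 10.1.0.0/24 10.2.0.0/24 10.3.0.0/24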
5.8. Tester Capabilities

It is RECOMMENDED that the Tester used to execute each test case
have the following capabilities:

   1. Ability to establish MPLS-TE tunnels and push/pop labels.
   2. Ability to produce a Failover Event [TERM-ID].
   3. Ability to insert a timestamp in each data packet's IP
      payload.
   4. An internal time clock to control timestamping, time
      measurements, and time calculations.
   5. Ability to disable or tune specific Layer-2 and Layer-3
      protocol functions on any interface(s).

The Tester MAY be capable of making non-data-plane convergence
observations and using those observations for measurements.
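
For capability 3, one non-normative way to carry a sequence number
and a transmit timestamp in the IP payload is sketched below; the
16-byte layout is an assumption of this example, not part of the
methodology.

   import struct
   import time

   def make_payload(sequence):
       # 8-byte sequence number + 8-byte transmit time (nanoseconds)
       return struct.pack("!QQ", sequence, time.time_ns())

   def parse_payload(payload):
       sequence, tx_ns = struct.unpack("!QQ", payload[:16])
       return sequence, tx_ns

   # Per-packet latency is then rx_time_ns - tx_ns, using the
   # Tester's own clock for both ends (capability 4).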
6. Reference Test Setup

In addition to the general reference topology shown in figure 1,
this section provides detailed insight into various proposed test
setups that should be considered for comprehensively benchmarking
the failover time in different roles along the primary tunnel.

This section proposes a set of topologies that covers all the
scenarios for local protection.  All of these topologies can be
mapped to the reference topology shown in Figure 1.  Topologies
provided in this section refer to the testbed required to benchmark
failover time when the DUT is configured as a PLR in either Headend
or midpoint role.  Provided with each topology below is the label
stack at the PLR.  Penultimate Hop Popping (PHP) MAY be used and
must be reported when used.
Figures 2 through 9 use the following convention:

   a) HE is Headend
   b) TE is Tail-End
   c) MID is Mid point
   d) MP is Merge Point
   e) PLR is Point of Local Repair
   f) PRI is Primary Path
   g) BKP denotes Backup Path and Nodes

6.1. Link Protection

6.1.1. Link Protection - 1 hop primary (from PLR) and 1 hop backup
       TE tunnels
        -------    --------  PRI  --------
        | R1  |    | R2   |       | R3   |
    TG--| HE  |----| MID  |-------| TE   |--TA
        |     |    | PLR  |-------|      |
        -------    --------  BKP  --------

                 Figure 2.
   Traffic               Num of Labels    Num of Labels
                         before failure   after failure
   IP TRAFFIC (P-P)            0                0
   Layer3 VPN (PE-PE)          1                1
   Layer3 VPN (PE-P)           2                2
   Layer2 VC (PE-PE)           1                1
   Layer2 VC (PE-P)            2                2
   Mid-point LSPs              0                0
6.1.2. Link Protection - 1 hop primary (from PLR) and 2 hop backup
       TE tunnels
        -------    --------        --------
        | R1  |    | R2   |        | R3   |
    TG--| HE  |    | MID  |  PRI   | TE   |--TA
        |     |----| PLR  |--------|      |
        -------    --------        --------
                    |BKP               |
                    |    --------      |
                    |    | R6   |      |
                     ----| BKP  |------
                         | MID  |
                         --------

                 Figure 3.

   Traffic               Num of Labels    Num of Labels
                         before failure   after failure
   IP TRAFFIC (P-P)            0                1
   Layer3 VPN (PE-PE)          1                2
   Layer3 VPN (PE-P)           2                3
   Layer2 VC (PE-PE)           1                2
   Layer2 VC (PE-P)            2                3
   Mid-point LSPs              0                1
6.1.3. Link Protection - 2+ hop (from PLR) primary and 1 hop backup
       TE tunnels
        --------    --------       --------       --------
        | R1   |    | R2   |  PRI  | R3   |  PRI  | R4   |
    TG--| HE   |----| MID  |-------| MID  |-------| TE   |--TA
        |      |    | PLR  |-------|      |       |      |
        --------    --------  BKP  --------       --------

                 Figure 4.

   Traffic               Num of Labels    Num of Labels
                         before failure   after failure
   IP TRAFFIC (P-P)            1                1
   Layer3 VPN (PE-PE)          2                2
   Layer3 VPN (PE-P)           3                3
   Layer2 VC (PE-PE)           2                2
   Layer2 VC (PE-P)            3                3
   Mid-point LSPs              1                1
6.1.4. Link Protection - 2+ hop (from PLR) primary and 2 hop backup
       TE tunnels
        --------    --------  PRI  --------  PRI  --------
        | R1   |    | R2   |       | R3   |       | R4   |
    TG--| HE   |----| MID  |-------| MID  |-------| TE   |--TA
        |      |    | PLR  |       |      |       |      |
        --------    --------       --------       --------
                BKP  |                 |
                     |    --------     |
                     |    | R6   |     |
                      ----| BKP  |-----
                          | MID  |
                          --------

                 Figure 5.

   Traffic               Num of Labels    Num of Labels
                         before failure   after failure
   IP TRAFFIC (P-P)            1                2
   Layer3 VPN (PE-PE)          2                3
   Layer3 VPN (PE-P)           3                4
   Layer2 VC (PE-PE)           2                3
   Layer2 VC (PE-P)            3                4
   Mid-point LSPs              1                2
6.2. Node Protection

6.2.1. Node Protection - 2 hop primary (from PLR) and 1 hop backup
       TE tunnels
        --------    --------  PRI  --------  PRI  --------
        | R1   |    | R2   |       | R3   |       | R4   |
    TG--| HE   |----| MID  |-------| MID  |-------| TE   |--TA
        |      |    | PLR  |       |      |       |      |
        --------    --------       --------       --------
                     |BKP                             |
                      -------------------------------

                 Figure 6.

   Traffic               Num of Labels    Num of Labels
                         before failure   after failure
   IP TRAFFIC (P-P)            1                0
   Layer3 VPN (PE-PE)          2                1
   Layer3 VPN (PE-P)           3                2
   Layer2 VC (PE-PE)           2                1
   Layer2 VC (PE-P)            3                2
   Mid-point LSPs              1                0
6.2.2. Node Protection - 2 hop primary (from PLR) and 2 hop backup
       TE tunnels
        --------    --------       --------       --------
        | R1   |    | R2   |       | R3   |       | R4   |
    TG--| HE   |    | MID  |  PRI  | MID  |  PRI  | TE   |--TA
        |      |----| PLR  |-------|      |-------|      |
        --------    --------       --------       --------
                     |                                 |
                BKP  |    --------                     |
                     |    | R6   |                     |
                      ----| BKP  |---------------------
                          | MID  |
                          --------

                 Figure 7.

   Traffic               Num of Labels    Num of Labels
                         before failure   after failure
   IP TRAFFIC (P-P)            1                1
   Layer3 VPN (PE-PE)          2                2
   Layer3 VPN (PE-P)           3                3
   Layer2 VC (PE-PE)           2                2
   Layer2 VC (PE-P)            3                3
   Mid-point LSPs              1                1
6.2.3. Node Protection - 3+ hop primary (from PLR) and 1 hop backup
       TE tunnels
     --------   --------  PRI  --------  PRI  --------  PRI  --------
     | R1   |   | R2   |       | R3   |       | R4   |       | R5   |
 TG--| HE   |---| MID  |-------| MID  |-------| MP   |-------| TE   |--TA
     |      |   | PLR  |       |      |       |      |       |      |
     --------   --------       --------       --------       --------
                 |BKP                             |
                  -------------------------------

              Figure 8.

   Traffic               Num of Labels    Num of Labels
                         before failure   after failure
   IP TRAFFIC (P-P)            1                1
   Layer3 VPN (PE-PE)          2                2
   Layer3 VPN (PE-P)           3                3
   Layer2 VC (PE-PE)           2                2
   Layer2 VC (PE-P)            3                3
   Mid-point LSPs              1                1
6.2.4. Node Protection - 3+ hop primary (from PLR) and 2 hop backup
       TE tunnels
     --------   --------      --------      --------      --------
     | R1   |   | R2   |      | R3   |      | R4   |      | R5   |
 TG--| HE   |   | MID  | PRI  | MID  | PRI  | MP   | PRI  | TE   |--TA
     |      |---| PLR  |------|      |------|      |------|      |
     --------   --------      --------      --------      --------
                 BKP |                         |
                     |    --------             |
                      ----| R6   |-------------
                          | BKP  |
                          | MID  |
                          --------

              Figure 9.

   Traffic               Num of Labels    Num of Labels
                         before failure   after failure
   IP TRAFFIC (P-P)            1                2
   Layer3 VPN (PE-PE)          2                3
   Layer3 VPN (PE-P)           3                4
   Layer2 VC (PE-PE)           2                3
   Layer2 VC (PE-P)            3                4
   Mid-point LSPs              1                2
7. Test Methodology

The procedure described in this section can be applied to all the 8
base test cases and the associated topologies.  The backup as well
as the primary tunnels are configured to be alike in terms of
bandwidth usage.  In order to benchmark failover with all possible
label stack depths applicable as seen with current deployments, it
is RECOMMENDED to perform all of the test cases provided in this
section.  The forwarding performance test cases in section 7.1 MUST
be performed prior to performing the failover test cases.
7.1. MPLS FRR Forwarding Performance

Benchmarking Failover Time [TERM-ID] for MPLS protection first
requires a baseline measurement of the forwarding performance of the
test topology including the DUT.  Forwarding performance is
benchmarked by the metric Throughput as defined in [Br91] and
measured in units pps.  This section provides three test cases to
benchmark forwarding performance, with the DUT configured as a
Headend PLR, Mid-Point PLR, and Egress PLR.
7.1.1. Headend PLR Forwarding Performance
Objective

To benchmark the maximum rate (pps) on the PLR (as Headend) over the
primary LSP and backup LSP.
Test Setup

   - Select any one topology out of the 8 from section 6.
   - Select overlay technologies (e.g. IGP, VPN, or VC) with the
     DUT as Headend PLR.
   - The DUT will also have 2 interfaces connected to the traffic
     generator/analyzer.  (If the node downstream of the PLR is not
     a simulated node, then the ingress of the tunnel should have
     one link connected to the traffic generator, and the node
     downstream to the PLR or the egress of the tunnel should have
     a link connected to the traffic analyzer.)
Test Configuration

   1. Configure the number of primaries on R2 and the backups on R2
      as required by the topology selected.
   2. Advertise prefixes (as per the FRR Scalability Table described
      in Appendix A) by the tail end.
Procedure

   1. Establish the primary LSP on R2 required by the topology
      selected.
   2. Establish the backup LSP on R2 required by the selected
      topology.
   3. Verify primary and backup LSPs are up and that primary is
      protected.
   4. Verify Fast Reroute protection is enabled and ready.
   5. Setup traffic streams as described in section 5.7.
   6. Send MPLS traffic over the primary LSP at the Throughput
      supported by the DUT.
   7. Record the Throughput over the primary LSP.
   8. Trigger a link failure as described in section 5.1.
   9. Verify that the offered load gets mapped to the backup tunnel
      and measure the Additive Backup Delay.
   10. 30 seconds after Failover, stop the offered load and measure
       the Throughput, Packet Loss, Out-of-Order Packets, and
       Duplicate Packets over the Backup LSP.
   11. Adjust the offered load and repeat steps 6 through 10 until
       the Throughput values for the primary and backup LSPs are
       equal.
   12. Record the Throughput.  This is the offered load that will be
       used for the Headend PLR failover test cases.
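
Steps 6 through 11 amount to a search for the highest offered load
that the primary and backup LSPs can both sustain.  A non-normative
sketch (Python) of one way to automate the search follows;
measure_throughput() is a hypothetical hook into the Tester, not a
real API.

   # Binary search on offered load (pps) until primary and backup
   # Throughput agree within a tolerance.
   def find_offered_load(measure_throughput, max_rate_pps,
                         tolerance_pps=100.0):
       lo, hi = 0.0, max_rate_pps
       while hi - lo > tolerance_pps:
           offered = (lo + hi) / 2.0
           primary = measure_throughput("primary", offered)
           backup = measure_throughput("backup", offered)
           if primary - backup <= tolerance_pps:
               lo = offered   # backup kept up; try a higher load
           else:
               hi = offered   # backup dropped packets; back off
       return lo              # offered load for the failover tests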
7.1.2. Mid-Point PLR Forwarding Performance
Objective

To benchmark the maximum rate (pps) on the PLR (as Mid-Point) over
the primary LSP and backup LSP.
Test Setup

   - Select any one topology out of the 8 from section 6.
   - Select overlay technologies (e.g. IGP, VPN, or VC) with the
     DUT as Mid-Point PLR.
   - The DUT will also have 2 interfaces connected to the traffic
     generator.
Test Configuration

   1. Configure the number of primaries on R1 and the backups on R2
      as required by the topology selected.
   2. Advertise prefixes (as per the FRR Scalability Table described
      in Appendix A) by the tail end.
Procedure

   1. Establish the primary LSP on R1 required by the topology
      selected.
   2. Establish the backup LSP on R2 required by the selected
      topology.
   3. Verify primary and backup LSPs are up and that primary is
      protected.
   4. Verify Fast Reroute protection is enabled and ready.
   5. Setup traffic streams as described in section 5.7.
   6. Send MPLS traffic over the primary LSP at the Throughput
      supported by the DUT.
   7. Record the Throughput over the primary LSP.
   8. Trigger a link failure as described in section 5.1.
   9. Verify that the offered load gets mapped to the backup tunnel
      and measure the Additive Backup Delay.
   10. 30 seconds after Failover, stop the offered load and measure
       the Throughput, Packet Loss, Out-of-Order Packets, and
       Duplicate Packets over the Backup LSP.
   11. Adjust the offered load and repeat steps 6 through 10 until
       the Throughput values for the primary and backup LSPs are
       equal.
   12. Record the Throughput.  This is the offered load that will be
       used for the Mid-Point PLR failover test cases.
7.1.3. Egress PLR Forwarding Performance
Objective

To benchmark the maximum rate (pps) on the PLR (as Egress) over the
primary LSP and backup LSP.
Test Setup

   - Select any one topology out of the 8 from section 6.
   - Select overlay technologies (e.g. IGP, VPN, or VC) with the
     DUT as Egress PLR.
   - The DUT will also have 2 interfaces connected to the traffic
     generator.
Procedure

   1. Establish the primary LSP on R1 required by the topology
      selected.
   2. Establish the backup LSP on R2 required by the selected
      topology.
   3. Verify primary and backup LSPs are up and that primary is
      protected.
   4. Verify Fast Reroute protection is enabled and ready.
   5. Setup traffic streams as described in section 5.7.
   6. Send MPLS traffic over the primary LSP at the Throughput
      supported by the DUT.
   7. Record the Throughput over the primary LSP.
   8. Trigger a link failure as described in section 5.1.
   9. Verify that the offered load gets mapped to the backup tunnel
      and measure the Additive Backup Delay.
   10. 30 seconds after Failover, stop the offered load and measure
       the Throughput, Packet Loss, Out-of-Order Packets, and
       Duplicate Packets over the Backup LSP.
   11. Adjust the offered load and repeat steps 6 through 10 until
       the Throughput values for the primary and backup LSPs are
       equal.
   12. Record the Throughput.  This is the offered load that will be
       used for the Egress PLR failover test cases.
7.2. Headend PLR with Link Failure
Objective
To benchmark the MPLS failover time due to link failure events
described in section 5.1, experienced by the DUT acting as the
Headend PLR.
Test Setup

   - Select any one topology out of the 8 from section 6.
   - Select overlay technology for FRR test (e.g. IGP, VPN, or VC).
   - The DUT will also have 2 interfaces connected to the traffic
     generator/analyzer.  (If the node downstream of the PLR is not
     a simulated node, then the ingress of the tunnel should have
     one link connected to the traffic generator, and the node
     downstream to the PLR or the egress of the tunnel should have
     a link connected to the traffic analyzer.)
Test Configuration

   1. Configure the number of primaries on R2 and the backups on R2
      as required by the topology selected.
   2. Configure the test setup to support Reversion.
   3. Advertise prefixes (as per the FRR Scalability Table described
      in Appendix A) by the tail end.
Procedure

Test Case "7.1.1. Headend PLR Forwarding Performance" MUST be
completed first to obtain the Throughput to use as the offered load.
   1. Establish the primary LSP on R2 required by the topology
      selected.
   2. Establish the backup LSP on R2 required by the selected
      topology.
   3. Verify primary and backup LSPs are up and that primary is
      protected.
   4. Verify Fast Reroute protection is enabled and ready.
   5. Setup traffic streams for the offered load as described in
      section 5.7.
   6. Provide the offered load from the tester at the Throughput
      [Br91] level obtained from test case 7.1.1.
   7. Verify traffic is switched over the Primary LSP without
      packet loss.
   8. Trigger a link failure as described in section 5.1.
   9. Verify that the offered load gets mapped to the backup tunnel
      and measure the Additive Backup Delay.
   10. 30 seconds after Failover [TERM-ID], stop the offered load
       and measure the total Failover Packet Loss [TERM-ID].
   11. Calculate the Failover Time [TERM-ID] benchmark using the
       selected Failover Time Calculation Method (TBLM, PLBM, or
       TBM) [TERM-ID].
   12. Restart the offered load and restore the primary LSP to
       verify Reversion [TERM-ID] occurs and measure the Reversion
       Packet Loss [TERM-ID].
   13. Calculate the Reversion Time [TERM-ID] benchmark using the
       selected Failover Time Calculation Method (TBLM, PLBM, or
       TBM) [TERM-ID].
   14. Verify the Headend signals a new LSP and protection should
       be in place again.
It is RECOMMENDED that this procedure be repeated for each of
the link failure triggers defined in section 5.1. The measurement
sequence in steps 6 through 13 is sketched informally after this
paragraph.
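
The following is a minimal, informal sketch (in Python) of how
steps 6 through 13 could be automated; the tester and dut objects
and their methods are assumptions of this sketch, not interfaces
defined by this document, and Failover Time is computed with PBLM
purely as an example (TBLM or TBM may be used instead). The same
skeleton applies to the Mid-Point and node-failure cases in
sections 7.3 through 7.5, which reuse these steps.

   import time

   # get_stats()/stop_load_and_get_stats() are assumed to return
   # counters for the most recent load run (an assumption).

   def run_failover_case(offered_pps, trigger, tester, dut):
       tester.start_load(offered_pps)             # step 6: offered load
       time.sleep(5)                              # settle time (assumption)
       assert tester.get_stats()["lost"] == 0     # step 7: lossless on primary
       dut.trigger_failure(trigger)               # step 8: section 5.1 trigger
       time.sleep(30)                             # step 10: 30 s after Failover
       failover = tester.stop_load_and_get_stats()
       failover_ms = failover["lost"] / offered_pps * 1000.0    # step 11 (PBLM)
       tester.start_load(offered_pps)             # step 12: restart load...
       dut.restore_primary()                      # ...and restore primary LSP
       time.sleep(30)
       reversion = tester.stop_load_and_get_stats()
       reversion_ms = reversion["lost"] / offered_pps * 1000.0  # step 13 (PBLM)
       return failover_ms, reversion_ms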
7.3. Mid-Point PLR with Link Failure
Objective

To benchmark the MPLS failover time due to link failure events
described in section 5.1 experienced by the DUT, which is the
Mid-Point PLR.
Test Setup Test Setup
- Select any one topology from section 4.5 to 4.8. - Select any one topology out of 8 from section 6
- Select overlay technology for FRR test as Mid-Point LSPs. - Select overlay technology for FRR test as Mid-Point LSPs
- The DUT will also have 2 interfaces connected to the traffic - The DUT will also have 2 interfaces connected to the traffic
generator. generator.
Test Configuration

1. Configure the number of primaries on R1 and the backups on R2
   as required by the topology selected.
2. Configure the test setup to support Reversion.
3. Advertise prefixes (as per FRR Scalability Table described in
   Appendix A) by the tail end.
Procedure

Test Case "7.1.2. Mid-Point PLR Forwarding Performance" MUST be
completed first to obtain the Throughput to use as the offered
load.
1. Establish the primary LSP on R1 required by the topology
   selected.
2. Establish the backup LSP on R2 required by the selected
   topology.
3. Perform steps 3 through 14 from section 7.2 Headend PLR
   with Link Failure.
It is RECOMMENDED that this procedure be repeated for each of
the link failure triggers defined in section 5.1.
7.4. Headend PLR with Node Failure

Objective

To benchmark the MPLS failover time due to Node failure events
described in section 5.1 experienced by the DUT, which is the
Headend PLR.
Test Setup

- Select any one topology from section 6.5 to 6.8.
- Select overlay technology for FRR test (e.g., IGP, VPN, or VC).
- The DUT will also have 2 interfaces connected to the traffic
  generator.

Test Configuration

1. Configure the number of primaries on R2 and the backups on R2
   as required by the topology selected.
2. Configure the test setup to support Reversion.
3. Advertise prefixes (as per FRR Scalability Table described in
   Appendix A) by the tail end.
Procedure

Test Case "7.1.1. Headend PLR Forwarding Performance" MUST be
completed first to obtain the Throughput to use as the offered
load.
1. Establish the primary LSP on R2 required by the topology
   selected.
2. Establish the backup LSP on R2 required by the selected
   topology.
3. Verify primary and backup LSPs are up and that primary is
   protected.
4. Verify Fast Reroute protection is enabled and ready.
5. Setup traffic streams for the offered load as described
   in section 5.7.
6. Provide the offered load from the tester at the Throughput
   [Br91] level obtained from test case 7.1.1.
7. Verify traffic is switched over Primary LSP without packet
   loss.
8. Trigger a node failure as described in section 5.1.
9. Perform steps 9 through 14 in section 7.2 Headend PLR with
   Link Failure.

It is RECOMMENDED that this procedure be repeated for each of
the node failure triggers defined in section 5.1.
7.5. Mid-Point PLR with Node Failure

Objective

To benchmark the MPLS failover time due to Node failure events
described in section 5.1 experienced by the DUT, which is the
Mid-Point PLR.
Test Setup

- Select any one topology from section 6.5 to 6.8.
- Select overlay technology for FRR test as Mid-Point LSPs.
- The DUT will also have 2 interfaces connected to the traffic
  generator.
Test Configuration
1. Configure the number of primaries on R1 and the backups on
R2 as required by the topology selected.
2. Configure the test setup to support Reversion.
3. Advertise prefixes (as per FRR Scalability Table described in
Appendix A) by the tail end.
Procedure

Test Case "7.1.2. Mid-Point PLR Forwarding Performance" MUST be
completed first to obtain the Throughput to use as the offered
load.
1. Establish the primary LSP on R1 required by the topology
   selected.
2. Establish the backup LSP on R2 required by the selected
   topology.
3. Verify primary and backup LSPs are up and that primary is
   protected.
4. Verify Fast Reroute protection is enabled and ready.
5. Setup traffic streams for the offered load as described
   in section 5.7.
6. Provide the offered load from the tester at the Throughput
   [Br91] level obtained from test case 7.1.2.
7. Verify traffic is switched over Primary LSP without packet
   loss.
8. Trigger a node failure as described in section 5.1.
9. Perform steps 9 through 14 in section 7.2 Headend PLR with
   Link Failure.

It is RECOMMENDED that this procedure be repeated for each of
the node failure triggers defined in section 5.1.
8. Reporting Format

For each test, it is recommended that the results be reported in
the following format.
Parameter                            Units

IGP used for the test                ISIS-TE / OSPF-TE
Interface types                      GigE, POS, ATM, VLAN, etc.
Packet Sizes offered to the DUT      Bytes
Forwarding rate                      packets per second
IGP routes advertised                Number of IGP routes
Penultimate Hop Popping              Used/Not Used
RSVP hello timers (if any)           Milliseconds
Number of FRR tunnels configured     Number of tunnels
Number of VPN routes installed       Number of VPN routes
  on the Headend
Number of VC tunnels                 Number of VC tunnels
Number of BGP routes                 BGP routes installed
Number of mid-point tunnels          Number of tunnels
Number of Prefixes protected by      Number of LSPs
  Primary
Topology being used                  Section number, and
                                     figure reference
Failover Event                       Event type

Benchmarks (to be recorded for each test case):

Failover-
  Failover Time                      seconds
  Failover Packet Loss               packets
  Additive Backup Delay              seconds
  Out-of-Order Packets               packets
  Duplicate Packets                  packets

Reversion-
  Reversion Time                     seconds
  Reversion Packet Loss              packets
  Additive Backup Delay              seconds
  Out-of-Order Packets               packets
  Duplicate Packets                  packets
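
Purely as an illustration, the parameters and benchmarks above
could be captured as a structured per-test-case record; the sketch
below (in Python) uses field names invented for this example, not
identifiers defined by this document or [TERM-ID].

   from dataclasses import dataclass

   @dataclass
   class ProtectionBenchmarkReport:
       igp: str                        # "ISIS-TE" or "OSPF-TE"
       interface_type: str             # e.g., "GigE", "POS", "ATM", "VLAN"
       packet_size_bytes: int
       forwarding_rate_pps: float
       failover_event: str             # trigger from section 5.1
       failover_time_s: float
       failover_packet_loss: int
       additive_backup_delay_s: float
       out_of_order_packets: int
       duplicate_packets: int
       reversion_time_s: float
       reversion_packet_loss: int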
Failover Time suggested above is calculated using one of the
following three methods:

1. Packet-Based Loss Method (PBLM): (Number of packets
   dropped/packets per second * 1000) milliseconds. This method
   could also be referred to as the Rate Derived Method.

2. Time-Based Loss Method (TBLM): This method relies on the
   ability of the traffic generators to provide statistics which
   reveal the duration of failure in milliseconds based on when
   the packet loss occurred (interval between non-zero packet loss
   and zero loss).

3. Timestamp-Based Method (TBM): This method of failover
   calculation is based on the timestamp that gets transmitted as
   payload in the packets originated by the generator. The Traffic
   Analyzer records the timestamp of the last packet received
   before the failover event and the first packet after the
   failover and derives the time based on the difference between
   these 2 timestamps. Note: The payload could also contain
   sequence numbers for out-of-order packet calculation and
   duplicate packets.
Note: If the primary is configured to be dynamic, and if the primary
is to reroute, make-before-break should occur from the backup that
is in use to a new alternate primary. If there is any packet loss
seen, it should be added to the failover time.
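
Since PBLM and TBM reduce to simple arithmetic once the tester
statistics are available, a short worked sketch (in Python)
follows; TBLM is omitted because it relies entirely on the
duration reported by the traffic generator. This is illustrative
only.

   def failover_time_pblm(packets_dropped, offered_pps):
       # Packet-Based Loss Method: (dropped / pps) * 1000 milliseconds
       return packets_dropped / offered_pps * 1000.0

   def failover_time_tbm(last_rx_before_s, first_rx_after_s):
       # Timestamp-Based Method: difference between the timestamp of
       # the last packet received before the failover event and the
       # first packet received after it, in milliseconds
       return (first_rx_after_s - last_rx_before_s) * 1000.0

   # Worked example: 2,500 packets lost at an offered load of
   # 50,000 pps gives failover_time_pblm(2500, 50000) == 50.0 ms.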
9. Security Considerations

Documents of this type do not directly affect the security of the
Internet or corporate networks as long as benchmarking is not
performed on devices or systems connected to production networks.
This document attempts to formalize a common methodology for
benchmarking the performance of failover mechanisms in a lab
environment. During the course of testing, the test topology must
be disconnected from devices that may forward the test traffic
into a production environment.
10. IANA Considerations

This document requires no IANA considerations.
11. References

11.1. Informative References

NONE

11.2. Normative References

[TERM-ID]      Poretsky, S., Papneja, R., Karthik, J., and S.
               Vapiwala, "Benchmarking Terminology for Protection
               Performance", draft-ietf-bmwg-protection-term-06.txt,
               work in progress.

[MPLS-FRR-EXT] Pan, P., Swallow, G., and A. Atlas, "Fast Reroute
               Extensions to RSVP-TE for LSP Tunnels", RFC 4090.

[IGP-METH]     Poretsky, S. and B. Imhoff, "Benchmarking Methodology
               for IGP Data Plane Route Convergence",
               draft-ietf-bmwg-igp-dataplane-conv-meth-17.txt,
               work in progress.

[Br91]         Bradner, S., Ed., "Benchmarking Terminology for
               Network Interconnection Devices", RFC 1242, July 1991.

[Br97]         Bradner, S., "Key words for use in RFCs to Indicate
               Requirement Levels", RFC 2119, March 1997.

[Ma98]         Mandeville, R., "Benchmarking Terminology for LAN
               Switching Devices", RFC 2285, February 1998.

[Po06]         Poretsky, S., et al., "Terminology for Benchmarking
               Network-layer Traffic Control Mechanisms", RFC 4689,
               November 2006.
12. Acknowledgments

We would like to thank Jean Philip Vasseur for his invaluable input
to the document and Curtis Villamizar for his contribution in
suggesting text on the definition of and need for benchmarking
correlated failures. Additionally, we would like to thank Al Morton,
Arun Gandhi, Amrit Hanspal, Karu Ratnam, Raveesh Janardan, Andrey
Kiselev, and Mohan Nanduri for their formal reviews of this
document.
Authors' Addresses

Rajiv Papneja
Isocore
12359 Sunrise Valley Drive, STE 100
Reston, VA 20190
USA

Jay Karthik
Cisco Systems
300 Beaver Brook Road
Boxborough, MA 01719
USA
Phone: +1 978 936 0533
Email: jkarthik@cisco.com

Scott Poretsky
Allot Communications
67 South Bedford Street, Suite 400
Burlington, MA 01803
USA
Phone: +1 508 309 2179
Email: sporetsky@allot.com

Shankar Rao
Qwest Communications
950 17th Street
Suite 1900
Denver, CO 80210
USA
Phone: +1 303 437 6643
Email: shankar.rao@qwest.com

Jean-Louis Le Roux
France Telecom
2 av Pierre Marzin
22300 Lannion
France
Phone: 00 33 2 96 05 30 20
Email: jeanlouis.leroux@orange-ft.com
Appendix A: Fast Reroute Scalability Table

This section provides the recommended numbers for evaluating the
scalability of fast reroute implementations. It also recommends the
typical numbers for IGP/VPNv4 Prefixes, LSP Tunnels, and VC entries.
Based on the features supported by the device under test (DUT),
appropriate scaling limits can be used for the test bed.
A 1. FRR IGP Table

No. of Headend TE Tunnels    IGP Prefixes

1                            100
1                            500
1                            1000
LSP    - Label Switched Path
MP     - Merge Point
MPLS   - Multi Protocol Label Switching
N-Nhop - Next-Next Hop
Nhop   - Next Hop
OIR    - Online Insertion and Removal
P      - Provider
PE     - Provider Edge
PHP    - Penultimate Hop Popping
PLR    - Point of Local Repair
RSVP   - Resource reSerVation Protocol
SRLG   - Shared Risk Link Group
TA     - Traffic Analyzer
TE     - Traffic Engineering
TG     - Traffic Generator
VC     - Virtual Circuit
VPN    - Virtual Private Network