Network Working Group                                         R. Papneja
Internet-Draft                                       Huawei Technologies
Intended status: Informational                               S. Vapiwala
Expires: November 30, 2012                                    J. Karthik
                                                           Cisco Systems
                                                             S. Poretsky
                                                    Allot Communications
                                                                  S. Rao
                                                    Qwest Communications
                                                          JL. Le Roux
                                                          France Telecom
                                                            May 29, 2012

     Methodology for Benchmarking MPLS-TE Fast Reroute Protection
                 draft-ietf-bmwg-protection-meth-10.txt
Abstract

This document describes the methodology for benchmarking MPLS
Protection mechanisms for link and node protection as defined in
[RFC4090].  This document provides test methodologies and testbed
setup for measuring failover times while considering all dependencies
that might impact faster recovery of real-time applications bound to
MPLS traffic engineered (MPLS-TE) tunnels.
Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on November 30, 2012.
Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.

This document may contain material from IETF Documents or IETF
Contributions published or made publicly available before
November 10, 2008.  The person(s) controlling the copyright in some
of this material may not have granted the IETF Trust the right to
allow modifications of such material outside the IETF Standards
Process.  Without obtaining an adequate license from the person(s)
controlling the copyright in such materials, this document may not be
modified outside the IETF Standards Process, and derivative works of
it may not be created outside the IETF Standards Process, except to
format it for publication as an RFC or to translate it into languages
other than English.
Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  5
   2.  Document Scope . . . . . . . . . . . . . . . . . . . . . . . .  6
   3.  Existing Definitions and Requirements  . . . . . . . . . . . .  6
   4.  General Reference Topology . . . . . . . . . . . . . . . . . .  7
   5.  Test Considerations  . . . . . . . . . . . . . . . . . . . . .  8
     5.1.  Failover Events  . . . . . . . . . . . . . . . . . . . . .  8
     5.2.  Failure Detection  . . . . . . . . . . . . . . . . . . . .  9
     5.3.  Use of Data Traffic for MPLS Protection benchmarking . . .  9
     5.4.  LSP and Route Scaling  . . . . . . . . . . . . . . . . . . 10
     5.5.  Selection of IGP . . . . . . . . . . . . . . . . . . . . . 10
     5.6.  Restoration and Reversion  . . . . . . . . . . . . . . . . 10
     5.7.  Offered Load . . . . . . . . . . . . . . . . . . . . . . . 11
     5.8.  Tester Capabilities  . . . . . . . . . . . . . . . . . . . 11
     5.9.  Failover Time Measurement Methods  . . . . . . . . . . . . 12
   6.  Reference Test Setup . . . . . . . . . . . . . . . . . . . . . 12
     6.1.  Link Protection  . . . . . . . . . . . . . . . . . . . . . 13
       6.1.1.  Link Protection - 1 hop primary (from PLR) and 1
               hop backup TE tunnels  . . . . . . . . . . . . . . . . 13
       6.1.2.  Link Protection - 1 hop primary (from PLR) and 2
               hop backup TE tunnels  . . . . . . . . . . . . . . . . 14
       6.1.3.  Link Protection - 2+ hops (from PLR) primary and 1
               hop backup TE tunnels  . . . . . . . . . . . . . . . . 14
       6.1.4.  Link Protection - 2+ hop (from PLR) primary and 2
               hop backup TE tunnels  . . . . . . . . . . . . . . . . 15
     6.2.  Node Protection  . . . . . . . . . . . . . . . . . . . . . 16
       6.2.1.  Node Protection - 2 hop primary (from PLR) and 1
               hop backup TE tunnels  . . . . . . . . . . . . . . . . 16
       6.2.2.  Node Protection - 2 hop primary (from PLR) and 2
               hop backup TE tunnels  . . . . . . . . . . . . . . . . 17
       6.2.3.  Node Protection - 3+ hop primary (from PLR) and 1
               hop backup TE tunnels  . . . . . . . . . . . . . . . . 18
       6.2.4.  Node Protection - 3+ hop primary (from PLR) and 2
               hop backup TE tunnels  . . . . . . . . . . . . . . . . 19
   7.  Test Methodology . . . . . . . . . . . . . . . . . . . . . . . 20
     7.1.  MPLS FRR Forwarding Performance  . . . . . . . . . . . . . 21
       7.1.1.  Headend PLR Forwarding Performance . . . . . . . . . . 21
       7.1.2.  Mid-Point PLR Forwarding Performance . . . . . . . . . 22
     7.2.  Headend PLR with Link Failure  . . . . . . . . . . . . . . 23
     7.3.  Mid-Point PLR with Link Failure  . . . . . . . . . . . . . 25
     7.4.  Headend PLR with Node Failure  . . . . . . . . . . . . . . 26
     7.5.  Mid-Point PLR with Node Failure  . . . . . . . . . . . . . 28
   8.  Reporting Format . . . . . . . . . . . . . . . . . . . . . . . 29
   9.  Security Considerations  . . . . . . . . . . . . . . . . . . . 30
   10. IANA Considerations  . . . . . . . . . . . . . . . . . . . . . 31
   11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 31
   12. References . . . . . . . . . . . . . . . . . . . . . . . . . . 31
     12.1. Informative References . . . . . . . . . . . . . . . . . . 31
     12.2. Normative References . . . . . . . . . . . . . . . . . . . 31
   Appendix A.  Fast Reroute Scalability Table  . . . . . . . . . . . 32
   Appendix B.  Abbreviations . . . . . . . . . . . . . . . . . . . . 35
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 36
1.  Introduction

This document describes the methodology for benchmarking MPLS based
protection mechanisms.  This document uses much of the terminology
defined in [RFC6414].
MPLS based protection mechanisms provide fast recovery of real-time
services from planned or unplanned link or node failures.  MPLS
protection mechanisms are generally deployed in a network
infrastructure where MPLS is used for provisioning of point-to-point
traffic engineered tunnels (tunnels).  MPLS based protection
mechanisms promise to reduce the service disruption period by
minimizing recovery time from the most common failures.
Network elements from different manufacturers behave differently
under network failures, which impacts the network's failure recovery
performance.  It therefore becomes imperative for service providers
to have a common benchmark to verify the performance behaviors of
these network elements.
There are two factors impacting service availability: the frequency
of failures and the duration for which the failures persist.
Failures can be classified into two types: correlated and
uncorrelated.  Correlated or uncorrelated failures may be planned or
unplanned.  Planned failures are predictable.  Network
implementations should be able to handle both planned and unplanned
failures and recover gracefully within a time period acceptable to
maintain service assurance.  Hence, failover recovery time is one of
the most important benchmarks that a service provider considers in
choosing the building blocks for their network infrastructure.
A correlated failure is the simultaneous occurrence of two or more
failures.  A typical example is failure of a logical resource (e.g.
layer-2 links) due to a dependency on a common physical resource
(e.g. common conduit) that fails.  Within the context of MPLS-TE
protection mechanisms, failures that arise due to Shared Risk Link
Groups (SRLG) [RFC4090] can be considered as correlated failures.
Not all correlated failures are predictable in advance, for example,
those caused by natural disasters.
MPLS Fast Re-Route (MPLS-FRR) allows for the possibility that the
Label Switched Path (LSP) tunnels can be re-optimized following the
Failover.  IP traffic would be re-routed according to the preferred
path for the post-failure topology.  Hence, MPLS-FRR may include
additional steps following the occurrence of the failure detection
[RFC6414] and failover event [RFC6414]:
   (1)  Failover Event - Primary Path (Working Path) fails

   (2)  Failure Detection - Failover Event is detected

   (3)  a.  Failover - Working Path switched to Backup Path
        b.  Re-Optimization of Working Path (possible change from
            Backup Path)

   (4)  Restoration [RFC6414]

   (5)  Reversion [RFC6414]
2.  Document Scope

This document provides detailed test cases along with different
topologies and scenarios that should be considered to effectively
benchmark MPLS-TE protection mechanisms and failover times.
Different failover events and scaling considerations are also
provided in this document.
All benchmarking test cases defined in this document apply to the
facility backup method [RFC4090].  The test cases cover all possible
failure scenarios to benchmark the performance of the Device Under
Test (DUT) in recovering from failures.  Data plane traffic is used
to benchmark failover times.
Benchmarking of correlated failures is out of scope of this document.
Faster failure detection using Bi-directional Forwarding Detection
(BFD) is outside the scope of this document, but is mentioned in the
discussion sections.

Performance benchmarking of the control plane is outside the scope of
this document.
As described above, MPLS-FRR may include a Re-optimization of the
Working Path.  Characterization of Re-optimization is beyond the
scope of this memo.
3.  Existing Definitions and Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in BCP 14, RFC 2119
[RFC2119].  RFC 2119 defines the use of these key words to help make
the intent of standards track documents as clear as possible.  While
this document uses these keywords, this document is not a standards
track document.
The reader is assumed to be familiar with the commonly used MPLS
terminology, some of which is defined in [RFC4090].

This document uses much of the terminology defined in [RFC6414].
This document also uses existing terminology defined in other BMWG
work [RFC1242], [RFC2285], [RFC4689].
4.  General Reference Topology

Figure 1 illustrates the basic reference testbed and is applicable to
all the test cases defined in this document.  The Tester comprises a
Traffic Generator (TG), a Test Analyzer (TA), and an Emulator.  The
Tester is connected to the test network and, depending on the test
case, the DUT role could vary.  The Tester (TG) sends and receives
(TA) IP traffic to the tunnel ingress and performs signaling protocol
emulation to simulate real network scenarios in a lab environment.
The Tester may also support MPLS-TE signaling to act as the
ingress/egress node.
                   +---------------------------+
                   |                           |
                   |                           |
   +--------+   +--------+   +--------+   +--------+   +--------+
TG-|   R1   |---|   R2   |---|   R3   |   |   R4   |   |   R5   |
   |        |---|        |---|        |---|        |---|        |
   +--------+   +--------+   +--------+   +--------+   +--------+
       |            |                         |            |
       |            |                         |            |
       |        +--------+                    |            TA
       +--------|   R6   |--------------------+            |
                |        |---------------------------------+
                +--------+

                    Fig. 1 Fast Reroute Topology
The tester must be able to record the number of lost, duplicate, and
reordered packets.  It should further record arrival and departure
times so that Failover Time, Additive Latency, and Reversion Time can
be measured.  The tester may be a single device or a test system
emulating different roles along a primary or backup path.
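The classification the tester performs can be sketched as follows.
This is an illustrative sequence-number-based approach, not mandated
by this document; it assumes the tester stamps each offered packet
with a monotonically increasing sequence number.

```python
def classify_received(received_seqs, sent_count):
    """Classify received sequence numbers into lost, duplicate, and
    reordered counts, as a black-box tester might do (illustrative)."""
    seen = set()
    duplicates = 0
    reordered = 0
    highest = -1
    for seq in received_seqs:
        if seq in seen:
            duplicates += 1        # delivered more than once
            continue
        seen.add(seq)
        if seq < highest:
            reordered += 1         # arrived after a later packet
        else:
            highest = seq
    lost = sent_count - len(seen)  # offered but never delivered
    return lost, duplicates, reordered
```

For example, a receive stream of [0, 1, 3, 2, 2, 5] against 6 offered
packets yields one lost, one duplicate, and one reordered packet.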
The label stack is dependent on the following 3 entities:

   (1)  Type of protection (link vs. node)

   (2)  Number of remaining hops of the primary tunnel from the Point
        of Local Repair (PLR) [RFC6414]

   (3)  Number of remaining hops of the backup tunnel from the PLR

Due to this dependency, it is RECOMMENDED that the benchmarking of
failover times be performed on all the topologies provided in Section
6.
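As an illustration of this dependency, the sketch below estimates the
label-stack depth seen on the wire during failover with facility
backup.  The model and function name are hypothetical, not part of
the methodology; it assumes penultimate-hop popping (PHP) is signaled
on both the primary and the bypass tunnel.

```python
def bypass_label_depth(primary_hops_from_plr, backup_hops_from_plr):
    """Estimate label-stack depth on packets riding the backup tunnel
    at failover.  Hypothetical model assuming facility backup with
    PHP on both tunnels; real depth depends on signaling and platform."""
    # Primary-tunnel label is popped when the PLR is the penultimate
    # hop of the primary LSP (1 remaining hop).
    primary = 0 if primary_hops_from_plr <= 1 else 1
    # Bypass label is never pushed for a 1-hop bypass, since PHP
    # removes it at the PLR itself.
    bypass = 0 if backup_hops_from_plr <= 1 else 1
    return primary + bypass
```

Under this model, each of the four link-protection topologies in
Section 6 exercises a different label-stack depth, which is why all
of them are recommended.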
5.  Test Considerations

This section discusses the fundamentals of MPLS Protection testing:

   (1)  Failover Events

   (2)  Failure Detection

   (3)  the use of data traffic

   (4)  Traffic generation

   (5)  LSP Scaling

   (6)  Reversion of LSP

   (7)  IGP Selection
5.1.  Failover Events

The failover to the backup tunnel is primarily triggered by either
link or node failures observed downstream of the PLR.  Some of these
failure events [RFC6414] are listed below.
Link Failure Events

   - Interface Shutdown on PLR side with POS Alarm
   - Interface Shutdown on remote side with POS Alarm
   - Interface Shutdown on PLR side with RSVP hello enabled
   - Interface Shutdown on remote side with RSVP hello enabled
   - Interface Shutdown on PLR side with BFD
   - Interface Shutdown on remote side with BFD
   - Fiber Pull on the PLR side (Both TX & RX or just the TX)
   - Fiber Pull on the remote side (Both TX & RX or just the RX)
   - Online insertion and removal (OIR) on PLR side
   - OIR on remote side
   - Sub-interface failure on PLR side (e.g. shutting down of a VLAN)
   - Sub-interface failure on remote side
   - Parent interface shutdown on PLR side (an interface bearing
     multiple sub-interfaces)
   - Parent interface shutdown on remote side

Node Failure Events

   - A System reload initiated either by a graceful shutdown or by a
     power failure.
   - A system crash due to a software failure or an assert.
5.2.  Failure Detection

Link failure detection [RFC6414] time depends on the link type and
the failure detection techniques enabled.  For SONET/SDH, the alarm
type (such as LOS, AIS, or RDI) can be used.  Other link types have
layer-two alarms, but they may not provide a short enough failure
detection time.  Ethernet based links do not have layer 2 failure
indicators and therefore rely on layer 3 signaling for failure
detection.  However, for directly connected devices, remote fault
indication in the Ethernet auto-negotiation scheme could be
considered as a type of layer 2 link failure indicator.
BFD and RSVP hellos may be used as failure detection techniques.
These methods can be used for the layer 3 failure indicators required
by Ethernet based links, or for some other non-Ethernet based links,
to help improve failure detection time.  However, these fast failure
detection mechanisms are out of scope of this document.
The test procedures in this document can be used for MPLS-TE
protection benchmarking with either local or remote failures.
5.3.  Use of Data Traffic for MPLS Protection benchmarking

Currently, end customers use packet loss as a key metric for Failover
Time [RFC6414].  Failover Packet Loss [RFC6414] is an externally
observable event and has a direct impact on application performance.
MPLS-TE protection is expected to minimize packet loss in the event
of a failure.  For this reason it is important to develop a standard
router benchmarking methodology for measuring MPLS protection that
uses packet loss as a metric.  At a known rate of forwarding, packet
loss can be measured and the failover time can be determined.
Measurement of control plane recovery and establishment of backup
paths is not enough to verify a timely failover.  Failover
performance is best determined when packets are actually switched to
the backup path.
A benefit of using packet loss for calculation of failover time is
that it allows the use of a black-box test environment.  Data traffic
is offered at line rate to the device under test (DUT), an emulated
network failure event is forced to occur, and packet loss is
externally measured to calculate the convergence time.  This setup is
independent of the DUT architecture.
The methodology considers lost, errored, out-of-order [RFC4689], and
duplicate packets as impaired packets that contribute to the Failover
Time.
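The rate-derived calculation described above can be sketched as
follows.  The function name is illustrative; the impaired-packet
count would come from the tester, and a steady, constant offered load
is assumed.

```python
def failover_time_seconds(impaired_packets, offered_rate_pps):
    """Rate-derived failover time: at a steady offered load, each
    impaired packet (lost, errored, out-of-order, or duplicate)
    accounts for 1/rate seconds of disruption (sketch only)."""
    if offered_rate_pps <= 0:
        raise ValueError("offered rate must be positive")
    return impaired_packets / offered_rate_pps
```

For example, 5,000 impaired packets at a steady offered load of
100,000 packets per second corresponds to a failover time of 50 ms.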
5.4.  LSP and Route Scaling

Failover time performance may vary with the number of established
primary and backup tunnel label switched paths (LSPs) and installed
routes.  However, the procedure outlined here should be used for any
number of LSPs (L) and any number of routes protected by the headend
acting as the PLR (R).  The values of L and R must be recorded.  A
recommended scaling table is provided in Appendix A.
5.5. Selection of IGP
The underlying IGP could be ISIS-TE or OSPF-TE for the methodology
proposed here. See [RFC6412] for IGP options to consider and report.
At least one of these IGPs is required to be enabled for the
procedures discussed in this document.
5.6. Restoration and Reversion
Path restoration [RFC6414] provides a method to restore an alternate
primary LSP upon failure and to switch traffic from the Backup Path
to the restored Primary Path (Reversion). In MPLS-FRR, Reversion can
be implemented as Global Reversion or Local Reversion. It is
important to include Restoration and Reversion as a step in each test
case to measure the amount of packet loss, out-of-order packets, or
duplicate packets that occurs in this process.
Note: In addition to restoration and reversion, re-optimization can
take place while the failure has not yet been recovered, depending on
the user configuration and re-optimization timers.
5.7. Offered Load
It is recommended that there be three or more traffic streams, each
configured with a steady and constant rate of flow. In order to
monitor the DUT performance for recovery times, a set of route
prefixes should be advertised before traffic is sent. The traffic
should be configured to target the advertised routes.
For better accuracy, one may consider provisioning 16 flows, or more
if possible. IP prefix-dependency behaviors are key, and tests with
route-specific flows spread across the routing table reveal such
dependencies. Sending traffic to all of the prefixes reachable by the
protected tunnel in a round-robin fashion is not recommended, as the
time interval between two subsequent packets destined to one prefix
may be longer than the failover time being measured, resulting in
inaccurate failover measurements.
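To illustrate the timing concern above: with a single stream cycling
through N prefixes at R packets per second, consecutive packets to the
same prefix are N/R seconds apart. A minimal sketch (the numbers below
are hypothetical, not taken from this document):

```python
def round_robin_gap_ms(num_prefixes: int, rate_pps: int) -> float:
    """Interval between two subsequent packets destined to the same
    prefix when one stream cycles through all prefixes in turn."""
    return num_prefixes * 1000.0 / rate_pps

# 10,000 prefixes at 100,000 pps leaves 100 ms between packets to a
# given prefix -- far coarser than a sub-50 ms failover being measured.
print(round_robin_gap_ms(10_000, 100_000))  # 100.0
```

Route-specific flows avoid this quantization, which is why they are
preferred over a single round-robin stream.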
5.8. Tester Capabilities

It is RECOMMENDED that the Tester used to execute each test case have
the following capabilities:
1. Ability to establish MPLS-TE tunnels and push/pop labels.

2. Ability to produce a Failover Event [RFC6414].

3. Ability to insert a timestamp in each data packet's IP payload.

4. An internal time clock to control timestamping, time
   measurements, and time calculations.

5. Ability to disable or tune specific Layer-2 and Layer-3 protocol
   functions on any interface(s), such as disabling or enabling
   interface IP addresses, auto-negotiation on Ethernet interfaces,
   or scrambling on Packet over SONET interfaces.

6. If the Tester is the headend, the ability to react upon receipt
   of a path error from the PLR.
The Tester MAY be capable of making non-data-plane convergence
observations and using those observations for measurements.
5.9. Failover Time Measurement Methods
Failover Time is calculated using one of the following three methods:

1. Packet-Loss Based Method (PLBM): (number of packets dropped /
   packets per second) * 1000 milliseconds. This method may also be
   referred to as the Loss-Derived Method.
2. Time-Based Loss Method (TBLM): This method relies on the ability
   of the traffic generators to provide statistics that reveal the
   duration of the failure in milliseconds, based on when the packet
   loss occurred (the interval between non-zero packet loss and zero
   loss).
3. Timestamp Based Method (TBM): This method of failover calculation
   is based on the timestamp that is transmitted as payload in the
   packets originated by the generator. The traffic analyzer records
   the timestamp of the last packet received before the failover
   event and of the first packet received after the failover, and
   derives the failover time from the difference between these two
   timestamps. Note: the payload could also contain sequence numbers
   for out-of-order and duplicate packet calculation.
The Timestamp Based Method is able to detect Reversion impairments
beyond loss; it is therefore the RECOMMENDED Failover Time method.
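As a non-normative illustration, the PLBM and TBM calculations above
reduce to the following arithmetic (function and variable names are
invented here for clarity; the input figures are hypothetical):

```python
def plbm_failover_ms(packets_dropped: int, offered_rate_pps: int) -> float:
    """Packet-Loss Based Method: dropped packets divided by the
    per-second offered rate, scaled to milliseconds."""
    return packets_dropped * 1000.0 / offered_rate_pps

def tbm_failover_ms(last_before_ms: float, first_after_ms: float) -> float:
    """Timestamp Based Method: difference between the timestamp of the
    last packet received before the Failover Event and that of the
    first packet received after it."""
    return first_after_ms - last_before_ms

print(plbm_failover_ms(4_300, 100_000))   # 43.0
print(tbm_failover_ms(1000.25, 1043.25))  # 43.0
```

The two methods agree when loss is continuous during the failover;
TBM additionally exposes impairments (such as those during Reversion)
that a pure loss count cannot.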
6. Reference Test Setup
In addition to the general reference topology shown in Figure 1, this
section provides detailed insight into the various proposed test
setups that should be considered for comprehensively benchmarking the
failover time in different roles along the primary tunnel.

This section proposes a set of topologies that covers all the
scenarios for local protection. All of these topologies can be mapped
to the reference topology shown in Figure 1. The topologies provided
in this section refer to the testbed required to benchmark failover
time when the DUT is configured as a PLR in either the headend or
midpoint role. Provided with each topology below is the label stack
at the PLR. Penultimate Hop Popping (PHP) MAY be used and must be
reported when used.
Figures 2 through 9 are subsets of Figure 1 and use the following
convention:

a) HE is Headend
b) TE is Tail-End
c) MID is Mid point
d) MP is Merge Point
e) PLR is Point of Local Repair
f) PRI is Primary Path
g) BKP denotes Backup Path and Nodes
h) UR is Upstream Router
6.1. Link Protection

6.1.1. Link Protection - 1 hop primary (from PLR) and 1 hop backup TE
       tunnels
    +-------+    +--------+    +--------+
    | R1    |    | R2     | PRI| R3     |
    | UR/HE |----| HE/MID |----| MP/TE  |
    |       |    | PLR    |----|        |
    +-------+    +--------+ BKP+--------+

                  Figure 2.
    Traffic              Num of labels     Num of labels
                         before failure    after failure
    IP TRAFFIC (P-P)          0                 0
    Layer3 VPN (PE-PE)        1                 1
    Layer3 VPN (PE-P)         2                 2
    Layer2 VC (PE-PE)         1                 1
    Layer2 VC (PE-P)          2                 2
    Mid-point LSPs            0                 0
Please note the following:

a) For the P-P case, R2 and R3 act as P routers.
b) For the PE-PE case, R2 acts as PE and R3 acts as a remote PE.
c) For the PE-P case, R2 acts as a PE router, R3 acts as a P router,
   and R5 acts as a remote PE router (please refer to Figure 1 for
   the complete setup).
d) For the Mid-point case, R1, R2 and R3 act as HE, Midpoint/PLR,
   and TE respectively, as shown in the figure above.
6.1.2. Link Protection - 1 hop primary (from PLR) and 2 hop backup TE
       tunnels
    +-------+    +--------+    +--------+
    | R1    |    | R2     |PRI | R3     |
    | UR/HE |    | HE/MID |----| MP/TE  |
    |       |----| PLR    |    |        |
    +-------+    +--------+    +--------+
                    |BKP           |
                    |  +--------+  |
                    |  | R6     |  |
                    ---| BKP    |---
                       | MID    |
                       +--------+

                  Figure 3.
    Traffic              Num of labels     Num of labels
                         before failure    after failure
    IP TRAFFIC (P-P)          0                 1
    Layer3 VPN (PE-PE)        1                 2
    Layer3 VPN (PE-P)         2                 3
    Layer2 VC (PE-PE)         1                 2
    Layer2 VC (PE-P)          2                 3
    Mid-point LSPs            0                 1
Please note the following:

a) For the P-P case, R2 and R3 act as P routers.
b) For the PE-PE case, R2 acts as PE and R3 acts as a remote PE.
c) For the PE-P case, R2 acts as a PE router, R3 acts as a P router,
   and R5 acts as a remote PE router (please refer to Figure 1 for
   the complete setup).
d) For the Mid-point case, R1, R2 and R3 act as HE, Midpoint/PLR,
   and TE respectively, as shown in the figure above.

6.1.3. Link Protection - 2+ hops (from PLR) primary and 1 hop backup TE
       tunnels
    +--------+    +--------+PRI +--------+PRI   +--------+
    | R1     |    | R2     |    | R3     |      | R4     |
    | UR/HE  |----| HE/MID |----| MP/MID |------| TE     |
    |        |    | PLR    |----|        |      |        |
    +--------+    +--------+ BKP+--------+      +--------+

                  Figure 4.
    Traffic              Num of labels     Num of labels
                         before failure    after failure
    IP TRAFFIC (P-P)          1                 1
    Layer3 VPN (PE-PE)        2                 2
    Layer3 VPN (PE-P)         3                 3
    Layer2 VC (PE-PE)         2                 2
    Layer2 VC (PE-P)          3                 3
    Mid-point LSPs            1                 1
Please note the following:

a) For the P-P case, R2, R3 and R4 act as P routers.
b) For the PE-PE case, R2 acts as PE and R4 acts as a remote PE.
c) For the PE-P case, R2 acts as a PE router, R3 acts as a P router,
   and R5 acts as a remote PE router (please refer to Figure 1 for
   the complete setup).
d) For the Mid-point case, R1, R2, R3 and R4 act as HE, Midpoint/PLR,
   MID, and TE respectively, as shown in the figure above.
6.1.4. Link Protection - 2+ hop (from PLR) primary and 2 hop backup TE
       tunnels
    +--------+    +--------+PRI +--------+PRI   +--------+
    | R1     |    | R2     |    | R3     |      | R4     |
    | UR/HE  |----| HE/MID |----| MP/MID |------| TE     |
    |        |    | PLR    |    |        |      |        |
    +--------+    +--------+    +--------+      +--------+
                  BKP|              |
                     |  +--------+  |
                     |  | R6     |  |
                     ---| BKP    |---
                        | MID    |
                        +--------+

                  Figure 5.
    Traffic              Num of labels     Num of labels
                         before failure    after failure
    IP TRAFFIC (P-P)          1                 2
    Layer3 VPN (PE-PE)        2                 3
    Layer3 VPN (PE-P)         3                 4
    Layer2 VC (PE-PE)         2                 3
    Layer2 VC (PE-P)          3                 4
    Mid-point LSPs            1                 2
Please note the following:

a) For the P-P case, R2, R3 and R4 act as P routers.
b) For the PE-PE case, R2 acts as PE and R4 acts as a remote PE.
c) For the PE-P case, R2 acts as a PE router, R3 acts as a P router,
   and R5 acts as a remote PE router (please refer to Figure 1 for
   the complete setup).
d) For the Mid-point case, R1, R2, R3 and R4 act as HE, Midpoint/PLR,
   MID, and TE respectively, as shown in the figure above.
6.2. Node Protection

6.2.1. Node Protection - 2 hop primary (from PLR) and 1 hop backup TE
       tunnels
    +--------+    +--------+    +--------+      +--------+
    | R1     |    | R2     |PRI | R3     | PRI  | R4     |
    | UR/HE  |----| HE/MID |----| MID    |------| MP/TE  |
    |        |    | PLR    |    |        |      |        |
    +--------+    +--------+    +--------+      +--------+
                     |BKP                          |
                     -------------------------------

                  Figure 6.
    Traffic              Num of labels     Num of labels
                         before failure    after failure
    IP TRAFFIC (P-P)          1                 0
    Layer3 VPN (PE-PE)        2                 1
    Layer3 VPN (PE-P)         3                 2
    Layer2 VC (PE-PE)         2                 1
    Layer2 VC (PE-P)          3                 2
    Mid-point LSPs            1                 0
Please note the following:

a) For the P-P case, R2, R3 and R4 act as P routers.
b) For the PE-PE case, R2 acts as PE and R4 acts as a remote PE.
c) For the PE-P case, R2 acts as a PE router, R4 acts as a P router,
   and R5 acts as a remote PE router (please refer to Figure 1 for
   the complete setup).
d) For the Mid-point case, R1, R2, R3 and R4 act as HE, Midpoint/PLR,
   MID, and TE respectively, as shown in the figure above.
6.2.2. Node Protection - 2 hop primary (from PLR) and 2 hop backup TE
       tunnels
    +--------+    +--------+    +--------+    +--------+
    | R1     |    | R2     |PRI | R3     |PRI | R4     |
    | UR/HE  |    | HE/MID |----| MID    |----| MP/TE  |
    |        |----| PLR    |    |        |    |        |
    +--------+    +--------+    +--------+    +--------+
                     |                           |
                  BKP|      +--------+           |
                     |      | R6     |           |
                     -------| BKP    |------------
                            | MID    |
                            +--------+

                  Figure 7.
    Traffic              Num of labels     Num of labels
                         before failure    after failure
    IP TRAFFIC (P-P)          1                 1
    Layer3 VPN (PE-PE)        2                 2
    Layer3 VPN (PE-P)         3                 3
    Layer2 VC (PE-PE)         2                 2
    Layer2 VC (PE-P)          3                 3
    Mid-point LSPs            1                 1
Please note the following:

a) For the P-P case, R2, R3 and R4 act as P routers.
b) For the PE-PE case, R2 acts as PE and R4 acts as a remote PE.
c) For the PE-P case, R2 acts as a PE router, R4 acts as a P router,
   and R5 acts as a remote PE router (please refer to Figure 1 for
   the complete setup).
d) For the Mid-point case, R1, R2, R3 and R4 act as HE, Midpoint/PLR,
   MID, and TE respectively, as shown in the figure above.
6.2.3. Node Protection - 3+ hop primary (from PLR) and 1 hop backup TE
       tunnels
    +--------+  +--------+PRI+--------+PRI+--------+PRI+--------+
    | R1     |  | R2     |   | R3     |   | R4     |   | R5     |
    | UR/HE  |--| HE/MID |---| MID    |---| MP     |---| TE     |
    |        |  | PLR    |   |        |   |        |   |        |
    +--------+  +--------+   +--------+   +--------+   +--------+
                   BKP|                      |
                      ------------------------

                  Figure 8.

    Traffic              Num of labels     Num of labels
                         before failure    after failure
    IP TRAFFIC (P-P)          1                 1
    Layer3 VPN (PE-PE)        2                 2
    Layer3 VPN (PE-P)         3                 3
    Layer2 VC (PE-PE)         2                 2
    Layer2 VC (PE-P)          3                 3
    Mid-point LSPs            1                 1

Please note the following:

a) For the P-P case, R2, R3, R4 and R5 act as P routers.
b) For the PE-PE case, R2 acts as PE and R5 acts as a remote PE.
c) For the PE-P case, R2 acts as a PE router, R4 acts as a P router,
   and R5 acts as a remote PE router (please refer to Figure 1 for
   the complete setup).
d) For the Mid-point case, R1, R2, R3, R4 and R5 act as HE,
   Midpoint/PLR, MID, MP, and TE respectively, as shown in the figure
   above.
6.2.4. Node Protection - 3+ hop primary (from PLR) and 2 hop backup TE
       tunnels
    +--------+  +--------+   +--------+   +--------+   +--------+
    | R1     |  | R2     |PRI| R3     |PRI| R4     |PRI| R5     |
    | UR/HE  |  | HE/MID |---| MID    |---| MP     |---| TE     |
    |        |--| PLR    |   |        |   |        |   |        |
    +--------+  +--------+   +--------+   +--------+   +--------+
                   BKP|                      |
                      |     +--------+       |
                      |     | R6     |       |
                      ------| BKP    |--------
                            | MID    |
                            +--------+

                  Figure 9.
    Traffic              Num of labels     Num of labels
                         before failure    after failure
    IP TRAFFIC (P-P)          1                 2
    Layer3 VPN (PE-PE)        2                 3
    Layer3 VPN (PE-P)         3                 4
    Layer2 VC (PE-PE)         2                 3
    Layer2 VC (PE-P)          3                 4
    Mid-point LSPs            1                 2
Please note the following:

a) For the P-P case, R2, R3, R4 and R5 act as P routers.
b) For the PE-PE case, R2 acts as PE and R5 acts as a remote PE.
c) For the PE-P case, R2 acts as a PE router, R4 acts as a P router,
   and R5 acts as a remote PE router (please refer to Figure 1 for
   the complete setup).
d) For the Mid-point case, R1, R2, R3, R4 and R5 act as HE,
   Midpoint/PLR, MID, MP, and TE respectively, as shown in the figure
   above.
7. Test Methodology
The procedure described in this section can be applied to all 8 base
test cases and the associated topologies. The backup as well as the
primary tunnels are configured to be alike in terms of bandwidth
usage. In order to benchmark failover with all label stack depths
applicable to current deployments, it is RECOMMENDED to perform all
of the test cases provided in this section. The forwarding
performance test cases in section 7.1 MUST be performed prior to
performing the failover test cases.
The considerations of Section 4 of [RFC2544] are applicable when
evaluating the results obtained using these methodologies as well.
7.1. MPLS FRR Forwarding Performance
Benchmarking Failover Time [RFC6414] for MPLS protection first
requires a baseline measurement of the forwarding performance of the
test topology, including the DUT. Forwarding performance is
benchmarked by the Throughput as defined in [RFC5695] and measured in
units of packets per second (pps). This section provides two test
cases to benchmark forwarding performance: one with the DUT
configured as a Headend PLR and one with the DUT configured as a
Mid-Point PLR.
7.1.1. Headend PLR Forwarding Performance

Objective:

To benchmark the maximum rate (pps) on the PLR (as headend) over the
primary LSP and the backup LSP.
Test Setup:

A. Select any one topology out of the 8 from section 6.

B. Select or enable IP, Layer 3 VPN, or Layer 2 VPN services with
   the DUT as Headend PLR.

C. The DUT will also have 2 interfaces connected to the traffic
   generator/analyzer. (If the node downstream of the PLR is not a
   simulated node, then the ingress of the tunnel should have one
   link connected to the traffic generator, and the node downstream
   of the PLR or the egress of the tunnel should have a link
   connected to the traffic analyzer.)
Procedure:
1. Establish the primary LSP on R1 required by the topology
   selected.
2. Establish the backup LSP on R2 required by the selected
   topology.

3. Verify primary and backup LSPs are up and that primary is
   protected.

4. Verify Fast Reroute protection is enabled and ready.

5. Setup traffic streams as described in section 5.7.

6. Send the required MPLS traffic over the primary LSP to achieve
   the Throughput supported by the DUT (section 6, [RFC2544]).
7. Record the Throughput over the primary LSP.

8. Trigger a link failure as described in section 5.1.

9. Verify that the offered load gets mapped to the backup tunnel
   and measure the Additive Backup Delay [RFC6414].

10. 30 seconds after Failover, stop the offered load and measure
    the Throughput, Packet Loss, Out-of-Order Packets, and
    Duplicate Packets over the Backup LSP.

11. Adjust the offered load and repeat steps 6 through 10 until
    the Throughput values for the primary and backup LSPs are
    equal.

12. Record the final Throughput, which corresponds to the offered
    load that will be used for the Headend PLR failover test
    cases.
7.1.2. Mid-Point PLR Forwarding Performance

Objective:

To benchmark the maximum rate (pps) on the PLR (as mid-point) over
the primary LSP and the backup LSP.

Test Setup:

A. Select any one topology out of the 8 from section 6.

B. The DUT will also have 2 interfaces connected to the traffic
   generator.

Procedure:

1. Establish the primary LSP on R1 required by the topology
   selected.

2. Establish the backup LSP on R2 required by the selected
   topology.

3. Verify primary and backup LSPs are up and that primary is
   protected.

4. Verify Fast Reroute protection is enabled and ready.

5. Setup traffic streams as described in section 5.7.

6. Send MPLS traffic over the primary LSP at the Throughput
   supported by the DUT (section 6, [RFC2544]).

7. Record the Throughput over the primary LSP.

8. Trigger a link failure as described in section 5.1.

9. Verify that the offered load gets mapped to the backup tunnel
   and measure the Additive Backup Delay [RFC6414].

10. 30 seconds after Failover, stop the offered load and measure
    the Throughput, Packet Loss, Out-of-Order Packets, and
    Duplicate Packets over the Backup LSP.

11. Adjust the offered load and repeat steps 6 through 10 until
    the Throughput values for the primary and backup LSPs are
    equal.

12. Record the final Throughput, which corresponds to the offered
    load that will be used for the Mid-Point PLR failover test
    cases.
7.2. Headend PLR with Link Failure

Objective:

To benchmark the MPLS failover time due to link failure events
described in section 5.1 experienced by the DUT, which is the
Headend PLR.
Test Setup:

A. Select any one topology out of the 8 from section 6.

B. Select or enable IP, Layer 3 VPN, or Layer 2 VPN services with
   the DUT as Headend PLR.

C. The DUT will also have 2 interfaces connected to the traffic
   generator/analyzer. (If the node downstream of the PLR is not a
   simulated node, then the ingress of the tunnel should have one
   link connected to the traffic generator, and the node downstream
   of the PLR or the egress of the tunnel should have a link
   connected to the traffic analyzer.)
Test Configuration:
3. Verify primary and backup LSPs are up and that primary is
   protected.

4. Verify Fast Reroute protection is enabled and ready.

5. Setup traffic streams for the offered load as described in
   section 5.7.

6. Provide the offered load from the tester at the Throughput
   [RFC1242] level obtained from test case 7.1.1.

7. Verify traffic is switched over the Primary LSP without packet
   loss.

8. Trigger a link failure as described in section 5.1.

9. Verify that the offered load gets mapped to the backup tunnel
   and measure the Additive Backup Delay.

10. 30 seconds after Failover [RFC6414], stop the offered load and
    measure the total Failover Packet Loss [RFC6414].

11. Calculate the Failover Time [RFC6414] benchmark using the
    selected Failover Time Calculation Method (TBLM, PLBM, or TBM)
    [RFC6414].
12. Restart the offered load and restore the primary LSP to 12. Restart the offered load and restore the primary LSP to
verify Reversion [TERM-ID] occurs and measure the Reversion verify Reversion [RFC6414] occurs and measure the Reversion
Packet Loss [TERM-ID]. Packet Loss [RFC6414].
13. Calculate the Reversion Time [TERM-ID] benchmark using the 13. Calculate the Reversion Time [RFC6414] benchmark using the
selected Failover Time Calculation Method (TBLM, PLBM, or selected Failover Time Calculation Method (TBLM, PLBM, or
TBM) [TERM-ID]. TBM) [RFC6414].
14. Verify Headend signals new LSP and protection should be in 14. Verify Headend signals new LSP and protection should be in
place again. place again.
   It is RECOMMENDED that this procedure be repeated for each of the
   link failure triggers defined in section 5.1.
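   The loss accounting behind steps 10 and 11 can be sketched in a few
   lines. The sketch below assumes the Packet-Loss Based Method (PLBM)
   is the selected calculation method; the offered rate and counter
   values are invented for illustration, and the functions are local
   stand-ins rather than any real tester API.

   ```python
   # Sketch of steps 10-11, assuming PLBM is the selected method.
   # All numbers below are illustrative only.

   OFFERED_RATE_PPS = 100_000  # Throughput level from test case 7.1.1 (assumed)

   def failover_packet_loss(tx_count: int, rx_count: int) -> int:
       """Failover Packet Loss [RFC6414]: packets offered but never received."""
       return tx_count - rx_count

   def failover_time_plbm_ms(lost_packets: int, offered_pps: int) -> float:
       """PLBM: packets dropped divided by the offered rate, in milliseconds."""
       return lost_packets / offered_pps * 1000.0

   # Simulated tester counters read 30 seconds after the Failover event:
   tx = 30 * OFFERED_RATE_PPS        # 3,000,000 packets offered
   rx = tx - 4_500                   # 4,500 packets lost during failover
   loss = failover_packet_loss(tx, rx)
   print(failover_time_plbm_ms(loss, OFFERED_RATE_PPS))  # 45.0
   ```

   The same two functions apply unchanged to step 13, with the
   Reversion Packet Loss substituted for the Failover Packet Loss.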
7.3. Mid-Point PLR with Link Failure

   Objective:

   To benchmark the MPLS failover time due to link failure events
   described in section 5.1 experienced by the DUT which is the Mid-
   Point PLR.

   Test Setup:

   A. Select any one topology out of the 8 from section 6.
   B. The DUT will also have 2 interfaces connected to the traffic
      generator.
   Test Configuration:

   1. Configure the number of primaries on R1 and the backups on R2
      as required by the topology selected.
   2. Configure the test setup to support Reversion.
   3. Advertise prefixes (as per FRR Scalability Table described in
7.4. Headend PLR with Node Failure

   Objective:

   To benchmark the MPLS failover time due to Node failure events
   described in section 5.1 experienced by the DUT which is the
   Headend PLR.

   Test Setup:

   A. Select any one topology out of the 8 from section 6.
   B. Select or enable IP, Layer 3 VPN or Layer 2 VPN services with
      the DUT as Headend PLR.
   C. The DUT will also have 2 interfaces connected to the traffic
      generator/analyzer.
   Test Configuration:

   1. Configure the number of primaries on R2 and the backups on R2
      as required by the topology selected.
   2. Configure the test setup to support Reversion.
   1. Establish the primary LSP on R2 required by the topology
      selected.
   2. Establish the backup LSP on R2 required by the selected
      topology.
   3. Verify primary and backup LSPs are up and that primary is
      protected.
   4. Verify Fast Reroute protection is enabled and ready.
   5. Setup traffic streams for the offered load as described in
      section 5.7.
   6. Provide the offered load from the tester at the Throughput
      [RFC1242] level obtained from test case 7.1.1.
   7. Verify traffic is switched over the Primary LSP without packet
      loss.
   8. Trigger a node failure as described in section 5.1.
   9. Perform steps 9 through 14 in 7.2 Headend PLR with Link
      Failure.
   It is RECOMMENDED that this procedure be repeated for each of the
   node failure triggers defined in section 5.1.
   Objective:

   To benchmark the MPLS failover time due to Node failure events
   described in section 5.1 experienced by the DUT which is the Mid-
   Point PLR.

   Test Setup:

   A. Select any one topology from section 6.1 to 6.2.
   B. The DUT will also have 2 interfaces connected to the traffic
      generator.
   Test Configuration:

   1. Configure the number of primaries on R1 and the backups on R2
      as required by the topology selected.
   2. Configure the test setup to support Reversion.
   3. Advertise prefixes (as per FRR Scalability Table described in
   1. Establish the primary LSP on R1 required by the topology
      selected.
   2. Establish the backup LSP on R2 required by the selected
      topology.
   3. Verify primary and backup LSPs are up and that primary is
      protected.
   4. Verify Fast Reroute protection is enabled and ready.
   5. Setup traffic streams for the offered load as described in
      section 5.7.
   6. Provide the offered load from the tester at the Throughput
      [RFC1242] level obtained from test case 7.1.1.
   7. Verify traffic is switched over the Primary LSP without packet
      loss.
   8. Trigger a node failure as described in section 5.1.
   9. Perform steps 9 through 14 in 7.2 Headend PLR with Link
      Failure.
   It is RECOMMENDED that this procedure be repeated for each of the
   node failure triggers defined in section 5.1.
8. Reporting Format

   For each test, it is RECOMMENDED that the results be reported in the
   following format.

   Parameter                          Units

   IGP used for the test              ISIS-TE/ OSPF-TE
   Interface types                    GigE, POS, ATM, VLAN etc.
   Packet Sizes offered to the DUT    Bytes (at layer 3)
   Offered Load (Throughput)          packets per second
   IGP routes advertised              Number of IGP routes
   Penultimate Hop Popping            Used/Not Used
   RSVP hello timers                  Milliseconds
   Number of Protected tunnels        Number of tunnels
   Number of VPN routes installed     Number of VPN routes
     on the Headend
   Failover Time Calculation Method   Method Used

   Reversion -

   Reversion Time                     seconds
   Reversion Packet Loss              packets
   Additive Backup Delay              seconds
   Out-of-Order Packets               packets
   Duplicate Packets                  packets
   Failover Time Calculation Method   Method Used
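   One convenient way to capture this reporting format in an automated
   test harness is a structured record. The sketch below mirrors the
   parameter list above; the field names, the millisecond units for the
   time benchmarks, and every value are a local, illustrative choice
   rather than anything mandated by this document.

   ```python
   # Illustrative result record mirroring the reporting-format
   # parameters above; all values are examples.
   report = {
       "igp": "OSPF-TE",                  # ISIS-TE or OSPF-TE
       "interface_type": "GigE",          # GigE, POS, ATM, VLAN, etc.
       "packet_size_l3_bytes": 64,
       "offered_load_pps": 100_000,       # Throughput [RFC1242] level
       "igp_routes_advertised": 5000,
       "penultimate_hop_popping": "Used",
       "rsvp_hello_timer_ms": 100,
       "protected_tunnels": 100,
       "vpn_routes_on_headend": 0,
       "failover": {
           "failover_time_ms": 45.0,
           "failover_packet_loss": 4500,
           "additive_backup_delay_ms": 0.2,
           "out_of_order_packets": 0,
           "duplicate_packets": 0,
           "calculation_method": "PLBM",  # TBLM, PLBM, or TBM
       },
       "reversion": {
           "reversion_time_ms": 20.0,
           "reversion_packet_loss": 2000,
           "calculation_method": "PLBM",
       },
   }
   print(report["failover"]["failover_time_ms"])  # 45.0
   ```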
   The Failover Time suggested above is calculated using one of the
   following three methods:

   1. Packet-Loss Based Method (PLBM): (Number of packets dropped /
      packets per second) * 1000 milliseconds. This method could also
      be referred to as the Loss-Derived method.

   2. Time-Based Loss Method (TBLM): This method relies on the ability
      of the traffic generators to provide statistics which reveal the
      duration of failure in milliseconds based on when the packet loss
      occurred (interval between non-zero packet loss and zero loss).

   3. Timestamp Based Method (TBM): This method of failover calculation
      is based on the timestamp that gets transmitted as payload in the
      packets originated by the generator. The Traffic Analyzer
      records the timestamp of the last packet received before the
      failover event and the first packet after the failover, and
      derives the time from the difference between these 2 timestamps.
      Note: The payload could also contain sequence numbers for
      out-of-order and duplicate packet calculation.

   The Timestamp Based Method would be able to detect Reversion
   impairments beyond loss; thus it is the RECOMMENDED Failover Time
   method.
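   The three calculation methods reduce to small formulas. The sketch
   below assumes the analyzer exposes, respectively, loss counts, the
   timing of loss intervals, and payload timestamps; the function names
   and example numbers are illustrative only.

   ```python
   def plbm_ms(packets_dropped: int, offered_pps: int) -> float:
       # Packet-Loss Based Method: loss divided by the offered rate,
       # expressed in milliseconds.
       return packets_dropped / offered_pps * 1000.0

   def tblm_ms(loss_intervals) -> float:
       # Time-Based Loss Method: total duration (ms) of the intervals
       # in which the analyzer reported non-zero packet loss.
       return sum(end - start for start, end in loss_intervals)

   def tbm_ms(last_before_ms: float, first_after_ms: float) -> float:
       # Timestamp Based Method: gap between the payload timestamp of
       # the last packet received before the failover and that of the
       # first packet received after it.
       return first_after_ms - last_before_ms

   # Example: 4,500 packets lost at 100,000 pps => 45 ms by all three.
   print(plbm_ms(4_500, 100_000))       # 45.0
   print(tblm_ms([(1000.0, 1045.0)]))   # 45.0
   print(tbm_ms(1000.0, 1045.0))        # 45.0
   ```

   The three methods agree only when loss is continuous at the offered
   rate for the whole outage; TBM additionally exposes reordering and
   duplication when sequence numbers are carried in the payload.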
9. Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a laboratory
   environment, with dedicated address space and the constraints
   specified in the sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
   benchmarking purposes. Any implications for network security arising
   from the DUT/SUT SHOULD be identical in the lab and in production
   networks.
10. IANA Considerations

   This document does not require any new allocations by IANA.
11. Acknowledgements

   We would like to thank Jean Philip Vasseur for his invaluable input
   to the document, Curtis Villamizar for his contribution in suggesting
   text on the definition of, and the need for benchmarking, Correlated
   failures, and Bhavani Parise for his textual input and review.
   Additionally, we would like to thank Al Morton, Arun Gandhi, Amrit
   Hanspal, Karu Ratnam, Raveesh Janardan, Andrey Kiselev, and Mohan
   Nanduri for their formal reviews of this document.
12. References

12.1. Informative References

   [RFC2285]  Mandeville, R., "Benchmarking Terminology for LAN
              Switching Devices", RFC 2285, February 1998.

   [RFC4689]  Poretsky, S., Perser, J., Erramilli, S., and S. Khurana,
              "Terminology for Benchmarking Network-layer Traffic
              Control Mechanisms", RFC 4689, October 2006.

12.2. Normative References

   [RFC1242]  Bradner, S., "Benchmarking terminology for network
              interconnection devices", RFC 1242, July 1991.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, March 1999.

   [RFC4090]  Pan, P., Swallow, G., and A. Atlas, "Fast Reroute
              Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
              May 2005.

   [RFC5695]  Akhter, A., Asati, R., and C. Pignataro, "MPLS Forwarding
              Benchmarking Methodology for IP Flows", RFC 5695,
              November 2009.

   [RFC6412]  Poretsky, S., Imhoff, B., and K. Michielsen, "Terminology
              for Benchmarking Link-State IGP Data-Plane Route
              Convergence", RFC 6412, November 2011.

   [RFC6414]  Poretsky, S., Papneja, R., Karthik, J., and S. Vapiwala,
              "Benchmarking Terminology for Protection Performance",
              RFC 6414, November 2011.
Appendix A. Fast Reroute Scalability Table

   This section provides the recommended numbers for evaluating the
   scalability of fast reroute implementations. It also recommends the
   typical numbers for IGP/VPNv4 Prefixes, LSP Tunnels and VC entries.
   Based on the features supported by the device under test (DUT),
   appropriate scaling limits can be used for the test bed.

A1. FRR IGP Table
   No. of Headend TE Tunnels          IGP Prefixes
   (L)                                (R)

   1                                  100
   1                                  500
   1                                  1000
   1                                  2000
   1                                  5000
   100                                100
   500                                500
   1000                               1000
   2000                               2000
A2. FRR VPN Table

   No. of Headend TE Tunnels          VPNv4 Prefixes
   (L)                                (R)

   1                                  100
   1                                  500
   1                                  1000
   1                                  2000
   1                                  5000
   2 (Load Balance)                   Max

A3. FRR Mid-Point LSP Table

   The number of Mid-Point TE LSPs can be configured at the
   recommended levels: 100, 500, 1000, 2000, or the maximum supported
   number.

A4. FRR VC Table

   No. of Headend TE Tunnels          VC entries
   (L)                                (R)

   1                                  100
   1                                  500
   1                                  1000
   1                                  2000
   1                                  Max
   100                                100
   500                                500
   1000                               1000
   2000                               2000
Appendix B. Abbreviations

   AIS    - Alarm Indication Signal
   BFD    - Bidirectional Forwarding Detection
   BGP    - Border Gateway Protocol
   CE     - Customer Edge
   DUT    - Device Under Test
   FRR    - Fast Reroute
   IGP    - Interior Gateway Protocol
   IP     - Internet Protocol
   LOS    - Loss of Signal
   LSP    - Label Switched Path
   MP     - Merge Point
   MPLS   - Multi Protocol Label Switching
   N-Nhop - Next-Next Hop
   Nhop   - Next Hop
   OIR    - Online Insertion and Removal
   P      - Provider
   PE     - Provider Edge
   PHP    - Penultimate Hop Popping
   PLR    - Point of Local Repair
   Email: jkarthik@cisco.com

   Scott Poretsky
   Allot Communications
   USA

   Email: sporetsky@allot.com

   Shankar Rao
   Qwest Communications
   950 17th Street
   Suite 1900
   Denver, CO 80210
   USA

   Email: shankar.rao@du.edu

   JL. Le Roux
   France Telecom
   2 av Pierre Marzin
   22300 Lannion
   France

   Email: jeanlouis.leroux@orange.com