draft-ietf-bmwg-igp-dataplane-conv-meth-19.txt

Network Working Group                                        S. Poretsky
Internet-Draft                                      Allot Communications
Intended status: Informational                                 B. Imhoff
Expires: April 29, 2010                                 Juniper Networks
                                                           K. Michielsen
                                                           Cisco Systems
                                                        October 26, 2009

Benchmarking Methodology for Link-State IGP Data Plane Route Convergence
              draft-ietf-bmwg-igp-dataplane-conv-meth-19

Status of this Memo
   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.  This document may contain material
   from IETF Documents or IETF Contributions published or made publicly
   available before November 10, 2008.  The person(s) controlling the
   copyright in some of this material may not have granted the IETF
   Trust the right to allow modifications of such material outside the
   IETF Standards Process.  Without obtaining an adequate license from
   skipping to change at page 1, line 45
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.
   This Internet-Draft will expire on April 29, 2010.
Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your rights
   skipping to change at page 3, line 16
   1.  Introduction and Scope
   2.  Existing Definitions
   3.  Test Topologies
     3.1.  Test topology for local changes
     3.2.  Test topology for remote changes
     3.3.  Test topology for local ECMP changes
     3.4.  Test topology for remote ECMP changes
     3.5.  Test topology for Parallel Link changes
   4.  Convergence Time and Loss of Connectivity Period
     4.1.  Convergence Events without instant traffic loss
     4.2.  Loss of Connectivity
   5.  Test Considerations
     5.1.  IGP Selection
     5.2.  Routing Protocol Configuration
     5.3.  IGP Topology
     5.4.  Timers
     5.5.  Interface Types
     5.6.  Offered Load
     5.7.  Measurement Accuracy
     5.8.  Measurement Statistics
     5.9.  Tester Capabilities
   6.  Selection of Convergence Time Benchmark Metrics and Methods
     6.1.  Loss-Derived Method
       6.1.1.  Tester Capabilities
       6.1.2.  Benchmark Metrics
       6.1.3.  Measurement Accuracy
     6.2.  Rate-Derived Method
       6.2.1.  Tester Capabilities
       6.2.2.  Benchmark Metrics
       6.2.3.  Measurement Accuracy
     6.3.  Route-Specific Loss-Derived Method
       6.3.1.  Tester Capabilities
       6.3.2.  Benchmark Metrics
       6.3.3.  Measurement Accuracy
   7.  Reporting Format
   8.  Test Cases
     8.1.  Interface failures
       8.1.1.  Convergence Due to Local Interface Failure
       8.1.2.  Convergence Due to Remote Interface Failure
       8.1.3.  Convergence Due to ECMP Member Local Interface Failure
       8.1.4.  Convergence Due to ECMP Member Remote Interface Failure
       8.1.5.  Convergence Due to Parallel Link Interface Failure
     8.2.  Other failures
       8.2.1.  Convergence Due to Layer 2 Session Loss
       8.2.2.  Convergence Due to Loss of IGP Adjacency
       8.2.3.  Convergence Due to Route Withdrawal
     8.3.  Administrative changes
       8.3.1.  Convergence Due to Local Administrative Shutdown
       8.3.2.  Convergence Due to Cost Change
   9.  Security Considerations
   10. IANA Considerations
   11. Acknowledgements
   12. Normative References
   Authors' Addresses
1.  Introduction and Scope

   This document describes the methodology for benchmarking Link-State
   Interior Gateway Protocol (IGP) convergence.  The motivation and
   applicability for this benchmarking is described in [Po09a].  The
   terminology to be used for this benchmarking is described in [Po09t].

   IGP convergence time is measured on the data plane at the Tester by
   skipping to change at page 6, line 11
   Figure 1 shows the test topology to measure IGP convergence time due
   to local Convergence Events such as Local Interface failure
   (Section 8.1.1), layer 2 session failure (Section 8.2.1), and IGP
   adjacency failure (Section 8.2.2).  This topology is also used to
   measure IGP convergence time due to the route withdrawal
   (Section 8.2.3) and route cost change (Section 8.3.2) Convergence
   Events.  IGP adjacencies MUST be established between Tester and DUT,
   one on the Preferred Egress Interface and one on the Next-Best Egress
   Interface.  For this purpose the Tester emulates two routers, each
   establishing one adjacency with the DUT.  An IGP adjacency SHOULD be
   established on the Ingress Interface between Tester and DUT.
      ---------      Ingress Interface       ----------
      |       |<--------------------------------|        |
      |       |                                 |        |
      |       |   Preferred Egress Interface    |        |
      |  DUT  |-------------------------------->| Tester |
      |       |                                 |        |
      |       |-------------------------------->|        |
      |       |   Next-Best Egress Interface    |        |
   skipping to change at page 6, line 35
3.2.  Test topology for remote changes

   Figure 2 shows the test topology to measure IGP convergence time due
   to Remote Interface failure (Section 8.1.2).  In this topology the
   two routers R1 and R2 are considered System Under Test (SUT) and
   SHOULD be identically configured devices of the same model.  IGP
   adjacencies MUST be established between Tester and SUT, one on the
   Preferred Egress Interface and one on the Next-Best Egress Interface.
   For this purpose the Tester emulates one or two routers.  An IGP
   adjacency SHOULD be established on the Ingress Interface between
   Tester and SUT.  In this topology there is a possibility of a
   transient microloop between R1 and R2 during convergence.
                 ------                          ----------
                 |    |        Preferred         |        |
      ------     | R2 |------------------------->|        |
      |    |---->|    |     Egress Interface     |        |
      |    |     ------                          |        |
      | R1 |                                     | Tester |
      |    |            Next-Best                |        |
      |    |------------------------------------>|        |
      ------         Egress Interface            |        |
   skipping to change at page 7, line 31
3.3.  Test topology for local ECMP changes

   Figure 3 shows the test topology to measure IGP convergence time due
   to local Convergence Events with members of an Equal Cost Multipath
   (ECMP) set (Section 8.1.3).  In this topology, the DUT is configured
   with each egress interface as a member of a single ECMP set and the
   Tester emulates N next-hop routers, one router for each member.  IGP
   adjacencies MUST be established between Tester and DUT, one on each
   member of the ECMP set.  For this purpose each of the N routers
   emulated by the Tester establishes one adjacency with the DUT.  An
   IGP adjacency SHOULD be established on the Ingress Interface between
   Tester and DUT.
      ---------      Ingress Interface       ----------
      |       |<--------------------------------|        |
      |       |                                 |        |
      |       |      ECMP set interface 1       |        |
      |       |-------------------------------->|        |
      |  DUT  |               .                 | Tester |
      |       |               .                 |        |
      |       |               .                 |        |
   skipping to change at page 8, line 5
      ---------                                 ----------

     Figure 3: IGP convergence test topology for local ECMP change
3.4.  Test topology for remote ECMP changes

   Figure 4 shows the test topology to measure IGP convergence time due
   to remote Convergence Events with members of an Equal Cost Multipath
   (ECMP) set (Section 8.1.4).  In this topology the two routers R1 and
   R2 are considered System Under Test (SUT) and MUST be identically
   configured devices of the same model.  Router R1 is configured with
   each egress interface as a member of a single ECMP set and the Tester
   emulates N next-hop routers, one router for each member.  IGP
   adjacencies MUST be established between Tester and SUT, one on each
   egress interface of SUT.  For this purpose each of the N routers
   emulated by the Tester establishes one adjacency with the SUT.  An
   IGP adjacency SHOULD be established on the Ingress Interface between
   Tester and SUT.  In this topology there is a possibility of a
   transient microloop between R1 and R2 during convergence.
                          ------                 ----------
                          |    |                 |        |
      ------   ECMP set   | R2 |---------------->|        |
      |    |------------->|    |                 |        |
      |    |  Interface 1 ------                 |        |
      |    |                                     |        |
      |    |        ECMP set interface 2         |        |
   skipping to change at page 8, line 44
3.5.  Test topology for Parallel Link changes

   Figure 5 shows the test topology to measure IGP convergence time due
   to local Convergence Events with members of a Parallel Link
   (Section 8.1.5).  In this topology, the DUT is configured with each
   egress interface as a member of a Parallel Link and the Tester
   emulates the single next-hop router.  IGP adjacencies MUST be
   established on all N members of the Parallel Link between Tester and
   DUT.  For this purpose the router emulated by the Tester establishes
   N adjacencies with the DUT.  An IGP adjacency SHOULD be established
   on the Ingress Interface between Tester and DUT.
      ---------      Ingress Interface       ----------
      |       |<--------------------------------|        |
      |       |                                 |        |
      |       |   Parallel Link Interface 1     |        |
      |       |-------------------------------->|        |
      |  DUT  |               .                 | Tester |
      |       |               .                 |        |
      |       |               .                 |        |
      |       |-------------------------------->|        |
   skipping to change at page 10, line 5
   has a direct impact on the network user's application performance.
   In general the Route Convergence time is larger than or equal to the
   Route Loss of Connectivity Period.  Depending on which Convergence
   Event occurs and how this Convergence Event is applied, traffic for a
   route may still be forwarded over the Preferred Egress Interface
   after the Convergence Event Instant, before converging to the Next-
   Best Egress Interface.  In that case the Route Loss of Connectivity
   Period is shorter than the Route Convergence time.

   At least one condition needs to be fulfilled for Route Convergence
   time to be equal to Route Loss of Connectivity Period.  The condition
   is that the Convergence Event causes an instantaneous traffic loss
   for the measured route.  A fiber cut on the Preferred Egress
   Interface is an example of such a Convergence Event.

   A second condition applies to Route Convergence time measurements
   based on Connectivity Packet Loss [Po09t].  This second condition is
   that there is only a single Loss Period during Route Convergence.
   For the test cases described in this document this is expected to be
   the case.
4.1.  Convergence Events without instant traffic loss

   To measure convergence time benchmarks for Convergence Events caused
   by the Tester, such as an IGP cost change, the Tester MAY start to
   discard all traffic received from the Preferred Egress Interface at
   the Convergence Event Instant, or MAY separately observe packets
   received from the Preferred Egress Interface prior to the Convergence
   Event Instant.  This way these Convergence Events can be treated the
   same as Convergence Events that cause instantaneous traffic loss.

   To measure convergence time benchmarks without instantaneous traffic
   loss (either real or induced by the Tester) at the Convergence Event
   Instant, such as a reversion of a link failure Convergence Event, the
   Tester SHALL only observe packet statistics on the Next-Best Egress
   Interface.  If using the Rate-Derived Method to benchmark convergence
   times for such Convergence Events, the Tester MUST collect a
   timestamp at the Convergence Event Instant.  If using a loss-derived
   method to benchmark convergence times for such Convergence Events,
   the Tester MUST measure the period in time between the Traffic Start
   Instant and the Convergence Event Instant.  To measure this period
   the Tester can collect timestamps at the Traffic Start Instant and
   the Convergence Event Instant.
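   As an illustration of the Rate-Derived timestamp handling described
   above, the following sketch derives a convergence time from sampled
   receive rates on the Next-Best Egress Interface and a timestamp
   collected at the Convergence Event Instant.  This is not part of the
   methodology; the function and parameter names are hypothetical.

```python
def rate_derived_convergence_time(samples, offered_load_pps,
                                  convergence_event_instant):
    """Time from the Convergence Event Instant until the receive rate
    on the Next-Best Egress Interface first reaches the offered load.

    samples                   -- list of (timestamp, rx_rate_pps) tuples
                                 observed on the Next-Best Egress Interface
    offered_load_pps          -- constant offered load in packets per second
    convergence_event_instant -- timestamp collected by the Tester at the
                                 Convergence Event Instant
    """
    for ts, rate in samples:
        # The first sample at or after the Convergence Event Instant
        # whose rate matches the offered load marks full convergence.
        if ts >= convergence_event_instant and rate >= offered_load_pps:
            return ts - convergence_event_instant
    return None  # convergence not observed within the sample window
```

   For example, with rate samples taken once per second and a
   Convergence Event applied at t = 1 s, the function returns the delay
   until the Next-Best Egress Interface carries the full offered load.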
   The Convergence Event Instant together with the receive rate
   observations on the Next-Best Egress Interface allow the Tester to
   derive the convergence time benchmarks using the Rate-Derived Method
   [Po09t].
   By observing lost packets on the Next-Best Egress Interface only, the
   observed packet loss is the number of lost packets between the
   Traffic Start Instant and the Convergence Recovery Instant.  To
   measure convergence times using a loss-derived method, the packet
   loss between the Convergence Event Instant and the Convergence
   Recovery Instant is needed.  The time between the Traffic Start
   Instant and the Convergence Event Instant must be accounted for.  An
   example may clarify this.
   Figure 6 illustrates a Convergence Event without instantaneous
   traffic loss for all routes.  The top graph shows the Forwarding Rate
   over all routes, the bottom graph shows the Forwarding Rate for a
   single route Rta.  Some time after the Convergence Event Instant,
   Forwarding Rate observed on the Preferred Egress Interface starts to
   decrease.  In the example, route Rta is the first route to experience
   packet loss at time Ta.  Some time later, the Forwarding Rate
   observed on the Next-Best Egress Interface starts to increase.  In
   the example, route Rta is the first route to complete convergence at
   time Ta'.
        ^
    Fwd |
   Rate |-------------                  ............
        |             \                .
        |              \              .
        |               \            .
        |                \          .
        |.................-.-.-.-.-.-.----------------
   skipping to change at page 11, line 30
    Rta |            |               .
        |            |               .
        |.............-.-.-.-.-.-.-.-.----------------
        +----+-------+---------------+---------------->
             ^       ^               ^      ^     time
             T0     CEI              Ta     Ta'

        Preferred Egress Interface: ---
        Next-Best Egress Interface: ...
   With T0 the Traffic Start Instant; CEI the Convergence Event Instant;
   Ta the time instant traffic loss for route Rta starts; Ta' the time
   instant traffic loss for route Rta ends.

                                 Figure 6
   If only packets received on the Next-Best Egress Interface are
   observed, the duration of the packet loss period for route Rta can be
   calculated from the received packets as in Equation 1.  Since the
   Convergence Event Instant is the start time for the convergence time
   measurement, the period between T0 and CEI needs to be subtracted
   from the calculated result to obtain the convergence time, as in
   Equation 2.
      Next-Best Egress Interface packet loss period
          = (packets transmitted
             - packets received from Next-Best Egress Interface)/tx rate
          = Ta' - T0

                                Equation 1

      convergence time
          = Next-Best Egress Interface packet loss period - (CEI - T0)
          = Ta' - CEI

                                Equation 2
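   Equations 1 and 2 can be combined into a single calculation.  The
   sketch below is an illustration only, not part of the methodology;
   the function and parameter names are hypothetical.

```python
def convergence_time(tx_packets, rx_packets_next_best, tx_rate, t0, cei):
    """Loss-derived convergence time when only packets received on the
    Next-Best Egress Interface are observed.

    tx_packets           -- total packets transmitted
    rx_packets_next_best -- packets received on the Next-Best Egress
                            Interface
    tx_rate              -- constant offered load in packets per second
    t0, cei              -- Traffic Start Instant (T0) and Convergence
                            Event Instant (CEI), in seconds
    """
    # Equation 1: the packet loss period observed on the Next-Best
    # Egress Interface equals Ta' - T0.
    loss_period = (tx_packets - rx_packets_next_best) / tx_rate
    # Equation 2: subtracting the time between T0 and CEI yields the
    # convergence time Ta' - CEI.
    return loss_period - (cei - t0)
```

   For example, with a 1000 packets-per-second offered load running for
   10 seconds (10000 packets), a Convergence Event at CEI = 2 s, and
   convergence completing at Ta' = 5 s, the Next-Best Egress Interface
   receives the 5000 packets sent after Ta' and the function returns a
   convergence time of 3 seconds.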
4.2. Loss of Connectivity
   Route Loss of Connectivity Period SHOULD be measured using the Route-
   Specific Loss-Derived Method.  Since the start instant and end
   instant of the Route Loss of Connectivity Period can be different for
   each route, these cannot be accurately derived by only observing
   global statistics over all routes.  An example may clarify this.

   Following a Convergence Event, route Rta is the first route for which
   packet loss starts; the Route Loss of Connectivity Period for route
   Rta starts at time Ta.  Route Rtb is the last route for which packet
   loss starts; the Route Loss of Connectivity Period for route Rtb
   skipping to change at page 13, line 22
   The two implementation variations in the above example would result
   in the same derived minimum and maximum Route Loss of Connectivity
   Periods when only observing the global packet statistics, while the
   real Route Loss of Connectivity Periods are different.
5.  Test Considerations

5.1.  IGP Selection
   The test cases described in Section 8 MAY be used for link-state
   IGPs, such as ISIS or OSPF.  The IGP convergence time test
   methodology is identical.
5.2.  Routing Protocol Configuration
   The obtained results for IGP convergence time may vary if other
   routing protocols are enabled and routes learned via those protocols
   are installed.  IGP convergence times SHOULD be benchmarked without
   routes installed from other protocols.
5.3.  IGP Topology
   The Tester emulates a single IGP topology.  The DUT establishes IGP
   adjacencies with one or more of the emulated routers in this single
   IGP topology emulated by the Tester.  See test topology details in
   Section 3.  The emulated topology SHOULD only be advertised on the
   DUT egress interfaces.
   The number of IGP routes will impact the measured IGP route
   convergence time.  To obtain results similar to those that would be
   observed in an operational network, it is RECOMMENDED that the number
   of installed routes and nodes closely approximate that of the network
   (e.g. thousands of routes with tens or hundreds of nodes).
   The number of areas (for OSPF) and levels (for ISIS) can impact the
   benchmark results.
5.4.  Timers
   There are timers that may impact the measured IGP convergence times.
   The benchmark metrics MAY be measured at any fixed values for these
   timers.  To obtain results similar to those that would be observed in
   an operational network, it is RECOMMENDED to configure the timers
      Interface failure indication
      IGP hello timer
      IGP dead-interval or hold-timer
      LSA or LSP generation delay
      LSA or LSP flood packet pacing
LSA or LSP retransmission packet pacing
      SPF delay
5.5.  Interface Types
   All test cases in this methodology document MAY be executed with any
   interface type.  The type of media may dictate which test cases may
   be executed.  Each interface type has a unique mechanism for
   detecting link failures, and the speed at which that mechanism
   operates will influence the measurement results.  All interfaces MUST
   be the same media and Throughput [Br91][Br99] for each test case.
   All interfaces SHOULD be configured as point-to-point.
5.6.  Offered Load
   The Throughput of the device, as defined in [Br91] and benchmarked in
   [Br99] at a fixed packet size, needs to be determined over the
   preferred path and over the next-best path.  The Offered Load SHOULD
   be the minimum of the measured Throughput of the device over the
   primary path and over the backup path.  The packet size is selectable
   and MUST be recorded.  Packet size is measured in bytes and includes
   the IP header and payload.
The destination addresses for the Offered Load MUST be distributed
such that all routes or a statistically representative subset of all
routes are matched and each of these routes is offered an equal share
of the Offered Load. It is RECOMMENDED to send traffic matching all
routes, but a statistically representative subset of all routes can
be used if required.
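A minimal sketch of one way to spread the Offered Load evenly over the advertised routes: destinations are cycled round-robin so that every route is matched and receives an equal share. The helper name and the example prefixes are assumptions for illustration, not from the document.

```python
from collections import Counter
from itertools import cycle

def destination_schedule(routes, num_packets):
    """Return the destination route for each offered packet, cycling
    through the route list so each route gets an equal share."""
    dst = cycle(routes)
    return [next(dst) for _ in range(num_packets)]

# Four hypothetical routes; 8 packets give each route an equal share.
routes = ["10.0.%d.0/24" % i for i in range(4)]
sched = destination_schedule(routes, 8)
print(Counter(sched))  # every route matched the same number of times
```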
   In the Remote Interface failure test cases using topologies 2 and 4
   there is a possibility of a transient microloop between R1 and R2
   during convergence.  The TTL or Hop Limit value of the packets sent
   by the Tester may influence the benchmark measurements, since it
   determines which device in the topology may send an ICMP Time
   Exceeded Message for looped packets.
   The duration of the Offered Load MUST be greater than the convergence
   time.
5.7.  Measurement Accuracy
   Since packet loss is observed to measure the Route Convergence Time,
   the time between two successive packets offered to each individual
   route is the highest possible accuracy of any packet loss based
   measurement.  When packet jitter is much less than the convergence
   time, it is a negligible source of error and therefore it will be
   ignored here.
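With a round-robin destination schedule, this accuracy floor is simply the number of routes divided by the aggregate Offered Load. A hedged sketch (the function name and the figures are assumptions):

```python
def per_route_packet_interval(num_routes, offered_load_pps):
    """Seconds between two successive packets offered to one route:
    the finest accuracy any packet-loss based measurement can reach."""
    return num_routes / offered_load_pps

# 10,000 routes at an aggregate 1,000,000 packets/s ->
# one packet per route every 10 ms, so a 10 ms accuracy floor.
print(per_route_packet_interval(10000, 1_000_000))  # 0.01
```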
5.8.  Measurement Statistics
   The benchmark measurements may vary for each trial, due to the
   statistical nature of timer expirations, CPU scheduling, etc.
   Evaluation of the test data must be done with an understanding of
   generally accepted testing practices regarding repeatability,
   variance, and statistical significance of a small number of trials.
   4.  Ability to distinguish traffic load received on the Preferred and
       Next-Best Interfaces [Po09t].

   5.  Ability to disable or tune specific Layer-2 and Layer-3 protocol
       functions on any interface(s).
   The Tester MAY be capable of making non-data plane convergence
   observations and using those observations for measurements.  The
   Tester MAY be capable of sending and receiving multiple traffic
   Streams [Po06].
Also see Section 6 for method-specific capabilities.
6.  Selection of Convergence Time Benchmark Metrics and Methods
   Different convergence time benchmark methods MAY be used to measure
   convergence time benchmark metrics.  The Tester capabilities are
   important criteria to select a specific convergence time benchmark
   method.  The criteria to select a specific benchmark method include,
   but are not limited to:
      Tester capabilities: Sampling Interval, number of Stream
         statistics to collect
      DUT capabilities: Throughput
6.1.  Loss-Derived Method

6.1.1.  Tester Capabilities
   The Offered Load SHOULD consist of a single Stream [Po06].  If
   sending multiple Streams, the measured packet loss statistics for all
   Streams MUST be added together.
The destination addresses for the Offered Load MUST be distributed
such that all routes are matched and each route is offered an equal
share of the total Offered Load.
   In order to verify Full Convergence completion and the Sustained
   Convergence Validation Time, the Tester MUST measure Forwarding Rate
   each Packet Sampling Interval.
   The total number of packets lost between the start of the traffic and
   the end of the Sustained Convergence Validation Time is used to
   calculate the Loss-Derived Convergence Time.
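The calculation above can be sketched as follows: the total packet loss is converted to time via the aggregate Offered Load. The function name and the counter values are assumptions for illustration.

```python
def loss_derived_convergence_time(total_packets_lost, offered_load_pps):
    """Average convergence time over all routes: total loss between
    the Start Traffic Instant and the end of the Sustained Convergence
    Validation Time, divided by the aggregate Offered Load."""
    return total_packets_lost / offered_load_pps

# Hypothetical trial: 50,000 packets lost at 100,000 packets/s offered.
print(loss_derived_convergence_time(total_packets_lost=50000,
                                    offered_load_pps=100000))  # 0.5
```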
6.1.2.  Benchmark Metrics
   The Loss-Derived Method can be used to measure the Loss-Derived
   Convergence Time, which is the average convergence time over all
   routes, and to measure the Loss-Derived Loss of Connectivity Period,
   which is the average Route Loss of Connectivity Period over all
   routes.
6.1.3.  Measurement Accuracy
   The measurement accuracy of the Loss-Derived Method is equal to the
   time between two consecutive packets to the same route.
6.2.  Rate-Derived Method

6.2.1.  Tester Capabilities
   The Offered Load SHOULD consist of a single Stream.  If sending
   multiple Streams, the measured traffic rate statistics for all
   Streams MUST be added together.
The destination addresses for the Offered Load MUST be distributed
such that all routes are matched and each route is offered an equal
share of the total Offered Load.
   The Tester measures Forwarding Rate each Sampling Interval.  The
   Packet Sampling Interval influences the observation of the different
   convergence time instants.  If the Packet Sampling Interval is large
   compared to the time between the convergence time instants, then the
   different time instants may not be easily identifiable from the
   Forwarding Rate observation.  The requirements for the Packet
   Sampling Interval are specified in [Po09t].  The RECOMMENDED value
   for the Packet Sampling Interval is 10 milliseconds.  The Packet
   Sampling Interval MUST be reported.
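An illustrative sketch of how the sampled Forwarding Rate might be scanned for convergence time instants: the rate first departs from and later returns to the Offered Load. The sample values, thresholds, and function name are assumptions, not from the document, and the accuracy of the returned instants is limited by the Sampling Interval.

```python
def convergence_instants(samples, interval, offered_load):
    """samples: Forwarding Rate per Packet Sampling Interval (pps).
    Returns (rate_decrease_instant, full_recovery_instant) in seconds
    relative to the first sample."""
    drop = recover = None
    for i, rate in enumerate(samples):
        if drop is None and rate < offered_load:
            drop = i * interval        # first sample below Offered Load
        elif drop is not None and recover is None and rate >= offered_load:
            recover = i * interval     # rate restored to Offered Load
    return drop, recover

# Hypothetical 10 ms samples at a 1000 pps Offered Load.
samples = [1000, 1000, 600, 600, 800, 1000, 1000]
print(convergence_instants(samples, interval=0.01, offered_load=1000))
```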
6.2.2.  Benchmark Metrics
   The Rate-Derived Method SHOULD be used to measure First Route
   Convergence Time and Full Convergence Time.  It SHOULD NOT be used to
   measure Loss of Connectivity Period (see Section 4).
6.2.3.  Measurement Accuracy
   The measurement accuracy of the Rate-Derived Method is equal to the
   Packet Sampling Interval for transitions that occur for all routes at
   the same instant; for other transitions it is equal to the Packet
   Sampling Interval plus the time between two consecutive packets to
   the same destination.  The latter is the case because packets are
   sent in a particular order to all destinations in a Stream, and when
   part of the routes experience packet loss, it is unknown where in the
   transmit cycle packets to these routes are sent.  This uncertainty
   adds to the error.
6.3.  Route-Specific Loss-Derived Method

6.3.1.  Tester Capabilities
   The Offered Load consists of multiple Streams.  The Tester MUST
   measure packet loss for each Stream separately.
   In order to verify Full Convergence completion and the Sustained
   Convergence Validation Time, the Tester MUST measure packet loss each
   Packet Sampling Interval.  This measurement at each Packet Sampling
   Interval MAY be per Stream.
   Only the total packet loss measured per Stream at the end of the
   Sustained Convergence Validation Time is used to calculate the
   benchmark metrics with this method.
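A sketch of deriving Route-Specific benchmarks from such per-Stream loss counters, assuming one Stream per route: each route's convergence time is its total loss divided by that Stream's offered rate, and the reported metrics are the minimum, maximum, median, and average over all routes. The function name, stream rates, and loss counts are assumptions for illustration.

```python
from statistics import mean, median

def route_specific_times(loss_per_stream, per_stream_rate_pps):
    """Convert per-Stream packet loss (one Stream per route) into
    Route-Specific Convergence Times and summary statistics."""
    times = [loss / per_stream_rate_pps for loss in loss_per_stream]
    return {"min": min(times), "max": max(times),
            "median": median(times), "average": mean(times)}

# Four hypothetical routes, each offered 100 packets/s.
print(route_specific_times([10, 20, 30, 40], per_stream_rate_pps=100))
```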
   Parameter                              Units
   -------------------------------------  -----------------------
   Test Case                              test case number
   Test Topology                          (1, 2, 3, 4, or 5)
   IGP                                    (ISIS, OSPF, other)
   Interface Type                         (GigE, POS, ATM, other)
   Packet Size offered to DUT             bytes
   Offered Load                           packets per second
   IGP Routes advertised to DUT           number of IGP routes
   Nodes in emulated network              number of nodes
   Packet Sampling Interval on Tester     seconds
   Forwarding Delay Threshold             seconds

   Timer Values configured on DUT:
      Interface failure indication delay  seconds
      IGP Hello Timer                     seconds
      IGP Dead-Interval or hold-time      seconds
      LSA Generation Delay                seconds
      LSA Flood Packet Pacing             seconds
      LSA Retransmission Packet Pacing    seconds
      SPF Delay                           seconds
Test Details:
If the Offered Load matches a subset of routes, describe how this
subset is selected.
      Describe how the Convergence Event is applied, and indicate
      whether or not it causes instantaneous traffic loss.
   Complete the table below for the initial Convergence Event and the
   reversion Convergence Event.
   Parameter                                   Units
   ------------------------------------------  ----------------------
   Convergence Event                           (initial or reversion)

   Traffic Forwarding Metrics:
      Total number of packets offered to DUT   number of Packets
      Total number of packets forwarded by DUT number of Packets
      Out-of-Order Packets                     number of Packets
      Duplicate Packets                        number of Packets

   Convergence Benchmarks:
      Rate-Derived Method:
         First Route Convergence Time          seconds
         Full Convergence Time                 seconds
      Loss-Derived Method:
         Loss-Derived Convergence Time         seconds
      Route-Specific Loss-Derived Method:
         Number of Routes Measured             number of routes
         Route-Specific Convergence Time[n]    array of seconds
         Minimum R-S Convergence Time          seconds
         Maximum R-S Convergence Time          seconds
         Median R-S Convergence Time           seconds
         Average R-S Convergence Time          seconds

   Loss of Connectivity Benchmarks:
      Loss-Derived Method:
         Loss-Derived Loss of Connectivity Period  seconds
      Route-Specific Loss-Derived Method:
         Number of Routes Measured             number of routes
         Route LoC Period[n]                   array of seconds
         Minimum Route LoC Period              seconds
         Maximum Route LoC Period              seconds
         Median Route LoC Period               seconds
         Average Route LoC Period              seconds
8.  Test Cases
   It is RECOMMENDED that all applicable test cases be performed for
   best characterization of the DUT.  The test cases follow a generic
   procedure tailored to the specific DUT configuration and Convergence
   Event [Po09t].  This generic procedure is as follows:
   1.  Establish DUT and Tester configurations and advertise an IGP
       topology from Tester to DUT.

   2.  Send Offered Load from Tester to DUT on ingress interface.

   3.  Verify traffic is routed correctly.

   4.  Introduce Convergence Event [Po09t].
   Discussion

   Configure IGP timers such that the IGP adjacency does not time out
   before layer 2 failure is detected.
   To measure convergence time, traffic SHOULD start dropping on the
   Preferred Egress Interface at the instant the layer 2 session is
   removed.  Alternatively the Tester SHOULD record the time instant the
   layer 2 session is removed and traffic loss SHOULD only be measured
   on the Next-Best Egress Interface.  For loss-derived benchmarks the
   time of the Start Traffic Instant SHOULD be recorded as well.  See
   Section 4.1.
8.2.2.  Convergence Due to Loss of IGP Adjacency

   Objective

   To obtain the IGP convergence time due to loss of an IGP Adjacency.
   Procedure

   1.  Advertise an IGP topology from Tester to DUT using the topology
       shown in Figure 1.

   2.  Send Offered Load from Tester to DUT on ingress interface.

   3.  Verify traffic is routed over Preferred Egress Interface.

   4.  Remove IGP adjacency from the Preferred Egress Interface while
       the layer 2 session MUST be maintained.  This is the Convergence
       Event.
   Discussion

   Configure layer 2 such that layer 2 does not time out before IGP
   adjacency failure is detected.
   To measure convergence time, traffic SHOULD start dropping on the
   Preferred Egress Interface at the instant the IGP adjacency is
   removed.  Alternatively the Tester SHOULD record the time instant the
   IGP adjacency is removed and traffic loss SHOULD only be measured on
   the Next-Best Egress Interface.  For loss-derived benchmarks the time
   of the Start Traffic Instant SHOULD be recorded as well.  See
   Section 4.1.
8.2.3.  Convergence Due to Route Withdrawal

   Objective

   To obtain the IGP convergence time due to route withdrawal.

   Procedure

   1.  Advertise an IGP topology from Tester to DUT using the topology
       emulated topology.  The topology SHOULD be such that before the
       withdrawal the DUT prefers the leaf routes advertised by a node
       "nodeA" via the Preferred Egress Interface, and after the
       withdrawal the DUT prefers the leaf routes advertised by a node
       "nodeB" via the Next-Best Egress Interface.
   2.  Send Offered Load from Tester to DUT on Ingress Interface.

   3.  Verify traffic is routed over Preferred Egress Interface.
   4.  The Tester withdraws the set of IGP leaf routes from nodeA.
       This is the Convergence Event.  The withdrawal update message
       SHOULD be a single unfragmented packet.  If the routes cannot be
       withdrawn by a single packet, the messages SHOULD be sent using
       the same pacing characteristics as the DUT.  The Tester MAY
       record the time it sends the withdrawal message(s).
   5.  Measure First Route Convergence Time.

   6.  Measure Full Convergence Time.

   7.  Stop Offered Load.

   8.  Measure Route-Specific Convergence Times, Loss-Derived
       Convergence Time, Route LoC Periods, and Loss-Derived LoC
       Period.

   9.  Wait sufficient time for queues to drain.

   10. Restart Offered Load.
   11. Re-advertise the set of withdrawn IGP leaf routes from nodeA
       emulated by the Tester.  The update message SHOULD be a single
       unfragmented packet.  If the routes cannot be advertised by a
       single packet, the messages SHOULD be sent using the same pacing
       characteristics as the DUT.  The Tester MAY record the time it
       sends the update message(s).
   12. Measure First Route Convergence Time.

   13. Measure Full Convergence Time.

   14. Stop Offered Load.

   15. Measure Route-Specific Convergence Times, Loss-Derived
       Convergence Time, Route LoC Periods, and Loss-Derived LoC
       Period.
   The measured IGP convergence time is influenced by SPF or route
   calculation delay, SPF or route calculation execution time, and
   routing and forwarding tables update time [Po09a].
   Discussion
   To measure convergence time, traffic SHOULD start dropping on the
   Preferred Egress Interface at the instant the routes are withdrawn by
   the Tester.  Alternatively the Tester SHOULD record the time instant
   the routes are withdrawn and traffic loss SHOULD only be measured on
   the Next-Best Egress Interface.  For loss-derived benchmarks the time
   of the Start Traffic Instant SHOULD be recorded as well.  See
   Section 4.1.
8.3.  Administrative Changes

8.3.1.  Convergence Due to Local Administrative Shutdown
   Objective

   To obtain the IGP convergence time due to taking the DUT's Local
   Interface administratively out of service.
   The measured IGP Convergence time may be influenced by SPF delay, SPF
   execution time, and routing and forwarding tables update time
   [Po09a].
   Discussion
   To measure convergence time, traffic SHOULD start dropping on the
   Preferred Egress Interface at the instant the cost is changed by the
   Tester.  Alternatively the Tester SHOULD record the time instant the
   cost is changed and traffic loss SHOULD only be measured on the
   Next-Best Egress Interface.  For loss-derived benchmarks the time of
   the Start Traffic Instant SHOULD be recorded as well.  See
   Section 4.1.
9.  Security Considerations
   Benchmarking activities as described in this memo are limited to
   technology characterization using controlled stimuli in a laboratory
   environment, with dedicated address space and the constraints
   specified in the sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
   benchmarking purposes.  Any implications for network security arising
   from the DUT/SUT SHOULD be identical in the lab and in production
   networks.
10.  IANA Considerations

   This document requires no IANA considerations.
11.  Acknowledgements

   Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David Ward,
   Peter De Vriendt and the BMWG for their contributions to this work.