Network Working Group                                        S. Poretsky
Internet-Draft                                      Allot Communications
Intended status: Informational                                 B. Imhoff
Expires: September 9, 2010                              Juniper Networks
                                                           K. Michielsen
                                                           Cisco Systems
                                                           March 8, 2010

Benchmarking Methodology for Link-State IGP Data Plane Route Convergence
              draft-ietf-bmwg-igp-dataplane-conv-meth-20
Abstract
This document describes the methodology for benchmarking Link-State
Interior Gateway Protocol (IGP) Route Convergence. The methodology
is to be used for benchmarking IGP convergence time through
externally observable (black box) data plane measurements. The
methodology can be applied to any link-state IGP, such as ISIS and
OSPF.
Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

This Internet-Draft will expire on September 9, 2010.
Copyright Notice

Copyright (c) 2010 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the BSD License.

This document may contain material from IETF Documents or IETF
Contributions published or made publicly available before November
10, 2008. The person(s) controlling the copyright in some of this
material may not have granted the IETF Trust the right to allow
modifications of such material outside the IETF Standards Process.
Without obtaining an adequate license from the person(s) controlling
the copyright in such materials, this document may not be modified
outside the IETF Standards Process, and derivative works of it may
not be created outside the IETF Standards Process, except to format
it for publication as an RFC or to translate it into languages other
than English.
Table of Contents

   1.  Introduction and Scope . . . . . . . . . . . . . . . . . . .  5
   2.  Existing Definitions . . . . . . . . . . . . . . . . . . . .  5
   3.  Test Topologies  . . . . . . . . . . . . . . . . . . . . . .  5
     3.1.  Test topology for local changes  . . . . . . . . . . . .  6
     3.2.  Test topology for remote changes . . . . . . . . . . . .  7
     3.3.  Test topology for local ECMP changes . . . . . . . . . .  8
     3.4.  Test topology for remote ECMP changes  . . . . . . . . .  9
     3.5.  Test topology for Parallel Link changes  . . . . . . . . 10
   4.  Convergence Time and Loss of Connectivity Period . . . . . . 11
     4.1.  Convergence Events without instant traffic loss  . . . . 12
     4.2.  Loss of Connectivity . . . . . . . . . . . . . . . . . . 14
   5.  Test Considerations  . . . . . . . . . . . . . . . . . . . . 15
     5.1.  IGP Selection  . . . . . . . . . . . . . . . . . . . . . 15
     5.2.  Routing Protocol Configuration . . . . . . . . . . . . . 15
     5.3.  IGP Topology . . . . . . . . . . . . . . . . . . . . . . 15
     5.4.  Timers . . . . . . . . . . . . . . . . . . . . . . . . . 16
     5.5.  Interface Types  . . . . . . . . . . . . . . . . . . . . 16
     5.6.  Offered Load . . . . . . . . . . . . . . . . . . . . . . 16
     5.7.  Measurement Accuracy . . . . . . . . . . . . . . . . . . 17
     5.8.  Measurement Statistics . . . . . . . . . . . . . . . . . 17
     5.9.  Tester Capabilities  . . . . . . . . . . . . . . . . . . 17
   6.  Selection of Convergence Time Benchmark Metrics and Methods  18
     6.1.  Loss-Derived Method  . . . . . . . . . . . . . . . . . . 18
       6.1.1.  Tester capabilities  . . . . . . . . . . . . . . . . 18
       6.1.2.  Benchmark Metrics  . . . . . . . . . . . . . . . . . 19
       6.1.3.  Measurement Accuracy . . . . . . . . . . . . . . . . 19
     6.2.  Rate-Derived Method  . . . . . . . . . . . . . . . . . . 19
       6.2.1.  Tester Capabilities  . . . . . . . . . . . . . . . . 19
       6.2.2.  Benchmark Metrics  . . . . . . . . . . . . . . . . . 20
       6.2.3.  Measurement Accuracy . . . . . . . . . . . . . . . . 20
     6.3.  Route-Specific Loss-Derived Method . . . . . . . . . . . 21
       6.3.1.  Tester Capabilities  . . . . . . . . . . . . . . . . 21
       6.3.2.  Benchmark Metrics  . . . . . . . . . . . . . . . . . 21
       6.3.3.  Measurement Accuracy . . . . . . . . . . . . . . . . 21
   7.  Reporting Format . . . . . . . . . . . . . . . . . . . . . . 22
   8.  Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . 23
     8.1.  Interface failures . . . . . . . . . . . . . . . . . . . 24
       8.1.1.  Convergence Due to Local Interface Failure . . . . . 24
       8.1.2.  Convergence Due to Remote Interface Failure  . . . . 25
       8.1.3.  Convergence Due to ECMP Member Local Interface
               Failure  . . . . . . . . . . . . . . . . . . . . . . 27
       8.1.4.  Convergence Due to ECMP Member Remote Interface
               Failure  . . . . . . . . . . . . . . . . . . . . . . 28
       8.1.5.  Convergence Due to Parallel Link Interface Failure . 29
     8.2.  Other failures . . . . . . . . . . . . . . . . . . . . . 30
       8.2.1.  Convergence Due to Layer 2 Session Loss  . . . . . . 30
       8.2.2.  Convergence Due to Loss of IGP Adjacency . . . . . . 31
       8.2.3.  Convergence Due to Route Withdrawal  . . . . . . . . 33
     8.3.  Administrative changes . . . . . . . . . . . . . . . . . 34
       8.3.1.  Convergence Due to Local Administrative Shutdown . . 34
       8.3.2.  Convergence Due to Cost Change . . . . . . . . . . . 36
   9.  Security Considerations  . . . . . . . . . . . . . . . . . . 37
   10. IANA Considerations  . . . . . . . . . . . . . . . . . . . . 38
   11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 38
   12. Normative References . . . . . . . . . . . . . . . . . . . . 38
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 39
1. Introduction and Scope

This document describes the methodology for benchmarking Link-State
Interior Gateway Protocol (IGP) convergence. The motivation and
applicability for this benchmarking is described in [Po09a]. The
terminology to be used for this benchmarking is described in [Po09t].

IGP convergence time is measured on the data plane at the Tester by
observing packet loss through the DUT. All factors contributing to
skipping to change at page 5, line 40

document uses these keywords, this document is not a standards track
document.

This document uses much of the terminology defined in [Po09t] and
uses existing terminology defined in other BMWG work. Examples
include, but are not limited to:

   Throughput [Ref.[Br91], section 3.17]
   Device Under Test (DUT) [Ref.[Ma98], section 3.1.1]
   System Under Test (SUT) [Ref.[Ma98], section 3.1.2]
   Out-of-Order Packet [Ref.[Po06], section 3.3.4]
   Duplicate Packet [Ref.[Po06], section 3.3.5]
   Stream [Ref.[Po06], section 3.3.2]
   Loss Period [Ref.[Ko02], section 4]
   Forwarding Delay [Ref.[Po06], section 3.2.4]
   Jitter [Ref.[Po06], section 3.2.5]
3. Test Topologies

3.1. Test topology for local changes

Figure 1 shows the test topology to measure IGP convergence time due
to local Convergence Events such as Local Interface failure
(Section 8.1.1), layer 2 session failure (Section 8.2.1), and IGP
adjacency failure (Section 8.2.2). This topology is also used to
measure IGP convergence time due to the route withdrawal
(Section 8.2.3), and route cost change (Section 8.3.2) Convergence
Events. IGP adjacencies MUST be established between Tester and DUT,
one on the Preferred Egress Interface and one on the Next-Best Egress
Interface. For this purpose the Tester emulates two routers, each
establishing one adjacency with the DUT. An IGP adjacency SHOULD be
established on the Ingress Interface between Tester and DUT.
   ---------        Ingress Interface        ----------
   |       |<--------------------------------|        |
   |       |                                 |        |
   |       |   Preferred Egress Interface    |        |
   |  DUT  |-------------------------------->| Tester |
   |       |                                 |        |
   |       |-------------------------------->|        |
   |       |  Next-Best Egress Interface     |        |
   ---------                                 ----------

      Figure 1: IGP convergence test topology for local changes
Figure 2 shows the test topology to measure IGP convergence time due
to local Convergence Events with a non-ECMP Preferred Egress
Interface and ECMP Next-Best Egress Interfaces (Section 8.1.1). In
this topology, the DUT is configured with each Next-Best Egress
interface as a member of a single ECMP set. The Preferred Egress
Interface is not a member of an ECMP set. The Tester emulates N+1
next-hop routers, one router for the Preferred Egress Interface and N
routers for the members of the ECMP set. IGP adjacencies MUST be
established between Tester and DUT, one on the Preferred Egress
Interface, and one on each member of the ECMP set. For this purpose
each of the N+1 routers emulated by the Tester establishes one
adjacency with the DUT. An IGP adjacency SHOULD be established on
the Ingress Interface between Tester and DUT. When the test
specifies to observe the Next-Best Egress Interface statistics, the
combined statistics for all ECMP members should be observed.
   ---------        Ingress Interface        ----------
   |       |<--------------------------------|        |
   |       |   Preferred Egress Interface    |        |
   |       |-------------------------------->|        |
   |       |      ECMP set interface 1       |        |
   |  DUT  |-------------------------------->| Tester |
   |       |                .                |        |
   |       |                .                |        |
   |       |-------------------------------->|        |
   |       |      ECMP set interface N       |        |
   ---------                                 ----------

    Figure 2: IGP convergence test topology for local changes with
                     non-ECMP to ECMP convergence
3.2. Test topology for remote changes

Figure 3 shows the test topology to measure IGP convergence time due
to Remote Interface failure (Section 8.1.2). In this topology the
two routers R1 and R2 are considered System Under Test (SUT) and
SHOULD be identically configured devices of the same model. IGP
adjacencies MUST be established between Tester and SUT, one on the
Preferred Egress Interface and one on the Next-Best Egress Interface.
For this purpose the Tester emulates one or two routers. An IGP
adjacency SHOULD be established on the Ingress Interface between
Tester and SUT. In this topology there is a possibility of a
transient microloop between R1 and R2 during convergence.
            ------                      ----------
            |    |  Preferred           |        |
   ------   | R2 |--------------------->|        |
   |    |-->|    |  Egress Interface    |        |
   |    |   ------                      |        |
   | R1 |                               | Tester |
   |    |   Next-Best                   |        |
   |    |------------------------------>|        |
   ------   Egress Interface            |        |
      ^                                 ----------
      |                                     |
      ---------------------------------------
                Ingress Interface

      Figure 3: IGP convergence test topology for remote changes
Figure 4 shows the test topology to measure IGP convergence time due
to remote Convergence Events with a non-ECMP Preferred Egress
Interface and ECMP Next-Best Egress Interfaces (Section 8.1.2). In
this topology the two routers R1 and R2 are considered System Under
Test (SUT) and MUST be identically configured devices of the same
model. Router R1 is configured with each Next-Best Egress interface
as a member of the same ECMP set. The Preferred Egress Interface of
R1 is not a member of an ECMP set. The Tester emulates N+1 next-hop
routers, one for R2 and one for each member of the ECMP set. IGP
adjacencies MUST be established between Tester and SUT, one on each
egress interface of SUT. For this purpose each of the N+1 routers
emulated by the Tester establishes one adjacency with the SUT. An
IGP adjacency SHOULD be established on the Ingress Interface between
Tester and SUT. In this topology there is a possibility of a
transient microloop between R1 and R2 during convergence. When the
test specifies to observe the Next-Best Egress Interface statistics,
the combined statistics for all ECMP members should be observed.
                             ------     ----------
                             |    |     |        |
   ------   Preferred        | R2 |---->|        |
   |    |------------------->|    |     |        |
   |    |  Egress Interface  ------     |        |
   |    |                               |        |
   |    |    ECMP set interface 1       |        |
   | R1 |------------------------------>| Tester |
   |    |               .               |        |
   |    |               .               |        |
   |    |               .               |        |
   |    |------------------------------>|        |
   ------    ECMP set interface N       |        |
      ^                                 ----------
      |                                     |
      ---------------------------------------
                Ingress Interface

   Figure 4: IGP convergence test topology for remote changes with
                    non-ECMP to ECMP convergence
3.3. Test topology for local ECMP changes

Figure 5 shows the test topology to measure IGP convergence time due
to local Convergence Events of a member of an Equal Cost Multipath
(ECMP) set (Section 8.1.3). In this topology, the DUT is configured
with each egress interface as a member of a single ECMP set and the
Tester emulates N next-hop routers, one router for each member. IGP
adjacencies MUST be established between Tester and DUT, one on each
member of the ECMP set. For this purpose each of the N routers
emulated by the Tester establishes one adjacency with the DUT. An
IGP adjacency SHOULD be established on the Ingress Interface between
Tester and DUT. When the test specifies to observe the Next-Best
Egress Interface statistics, the combined statistics for all ECMP
members except the one affected by the Convergence Event should be
observed.
   ---------        Ingress Interface        ----------
   |       |<--------------------------------|        |
   |       |                                 |        |
   |       |      ECMP set interface 1       |        |
   |       |-------------------------------->|        |
   |  DUT  |                .                | Tester |
   |       |                .                |        |
   |       |                .                |        |
   |       |-------------------------------->|        |
   |       |      ECMP set interface N       |        |
   ---------                                 ----------

    Figure 5: IGP convergence test topology for local ECMP changes
3.4. Test topology for remote ECMP changes

Figure 6 shows the test topology to measure IGP convergence time due
to remote Convergence Events of a member of an Equal Cost Multipath
(ECMP) set (Section 8.1.4). In this topology the two routers R1 and
R2 are considered System Under Test (SUT) and MUST be identically
configured devices of the same model. Router R1 is configured with
each egress interface as a member of a single ECMP set and the Tester
emulates N next-hop routers, one router for each member. IGP
adjacencies MUST be established between Tester and SUT, one on each
egress interface of SUT. For this purpose each of the N routers
emulated by the Tester establishes one adjacency with the SUT. An
IGP adjacency SHOULD be established on the Ingress Interface between
Tester and SUT. In this topology there is a possibility of a
transient microloop between R1 and R2 during convergence. When the
test specifies to observe the Next-Best Egress Interface statistics,
the combined statistics for all ECMP members except the one affected
by the Convergence Event should be observed.
                             ------     ----------
                             |    |     |        |
   ------   ECMP set         | R2 |---->|        |
   |    |------------------->|    |     |        |
   |    |  Interface 1       ------     |        |
   |    |                               |        |
   |    |    ECMP set interface 2       |        |
   | R1 |------------------------------>| Tester |
   |    |               .               |        |
   |    |               .               |        |
   |    |               .               |        |
   |    |------------------------------>|        |
   ------    ECMP set interface N       |        |
      ^                                 ----------
      |                                     |
      ---------------------------------------
                Ingress Interface

    Figure 6: IGP convergence test topology for remote ECMP changes
3.5. Test topology for Parallel Link changes

Figure 7 shows the test topology to measure IGP convergence time due
to local Convergence Events with members of a Parallel Link
(Section 8.1.5). In this topology, the DUT is configured with each
egress interface as a member of a Parallel Link and the Tester
emulates the single next-hop router. IGP adjacencies MUST be
established on all N members of the Parallel Link between Tester and
DUT. For this purpose the router emulated by the Tester establishes
N adjacencies with the DUT. An IGP adjacency SHOULD be established
on the Ingress Interface between Tester and DUT. When the test
specifies to observe the Next-Best Egress Interface statistics, the
combined statistics for all Parallel Link members except the one
affected by the Convergence Event should be observed.
   ---------        Ingress Interface        ----------
   |       |<--------------------------------|        |
   |       |                                 |        |
   |       |   Parallel Link Interface 1     |        |
   |       |-------------------------------->|        |
   |  DUT  |                .                | Tester |
   |       |                .                |        |
   |       |                .                |        |
   |       |-------------------------------->|        |
   |       |   Parallel Link Interface N     |        |
   ---------                                 ----------

   Figure 7: IGP convergence test topology for Parallel Link changes
4. Convergence Time and Loss of Connectivity Period

Two concepts will be highlighted in this section: convergence time
and loss of connectivity period.

The Route Convergence [Po09t] time indicates the period in time
between the Convergence Event Instant [Po09t] and the instant in time
the DUT is ready to forward traffic for a specific route on its Next-
Best Egress Interface and maintains this state for the duration of

skipping to change at page 12, line 40

convergence time benchmarks using the Rate-Derived Method [Po09t].
By observing lost packets on the Next-Best Egress Interface only, the
observed packet loss is the number of lost packets between Traffic
Start Instant and Convergence Recovery Instant. To measure
convergence times using a loss-derived method, packet loss between
the Convergence Event Instant and the Convergence Recovery Instant is
needed. The time between Traffic Start Instant and Convergence Event
Instant must be accounted for. An example may clarify this.

Figure 8 illustrates a Convergence Event without instantaneous
traffic loss for all routes. The top graph shows the Forwarding Rate
over all routes, the bottom graph shows the Forwarding Rate for a
single route Rta. Some time after the Convergence Event Instant,
Forwarding Rate observed on the Preferred Egress Interface starts to
decrease. In the example, route Rta is the first route to experience
packet loss at time Ta. Some time later, the Forwarding Rate
observed on the Next-Best Egress Interface starts to increase. In
the example, route Rta is the first route to complete convergence at
time Ta'.
skipping to change at page 13, line 34

        ^   ^         ^              ^   time
        T0  CEI       Ta             Ta'

   Preferred Egress Interface: ---
   Next-Best Egress Interface: ...

With T0 the Start Traffic Instant; CEI the Convergence Event Instant;
Ta the time instant traffic loss for route Rta starts; Ta' the time
instant traffic loss for route Rta ends.

                              Figure 8
If only packets received on the Next-Best Egress Interface are
observed, the duration of the packet loss period for route Rta can be
calculated from the received packets as in Equation 1. Since the
Convergence Event Instant is the start time for convergence time
measurement, the period in time between T0 and CEI needs to be
subtracted from the calculated result to become the convergence time,
as in Equation 2.
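As an illustrative sketch only, this calculation can be written as
follows, assuming the Offered Load is shared equally over all routes
so that each route is offered packets at a rate of Offered Load
divided by the number of routes; all names are hypothetical.

   # Sketch: convergence time for route Rta, derived only from
   # packets received on the Next-Best Egress Interface.
   def convergence_time(offered_duration, received_packets,
                        offered_load, num_routes, t0, cei):
       # Each route receives an equal share of the Offered Load.
       per_route_rate = offered_load / num_routes   # packets/second
       offered_packets = offered_duration * per_route_rate
       lost_packets = offered_packets - received_packets
       loss_period = lost_packets / per_route_rate  # cf. Equation 1
       return loss_period - (cei - t0)              # cf. Equation 2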
   Next-Best Egress Interface packet loss period

skipping to change at page 14, line 38
   |   \                     /
   |    \                   /
   |     \                 /
   |      \               /
   |       ---------------
   +------------------------------------------>
        ^  ^              ^    ^          time
        Ta Tb             Ta'  Tb'
                          Tb'' Ta''

       Figure 9: Example Route Loss Of Connectivity Period
Consider a DUT implementation such that route Rta is the first route
for which traffic loss ends, at time Ta' with Ta'>Tb, and route Rtb
is the last route for which traffic loss ends, at time Tb' with
Tb'>Ta'. By observing only the global traffic statistics over all
routes, the minimum Route Loss of Connectivity Period would be
measured as Ta'-Ta. The maximum calculated Route Loss of
Connectivity Period would be Tb'-Ta. The real minimum and maximum
Route Loss of Connectivity Periods are Ta'-Ta and Tb'-Tb.
Illustrating this with the numbers Ta=0, Tb=1, Ta'=3, and Tb'=5,
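The sketch below works through that illustration by applying the
formulas above to these numbers; the variable names are hypothetical.

   # Sketch: Route Loss of Connectivity Periods for the illustration.
   ta, tb = 0, 1      # traffic loss starts for routes Rta and Rtb
   ta_, tb_ = 3, 5    # traffic loss ends for routes Rta and Rtb

   # Derived from global traffic statistics over all routes:
   global_min_loc = ta_ - ta    # 3
   global_max_loc = tb_ - ta    # 5

   # Real per-route Loss of Connectivity Periods:
   real_min_loc = ta_ - ta      # 3
   real_max_loc = tb_ - tb      # 4, overestimated as 5 above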
skipping to change at page 15, line 44

routes installed from other protocols.

5.3. IGP Topology

The Tester emulates a single IGP topology. The DUT establishes IGP
adjacencies with one or more of the emulated routers in this single
IGP topology emulated by the Tester. See test topology details in
Section 3. The emulated topology SHOULD only be advertised on the
DUT egress interfaces.
The number of IGP routes, the number of nodes in the topology, and
the type of topology will impact the measured IGP convergence time.
To obtain results similar to those that would be observed in an
operational network, it is RECOMMENDED that the number of installed
routes and nodes closely approximate that of the network (e.g.
thousands of routes with tens or hundreds of nodes).

The number of areas (for OSPF) and levels (for ISIS) can impact the
benchmark results.
5.4. Timers

There are timers that may impact the measured IGP convergence times.
The benchmark metrics MAY be measured at any fixed values for these
timers. To obtain results similar to those that would be observed in
an operational network, it is RECOMMENDED to configure the timers

skipping to change at page 17, line 8
and MUST be recorded. Packet size is measured in bytes and includes
the IP header and payload.

The destination addresses for the Offered Load MUST be distributed
such that all routes or a statistically representative subset of all
routes are matched and each of these routes is offered an equal share
of the Offered Load. It is RECOMMENDED to send traffic matching all
routes, but a statistically representative subset of all routes can
be used if required.
In the Remote Interface failure test cases using topologies 3, 4, and
6 there is a possibility of a transient microloop between R1 and R2
during convergence. The TTL or Hop Limit value of the packets sent
by the Tester may influence the benchmark measurements since it
determines which device in the topology may send an ICMP Time
Exceeded Message for looped packets.

The duration of the Offered Load MUST be greater than the convergence
time plus the Sustained Convergence Validation Time.

The Offered Load should send a packet to each destination before
sending another packet to the same destination. It is RECOMMENDED
that the packets are transmitted in a round-robin fashion with a
uniform interpacket delay, as illustrated in the sketch below.
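As an illustration only, the following sketch generates such a
round-robin transmit schedule with a uniform interpacket delay; it is
a simplified model, not a prescribed Tester implementation.

   # Sketch: round-robin Offered Load schedule with uniform
   # interpacket delay; each destination receives one packet before
   # any destination receives a second one.
   def transmit_schedule(destinations, offered_load, duration):
       # offered_load is in packets/second over all destinations.
       delay = 1.0 / offered_load
       timestamp = 0.0
       while timestamp < duration:
           for dest in destinations:
               if timestamp >= duration:
                   break
               yield (timestamp, dest)   # send one packet to dest
               timestamp += delay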
5.7. Measurement Accuracy

Since packet loss is observed to measure the Route Convergence Time,
the time between two successive packets offered to each individual
route is the highest possible accuracy of any packet loss based
measurement. The higher the traffic rate offered to each route, the
higher the possible measurement accuracy.

Also see Section 6 for method-specific measurement accuracy.
5.8. Measurement Statistics

The benchmark measurements may vary for each trial, due to the
statistical nature of timer expirations, cpu scheduling, etc.
Evaluation of the test data must be done with an understanding of
generally accepted testing practices regarding repeatability,
variance and statistical significance of a small number of trials.
5.9. Tester Capabilities

It is RECOMMENDED that the Tester used to execute each test case has
the following capabilities:

1. Ability to establish IGP adjacencies and advertise a single IGP
   topology to one or more peers.

2. Ability to measure Forwarding Delay, Duplicate Packets and Out-
   of-Order Packets.

3. An internal time clock to control timestamping, time
   measurements, and time calculations.

4. Ability to distinguish traffic load received on the Preferred and
   Next-Best Interfaces [Po09t].

5. Ability to disable or tune specific Layer-2 and Layer-3 protocol
   functions on any interface(s).

skipping to change at page 18, line 30
6. Selection of Convergence Time Benchmark Metrics and Methods

Different convergence time benchmark methods MAY be used to measure
convergence time benchmark metrics. The Tester capabilities are
important criteria to select a specific convergence time benchmark
method. The criteria to select a specific benchmark method include,
but are not limited to:

   Tester capabilities:  Sampling Interval, number of
                         Stream statistics to collect
   Measurement accuracy: Sampling Interval, Offered Load,
                         number of routes
   Test specification:   number of routes
   DUT capabilities:     Throughput, Jitter
6.1. Loss-Derived Method

6.1.1. Tester capabilities

The Offered Load SHOULD consist of a single Stream [Po06]. If
sending multiple Streams, the measured packet loss statistics for all
Streams MUST be added together.

In order to verify Full Convergence completion and the Sustained

skipping to change at page 19, line 15
6.1.2. Benchmark Metrics

The Loss-Derived Method can be used to measure the Loss-Derived
Convergence Time, which is the average convergence time over all
routes, and to measure the Loss-Derived Loss of Connectivity Period,
which is the average Route Loss of Connectivity Period over all
routes.

6.1.3. Measurement Accuracy

The actual value falls within the accuracy interval [-(number of
destinations/Offered Load), +(number of destinations/Offered Load)]
around the value as measured using the Loss-Derived Method.
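For example, this interval can be computed as in the following
sketch; the numbers are hypothetical.

   # Sketch: Loss-Derived Method accuracy interval.
   num_destinations = 10000   # routes matched by the Offered Load
   offered_load = 1000000     # packets per second, in total

   accuracy = num_destinations / offered_load  # 0.01 s, i.e. 10 ms
   interval = (-accuracy, +accuracy)           # around measured value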
6.2. Rate-Derived Method

6.2.1. Tester Capabilities

The Offered Load SHOULD consist of a single Stream. If sending
multiple Streams, the measured traffic rate statistics for all
Streams MUST be added together.

The Tester measures Forwarding Rate each Sampling Interval. The
Packet Sampling Interval influences the observation of the different
convergence time instants. If the Packet Sampling Interval is large
compared to the time between the convergence time instants, then the
different time instants may not be easily identifiable from the
Forwarding Rate observation. The presence of Jitter [Po06] may cause
fluctuations of the Forwarding Rate observation and can prevent
correct observation of the different convergence time instants.
The Packet Sampling Interval MUST be larger than or equal to the time
between two consecutive packets to the same destination. For maximum
accuracy the value for the Packet Sampling Interval SHOULD be as
small as possible, but the presence of Jitter may require using a
larger Packet Sampling Interval. The Packet Sampling Interval MUST
be reported.
Jitter causes fluctuations in the number of received packets during
each Packet Sampling Interval. To account for the presence of Jitter
in determining if a convergence instant has been reached, Jitter
SHOULD be observed during each Packet Sampling Interval. The minimum
and maximum number of packets expected in a Packet Sampling Interval
in presence of Jitter can be calculated with Equation 3.
   number of packets expected in a Packet Sampling Interval
   in presence of Jitter
      = expected number of packets without Jitter
        +/- (max Jitter during Packet Sampling Interval * Offered Load)

                              Equation 3
To determine if a convergence instant has been reached the number of
packets received in a Packet Sampling Interval is compared with the
range of expected number of packets calculated in Equation 3.
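A minimal sketch of this comparison, assuming the maximum Jitter
during the Packet Sampling Interval is known, could look as follows;
the function name is hypothetical.

   # Sketch: test whether the packet count of a Packet Sampling
   # Interval is consistent with a convergence instant, per
   # Equation 3.
   def within_expected_range(received, expected_without_jitter,
                             max_jitter, offered_load):
       # max_jitter is expressed in seconds.
       margin = max_jitter * offered_load
       return (expected_without_jitter - margin
               <= received
               <= expected_without_jitter + margin)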
6.2.2. Benchmark Metrics

The Rate-Derived Method SHOULD be used to measure First Route
Convergence Time and Full Convergence Time. It SHOULD NOT be used to
measure Loss of Connectivity Period (see Section 4).

6.2.3. Measurement Accuracy

The measurement accuracy interval of the Rate-Derived Method depends
on the metric being measured or calculated and the characteristics of
the related transition. Jitter [Po06] adds uncertainty to the number
of packets received in a Packet Sampling Interval and this
uncertainty adds to the measurement error. The effect of Jitter is
not accounted for in the calculation of the accuracy intervals below.
Jitter is of importance for the convergence instants where a
variation in Forwarding Rate needs to be observed (Convergence
Recovery Instant and, for topologies with ECMP, also Convergence
Event Instant and First Route Convergence Instant).
If the Convergence Event Instant is observed on the data plane using
the Rate-Derived Method, it needs to be instantaneous for all routes
(see Section 4.1). The actual value of the Convergence Event Instant
falls within the accuracy interval [-(Packet Sampling Interval +
1/Offered Load), +0] around the value as measured using the Rate-
Derived Method.
If the Convergence Recovery Transition is non-instantaneous for all
routes then the actual value of the First Route Convergence Instant
falls within the accuracy interval [-(Packet Sampling Interval + time
between two consecutive packets to the same destination), +0] around
the value as measured using the Rate-Derived Method, and the actual
value of the Convergence Recovery Instant falls within the accuracy
interval [-(2 * Packet Sampling Interval), -(Packet Sampling Interval
- time between two consecutive packets to the same destination)]
around the value as measured using the Rate-Derived Method.
The term "time between two consecutive packets to the same
destination" is added in the above accuracy intervals since packets
are sent in a particular order to all destinations in a stream and
when part of the routes experience packet loss, it is unknown where
in the transmit cycle packets to these routes are sent. This
uncertainty adds to the error.
The accuracy intervals of the derived metrics First Route Convergence
Time and Rate-Derived Convergence Time are calculated from the above
convergence instants accuracy intervals. The actual value of First
Route Convergence Time falls within the accuracy interval [-(Packet
Sampling Interval + time between two consecutive packets to the same
destination), +(Packet Sampling Interval + 1/Offered Load)] around
the calculated value. The actual value of Rate-Derived Convergence
Time falls within the accuracy interval [-(2 * Packet Sampling
Interval), +(time between two consecutive packets to the same
destination + 1/Offered Load)] around the calculated value.
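The sketch below collects the accuracy intervals stated above in one
place; all values are in seconds and the function name is
hypothetical.

   # Sketch: Rate-Derived Method accuracy intervals, ignoring Jitter.
   def rate_derived_intervals(psi, offered_load, num_destinations):
       # psi: Packet Sampling Interval; gap: time between two
       # consecutive packets to the same destination.
       gap = num_destinations / offered_load
       return {
           "Convergence Event Instant":
               (-(psi + 1.0 / offered_load), 0.0),
           "First Route Convergence Instant":
               (-(psi + gap), 0.0),
           "Convergence Recovery Instant":
               (-2.0 * psi, -(psi - gap)),
           "First Route Convergence Time":
               (-(psi + gap), +(psi + 1.0 / offered_load)),
           "Rate-Derived Convergence Time":
               (-2.0 * psi, +(gap + 1.0 / offered_load)),
       }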
6.3. Route-Specific Loss-Derived Method

6.3.1. Tester Capabilities

The Offered Load consists of multiple Streams. The Tester MUST
measure packet loss for each Stream separately.

In order to verify Full Convergence completion and the Sustained
Convergence Validation Time, the Tester MUST measure packet loss each

skipping to change at page 21, line 50
Route-Specific Convergence Times. It is the RECOMMENDED method to
measure Route Loss of Connectivity Period.

Under the conditions explained in Section 4, First Route Convergence
Time and Full Convergence Time as benchmarked using the Rate-Derived
Method may be equal to the minimum and maximum, respectively, of the
Route-Specific Convergence Times.
6.3.3. Measurement Accuracy

The actual value falls within the accuracy interval [-(number of
destinations/Offered Load), +(number of destinations/Offered Load)]
around the value as measured using the Route-Specific Loss-Derived
Method.
7. Reporting Format

For each test case, it is recommended that the reporting tables below
are completed and all time values SHOULD be reported with resolution
as specified in [Po09t].

   Parameter                              Units
   ------------------------------------   ---------------------------
   Test Case                              test case number
   Test Topology                          Test Topology Figure number
   IGP                                    (ISIS, OSPF, other)
   Interface Type                         (GigE, POS, ATM, other)
   Packet Size offered to DUT             bytes
   Offered Load                           packets per second
   IGP Routes advertised to DUT           number of IGP routes
   Nodes in emulated network              number of nodes
   Number of Parallel or ECMP links       number of links
   Number of Routes measured              number of routes
   Packet Sampling Interval on Tester     seconds
   Forwarding Delay Threshold             seconds

   Timer Values configured on DUT:
     Interface failure indication delay   seconds
     IGP Hello Timer                      seconds
     IGP Dead-Interval or hold-time       seconds
     LSA Generation Delay                 seconds
     LSA Flood Packet Pacing              seconds
     LSA Retransmission Packet Pacing     seconds
     SPF Delay                            seconds
Test Details:

If the Offered Load matches a subset of routes, describe how this
subset is selected.

Describe how the Convergence Event is applied; does it cause
instantaneous traffic loss or not.

Complete the table below for the initial Convergence Event and the

skipping to change at page 24, line 5
It is RECOMMENDED that all applicable test cases be performed for It is RECOMMENDED that all applicable test cases be performed for
best characterization of the DUT. The test cases follow a generic best characterization of the DUT. The test cases follow a generic
procedure tailored to the specific DUT configuration and Convergence procedure tailored to the specific DUT configuration and Convergence
Event [Po09t]. This generic procedure is as follows: Event [Po09t]. This generic procedure is as follows:
1.  Establish DUT and Tester configurations and advertise an IGP
    topology from Tester to DUT.
2.  Send Offered Load from Tester to DUT on ingress interface.
3.  Verify traffic is routed correctly. Verify that traffic is
    forwarded without drops, without Out-of-Order Packets, and
    without exceeding the Forwarding Delay Threshold [Po06].
4.  Introduce Convergence Event [Po09t].
5.  Measure First Route Convergence Time [Po09t].
6.  Measure Full Convergence Time [Po09t].
7.  Stop Offered Load.
8.  Measure Route-Specific Convergence Times, Loss-Derived
    Convergence Time, Route LoC Periods, and Loss-Derived LoC Period
    [Po09t].
9.  Wait sufficient time for queues to drain. The duration of this
    time period is equal to the Forwarding Delay Threshold. In the
    absence of a Forwarding Delay Threshold specification, the
    duration of this time period is 2 seconds [Br99].
10. Restart Offered Load.
11. Reverse Convergence Event.
12. Measure First Route Convergence Time.
13. Measure Full Convergence Time.
14. Stop Offered Load.
15. Measure Route-Specific Convergence Times, Loss-Derived
    Convergence Time, Route LoC Periods, and Loss-Derived LoC
    Period.
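The following non-normative sketch shows how a test harness might
drive steps 1 through 15 above. Every method on the 'tester' and
'event' objects is a hypothetical placeholder; real traffic
generators and DUTs expose vendor-specific control interfaces:

   # Non-normative harness sketch for the generic procedure above.
   # All methods on 'tester' and 'event' are hypothetical.
   import time

   DEFAULT_DRAIN_S = 2.0  # step 9 default in the absence of a
                          # Forwarding Delay Threshold [Br99]

   def run_generic_procedure(tester, event, drain_s=DEFAULT_DRAIN_S):
       tester.advertise_igp_topology()              # step 1
       tester.start_offered_load()                  # step 2
       assert tester.traffic_ok()                   # step 3: no drops, no
                                                    # reorder, delay within
                                                    # threshold
       event.apply()                                # step 4
       results = {
           "first_route_s": tester.first_route_convergence_time(),  # 5
           "full_s": tester.full_convergence_time(),                # 6
       }
       tester.stop_offered_load()                   # step 7
       results["forward"] = tester.per_route_and_loss_derived()     # 8
       time.sleep(drain_s)                          # step 9: drain queues
       tester.start_offered_load()                  # step 10
       event.reverse()                              # step 11
       results["rev_first_route_s"] = \
           tester.first_route_convergence_time()    # step 12
       results["rev_full_s"] = tester.full_convergence_time()       # 13
       tester.stop_offered_load()                   # step 14
       results["reverse"] = tester.per_route_and_loss_derived()     # 15
       return results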
8.1. Interface failures

8.1.1. Convergence Due to Local Interface Failure

Objective

To obtain the IGP convergence times due to a Local Interface failure
event. The Next-Best Egress Interface can be a single interface
(Figure 1) or an ECMP set (Figure 2). The test with ECMP topology
(Figure 2) is OPTIONAL.
Procedure

1. Advertise an IGP topology from Tester to DUT using the topology
   shown in Figure 1 or 2.
2. Send Offered Load from Tester to DUT on ingress interface.
3. Verify traffic is forwarded over the Preferred Egress Interface.
4. Remove link on DUT's Preferred Egress Interface. This is the
   Convergence Event.
5. Measure First Route Convergence Time.
[...]
The measured IGP convergence time may be influenced by the link
failure indication time, LSA/LSP delay, LSA/LSP generation time, LSA/
LSP flood packet pacing, SPF delay, SPF execution time, and routing
and forwarding tables update time [Po09a].
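As a purely illustrative decomposition, the sketch below shows how
such components can add up to an observed convergence time. The
component names and values are invented for the example; they are not
measurements, defaults, or an exhaustive list:

   # Hypothetical component values (seconds) for the factors listed
   # above; real values depend on the DUT and its configuration.
   components_s = {
       "link_failure_indication": 0.02,
       "lsp_generation_delay":    0.05,
       "lsp_flood_pacing":        0.01,
       "spf_delay":               0.10,
       "spf_execution":           0.04,
       "rib_fib_update":          0.20,
   }
   print("convergence time ~ %.2f s" % sum(components_s.values()))
   # prints: convergence time ~ 0.42 s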
8.1.2. Convergence Due to Remote Interface Failure

Objective

To obtain the IGP convergence time due to a Remote Interface failure
event. The Next-Best Egress Interface can be a single interface
(Figure 3) or an ECMP set (Figure 4). The test with ECMP topology
(Figure 4) is OPTIONAL.
Procedure

1. Advertise an IGP topology from Tester to SUT using the topology
   shown in Figure 3 or 4.
2. Send Offered Load from Tester to SUT on ingress interface.
3. Verify traffic is forwarded over the Preferred Egress Interface.
4. Remove link on Tester's interface [Po09t] connected to SUT's
   Preferred Egress Interface. This is the Convergence Event.
5. Measure First Route Convergence Time.
[...]
8.1.3. Convergence Due to ECMP Member Local Interface Failure

Objective

To obtain the IGP convergence time due to a Local Interface link
failure event of an ECMP Member.
Procedure

1. Advertise an IGP topology from Tester to DUT using the test
   setup shown in Figure 5.
2. Send Offered Load from Tester to DUT on ingress interface.
3. Verify traffic is forwarded over the DUT's ECMP member interface
   that will be failed in the next step (a verification sketch
   follows this procedure).
4. Remove link on one of the DUT's ECMP member interfaces. This is
   the Convergence Event.
5. Measure First Route Convergence Time.
[...]
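A non-normative way to perform the verification in step 3 of the
procedure above is to poll per-interface egress counters while the
Offered Load is running. In the sketch below, the counter-reading
callable and the interface name are assumptions; real DUTs expose
such counters through SNMP, NETCONF, or the CLI:

   # Confirm the ECMP member about to be failed is carrying traffic.
   # 'read_tx_packets' is a hypothetical callable returning the
   # egress packet counter for a named DUT interface.
   import time

   def member_carries_traffic(member, read_tx_packets,
                              interval_s=1.0, min_pps=1000):
       before = read_tx_packets(member)
       time.sleep(interval_s)
       after = read_tx_packets(member)
       return (after - before) / interval_s >= min_pps

   # Usage (hypothetical interface name and counter function):
   # assert member_carries_traffic("ge-0/0/1", dut_read_tx_packets)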
8.1.4. Convergence Due to ECMP Member Remote Interface Failure

Objective

To obtain the IGP convergence time due to a Remote Interface link
failure event for an ECMP Member.
Procedure

1. Advertise an IGP topology from Tester to DUT using the test
   setup shown in Figure 6.
2. Send Offered Load from Tester to DUT on ingress interface.
3. Verify traffic is forwarded over the DUT's ECMP member interface
   that will be failed in the next step.
4. Remove link on Tester's interface to R2. This is the
   Convergence Event Trigger.
5. Measure First Route Convergence Time.
[...]
Objective

To obtain the IGP convergence time due to a local link failure event
for a member of a parallel link. The links can be used for data load
balancing.
Procedure

1. Advertise an IGP topology from Tester to DUT using the test
   setup shown in Figure 7.
2. Send Offered Load from Tester to DUT on ingress interface.
3. Verify traffic is forwarded over the parallel link member that
   will be failed in the next step.
4. Remove link on one of the DUT's parallel link member interfaces.
   This is the Convergence Event.
5. Measure First Route Convergence Time.
[...]
from the DUT/SUT SHOULD be identical in the lab and in production
networks.
10. IANA Considerations

This document requires no IANA considerations.

11. Acknowledgements

Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David Ward,
Peter De Vriendt, Anuj Dewagan, and the BMWG for their contributions
to this work.
12. Normative References

[Br91] Bradner, S., "Benchmarking Terminology for Network
       Interconnection Devices", RFC 1242, July 1991.

[Br97] Bradner, S., "Key words for use in RFCs to Indicate
       Requirement Levels", BCP 14, RFC 2119, March 1997.

[Br99] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
       Network Interconnect Devices", RFC 2544, March 1999.