Network Working Group
INTERNET-DRAFT
Intended Status: Informational
Expires in: January 2008
                                                     Scott Poretsky
                                                     Reef Point Systems
                                                     Brent Imhoff
                                                     Juniper Networks
                                                     November 2007

                    Benchmarking Methodology for
             Link-State IGP Data Plane Route Convergence
          <draft-ietf-bmwg-igp-dataplane-conv-meth-14.txt>
Intellectual Property Rights (IPR) statement:
By submitting this Internet-Draft, each author represents that any
applicable patent or other IPR claims of which he or she is aware
have been or will be disclosed, and any of which he or she becomes
aware will be disclosed, in accordance with Section 6 of BCP 79.
Status of this Memo

Internet-Drafts are working documents of the Internet Engineering
observable (black box) data plane measurements.  The methodology
can be applied to any link-state IGP, such as ISIS and OSPF.
Table of Contents

   1. Introduction
   2. Existing definitions
   3. Test Setup
   3.1 Test Topologies
   3.2 Test Considerations
   3.3 Reporting Format
   4. Test Cases
   4.1 Convergence Due to Link Failure
   4.1.1 Convergence Due to Local Interface Failure
   4.1.2 Convergence Due to Neighbor Interface Failure
   4.1.3 Convergence Due to Remote Interface Failure
   4.2 Convergence Due to Local Administrative Shutdown
   4.3 Convergence Due to Layer 2 Session Failure
   4.4 Convergence Due to IGP Adjacency Failure
   4.5 Convergence Due to Route Withdrawal
   4.6 Convergence Due to Cost Change
   4.7 Convergence Due to ECMP Member Interface Failure
   4.8 Convergence Due to ECMP Member Remote Interface Failure
   4.9 Convergence Due to Parallel Link Interface Failure
   5. IANA Considerations
   6. Security Considerations
   7. Acknowledgements
   8. Normative References
   9. Author's Address
1. Introduction

This document describes the methodology for benchmarking Interior
Gateway Protocol (IGP) Route Convergence.  The applicability of this
testing is described in [Po07a] and the new terminology that it
introduces is defined in [Po07t].  Service Providers use IGP
Convergence time as a key metric of router design and architecture.
Customers of Service Providers observe convergence time by packet
loss, so IGP Route Convergence is considered a Direct Measure of
Quality (DMOQ).  The test cases in this document are black-box tests
that emulate the network events that cause route convergence, as
described in [Po07a].  The black-box test designs benchmark the data
plane and account for all of the factors contributing to convergence
time, as discussed in [Po07a].  The methodology (and terminology)
for benchmarking route convergence can be applied to any link-state
IGP, such as ISIS [Ca90] and OSPF [Mo98], and to other IGPs such as
RIP.  These methodologies apply to IPv4 and IPv6 traffic and IGPs.
2. Existing definitions
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in BCP 14, RFC 2119
[Br97].  RFC 2119 defines the use of these key words to help make
the intent of standards track documents as clear as possible.  While
this document uses these keywords, this document is not a standards
track document.
This document uses much of the terminology defined in [Po07t].
This document uses existing terminology defined in other BMWG
work. Examples include, but are not limited to:
Throughput [Ref.[Br91], section 3.17]
Device Under Test (DUT) [Ref.[Ma98], section 3.1.1]
System Under Test (SUT) [Ref.[Ma98], section 3.1.2]
Out-of-order Packet [Ref.[Po06], section 3.3.2]
Duplicate Packet [Ref.[Po06], section 3.3.3]
Packet Loss [Ref.[Po07t], Section 3.5]
This document adopts the definition format in Section 2 of RFC 1242
[Br91].
3. Test Setup

3.1 Test Topologies

Figure 1 shows the test topology to measure IGP Route Convergence
due to local Convergence Events such as Link Failure, Layer 2
Session Failure, IGP Adjacency Failure, Route Withdrawal, and route
cost change.  These test cases discussed in section 4 provide route
convergence times that account for the Event Detection time, SPF
Processing time, and FIB Update time.  These times are measured
by observing packet loss in the data plane at the Tester.
 ---------        Ingress Interface        ---------
 |       |<--------------------------------|       |
 |       |                                 |       |
 |       |    Preferred Egress Interface   |       |
 |  DUT  |-------------------------------->| Tester|
 |       |                                 |       |
 |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
 |       |    Next-Best Egress Interface   |       |
 ---------                                 ---------

     Figure 1. Test Topology 1: IGP Convergence Test Topology
                       for Local Changes
Figure 2 shows the test topology to measure IGP Route Convergence
time due to remote changes in the network topology. These times are
measured by observing packet loss in the data plane at the Tester.
In this topology the three routers are considered a System Under
Test (SUT). NOTE: All routers in the SUT must be the same model and
identically configured.
          -----                      ---------
          |   | Preferred            |       |
  -----   |R2 |--------------------->|       |
  |   |-->|   | Egress Interface     |       |
  |   |   -----                      |       |
  |R1 |                              |Tester |
  |   |   -----                      |       |
  |   |-->|   | Next-Best            |       |
  -----   |R3 |~~~~~~~~~~~~~~~~~~~~~>|       |
    ^     |   | Egress Interface     |       |
    |     -----                      ---------
    |
    |--------------------------------------
              Ingress Interface

     Figure 2. Test Topology 2: IGP Convergence Test Topology
                for Convergence Due to Remote Changes
Figure 3 shows the test topology to measure IGP Route Convergence
time with members of an Equal Cost Multipath (ECMP) Set.  These
times are measured by observing packet loss in the data plane at
the Tester.  In this topology, the DUT is configured with each
Egress interface as a member of an ECMP set and the Tester emulates
multiple next-hop routers (emulates one router for each member).
 ---------        Ingress Interface        ---------
 |       |<--------------------------------|       |
 |       |                                 |       |
 |       |      ECMP Set Interface 1       |       |
 |  DUT  |-------------------------------->| Tester|
 |       |                .                |       |
 |       |                .                |       |
 |       |                .                |       |
 |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
 |       |      ECMP Set Interface N       |       |
 ---------                                 ---------

     Figure 3. Test Topology 3: IGP Convergence Test Topology
                       for ECMP Convergence
Figure 4 shows the test topology to measure IGP Route Convergence
time with members of a Parallel Link. These times are measured by
observing packet loss in the data plane at the Tester. In this
topology, the DUT is configured with each Egress interface as a
member of a Parallel Link and the Tester emulates the single
next-hop router.
 ---------        Ingress Interface        ---------
 |       |<--------------------------------|       |
 |       |                                 |       |
 |       |    Parallel Link Interface 1    |       |
 |  DUT  |-------------------------------->| Tester|
 |       |                .                |       |
 |       |                .                |       |
 |       |                .                |       |
 |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
 |       |    Parallel Link Interface N    |       |
 ---------                                 ---------

     Figure 4. Test Topology 4: IGP Convergence Test Topology
                   for Parallel Link Convergence
3.2 Test Considerations

3.2.1 IGP Selection

The test cases described in section 4 can be used for ISIS or
OSPF.  The Route Convergence test methodology for both is
identical.  The IGP adjacencies are established on the Preferred
Egress Interface and Next-Best Egress Interface.
3.2.2 Routing Protocol Configuration

The obtained results for IGP Route Convergence may vary if
other routing protocols are enabled and routes learned via those
protocols are installed.  IGP convergence times MUST be benchmarked
without routes installed from other protocols.
3.2.3 IGP Route Scaling

The number of IGP routes will impact the measured IGP Route
Convergence.  To obtain results similar to those that would be
observed in an operational network, it is recommended that the
number of installed routes and nodes closely approximates that
of the network (e.g. thousands of routes with tens of nodes).
The number of areas (for OSPF) and levels (for ISIS) can impact
the benchmark results.
3.2.4 Timers 3.2.4 Timers
There are some timers that will impact the measured IGP Convergence There are some timers that will impact the measured IGP Convergence
time. Benchmarking metrics may be measured at any fixed values for time. Benchmarking metrics may be measured at any fixed values for
these timers. It is RECOMMENDED that the following timers be these timers. It is RECOMMENDED that the following timers be
configured to the minimum values listed: configured to the minimum values listed:
Timer Recommended Value Timer Recommended Value
----- ----------------- ----- -----------------
Link Failure Indication Delay <10milliseconds Link Failure Indication Delay <10milliseconds
IGP Hello Timer 1 second IGP Hello Timer 1 second
IGP Dead-Interval 3 seconds IGP Dead-Interval 3 seconds
LSA Generation Delay 0 LSA Generation Delay 0
LSA Flood Packet Pacing 0 LSA Flood Packet Pacing 0
LSA Retransmission Packet Pacing 0 LSA Retransmission Packet Pacing 0
SPF Delay 0 SPF Delay 0
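As an illustrative aid only (not part of the methodology), the table
above can be encoded as a pre-test sanity check that a DUT's
configured timers do not exceed the recommended values.  The
dictionary keys and the check_timers() helper below are hypothetical
names, not any vendor's API.

```python
# Hypothetical sketch: compare a DUT timer configuration against the
# recommended values from the table above.  All values in seconds.
RECOMMENDED_MAX = {
    "link_failure_indication_delay": 0.010,   # <10 milliseconds
    "igp_hello_timer": 1.0,
    "igp_dead_interval": 3.0,
    "lsa_generation_delay": 0.0,
    "lsa_flood_packet_pacing": 0.0,
    "lsa_retransmission_packet_pacing": 0.0,
    "spf_delay": 0.0,
}

def check_timers(configured):
    """Return the timers whose configured value exceeds the recommendation."""
    return {name: value
            for name, value in configured.items()
            if name in RECOMMENDED_MAX and value > RECOMMENDED_MAX[name]}

if __name__ == "__main__":
    dut = dict(RECOMMENDED_MAX, spf_delay=0.2)   # one timer set too high
    print(check_timers(dut))                     # -> {'spf_delay': 0.2}
```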
3.2.5 Convergence Time Metrics

The Packet Sampling Interval [Po07t] value is the fastest
measurable convergence time.  The RECOMMENDED value for the
Packet Sampling Interval is 10 milliseconds.  Rate-Derived
Convergence Time [Po07t] is the preferred benchmark for IGP
Route Convergence.  This benchmark must always be reported
when the Packet Sampling Interval is set <= 10 milliseconds
on the test equipment.  If the test equipment does not permit
the Packet Sampling Interval to be set as low as 10
milliseconds, then both the Rate-Derived Convergence Time and
Loss-Derived Convergence Time [Po07t] MUST be reported.
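As a non-normative sketch (the definitions in [Po07t] govern), the
two benchmarks can be derived from per-interval forwarding-rate
samples and from total packet loss roughly as follows.  The function
names and sample values are hypothetical.

```python
def rate_derived_convergence_time(samples, offered_rate, interval_ms):
    """Seconds spanned by the sampling intervals in which the measured
    forwarding rate (packets/s) fell below the offered rate."""
    degraded = [i for i, rate in enumerate(samples) if rate < offered_rate]
    if not degraded:
        return 0.0
    return (degraded[-1] - degraded[0] + 1) * interval_ms / 1000.0

def loss_derived_convergence_time(total_lost_packets, offered_rate):
    """Total packet loss converted to equivalent seconds of traffic
    at the offered rate (packets/s)."""
    return total_lost_packets / offered_rate

# 10 ms Packet Sampling Interval, 1000 packets/s offered load:
samples = [1000, 1000, 0, 0, 400, 1000, 1000]
print(rate_derived_convergence_time(samples, 1000, 10))   # -> 0.03
print(loss_derived_convergence_time(2600, 1000))          # -> 2.6
```

Note how the rate-derived value captures the whole degraded span
(including partially converged intervals), while the loss-derived
value only counts traffic actually lost.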
3.2.6 Interface Types

All test cases in this methodology document may be executed with
any interface type.  All interfaces MUST be the same media and
Throughput [Br91][Br99] for each test case.  The type of media
may dictate which test cases may be executed; each interface type
has a unique mechanism for detecting link failures, and the speed
at which that mechanism operates will influence the measured
results.  Media and protocols MUST be configured for minimum
failure detection delay to minimize the contribution to the
measured Convergence time.  For example, configure SONET with
the minimum carrier-loss-delay.  All interfaces SHOULD be
configured as point-to-point.
3.2.7 Offered Load

The offered load MUST be the Throughput of the device as defined
in [Br91] and benchmarked in [Br99] at a fixed packet size.
Packet size is measured in bytes and includes the IP header and
payload.  The packet size is selectable and MUST be recorded.
The Forwarding Rate [Ma98] MUST be measured at the Preferred
Egress Interface and the Next-Best Egress Interface.  The duration
of offered load MUST be greater than the convergence time.  The
destination addresses for the offered load MUST be distributed
such that all routes are matched.  This enables Full Convergence
[Po07t] to be observed.
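For example, a Tester might spread the destination addresses of the
offered load round-robin across every advertised prefix so that all
routes are matched.  This is an illustrative sketch with invented
prefixes, not a requirement on any particular test tool.

```python
import ipaddress
import itertools

def destinations(prefixes):
    """Cycle through one host address per advertised prefix so that
    the offered load matches all routes, enabling Full Convergence
    to be observed."""
    hosts = [ipaddress.ip_network(p)[1] for p in prefixes]  # first host
    return itertools.cycle(hosts)

routes = ["10.0.%d.0/24" % i for i in range(4)]   # hypothetical prefixes
stream = destinations(routes)
print([str(next(stream)) for _ in range(5)])
# -> ['10.0.0.1', '10.0.1.1', '10.0.2.1', '10.0.3.1', '10.0.0.1']
```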
3.3 Reporting Format

For each test case, it is recommended that the reporting table
below be completed, and all time values SHOULD be reported with
the resolution specified in [Po07t].

   Parameter                             Units
   ---------                             -----
   IGP                                   (ISIS or OSPF)
   Interface Type                        (GigE, POS, ATM, etc.)
   Test Topology                         (1, 2, 3, or 4)
   Packet Size offered to DUT            bytes
   Total Packets Offered to DUT          number of packets
   Total Packets Routed by DUT           number of packets
   IGP Routes advertised to DUT          number of IGP routes
   Nodes in emulated network             number of nodes
   Packet Sampling Interval on Tester    milliseconds
   IGP Timer Values configured on DUT
     Interface Failure Indication Delay  seconds
     IGP Hello Timer                     seconds
     IGP Dead-Interval                   seconds
     LSA Generation Delay                seconds
     LSA Flood Packet Pacing             seconds
     LSA Retransmission Packet Pacing    seconds
     SPF Delay                           seconds
   Benchmarks
     First Prefix Convergence Time       seconds
     Rate-Derived Convergence Time       seconds
     Loss-Derived Convergence Time       seconds
     Reversion Convergence Time          seconds
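The reporting table lends itself to mechanical generation.  The
sketch below shows one way a test script might emit it as aligned
plain text; the field values shown are invented placeholders.

```python
def format_report(fields):
    """Render (parameter, value) pairs as a two-column plain-text table."""
    width = max(len(name) for name, _ in fields) + 4
    return "\n".join(name.ljust(width) + value for name, value in fields)

# Hypothetical results for one test run:
fields = [
    ("IGP", "ISIS"),
    ("Interface Type", "GigE"),
    ("Test Topology", "1"),
    ("Packet Size offered to DUT", "64 bytes"),
    ("IGP Routes advertised to DUT", "10000"),
    ("Packet Sampling Interval on Tester", "10 milliseconds"),
    ("First Prefix Convergence Time", "0.09 seconds"),
    ("Rate-Derived Convergence Time", "0.82 seconds"),
]
print(format_report(fields))
```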
4. Test Cases
The test cases follow a generic procedure tailored to the specific
DUT configuration and Convergence Event. This generic procedure is
as follows:
1. Establish DUT configuration and install routes.
2. Send offered load with traffic traversing Preferred Egress
Interface [Po07t].
3. Introduce Convergence Event to force traffic to Next-Best
Egress Interface [Po07t].
4. Measure First Prefix Convergence Time.
5. Measure Rate-Derived Convergence Time.
6. Recover from Convergence Event.
7. Measure Reversion Convergence Time.
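The generic procedure above can be pictured as a test-driver
skeleton.  Everything below (the Tester and Event classes and their
methods) is a hypothetical stand-in for real test-equipment control,
shown only to make the ordering of the steps concrete.

```python
class FakeTester:
    """Stand-in for test equipment; returns canned measurements."""
    def advertise_routes(self, dut): pass                   # step 1
    def start_offered_load(self): pass                      # step 2
    def first_prefix_convergence_time(self): return 0.09    # step 4
    def rate_derived_convergence_time(self): return 0.82    # step 5
    def reversion_convergence_time(self): return 0.75       # step 7

class FakeEvent:
    """Stand-in for a Convergence Event (e.g. link failure)."""
    def trigger(self): pass                                 # step 3
    def recover(self): pass                                 # step 6

def run_convergence_case(tester, dut, event):
    """Execute the generic procedure and return the three benchmarks."""
    tester.advertise_routes(dut)
    tester.start_offered_load()
    event.trigger()
    first_prefix = tester.first_prefix_convergence_time()
    rate_derived = tester.rate_derived_convergence_time()
    event.recover()
    reversion = tester.reversion_convergence_time()
    return first_prefix, rate_derived, reversion

print(run_convergence_case(FakeTester(), "DUT", FakeEvent()))
# -> (0.09, 0.82, 0.75)
```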
4.1 Convergence Due to Link Failure

4.1.1 Convergence Due to Local Interface Failure

Objective

To obtain the IGP Route Convergence due to a local link failure
event at the DUT's Local Interface.
Procedure

1. Advertise matching IGP routes from Tester to DUT on
   Preferred Egress Interface [Po07t] and Next-Best Egress Interface
   [Po07t] using the topology shown in Figure 1.  Set the cost of
   the routes so that the Preferred Egress Interface is the
   preferred next-hop.
2. Send offered load at measured Throughput with fixed packet
   size to destinations matching all IGP routes from Tester to
   DUT on Ingress Interface [Po07t].
3. Verify traffic routed over Preferred Egress Interface.
4. Remove link on DUT's Preferred Egress Interface.
5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
   link down event and begins to converge IGP routes and traffic
   over the Next-Best Egress Interface.
6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
   link down event and converges all IGP routes and traffic over
   the Next-Best Egress Interface.
7. Stop offered load.  Wait 30 seconds for queues to drain.
   Restart offered load.
8. Restore link on DUT's Preferred Egress Interface.
9. Measure Reversion Convergence Time [Po07t] as DUT detects the
   link up event and converges all IGP routes and traffic back
   to the Preferred Egress Interface.
Results

The measured IGP Convergence time is influenced by the Local
link failure indication, SPF delay, SPF Hold time, SPF Execution
Time, Tree Build Time, and Hardware Update Time [Po07a].
4.1.2 Convergence Due to Neighbor Interface Failure

Objective
To obtain the IGP Route Convergence due to a failure event at the
Neighbor Interface [Po07t].

Procedure

1. Advertise matching IGP routes from Tester to DUT on
   Preferred Egress Interface [Po07t] and Next-Best Egress Interface
   [Po07t] using the topology shown in Figure 1.  Set the cost of
   the routes so that the Preferred Egress Interface is the
   preferred next-hop.
2. Send offered load at measured Throughput with fixed packet
   size to destinations matching all IGP routes from Tester to
   DUT on Ingress Interface [Po07t].
3. Verify traffic routed over Preferred Egress Interface.
4. Remove link on Tester's Neighbor Interface [Po07t] connected to
   DUT's Preferred Egress Interface.
5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
   link down event and begins to converge IGP routes and traffic
   over the Next-Best Egress Interface.
6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
   link down event and converges all IGP routes and traffic over
   the Next-Best Egress Interface.
7. Stop offered load.  Wait 30 seconds for queues to drain.
   Restart offered load.
8. Restore link on Tester's Neighbor Interface connected to
   DUT's Preferred Egress Interface.
9. Measure Reversion Convergence Time [Po07t] as DUT detects the
   link up event and converges all IGP routes and traffic back
   to the Preferred Egress Interface.
Results

The measured IGP Convergence time is influenced by the Local
link failure indication, SPF delay, SPF Hold time, SPF Execution
Time, Tree Build Time, and Hardware Update Time [Po07a].
4.1.3 Convergence Due to Remote Interface Failure

Objective

To obtain the IGP Route Convergence due to a Remote Interface
Failure event.
Procedure

1. Advertise matching IGP routes from Tester to SUT on
   Preferred Egress Interface [Po07t] and Next-Best Egress
   Interface [Po07t] using the topology shown in Figure 2.
   Set the cost of the routes so that the Preferred Egress
   Interface is the preferred next-hop.
2. Send offered load at measured Throughput with fixed packet
   size to destinations matching all IGP routes from Tester to
   SUT on Ingress Interface [Po07t].
3. Verify traffic is routed over Preferred Egress Interface.
4. Remove link on Tester's Neighbor Interface [Po07t] connected to
   SUT's Preferred Egress Interface.
5. Measure First Prefix Convergence Time [Po07t] as SUT detects the
   link down event and begins to converge IGP routes and traffic
   over the Next-Best Egress Interface.
6. Measure Rate-Derived Convergence Time [Po07t] as SUT detects
   the link down event and converges all IGP routes and traffic
   over the Next-Best Egress Interface.
7. Stop offered load.  Wait 30 seconds for queues to drain.
   Restart offered load.
8. Restore link on Tester's Neighbor Interface connected to
   SUT's Preferred Egress Interface.
9. Measure Reversion Convergence Time [Po07t] as SUT detects the
   link up event and converges all IGP routes and traffic back
   to the Preferred Egress Interface.
Results
The measured IGP Convergence time is influenced by the link failure
indication, LSA/LSP Flood Packet Pacing, LSA/LSP Retransmission
Packet Pacing, LSA/LSP Generation time, SPF delay, SPF Hold time,
SPF Execution Time, Tree Build Time, and Hardware Update Time
[Po07a]. This test case may produce Stale Forwarding [Po07t] due to
microloops, which may increase the Rate-Derived Convergence Time.
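The Rate-Derived Convergence Time above is derived from the Tester's
forwarding-rate samples. As a rough illustration only (the helper
name, sampling interval, and recovery heuristic are assumptions of
this sketch, not definitions from [Po07t]), the measurement can be
sketched as:

```python
def rate_derived_convergence_time(samples, throughput, interval):
    """Approximate Rate-Derived Convergence Time from forwarding-rate
    samples taken every `interval` seconds.

    Convergence is taken as the span from the first sample whose
    forwarding rate drops below the measured Throughput until the
    first sample at which the rate is restored and stays restored.
    """
    start = end = None
    for i, rate in enumerate(samples):
        if rate < throughput and start is None:
            start = i                      # impairment begins
        elif start is not None and rate >= throughput:
            end = i                        # candidate recovery point
            if all(r >= throughput for r in samples[i:]):
                break                      # rate stays restored
    if start is None:
        return 0.0                         # no observed impairment
    return (end - start) * interval

# Hypothetical 100 ms samples around a link-down event (packets/s):
samples = [1000, 1000, 400, 0, 0, 600, 1000, 1000]
print(rate_derived_convergence_time(samples, 1000, 0.1))  # prints 0.4
```

A real Tester samples continuously per interface; this sketch assumes
the sample list already brackets the failure event.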
4.2 Convergence Due to Local Administrative Shutdown
Objective
To obtain the IGP Route Convergence due to a local link failure event
at the DUT's Local Interface.
Procedure
1. Advertise matching IGP routes from Tester to DUT on
Preferred Egress Interface [Po07t] and Next-Best Egress Interface
[Po07t] using the topology shown in Figure 1. Set the cost of
the routes so that the Preferred Egress Interface is the
preferred next-hop.
2. Send offered load at measured Throughput with fixed packet
size to destinations matching all IGP routes from Tester to
DUT on Ingress Interface [Po07t].
3. Verify traffic routed over Preferred Egress Interface.
4. Perform administrative shutdown on the DUT's Preferred Egress
   Interface.
5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
link down event and begins to converge IGP routes and traffic
over the Next-Best Egress Interface.
6. Measure Rate-Derived Convergence Time [Po07t] as DUT converges
all IGP routes and traffic over the Next-Best Egress Interface.
7. Stop offered load. Wait 30 seconds for queues to drain.
Restart offered load.
8. Restore Preferred Egress Interface by administratively enabling
the interface.
9. Measure Reversion Convergence Time [Po07t] as DUT converges all
IGP routes and traffic back to the Preferred Egress Interface.
Results
The measured IGP Convergence time is influenced by SPF delay,
SPF Hold time, SPF Execution Time, Tree Build Time, and Hardware
Update Time [Po07a].
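The procedures above distinguish First Prefix Convergence Time (step
5) from the full convergence measured in step 6. With hypothetical
per-prefix recovery timestamps (names and data invented for this
sketch; the benchmarks themselves are defined in [Po07t]), the spread
between the fastest and slowest prefix can be illustrated as:

```python
def convergence_spread(event_time, first_rx_after_event):
    """Given the failure-event timestamp and, for each advertised
    prefix, the time its traffic first reappears on the Next-Best
    Egress Interface, return (first_prefix_time, last_prefix_time):
    the fastest and slowest per-prefix convergence times."""
    deltas = [t - event_time for t in first_rx_after_event.values()]
    return min(deltas), max(deltas)

# Hypothetical per-prefix recovery timestamps (seconds):
recovery = {"10.0.1.0/24": 10.2, "10.0.2.0/24": 10.5,
            "10.0.3.0/24": 11.1}
first, last = convergence_spread(10.0, recovery)
```

The last-prefix figure approximates, but is not identical to, the
Rate-Derived Convergence Time, which is measured from aggregate
forwarding rate rather than per-prefix timestamps.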
4.3 Convergence Due to Layer 2 Session Failure
Objective
To obtain the IGP Route Convergence due to a Local Layer 2
Session failure event, such as PPP session loss.
Procedure
1. Advertise matching IGP routes from Tester to DUT on
   Preferred Egress Interface [Po07t] and Next-Best Egress Interface
   [Po07t] using the topology shown in Figure 1. Set the cost of
   the routes so that the Preferred Egress Interface is the
   preferred next-hop.
2. Send offered load at measured Throughput with fixed packet
   size to destinations matching all IGP routes from Tester to
   DUT on Ingress Interface [Po07t].
3. Verify traffic routed over Preferred Egress Interface.
4. Remove Layer 2 session from Tester's Neighbor Interface [Po07t]
   connected to Preferred Egress Interface.
5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
   Layer 2 session down event and begins to converge IGP routes and
   traffic over the Next-Best Egress Interface.
6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
   Layer 2 session down event and converges all IGP routes and
   traffic over the Next-Best Egress Interface.
7. Stop offered load. Wait 30 seconds for queues to drain.
   Restart offered load.
8. Restore Layer 2 session on DUT's Preferred Egress Interface.
9. Measure Reversion Convergence Time [Po07t] as DUT detects the
   session up event and converges all IGP routes and traffic
   over the Preferred Egress Interface.
Results
The measured IGP Convergence time is influenced by the Layer 2
failure indication, SPF delay, SPF Hold time, SPF Execution
Time, Tree Build Time, and Hardware Update Time [Po07a].
4.4 Convergence Due to IGP Adjacency Failure
Objective
To obtain the IGP Route Convergence due to a Local IGP Adjacency
failure event.
Procedure
1. Advertise matching IGP routes from Tester to DUT on
   Preferred Egress Interface [Po07t] and Next-Best Egress Interface
   [Po07t] using the topology shown in Figure 1. Set the cost of
   the routes so that the Preferred Egress Interface is the
   preferred next-hop.
2. Send offered load at measured Throughput with fixed packet
   size to destinations matching all IGP routes from Tester to
   DUT on Ingress Interface [Po07t].
3. Verify traffic routed over Preferred Egress Interface.
4. Remove IGP adjacency from Tester's Neighbor Interface [Po07t]
   connected to Preferred Egress Interface.
5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
   IGP session failure event and begins to converge IGP routes and
   traffic over the Next-Best Egress Interface.
6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
   IGP session failure event and converges all IGP routes and
   traffic over the Next-Best Egress Interface.
7. Stop offered load. Wait 30 seconds for queues to drain.
   Restart offered load.
8. Restore IGP session on DUT's Preferred Egress Interface.
9. Measure Reversion Convergence Time [Po07t] as DUT detects the
   session up event and converges all IGP routes and traffic
   over the Preferred Egress Interface.
Results
The measured IGP Convergence time is influenced by the IGP Hello
Interval, IGP Dead Interval, SPF delay, SPF Hold time, SPF
Execution Time, Tree Build Time, and Hardware Update Time [Po07a].
4.5 Convergence Due to Route Withdrawal
Objective
To obtain the IGP Route Convergence due to Route Withdrawal.
Procedure
1. Advertise matching IGP routes from Tester to DUT on Preferred
   Egress Interface [Po07t] and Next-Best Egress Interface [Po07t]
   using the topology shown in Figure 1. Set the cost of the routes
   so that the Preferred Egress Interface is the preferred next-hop.
2. Send offered load at measured Throughput with fixed packet
   size to destinations matching all IGP routes from Tester to
   DUT on Ingress Interface [Po07t].
3. Verify traffic routed over Preferred Egress Interface.
4. Tester withdraws all IGP routes from DUT's Local Interface
   on Preferred Egress Interface.
5. Measure First Prefix Convergence Time [Po07t] as DUT processes
   the route withdrawal and begins to converge IGP routes and
   traffic over the Next-Best Egress Interface.
6. Measure Rate-Derived Convergence Time [Po07t] as DUT withdraws
   routes and converges all IGP routes and traffic over the
   Next-Best Egress Interface.
7. Stop offered load. Wait 30 seconds for queues to drain.
   Restart offered load.
8. Re-advertise IGP routes to DUT's Preferred Egress Interface.
9. Measure Reversion Convergence Time [Po07t] as DUT converges all
   IGP routes and traffic over the Preferred Egress Interface.
Results
The measured IGP Convergence time is the SPF Processing and FIB
Update time as influenced by the SPF delay, SPF Hold time, SPF
Execution Time, Tree Build Time, and Hardware Update Time [Po07a].
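A coarse cross-check on the convergence figure, assuming a constant
offered load, divides total packet loss during the event by the
offered rate. This loss-derived estimate is not a benchmark defined
in [Po07t] and is shown only as a sanity check on the Rate-Derived
measurement:

```python
def loss_derived_time(tx_packets, rx_packets, offered_pps):
    """Approximate convergence time as total packets lost during the
    withdrawal event divided by the constant offered load rate."""
    return (tx_packets - rx_packets) / offered_pps

# 50,000 packets lost at an offered load of 100,000 packets/s:
print(loss_derived_time(1_000_000, 950_000, 100_000))  # prints 0.5
```

The estimate assumes every lost packet is attributable to the event
and that the offered load never exceeds the measured Throughput.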
4.6 Convergence Due to Cost Change
Objective
To obtain the IGP Route Convergence due to route cost change.
Procedure
1. Advertise matching IGP routes from Tester to DUT on Preferred
   Egress Interface [Po07t] and Next-Best Egress Interface [Po07t]
   using the topology shown in Figure 1. Set the cost of the routes
   so that the Preferred Egress Interface is the preferred next-hop.
2. Send offered load at measured Throughput with fixed packet
   size to destinations matching all IGP routes from Tester to
   DUT on Ingress Interface [Po07t].
3. Verify traffic routed over Preferred Egress Interface.
4. Tester increases cost for all IGP routes at DUT's Preferred
   Egress Interface so that the Next-Best Egress Interface
   has lower cost and becomes the preferred path.
5. Measure First Prefix Convergence Time [Po07t] as DUT detects the
   cost change event and begins to converge IGP routes and traffic
   over the Next-Best Egress Interface.
6. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
   cost change event and converges all IGP routes and traffic
   over the Next-Best Egress Interface.
7. Stop offered load. Wait 30 seconds for queues to drain.
   Restart offered load.
8. Re-advertise IGP routes to DUT's Preferred Egress Interface
   with original lower cost metric.
9. Measure Reversion Convergence Time [Po07t] as DUT converges all
   IGP routes and traffic over the Preferred Egress Interface.
Results
There should be no measured packet loss for this case.
4.7 Convergence Due to ECMP Member Interface Failure
Objective
To obtain the IGP Route Convergence due to a local link failure event
of an ECMP Member.
Procedure
1. Configure ECMP Set as shown in Figure 3.
2. Advertise matching IGP routes from Tester to DUT on each ECMP
   member.
3. Send offered load at measured Throughput with fixed packet size to
   destinations matching all IGP routes from Tester to DUT on Ingress
   Interface [Po07t].
4. Verify traffic routed over all members of ECMP Set.
5. Remove link on Tester's Neighbor Interface [Po07t] connected to
   one of the DUT's ECMP member interfaces.
6. Measure First Prefix Convergence Time [Po07t] as DUT detects the
   link down event and begins to converge IGP routes and traffic
   over the remaining ECMP members.
7. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
   link down event and converges all IGP routes and traffic
   over the other ECMP members. At the same time measure
   Out-of-Order Packets [Po06] and Duplicate Packets [Po06].
8. Stop offered load. Wait 30 seconds for queues to drain.
   Restart offered load.
9. Restore link on Tester's Neighbor Interface connected to
   DUT's ECMP member interface.
10. Measure Reversion Convergence Time [Po07t] as DUT detects the
    link up event and converges IGP routes and some distribution
    of traffic over the restored ECMP member.
Results
The measured IGP Convergence time is influenced by Local link
failure indication, Tree Build Time, and Hardware Update Time
[Po07a].
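Step 7 additionally counts Out-of-Order Packets and Duplicate
Packets as traffic is rebalanced across the surviving ECMP members.
A simplified sequence-number check (a hypothetical helper; the formal
definitions are in [Po06]) can be sketched as:

```python
def count_ooo_and_dup(seq_numbers):
    """Count duplicate and out-of-order packets in a received stream.

    A packet is a duplicate if its sequence number was already seen;
    a non-duplicate packet is out-of-order if its sequence number is
    lower than the highest sequence number received before it.
    (Simplified relative to the formal [Po06] definitions.)
    """
    seen = set()
    highest = -1
    dup = ooo = 0
    for seq in seq_numbers:
        if seq in seen:
            dup += 1
            continue
        seen.add(seq)
        if seq < highest:
            ooo += 1
        highest = max(highest, seq)
    return ooo, dup

# Packets rebalanced across ECMP members may arrive reordered:
ooo, dup = count_ooo_and_dup([0, 1, 2, 4, 3, 5, 5, 6])  # (1, 1)
```

A real Tester tracks sequence numbers per flow; this sketch assumes a
single stream with one sequence-number space.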
4.8 Convergence Due to ECMP Member Remote Interface Failure
Objective
To obtain the IGP Route Convergence due to a remote interface
failure event for an ECMP Member.
Procedure
1. Configure ECMP Set as shown in Figure 2 in which the links
from R1 to R2 and R1 to R3 are members of an ECMP Set.
2. Advertise matching IGP routes from Tester to SUT to balance
traffic to each ECMP member.
3. Send offered load at measured Throughput with fixed packet
size to destinations matching all IGP routes from Tester to
SUT on Ingress Interface [Po07t].
4. Verify traffic routed over all members of ECMP Set.
5. Remove link on Tester's Neighbor Interface to R2 or R3.
6. Measure First Prefix Convergence Time [Po07t] as SUT detects
   the link down event and begins to converge IGP routes and
   traffic over the remaining ECMP members.
7. Measure Rate-Derived Convergence Time [Po07t] as SUT detects
the link down event and converges all IGP routes and traffic
over the other ECMP members. At the same time measure
Out-of-Order Packets [Po06] and Duplicate Packets [Po06].
8. Stop offered load. Wait 30 seconds for queues to drain.
Restart offered load.
9. Restore link on Tester's Neighbor Interface to R2 or R3.
10. Measure Reversion Convergence Time [Po07t] as SUT detects the
link up event and converges IGP routes and some distribution
of traffic over the restored ECMP member.
Results
The measured IGP Convergence time is influenced by Local link
failure indication, Tree Build Time, and Hardware Update Time
[Po07a].
4.9 Convergence Due to Parallel Link Interface Failure
Objective
To obtain the IGP Route Convergence due to a local link failure
event for a Member of a Parallel Link. The links can be used
for data Load Balancing.
Procedure
1. Configure Parallel Link as shown in Figure 4.
2. Advertise matching IGP routes from Tester to DUT on
   each Parallel Link member.
3. Send offered load at measured Throughput with fixed packet
   size to destinations matching all IGP routes from Tester to
   DUT on Ingress Interface [Po07t].
4. Verify traffic routed over all members of Parallel Link.
5. Remove link on Tester's Neighbor Interface [Po07t] connected to
   one of the DUT's Parallel Link member interfaces.
6. Measure First Prefix Convergence Time [Po07t] as DUT detects the
   link down event and begins to converge IGP routes and traffic
   over the remaining Parallel Link members.
7. Measure Rate-Derived Convergence Time [Po07t] as DUT detects the
   link down event and converges all IGP routes and traffic over
   the other Parallel Link members. At the same time measure
   Out-of-Order Packets [Po06] and Duplicate Packets [Po06].
8. Stop offered load. Wait 30 seconds for queues to drain.
   Restart offered load.
9. Restore link on Tester's Neighbor Interface connected to
   DUT's Parallel Link member interface.
10. Measure Reversion Convergence Time [Po07t] as DUT detects the
    link up event and converges IGP routes and some distribution
    of traffic over the restored Parallel Link member.
Results
The measured IGP Convergence time is influenced by the Local
link failure indication, Tree Build Time, and Hardware Update
Time [Po07a].
5. IANA Considerations
This document requires no IANA considerations.
6. Security Considerations
Documents of this type do not directly affect the security of
the Internet or corporate networks as long as benchmarking
is not performed on devices or systems connected to operating
networks.
7. Acknowledgements
Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David Ward,
and the BMWG for their contributions to this work.
8. References
8.1 Normative References
[Br91] Bradner, S., "Benchmarking Terminology for Network
       Interconnection Devices", RFC 1242, IETF, March 1991.
[Br97] Bradner, S., "Key words for use in RFCs to Indicate
       Requirement Levels", BCP 14, RFC 2119, IETF, March 1997.
[Br99] Bradner, S. and McQuaid, J., "Benchmarking Methodology for
       Network Interconnect Devices", RFC 2544, IETF, March 1999.
[Ca90] Callon, R., "Use of OSI IS-IS for Routing in TCP/IP and Dual
       Environments", RFC 1195, IETF, December 1990.
[Ma98] Mandeville, R., "Benchmarking Terminology for LAN
       Switching Devices", RFC 2285, IETF, February 1998.
[Mo98] Moy, J., "OSPF Version 2", RFC 2328, IETF, April 1998.
[Po06] Poretsky, S., et al., "Terminology for Benchmarking
       Network-layer Traffic Control Mechanisms", RFC 4689, IETF,
       November 2006.
[Po07a] Poretsky, S., "Considerations for Benchmarking Link-State
        IGP Convergence", draft-ietf-bmwg-igp-dataplane-conv-app-14,
        work in progress, November 2007.
[Po07t] Poretsky, S., Imhoff, B., "Benchmarking Terminology for
        Link-State IGP Convergence",
        draft-ietf-bmwg-igp-dataplane-conv-term-14, work in
        progress, November 2007.
8.2 Informative References
None
9. Authors' Addresses
Scott Poretsky
Reef Point Systems
3 Federal Street
Billerica, MA 01821
USA
Phone: + 1 508 439 9008
EMail: sporetsky@reefpoint.com
Brent Imhoff
Juniper Networks
1194 North Mathilda Ave
Sunnyvale, CA 94089
USA
Phone: + 1 314 378 2571
EMail: bimhoff@planetspork.com
Full Copyright Statement
Copyright (C) The IETF Trust (2007).
This document is subject to the rights, licenses and restrictions
contained in BCP 78, and except as set forth therein, the authors
retain all their rights.
This document and the information contained herein are provided
on the procedures with respect to rights in RFC documents can be
found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any
assurances of licenses to be made available, or the result of an
attempt made to obtain a general license or permission for the use of
such proprietary rights by implementers or users of this
specification can be obtained from the IETF on-line IPR repository at
http://www.ietf.org/ipr.
The IETF invites any interested party to bring to its attention any
copyrights, patents or patent applications, or other proprietary
rights that may cover technology that may be required to implement
this standard. Please address the information to the IETF at
ietf-ipr@ietf.org.
Acknowledgement
Funding for the RFC Editor function is currently provided by the
Internet Society.