Network Working Group                                        S. Poretsky
Internet-Draft                                      Allot Communications
Intended status: Informational                                 B. Imhoff
Expires: January 14, 2010                               Juniper Networks
                                                           K. Michielsen
                                                           Cisco Systems
                                                           July 13, 2009

Benchmarking Methodology for Link-State IGP Data Plane Route Convergence

               draft-ietf-bmwg-igp-dataplane-conv-meth-18

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.  This document may contain material
   from IETF Documents or IETF Contributions published or made publicly
   available before November 10, 2008.  The person(s) controlling the
   copyright in some of this material may not have granted the IETF
   Trust the right to allow modifications of such material outside the
   IETF Standards Process.  Without obtaining an adequate license from
   the person(s) controlling the copyright in such materials, this
   document may not be modified outside the IETF Standards Process, and
   derivative works of it may not be created outside the IETF Standards
   Process, except to format it for publication as an RFC or to
   translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on January 14, 2010.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your rights
   and restrictions with respect to this document.

Abstract

   This document describes the methodology for benchmarking Link-State
   Interior Gateway Protocol (IGP) Route Convergence.  The methodology
   is to be used for benchmarking IGP convergence time through
   externally observable (black box) data plane measurements.  The
   methodology can be applied to any link-state IGP, such as ISIS and
   OSPF.


Table of Contents

   1.  Introduction and Scope . . . . . . . . . . . . . . . . . . . .  5
   2.  Existing Definitions . . . . . . . . . . . . . . . . . . . . .  5
   3.  Test Topologies  . . . . . . . . . . . . . . . . . . . . . . .  5
     3.1.  Test topology for local changes  . . . . . . . . . . . . .  5
     3.2.  Test topology for remote changes . . . . . . . . . . . . .  6
     3.3.  Test topology for local ECMP changes . . . . . . . . . . .  7
     3.4.  Test topology for remote ECMP changes  . . . . . . . . . .  7
     3.5.  Test topology for Parallel Link changes  . . . . . . . . .  8
   4.  Convergence Time and Loss of Connectivity Period . . . . . . .  9
   5.  Test Considerations  . . . . . . . . . . . . . . . . . . . . . 13
     5.1.  IGP Selection  . . . . . . . . . . . . . . . . . . . . . . 13
     5.2.  Routing Protocol Configuration . . . . . . . . . . . . . . 13
     5.3.  IGP Topology . . . . . . . . . . . . . . . . . . . . . . . 13
     5.4.  Timers . . . . . . . . . . . . . . . . . . . . . . . . . . 14
     5.5.  Interface Types  . . . . . . . . . . . . . . . . . . . . . 14
     5.6.  Offered Load . . . . . . . . . . . . . . . . . . . . . . . 14
     5.7.  Measurement Accuracy . . . . . . . . . . . . . . . . . . . 15
     5.8.  Measurement Statistics . . . . . . . . . . . . . . . . . . 15
     5.9.  Tester Capabilities  . . . . . . . . . . . . . . . . . . . 15
   6.  Selection of Convergence Time Benchmark Metrics and Methods  . 16
     6.1.  Loss-Derived Method  . . . . . . . . . . . . . . . . . . . 16
       6.1.1.  Tester capabilities  . . . . . . . . . . . . . . . . . 16
       6.1.2.  Benchmark Metrics  . . . . . . . . . . . . . . . . . . 16
       6.1.3.  Measurement Accuracy . . . . . . . . . . . . . . . . . 16
     6.2.  Rate-Derived Method  . . . . . . . . . . . . . . . . . . . 17
       6.2.1.  Tester Capabilities  . . . . . . . . . . . . . . . . . 17
       6.2.2.  Benchmark Metrics  . . . . . . . . . . . . . . . . . . 17
       6.2.3.  Measurement Accuracy . . . . . . . . . . . . . . . . . 17
     6.3.  Route-Specific Loss-Derived Method . . . . . . . . . . . . 17
       6.3.1.  Tester Capabilities  . . . . . . . . . . . . . . . . . 17
       6.3.2.  Benchmark Metrics  . . . . . . . . . . . . . . . . . . 18
       6.3.3.  Measurement Accuracy . . . . . . . . . . . . . . . . . 18
   7.  Reporting Format . . . . . . . . . . . . . . . . . . . . . . . 18
   8.  Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . 20
     8.1.  Interface failures . . . . . . . . . . . . . . . . . . . . 21
       8.1.1.  Convergence Due to Local Interface Failure . . . . . . 21
       8.1.2.  Convergence Due to Remote Interface Failure  . . . . . 22
       8.1.3.  Convergence Due to ECMP Member Local Interface
               Failure  . . . . . . . . . . . . . . . . . . . . . . . 24
       8.1.4.  Convergence Due to ECMP Member Remote Interface
               Failure  . . . . . . . . . . . . . . . . . . . . . . . 25
       8.1.5.  Convergence Due to Parallel Link Interface Failure . . 26
     8.2.  Other failures . . . . . . . . . . . . . . . . . . . . . . 27
       8.2.1.  Convergence Due to Layer 2 Session Loss  . . . . . . . 27
       8.2.2.  Convergence Due to Loss of IGP Adjacency . . . . . . . 28
       8.2.3.  Convergence Due to Route Withdrawal  . . . . . . . . . 30
     8.3.  Administrative changes . . . . . . . . . . . . . . . . . . 31
        8.3.1.  Convergence Due to Local Administrative Shutdown . . . 31
       8.3.2.  Convergence Due to Cost Change . . . . . . . . . . . . 32
    9.  Security Considerations  . . . . . . . . . . . . . . . . . . . 34
   10. IANA Considerations  . . . . . . . . . . . . . . . . . . . . . 34
   11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 34
   12. Normative References . . . . . . . . . . . . . . . . . . . . . 34
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 35

1.  Introduction and Scope

   This document describes the methodology for benchmarking Link-State
   Interior Gateway Protocol (IGP) convergence.  The motivation and
   applicability for this benchmarking is described in [Po09a].  The
   terminology to be used for this benchmarking is described in [Po09t].

   IGP convergence time is measured on the data plane at the Tester by
   observing packet loss through the DUT.  All factors contributing to
   convergence time are accounted for by measuring on the data plane, as
   discussed in [Po09a].  The test cases in this document are black-box
   tests that emulate the network events that cause convergence, as
   described in [Po09a].

   The methodology described in this document can be applied to IPv4 and
   IPv6 traffic and link-state IGPs such as ISIS [Ca90][Ho08], OSPF
   [Mo98][Co08], and others.

2.  Existing Definitions

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in BCP 14, RFC 2119
   [Br97].  RFC 2119 defines the use of these key words to help make the
   intent of standards track documents as clear as possible.  While this
   document uses these keywords, this document is not a standards track
   document.

   This document uses much of the terminology defined in [Po09t] and
   uses existing terminology defined in other BMWG work.  Examples
   include, but are not limited to:

      Throughput                         [Ref.[Br91], section 3.17]
      Device Under Test (DUT)            [Ref.[Ma98], section 3.1.1]
      System Under Test (SUT)            [Ref.[Ma98], section 3.1.2]
      Out-of-order Packet                [Ref.[Po06], section 3.3.2]
      Duplicate Packet                   [Ref.[Po06], section 3.3.3]
      Stream                             [Ref.[Po06], section 3.3.2]
      Loss Period                        [Ref.[Ko02], section 4]

3.  Test Topologies

3.1.  Test topology for local changes

   Figure 1 shows the test topology to measure IGP convergence time due
   to local Convergence Events such as Local Interface failure
   (Section 8.1.1), layer 2 session failure (Section 8.2.1), and IGP
   adjacency failure (Section 8.2.2).  This topology is also used to
   measure IGP convergence time due to the route withdrawal
   (Section 8.2.3), and route cost change (Section 8.3.2) Convergence
   Events.  IGP adjacencies MUST be established between Tester and DUT,
   one on the Preferred Egress Interface and one on the Next-Best Egress
   Interface.  For this purpose the Tester emulates two routers, each
   establishing one adjacency with the DUT.  An IGP adjacency MAY be
   established on the Ingress Interface between Tester and DUT.

            ---------       Ingress Interface         ----------
            |       |<--------------------------------|        |
            |       |                                 |        |
            |       |    Preferred Egress Interface   |        |
            |  DUT  |-------------------------------->| Tester |
            |       |                                 |        |
            |       |-------------------------------->|        |
            |       |    Next-Best Egress Interface   |        |
            ---------                                 ----------

         Figure 1: IGP convergence test topology for local changes

3.2.  Test topology for remote changes

   Figure 2 shows the test topology to measure IGP convergence time due
   to Remote Interface failure (Section 8.1.2).  In this topology the
   two routers R1 and R2 are considered System Under Test (SUT) and
   SHOULD be identically configured devices of the same model.  IGP
   adjacencies MUST be established between Tester and SUT, one on the
   Preferred Egress Interface and one on the Next-Best Egress Interface.
   For this purpose the Tester emulates one or two routers.  An IGP
   adjacency MAY be established on the Ingress Interface between Tester
   and SUT.  In this topology there is a possibility of a transient
   microloop between R1 and R2 during convergence.

                       ------                      ----------
                       |    |  Preferred           |        |
              ------   | R2 |--------------------->|        |
              |    |-->|    |  Egress Interface    |        |
              |    |   ------                      |        |
              | R1 |                               | Tester |
              |    |           Next-Best           |        |
              |    |------------------------------>|        |
              ------           Egress Interface    |        |
                 ^                                 ----------
                 |                                     |
                 ---------------------------------------
                             Ingress Interface

        Figure 2: IGP convergence test topology for remote changes

3.3.  Test topology for local ECMP changes

   Figure 3 shows the test topology to measure IGP convergence time due
   to local Convergence Events with members of an Equal Cost Multipath
   (ECMP) set (Section 8.1.3).  In this topology, the DUT is configured
   with each egress interface as a member of a single ECMP set and the
   Tester emulates N next-hop routers, one router for each member.  IGP
   adjacencies MUST be established between Tester and DUT, one on each
   member of the ECMP set.  For this purpose each of the N routers
   emulated by the Tester establishes one adjacency with the DUT.  An
   IGP adjacency MAY be established on the Ingress Interface between
   Tester and DUT.

            ---------       Ingress Interface         ----------
            |       |<--------------------------------|        |
            |       |                                 |        |
            |       |     ECMP set interface 1        |        |
            |       |-------------------------------->|        |
            |  DUT  |               .                 | Tester |
            |       |               .                 |        |
            |       |               .                 |        |
            |       |-------------------------------->|        |
            |       |     ECMP set interface N        |        |
            ---------                                 ----------

       Figure 3: IGP convergence test topology for local ECMP change

3.4.  Test topology for remote ECMP changes

   Figure 4 shows the test topology to measure IGP convergence time due
   to remote Convergence Events with members of an Equal Cost Multipath
   (ECMP) set (Section 8.1.4).  In this topology the two routers R1 and
   R2 are considered System Under Test (SUT) and MUST be identically
   configured devices of the same model.  Router R1 is configured with
   each egress interface as a member of a single ECMP set and the Tester
   emulates N next-hop routers, one router for each member.  IGP
   adjacencies MUST be established between Tester and SUT, one on each
   egress interface of SUT.  For this purpose each of the N routers
   emulated by the Tester establishes one adjacency with the SUT.  An
   IGP adjacency MAY be established on the Ingress Interface between
   Tester and SUT.  In this topology there is a possibility of a
   transient microloop between R1 and R2 during convergence.

                                        ------     ----------
                                        |    |     |        |
              ------      ECMP set      | R2 |---->|        |
              |    |------------------->|    |     |        |
              |    |      Interface 1   ------     |        |
              |    |                               |        |
              |    |      ECMP set interface 2     |        |
              | R1 |------------------------------>| Tester |
              |    |               .               |        |
              |    |               .               |        |
              |    |               .               |        |
              |    |------------------------------>|        |
              ------      ECMP set interface N     |        |
                 ^                                 ----------
                 |                                     |
                 ---------------------------------------
                             Ingress Interface

    Figure 4: IGP convergence test topology for remote ECMP convergence

3.5.  Test topology for Parallel Link changes

   Figure 5 shows the test topology to measure IGP convergence time due
   to local Convergence Events with members of a Parallel Link
   (Section 8.1.5).  In this topology, the DUT is configured with each
   egress interface as a member of a Parallel Link and the Tester
   emulates the single next-hop router.  IGP adjacencies MUST be
   established on all N members of the Parallel Link between Tester and
   DUT.  For this purpose the router emulated by the Tester establishes
   N adjacencies with the DUT.  An IGP adjacency MAY be established on
   the Ingress Interface between Tester and DUT.

            ---------       Ingress Interface         ----------
            |       |<--------------------------------|        |
            |       |                                 |        |
            |       |     Parallel Link Interface 1   |        |
            |       |-------------------------------->|        |
            |  DUT  |               .                 | Tester |
            |       |               .                 |        |
            |       |               .                 |        |
            |       |-------------------------------->|        |
            |       |     Parallel Link Interface N   |        |
            ---------                                 ----------

     Figure 5: IGP convergence test topology for Parallel Link changes

4.  Convergence Time and Loss of Connectivity Period

   Two concepts will be highlighted in this section: convergence time
   and loss of connectivity period.

   The Route Convergence [Po09t] time indicates the period in time
   between the Convergence Event Instant [Po09t] and the instant in time
   the DUT is ready to forward traffic for a specific route on its Next-
   Best Egress Interface and maintains this state for the duration of
   the Sustained Convergence Validation Time [Po09t].  To measure Route
   Convergence time, the Convergence Event Instant and the traffic
   received from the Next-Best Egress Interface need to be observed.

   The Route Loss of Connectivity Period [Po09t] indicates the time
   during which traffic to a specific route is lost following a
   Convergence Event until Full Convergence [Po09t] completes.  This
   Route Loss of Connectivity Period can consist of one or more Loss
   Periods [Ko02].  For the testcases described in this document it is
   expected to have a single Loss Period.  To measure Route Loss of
   Connectivity Period, the traffic received from the Preferred Egress
   Interface and the traffic received from the Next-Best Egress
   Interface need to be observed.

   The Route Loss of Connectivity Period is the most important of the
   two, since it has a direct impact on the network user's application
   performance.

   In general the Route Convergence time is larger than or equal to the
   Route Loss of Connectivity Period.  Depending on which Convergence
   Event occurs and how this Convergence Event is applied, traffic for a
   route may still be forwarded over the Preferred Egress Interface
   after the Convergence Event Instant, before converging to the Next-
   Best Egress Interface.  In that case the Route Loss of Connectivity
   Period is shorter than the Route Convergence time.

   At least one condition needs to be fulfilled for Route Convergence
   time to be equal to Route Loss of Connectivity Period.  The condition
   is that the Convergence Event causes an instantaneous traffic loss
   for the measured route.  A fiber cut on the Preferred Egress
   Interface is an example of such a Convergence Event.  For Convergence
   Events caused by the Tester, such as an IGP cost change, the Tester
   may start to drop all traffic received from the Preferred Egress
   Interface at the Convergence Event Instant to achieve the same
   result.

   A second condition applies to Route Convergence time measurements
   based on Connectivity Packet Loss [Po09t].  This second condition is
   that there is only a single Loss Period during Route Convergence.
   For the testcases described in this document this is expected to be
   the case.

   To measure convergence time without real instantaneous traffic loss
   at the Convergence Event Instant, such as a reversion of a link
   failure Convergence Event, the Tester SHOULD collect a timestamp at
   the time instant traffic starts and a timestamp at the Convergence
   Event Instant, and only observe packet statistics on the Next-Best
   Egress Interface.

   The Convergence Event Instant together with the receive rate
   observations on the Next-Best Egress Interface allow the convergence
   benchmarks to be derived using the Rate-Derived Method [Po09t].
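   As an illustration only (not part of the methodology; the function
   and variable names are ours, and the Sustained Convergence
   Validation Time check is omitted for brevity), the Rate-Derived
   calculation can be sketched as:

   ```python
   # Illustrative sketch: derive a convergence time from timestamped
   # Forwarding Rate samples collected on the Next-Best Egress
   # Interface.  Names are illustrative, not taken from [Po09t].

   def rate_derived_convergence_time(samples, offered_load_pps, cei):
       """samples: (timestamp, forwarding_rate_pps) tuples sorted by
       time; cei: Convergence Event Instant.  Returns the convergence
       time, or None if full recovery is not observed."""
       for ts, rate in samples:
           # First instant at or after the Convergence Event at which
           # the Forwarding Rate again equals the Offered Load.
           if ts >= cei and rate >= offered_load_pps:
               return ts - cei
       return None  # convergence not observed in the sample window

   # Offered load 1000 packets/s; Convergence Event Instant at t=2.
   samples = [(0, 0), (2, 0), (4, 400), (6, 800), (8, 1000), (10, 1000)]
   print(rate_derived_convergence_time(samples, 1000, 2))  # prints 6
   ```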

   By observing packets on the Next-Best Egress Interface only, the
   measured packet loss is the number of lost packets between traffic
   start and Convergence Recovery Instant.  To measure convergence times
   using a loss-derived method, packet loss between the Convergence
   Event Instant and the Convergence Recovery Instant is needed.  The
   time between traffic start and Convergence Event Instant must be
   accounted for.

   Figure 6 illustrates a Convergence Event without instantaneous
   traffic loss for all routes.  The top graph shows the Forwarding Rate
   over all routes, the bottom graph shows the Forwarding Rate for a
   single route Rta.  Some time after the Convergence Event Instant, the
   Forwarding Rate observed on the Preferred Egress Interface starts to
   decrease.  In the example, route Rta is the first route to experience
   packet loss, at time Ta.  Some time later, the Forwarding Rate
   observed on the Next-Best Egress Interface starts to increase.  In
   the example, route Rta is the first route to complete convergence, at
   time Ta'.

                ^
           Fwd  |
           Rate |-------------                    ............
                |             \                  .
                |              \                .
                |               \              .
                |                \            .
                |.................-.-.-.-.-.-.----------------
                +----+-------+---------------+----------------->
                ^    ^       ^               ^             time
               T0   CEI      Ta              Ta'

                ^
           Fwd  |
           Rate |-------------               .................
           Rta  |            |               .
                |            |               .
                |.............-.-.-.-.-.-.-.-.----------------
                +----+-------+---------------+----------------->
                ^    ^       ^               ^             time
               T0   CEI      Ta              Ta'

                Preferred Egress Interface: ---
                Next-Best Egress Interface: ...

   With CEI the Convergence Event Instant; T0 the time instant traffic
   starts; Ta the time instant traffic loss for route Rta starts; Ta'
   the time instant traffic loss for route Rta ends.

                                 Figure 6

   If only packets received on the Next-Best Egress Interface are
   observed, the duration of the packet loss period for route Rta
   observed on the Next-Best Egress Interface can be calculated from the
   received packets as in Equation 1.  Since the Convergence Event
   Instant is the start time for convergence time measurement, the
   period in time between T0 and CEI needs to be subtracted from the
   calculated result to become the convergence time, as in Equation 2.

   Next-Best Egress Interface packet loss period
       = (packets transmitted
           - packets received from Next-Best Egress Interface) / tx rate
       = Ta' - T0

                                 Equation 1

      convergence time
          = Next-Best Egress Interface packet loss period - (CEI - T0)
          = Ta' - CEI

                                Equation 2
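   As a worked illustration of Equations 1 and 2 (a sketch only; the
   function name and the example numbers are ours, not from [Po09t]):

   ```python
   # Illustrative sketch of Equations 1 and 2: derive the convergence
   # time from packet counters and timestamps.

   def convergence_time(tx_packets, rx_packets_next_best, tx_rate_pps,
                        t0, cei):
       """tx_rate_pps: offered load in packets/s; t0: time instant
       traffic starts; cei: Convergence Event Instant."""
       # Equation 1: Next-Best Egress Interface packet loss period
       # = (packets transmitted - packets received) / tx rate = Ta' - T0
       loss_period = (tx_packets - rx_packets_next_best) / tx_rate_pps
       # Equation 2: loss period minus (CEI - T0), i.e. Ta' - CEI
       return loss_period - (cei - t0)

   # Example: offered load 1000 packets/s for 20 s (20000 packets sent),
   # T0 = 0 s, CEI = 2 s; 10000 packets arrive on the Next-Best Egress
   # Interface, so Ta' = 10 s and the convergence time is Ta' - CEI.
   print(convergence_time(20000, 10000, 1000.0, 0.0, 2.0))  # prints 8.0
   ```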

   Route Loss of Connectivity Period SHOULD be measured using the
   Route-Specific Loss-Derived Method.  Since the start instant and end
   instant of the Route Loss of Connectivity Period can be different for
   each route, these can not be accurately derived by only observing
   global statistics over all routes.  An example may clarify this.

   Following a Convergence Event, route Rta is the first route for which
   packet loss starts; the Route Loss of Connectivity Period for route
   Rta starts at time Ta.  Route Rtb is the last route for which packet
   loss starts; the Route Loss of Connectivity Period for route Rtb
   starts at time Tb with Tb>Ta.

                  ^
             Fwd  |
             Rate |--------                       -----------
                  |        \                     /
                  |         \                   /
                  |          \                 /
                  |           \               /
                  |            ---------------
                  +------------------------------------------>
                           ^   ^             ^    ^      time
                          Ta   Tb           Ta'   Tb'
                                            Tb''  Ta''

            Figure 7: Example Route Loss Of Connectivity Period

   If the DUT implementation would be such that route Rta would be the
   first route for which traffic loss ends at time Ta' with Ta'>Tb, and
   route Rtb would be the last route for which traffic loss ends at time
   Tb' with Tb'>Ta', then by only observing global traffic statistics
   over all routes, the minimum Route Loss of Connectivity Period would
   be measured as Ta'-Ta.  The maximum calculated Route Loss of
   Connectivity Period would be Tb'-Ta.  The real minimum and maximum
   Route Loss of Connectivity Periods are Ta'-Ta and Tb'-Tb.
   Illustrating this with the numbers Ta=0, Tb=1, Ta'=3, and Tb'=5,
   would give a LoC Period between 3 and 5 derived from the global
   traffic statistics, versus the real LoC Period between 3 and 4.
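   The numbers in this example can be verified with a short sketch
   (illustrative only; the route names and data layout are ours):

   ```python
   # Illustrative check: real per-route Loss of Connectivity Periods
   # versus the min/max values derived from global statistics only.
   loss_start = {"Rta": 0, "Rtb": 1}   # Ta, Tb
   loss_end = {"Rta": 3, "Rtb": 5}     # Ta', Tb'

   # Real per-route LoC Periods: Ta' - Ta and Tb' - Tb.
   real = {r: loss_end[r] - loss_start[r] for r in loss_start}

   # From global statistics, loss appears to start at min(Ta, Tb) for
   # every route, so only the bounds below can be computed.
   first_loss = min(loss_start.values())
   derived_min = min(loss_end.values()) - first_loss   # Ta' - Ta
   derived_max = max(loss_end.values()) - first_loss   # Tb' - Ta

   print(real)                       # {'Rta': 3, 'Rtb': 4}
   print(derived_min, derived_max)   # 3 5
   ```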

   If the DUT implementation would be such that route Rtb would be the
   first route for which packet loss ends at time Tb'' and route Rta
   would be the last route for which packet loss ends at time Ta'', then
   the minimum and maximum Route Loss of Connectivity Periods derived by
   observing only global traffic statistics would be Tb''-Ta and
   Ta''-Ta.  The real minimum and maximum Route Loss of Connectivity
   Periods are Tb''-Tb and Ta''-Ta.  Illustrating this with the numbers
   Ta=0, Tb=1, Ta''=5, Tb''=3, would give a LoC Period between 3 and 5
   derived from the global traffic statistics, versus the real LoC
   Period between 2 and 5.

   The two implementation variations in the SUT must be above example would result
   in the same model derived minimum and identically
   configured.

        ---------       Ingress Interface         ---------
        |       |<--------------------------------|       |
        |       |                                 |       |
        |       |    Preferred Egress Interface   |       |
        |  DUT  |-------------------------------->| Tester|
        |       |                                 |       |
        |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
        |       |    Next-Best Egress Interface   |       |
        ---------                                 ---------

      Figure 1.  Test Topology 1: IGP Convergence Test Topology
                 for Local Changes
               Link-State IGP Data Plane maximum Route Convergence

                -----                       ---------
                |   | Preferred             |       |
        -----   |R2 |---------------------->|       |
        |   |-->|   | Egress Interface      |       |
        |   |   -----                       |       |
        |R1 |                               |Tester |
        |   |   -----                       |       |
        |   |-->|   |   Next-Best           |       |
        -----   |R3 |~~~~~~~~~~~~~~~~~~~~~~>|       |
          ^     |   |   Egress Interface    |       |
          |     -----                       ---------
          |                                     |
          |--------------------------------------
                      Ingress Interface

      Figure 2. Test Topology 2: IGP Convergence Test Topology
                for Convergence Due to Remote Changes

   Figure 2 shows the test topology to measure IGP convergence time due
   to remote changes in the network topology.  These times are measured
   by observing packet loss in the data plane at the Tester.  In this
   topology the three routers are considered a System Under Test (SUT).
   A Remote Interface [Po09t] failure on router R2 MUST result in
   convergence of traffic to router R3.  NOTE: All routers in the SUT
   must be the same model and identically configured.

        ---------       Ingress Interface         ---------
        |       |<--------------------------------|       |
        |       |                                 |       |
        |       |     ECMP Set Interface 1        |       |
        |  DUT  |-------------------------------->| Tester|
        |       |               .                 |       |
        |       |               .                 |       |
        |       |               .                 |       |
        |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
        |       |     ECMP Set Interface N        |       |
        ---------                                 ---------

      Figure 3. Test Topology 3: IGP Convergence Test Topology
                for ECMP Convergence

   Figure 3 shows the test topology to measure IGP convergence time
   with members of an Equal Cost Multipath (ECMP) Set.  These times are
   measured by observing packet loss in the data plane at the Tester.
   In this topology, the DUT is configured with each Egress interface
   as a member of an ECMP set and the Tester emulates multiple next-hop
   routers (one emulated router for each member).

5.  Test Considerations

5.1.  IGP Selection

   The test cases described in section 8 MAY be used for link-state
   IGPs, such as ISIS or OSPF.  The IGP convergence time test
   methodology is identical.

5.2.  Routing Protocol Configuration

   The obtained results for IGP convergence time may vary if other
   routing protocols are enabled and routes learned via those protocols
   are installed.  IGP convergence times MUST be benchmarked without
   routes installed from other protocols.

5.3.  IGP Topology

   The Tester emulates a single IGP topology.  The DUT establishes IGP
   adjacencies with one or more of the emulated routers in this single
   IGP topology emulated by the Tester.  See topology details in
   Section 3.  The emulated topology SHOULD only be advertised on the
   DUT egress interfaces.

   The number of IGP routes will impact the measured IGP route
   convergence time.  To obtain results similar to those that would be
   observed in an operational network, it is RECOMMENDED that the number
   of installed routes and nodes closely approximates that of the
   network (e.g. thousands of routes with tens or hundreds of nodes).

   The number of areas (for OSPF) and levels (for ISIS) can impact the
   benchmark results.

5.4.  Timers

   There are timers that may impact the measured IGP convergence times.
   The benchmark metrics MAY be measured at any fixed values for these
   timers.  To obtain results similar to those that would be observed in
   an operational network, it is RECOMMENDED to configure the timers
   with the values as configured in the operational network.

   Examples of timers that may impact measured IGP convergence time
   include, but are not limited to:

      Interface failure indication

      IGP hello timer

      IGP dead-interval or hold-timer

      LSA or LSP generation delay

      LSA or LSP flood packet pacing

      LSA or LSP retransmission packet pacing

      SPF delay

5.5.  Interface Types

   All test cases in this methodology document MAY be executed with any
   interface type.  The type of media may dictate which test cases may
   be executed.  This is because each interface type has a unique
   mechanism for detecting link failures and the speed at which that
   mechanism operates will influence the measurement results.  All
   interfaces MUST be the same media and Throughput [Br91][Br99] for
   each test case.  All interfaces SHOULD be configured as point-to-
   point.

   Figure 4 shows the test topology to measure IGP convergence time
   with members of a Parallel Link.  These times are measured by
   observing packet loss in the data plane at the Tester.  In this
   topology, the DUT is configured with each Egress interface as a
   member of a Parallel Link and the Tester emulates the single
   next-hop router.

        ---------       Ingress Interface         ---------
        |       |<--------------------------------|       |
        |       |                                 |       |
        |       |     Parallel Link Interface 1   |       |
        |  DUT  |-------------------------------->| Tester|
        |       |               .                 |       |
        |       |               .                 |       |
        |       |               .                 |       |
        |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
        |       |     Parallel Link Interface N   |       |
        ---------                                 ---------

      Figure 4. Test Topology 4: IGP Convergence Test Topology
                for Parallel Link Convergence

5.6.  Offered Load

   The Throughput of the device, as defined in [Br91] and benchmarked in
   [Br99] at a fixed packet size, needs to be determined over the
   preferred path and over the next-best path.  The Offered Load SHOULD
   be the minimum of the measured Throughput of the device over the
   primary path and over the backup path.  The packet size is selectable
   and MUST be recorded.  Packet size is measured in bytes and includes
   the IP header and payload.

   In the Remote Interface [Po09t] failure test cases using topologies 2
   and 4 there is a possibility of a transient microloop between R1 and
   R2 during convergence.  The TTL value of the packets sent by the
   Tester may influence the benchmark measurements since it determines
   which device in the topology may send an ICMP Time Exceeded Message
   for looped packets.

   The duration of the Offered Load MUST be greater than the convergence
   time.

5.7.  Measurement Accuracy

   Since packet loss is observed to measure the Convergence Time, the
   time between two successive packets offered to each individual route
   is the highest possible accuracy of any packet loss based
   measurement.  When packet jitter is much less than the convergence
   time, it is a negligible source of error and therefore it will be
   ignored here.
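To illustrate the accuracy bound above with assumed numbers (equal round-robin distribution of the Offered Load over all routes is an assumption of this sketch):

```python
# Non-normative sketch: with the Offered Load spread equally over all
# routes, the gap between two successive packets offered to the same
# route bounds the accuracy of any packet-loss based measurement.
offered_load_pps = 100_000  # assumed aggregate Offered Load
num_routes = 10_000         # assumed number of routes

per_route_packet_gap = num_routes / offered_load_pps  # seconds
print(per_route_packet_gap)  # 0.1
```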

5.8.  Measurement Statistics

   The benchmark measurements may vary for each trial, due to the
   statistical nature of timer expirations, cpu scheduling, etc.
   Evaluation of the test data must be done with an understanding of
   generally accepted testing practices regarding repeatability,
   variance and statistical significance of a small number of trials.

5.9.  Tester Capabilities

   It is RECOMMENDED that the Tester used to execute each test case has
   the following capabilities:

   1.  Ability to establish IGP adjacencies and advertise a single IGP
       topology to one or more peers.

   2.  Ability to insert a timestamp in each data packet's IP payload.

   3.  An internal time clock to control timestamping, time
       measurements, and time calculations.

   4.  Ability to distinguish traffic load received on the Preferred and
       Next-Best Interfaces [Po09t].

   5.  Ability to disable or tune specific Layer-2 and Layer-3 protocol
       functions on any interface(s).

   The Tester MAY be capable of making non-data plane convergence
   observations and of using those observations for measurements.  The
   Tester MAY be capable of sending and receiving multiple traffic
   Streams [Po06].

6.  Selection of Convergence Time Benchmark Metrics and Methods

   Different convergence time benchmark methods MAY be used to measure
   convergence time benchmark metrics.  The Tester capabilities are
   important criteria to select a specific convergence time benchmark
   method.  The criteria to select a specific benchmark method include,
   but are not limited to:

   Tester capabilities:               Sampling Interval, number of
                                      Stream statistics to collect
   Measurement accuracy:              Sampling Interval, Offered Load
   Test specification:                number of routes
   DUT capabilities:                  Throughput

6.1.  Loss-Derived Method

6.1.1.  Tester Capabilities

   The Offered Load SHOULD consist of a single Stream [Po06].  If
   sending multiple Streams, the measured packet loss statistics for all
   Streams MUST be added together.

   The destination addresses for the Offered Load MUST be distributed
   such that all routes are matched and each route is offered an equal
   share of the total Offered Load.

   In order to verify Full Convergence completion and the Sustained
   Convergence Validation Time, the Tester MUST measure Forwarding Rate
   each Packet Sampling Interval.

   The total number of packets lost between the start of the traffic and
   the end of the Sustained Convergence Validation Time is used to
   calculate the Loss-Derived Convergence Time.
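As a non-normative sketch, the total packet loss converts to a Loss-Derived Convergence Time by dividing it by the Offered Load rate; the normative definitions are in [Po09t], and the numbers below are illustrative assumptions:

```python
# Non-normative sketch: the total number of packets lost between the
# start of traffic and the end of the Sustained Convergence Validation
# Time, divided by the Offered Load rate, gives the Loss-Derived
# Convergence Time (an average over all routes).
offered_load_pps = 100_000    # assumed constant Offered Load
total_packets_lost = 250_000  # measured over the whole trial

loss_derived_convergence_time = total_packets_lost / offered_load_pps
print(loss_derived_convergence_time)  # 2.5 (seconds)
```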

6.1.2.  Benchmark Metrics

   The Loss-Derived Method can be used to measure the Loss-Derived
   Convergence Time, which is the average convergence time over all
   routes, and to measure the Loss-Derived Loss of Connectivity Period,
   which is the average Route Loss of Connectivity Period over all
   routes.

6.1.3.  Measurement Accuracy

   TBD

6.2.  Rate-Derived Method

6.2.1.  Tester Capabilities

   The Offered Load SHOULD consist of a single Stream.  If sending
   multiple Streams, the measured traffic rate statistics for all
   Streams MUST be added together.

   The destination addresses for the Offered Load MUST be distributed
   such that all routes are matched and each route is offered an equal
   share of the total Offered Load.

   The Tester measures Forwarding Rate each Sampling Interval.  The
   Packet Sampling Interval influences the observation of the different
   convergence time instants.  If the Packet Sampling Interval is large
   in comparison to the time between the convergence time instants, then
   the different time instants may not be easily identifiable from the
   Forwarding Rate observation.  The requirements for the Packet
   Sampling Interval are specified in [Po09t].  The RECOMMENDED value
   for the Packet Sampling Interval is 10 milliseconds.  The Packet
   Sampling Interval MUST be reported.
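A non-normative sketch of processing Forwarding Rate samples taken each Packet Sampling Interval follows; the threshold rules and sample values are assumptions of this sketch, and the normative definitions of the convergence instants are in [Po09t]:

```python
# Non-normative sketch: locate the convergence transition in a series
# of Forwarding Rate samples, one per Packet Sampling Interval.
sampling_interval_ms = 10  # RECOMMENDED Packet Sampling Interval
offered_load_pps = 1000    # assumed Offered Load

# Forwarding Rate per interval: full rate, a dip while converging,
# then full rate again (illustrative values).
samples = [1000, 1000, 400, 400, 700, 1000, 1000]

# First sample below the Offered Load (traffic starts being lost).
event_idx = next(i for i, s in enumerate(samples) if s < offered_load_pps)
# First later sample back at the Offered Load (forwarding restored).
recovery_idx = next(i for i in range(event_idx, len(samples))
                    if samples[i] >= offered_load_pps)

rate_derived_ms = (recovery_idx - event_idx) * sampling_interval_ms
print(rate_derived_ms)  # 30 (milliseconds)
```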

6.2.2.  Benchmark Metrics and Methods

   The Rate-Derived Method SHOULD be used to measure First Route
   Convergence Time and Full Convergence Time.  It SHOULD NOT be used to
   measure Loss of Connectivity Period (see Section 4).

6.2.3.  Measurement Accuracy

   The measurement accuracy of the Rate-Derived Method for transitions
   that occur for all routes at the same instant is equal to the Packet
   Sampling Interval and for other transitions the measurement accuracy
   is equal to the Packet Sampling Interval plus the time between two
   consecutive packets to the same destination.  The latter is the case
   since packets are sent in a particular order to all destinations in a
   stream and when part of the routes experience packet loss, it is
   unknown where in the transmit cycle packets to these routes are sent.
   This uncertainty adds to the error.
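The worst case described above can be illustrated with assumed numbers (milliseconds are used to keep the arithmetic exact):

```python
# Non-normative sketch of the Rate-Derived Method accuracy bound: the
# Packet Sampling Interval plus the time between two consecutive
# packets to the same destination.  Values are illustrative.
sampling_interval_ms = 10
offered_load_pps = 100_000
num_routes = 10_000

gap_same_destination_ms = num_routes * 1000 // offered_load_pps  # 100 ms
worst_case_accuracy_ms = sampling_interval_ms + gap_same_destination_ms
print(worst_case_accuracy_ms)  # 110
```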

6.3.  Route-Specific Loss-Derived Method
6.3.1.  Tester Capabilities

   The Offered Load consists of multiple Streams.  To measure Route-
   Specific Convergence Times, the Tester sends one Stream to each route
   in the FIB.  The Tester MUST measure packet loss for each Stream
   separately.

   In order to verify Full Convergence completion and the Sustained
   Convergence Validation Time, the Tester MUST measure packet loss each
   Packet Sampling Interval.  This measurement at each Packet Sampling
   Interval MAY be per Stream.

   Only the total packet loss measured per Stream at the end of the
   Sustained Convergence Validation Time is used to calculate the
   benchmark metrics with this method.

6.3.2.  Benchmark Metrics

   The Route-Specific Loss-Derived Method SHOULD be used to measure
   Route-Specific Convergence Times.  It is the RECOMMENDED method to
   measure Route Loss of Connectivity Period.

   Under the conditions explained in Section 4, First Route Convergence
   Time and Full Convergence Time, as benchmarked using the Rate-Derived
   Method, may be equal to the minimum and maximum, respectively, of the
   Route-Specific Convergence Times.

6.3.3.  Measurement Accuracy

   The measurement accuracy of the Route-Specific Loss-Derived Method is
   equal to the time between two consecutive packets to the same route.
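A non-normative sketch with one Stream per route: per-Stream packet loss divided by the per-Stream rate gives Route-Specific Convergence Times, from which the minimum/maximum/median/average statistics of the reporting tables follow (all values below are illustrative):

```python
# Non-normative sketch: convert per-Stream packet loss into
# Route-Specific Convergence Times and summary statistics.
import statistics

per_stream_rate_pps = 10.0                 # assumed rate of each Stream
packets_lost_per_stream = [5, 12, 12, 30]  # one entry per route/Stream

times = [lost / per_stream_rate_pps for lost in packets_lost_per_stream]
print(min(times), max(times))            # 0.5 3.0
print(statistics.median(times))          # 1.2
print(round(statistics.mean(times), 3))  # 1.475
```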

7.  Reporting Format

   For each test case, it is recommended that the reporting tables below
   are completed and all time values SHOULD be reported with resolution
   as specified in [Po09t].

        Parameter                           Units
        ----------------------------------- -----------------------
        Test Case                           test case number
        Test Topology                       (1, 2, 3, 4, or 5)
        IGP                                 (ISIS, OSPF, other)
        Interface Type                      (GigE, POS, ATM, other)
        Packet Size offered to DUT          bytes
        Offered Load                        packets per second
        IGP Routes advertised to DUT        number of IGP routes
        Nodes in emulated network           number of nodes
        Packet Sampling Interval on Tester  seconds
        Maximum Packet Delay Threshold      seconds

        Timer Values configured on DUT:
         Interface failure indication delay seconds
         IGP Hello Timer                    seconds
         IGP Dead-Interval or hold-time     seconds
         LSA Generation Delay               seconds
         LSA Flood Packet Pacing            seconds
         LSA Retransmission Packet Pacing   seconds
         SPF Delay                          seconds

   Complete the table below for the initial Convergence Event and the
   reversion Convergence Event.

     Parameter                                  Units
     ------------------------------------------ ----------------------
     Convergence Event                          (initial or reversion)

     Traffic Forwarding Metrics:
      Total number of packets offered to DUT    number of Packets
      Total number of packets forwarded by DUT  number of Packets
      Connectivity Packet Loss                  number of Packets
      Convergence Packet Loss                   number of Packets
      Out-of-Order Packets                      number of Packets
      Duplicate Packets                         number of Packets

     Convergence Benchmarks:
      Rate-Derived Method:
       First Route Convergence Time             seconds
       Full Convergence Time                    seconds
      Loss-Derived Method:
       Loss-Derived Convergence Time            seconds
      Route-Specific Loss-Derived Method:
       Number of Routes Measured                number of routes
       Route-Specific Convergence Time[n]       array of seconds
       Minimum R-S Convergence Time             seconds
       Maximum R-S Convergence Time             seconds
       Median R-S Convergence Time              seconds
       Average R-S Convergence Time             seconds

     Loss of Connectivity Benchmarks:
      Loss-Derived Method:
       Loss-Derived Loss of Connectivity Period seconds
      Route-Specific Loss-Derived Method:
       Number of Routes Measured                number of routes
       Route LoC Period[n]                      array of seconds
       Minimum Route LoC Period                 seconds
       Maximum Route LoC Period                 seconds
       Median Route LoC Period                  seconds
       Average Route LoC Period                 seconds

8.  Test Cases

   It is RECOMMENDED that all applicable test cases be performed for
   best characterization of the DUT.  The test cases follow a generic
   procedure tailored to the specific DUT configuration and Convergence
   Event [Po09t].  This generic procedure is as follows:

   1.   Establish DUT and Tester configurations and advertise an IGP
        topology from Tester to DUT.

   2.   Send Offered Load from Tester to DUT on ingress interface.

   3.   Verify traffic is routed correctly.

   4.   Introduce Convergence Event [Po09t].

   5.   Measure First Route Convergence Time [Po09t].

   6.   Measure Full Convergence Time [Po09t].

   7.   Stop Offered Load.

   8.   Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC Period
        [Po09t].

   9.   Wait sufficient time for queues to drain.

   10.  Restart Offered Load.

   11.  Reverse Convergence Event.

   12.  Measure First Route Convergence Time.

   13.  Measure Full Convergence Time.

   14.  Stop Offered Load.

   15.  Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

8.1.  Interface Failures

8.1.1.  Convergence Due to Local Interface Failure

   Objective

   To obtain the IGP convergence times due to a Local Interface failure
   event.

   Procedure

   1.   Advertise an IGP topology from Tester to DUT using the topology
        shown in Figure 1.

   2.   Send Offered Load from Tester to DUT on ingress interface.

   3.   Verify traffic is forwarded over Preferred Egress Interface.

   4.   Remove link on DUT's Preferred Egress Interface.  This is the
        Convergence Event.

   5.   Measure First Route Convergence Time.

   6.   Measure Full Convergence Time.

   7.   Stop Offered Load.

   8.   Measure Route-Specific Convergence Times and Loss-Derived
        Convergence Time.

   9.   Wait sufficient time for queues to drain.

   10.  Restart Offered Load.

   11.  Restore link on DUT's Preferred Egress Interface.

   12.  Measure First Route Convergence Time.

   13.  Measure Full Convergence Time.

   14.  Stop Offered Load.

   15.  Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   Results

   The measured IGP convergence time may be influenced by the Local link
   failure indication time, LSA/LSP delay, LSA/LSP generation time,
   LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
   routing and forwarding tables update time [Po09a].


8.1.2.  Convergence Due to Remote Interface Failure

   Objective

   To obtain the IGP convergence time due to a Remote Interface failure
   event.

   Procedure

   1.   Advertise an IGP topology from Tester to SUT using the topology
        shown in Figure 2.

   2.   Send Offered Load from Tester to SUT on ingress interface.

   3.   Verify traffic is forwarded over Preferred Egress Interface.

   4.   Remove link on Tester's interface connected to SUT's Preferred
        Egress Interface.  This is the Convergence Event.

   5.   Measure First Route Convergence Time.

   6.   Measure Full Convergence Time.

   7.   Stop Offered Load.

   8.   Measure Route-Specific Convergence Times and Loss-Derived
        Convergence Time.

   9.   Wait sufficient time for queues to drain.

   10.  Restart Offered Load.

   11.  Restore link on Tester's interface connected to SUT's Preferred
        Egress Interface.

   12.  Measure First Route Convergence Time.

   13.  Measure Full Convergence Time.

   14.  Stop Offered Load.

   15.  Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   Results

   The measured IGP convergence time may be influenced by the link
   failure indication time, LSA/LSP delay, LSA/LSP generation time,
   LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
   routing and forwarding tables update time.  This test case may
   produce Stale Forwarding [Po09t] due to a transient microloop between
   R1 and R2 during convergence, which may increase the measured
   convergence times and loss of connectivity periods.

8.1.3.  Convergence Due to ECMP Member Local Interface Failure

   Objective

   To obtain the IGP convergence time due to a Local Interface link
   failure event of an ECMP Member.

   Procedure

   1.   Advertise an IGP topology from Tester to DUT using the test
        setup shown in Figure 3.

   2.   Send Offered Load from Tester to DUT on ingress interface.

   3.   Verify traffic is forwarded over the DUT's ECMP member interface
        that will be failed in the next step.

   4.   Remove link on one of the DUT's ECMP member interfaces.  This is
        the Convergence Event.

   5.   Measure First Route Convergence Time.

   6.   Measure Full Convergence Time.

   7.   Stop Offered Load.

   8.   Measure Route-Specific Convergence Times and Loss-Derived
        Convergence Time.  At the same time measure Out-of-Order Packets
        [Po06] and Duplicate Packets [Po06].

   9.   Wait sufficient time for queues to drain.

   10.  Restart Offered Load.

   11.  Restore link on DUT's ECMP member interface.

   12.  Measure First Route Convergence Time.

   13.  Measure Full Convergence Time.

   14.  Stop Offered Load.

   15.  Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.  At the same time measure Out-of-Order Packets [Po06]
        and Duplicate Packets [Po06].

   Results

   The measured IGP convergence time may be influenced by the link
   failure indication time, LSA/LSP delay, LSA/LSP generation time,
   LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
   routing and forwarding tables update time [Po09a].
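
   The Loss-Derived Convergence Time measured in steps 8 and 15 can be
   illustrated with a brief sketch (Python; the function and counter
   names are hypothetical, assuming the loss-derived method of [Po09t]:
   total convergence packet loss divided by the Offered Load rate):

```python
def loss_derived_convergence_time(tx_packets, rx_packets, offered_load_pps):
    """Loss-Derived Convergence Time: packets lost during the
    convergence transition divided by the Offered Load rate (pps)."""
    lost = tx_packets - rx_packets   # Convergence Packet Loss
    return lost / offered_load_pps   # equivalent loss-of-connectivity time
```

   For example, 10,000 lost packets at an Offered Load of 10,000
   packets per second correspond to a loss-derived convergence time of
   1 second.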

8.1.4.  Convergence Due to ECMP Member Remote Interface Failure

   Objective

   To obtain the IGP convergence time due to a Remote Interface link
   failure event for an ECMP Member.

   Procedure

   1.   Advertise an IGP topology from Tester to DUT using the topology
        shown in Figure 4.

   2.   Send Offered Load from Tester to DUT on ingress interface.

   3.   Verify traffic is forwarded over the DUT's ECMP member interface
        that will be failed in the next step.

   4.   Remove link on Tester's interface to R2.  This is the
        Convergence Event Trigger.

   5.   Measure First Route Convergence Time.

   6.   Measure Full Convergence Time.

   7.   Stop Offered Load.

   8.   Measure Route-Specific Convergence Times and Loss-Derived
        Convergence Time.  At the same time measure Out-of-Order Packets
        [Po06] and Duplicate Packets [Po06].

   9.   Wait sufficient time for queues to drain.

   10.  Restart Offered Load.

   11.  Restore link on Tester's interface to R2.

   12.  Measure First Route Convergence Time.

   13.  Measure Full Convergence Time.

   14.  Stop Offered Load.

   15.  Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.  At the same time measure Out-of-Order Packets [Po06]
        and Duplicate Packets [Po06].

   Results

   The measured IGP convergence time may be influenced by the link
   failure indication time, LSA/LSP delay, LSA/LSP generation time,
   LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
   routing and forwarding tables update time [Po09a].  This test case
   may produce Stale Forwarding [Po09t] due to a transient microloop
   between R1 and R2 during convergence, which may increase the measured
   convergence times and loss of connectivity periods.
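
   As an illustration, a Tester deriving the convergence instants from
   its forwarding-rate samples (the rate-derived approach of [Po09t])
   might scan for the rate departing from and returning to the Offered
   Load.  A minimal sketch; the sampling structure and names are
   hypothetical:

```python
def rate_derived_instants(samples, offered_load_pps):
    """samples: list of (time_s, forwarding_rate_pps) measured by the
    Tester.  Returns (convergence_event_instant, recovery_instant):
    when the rate first drops below the Offered Load, and when it is
    first restored to the Offered Load afterwards."""
    event = recovery = None
    for t, rate in samples:
        if event is None and rate < offered_load_pps:
            event = t            # forwarding rate departs from Offered Load
        elif event is not None and rate >= offered_load_pps:
            recovery = t         # forwarding rate restored
            break
    return event, recovery
```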

8.1.5.  Convergence Due to Parallel Link Interface Failure

   Objective

   To obtain the IGP convergence time due to a local link failure event
   for a member of a parallel link.  The links can be used for data load
   balancing.

   Procedure

   1.   Advertise an IGP topology from Tester to DUT using the topology
        shown in Figure 5.

   2.   Send Offered Load from Tester to DUT on ingress interface.

   3.   Verify traffic is forwarded over the parallel link member that
        will be failed in the next step.

   4.   Remove link on one of the DUT's parallel link member interfaces.
        This is the Convergence Event.

   5.   Measure First Route Convergence Time.

   6.   Measure Full Convergence Time.

   7.   Stop Offered Load.

   8.   Measure Route-Specific Convergence Times and Loss-Derived
        Convergence Time.  At the same time measure Out-of-Order Packets
        [Po06] and Duplicate Packets [Po06].

   9.   Wait sufficient time for queues to drain.

   10.  Restart Offered Load.

   11.  Restore link on DUT's Parallel Link member interface.

   12.  Measure First Route Convergence Time.

   13.  Measure Full Convergence Time.

   14.  Stop Offered Load.

   15.  Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.  At the same time measure Out-of-Order Packets [Po06]
        and Duplicate Packets [Po06].

   Results

   The measured IGP convergence time may be influenced by the link
   failure indication time, LSA/LSP delay, LSA/LSP generation time,
   LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
   routing and forwarding tables update time [Po09a].
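
   For the ECMP and parallel-link test cases above, the traffic
   verification steps amount to checking that each member carries its
   expected share of the Offered Load.  A sketch, for illustration only
   (counter names are hypothetical):

```python
def members_balanced(member_rates_pps, offered_load_pps, tolerance=0.1):
    """Return True when every ECMP / parallel-link member forwards
    roughly an equal share of the Offered Load (within +/- tolerance,
    expressed as a fraction of the expected per-member rate)."""
    expected = offered_load_pps / len(member_rates_pps)
    return all(abs(rate - expected) <= tolerance * expected
               for rate in member_rates_pps)
```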

8.2.  Other Failures

8.2.1.  Convergence Due to Layer 2 Session Loss

   Objective

   To obtain the IGP convergence time due to a local layer 2 loss.

   Procedure

   1.   Advertise an IGP topology from Tester to DUT using the topology
        shown in Figure 1.

   2.   Send Offered Load from Tester to DUT on ingress interface.

   3.   Verify traffic is routed over Preferred Egress Interface.

   4.   Remove Layer 2 session from the DUT's Preferred Egress
        Interface.  This is the Convergence Event.

   5.   Measure First Route Convergence Time.

   6.   Measure Full Convergence Time.

   7.   Stop Offered Load.

   8.   Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   9.   Wait sufficient time for queues to drain.

   10.  Restart Offered Load.

   11.  Restore Layer 2 session on DUT's Preferred Egress Interface.

   12.  Measure First Route Convergence Time.

   13.  Measure Full Convergence Time.

   14.  Stop Offered Load.

   15.  Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   Results

   The measured IGP convergence time may be influenced by the layer 2
   failure indication time, LSA/LSP delay, LSA/LSP generation time,
   LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
   routing and forwarding tables update time [Po09a].

   Discussion

   Configure IGP timers such that the IGP adjacency does not time out
   before layer 2 failure is detected.

   To measure convergence time, traffic SHOULD start dropping on the
   Preferred Egress Interface at the instant the layer 2 session is
   removed.  Alternatively, the Tester SHOULD record the instant the
   layer 2 session is removed, and traffic loss SHOULD only be measured
   on the Next-Best Egress Interface.
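
   When loss is measured only on the Next-Best Egress Interface, a
   per-route convergence time can be derived from the recorded removal
   instant and the first packet arrivals on that interface.  A sketch
   with hypothetical data structures:

```python
def next_best_convergence_times(event_instant_s, arrivals):
    """arrivals: dict mapping destination prefix -> sorted list of
    packet arrival times (seconds) observed on the Next-Best Egress
    Interface.  Per-route convergence time is the first arrival at or
    after the recorded Convergence Event instant, minus that instant."""
    times = {}
    for prefix, stamps in arrivals.items():
        first = next((t for t in stamps if t >= event_instant_s), None)
        if first is not None:
            times[prefix] = first - event_instant_s
    return times
```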

8.2.2.  Convergence Due to Loss of IGP Adjacency

   Objective

   To obtain the IGP convergence time due to loss of an IGP Adjacency.

   Procedure

   1.   Advertise an IGP topology from Tester to DUT using the topology
        shown in Figure 1.

   2.   Send Offered Load from Tester to DUT on ingress interface.

   3.   Verify traffic is routed over Preferred Egress Interface.

   4.   Remove IGP adjacency from the Preferred Egress Interface, while
        the layer 2 session MUST be maintained.  This is the Convergence
        Event.

   5.   Measure First Route Convergence Time.

   6.   Measure Full Convergence Time.

   7.   Stop Offered Load.

   8.   Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   9.   Wait sufficient time for queues to drain.

   10.  Restart Offered Load.

   11.  Restore IGP session on DUT's Preferred Egress Interface.

   12.  Measure First Route Convergence Time.

   13.  Measure Full Convergence Time.

   14.  Stop Offered Load.

   15.  Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   Results

   The measured IGP Convergence time may be influenced by the IGP Hello
   Interval, IGP Dead Interval, LSA/LSP delay, LSA/LSP generation time,
   LSA/LSP flood packet pacing, SPF delay, SPF execution time, and
   routing and forwarding tables update time [Po09a].

   Discussion

   Configure layer 2 such that layer 2 does not time out before IGP
   adjacency failure is detected.

   To measure convergence time, traffic SHOULD start dropping on the
   Preferred Egress Interface at the instant the IGP adjacency is
   removed.  Alternatively, the Tester SHOULD record the instant the IGP
   adjacency is removed, and traffic loss SHOULD only be measured on the
   Next-Best Egress Interface.


8.2.3.  Convergence Due to Route Withdrawal

   Objective

   To obtain the IGP convergence time due to route withdrawal.

   Procedure

   1.   Advertise an IGP topology from Tester to DUT using the topology
        shown in Figure 1.  The routes that will be withdrawn MUST be a
        set of leaf routes advertised by at least two nodes in the
        emulated topology.  The topology SHOULD be such that before the
        withdrawal the DUT prefers the leaf routes advertised by a node
        "nodeA" via the Preferred Egress Interface, and after the
        withdrawal the DUT prefers the leaf routes advertised by a node
        "nodeB" via the Next-Best Egress Interface.

   2.   Send Offered Load from Tester to DUT on ingress interface.

   3.   Verify traffic is routed over Preferred Egress Interface.

   4.   The Tester withdraws the set of IGP leaf routes from nodeA.  The
        withdrawal update message MUST be a single unfragmented packet.
        This is the Convergence Event.  The Tester MAY record the time
        it sends the withdrawal message(s).

   5.   Measure First Route Convergence Time.

   6.   Measure Full Convergence Time.

   7.   Stop Offered Load.

   8.   Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   9.   Wait sufficient time for queues to drain.

   10.  Restart Offered Load.

   11.  Re-advertise the set of withdrawn IGP leaf routes from nodeA
        emulated by the Tester.  The update message MUST be a single
        unfragmented packet.

   12.  Measure First Route Convergence Time.

   13.  Measure Full Convergence Time.

   14.  Stop Offered Load.

   15.  Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   Results

   The measured IGP convergence time is influenced by SPF or route
   calculation delay, SPF or route calculation execution time, and
   routing and forwarding tables update time [Po09a].

   Discussion

   To measure convergence time, traffic SHOULD start dropping on the
   Preferred Egress Interface at the instant the routes are withdrawn by
   the Tester.  Alternatively, the Tester SHOULD record the instant the
   routes are withdrawn, and traffic loss SHOULD only be measured on the
   Next-Best Egress Interface.
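
   The Route LoC Periods measured in steps 8 and 15 can similarly be
   approximated from per-route receive timestamps.  A sketch assuming a
   single loss period per route, per [Po09t] (names and sampling
   structure are illustrative):

```python
def route_loc_period(rx_times, inter_arrival_s):
    """Approximate the Route Loss of Connectivity Period for one route
    as the largest gap between consecutive received packets, minus the
    nominal inter-arrival time of the Offered Load for that route."""
    gaps = [later - earlier
            for earlier, later in zip(rx_times, rx_times[1:])]
    worst = max(gaps, default=0.0)
    return max(worst - inter_arrival_s, 0.0)
```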

8.3.  Administrative Changes

8.3.1.  Convergence Due to Local Administrative Shutdown

   Objective

   To obtain the IGP convergence time due to taking the DUT's Local
   Interface administratively out of service.

   Procedure

   1.   Advertise an IGP topology from Tester to DUT using the topology
        shown in Figure 1.

   2.   Send Offered Load from Tester to DUT on ingress interface.

   3.   Verify traffic is routed over Preferred Egress Interface.

   4.   Take the DUT's Preferred Egress Interface administratively out
        of service.  This is the Convergence Event.

   5.   Measure First Route Convergence Time.

   6.   Measure Full Convergence Time.

   7.   Stop Offered Load.

   8.   Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   9.   Wait sufficient time for queues to drain.

   10.  Restart Offered Load.

   11.  Restore Preferred Egress Interface by administratively enabling
        the interface.

   12.  Measure First Route Convergence Time.

   13.  Measure Full Convergence Time.

   14.  Stop Offered Load.

   15.  Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   16.  It is possible that no measured packet loss will be observed for
        this test case.

   Results

   The measured IGP convergence time may be influenced by the LSA/LSP
   delay, LSA/LSP generation time, LSA/LSP flood packet pacing, SPF
   delay, SPF execution time, and routing and forwarding tables update
   time [Po09a].

8.3.2.  Convergence Due to Cost Change

   Objective

   To obtain the IGP convergence time due to route cost change.

   Procedure

   1.   Advertise an IGP topology from Tester to DUT using the topology
        shown in Figure 1.

   2.   Send Offered Load from Tester to DUT on ingress interface.

   3.   Verify traffic is routed over Preferred Egress Interface.

   4.   The Tester, emulating the neighbor node, increases the cost for
        all IGP routes at DUT's Preferred Egress Interface so that the
        Next-Best Egress Interface becomes preferred path.  The update
        message advertising the higher cost MUST be a single
        unfragmented packet.  This is the Convergence Event.  The Tester
        MAY record the time it sends the update message advertising the
        higher cost on the Preferred Egress Interface.

   5.   Measure First Route Convergence Time.

   6.   Measure Full Convergence Time.

   7.   Stop Offered Load.

   8.   Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   9.   Wait sufficient time for queues to drain.

   10.  Restart Offered Load.

   11.  The Tester, emulating the neighbor node, decreases the cost for
        all IGP routes at DUT's Preferred Egress Interface so that the
        Preferred Egress Interface becomes preferred path.  The update
        message advertising the lower cost MUST be a single unfragmented
        packet.

   12.  Measure First Route Convergence Time.

   13.  Measure Full Convergence Time.

   14.  Stop Offered Load.

   15.  Measure Route-Specific Convergence Times, Loss-Derived
        Convergence Time, Route LoC Periods, and Loss-Derived LoC
        Period.

   Results

   The measured IGP convergence time may be influenced by the SPF delay,
   SPF execution time, and routing and forwarding tables update time
   [Po09a].


   Discussion

   To measure convergence time, traffic SHOULD start dropping on the
   Preferred Egress Interface at the instant the cost is changed by the
   Tester.  Alternatively, the Tester SHOULD record the instant the cost
   is changed, and traffic loss SHOULD only be measured on the Next-Best
   Egress Interface.

9.  Security Considerations

   Documents of this type do not directly affect the security of the
   Internet or corporate networks as long as benchmarking is not
   performed on devices or systems connected to production networks.
   This document attempts to formalize a set of common methodology for
   benchmarking IGP convergence performance in a lab environment.

10.  IANA Considerations

   This document requires no IANA considerations.

11.  Acknowledgements

   Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David Ward,
   Peter De Vriendt, and the BMWG for their contributions to this work.

12.  Normative References

   [Br91]   Bradner, S., "Benchmarking terminology for network
            interconnection devices", RFC 1242, July 1991.

   [Br97]   Bradner, S., "Key words for use in RFCs to Indicate
            Requirement Levels", BCP 14, RFC 2119, March 1997.

   [Br99]   Bradner, S. and J. McQuaid, "Benchmarking Methodology for
            Network Interconnect Devices", RFC 2544, March 1999.

   [Ca90]   Callon, R., "Use of OSI IS-IS for routing in TCP/IP and dual
            environments", RFC 1195, December 1990.

   [Co08]   Coltun, R., Ferguson, D., Moy, J., and A. Lindem, "OSPF for
            IPv6", RFC 5340, July 2008.

   [Ho08]   Hopps, C., "Routing IPv6 with IS-IS", RFC 5308,
            October 2008.

   [Ko02]   Koodli, R. and R. Ravikanth, "One-way Loss Pattern Sample
            Metrics", RFC 3357, August 2002.

   [Ma98]   Mandeville, R., "Benchmarking Terminology for LAN Switching
            Devices", RFC 2285, February 1998.

   [Mo98]   Moy, J., "OSPF Version 2", STD 54, RFC 2328, April 1998.

   [Po06]   Poretsky, S., Perser, J., Erramilli, S., and S. Khurana,
            "Terminology for Benchmarking Network-layer Traffic Control
            Mechanisms", RFC 4689, October 2006.

   [Po09a]  Poretsky, S., "Considerations for Benchmarking Link-State
            IGP Data Plane Route Convergence",
            draft-ietf-bmwg-igp-dataplane-conv-app-17 (work in
            progress), March 2009.

   [Po09t]  Poretsky, S. and B. Imhoff, "Terminology for Benchmarking
            Link-State IGP Data Plane Route Convergence",
            draft-ietf-bmwg-igp-dataplane-conv-term-18 (work in
            progress), July 2009.

Authors' Addresses

   Scott Poretsky
   Allot Communications
   67 South Bedford Street, Suite 400
   Burlington, MA  01803
   USA

   Phone: + 1 508 309 2179
   Email: sporetsky@allot.com

   Brent Imhoff
   Juniper Networks
   1194 North Mathilda Ave
   Sunnyvale, CA  94089
   USA

   Phone: + 1 314 378 2571
   Email: bimhoff@planetspork.com

   Kris Michielsen
   Cisco Systems
   6A De Kleetlaan
   Diegem, BRABANT  1831
   Belgium

   Email: kmichiel@cisco.com