Benchmarking Working Group                                       M. Kaeo
Internet-Draft                                      Double Shot Security
Expires: September 2, 2008                                  T. Van Herck
                                                           Cisco Systems
                                                              March 2008


               Methodology for Benchmarking IPsec Devices
                     draft-ietf-bmwg-ipsec-meth-03

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on September 2, 2008.

Copyright Notice

   Copyright (C) The IETF Trust (2008).

Abstract

   The purpose of this draft is to describe methodology specific to the
   benchmarking of IPsec IP forwarding devices.  It builds upon the
   tenets set forth in [RFC2544], [RFC2432] and other IETF Benchmarking
   Methodology Working Group (BMWG) efforts.  This document seeks to
   extend these efforts to the IPsec paradigm.

   The BMWG produces two major classes of documents: Benchmarking
   Terminology documents and Benchmarking Methodology documents.  The
   Terminology documents present the benchmarks and other related terms.
   The Methodology documents define the procedures required to collect
   the benchmarks cited in the corresponding Terminology documents.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  4
   2.  Document Scope . . . . . . . . . . . . . . . . . . . . . . . .  4
   3.  Methodology Format . . . . . . . . . . . . . . . . . . . . . .  4
   4.  Key Words to Reflect Requirements  . . . . . . . . . . . . . .  5
   5.  Test Considerations  . . . . . . . . . . . . . . . . . . . . .  5
   6.  Test Topologies  . . . . . . . . . . . . . . . . . . . . . . .  5
   7.  Test Parameters  . . . . . . . . . . . . . . . . . . . . . . .  8
     7.1.  Frame Type . . . . . . . . . . . . . . . . . . . . . . . .  8
       7.1.1.  IP . . . . . . . . . . . . . . . . . . . . . . . . . .  8
       7.1.2.  UDP  . . . . . . . . . . . . . . . . . . . . . . . . .  8
       7.1.3.  TCP  . . . . . . . . . . . . . . . . . . . . . . . . .  8
     7.2.  Frame Sizes  . . . . . . . . . . . . . . . . . . . . . . .  8
     7.3.  Fragmentation and Reassembly . . . . . . . . . . . . . . .  9
     7.4.  Time To Live . . . . . . . . . . . . . . . . . . . . . . . 10
     7.5.  Trial Duration . . . . . . . . . . . . . . . . . . . . . . 10
     7.6.  Security Context Parameters  . . . . . . . . . . . . . . . 10
       7.6.1.  IPsec Transform Sets . . . . . . . . . . . . . . . . . 10
       7.6.2.  IPsec Topologies . . . . . . . . . . . . . . . . . . . 12
       7.6.3.  IKE Keepalives . . . . . . . . . . . . . . . . . . . . 13
       7.6.4.  IKE DH-group . . . . . . . . . . . . . . . . . . . . . 13
       7.6.5.  IKE SA / IPsec SA Lifetime . . . . . . . . . . . . . . 13
       7.6.6.  IPsec Selectors  . . . . . . . . . . . . . . . . . . . 14
       7.6.7.  NAT-Traversal  . . . . . . . . . . . . . . . . . . . . 14
   8.  Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
     8.1.  IPsec Tunnel Capacity  . . . . . . . . . . . . . . . . . . 14
     8.2.  IPsec SA Capacity  . . . . . . . . . . . . . . . . . . . . 15
   9.  Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . 16
     9.1.  Throughput Baseline  . . . . . . . . . . . . . . . . . . . 16
     9.2.  IPsec Throughput . . . . . . . . . . . . . . . . . . . . . 17
     9.3.  IPsec Encryption Throughput  . . . . . . . . . . . . . . . 18
     9.4.  IPsec Decryption Throughput  . . . . . . . . . . . . . . . 19
   10. Latency  . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
     10.1. Latency Baseline . . . . . . . . . . . . . . . . . . . . . 20
     10.2. IPsec Latency  . . . . . . . . . . . . . . . . . . . . . . 21
     10.3. IPsec Encryption Latency . . . . . . . . . . . . . . . . . 22
     10.4. IPsec Decryption Latency . . . . . . . . . . . . . . . . . 23
     10.5. Time To First Packet . . . . . . . . . . . . . . . . . . . 23
   11. Frame Loss Rate  . . . . . . . . . . . . . . . . . . . . . . . 24
     11.1. Frame Loss Baseline  . . . . . . . . . . . . . . . . . . . 24
     11.2. IPsec Frame Loss . . . . . . . . . . . . . . . . . . . . . 25
     11.3. IPsec Encryption Frame Loss  . . . . . . . . . . . . . . . 26
     11.4. IPsec Decryption Frame Loss  . . . . . . . . . . . . . . . 26
     11.5. IKE Phase 2 Rekey Frame Loss . . . . . . . . . . . . . . . 26
   12. IPsec Tunnel Setup Behavior  . . . . . . . . . . . . . . . . . 28
     12.1. IPsec Tunnel Setup Rate  . . . . . . . . . . . . . . . . . 28
     12.2. IKE Phase 1 Setup Rate . . . . . . . . . . . . . . . . . . 29
     12.3. IKE Phase 2 Setup Rate . . . . . . . . . . . . . . . . . . 30
   13. IPsec Rekey Behavior . . . . . . . . . . . . . . . . . . . . . 31
     13.1. IKE Phase 1 Rekey Rate . . . . . . . . . . . . . . . . . . 31
     13.2. IKE Phase 2 Rekey Rate . . . . . . . . . . . . . . . . . . 32
   14. IPsec Tunnel Failover Time . . . . . . . . . . . . . . . . . . 32
   15. DoS Attack Resiliency  . . . . . . . . . . . . . . . . . . . . 34
     15.1. Phase 1 DoS Resiliency Rate  . . . . . . . . . . . . . . . 34
     15.2. Phase 2 Hash Mismatch DoS Resiliency Rate  . . . . . . . . 35
     15.3. Phase 2 Anti Replay Attack DoS Resiliency Rate . . . . . . 36
   16. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 37
   17. References . . . . . . . . . . . . . . . . . . . . . . . . . . 37
     17.1. Normative References . . . . . . . . . . . . . . . . . . . 37
     17.2. Informative References . . . . . . . . . . . . . . . . . . 39
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 39
   Intellectual Property and Copyright Statements . . . . . . . . . . 40

1.  Introduction

   This document defines a specific set of tests that can be used to
   measure and report the performance characteristics of IPsec devices.
   It extends the methodology already defined for benchmarking network
   interconnecting devices in [RFC2544] to IPsec gateways and
   additionally introduces tests which can be used to measure end-host
   IPsec performance.

2.  Document Scope

   The primary focus of this document is to establish a performance
   testing methodology for IPsec devices that support manual keying and
   IKEv1.  A separate document will be written specifically to address
   testing using the updated IKEv2 specification.  Both IPv4 and IPv6
   addressing will be taken into consideration for all relevant test
   methodologies.

   The testing will be constrained to:

   o  Devices acting as IPsec gateways whose tests will pertain to both
      IPsec tunnel and transport mode.

   o  Devices acting as IPsec end-hosts whose tests will pertain to both
      IPsec tunnel and transport mode.

   Note that special considerations will be presented for IPsec end-host
   testing since the tests cannot be conducted without introducing
   additional variables that may cause variations in test results.

   What is specifically out of scope is any testing that pertains to
   considerations involving L2TP [RFC2661], GRE [RFC2784], BGP/MPLS
   VPNs [RFC2547] and anything that does not specifically relate to the
   establishment and tearing down of IPsec tunnels.

3.  Methodology Format

   The Methodology is described in the following format:

   Objective:  The reason for performing the test.

   Topology:  Physical test layout to be used as further clarified in
      Section 6.

   Procedure:  Describes the method used for carrying out the test.

   Reporting Format:  Description of reporting of the test results.

4.  Key Words to Reflect Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.  RFC 2119
   defines the use of these key words to help make the intent of
   standards track documents as clear as possible.  While this document
   uses these keywords, this document is not a standards track document.

5.  Test Considerations

   Before any of the IPsec data plane benchmarking tests are carried
   out, a baseline MUST be established, i.e. the particular test in
   question must first be executed to measure its performance
   characteristics without enabling IPsec.  Once both the baseline
   cleartext performance and the performance using an IPsec enabled
   datapath have been measured, the difference between the two can be
   discerned.
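
   As an informative illustration only (not part of the methodology),
   the baseline comparison above can be sketched as a simple
   computation; the function name is hypothetical:

```python
def ipsec_overhead_pct(baseline: float, ipsec: float) -> float:
    """Express the cost of enabling IPsec as a percentage of the
    cleartext baseline measured for the same test and frame size."""
    if baseline <= 0:
        raise ValueError("baseline measurement must be positive")
    return 100.0 * (baseline - ipsec) / baseline

# Example: 950 Mbit/s cleartext vs. 400 Mbit/s with ESP enabled.
delta = ipsec_overhead_pct(950.0, 400.0)
```

   The same helper applies unchanged to latency or frame loss deltas,
   since each benchmark section below pairs an IPsec result with its
   cleartext baseline.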

   This document explicitly assumes that a logical performance test
   methodology MUST be followed, including the pre-configuration or
   pre-population of routing protocols, ARP caches, IPv6 neighbor
   discovery and all other extraneous IPv4 and IPv6 parameters required
   to pass packets, before the tester is ready to send IPsec protected
   packets.  IPv6 nodes that implement Path MTU Discovery [RFC1981]
   MUST ensure that the PMTUD process has been completed before any of
   the tests are run.

   For every IPsec data plane benchmarking test, the SA database (SADB)
   MUST be created and populated with the appropriate SA's before any
   actual test traffic is sent, i.e. the DUT/SUT MUST have Active
   Tunnels.  This may require manual commands to be executed on the
   DUT/SUT or the sending of appropriate learning frames to the DUT/SUT
   to trigger IKE negotiation.  This is to ensure that none of the
   control plane parameters (such as IPsec Tunnel Setup Rates and IPsec
   Tunnel Rekey Rates) are factored into these tests.

   For control plane benchmarking tests (i.e.  IPsec Tunnel Setup Rate
   and IPsec Tunnel Rekey Rates), the authentication mechanism(s) used
   for the authenticated Diffie-Hellman exchange MUST be reported.

6.  Test Topologies

   The tests can be performed as a DUT or SUT.  When the tests are
   performed as a DUT, the Tester itself must be an IPsec peer.  This
   scenario is shown in Figure 1.  When testing an IPsec Device as a
   DUT, one consideration that needs to be taken into account is that
   the Tester can introduce interoperability issues and skew results,
   potentially limiting the scope of the tests that can be executed.
   On the other hand, this method has the advantage that IPsec client
   side testing can be performed, and that it is able to identify
   abnormalities and asymmetry between the encryption and decryption
   behavior.

                              +------------+
                              |            |
                       +----[D]   Tester   [A]----+
                       |      |            |      |
                       |      +------------+      |
                       |                          |
                       |      +------------+      |
                       |      |            |      |
                       +----[C]    DUT     [B]----+
                              |            |
                              +------------+

                   Figure 1: Device Under Test Topology 1

   The SUT scenario is depicted in Figure 2.  Two identical DUTs are
   used in this test setup, which more accurately simulates the use of
   IPsec gateways.  IPsec SA (i.e.  AH/ESP transport or tunnel mode)
   configurations can be tested using this setup, where the tester is
   only required to send and receive cleartext traffic.

                              +------------+
                              |            |
          +-----------------[F]   Tester   [A]-----------------+
          |                   |            |                   |
          |                   +------------+                   |
          |                                                    |
          |      +------------+            +------------+      |
          |      |            |            |            |      |
          +----[E]    DUTa    [D]--------[C]    DUTb    [B]----+
                 |            |            |            |
                 +------------+            +------------+

                   Figure 2: System Under Test Topology 2

   When an IPsec DUT needs to be tested in a chassis failover topology,
   a second DUT needs to be used as shown in Figure 3.  This is the
   high-availability equivalent of the topology depicted in Figure 1.
   Note that in this topology the Tester MUST be an IPsec peer.

                              +------------+
                              |            |
                  +---------[F]   Tester   [A]---------+
                  |           |            |           |
                  |           +------------+           |
                  |                                    |
                  |           +------------+           |
                  |           |            |           |
                  |    +----[C]    DUTa    [B]----+    |
                  |    |      |            |      |    |
                  |    |      +------------+      |    |
                  +----+                          +----+
                       |      +------------+      |
                       |      |            |      |
                       +----[E]    DUTb    [D]----+
                              |            |
                              +------------+

              Figure 3: Redundant Device Under Test Topology 3

   When no IPsec enabled Tester is available and an IPsec failover
   scenario needs to be tested, the topology as shown in Figure 4 can be
   used.  In this case, either the high availability pair of IPsec
   devices can be used as an Initiator or as a Responder.  The remaining
   chassis will take the opposite role.

                              +------------+
                              |            |
       +--------------------[H]   Tester   [A]----------------+
       |                      |            |                  |
       |                      +------------+                  |
       |                                                      |
       |         +------------+                               |
       |         |            |                               |
       |   +---[E]    DUTa    [D]---+                         |
       |   |     |            |     |      +------------+     |
       |   |     +------------+     |      |            |     |
       +---+                        +----[C]    DUTc    [B]---+
           |     +------------+     |      |            |
           |     |            |     |      +------------+
           +---[G]    DUTb    [F]---+
                 |            |
                 +------------+

              Figure 4: Redundant System Under Test Topology 4

7.  Test Parameters

   For each individual test performed, all of the following parameters
   MUST be explicitly reported in any test results.

7.1.  Frame Type

7.1.1.  IP

   Both IPv4 and IPv6 frames MUST be used.  The basic IPv4 header is 20
   bytes long (which may be increased by the use of an options field).
   The basic IPv6 header is a fixed 40 bytes and uses an extension field
   for additional headers.  Only the basic headers plus the IPsec AH
   and/or ESP headers MUST be present.

   It is RECOMMENDED that IPv4 and IPv6 frames be tested separately to
   ascertain performance parameters for either IPv4 or IPv6 traffic.
   If both IPv4 and IPv6 traffic are to be tested, the device SHOULD be
   pre-configured for a dual-stack environment to handle both traffic
   types.

   IP traffic with L4 protocol set to 'reserved' (255) MUST be used.
   This ensures maximum space for instrumentation data, even with
   framesizes of minimum allowed length on the transport media.

   It is RECOMMENDED that a test payload field is added in the payload
   section of each packet that allows flow identification and
   timestamping of a received packet.
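
   As an informative illustration, such an instrumentation payload
   might be packed as below; the field layout (flow id, transmit
   timestamp, sequence number) is an assumption of this sketch, not a
   requirement of the methodology:

```python
import struct
import time

# Hypothetical instrumentation layout packed at the front of the L4
# payload: flow id (4 bytes), transmit timestamp in ns (8), seq (4).
INSTR_FMT = "!IQI"
INSTR_LEN = struct.calcsize(INSTR_FMT)

def build_payload(flow_id: int, seq: int, size: int) -> bytes:
    """Return a payload of `size` bytes carrying instrumentation data,
    zero-padded so any frame size down to the minimum can carry it."""
    if size < INSTR_LEN:
        raise ValueError("payload too small for instrumentation data")
    instr = struct.pack(INSTR_FMT, flow_id, time.time_ns(), seq)
    return instr + b"\x00" * (size - INSTR_LEN)

def parse_payload(payload: bytes):
    """Recover (flow_id, timestamp_ns, seq) from a received payload."""
    return struct.unpack(INSTR_FMT, payload[:INSTR_LEN])
```

   On receipt, the parsed timestamp yields per-packet latency and the
   flow id/sequence pair yields per-tunnel loss accounting.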

7.1.2.  UDP

   It is also RECOMMENDED that the test is executed using UDP as the L4
   protocol.  When using UDP, instrumentation data SHOULD be present in
   the payload of the packet.  It is OPTIONAL to have application
   payload.

7.1.3.  TCP

   It is OPTIONAL to perform the tests with TCP as the L4 protocol,
   but in case this is considered, the TCP traffic is RECOMMENDED to be
   stateful.  With TCP as the L4 header it is possible that there will
   not be enough room to add all instrumentation data needed to
   identify the packets within the DUT/SUT.

7.2.  Frame Sizes

   Each test MUST be run with different frame sizes.  It is RECOMMENDED
   to use the following cleartext layer 2 frame sizes for IPv4 tests
   over Ethernet media: 64, 128, 256, 512, 1024, 1280, and 1518 bytes,
   per RFC2544 section 9 [RFC2544].  The four CRC bytes are included in
   the frame size specified.

   For GigabitEthernet supporting jumboframes, the cleartext layer 2
   framesizes used are 64, 128, 256, 512, 1024, 1280, 1518, 2048, 3072,
   4096, 5120, 6144, 7168, 8192 and 9234 bytes.

   For SONET these are: 47, 67, 128, 256, 512, 1024, 1280, 1518, 2048
   and 4096 bytes.

   To accommodate IEEE 802.1q and IEEE 802.3as it is RECOMMENDED to
   respectively include 1522 and 2000 byte framesizes in all tests.

   Since IPv6 requires that every link have an MTU of 1280 octets or
   greater, tests MUST be executed with cleartext layer 2 frame sizes
   that include 1280 and 1518 bytes.  It is RECOMMENDED that additional
   frame sizes are included in the IPv6 test execution, including the
   maximum supported datagram size for the linktype used.

7.3.  Fragmentation and Reassembly

   IPsec devices can and must fragment packets in specific scenarios.
   Depending on whether the fragmentation is performed in software or
   using specialized custom hardware, there may be a significant impact
   on performance.

   In IPv4, unless the DF (don't fragment) bit is set by the packet
   source, the sender cannot guarantee that some intermediary device on
   the way will not fragment an IPsec packet.  For transport mode IPsec,
   the peers must be able to fragment and reassemble IPsec packets.
   Reassembly of fragmented packets is especially important if an IPv4
   port selector (or IPv6 transport protocol selector) is configured.
   For tunnel mode IPsec, it is not a requirement.  Note that
   fragmentation is handled differently in IPv6 than in IPv4.  In IPv6
   networks, fragmentation is no longer done by intermediate routers in
   the networks, but by the source node that originates the packet.  The
   path MTU discovery (PMTUD) mechanism is recommended for every IPv6
   node to avoid fragmentation.

   Packets generated by hosts that do not support PMTUD, and have not
   set the DF bit in the IP header, will undergo fragmentation before
   IPsec encapsulation.  Packets generated by hosts that do support
   PMTUD will use it locally to match the statically configured MTU on
   the tunnel.  If the MTU on the tunnel is set manually, it must be
   set low enough to allow packets to pass through the smallest link on
   the path.  Otherwise, packets that are too large to fit will be
   dropped.

   Fragmentation can occur due to encryption overhead and is closely
   linked to the choice of transform used.  Since each test SHOULD be
   run with the maximum cleartext frame size (as per the previous
   section), fragmentation will occur when the encryption overhead
   causes the maximum frame size to be exceeded.  All tests MUST be run
   with the DF bit not set.  It is also RECOMMENDED that all tests be
   run with the DF bit set.

   Note that some implementations predetermine the encapsulated packet
   size from information available in transform sets, which are
   configured as part of the IPsec security association (SA).  If it is
   predetermined that the packet will exceed the MTU of the output
   interface, the packet is fragmented before encryption.  This
   optimization may favorably impact performance and vendors SHOULD
   report whether any such optimization is configured.
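
   The predetermination described above can be sketched as follows for
   one transform only (tunnel mode ESP with 3DES-CBC and HMAC-SHA1-96);
   the function names are illustrative and the fixed overhead constants
   are assumptions tied to that transform:

```python
def esp_tunnel_size(cleartext_len: int,
                    iv_len: int = 8,     # 3DES-CBC IV
                    block: int = 8,      # 3DES cipher block size
                    icv_len: int = 12,   # HMAC-SHA1-96 ICV
                    outer_ip: int = 20) -> int:
    """Predetermine the tunnel-mode ESP packet size for an IPv4
    cleartext packet of `cleartext_len` bytes (IP header included).
    The ESP trailer (pad, pad-length, next-header) rounds the
    ciphertext up to a multiple of the cipher block size."""
    padded = -(-(cleartext_len + 2) // block) * block
    return outer_ip + 8 + iv_len + padded + icv_len  # 8 = SPI + seq

def fragment_before_encryption(cleartext_len: int, mtu: int) -> bool:
    """True when the predetermined size exceeds the output MTU, i.e.
    when the optimization above would fragment before encryption."""
    return esp_tunnel_size(cleartext_len) > mtu
```

   For example, a 1500 byte cleartext packet encapsulates to 1552 bytes
   under this transform, so it would be fragmented before encryption on
   a 1500 byte MTU interface.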

7.4.  Time To Live

   The source frames SHOULD have a TTL value large enough to
   accommodate the DUT/SUT.  A minimum TTL of 64 is RECOMMENDED.

7.5.  Trial Duration

   The duration of the test portion of each trial SHOULD be at least 60
   seconds.  In the case of IPsec tunnel rekeying tests, the test
   duration MUST be at least two times the IPsec tunnel rekey time to
   ensure a reasonable worst case scenario test.
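
   The duration rule reduces to one line; the function name is an
   illustrative assumption:

```python
def trial_duration(rekey_interval: float = None,
                   minimum: float = 60.0) -> float:
    """Trial duration in seconds: at least 60 seconds, and for rekey
    tests at least twice the IPsec tunnel rekey time."""
    if rekey_interval is None:
        return minimum
    return max(minimum, 2.0 * rekey_interval)
```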

7.6.  Security Context Parameters

   All of the security context parameters listed in section 7.13 of the
   IPsec Benchmarking Terminology document MUST be reported.  When
   merely discussing the behavior of traffic flows through IPsec
   devices, an IPsec context MUST be provided.  In the cases where IKE
   is configured (as opposed to using manually keyed tunnels), both an
   IPsec and an IKE context MUST be provided.  Additional
   considerations for reporting security context parameters are
   detailed below.

7.6.1.  IPsec Transform Sets

   All tests should be performed with different IPsec transform set
   combinations.  An IPsec transform specifies a single IPsec security
   protocol (either AH or ESP) with its corresponding security
   algorithms and mode.  A transform set is a combination of individual
   IPsec transforms designed to enact a specific security policy for
   protecting a particular traffic flow.  At minimum, the transform set
   must include one AH algorithm and a mode or one ESP algorithm and a
   mode.

   +-------------+------------------+----------------------+-----------+
   |     ESP     |    Encryption    |    Authentication    |    Mode   |
   |  Transform  |     Algorithm    |       Algorithm      |           |
   +-------------+------------------+----------------------+-----------+
   |      1      |       NULL       |     HMAC-SHA1-96     | Transport |
   |      2      |       NULL       |     HMAC-SHA1-96     |   Tunnel  |
   |      3      |     3DES-CBC     |     HMAC-SHA1-96     | Transport |
   |      4      |     3DES-CBC     |     HMAC-SHA1-96     |   Tunnel  |
   |      5      |    AES-CBC-128   |     HMAC-SHA1-96     | Transport |
   |      6      |    AES-CBC-128   |     HMAC-SHA1-96     |   Tunnel  |
   |      7      |       NULL       |    AES-XCBC-MAC-96   | Transport |
   |      8      |       NULL       |    AES-XCBC-MAC-96   |   Tunnel  |
   |      9      |     3DES-CBC     |    AES-XCBC-MAC-96   | Transport |
   |      10     |     3DES-CBC     |    AES-XCBC-MAC-96   |   Tunnel  |
   |      11     |    AES-CBC-128   |    AES-XCBC-MAC-96   | Transport |
   |      12     |    AES-CBC-128   |    AES-XCBC-MAC-96   |   Tunnel  |
   +-------------+------------------+----------------------+-----------+

                                  Table 1

   Testing of ESP Transforms 1-4 MUST be supported.  Testing of ESP
   Transforms 5-12 SHOULD be supported.

          +--------------+--------------------------+-----------+
          | AH Transform | Authentication Algorithm |    Mode   |
          +--------------+--------------------------+-----------+
          |       1      |       HMAC-SHA1-96       | Transport |
          |       2      |       HMAC-SHA1-96       |   Tunnel  |
          |       3      |      AES-XCBC-MAC-96     | Transport |
          |       4      |      AES-XCBC-MAC-96     |   Tunnel  |
          +--------------+--------------------------+-----------+

                                  Table 2

   Testing of AH Transforms 1 and 2 MUST be supported.  Testing of AH
   Transforms 3 and 4 SHOULD be supported.

   Note that these tables are derived from the Cryptographic
   Algorithms for AH and ESP requirements as described in [RFC4305].
   Optionally, other AH and/or ESP transforms MAY be supported.

                   +-----------------------+----+-----+
                   | Transform Combination | AH | ESP |
                   +-----------------------+----+-----+
                   |           1           |  1 |  1  |
                   |           2           |  2 |  2  |
                   |           3           |  1 |  3  |
                   |           4           |  2 |  4  |
                   +-----------------------+----+-----+

                                  Table 3

   It is RECOMMENDED that the transforms shown in Table 3 be supported
   for IPv6 traffic selectors since AH may be used with ESP in these
   environments.  Since AH will provide the overall authentication and
   integrity, the ESP Authentication algorithm MUST be Null for these
   tests.  Optionally, other combined AH/ESP transform sets MAY be
   supported.
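
   As an informative sketch, Table 1 and its requirement levels can be
   expressed as data for a harness; the names and layout below are
   assumptions, not part of the methodology:

```python
MODES = ("Transport", "Tunnel")
ESP_ENC = ("NULL", "3DES-CBC", "AES-CBC-128")
ESP_AUTH = ("HMAC-SHA1-96", "AES-XCBC-MAC-96")

# Rebuild Table 1: ESP transforms 1-12 in the order shown above
# (authentication algorithm outermost, then encryption, then mode).
ESP_TRANSFORMS = [(enc, auth, mode)
                  for auth in ESP_AUTH
                  for enc in ESP_ENC
                  for mode in MODES]

def requirement_level(index: int) -> str:
    """ESP transforms 1-4 MUST be testable, 5-12 SHOULD (1-based)."""
    return "MUST" if index <= 4 else "SHOULD"
```

   Iterating the full list (and, analogously, the AH and combined
   AH/ESP tables) makes the MUST/SHOULD coverage explicit in reports.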

7.6.2.  IPsec Topologies

   All tests should be done with various IPsec topology configurations
   and the IPsec topology used MUST be reported.  Since IPv6 requires
   the implementation of manual keys for IPsec, both manual keying and
   IKE configurations MUST be tested.

   For manual keying tests, the IPsec SA's used should vary from 1 to
   101, increasing in increments of 50.  Although it is not expected
   that manual keying (i.e. manually configuring the IPsec SA) will be
   deployed in any operational setting with the exception of very small
   controlled environments (i.e. less than 10 nodes), it is prudent to
   test for potentially larger scale deployments.

   For IKE specific tests, the following IPsec topologies MUST be
   tested:

   o  1 IKE SA & 2 IPsec SA's (i.e. 1 IPsec Tunnel)

   o  1 IKE SA & {max} IPsec SA's

   o  {max} IKE SA's & {max} IPsec SA's

   It is RECOMMENDED to also test with the following IPsec topologies in
   order to gain more datapoints:

   o  {max/2} IKE SA's & {(max/2) IKE SA's} IPsec SA's

   o  {max} IKE SA's & {(max) IKE SA's} IPsec SA's
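
   The MUST-test combinations above, and the manual keying SA counts
   from this section, can be enumerated as follows; function names are
   illustrative and `max_ike`/`max_ipsec` stand for the device's
   advertised maxima:

```python
def ike_test_topologies(max_ike: int, max_ipsec: int):
    """The MUST-test combinations as (IKE SA count, IPsec SA count)
    pairs; one IPsec Tunnel is a pair of IPsec SA's."""
    return [
        (1, 2),                # 1 IKE SA & 1 IPsec Tunnel
        (1, max_ipsec),        # 1 IKE SA & {max} IPsec SA's
        (max_ike, max_ipsec),  # {max} IKE SA's & {max} IPsec SA's
    ]

def manual_keying_sa_counts(upper: int = 101, step: int = 50):
    """Manual keying tests vary the IPsec SA count from 1 to 101 in
    increments of 50."""
    return list(range(1, upper + 1, step))
```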

7.6.3.  IKE Keepalives

   IKE keepalives track reachability of peers by sending hello packets
   between peers.  During the typical life of an IKE Phase 1 SA,
   packets are only exchanged over this IKE Phase 1 SA when an IKE
   Quick Mode (QM) negotiation is required at the expiration of the
   IPsec Tunnel SA's.  There is no standards-based mechanism for either
   type of SA to detect the loss of a peer, except when the QM
   negotiation fails.  Most IPsec implementations use the Dead Peer
   Detection (i.e.  Keepalive) mechanism to determine whether
   connectivity has been lost with a peer before the expiration of the
   IPsec Tunnel SA's.

   All tests using IKEv1 MUST use the same IKE keepalive parameters.

7.6.4.  IKE DH-group

   There are 3 Diffie-Hellman groups which can be supported by IPsec
   standards compliant devices:

   o  DH-group 1: 768 bits

   o  DH-group 2: 1024 bits

   o  DH-group 14: 2048 bits

   DH-group 2 MUST be tested, to support the new IKEv1 algorithm
   requirements listed in [RFC4109].  It is RECOMMENDED that the same
   DH-group be used for both IKE Phase 1 and IKE Phase 2.  All test
   methodologies using IKE MUST report which DH-group was configured to
   be used for IKE Phase 1 and IKE Phase 2 negotiations.

7.6.5.  IKE SA / IPsec SA Lifetime

   An IKE SA or IPsec SA is retained by each peer until the Tunnel
   lifetime expires.  IKE SA's and IPsec SA's have individual lifetime
   parameters.  In many real-world environments, the IPsec SA's will be
   configured with shorter lifetimes than that of the IKE SA's.  This
   will force a rekey to happen more often for IPsec SA's.

   When the initiator begins an IKE negotiation between itself and a
   remote peer (the responder), an IKE policy can be selected only if
   the lifetime of the responder's policy is shorter than or equal to
   the lifetime of the initiator's policy.  If the lifetimes are not the
   same, the shorter lifetime will be used.

   To avoid any incompatibilities in data plane benchmark testing, all
   devices MUST have the same IKE SA lifetime as well as an identical
   IPsec SA lifetime configured.  Both SHALL be configured to a time
   which exceeds the test duration timeframe and the total number of
   bytes to be transmitted during the test.

   Note that the IPsec SA lifetime MUST be equal to or less than the IKE
   SA lifetime.  Both the IKE SA lifetime and the IPsec SA lifetime used
   MUST be reported.  This parameter SHOULD be variable when testing IKE
   rekeying performance.
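
   These lifetime constraints can be checked mechanically before a data
   plane trial starts; the function name is an illustrative assumption:

```python
def validate_lifetimes(ike_lifetime: int, ipsec_lifetime: int,
                       trial_duration: int) -> None:
    """Enforce the rules above: the IPsec SA lifetime MUST be equal to
    or less than the IKE SA lifetime, and both must exceed the trial
    duration so no rekey event occurs mid-test."""
    if ipsec_lifetime > ike_lifetime:
        raise ValueError("IPsec SA lifetime exceeds IKE SA lifetime")
    if min(ike_lifetime, ipsec_lifetime) <= trial_duration:
        raise ValueError("SA lifetimes must exceed the trial duration")
```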

7.6.6.  IPsec Selectors

   All tests MUST be performed using standard IPsec selectors as
   described in [RFC2401] section 4.4.2.

7.6.7.  NAT-Traversal

   For any tests that include network address translation
   considerations, the use of NAT-T in the test environment MUST be
   recorded.

8.  Capacity

8.1.  IPsec Tunnel Capacity

   Objective:  Measure the maximum number of IPsec Tunnels or Active
      Tunnels that can be sustained on an IPsec Device.

   Topology:  If no IPsec aware tester is available the test MUST be
      conducted using a System Under Test Topology as depicted in
      Figure 2.  When an IPsec aware tester is available the test MUST
      be executed using a Device Under Test Topology as depicted in
      Figure 1.

   Procedure:  The IPsec Device under test initially MUST NOT have any
      Active IPsec Tunnels.  The Initiator (either a tester or an IPsec
      peer) will start the negotiation of an IPsec Tunnel (a single
      Phase 1 SA and a pair Phase 2 SA's).

      After it is detected that the tunnel is established, a limited
      number (50 packets RECOMMENDED) SHALL be sent through the tunnel.
      If all packets are received by the Responder (i.e. the DUT), a new
      IPsec Tunnel may be attempted.

      This process will be repeated until no more IPsec Tunnels can be
      established.

      At the end of the test, a traffic pattern is sent to the initiator
      that will be distributed over all Established Tunnels, where each
      tunnel will need to propagate a fixed number of packets at a
      minimum rate of e.g. 5 pps.  The aggregate rate of all Active
      Tunnels SHALL NOT exceed the IPsec Throughput.  When all packets
      sent by the Initiator are being received by the Responder, the test
      has successfully determined the IPsec Tunnel Capacity.  If however this
      final check fails, the test needs to be re-executed with a lower
      number of Active IPsec Tunnels.  There MAY be a need to enforce a
      lower number of Active IPsec Tunnels i.e. an upper limit of Active
      IPsec Tunnels SHOULD be defined in the test.

      During the entire duration of the test rekeying of Tunnels SHALL
      NOT be permitted.  If a rekey event occurs, the test is invalid
      and MUST be restarted.
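      As an informative illustration, the iterative portion of this
      procedure can be sketched as follows; "establish_tunnel" and
      "send_and_verify" are hypothetical callbacks standing in for the
      tester's control of the Initiator, and the final aggregate-traffic
      verification step is omitted:

```python
def tunnel_capacity(establish_tunnel, send_and_verify, probe_packets=50):
    """Sketch of the IPsec Tunnel Capacity procedure: bring tunnels up
    one at a time, verifying each new tunnel with a short burst of
    packets, until no further tunnel can be established.

    establish_tunnel(n) -> bool: attempt to negotiate the n-th tunnel.
    send_and_verify(n, count) -> bool: send `count` packets through
    tunnel n and report whether the Responder received them all.
    """
    tunnels = 0
    while establish_tunnel(tunnels + 1):
        # Verify the newly established tunnel with a limited number of
        # packets (50 RECOMMENDED).
        if not send_and_verify(tunnels + 1, probe_packets):
            break
        tunnels += 1
    return tunnels
```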

   Reporting Format:  The reporting format SHOULD reflect the maximum
      number of IPsec Tunnels that can be established when all packets
      sent by the initiator are received by the responder.  In addition
      the Security Context parameters defined in Section 7.6 and
      utilized for this test MUST be included in any statement of
      capacity.

8.2.  IPsec SA Capacity

   Objective:  Measure the maximum number of IPsec SA's that can be
      sustained on an IPsec Device.

   Topology  If no IPsec aware tester is available the test MUST be
      conducted using a System Under Test Topology as depicted in
      Figure 2.  When an IPsec aware tester is available the test MUST
      be executed using a Device Under Test Topology as depicted in
      Figure 1.

   Procedure:  The IPsec Device under test initially MUST NOT have any
      Active IPsec Tunnels.  The Initiator (either a tester or an IPsec
      peer) will start the negotiation of an IPsec Tunnel (a single
      Phase 1 SA and a pair Phase 2 SA's).

      After it is detected that the tunnel is established, a limited
      number (50 packets RECOMMENDED) SHALL be sent through the tunnel.
      If all packets are received by the Responder (i.e. the DUT), a new
      pair of IPsec SA's may be attempted.  This will be achieved by
      offering a specific traffic pattern to the Initiator that matches
      a given selector and therefore triggering the negotiation of a new
      pair of IPsec SA's.

      This process will be repeated until no more IPsec SA's can be
      established.
      established.

      At the end of the test, a traffic pattern is sent to the initiator
      that will be distributed over all IPsec SA's, where each SA will
      need to propagate a fixed number of packets at a minimum rate of 5
      pps.  When all packets sent by the Initiator are being received by
      the Responder, the test has successfully determined the IPsec SA
      Capacity.  If however this final check fails, the test needs to be
      re-executed with a lower number of IPsec SA's.  There MAY be a
      need to enforce a lower number of IPsec SA's i.e. an upper limit of
      IPsec SA's SHOULD be defined in the test.

      During the entire duration of the test rekeying of Tunnels SHALL
      NOT be permitted.  If a rekey event occurs, the test is invalid
      and MUST be restarted.

   Reporting Format:  The reporting format SHOULD be the same as listed
      in Section 8.1 for the maximum number of IPsec SAs.

9.  Throughput

   This section contains the description of the tests that are related
   to the characterization of the packet forwarding of a DUT/SUT in an
   IPsec environment.  Some metrics extend the concept of throughput
   presented in [RFC1242].  The notion of Forwarding Rate is cited in
   [RFC2285].

   A separate test SHOULD be performed for Throughput tests using IPv4/
   UDP, IPv6/UDP, IPv4/TCP and IPv6/TCP traffic.

9.1.  Throughput Baseline

   Objective:  Measure the intrinsic cleartext throughput of a device
      without the use of IPsec.  The throughput baseline methodology and
      reporting format is derived from [RFC2544].

   Topology  If no IPsec aware tester is available the test MUST be
      conducted using a System Under Test Topology as depicted in
      Figure 2.  When an IPsec aware tester is available the test MUST
      be executed using a Device Under Test Topology as depicted in
      Figure 1.

   Procedure:  Send a specific number of frames that matches the IPsec
      SA selector(s) to be tested at a specific rate through the DUT and
      then count the frames that are transmitted by the DUT.  If the
      count of offered frames is equal to the count of received frames,
      the rate of the offered stream is increased and the test is rerun.
      If fewer frames are received than were transmitted, the rate of
      the offered stream is reduced and the test is rerun.

      The throughput is the fastest rate at which the count of test
      frames transmitted by the DUT is equal to the number of test
      frames sent to it by the test equipment.
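      As an informative illustration, one common way to realize this
      increase/decrease procedure is a binary search on the offered
      rate.  A minimal Python sketch, where "trial" is a hypothetical
      callback that runs one trial and reports whether all offered
      frames were received back from the DUT:

```python
def throughput_search(trial, low=0.0, high=100.0, resolution=0.1):
    """Binary search for the highest offered rate (as a percentage of
    the media rate) at which no frames are lost.

    trial(rate) -> bool: True when the count of frames transmitted by
    the DUT equals the count of frames offered at `rate`.
    """
    best = 0.0
    while high - low > resolution:
        rate = (low + high) / 2.0
        if trial(rate):
            best, low = rate, rate   # no loss: try a higher rate
        else:
            high = rate              # loss observed: back off
    return best
```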

   Reporting Format:  The results of the throughput test SHOULD be
      reported in the form of a graph.  If it is, the x coordinate
      SHOULD be the frame size, the y coordinate SHOULD be the frame
      rate.  There SHOULD be at least two lines on the graph.  There
      SHOULD be one line showing the theoretical frame rate for the
      media at the various frame sizes.  The second line SHOULD be the
      plot of the test results.  Additional lines MAY be used on the
      graph to report the results for each type of data stream tested.
      Text accompanying the graph SHOULD indicate the protocol, data
      stream format, and type of media used in the tests.

      We assume that if a single value is desired for advertising
      purposes the vendor will select the rate for the minimum frame
      size for the media.  If this is done then the figure MUST be
      expressed in packets per second.  The rate MAY also be expressed
      in bits (or bytes) per second if the vendor so desires.  The
      statement of performance MUST include:

      *  Measured maximum frame rate

      *  Size of the frame used

      *  Theoretical limit of the media for that frame size

      *  Type of protocol used in the test

      Even if a single value is used as part of the advertising copy,
      the full table of results SHOULD be included in the product data
      sheet.

9.2.  IPsec Throughput

   Objective:  Measure the intrinsic throughput of a device utilizing
      IPsec.

   Topology  If no IPsec aware tester is available the test MUST be
      conducted using a System Under Test Topology as depicted in
      Figure 2.  When an IPsec aware tester is available the test MUST
      be executed using a Device Under Test Topology as depicted in
      Figure 1.

   Procedure:  Send a specific number of cleartext frames that match the
      IPsec SA selector(s) at a specific rate through the DUT/SUT.  DUTa
      will encrypt the traffic and forward to DUTb which will in turn
      decrypt the traffic and forward to the testing device.  The
      testing device counts the frames that are transmitted by the DUTb.
      If the count of offered frames is equal to the count of received
      frames, the rate of the offered stream is increased and the test
      is rerun.  If fewer frames are received than were transmitted, the
      rate of the offered stream is reduced and the test is rerun.

      The IPsec Throughput is the fastest rate at which the count of
      test frames transmitted by the DUT/SUT is equal to the number of
      test frames sent to it by the test equipment.

      For tests using multiple IPsec SA's, the test traffic associated
      with the individual traffic selectors defined for each IPsec SA
      MUST be sent in a round robin type fashion to keep the test
      balanced so as not to overload any single IPsec SA.

   Reporting format:  The reporting format SHALL be the same as listed
      in Section 9.1 with the additional requirement that the Security
      Context Parameters, as defined in Section 7.6, utilized for this
      test MUST be included in any statement of performance.

9.3.  IPsec Encryption Throughput

   Objective:  Measure the intrinsic DUT vendor specific IPsec
      Encryption Throughput.

   Topology  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  Send a specific number of cleartext frames that match the
      IPsec SA selector(s) at a specific rate to the DUT.  The DUT will
      receive the cleartext frames, perform IPsec operations and then
      send the IPsec protected frame to the tester.  Upon receipt of the
      encrypted packet, the testing device will timestamp the packet(s)
      and record the result.  If the count of offered frames is equal to
      the count of received frames, the rate of the offered stream is
      increased and the test is rerun.  If fewer frames are received
      than were transmitted, the rate of the offered stream is reduced
      and the test is rerun.

      The IPsec Encryption Throughput is the fastest rate at which the
      count of test frames transmitted by the DUT is equal to the number
      of test frames sent to it by the test equipment.

      For tests using multiple IPsec SA's, the test traffic associated
      with the individual traffic selectors defined for each IPsec SA
      MUST be sent in a round robin type fashion to keep the test
      balanced so as not to overload any single IPsec SA.

   Reporting format:  The reporting format SHALL be the same as listed
      in Section 9.1 with the additional requirement that the Security
      Context Parameters, as defined in Section 7.6, utilized for this
      test MUST be included in any statement of performance.

9.4.  IPsec Decryption Throughput

   Objective:  Measure the intrinsic DUT vendor specific IPsec
      Decryption Throughput.

   Topology  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  Send a specific number of IPsec protected frames that
      match the IPsec SA selector(s) at a specific rate to the DUT.  The
      DUT will receive the IPsec protected frames, perform IPsec
      operations and then send the cleartext frame to the tester.  Upon
      receipt of the cleartext packet, the testing device will timestamp
      the packet(s) and record the result.  If the count of offered
      frames is equal to the count of received frames, the rate of the
      offered stream is increased and the test is rerun.  If fewer
      frames are received than were transmitted, the rate of the offered
      stream is reduced and the test is rerun.

      The IPsec Decryption Throughput is the fastest rate at which the
      count of test frames transmitted by the DUT is equal to the number
      of test frames sent to it by the test equipment.

      For tests using multiple IPsec SA's, the test traffic associated
      with the individual traffic selectors defined for each IPsec SA
      MUST be sent in a round robin type fashion to keep the test
      balanced so as not to overload any single IPsec SA.

   Reporting format:  The reporting format SHALL be the same as listed
      in Section 9.1 with the additional requirement that the Security
      Context Parameters, as defined in Section 7.6, utilized for this
      test MUST be included in any statement of performance.

10.  Latency

   This section presents methodologies relating to the characterization
   of the forwarding latency of a DUT/SUT.  It extends the concept of
   latency characterization presented in [RFC2544] to an IPsec
   environment.

   A separate test SHOULD be performed for latency tests using IPv4/
   UDP, IPv6/UDP, IPv4/TCP and IPv6/TCP traffic.

   In order to lessen the effect of packet buffering in the DUT/SUT, the
   latency tests MUST be run at the measured IPsec throughput level of
   the DUT/SUT; IPsec latency at other offered loads is optional.

   Lastly, [RFC1242] and [RFC2544] draw distinction between two classes
   of devices: "store and forward" and "bit-forwarding".  Each class
   impacts how latency is collected and subsequently presented.  See the
   related RFC's for more information.  In practice, much of the test
   equipment will collect the latency measurement for one class or the
   other, and, if needed, mathematically derive the reported value by
   the addition or subtraction of values accounting for medium
   propagation delay of the packet, bit times to the timestamp trigger
   within the packet, etc.  Test equipment vendors SHOULD provide
   documentation regarding the composition and calculation of latency
   values being reported.  The user of this data SHOULD understand the
   nature of the latency values being reported, especially when
   comparing results collected from multiple test vendors.  (E.g., If
   test vendor A presents a "store and forward" latency result and test
   vendor B presents a "bit-forwarding" latency result, the user may
   erroneously conclude the DUT has two differing sets of latency
   values.).

10.1.  Latency Baseline

   Objective:  Measure the intrinsic latency (min/avg/max) introduced by
      a device without the use of IPsec.

   Topology  If no IPsec aware tester is available the test MUST be
      conducted using a System Under Test Topology as depicted in
      Figure 2.  When an IPsec aware tester is available the test MUST
      be executed using a Device Under Test Topology as depicted in
      Figure 1.

   Procedure:  First determine the throughput for the DUT/SUT at each of
      the listed frame sizes.  Send a stream of frames at a particular
      frame size through the DUT at the determined throughput rate using
      frames that match the IPsec SA selector(s) to be tested.  The
      stream SHOULD be at least 120 seconds in duration.  An identifying
      tag SHOULD be included in one frame after 60 seconds with the type
      of tag being implementation dependent.  The time at which this
      frame is fully transmitted is recorded (timestamp A).  The
      receiver logic in the test equipment MUST recognize the tag
      information in the frame stream and record the time at which the
      tagged frame was received (timestamp B).

      The latency is timestamp B minus timestamp A as per the relevant
      definition from RFC 1242, namely latency as defined for store and
      forward devices or latency as defined for bit forwarding devices.

      The test MUST be repeated at least 20 times with the reported
      value being the average of the recorded values.
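      As an informative illustration, the per-trial calculation and the
      averaging over at least 20 trials can be sketched in Python
      (names hypothetical):

```python
def average_latency(trials):
    """Average latency over repeated trials.

    trials: list of (timestamp_a, timestamp_b) pairs, where A is the
    time the tagged frame was fully transmitted and B the time the
    tag was recognized by the receiver; the methodology calls for at
    least 20 trials.
    """
    if len(trials) < 20:
        raise ValueError("the test MUST be repeated at least 20 times")
    return sum(b - a for a, b in trials) / len(trials)
```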

   Reporting Format  The report MUST state which definition of latency
      (from [RFC1242]) was used for this test.  The latency results
      SHOULD be reported in the format of a table with a row for each of
      the tested frame sizes.  There SHOULD be columns for the frame
      size, the rate at which the latency test was run for that frame
      size, for the media types tested, and for the resultant latency
      values for each type of data stream tested.

10.2.  IPsec Latency

   Objective:  Measure the intrinsic IPsec Latency (min/avg/max)
      introduced by a device when using IPsec.

   Topology  If no IPsec aware tester is available the test MUST be
      conducted using a System Under Test Topology as depicted in
      Figure 2.  When an IPsec aware tester is available the test MUST
      be executed using a Device Under Test Topology as depicted in
      Figure 1.

   Procedure:  First determine the throughput for the DUT/SUT at each of
      the listed frame sizes.  Send a stream of cleartext frames at a
      particular frame size through the DUT/SUT at the determined
      throughput rate using frames that match the IPsec SA selector(s)
      to be tested.  DUTa will encrypt the traffic and forward to DUTb
      which will in turn decrypt the traffic and forward to the testing
      device.

      The stream SHOULD be at least 120 seconds in duration.  An
      identifying tag SHOULD be included in one frame after 60 seconds
      with the type of tag being implementation dependent.  The time at
      which this frame is fully transmitted is recorded (timestamp A).
      The receiver logic in the test equipment MUST recognize the tag
      information in the frame stream and record the time at which the
      tagged frame was received (timestamp B).

      The IPsec Latency is timestamp B minus timestamp A as per the
      relevant definition from [RFC1242], namely latency as defined for
      store and forward devices or latency as defined for bit forwarding
      devices.

      The test MUST be repeated at least 20 times with the reported
      value being the average of the recorded values.

   Reporting format:  The reporting format SHALL be the same as listed
      in Section 10.1 with the additional requirement that the Security
      Context Parameters, as defined in Section 7.6, utilized for this
      test MUST be included in any statement of performance.

10.3.  IPsec Encryption Latency

   Objective:  Measure the DUT vendor specific IPsec Encryption Latency
      for IPsec protected traffic.

   Topology  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  Send a stream of cleartext frames at a particular frame
      size through the DUT/SUT at the determined throughput rate using
      frames that match the IPsec SA selector(s) to be tested.

      The stream SHOULD be at least 120 seconds in duration.  An
      identifying tag SHOULD be included in one frame after 60 seconds
      with the type of tag being implementation dependent.  The time at
      which this frame is fully transmitted is recorded (timestamp A).
      The DUT will receive the cleartext frames, perform IPsec
      operations and then send the IPsec protected frames to the tester.
      Upon receipt of the encrypted frames, the receiver logic in the
      test equipment MUST recognize the tag information in the frame
      stream and record the time at which the tagged frame was received
      (timestamp B).

      The IPsec Encryption Latency is timestamp B minus timestamp A as
      per the relevant definition from [RFC1242], namely latency as
      defined for store and forward devices or latency as defined for
      bit forwarding devices.

      The test MUST be repeated at least 20 times with the reported
      value being the average of the recorded values.

   Reporting format:  The reporting format SHALL be the same as listed
      in Section 10.1 with the additional requirement that the Security
      Context Parameters, as defined in Section 7.6, utilized for this
      test MUST be included in any statement of performance.

10.4.  IPsec Decryption Latency

   Objective:  Measure the DUT Vendor Specific IPsec Decryption Latency
      for IPsec protected traffic.

   Topology  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  Send a stream of IPsec protected frames at a particular
      frame size through the DUT/SUT at the determined throughput rate
      using frames that match the IPsec SA selector(s) to be tested.

      The stream SHOULD be at least 120 seconds in duration.  An
      identifying tag SHOULD be included in one frame after 60 seconds
      with the type of tag being implementation dependent.  The time at
      which this frame is fully transmitted is recorded (timestamp A).
      The DUT will receive the IPsec protected frames, perform IPsec
      operations and then send the cleartext frames to the tester.  Upon
      receipt of the decrypted frames, the receiver logic in the test
      equipment MUST recognize the tag information in the frame stream
      and record the time at which the tagged frame was received
      (timestamp B).

      The IPsec Decryption Latency is timestamp B minus timestamp A as
      per the relevant definition from [RFC1242], namely latency as
      defined for store and forward devices or latency as defined for
      bit forwarding devices.

      The test MUST be repeated at least 20 times with the reported
      value being the average of the recorded values.

   Reporting format:  The reporting format SHALL be the same as listed
      in Section 10.1 with the additional requirement that the Security
      Context Parameters, as defined in Section 7.6, utilized for this
      test MUST be included in any statement of performance.

10.5.  Time To First Packet

   Objective:  Measure the time it takes to transmit a packet when no
      SA's have been established.

   Topology  If no IPsec aware tester is available the test MUST be
      conducted using a System Under Test Topology as depicted in
      Figure 2.  When an IPsec aware tester is available the test MUST
      be executed using a Device Under Test Topology as depicted in
      Figure 1.

   Procedure:  Determine the IPsec throughput for the DUT/SUT at each of
      the listed frame sizes.  Start with a DUT/SUT with Configured
      Tunnels.  Send a stream of cleartext frames at a particular frame
      size through the DUT/SUT at the determined throughput rate using
      frames that match the IPsec SA selector(s) to be tested.

      The time at which the first frame is fully transmitted from the
      testing device is recorded as timestamp A. The time at which the
      testing device receives its first frame from the DUT/SUT is
      recorded as timestamp B. The Time To First Packet is the
      difference between Timestamp B and Timestamp A.

      Note that it is possible that packets can be lost during IPsec
      Tunnel establishment and that timestamp A & B are not required to
      be associated with a unique packet.

   Reporting format:  The Time To First Packet results SHOULD be
      reported in the format of a table with a row for each of the
      tested frame sizes.  There SHOULD be columns for the frame size,
      the rate at which the TTFP test was run for that frame size, for
      the media types tested, and for the resultant TTFP values for each
      type of data stream tested.  The Security Context Parameters
      defined in Section 7.6 and utilized for this test MUST be included
      in any statement of performance.

11.  Frame Loss Rate

   This section presents methodologies relating to the characterization
   of frame loss rate, as defined in [RFC1242], in an IPsec environment.

11.1.  Frame Loss Baseline

   Objective:  To determine the frame loss rate, as defined in
      [RFC1242], of a DUT/SUT throughout the entire range of input data
      rates and frame sizes without the use of IPsec.

   Topology  If no IPsec aware tester is available the test MUST be
      conducted using a System Under Test Topology as depicted in
      Figure 2.  When an IPsec aware tester is available the test MUST
      be executed using a Device Under Test Topology as depicted in
      Figure 1.

   Procedure:  Send a specific number of frames at a specific rate
      through the DUT/SUT to be tested using frames that match the IPsec
      SA selector(s) to be tested and count the frames that are
      transmitted by the DUT/SUT.  The frame loss rate at each point is
      calculated using the following equation:

      ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input media.
      Repeat the procedure for the rate that corresponds to 90% of the
      maximum rate used and then for 80% of this rate.  This sequence
      SHOULD be continued (at reduced 10% intervals) until there are two
      successive trials in which no frames are lost.  The maximum
      granularity of the trials MUST be 10% of the maximum rate, a finer
      granularity is encouraged.
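      As an informative illustration, the loss equation and the
      descending-rate sweep can be sketched in Python; "trial" is a
      hypothetical callback returning the (offered, received) frame
      counts for one run:

```python
def frame_loss_rate(input_count, output_count):
    """Frame loss rate in percent, per the equation above:
    ((input_count - output_count) * 100) / input_count."""
    return ((input_count - output_count) * 100) / input_count

def loss_sweep(trial, max_rate, step_pct=10):
    """Run trials from 100% of max_rate downward in 10% steps until
    two successive trials show no loss; returns {rate_pct: loss_pct}."""
    results = {}
    clean = 0
    for pct in range(100, 0, -step_pct):
        sent, received = trial(max_rate * pct / 100)
        loss = frame_loss_rate(sent, received)
        results[pct] = loss
        clean = clean + 1 if loss == 0 else 0
        if clean == 2:          # two successive loss-free trials
            break
    return results
```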

   Reporting Format:  The results of the frame loss rate test SHOULD be
      plotted as a graph.  If this is done then the X axis MUST be the
      input frame rate as a percent of the theoretical rate for the
      media at the specific frame size.  The Y axis MUST be the percent
      loss at the particular input rate.  The left end of the X axis and
      the bottom of the Y axis MUST be 0 percent; the right end of the X
      axis and the top of the Y axis MUST be 100 percent.  Multiple
      lines on the graph MAY be used to report the frame loss rate for
      different frame sizes, protocols, and types of data streams.

11.2.  IPsec Frame Loss

   Objective:  To measure the frame loss rate of a device when using
      IPsec to protect the data flow.

   Topology  If no IPsec aware tester is available the test MUST be
      conducted using a System Under Test Topology as depicted in
      Figure 2.  When an IPsec aware tester is available the test MUST
      be executed using a Device Under Test Topology as depicted in
      Figure 1.

   Procedure:  Ensure that the DUT/SUT is in active tunnel mode.  Send a
      specific number of cleartext frames that match the IPsec SA
      selector(s) to be tested at a specific rate through the DUT/SUT.
      DUTa will encrypt the traffic and forward to DUTb which will in
      turn decrypt the traffic and forward to the testing device.  The
      testing device counts the frames that are transmitted by the DUTb.
      The frame loss rate at each point is calculated using the
      following equation:

      ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input media.
      Repeat the procedure for the rate that corresponds to 90% of the
      maximum rate used and then for 80% of this rate.  This sequence
      SHOULD be continued (at reducing 10% intervals) until there are
      two successive trials in which no frames are lost.  The maximum
      granularity of the trials MUST be 10% of the maximum rate, a finer
      granularity is encouraged.

   Reporting Format:  The reporting format SHALL be the same as listed
      in Section 11.1 with the additional requirement that the Security
      Context Parameters, as defined in Section 7.6, utilized for this
      test MUST be included in any statement of performance.

11.3.  IPsec Encryption Frame Loss

   Objective:  To measure the effect of IPsec encryption on the frame
      loss rate of a device.

   Procedure:  Send a specific number of cleartext

   Topology  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  Send a specific number of cleartext frames that match the
      IPsec SA selector(s) at a specific rate to the DUT.  The DUT will
      receive the cleartext frames, perform IPsec operations and then
      send the IPsec protected frame to the tester.  The testing device
      counts the encrypted frames that are transmitted by the DUT.  The
      frame loss rate at each point is calculated using the following
      equation:

      ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input media.
      Repeat the procedure for the rate that corresponds to 90% of the
      maximum rate used and then for 80% of this rate.  This sequence
      SHOULD be continued (at reducing 10% intervals) until there are
      two successive trials in which no frames are lost.  The maximum
      granularity of the trials MUST be 10% of the maximum rate, a finer
      granularity is encouraged.

   Reporting Format:  The reporting format SHALL be the same as listed
      in Section 11.1 with the additional requirement that the Security
      Context Parameters, as defined in Section 7.6, utilized for this
      test MUST be included in any statement of performance.

11.4.  IPsec Decryption Frame Loss

   Objective:  To measure the effects of IPsec encryption on the frame
      loss rate of a device.

   Topology:  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  Send a specific number of IPsec protected frames that
      match the IPsec SA selector(s) at a specific rate to the DUT.  The
      DUT will receive the IPsec protected frames, perform IPsec
      operations and then send the cleartext frames to the tester.  The
      testing device counts the cleartext frames that are transmitted by
      the DUT.  The frame loss rate at each point is calculated using
      the following equation:

      ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run for the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input media.
      Repeat the procedure for the rate that corresponds to 90% of the
      maximum rate used and then for 80% of this rate.  This sequence
      SHOULD be continued (at reducing 10% intervals) until there are
      two successive trials in which no frames are lost.  The maximum
      granularity of the trials MUST be 10% of the maximum rate, a finer
      granularity is encouraged.

   Reporting format:  The reporting format SHALL be the same as listed
      in Section 11.1 with the additional requirement that the Security
      Context Parameters, as defined in Section 7.6, utilized for this
      test MUST be included in any statement of performance.

11.5.  IKE Phase 2 Rekey Frame Loss

   Objective:  To measure the frame loss due to an IKE Phase 2 (i.e.
      IPsec SA) Rekey event.

   Topology:  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  The procedure is the same as in Section 11.2 with the
      exception that the IPsec SA lifetime MUST be configured to be one-
      third of the test duration or one-third of the total number of
      bytes to be transmitted during the trial duration.

   Reporting format:  The reporting format SHALL be the same as listed
      in Section 11.1 with the additional requirement that the Security
      Context Parameters, as defined in Section 7.6, utilized for this
      test MUST be included in any statement of performance.

12.  IPsec Tunnel Setup Behavior

12.1.  IPsec Tunnel Setup Rate

   Objective:  Determine the rate at which IPsec Tunnels can be
      established.

   Topology:  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  Send a burst of frames that matches  Configure the IPsec SA
      selector(s) to be tested with minimum inter-frame gaps to Responder (where the DUT Responder is the DUT)
      with n IKE Phase 1 and count corresponding IKE Phase 2 policies.  Ensure
      that no SA's are established and that the number Responder has
      Established Tunnels for all n policies.  Send a stream of
      cleartext frames forwarded by at a particular frame size to the DUT.  If Responder at
      the count
      of transmitted determined throughput rate using frames is equal to with selectors
      matching the number of frames forwarded first IKE Phase 1 policy.  As soon as the length of testing
      device receives its first frame from the burst Responder, it knows that
      the IPsec Tunnel is increased established and starts sending the test is rerun.  If
      the number next stream
      of forwarded cleartext frames is less than the number
      transmitted, the length of using the burst is reduced same frame size and throughput rate
      but this time using selectors matching the test second IKE Phase 1
      policy.  This process is
      rerun. repeated until all configured IPsec
      Tunnels have been established.

      The back-to-back value IPsec Tunnel Setup Rate is the number of frames measured in Tunnels Per Second
      (TPS) and is determined by the longest
      burst that the DUT will handle without the loss following formula:

      Tunnel Setup Rate = n / [Duration of any frames. Test - (n *
      frame_transmit_time)] TPS

      The trial length MUST be at least 2 seconds IKE SA lifetime and SHOULD the IPsec SA lifetime MUST be repeated
      at least 50 times with configured
      to exceed the average duration of the recorded values being
      reported. test time.  It is RECOMMENDED that
      n=100 IPsec Tunnels are tested at a minimum to get a large enough
      sample size to depict some real-world behavior.

   Reporting format: Format:  The back-to-back Tunnel Setup Rate results SHOULD be reported
      in the format of a table with a row for each of the tested frame
      sizes.  There SHOULD be columns for:

         The throughput rate at which the test was run for the specified
         frame size and

         The media type used for the test

         The resultant
      average frame count Tunnel Setup Rate values, in TPS, for each type of the
         particular data stream tested. tested for that frame size

      The
      standard deviation Security Context Parameters defined in Section 7.6 and
      utilized for each measurement MAY also this test MUST be reported. included in any statement of
      performance.
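   As a worked illustration of the Tunnel Setup Rate formula, the sketch
   below computes TPS from the test duration and the per-frame transmit
   time.  This is not part of the methodology; the function and
   parameter names are invented for the example.  For 64-byte frames on
   Gigabit Ethernet, the wire time per frame is roughly 672 ns once the
   preamble and interframe gap are counted.

```python
def tunnel_setup_rate(n_tunnels: int,
                      test_duration_s: float,
                      frame_transmit_time_s: float) -> float:
    """Tunnels Per Second = n / [Duration of Test - (n * frame_transmit_time)].

    All parameter names are illustrative; times are in seconds.
    """
    effective = test_duration_s - n_tunnels * frame_transmit_time_s
    if effective <= 0:
        raise ValueError("test duration too short for the given frame times")
    return n_tunnels / effective

# Example: 100 tunnels established during a 60 s test with 64-byte
# frames on Gigabit Ethernet (about 672 ns on the wire per frame).
rate_tps = tunnel_setup_rate(100, 60.0, 672e-9)
```

   Note that for realistic test durations the n * frame_transmit_time
   correction is tiny; it only matters at very high frame rates or very
   short tests.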

12.2.  IKE Phase 1 Setup Rate

   Objective:  Determine the rate at which IKE SA's can be established.

   Topology:  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  Configure the Responder with n IKE Phase 1 and
      corresponding IKE Phase 2 policies.  Ensure that no SA's are
      established and that the Responder has Configured Tunnels for all
      n policies.  Send a stream of cleartext frames at a particular
      frame size through the Responder at the determined throughput rate
      using frames with selectors matching the first IKE Phase 1 policy.
      As soon as the Phase 1 SA is established, the testing device
      starts sending the next stream of cleartext frames using the same
      frame size and throughput rate but this time using selectors
      matching the second IKE Phase 1 policy.  This process is repeated
      until all configured IKE SA's have been established.

      The IKE SA Setup Rate is determined by the following formula:

      IKE SA Setup Rate = n / [Duration of Test - (n *
      frame_transmit_time)]

      The IKE SA lifetime and the IPsec SA lifetime MUST be configured
      to exceed the duration of the test time.  It is RECOMMENDED that
      n=100 IKE SA's are tested at a minimum to get a large enough
      sample size to depict some real-world behavior.

   Reporting Format:  The IKE Phase 1 Setup Rate results SHOULD be
      reported in the format of a table with a row for each of the
      tested frame sizes.  There SHOULD be columns for the frame size,
      the rate at which the test was run for that frame size, for the
      media types tested, and for the resultant IKE Phase 1 Setup Rate
      values for each type of data stream tested.  The Security Context
      Parameters defined in Section 7.6 and utilized for this test MUST
      be included in any statement of performance.

12.3.  IKE Phase 2 Setup Rate

   Objective:  Determine the rate at which IPsec SA's can be
      established.

   Topology:  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  Configure the Responder (where the Responder is the DUT)
      with a single IKE Phase 1 policy and n corresponding IKE Phase 2
      policies.  Ensure that no SA's are established and that the
      Responder has Configured Tunnels for all policies.  Send a stream
      of cleartext frames at a particular frame size through the
      Responder at the determined throughput rate using frames with
      selectors matching the first IPsec SA policy.

      The time at which the IKE SA is established is recorded as
      timestamp_A.  As soon as the Phase 1 SA is established, the IPsec
      SA negotiation will be initiated.  Once the first IPsec SA has
      been established, start sending the next stream of cleartext
      frames using the same frame size and throughput rate but this time
      using selectors matching the second IKE Phase 2 policy.  This
      process is repeated until all configured IPsec SA's have been
      established.

      The IPsec SA Setup Rate is determined by the following formula,
      where test_duration and frame_transmit_time are expressed in units
      of seconds:

      IPsec SA Setup Rate = n / [test_duration - {timestamp_A + ((n-1) *
      frame_transmit_time)}] IPsec SA's per Second

      The IKE SA lifetime and the IPsec SA lifetime MUST be configured
      to exceed the duration of the test time.  It is RECOMMENDED that
      n=100 IPsec SA's are tested at a minimum to get a large enough
      sample size to depict some real-world behavior.

   Reporting Format:  The IKE Phase 2 Setup Rate results SHOULD be
      reported in the format of a table with a row for each of the
      tested frame sizes.  There SHOULD be columns for:

         The throughput rate at which the test was run for the specified
         frame size

         The media type used for the test

         The resultant IKE Phase 2 Setup Rate values, in IPsec SA's per
         second, for the particular data stream tested for that frame
         size

      The Security Context Parameters defined in Section 7.6 and
      utilized for this test MUST be included in any statement of
      performance.
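   The IPsec SA Setup Rate formula differs from the Tunnel Setup Rate by
   subtracting the Phase 1 establishment time (timestamp_A) before
   dividing.  A minimal sketch, with names invented for the example and
   all times in seconds:

```python
def ipsec_sa_setup_rate(n: int,
                        test_duration: float,
                        timestamp_a: float,
                        frame_transmit_time: float) -> float:
    """n / [test_duration - {timestamp_A + ((n-1) * frame_transmit_time)}].

    timestamp_a is the time at which the single IKE (Phase 1) SA was
    established, measured from the start of the test.
    """
    denominator = test_duration - (timestamp_a + (n - 1) * frame_transmit_time)
    if denominator <= 0:
        raise ValueError("test duration too short to attribute SA setup time")
    return n / denominator  # IPsec SA's per second
```

   The (n-1) factor reflects that the frame triggering the first IPsec
   SA is already accounted for inside timestamp_A.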

13.  IPsec Rekey Behavior

   The IPsec Rekey Behavior tests all need to be executed by an IPsec
   aware test device since the test needs to be closely linked with the
   IKE FSM (Finite State Machine) and cannot be done by offering a
   specific traffic pattern at either the Initiator or the Responder.

13.1.  IKE Phase 1 Rekey Rate

   Objective:  Determine the maximum rate at which an IPsec Device can
      rekey IKE SA's.

   Topology:  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  The IPsec Device under test should initially be set up
      with the determined IPsec Tunnel Capacity number of Active IPsec
      Tunnels.

      The IPsec aware tester should then perform a binary search where
      it initiates an IKE Phase 1 SA rekey for all Active IPsec Tunnels.
      The tester MUST record a timestamp for each IKE SA when it
      initiated the rekey (timestamp_A) and MUST timestamp once more
      once the FSM declares the rekey is completed (timestamp_B).  The
      rekey time for a specific SA equals timestamp_B - timestamp_A.
      Once the iteration is complete, the tester has a table of rekey
      times for each IKE SA.  The reciprocal of the average of this
      table is the IKE Phase 1 Rekey Rate.

      It is expected that all IKE SA's were able to rekey successfully.
      If this is not the case, the IPsec Tunnels are all re-established
      and the binary search goes to the next value of IKE SA's it will
      rekey.  The process will repeat itself until a rate is determined
      at which all SA's in that timeframe rekey correctly.

   Reporting Format:  The IKE Phase 1 Rekey Rate results SHOULD be
      reported in the format of a table with a row for each of the
      tested frame sizes.  There SHOULD be columns for the frame size,
      the rate at which the test was run for that frame size, for the
      media types tested, and for the resultant IKE Phase 1 Rekey Rate
      values for each type of data stream tested.  The Security Context
      Parameters defined in Section 7.6 and utilized for this test MUST
      be included in any statement of performance.
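   The reduction of the per-SA rekey-time table to a single rate (the
   reciprocal of the average rekey time) can be sketched as follows;
   the function and parameter names are illustrative only:

```python
def rekey_rate(rekey_times):
    """Reciprocal of the average per-SA rekey time.

    rekey_times holds one (timestamp_B - timestamp_A) sample per SA,
    in seconds; the result is in rekeys per second.
    """
    if not rekey_times:
        raise ValueError("no rekey samples recorded")
    average = sum(rekey_times) / len(rekey_times)
    return 1.0 / average

# Example: four SA's each rekeying in 250 ms average to a rate of
# four rekeys per second.
rate = rekey_rate([0.25, 0.25, 0.25, 0.25])
```

   The same reduction applies to the IKE Phase 2 Rekey Rate, using the
   per-IPsec-SA table instead.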

13.2.  IKE Phase 2 Rekey Rate

   Objective:  Determine the maximum rate at which an IPsec Device can
      rekey IPsec SA's.

   Topology:  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  The IPsec Device under test should initially be set up
      with the determined IPsec Tunnel Capacity number of Active IPsec
      Tunnels.

      The IPsec aware tester should then perform a binary search where
      it initiates an IKE Phase 2 SA rekey for all IPsec SA's.  The
      tester MUST record a timestamp for each IPsec SA when it initiated
      the rekey (timestamp_A) and MUST timestamp once more once the FSM
      declares the rekey is completed (timestamp_B).  The rekey time for
      a specific IPsec SA is timestamp_B - timestamp_A.  Once the
      iteration is complete, the tester has a table of rekey times for
      each IPsec SA.  The reciprocal of the average of this table is the
      IKE Phase 2 Rekey Rate.

      It is expected that all IPsec SA's were able to rekey
      successfully.  If this is not the case, the IPsec Tunnels are all
      re-established and the binary search goes to the next value of
      IPsec SA's it will rekey.  The process will repeat itself until a
      rate is determined at which all SA's in that timeframe rekey
      correctly.

   Reporting Format:  The IKE Phase 2 Rekey Rate results SHOULD be
      reported in the format of a table with a row for each of the
      tested frame sizes.  There SHOULD be columns for the frame size,
      the rate at which the test was run for that frame size, for the
      media types tested, and for the resultant IKE Phase 2 Rekey Rate
      values for each type of data stream tested.  The Security Context
      Parameters defined in Section 7.6 and utilized for this test MUST
      be included in any statement of performance.

14.  IPsec Tunnel Failover Time

   This section presents methodologies relating to the characterization
   of the failover behavior of a DUT/SUT in an IPsec environment.

   In order to lessen the effect of packet buffering in the DUT/SUT, the
   Tunnel Failover Time tests MUST be run at the measured IPsec
   Throughput level of the DUT.  Tunnel Failover Time tests at other
   offered constant loads are OPTIONAL.

   Tunnel Failovers can be achieved in various ways, for example:

   o  Failover between two Software Instances of an IPsec stack.

   o  Failover between two IPsec devices.

   o  Failover between two Hardware IPsec Engines within a single IPsec
      Device.

   o  Fallback to Software IPsec from Hardware IPsec within a single
      IPsec Device.

   In all of the above cases there shall be at least one active IPsec
   device and a standby device.  In some cases the standby device is not
   present and two or more IPsec devices are backing each other up in
   case of a catastrophic device or stack failure.  The standby (or
   potential other active) IPsec Devices can back up the active IPsec
   Device in either a stateless or stateful method.  In the former case,
   Phase 1 SA's as well as Phase 2 SA's will need to be re-established
   in order to guarantee packet forwarding.  In the latter case, the SPD
   and SADB of the active IPsec Device are synchronized to the standby
   IPsec Device to ensure immediate packet path recovery.

   Objective:  Determine the time required to fail over all Active
      Tunnels from an active IPsec Device to its standby device.

   Topology:  If no IPsec aware tester is available, the test MUST be
      conducted using a Redundant System Under Test Topology as depicted
      in Figure 4.  When an IPsec aware tester is available the test
      MUST be executed using a Redundant Unit Under Test Topology as
      depicted in Figure 3.  If the failover is being tested within a
      single DUT, e.g. crypto engine based failovers, a Device Under
      Test Topology as depicted in Figure 1 MAY be used as well.

   Procedure:  Before a failover can be triggered, the IPsec Device has
      to be in a state where the active stack/engine/node has the
      maximum supported number of Active Tunnels.  The Tunnels will be
      transporting bidirectional traffic at the determined IPsec
      Throughput rate for the smallest framesize that the stack/engine/
      node is capable of forwarding (in most cases, this will be 64
      bytes).  The traffic should traverse in a round robin fashion
      through all Active Tunnels.

      When traffic is flowing through all Active Tunnels in steady
      state, a failover shall be triggered.

      Both receiver sides of the testers will now look at sequence
      counters in the instrumented packets that are being forwarded
      through the Tunnels.  Each Tunnel MUST have its own counter to
      keep track of packet loss on a per SA basis.

      If the tester observes no sequence number drops on any of the
      Tunnels in both directions then the Failover Time MUST be listed
      as 'null', indicating that the failover was immediate and without
      any packet loss.

      In all other cases where the tester observes a gap in the sequence
      numbers of the instrumented payload of the packets, the tester
      will monitor all SA's and look for any Tunnels that are still not
      receiving packets after the Failover.  These will be marked as
      'pending' Tunnels.  Active Tunnels that are forwarding packets
      again without any packet loss shall be marked as 'recovered'
      Tunnels.  In the background the tester will keep monitoring all
      SA's to make sure that no packets are dropped.  If packets are
      dropped, then the Tunnel in question will be placed back in
      'pending' state.

      Note that reordered packets can naturally occur after en/
      decryption.  This is not a valid reason to place a Tunnel back in
      'pending' state.

      The tester will wait until all Tunnels are marked as 'recovered'.
      Then it will find the SA with the largest gap in sequence number.
      Given the fact that the framesize is fixed and the time of that
      framesize can easily be calculated for the initiator links, a
      simple multiplication of the framesize time * largest packet loss
      gap will yield the Tunnel Failover Time.

      It is RECOMMENDED that the test is repeated for various numbers of
      Active Tunnels as well as for different framesizes and framerates.

   Reporting Format:  The results shall be represented in a tabular
      format, where the first column will list the number of Active
      Tunnels, the second column the Framesize, the third column the
      Framerate and the fourth column the Tunnel Failover Time in
      milliseconds.
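   The failover-time calculation (per-frame time multiplied by the
   largest observed sequence gap) can be sketched as below.  The names
   and the link-rate parameter are invented for the example, and the
   per-frame time here counts only the serialized frame bits; a real
   tester would also include per-frame media overhead such as preamble
   and interframe gap.

```python
def tunnel_failover_time_ms(largest_gap: int,
                            framesize_bytes: int,
                            link_rate_bps: float) -> float:
    """Tunnel Failover Time estimate in milliseconds.

    largest_gap is the biggest per-SA sequence-number gap seen after
    the failover; framesize is fixed for the whole trial.
    """
    # Time to transmit one frame on the initiator link, in seconds.
    frame_time = framesize_bytes * 8 / link_rate_bps
    # Each missing sequence number represents one frame interval.
    return largest_gap * frame_time * 1000.0
```

   For instance, a gap of 1000 frames of 125 bytes on a 1 Mbit/s link
   corresponds to one second of lost traffic.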

15.  DoS Attack Resiliency

15.1.  Phase 1 DoS Resiliency Rate

   Objective:  Determine how many invalid IKE Phase 1 sessions can be
      dropped before the DUT fails to establish a valid IKE session.

   Topology:  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  Send a burst of IKE Phase 1 messages, at the determined
      IPsec Throughput, to the DUT.  This burst contains a series of
      invalid IKE messages (containing either a mismatched pre-shared
      key or an invalid certificate), followed by a single valid IKE
      message.  The objective is to increase the string of invalid
      messages that are prepended before the valid IKE message up to the
      point where the Tunnel associated with the valid IKE request can
      no longer be processed and does not yield an Established Tunnel
      anymore.  The test SHALL start with 1 invalid IKE message and a
      single valid IKE message.  If the Tunnel associated with the valid
      IKE message can be Established, then that Tunnel is torn down and
      the test will be restarted with an increased count of invalid IKE
      messages.

   Reporting Format:  Failed Attempts.  The Security Context Parameters
      defined in Section 7.6 and utilized for this test MUST be included
      in any statement of performance.

15.2.  Phase 2 Hash Mismatch DoS Resiliency Rate

   Objective:  Determine the rate of Hash Mismatched packets at which a
      valid IPsec stream starts dropping frames.

   Topology:  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  A stream of IPsec traffic is offered to a DUT for
      decryption.  This stream consists of two microflows: one valid
      microflow and one that contains altered IPsec packets with a Hash
      Mismatch.  The aggregate rate of both microflows MUST be equal to
      the IPsec Throughput and should therefore be able to pass the DUT.
      A binary search will be applied to determine the ratio between the
      two microflows that causes packetloss on the valid microflow of
      traffic.

      The test MUST be conducted with a single Active Tunnel.  It MAY be
      repeated at various Tunnel scalability data points.

   Reporting Format:  PPS (of invalid traffic).  The Security Context
      Parameters defined in Section 7.6 and utilized for this test MUST
      be included in any statement of performance.
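      As an informative illustration only, the binary search over the
      microflow ratio can be sketched as follows; offer_mix is a
      hypothetical measurement callback that runs one trial at the
      fixed aggregate IPsec Throughput and reports whether the valid
      microflow saw packetloss:

```python
# Sketch of the ratio binary search used in the Hash Mismatch test.
# `offer_mix(invalid_pps=...)` is an assumed tester hook, not a real
# API: it offers the two microflows at the fixed aggregate rate and
# returns True if the valid microflow dropped frames.

def find_invalid_pps(offer_mix, throughput_pps, resolution_pps=1):
    """Return the highest invalid-traffic rate (PPS) the DUT absorbs
    without dropping frames of the valid microflow."""
    lo, hi = 0, throughput_pps          # invalid share of the aggregate
    while hi - lo > resolution_pps:
        mid = (lo + hi) // 2
        if offer_mix(invalid_pps=mid):  # loss observed on valid flow
            hi = mid                    # too much invalid traffic
        else:
            lo = mid                    # DUT still copes; push higher
    return lo
```

      The search converges to the reported PPS-of-invalid-traffic value
      within the chosen resolution.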

15.3.  Phase 2 Anti Replay Attack DoS Resiliency Rate

   Objective:  Determine the rate of replayed packets at which a valid
      IPsec stream starts dropping frames.

   Topology:  The test MUST be conducted using a Device Under Test
      Topology as depicted in Figure 1.

   Procedure:  A stream of IPsec traffic is offered to a DUT for
      decryption.  This stream consists of two microflows: one valid
      microflow and one that contains replayed packets of the valid
      microflow.  The aggregate rate of both microflows MUST be equal to
      the IPsec Throughput and should therefore be able to pass the DUT.
      A binary search will be applied to determine the ratio between the
      two microflows that causes packetloss on the valid microflow of
      traffic.

      The replayed packets should always be offered within the window in
      which the original packet arrived, i.e. each MUST be replayed
      directly after the original packet has been sent to the DUT.  The
      binary search SHOULD start with a low anti replay count where
      every few anti replay windows, a single packet in the window is
      replayed.  To increase this, one should obey the following
      sequence:

      *  Increase the replayed packets so every window contains a single
         replayed packet

      *  Increase the replayed packets so every packet within a window
         is replayed once

      *  Increase the replayed packets so packets within a single window
         are replayed multiple times following the same fill sequence

      If the flow of replayed traffic equals the IPsec Throughput, the
      flow SHOULD be increased till the point where packetloss is
      observed on the replayed traffic flow.

      The test MUST be conducted with a single Active Tunnel.  It MAY be
      repeated at various Tunnel scalability data points.  The test
      SHOULD also be repeated on all configurable Anti Replay Window
      Sizes.

   Reporting Format:  PPS (of replayed traffic).  The Security Context
      Parameters defined in Section 7.6 and utilized for this test MUST
      be included in any statement of performance.
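      As an informative illustration only, the replay escalation
      sequence can be sketched as a schedule generator; the window
      size and window count below are illustrative parameters, not
      requirements of this document:

```python
# Sketch of the replay-escalation schedule.  Sequence numbers are
# grouped into anti-replay windows; each escalation level replays more
# packets per window, in the order the procedure lists.  Every yielded
# sequence number would be replayed directly after the original packet
# is sent, so the copy lands inside the same window.

def replay_schedule(level, window_size=64, windows=4, every_nth_window=2):
    """Yield the sequence numbers to replay at a given escalation level.
    level 0:  one packet every few windows (the starting point)
    level 1:  one packet in every window
    level 2:  every packet in every window, replayed once
    level >2: every packet replayed (level - 1) times, same fill order
    """
    for w in range(windows):
        base = w * window_size
        if level == 0:
            if w % every_nth_window == 0:
                yield base            # single packet, every few windows
        elif level == 1:
            yield base                # single packet, every window
        else:
            repeats = 1 if level == 2 else level - 1
            for seq in range(base, base + window_size):
                for _ in range(repeats):
                    yield seq         # same fill sequence, repeated
```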

16.  Acknowledgements

   The authors would like to acknowledge the following individuals for
   their help and participation in the compilation and editing of this
   document: Michele Bustos; Paul Hoffman, VPNC; Benno Overeinder;
   Scott Poretsky; Cisco NSITE Labs.

17.  References

17.1.  Normative References

   [RFC1242]  Bradner, S., "Benchmarking terminology for network
              interconnection devices", RFC 1242, July 1991.

   [RFC1981]  McCann, J., Deering, S., and J. Mogul, "Path MTU Discovery
              for IP version 6", RFC 1981, August 1996.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2285]  Mandeville, R., "Benchmarking Terminology for LAN
              Switching Devices", RFC 2285, February 1998.

   [RFC2393]  Shacham, A., Monsour, R., Pereira, R., and M. Thomas, "IP
              Payload Compression Protocol (IPComp)", RFC 2393,
              December 1998.

   [RFC2401]  Kent, S. and R. Atkinson, "Security Architecture for the
              Internet Protocol", RFC 2401, November 1998.

   [RFC2402]  Kent, S. and R. Atkinson, "IP Authentication Header",
              RFC 2402, November 1998.

   [RFC2403]  Madson, C. and R. Glenn, "The Use of HMAC-MD5-96 within
              ESP and AH", RFC 2403, November 1998.

   [RFC2404]  Madson, C. and R. Glenn, "The Use of HMAC-SHA-1-96 within
              ESP and AH", RFC 2404, November 1998.

   [RFC2405]  Madson, C. and N. Doraswamy, "The ESP DES-CBC Cipher
              Algorithm With Explicit IV", RFC 2405, November 1998.

   [RFC2406]  Kent, S. and R. Atkinson, "IP Encapsulating Security
              Payload (ESP)", RFC 2406, November 1998.

   [RFC2407]  Piper, D., "The Internet IP Security Domain of
              Interpretation for ISAKMP", RFC 2407, November 1998.

   [RFC2408]  Maughan, D., Schneider, M., and M. Schertler, "Internet
              Security Association and Key Management Protocol
              (ISAKMP)", RFC 2408, November 1998.

   [RFC2409]  Harkins, D. and D. Carrel, "The Internet Key Exchange
              (IKE)", RFC 2409, November 1998.

   [RFC2410]  Glenn, R. and S. Kent, "The NULL Encryption Algorithm and
              Its Use With IPsec", RFC 2410, November 1998.

   [RFC2411]  Thayer, R., Doraswamy, N., and R. Glenn, "IP Security
              Document Roadmap", RFC 2411, November 1998.

   [RFC2412]  Orman, H., "The OAKLEY Key Determination Protocol",
              RFC 2412, November 1998.

   [RFC2432]  Dubray, K., "Terminology for IP Multicast Benchmarking",
              RFC 2432, October 1998.

   [RFC2451]  Pereira, R. and R. Adams, "The ESP CBC-Mode Cipher
              Algorithms", RFC 2451, November 1998.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, March 1999.

   [RFC2547]  Rosen, E. and Y. Rekhter, "BGP/MPLS VPNs", RFC 2547,
              March 1999.

   [RFC2661]  Townsley, W., Valencia, A., Rubens, A., Pall, G., Zorn,
              G., and B. Palter, "Layer Two Tunneling Protocol "L2TP"",
              RFC 2661, August 1999.

   [RFC2784]  Farinacci, D., Li, T., Hanks, S., Meyer, D., and P.
              Traina, "Generic Routing Encapsulation (GRE)", RFC 2784,
              March 2000.

   [RFC4109]  Hoffman, P., "Algorithms for Internet Key Exchange version
              1 (IKEv1)", RFC 4109, May 2005.

   [RFC4305]  Eastlake, D., "Cryptographic Algorithm Implementation
              Requirements for Encapsulating Security Payload (ESP) and
              Authentication Header (AH)", RFC 4305, December 2005.

   [I-D.ietf-ipsec-ikev2]
              Kaufman, C., "Internet Key Exchange (IKEv2) Protocol",
              draft-ietf-ipsec-ikev2-17 (work in progress),
              October 2004.

   [I-D.ietf-ipsec-properties]
              Krywaniuk, A., "Security Properties of the IPsec Protocol
              Suite", draft-ietf-ipsec-properties-02 (work in progress),
              July 2002.

   [I-D.ietf-bmwg-ipv6-meth]
              Popoviciu, C., "IPv6 Benchmarking Methodology for Network
              Interconnect Devices", draft-ietf-bmwg-ipv6-meth-03 (work
              in progress), August 2007.

17.2.  Informative References

   [FIPS.186-1.1998]
              National Institute of Standards and Technology, "Digital
              Signature Standard", FIPS PUB 186-1, December 1998,
              <http://csrc.nist.gov/fips/fips1861.pdf>.

Authors' Addresses

   Merike Kaeo
   Double Shot Security
   3518 Fremont Ave N #363
   Seattle, WA  98103
   USA

   Phone: +1(310)866-0165
   Email: kaeo@merike.com

   Tim Van Herck
   Cisco Systems
   170 West Tasman Drive
   San Jose, CA  95134-1706
   USA

   Phone: +1(408)853-2284
   Email: herckt@cisco.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on an
   "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
   OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS
   OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed to
   pertain to the implementation or use of the technology described in
   this document or the extent to which any license under such rights
   might or might not be available; nor does it represent that it has
   made any independent effort to identify any such rights.  Information
   on the procedures with respect to rights in RFC documents can be
   found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use of
   such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository at
   http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).