Benchmarking Working Group                                       M. Kaeo
Internet-Draft                                      Double Shot Security
Expires: October 5, 2009                                    T. Van Herck
                                                           Cisco Systems
                                                           April 3, 2009

              Methodology for Benchmarking IPsec Devices
                     draft-ietf-bmwg-ipsec-meth-04
Status of this Memo
This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.  This document may contain material
from IETF Documents or IETF Contributions published or made publicly
available before November 10, 2008.  The person(s) controlling the
copyright in some of this material may not have granted the IETF
Trust the right to allow modifications of such material outside the
IETF Standards Process.  Without obtaining an adequate license from
the person(s) controlling the copyright in such materials, this
document may not be modified outside the IETF Standards Process, and
derivative works of it may not be created outside the IETF Standards
Process, except to format it for publication as an RFC or to
translate it into languages other than English.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups.  Note that
other groups may also distribute working documents as Internet-
Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
This Internet-Draft will expire on October 5, 2009.
Copyright Notice
Copyright (c) 2009 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents in effect on the date of
publication of this document (http://trustee.ietf.org/license-info).
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document.
Abstract
The purpose of this draft is to describe methodology specific to the
benchmarking of IPsec IP forwarding devices.  It builds upon the
tenets set forth in [RFC2544], [RFC2432] and other IETF Benchmarking
Methodology Working Group (BMWG) efforts.  This document seeks to
extend these efforts to the IPsec paradigm.

The BMWG produces two major classes of documents: Benchmarking
Terminology documents and Benchmarking Methodology documents.  The
Terminology documents present the benchmarks and other related terms.
The Methodology documents define the procedures required to collect
the benchmarks cited in the corresponding Terminology documents.
Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  5
   2.  Document Scope . . . . . . . . . . . . . . . . . . . . . . . .  5
   3.  Methodology Format . . . . . . . . . . . . . . . . . . . . . .  5
   4.  Key Words to Reflect Requirements  . . . . . . . . . . . . . .  6
   5.  Test Considerations  . . . . . . . . . . . . . . . . . . . . .  6
   6.  Test Topologies  . . . . . . . . . . . . . . . . . . . . . . .  6
   7.  Test Parameters  . . . . . . . . . . . . . . . . . . . . . . .  9
     7.1.  Frame Type . . . . . . . . . . . . . . . . . . . . . . . .  9
       7.1.1.  IP . . . . . . . . . . . . . . . . . . . . . . . . . .  9
       7.1.2.  UDP  . . . . . . . . . . . . . . . . . . . . . . . . .  9
       7.1.3.  TCP  . . . . . . . . . . . . . . . . . . . . . . . . .  9
       7.1.4.  NAT-Traversal  . . . . . . . . . . . . . . . . . . . .  9
     7.2.  Frame Sizes  . . . . . . . . . . . . . . . . . . . . . . . 10
     7.3.  Fragmentation and Reassembly . . . . . . . . . . . . . . . 10
     7.4.  Time To Live . . . . . . . . . . . . . . . . . . . . . . . 11
     7.5.  Trial Duration . . . . . . . . . . . . . . . . . . . . . . 11
     7.6.  Security Context Parameters  . . . . . . . . . . . . . . . 11
       7.6.1.  IPsec Transform Sets . . . . . . . . . . . . . . . . . 11
       7.6.2.  IPsec Topologies . . . . . . . . . . . . . . . . . . . 13
       7.6.3.  IKE Keepalives . . . . . . . . . . . . . . . . . . . . 14
       7.6.4.  IKE DH-group . . . . . . . . . . . . . . . . . . . . . 14
       7.6.5.  IKE SA / IPsec SA Lifetime . . . . . . . . . . . . . . 14
       7.6.6.  IPsec Selectors  . . . . . . . . . . . . . . . . . . . 15
       7.6.7.  NAT-Traversal  . . . . . . . . . . . . . . . . . . . . 15
   8.  Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
     8.1.  IPsec Tunnel Capacity  . . . . . . . . . . . . . . . . . . 15
     8.2.  IPsec SA Capacity  . . . . . . . . . . . . . . . . . . . . 16
   9.  Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . 17
     9.1.  Throughput baseline  . . . . . . . . . . . . . . . . . . . 17
     9.2.  IPsec Throughput . . . . . . . . . . . . . . . . . . . . . 18
     9.3.  IPsec Encryption Throughput  . . . . . . . . . . . . . . . 19
     9.4.  IPsec Decryption Throughput  . . . . . . . . . . . . . . . 20
   10. Latency  . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
     10.1. Latency Baseline . . . . . . . . . . . . . . . . . . . . . 21
     10.2. IPsec Latency  . . . . . . . . . . . . . . . . . . . . . . 22
     10.3. IPsec Encryption Latency . . . . . . . . . . . . . . . . . 23
     10.4. IPsec Decryption Latency . . . . . . . . . . . . . . . . . 24
     10.5. Time To First Packet . . . . . . . . . . . . . . . . . . . 24
   11. Frame Loss Rate  . . . . . . . . . . . . . . . . . . . . . . . 25
     11.1. Frame Loss Baseline  . . . . . . . . . . . . . . . . . . . 25
     11.2. IPsec Frame Loss . . . . . . . . . . . . . . . . . . . . . 26
     11.3. IPsec Encryption Frame Loss  . . . . . . . . . . . . . . . 27
     11.4. IPsec Decryption Frame Loss  . . . . . . . . . . . . . . . 28
     11.5. IKE Phase 2 Rekey Frame Loss . . . . . . . . . . . . . . . 28
   12. IPsec Tunnel Setup Behavior  . . . . . . . . . . . . . . . . . 29
     12.1. IPsec Tunnel Setup Rate  . . . . . . . . . . . . . . . . . 29
     12.2. IKE Phase 1 Setup Rate . . . . . . . . . . . . . . . . . . 30
     12.3. IKE Phase 2 Setup Rate . . . . . . . . . . . . . . . . . . 31
   13. IPsec Rekey Behavior . . . . . . . . . . . . . . . . . . . . . 32
     13.1. IKE Phase 1 Rekey Rate . . . . . . . . . . . . . . . . . . 32
     13.2. IKE Phase 2 Rekey Rate . . . . . . . . . . . . . . . . . . 33
   14. IPsec Tunnel Failover Time . . . . . . . . . . . . . . . . . . 34
   15. DoS Attack Resiliency  . . . . . . . . . . . . . . . . . . . . 36
     15.1. Phase 1 DoS Resiliency Rate  . . . . . . . . . . . . . . . 36
     15.2. Phase 2 Hash Mismatch DoS Resiliency Rate  . . . . . . . . 37
     15.3. Phase 2 Anti Replay Attack DoS Resiliency Rate . . . . . . 37
   16. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 39
   17. References . . . . . . . . . . . . . . . . . . . . . . . . . . 39
     17.1. Normative References . . . . . . . . . . . . . . . . . . . 39
     17.2. Informative References . . . . . . . . . . . . . . . . . . 41
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 41
1.  Introduction

This document defines a specific set of tests that can be used to
measure and report the performance characteristics of IPsec devices.
It extends the methodology already defined for benchmarking network
interconnecting devices in [RFC2544] to IPsec gateways and
additionally introduces tests which can be used to measure end-host
IPsec performance.
payload.
7.1.3.  TCP
It is OPTIONAL to perform the tests with TCP as the L4 protocol, but
in case this is considered, the TCP traffic is RECOMMENDED to be
stateful.  With TCP as the L4 header, it is possible that there will
not be enough room to add all instrumentation data to identify the
packets within the DUT/SUT.
7.1.4.  NAT-Traversal

It is RECOMMENDED to test the scenario where IPsec protected traffic
must traverse network address translation (NAT) gateways.  This is
commonly referred to as NAT-Traversal and requires UDP encapsulation.
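As an illustrative aside (not part of the methodology), the UDP
encapsulation used for NAT-Traversal is the one defined in RFC 3948:
both IKE and ESP move to UDP port 4500, and the receiver tells them
apart by the first four bytes after the UDP header.  The sketch below
shows the framing only; the function names are hypothetical, not part
of any tester API.

```python
import struct

NAT_T_PORT = 4500  # UDP port shared by IKE and ESP once NAT-T is in use


def udp_encapsulate_esp(esp_packet: bytes, src_port: int = NAT_T_PORT) -> bytes:
    """Wrap an ESP packet in the NAT-Traversal UDP header (RFC 3948).

    An ESP packet begins with its SPI, which is guaranteed non-zero, so
    no extra marker is needed to distinguish it from IKE on port 4500.
    """
    length = 8 + len(esp_packet)  # UDP header is 8 bytes
    checksum = 0                  # a zero UDP checksum is permitted here
    header = struct.pack("!HHHH", src_port, NAT_T_PORT, length, checksum)
    return header + esp_packet


def udp_encapsulate_ike(ike_msg: bytes, src_port: int = NAT_T_PORT) -> bytes:
    """IKE on port 4500 is prefixed with a four-byte zero non-ESP marker."""
    payload = b"\x00\x00\x00\x00" + ike_msg
    header = struct.pack("!HHHH", src_port, NAT_T_PORT, 8 + len(payload), 0)
    return header + payload
```

An IPsec aware tester offering NAT-T traffic would emit frames with
exactly this layout after the IP header.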
7.2.  Frame Sizes
Each test MUST be run with different frame sizes.  It is RECOMMENDED
to use the following cleartext layer 2 frame sizes for IPv4 tests
over Ethernet media: 64, 128, 256, 512, 1024, 1280, and 1518 bytes,
per RFC2544 section 9 [RFC2544].  The four CRC bytes are included in
the frame size specified.
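As a worked aside (not part of the methodology), the theoretical
frame rates referenced in the reporting sections below follow
directly from these frame sizes: each Ethernet frame on the wire also
carries 8 bytes of preamble and a 12-byte inter-frame gap beyond the
frame itself (CRC included).  A minimal sketch:

```python
# Theoretical maximum Ethernet frame rates for the RECOMMENDED sizes.
ETH_OVERHEAD = 8 + 12  # preamble + inter-frame gap, bytes per frame
FRAME_SIZES = [64, 128, 256, 512, 1024, 1280, 1518]  # CRC included


def max_frame_rate(frame_size: int, link_bps: int = 1_000_000_000) -> float:
    """Frames per second if the link carried back-to-back frames."""
    return link_bps / ((frame_size + ETH_OVERHEAD) * 8)


for size in FRAME_SIZES:
    print(f"{size:5d} bytes: {max_frame_rate(size):12.0f} frames/s")
```

For example, 64-byte frames on GigabitEthernet give the familiar
figure of roughly 1.49 million frames per second.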
For GigabitEthernet supporting jumbo frames, the cleartext layer 2
frame sizes used are 64, 128, 256, 512, 1024, 1280, 1518, 2048, 3072,
+--------------+--------------------------+-----------+
| AH Transform | Authentication Algorithm | Mode      |
+--------------+--------------------------+-----------+
| 1            | HMAC-SHA1-96             | Transport |
| 2            | HMAC-SHA1-96             | Tunnel    |
| 3            | AES-XCBC-MAC-96          | Transport |
| 4            | AES-XCBC-MAC-96          | Tunnel    |
+--------------+--------------------------+-----------+

                      Table 2
If AH is supported by the DUT/SUT, testing of AH Transforms 1 and 2
MUST be supported.  Testing of AH Transforms 3 and 4 SHOULD be
supported.
Note that these tables are derived from the Cryptographic
Algorithms for AH and ESP requirements as described in [RFC4305].
Optionally, other AH and/or ESP transforms MAY be supported.
+-----------------------+----+-----+
| Transform Combination | AH | ESP |
+-----------------------+----+-----+
| 1                     | 1  | 1   |
| 2                     | 2  | 2   |
then count the frames that are transmitted by the DUT.  If the
count of offered frames is equal to the count of received frames,
the rate of the offered stream is increased and the test is rerun.
If fewer frames are received than were transmitted, the rate of
the offered stream is reduced and the test is rerun.

The throughput is the fastest rate at which the count of test
frames transmitted by the DUT is equal to the number of test
frames sent to it by the test equipment.
Note that the IPsec SA selectors refer to the IP addresses and
port numbers.  So even though this is a test of only cleartext
traffic, the same type of traffic should be sent for the baseline
test as for tests utilizing IPsec.
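The iterative rate adjustment described above is commonly realized as
a binary search over the offered rate.  The sketch below assumes a
hypothetical `run_trial(rate)` stand-in for the tester's measurement
API, returning the counts of offered and received frames for one
trial; it is not part of any specific tester product.

```python
def throughput_search(run_trial, max_rate_fps, resolution_fps=1000):
    """Search for the fastest lossless rate, per the procedure above.

    run_trial(rate) -> (offered_frames, received_frames) for one trial
    at `rate` frames per second.  Returns the highest rate found, in
    frames per second, to within `resolution_fps`.
    """
    lo, hi = 0, max_rate_fps  # lo is the best known lossless rate
    while hi - lo > resolution_fps:
        rate = (lo + hi) // 2
        offered, received = run_trial(rate)
        if received == offered:
            lo = rate   # no loss: increase the offered stream and rerun
        else:
            hi = rate   # loss observed: reduce the offered stream
    return lo
```

The search terminates with the throughput in the sense of the
definition above, to within the chosen resolution.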
Reporting Format: The results of the throughput test SHOULD be
reported in the form of a graph.  If it is, the x coordinate
SHOULD be the frame size, the y coordinate SHOULD be the frame
rate.  There SHOULD be at least two lines on the graph.  There
SHOULD be one line showing the theoretical frame rate for the
media at the various frame sizes.  The second line SHOULD be the
plot of the test results.  Additional lines MAY be used on the
graph to report the results for each type of data stream tested.
Text accompanying the graph SHOULD indicate the protocol, data
stream format, and type of media used in the tests.
Any values for throughput rate MUST be expressed in packets per
second.  The rate MAY also be expressed in bits (or bytes) per
second if the vendor so desires.  The statement of performance
MUST include:
*  Measured maximum frame rate
*  Size of the frame used
*  Theoretical limit of the media for that frame size
*  Type of protocol used in the test
Even if a single value is used as part of the advertising copy,
the full table of results SHOULD be included in the product data
sheet.
9.2.  IPsec Throughput

Objective: Measure the intrinsic throughput of a device utilizing
IPsec.

Topology: If no IPsec aware tester is available the test MUST be
conducted using a System Under Test Topology as depicted in
Figure 2.  When an IPsec aware tester is available the test MUST
be executed using a Device Under Test Topology as depicted in
Figure 1.
Procedure: Send a specific number of frames at a specific rate
through the DUT/SUT to be tested using frames that match the IPsec
SA selector(s) to be tested and count the frames that are
transmitted by the DUT/SUT.  The frame loss rate at each point is
calculated using the following equation:

( ( input_count - output_count ) * 100 ) / input_count
The first trial SHOULD be run for the frame rate that corresponds
to 100% of the maximum rate for the nominal device throughput,
which is the throughput that is actually supported on an interface
for a specific packet size and may not be the theoretical maximum.
Repeat the procedure for the rate that corresponds to 90% of the
maximum rate used and then for 80% of this rate.  This sequence
SHOULD be continued (at reduced 10% intervals) until there are two
successive trials in which no frames are lost.  The maximum
granularity of the trials MUST be 10% of the maximum rate; a finer
granularity is encouraged.
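The loss equation and the descending trial sequence above can be
sketched as follows.  The `run_trial(rate)` callable is a
hypothetical stand-in for the tester's API, returning the input and
output frame counts of one trial; it is not part of any specific
product.

```python
def frame_loss_sweep(run_trial, max_rate_fps, step_pct=10):
    """Frame loss rate trials per the procedure above.

    Starts at 100% of the maximum rate and steps down by `step_pct`
    until two successive trials show zero loss.
    run_trial(rate) -> (input_count, output_count) for one trial.
    Returns (percent_of_max_rate, loss_percent) points for the graph.
    """
    results = []
    zero_loss_streak = 0
    pct = 100
    while pct > 0 and zero_loss_streak < 2:
        input_count, output_count = run_trial(max_rate_fps * pct // 100)
        # The frame loss rate equation from the procedure:
        loss = ((input_count - output_count) * 100) / input_count
        results.append((pct, loss))
        zero_loss_streak = zero_loss_streak + 1 if loss == 0 else 0
        pct -= step_pct
    return results
```

The returned points map directly onto the X (input rate as a percent
of maximum) and Y (percent loss) axes of the reporting graph below.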
Reporting Format: The results of the frame loss rate test SHOULD be
plotted as a graph.  If this is done then the X axis MUST be the
input frame rate as a percent of the theoretical rate for the
the bottom of the Y axis MUST be 0 percent; the right end of the X
axis and the top of the Y axis MUST be 100 percent.  Multiple
lines on the graph MAY be used to report the frame loss rate for
different frame sizes, protocols, and types of data streams.
11.2.  IPsec Frame Loss

Objective: To measure the frame loss rate of a device when using
IPsec to protect the data flow.
Topology: When an IPsec aware tester is available the test MUST be
executed using a Device Under Test Topology as depicted in
Figure 1.  If no IPsec aware tester is available the test MUST be
conducted using a System Under Test Topology as depicted in
Figure 2.  In this scenario, it is common practice to use an
asymmetric topology, where a less powerful (lower throughput) DUT
is used in conjunction with a much more powerful IPsec device.
This topology variant can in many cases produce more accurate
results than the symmetric variant depicted in the figure, since
all bottlenecks are expected to be on the less performant device.
Procedure: Ensure that the DUT/SUT is in active tunnel mode.  Send a
specific number of cleartext frames that match the IPsec SA
selector(s) to be tested at a specific rate through the DUT/SUT.
DUTa will encrypt the traffic and forward to DUTb, which will in
turn decrypt the traffic and forward to the testing device.  The
testing device counts the frames that are transmitted by DUTb.
The frame loss rate at each point is calculated using the
following equation:

( ( input_count - output_count ) * 100 ) / input_count
The first trial SHOULD be run for the frame rate that corresponds
to 100% of the maximum rate for the nominal device throughput,
which is the throughput that is actually supported on an interface
for a specific packet size and may not be the theoretical maximum.
Repeat the procedure for the rate that corresponds to 90% of the
maximum rate used and then for 80% of this rate.  This sequence
SHOULD be continued (at reduced 10% intervals) until there are
two successive trials in which no frames are lost.  The maximum
granularity of the trials MUST be 10% of the maximum rate; a finer
granularity is encouraged.
Reporting Format: The reporting format SHALL be the same as listed
in Section 11.1 with the additional requirement that the Security
Context Parameters, as defined in Section 7.6, utilized for this
IPsec SA selector(s) at a specific rate to the DUT.  The DUT will
receive the cleartext frames, perform IPsec operations and then
send the IPsec protected frame to the tester.  The testing device
counts the encrypted frames that are transmitted by the DUT.  The
frame loss rate at each point is calculated using the following
equation:

( ( input_count - output_count ) * 100 ) / input_count
The first trial SHOULD be run for the frame rate that corresponds
to 100% of the maximum rate for the nominal device throughput,
which is the throughput that is actually supported on an interface
for a specific packet size and may not be the theoretical maximum.
Repeat the procedure for the rate that corresponds to 90% of the
maximum rate used and then for 80% of this rate.  This sequence
SHOULD be continued (at reduced 10% intervals) until there are
two successive trials in which no frames are lost.  The maximum
granularity of the trials MUST be 10% of the maximum rate; a finer
granularity is encouraged.
Reporting Format: The reporting format SHALL be the same as listed
in Section 11.1 with the additional requirement that the Security
Context Parameters, as defined in Section 7.6, utilized for this
match the IPsec SA selector(s) at a specific rate to the DUT.  The
DUT will receive the IPsec protected frames, perform IPsec
operations and then send the cleartext frames to the tester.  The
testing device counts the cleartext frames that are transmitted by
the DUT.  The frame loss rate at each point is calculated using
the following equation:

( ( input_count - output_count ) * 100 ) / input_count
The first trial SHOULD be run for the frame rate that corresponds
to 100% of the maximum rate for the nominal device throughput,
which is the throughput that is actually supported on an interface
for a specific packet size and may not be the theoretical maximum.
Repeat the procedure for the rate that corresponds to 90% of the
maximum rate used and then for 80% of this rate. This sequence
SHOULD be continued (at reducing 10% intervals) until there are
two successive trials in which no frames are lost. The maximum
granularity of the trials MUST be 10% of the maximum rate; a finer
granularity is encouraged.
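The frame loss equation and the descending-rate trial sequence above can be sketched as follows. This is an illustrative sketch only; the function names and the simulated loss callback are ours, not part of the methodology.

```python
def frame_loss_rate(input_count, output_count):
    """Frame loss rate in percent, per the equation above."""
    return ((input_count - output_count) * 100) / input_count

def run_trials(max_rate, measure_loss_pct, step_pct=10):
    """Offer 100%, 90%, 80%, ... of max_rate; stop after two
    successive trials with zero frame loss."""
    results, zero_streak = [], 0
    for pct in range(100, 0, -step_pct):
        rate = max_rate * pct / 100
        loss = measure_loss_pct(rate)
        results.append((rate, loss))
        zero_streak = zero_streak + 1 if loss == 0 else 0
        if zero_streak == 2:
            break
    return results
```

For example, against a hypothetical DUT that loses frames above 700 fps, the trials would run at 1000, 900, 800, 700 and 600 fps and then stop, the last two trials being loss-free.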
Reporting format: The reporting format SHALL be the same as listed
in Section 11.1 with the additional requirement that the Security
Context Parameters, as defined in Section 7.6, utilized for this
cleartext frames at a particular frame size to the Responder at
the determined throughput rate using frames with selectors
matching the first IKE Phase 1 policy. As soon as the testing
device receives its first frame from the Responder, it knows that
the IPsec Tunnel is established and starts sending the next stream
of cleartext frames using the same frame size and throughput rate
but this time using selectors matching the second IKE Phase 1
policy. This process is repeated until all configured IPsec
Tunnels have been established.
Some devices may support policy configurations where you do not
need a one-to-one correspondence between an IKE Phase 1 policy and
a specific IKE SA. In this case, the number of IKE Phase 1
policies configured should be sufficient so that the transmitted
(i.e. offered) test traffic will create 'n' IKE SAs.
The IPsec Tunnel Setup Rate is measured in Tunnels Per Second
(TPS) and is determined by the following formula:
Tunnel Setup Rate = n / [Duration of Test - (n *
frame_transmit_time)] TPS
The IKE SA lifetime and the IPsec SA lifetime MUST be configured
to exceed the duration of the test time. It is RECOMMENDED that
n=100 IPsec Tunnels are tested at a minimum to get a large enough
sample size to depict some real-world behavior.
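As a sketch, the Tunnel Setup Rate formula can be computed as below; the helper name is ours and it assumes the test duration and per-frame transmit time are measured in seconds.

```python
def tunnel_setup_rate(n, test_duration_s, frame_transmit_time_s):
    """IPsec Tunnel Setup Rate in Tunnels Per Second (TPS):
    n / [Duration of Test - (n * frame_transmit_time)]."""
    effective = test_duration_s - n * frame_transmit_time_s
    if effective <= 0:
        raise ValueError("test duration too short for n tunnels")
    return n / effective
```

For example, n=100 tunnels over a 20 s test with 0.1 s per frame gives 100 / (20 - 10) = 10 TPS.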
Reporting Format: The Tunnel Setup Rate results SHOULD be reported
in the format of a table with a row for each of the tested frame
sizes. There SHOULD be columns for:
The throughput rate at which the test was run for the specified
12.2. IKE Phase 1 Setup Rate
Objective: Determine the rate of IKE SA's that can be established.
Topology: The test MUST be conducted using a Device Under Test
Topology as depicted in Figure 1.
Procedure: Configure the Responder with n IKE Phase 1 and
corresponding IKE Phase 2 policies. Ensure that no SA's are
established and that the Responder has Configured Tunnels for all
n policies. Send a stream of cleartext frames at a particular
frame size through the Responder at the determined throughput rate
using frames with selectors matching the first IKE Phase 1 policy.
As soon as the Phase 1 SA is established, the testing device
starts sending the next stream of cleartext frames using the same
frame size and throughput rate but this time using selectors
matching the second IKE Phase 1 policy. This process is repeated
until all configured IKE SA's have been established.
Some devices may support policy configurations where you do not
need a one-to-one correspondence between an IKE Phase 1 policy and
a specific IKE SA. In this case, the number of IKE Phase 1
policies configured should be sufficient so that the transmitted
(i.e. offered) test traffic will create 'n' IKE SAs.
The IKE SA Setup Rate is determined by the following formula:
IKE SA Setup Rate = n / [Duration of Test - (n *
frame_transmit_time)] IKE SAs per second
The IKE SA lifetime and the IPsec SA lifetime MUST be configured
to exceed the duration of the test time. It is RECOMMENDED that
n=100 IKE SA's are tested at a minimum to get a large enough
sample size to depict some real-world behavior.
Reporting Format: The IKE Phase 1 Setup Rate results SHOULD be
reported in the format of a table with a row for each of the
tested frame sizes. There SHOULD be columns for the frame size,
the rate at which the test was run for that frame size, for the
decryption. This is not a valid reason to place a Tunnel back in
'pending' state.
The tester will wait until all Tunnels are marked as 'recovered'.
Then it will find the SA with the largest gap in sequence number.
Given that the framesize is fixed and the transmit time for that
framesize can easily be calculated for the initiator links, a
simple multiplication of the framesize time * largest packetloss
gap will yield the Tunnel Failover Time.
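The failover-time calculation can be sketched as follows; the helper name and the interpretation of the gap as the count of missing sequence numbers are our assumptions.

```python
def tunnel_failover_time(received_seqs, frame_time):
    """Tunnel Failover Time = framesize transmit time * largest
    packet-loss gap (missing sequence numbers) on the worst SA.
    Assumes at least two frames were received."""
    seqs = sorted(received_seqs)
    # Count of sequence numbers missing between each adjacent pair.
    largest_gap = max(b - a - 1 for a, b in zip(seqs, seqs[1:]))
    return largest_gap * frame_time
```

For instance, receiving sequence numbers 1, 2, 3, 7, 8 on a link with a 1 ms per-frame time implies three lost frames (4, 5, 6), i.e. a 3 ms failover time.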
This test MUST be repeated for the single tunnel, maximum
throughput failover case. It is RECOMMENDED that the test is
repeated for various numbers of Active Tunnels as well as for
different framesizes and framerates.
Reporting Format: The results shall be represented in a tabular
format, where the first column will list the number of Active
Tunnels, the second column the Framesize, the third column the
Framerate and the fourth column the Tunnel Failover Time in
milliseconds.
15. DoS Attack Resiliency
15.1. Phase 1 DoS Resiliency Rate
Objective: Determine how many invalid IKE phase 1 sessions can be
directed at a DUT before the Responder ignores or rejects valid
IKE SA attempts.
Topology: The test MUST be conducted using a Device Under Test
Topology as depicted in Figure 1.
Procedure: Configure the Responder with n IKE Phase 1 and
corresponding IKE Phase 2 policies, where n is equal to the IPsec
Tunnel Capacity. Ensure that no SA's are established and that the
Responder has Configured Tunnels for all n policies. Start with
95% of the offered test traffic containing an IKE Phase 1 policy
mismatch (either a mismatched pre-shared-key or an invalid
certificate).
Send a burst of cleartext frames at a particular frame size
through the Responder at the determined throughput rate using
frames with selectors matching all n policies. Once the test
completes, check whether all 5% of the correct IKE Phase 1 SAs
have been established. If not, keep repeating the test by
decrementing the number of mismatched IKE Phase 1 policies
configured by 5% until all correct IKE Phase 1 SAs have been
established. Between each retest, ensure that the DUT is reset
and cleared of all previous state information.
The IKE SA lifetime and the IPsec SA lifetime MUST be configured
to exceed the duration of the test time. It is RECOMMENDED that
the test duration is 2 x (n / IKE SA Setup Rate) to ensure that
there is enough time to establish the valid IKE Phase 1 SAs.
Some devices may support policy configurations where you do not
need a one-to-one correspondence between an IKE Phase 1 policy and
a specific IKE SA. In this case, the number of IKE Phase 1
policies configured should be sufficient so that the transmitted
(i.e. offered) test traffic will create 'n' IKE SAs.
Reporting Format: The result shall be represented as the highest
percentage of invalid IKE Phase 1 messages that still allowed all
the valid attempts to be completed. The Security Context
Parameters defined in Section 7.6 and utilized for this test MUST
be included in any statement of performance.
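The decrementing search in the procedure above could be driven as in this sketch; `all_valid_sas_established` is a hypothetical callback standing in for one full test run, including the required DUT reset between retests.

```python
def phase1_dos_resiliency_pct(all_valid_sas_established,
                              start_pct=95, step_pct=5):
    """Walk the mismatched-traffic percentage down from start_pct in
    step_pct decrements; return the highest percentage at which all
    valid IKE Phase 1 SAs still establish."""
    for mismatch_pct in range(start_pct, -1, -step_pct):
        # One full test run; the callback is expected to reset and
        # clear the DUT of previous state before offering traffic.
        if all_valid_sas_established(mismatch_pct):
            return mismatch_pct
    return None  # valid SAs failed even with no mismatched traffic
```

Against a hypothetical DUT that copes with up to 40% mismatched traffic, the search would report 40.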
15.2. Phase 2 Hash Mismatch DoS Resiliency Rate
Objective: Determine the rate of Hash Mismatched packets at which a
valid IPsec stream starts dropping frames.
Topology: The test MUST be conducted using a Device Under Test
Topology as depicted in Figure 1.
Procedure: A stream of IPsec traffic is offered to a DUT for
decryption. This stream consists of two microflows: one valid
microflow and one that contains altered IPsec packets with a Hash
Mismatch. The aggregate rate of both microflows MUST be equal to
the IPsec Throughput and should therefore be able to pass the DUT.
A binary search will be applied to determine the ratio between the
two microflows that causes packetloss on the valid microflow of
traffic.
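The binary search on the microflow ratio might look like the sketch below; `valid_flow_drops` is a hypothetical per-trial callback and the stopping tolerance is our choice, not specified by the methodology.

```python
def find_mismatch_threshold_pps(total_pps, valid_flow_drops,
                                tolerance=0.01):
    """Binary-search the fraction of the aggregate rate made up of
    hash-mismatched packets; return, in PPS, the (bracketed) smallest
    invalid-traffic rate that causes loss on the valid microflow."""
    lo, hi = 0.0, 1.0  # invalid-traffic fraction of total_pps
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if valid_flow_drops(mid * total_pps):
            hi = mid  # loss observed: threshold is at or below mid
        else:
            lo = mid  # no loss: threshold is above mid
    return hi * total_pps
```

With a hypothetical DUT whose valid flow starts dropping once invalid traffic reaches 300 PPS out of a 1000 PPS aggregate, the search converges to just above 300 PPS.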
The test MUST be conducted with a single Active Tunnel. It MAY be
repeated at various Tunnel scalability data points (e.g. 90%).
Reporting Format: The results shall be listed as PPS (of invalid
traffic). The Security Context Parameters defined in Section 7.6
and utilized for this test MUST be included in any statement of
performance. The aggregate rate of both microflows which act as
the offered testing load MUST also be reported.
15.3. Phase 2 Anti Replay Attack DoS Resiliency Rate
Objective: Determine the rate of replayed packets at which a valid
IPsec stream starts dropping frames.
Topology: The test MUST be conducted using a Device Under Test
Topology as depicted in Figure 1.
Procedure: A stream of IPsec traffic is offered to a DUT for
decryption. This stream consists of two microflows: one valid
microflow and one that contains replayed packets of the valid
microflow. The aggregate rate of both microflows MUST be equal to
Sizes.
Reporting Format: PPS (of replayed traffic). The Security Context
Parameters defined in Section 7.6 and utilized for this test MUST
be included in any statement of performance.
16. Acknowledgements
The authors would like to acknowledge the following individuals for
their help and participation in the compilation and editing of this
document: Michele Bustos, Paul Hoffman, Benno Overeinder, Scott
Poretsky and Yaron Sheffer.
17. References
17.1. Normative References
[RFC1242] Bradner, S., "Benchmarking terminology for network
interconnection devices", RFC 1242, July 1991.
[RFC1981] McCann, J., Deering, S., and J. Mogul, "Path MTU Discovery
for IP version 6", RFC 1981, August 1996.
Traina, "Generic Routing Encapsulation (GRE)", RFC 2784,
March 2000.
[RFC4109] Hoffman, P., "Algorithms for Internet Key Exchange version
1 (IKEv1)", RFC 4109, May 2005.
[RFC4305] Eastlake, D., "Cryptographic Algorithm Implementation
Requirements for Encapsulating Security Payload (ESP) and
Authentication Header (AH)", RFC 4305, December 2005.
[RFC4306] Kaufman, C., "Internet Key Exchange (IKEv2) Protocol",
RFC 4306, December 2005.
[RFC5180] Popoviciu, C., Hamza, A., Van de Velde, G., and D.
Dugatkin, "IPv6 Benchmarking Methodology for Network
Interconnect Devices", RFC 5180, May 2008.
[I-D.ietf-ipsec-properties]
Krywaniuk, A., "Security Properties of the IPsec Protocol
Suite", draft-ietf-ipsec-properties-02 (work in progress),
July 2002.
17.2. Informative References
[FIPS.186-1.1998]
National Institute of Standards and Technology, "Digital
Signature Standard", FIPS PUB 186-1, December 1998,
<http://csrc.nist.gov/fips/fips1861.pdf>.
Authors' Addresses
Merike Kaeo
Email: kaeo@merike.com
Tim Van Herck
Cisco Systems
170 West Tasman Drive
San Jose, CA 95134-1706
USA
Phone: +1(408)853-2284
Email: herckt@cisco.com