Internet Engineering Task Force (IETF)                         A. Morton
Request for Comments: 9004                                     AT&T Labs
Updates: 2544                                                   May 2021
Category: Informational
ISSN: 2070-1721

        Updates for the Back-to-Back Frame Benchmark in RFC 2544
Abstract

Fundamental benchmarking methodologies for network interconnect
devices of interest to the IETF are defined in RFC 2544.  This memo
updates the procedures of the test to measure the Back-to-Back Frames
benchmark of RFC 2544, based on further experience.

This memo updates Section 26.4 of RFC 2544.
Status of This Memo

This document is not an Internet Standards Track specification; it is
published for informational purposes.

This document is a product of the Internet Engineering Task Force
(IETF).  It represents the consensus of the IETF community.  It has
received public review and has been approved for publication by the
Internet Engineering Steering Group (IESG).  Not all documents
approved by the IESG are candidates for any level of Internet
Standard; see Section 2 of RFC 7841.

Information about the current status of this document, any errata,
and how to provide feedback on it may be obtained at
https://www.rfc-editor.org/info/rfc9004.
Copyright Notice

Copyright (c) 2021 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

1.  Introduction
2.  Requirements Language
3.  Scope and Goals
4.  Motivation
5.  Prerequisites
6.  Back-to-Back Frames
  6.1.  Preparing the List of Frame Sizes
  6.2.  Test for a Single Frame Size
  6.3.  Test Repetition and Benchmark
  6.4.  Benchmark Calculations
7.  Reporting
8.  Security Considerations
9.  IANA Considerations
10. References
  10.1.  Normative References
  10.2.  Informative References
Acknowledgments
Author's Address
1.  Introduction

The IETF's fundamental benchmarking methodologies are defined in
[RFC2544], supported by the terms and definitions in [RFC1242].
[RFC2544] actually obsoletes an earlier specification, [RFC1944].
Over time, the benchmarking community has updated [RFC2544] several
times, including the Device Reset benchmark [RFC6201] and the
important Applicability Statement [RFC6815] concerning use outside
the Isolated Test Environment (ITE) required for accurate
benchmarking.  Other specifications implicitly update [RFC2544],
such as the IPv6 benchmarking methodologies in [RFC5180].

Recent testing experience with the Back-to-Back Frame test and
benchmark in Section 26.4 of [RFC2544] indicates that an update is
warranted [OPNFV-2017] [VSPERF-b2b].  In particular, analysis of the
results indicates that buffer size matters when compensating for
interruptions of software-packet processing, and this finding
increases the importance of the Back-to-Back Frame characterization
described here.  This memo provides additional rationale and the
updated method.

[RFC2544] provides its own requirements language consistent with
[RFC2119], since [RFC1944] (which it obsoletes) predates [RFC2119].
All three memos share common authorship.  Today, [RFC8174] clarifies
the usage of requirements language, so the requirements presented in
this memo are expressed in accordance with [RFC8174].  They are
intended for those performing/reporting laboratory tests to improve
clarity and repeatability, and for those designing devices that
facilitate these tests.
2.  Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in
BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
capitals, as shown here.
3.  Scope and Goals

The scope of this memo is to define an updated method to
unambiguously perform tests, measure the benchmark(s), and report
the results for Back-to-Back Frames (as described in Section 26.4 of
[RFC2544]).

The goal is to provide more efficient test procedures where possible
and expand reporting with additional interpretation of the results.
The tests described in this memo address the cases in which the
maximum frame rate of a single ingress port cannot be transferred to
an egress port without loss (for some frame sizes of interest).

Benchmarks as described in [RFC2544] rely on test conditions with
constant frame sizes, with the goal of understanding what
network-device capability has been tested.  Tests with the smallest
size stress the header-processing capacity, and tests with the
largest size stress the overall bit-processing capacity.  Tests with
sizes in between may determine the transition between these two
capacities.  However, conditions simultaneously sending a mixture of
Internet (IMIX) frame sizes, such as those described in [RFC6985],
MUST NOT be used in Back-to-Back Frame testing.

Section 3 of [RFC8239] describes buffer-size testing for physical
networking devices in a data center.  Those methods measure buffer
latency directly with traffic on multiple ingress ports that
overload an egress port on the Device Under Test (DUT) and are not
subject to the revised calculations presented in this memo.
Likewise, the methods of [RFC8239] SHOULD be used for test cases
where the egress-port buffer is the known point of overload.
4.  Motivation

Section 3.1 of [RFC1242] describes the rationale for the
Back-to-Back Frames benchmark.  To summarize, there are several
reasons that devices on a network produce bursts of frames at the
minimum allowed spacing; and it is, therefore, worthwhile to
understand the DUT limit on the length of such bursts in practice.
The same document also states:

|  Tests of this parameter are intended to determine the extent of
|  data buffering in the device.
Since this test was defined, there have been occasional discussions
of the stability and repeatability of the results, both over time
and across labs.  Fortunately, the Open Platform for Network
Function Virtualization (OPNFV) project on Virtual Switch
Performance (VSPERF) Continuous Integration (CI) [VSPERF-CI] testing
routinely repeats Back-to-Back Frame tests to verify that test
functionality has been maintained through development of the
test-control programs.  These tests were used as a basis to evaluate
stability and repeatability, even across lab setups when the test
platform was migrated to new DUT hardware at the end of 2016.
When the VSPERF CI results were examined [VSPERF-b2b], several
aspects of the results were considered notable:

1.  The Back-to-Back Frame benchmark was very consistent for some
    fixed frame sizes, and somewhat variable for other frame sizes.

2.  The number of Back-to-Back Frames with zero loss reported for
    large frame sizes was unexpectedly long (translating to 30
    seconds of buffer time), and no explanation or measurement limit
    condition was indicated.  It was important that the buffering
    time calculations were part of the referenced testing and
    analysis [VSPERF-b2b], because the calculated buffer time of 30
    seconds for some frame sizes was clearly wrong or highly
    suspect.  On the other hand, a result expressed only as a large
    number of Back-to-Back Frames does not permit such an easy
    comparison with reality.
3.  Calculation of the extent of buffer time in the DUT helped to
    explain the results observed with all frame sizes.  For example,
    tests with some frame sizes cannot exceed the
    frame-header-processing rate of the DUT; thus, no buffering
    occurs.  Therefore, the results depended on the test equipment
    and not the DUT.

4.  It was found that a better estimate of the DUT buffer time could
    be calculated using measurements of both the longest burst in
    frames without loss and results from the Throughput tests
    conducted according to Section 26.1 of [RFC2544].  It is
    apparent that the DUT's frame-processing rate empties the buffer
    during a trial and tends to increase the "implied" buffer-size
    estimate (measured according to Section 26.4 of [RFC2544],
    because many frames have departed the buffer when the burst of
    frames ends).  A calculation using the Throughput measurement
    can reveal a "corrected" buffer-size estimate.
Further, if the Throughput tests of Section 26.1 of [RFC2544] are
conducted as a prerequisite, the number of frame sizes required for
Back-to-Back Frame benchmarking can be reduced to one or more of the
small frame sizes, or the results for large frame sizes can be noted
as invalid in the results if tested anyway.  These are the larger
frame sizes for which the Back-to-Back Frame rate cannot exceed the
frame-header-processing rate of the DUT and little or no buffering
occurs.
The material below provides the details of the calculation to
estimate the actual buffer storage available in the DUT, using
results from the Throughput tests for each frame size and the Max
Theoretical Frame Rate for the DUT links (which constrain the
minimum frame spacing).
In reality, there are many buffers and packet-header-processing
steps in a typical DUT.  The simplified model used in these
calculations for the DUT includes a packet-header-processing
function with limited rate of operation, as shown in Figure 1.

              |------------ DUT --------|
 Generator -> Ingress -> Buffer -> HeaderProc -> Egress -> Receiver

            Figure 1: Simplified Model for DUT Testing

So, in the Back-to-Back Frame testing:
1.  The ingress burst arrives at the Max Theoretical Frame Rate,
    and initially the frames are buffered.

2.  The packet-header-processing function (HeaderProc) operates at
    the "Measured Throughput" (Section 26.1 of [RFC2544]), removing
    frames from the buffer (this is the best approximation we have;
    another acceptable approximation is the received frame rate
    during Back-to-Back Frame testing, if the Measured Throughput is
    not available).
3.  Frames that have been processed are clearly not in the buffer,
    so the Corrected DUT Buffer Time equation (Section 6.4)
    estimates and removes the frames that the DUT forwarded on
    egress during the burst.  We define buffer time as the number of
    frames occupying the buffer divided by the Max Theoretical Frame
    Rate (on ingress) for the frame size under test.

4.  A helpful concept is the buffer-filling rate, which is the
    difference between the Max Theoretical Frame Rate (ingress) and
    the Measured Throughput (HeaderProc on egress).  If the actual
    buffer size in frames is known, the time to fill the buffer
    during a measurement can be calculated using the filling rate,
    as a check on measurements.  However, the buffer in the model
    represents many buffers of different sizes in the DUT data path.
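As an illustration only, the buffer-time reasoning in steps 1-4 above can be written out as a short calculation.  The function and variable names below are placeholders, the model's simplifications apply, and Section 6.4 gives the benchmark calculations themselves:

```python
def corrected_dut_buffer_time(longest_burst_frames,
                              max_theoretical_frame_rate,
                              measured_throughput):
    """Estimate DUT buffer time (seconds) per the simplified model.

    While the burst lasts, HeaderProc drains frames at the Measured
    Throughput, so only the remaining frames occupy the buffer when
    the burst ends.
    """
    burst_seconds = longest_burst_frames / max_theoretical_frame_rate
    frames_forwarded = measured_throughput * burst_seconds
    frames_in_buffer = longest_burst_frames - frames_forwarded
    # Buffer time: frames occupying the buffer divided by the
    # Max Theoretical Frame Rate (on ingress), as defined in step 3.
    return frames_in_buffer / max_theoretical_frame_rate
```

With a hypothetical loss-free burst of 1,000,000 frames on a port whose Max Theoretical Frame Rate is 14.88 million frames/s and whose Measured Throughput is 12.0 million frames/s, the corrected estimate is roughly 13 ms, versus an "implied" estimate of about 67 ms.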
Knowledge of approximate buffer storage size (in time or bytes) may
be useful in estimating whether frame losses will occur if DUT
forwarding is temporarily suspended in a production deployment due
to an unexpected interruption of frame processing (an interruption
of duration greater than the estimated buffer would certainly cause
lost frames).  In Section 6, the calculations for the correct buffer
time use the combination of offered load at the Max Theoretical
Frame Rate and header-processing speed at 100% of the Measured
Throughput.  Other combinations are possible, such as changing the
percent of Measured Throughput to account for other processes
reducing the header-processing rate.
The presentation of OPNFV VSPERF evaluation and development of
enhanced search algorithms [VSPERF-BSLV] was given and discussed at
IETF 102.  The enhancements are intended to compensate for transient
processor interrupts that may cause loss at near-Throughput levels
of offered load.  Subsequent analysis of the results indicates that
buffers within the DUT can compensate for some interrupts, and this
finding increases the importance of the Back-to-Back Frame
characterization described here.
5.  Prerequisites

The test setup MUST be consistent with Figure 1 of [RFC2544], or
Figure 2 of that document when the tester's sender and receiver are
different devices.  Other mandatory testing aspects described in
[RFC2544] MUST be included, unless explicitly modified in the next
section.
The ingress and egress link speeds and link-layer protocols MUST be
specified and used to compute the Max Theoretical Frame Rate when
respecting the minimum interframe gap.
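For example, on standard Ethernet the Max Theoretical Frame Rate follows from the link speed and a fixed 20 bytes of per-frame overhead on the wire (7-byte preamble, 1-byte start-of-frame delimiter, and the 12-byte minimum interframe gap).  A non-normative sketch of that computation:

```python
def max_theoretical_frame_rate(link_speed_bps, frame_size_bytes):
    """Max frames per second on Ethernet at minimum interframe gap.

    Per-frame overhead on the wire: 7-byte preamble + 1-byte
    start-of-frame delimiter + 12-byte minimum interframe gap
    = 20 bytes.
    """
    bits_per_frame = (frame_size_bytes + 20) * 8
    return link_speed_bps / bits_per_frame
```

For 64-byte frames on a 10 Gb/s link, this gives approximately 14,880,952 frames per second; for 1518-byte frames on a 1 Gb/s link, approximately 81,274 frames per second.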
The test results for the Throughput benchmark conducted according to
Section 26.1 of [RFC2544] for all frame sizes RECOMMENDED by
[RFC2544] MUST be available to reduce the tested-frame-size list or
to note invalid results for individual frame sizes (because the
burst length may be essentially infinite for large frame sizes).
Note that:

*  the Throughput and the Back-to-Back Frame
   measurement-configuration traffic characteristics
   (unidirectional or bidirectional, and number of flows generated)
   MUST match.

*  the Throughput measurement MUST be taken under zero-loss
   conditions, according to Section 26.1 of [RFC2544].
The Back-to-Back Benchmark described in Section 3.1 of [RFC1242]
MUST be measured directly by the tester, where buffer size is
inferred from Back-to-Back Frame bursts and associated packet-loss
measurements.  Therefore, sources of frame loss that are unrelated
to consistent evaluation of buffer size SHOULD be identified and
removed or mitigated.  Example sources include:

*  On-path active components that are external to the DUT

*  Operating-system environment interrupting DUT operation

*  Shared-resource contention between the DUT and other off-path
   component(s) impacting the DUT's behavior, sometimes called the
   "noisy neighbor" problem with virtualized network functions.
Mitigations applicable to some of the sources above are discussed in
Section 6.2, with the other measurement requirements described below
in Section 6.
6.  Back-to-Back Frames

Objective: To characterize the ability of a DUT to process
Back-to-Back Frames as defined in [RFC1242].

The procedure follows.
6.1.  Preparing the List of Frame Sizes

From the list of RECOMMENDED frame sizes (Section 9 of [RFC2544]),
select the subset of frame sizes whose Measured Throughput (during
prerequisite testing) was less than the Max Theoretical Frame Rate
of the DUT/test setup.  These are the only frame sizes where it is
possible to produce a burst of frames that cause the DUT buffers to
fill and eventually overflow, producing one or more discarded
frames.
6.2.  Test for a Single Frame Size

Each trial in the test requires the tester to send a burst of frames
(after idle time) with the minimum interframe gap and to count the
corresponding frames forwarded by the DUT.
The duration of the trial includes three REQUIRED components:

1.  The time to send the burst of frames (at the back-to-back
    rate), determined by the search algorithm.

2.  The time to receive the transferred burst of frames (at the
    [RFC2544] Throughput rate), possibly truncated by buffer
    overflow, and certainly including the latency of the DUT.

3.  At least 2 seconds not overlapping the time to receive the
    burst (Component 2, above), to ensure that DUT buffers have
    depleted.  Longer times MUST be used when conditions warrant,
    such as when buffer times >2 seconds are measured or when burst
    sending times are >2 seconds, but care is needed, since this
    time component directly increases trial duration, and many
    trials and tests comprise a complete benchmarking study.
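Treated as additive, the three components give a lower bound on the duration of a single trial.  A non-normative sketch with placeholder parameter names:

```python
def trial_duration_seconds(burst_frames, max_theoretical_frame_rate,
                           measured_throughput, dut_latency=0.0,
                           measured_buffer_time=0.0):
    """Sum the three REQUIRED components of a single trial."""
    send_time = burst_frames / max_theoretical_frame_rate            # 1.
    receive_time = burst_frames / measured_throughput + dut_latency  # 2.
    # 3. At least 2 seconds for buffer depletion; longer when the
    # measured buffer time or the burst sending time exceeds 2 s.
    depletion_wait = max(2.0, send_time, measured_buffer_time)
    return send_time + receive_time + depletion_wait
```

Because the depletion wait adds at least 2 seconds to every trial, total study time grows quickly with the number of trials in a complete benchmarking study.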
The upper search limit for the time to send each burst MUST be
configurable to values as high as 30 seconds (buffer time results
reported at or near the configured upper limit are likely invalid,
and the test MUST be repeated with a higher search limit).
If all frames have been received, the tester increases the length of
the burst according to the search algorithm and performs another
trial.

If the received frame count is less than the number of frames in the
burst, then the limit of DUT processing and buffering may have been
exceeded, and the burst length for the next trial is determined by
the search algorithm (the burst length is typically reduced, but see
below).
Classic search algorithms have been adapted for use in benchmarking,
where the search requires discovery of a pair of outcomes, one with
no loss and another with loss, at load conditions within the
acceptable tolerance or accuracy.  Conditions encountered when
benchmarking the infrastructure for network function virtualization
require algorithm enhancement.  Fortunately, the adaptation of Binary
Search, and an enhanced Binary Search with Loss Verification, have
been specified in Clause 12.3 of [TST009].  These algorithms can
easily be used for Back-to-Back Frame benchmarking by replacing the
offered load level with burst length in frames.  [TST009], Annex B
describes the theory behind the enhanced Binary Search with Loss
Verification algorithm.
There are also promising works in progress that may prove useful in
Back-to-Back Frame benchmarking.  [BMWG-MLRSEARCH] and
[BMWG-PLRSEARCH] are two such examples.
Either the [TST009] Binary Search or Binary Search with Loss
Verification algorithms MUST be used, and input parameters to the
algorithm(s) MUST be reported.

The tester usually imposes a (configurable) minimum step size for
burst length, and the step size MUST be reported with the results (as
this influences the accuracy and variation of test results).
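As an illustration, a plain binary search over burst length, with the
minimum step size as the stopping condition and a simplified
loss-verification retry, might look like the Python sketch below. The
names and the retry policy are assumptions for illustration; the
normative algorithms are those of [TST009], Clause 12.3:

```python
def b2b_burst_search(send_burst, upper_limit, min_step, verify_trials=2):
    """Search for the longest loss-free burst length, in frames.

    send_burst(n) returns the count of frames received when a burst
    of n frames is sent.  A lossy trial is repeated (simplified loss
    verification) before the search range is reduced, to guard
    against transient, non-repeatable loss.
    """
    lo, hi = 0, upper_limit      # lo: longest loss-free burst so far
    while hi - lo > min_step:
        mid = (lo + hi) // 2
        if send_burst(mid) == mid:
            lo = mid             # no loss: try longer bursts
        elif any(send_burst(mid) == mid for _ in range(verify_trials)):
            lo = mid             # loss did not repeat: treat as pass
        else:
            hi = mid             # repeatable loss: try shorter bursts
    return lo
```

The returned burst length is within one minimum step of the true
limit, which is why the step size must be reported alongside the
results.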
The original Section 26.4 of [RFC2544] definition is stated below:

|  The back-to-back value is the number of frames in the longest
|  burst that the DUT will handle without the loss of any frames.
6.3.  Test Repetition and Benchmark
On this topic, Section 26.4 of [RFC2544] requires:

|  The trial length MUST be at least 2 seconds and SHOULD be repeated
|  at least 50 times with the average of the recorded values being
|  reported.
Therefore, the Back-to-Back Frame benchmark is the average of burst-
length values over repeated tests to determine the longest burst of
frames that the DUT can successfully process and buffer without frame
loss.  Each of the repeated tests completes an independent search
process.

In this update, the test MUST be repeated N times (the number of
repetitions is now a variable that must be reported) for each frame
size in the subset list, and each Back-to-Back Frame value MUST be
made available for further processing (below).
6.4.  Benchmark Calculations
For each frame size, calculate the following summary statistics for
longest Back-to-Back Frame values over the N tests:

*  Average (Benchmark)

*  Minimum

*  Maximum

*  Standard Deviation
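These statistics can be computed directly from the N per-test results
with Python's standard library. This is a sketch; the sample standard
deviation is used here, which is an assumption (the memo does not
specify the estimator):

```python
import statistics

def b2b_summary(burst_lengths):
    """Summary statistics over the N repeated tests for one frame
    size; the Average is the reported benchmark value."""
    return {
        "Average": statistics.mean(burst_lengths),
        "Minimum": min(burst_lengths),
        "Maximum": max(burst_lengths),
        "StdDev": statistics.stdev(burst_lengths),  # sample stdev
    }
```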
Further, calculate the Implied DUT Buffer Time and the Corrected DUT
Buffer Time in seconds, as follows:

Implied DUT Buffer Time =
   Average num of Back-to-back Frames / Max Theoretical Frame Rate

The formula above is simply expressing the burst of frames in units
of time.
The next step is to apply a correction factor that accounts for the
DUT's frame forwarding operation during the test (assuming the simple
model of the DUT composed of a buffer and a forwarding function,
described in Section 4).

Corrected DUT Buffer Time =

                /                                           \
  Implied DUT   | Implied DUT    Measured Throughput        |
= Buffer Time - | Buffer Time * -------------------------icit- |
                |               Max Theoretical Frame Rate  |
                \                                           /

where:

1.  The "Measured Throughput" is the [RFC2544] Throughput Benchmark
    for the frame size tested, as augmented by methods including the
    Binary Search with Loss Verification algorithm in [TST009] where
    applicable and MUST be expressed in frames per second in this
    equation.

2.  The "Max Theoretical Frame Rate" is a calculated value for the
    interface speed and link-layer technology used, and it MUST be
    expressed in frames per second in this equation.
The term on the far right in the formula for Corrected DUT Buffer
Time accounts for all the frames in the burst that were transmitted
by the DUT *while the burst of frames was sent in*.  So, these frames
are not in the buffer, and the buffer size is more accurately
estimated by excluding them.  If Measured Throughput is not
available, an acceptable approximation is the received frame rate
(see Forwarding Rate in [RFC2889], measured during Back-to-Back Frame
testing).
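The two buffer-time formulas can be sketched as follows, using a
hypothetical Measured Throughput of 14.5 Mfps for 64-octet frames on
10 Gigabit Ethernet (whose Max Theoretical Frame Rate is
14,880,952 fps); the input numbers are illustrative, not measured
results:

```python
def dut_buffer_times(avg_b2b_frames, measured_throughput_fps,
                     max_theoretical_fps):
    """Implied and Corrected DUT Buffer Time, in seconds."""
    # Express the average burst length in units of time:
    implied = avg_b2b_frames / max_theoretical_fps
    # Exclude the frames forwarded while the burst was still arriving:
    corrected = implied - implied * (measured_throughput_fps /
                                     max_theoretical_fps)
    return implied, corrected
```

Note that as Measured Throughput approaches the Max Theoretical Frame
Rate, the correction removes nearly the whole burst: a DUT that
forwards at line rate needs almost no buffer at all.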
7.  Reporting

The Back-to-Back Frame results SHOULD be reported in the format of a
table with a row for each of the tested frame sizes.  There SHOULD be
columns for the frame size and the resultant average frame count for
each type of data stream tested.

The number of tests averaged for the benchmark, N, MUST be reported.
The minimum, maximum, and standard deviation across all complete
tests SHOULD also be reported (they are referred to as
"Min,Max,StdDev" in Table 1).

The Corrected DUT Buffer Time SHOULD also be reported.

If the tester operates using a limited maximum burst length in
frames, then this maximum length SHOULD be reported.
+=============+================+================+================+
| Frame Size, | Ave B2B        | Min,Max,StdDev | Corrected Buff |
| octets      | Length, frames |                | Time, Sec      |
+=============+================+================+================+
| 64          | 26000          | 25500,27000,20 | 0.00004        |
+-------------+----------------+----------------+----------------+

             Table 1: Back-to-Back Frame Results
Static and configuration parameters (reported with Table 1):

*  Number of test repetitions, N

*  Minimum Step Size (during searches), in frames

If the tester has a specific (actual) frame rate of interest (less
than the Throughput rate), it is useful to estimate the buffer time
at that actual frame rate:

Actual Buffer Time =

                              Max Theoretical Frame Rate
= Corrected DUT Buffer Time * --------------------------
                                  Actual Frame Rate

and report this value, properly labeled.
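This last calculation is a simple rescaling, sketched below (the
function name is an assumption, not from this memo): at half the Max
Theoretical Frame Rate, the buffered frames take roughly twice as
long to drain.

```python
def actual_buffer_time(corrected_buffer_time_s, max_theoretical_fps,
                       actual_fps):
    """Scale the Corrected DUT Buffer Time to a slower, actual frame
    rate of interest (actual_fps less than the Throughput rate)."""
    return corrected_buffer_time_s * (max_theoretical_fps / actual_fps)
```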
8.  Security Considerations
Benchmarking activities as described in this memo are limited to
technology characterization using controlled stimuli in a laboratory
environment, with dedicated address space and the other constraints
of [RFC2544].

The benchmarking network topology will be an independent test setup
and MUST NOT be connected to devices that may forward the test
traffic into a production network or misroute traffic to the test
management network.  See [RFC6815].

Further, benchmarking is performed on an "opaque-box" (a.k.a.
"black-box") basis, relying solely on measurements observable
external to the Device Under Test (DUT) or System Under Test (SUT).

The DUT developers are commonly independent from the personnel and
institutions conducting benchmarking studies.  DUT developers might
have incentives to alter the performance of the DUT if the test
conditions can be detected.  Special capabilities SHOULD NOT exist in
the DUT/SUT specifically for benchmarking purposes.  Procedures
described in this document are not designed to detect such activity.
Additional testing outside of the scope of this document would be
needed and has been used successfully in the past to discover such
malpractices.

Any implications for network security arising from the DUT/SUT SHOULD
be identical in the lab and in production networks.
9.  IANA Considerations

This document has no IANA actions.
10.  References

10.1.  Normative References
[RFC1242] Bradner, S., "Benchmarking Terminology for Network
          Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242,
          July 1991, <https://www.rfc-editor.org/info/rfc1242>.
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
          Requirement Levels", BCP 14, RFC 2119,
          DOI 10.17487/RFC2119, March 1997,
          <https://www.rfc-editor.org/info/rfc2119>.

[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
          Network Interconnect Devices", RFC 2544,
          DOI 10.17487/RFC2544, March 1999,
          <https://www.rfc-editor.org/info/rfc2544>.

[RFC6985] Morton, A., "IMIX Genome: Specification of Variable Packet
          Sizes for Additional Testing", RFC 6985,
          DOI 10.17487/RFC6985, July 2013,
          <https://www.rfc-editor.org/info/rfc6985>.
[RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
          2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
          May 2017, <https://www.rfc-editor.org/info/rfc8174>.
[RFC8239] Avramov, L. and J. Rapp, "Data Center Benchmarking
          Methodology", RFC 8239, DOI 10.17487/RFC8239, August 2017,
          <https://www.rfc-editor.org/info/rfc8239>.
[TST009]  ETSI, "Network Functions Virtualisation (NFV) Release 3;
          Testing; Specification of Networking Benchmarks and
          Measurement Methods for NFVI", Rapporteur: A. Morton, ETSI
          GS NFV-TST 009 v3.4.1, December 2020,
          <https://www.etsi.org/deliver/etsi_gs/NFV-
          TST/001_099/009/03.04.01_60/gs_NFV-TST009v030401p.pdf>.
10.2.  Informative References

[BMWG-MLRSEARCH]
          Konstantynowicz, M., Ed. and V. Polák, Ed., "Multiple Loss
          Ratio Search for Packet Throughput (MLRsearch)", Work in
          Progress, Internet-Draft, draft-ietf-bmwg-mlrsearch-00, 9
          February 2021, <https://tools.ietf.org/html/draft-ietf-
          bmwg-mlrsearch-00>.

[BMWG-PLRSEARCH]
          Konstantynowicz, M., Ed. and V. Polák, Ed., "Probabilistic
          Loss Ratio Search for Packet Throughput (PLRsearch)", Work
          in Progress, Internet-Draft, draft-vpolak-bmwg-plrsearch-
          03, 6 March 2020, <https://tools.ietf.org/html/draft-
          vpolak-bmwg-plrsearch-03>.
[OPNFV-2017]
          Cooper, T., Rao, S., and A. Morton, "Dataplane
          Performance, Capacity, and Benchmarking in OPNFV", 15 June
          2017,
          <https://wiki.anuket.io/download/attachments/4404001/
          VSPERF-Dataplane-Perf-Cap-Bench.pdf?version=1&modification
          Date=1621191833500&api=v2>.
[RFC1944] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
          Network Interconnect Devices", RFC 1944,
          DOI 10.17487/RFC1944, May 1996,
          <https://www.rfc-editor.org/info/rfc1944>.
[RFC2889] Mandeville, R. and J. Perser, "Benchmarking Methodology
for LAN Switching Devices", RFC 2889,
DOI 10.17487/RFC2889, August 2000,
<https://www.rfc-editor.org/info/rfc2889>.
[RFC5180] Popoviciu, C., Hamza, A., Van de Velde, G., and D.
          Dugatkin, "IPv6 Benchmarking Methodology for Network
          Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180, May
          2008, <https://www.rfc-editor.org/info/rfc5180>.
[RFC6201] Asati, R., Pignataro, C., Calabria, F., and C. Olvera,
          "Device Reset Characterization", RFC 6201,
          DOI 10.17487/RFC6201, March 2011,
          <https://www.rfc-editor.org/info/rfc6201>.
[RFC6815] Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
          "Applicability Statement for RFC 2544: Use on Production
          Networks Considered Harmful", RFC 6815,
          DOI 10.17487/RFC6815, November 2012,
          <https://www.rfc-editor.org/info/rfc6815>.
[VSPERF-b2b]
          Morton, A., "Back2Back Testing Time Series (from CI)", May
          2021, <https://wiki.anuket.io/display/HOME/
          Traffic+Generator+Testing#TrafficGeneratorTesting-
          AppendixB:Back2BackTestingTimeSeries(fromCI)>.
[VSPERF-BSLV]
          Rao, S. and A. Morton, "Evolution of Repeatability in
          Benchmarking: Fraser Plugfest (Summary for IETF BMWG)",
          July 2018,
          <https://datatracker.ietf.org/meeting/102/materials/
          slides-102-bmwg-evolution-of-repeatability-in-
          benchmarking-fraser-plugfest-summary-for-ietf-bmwg-00>.
[VSPERF-CI]
          Tahhan, M., "OPNFV VSPERF CI", September 2019,
          <https://wiki.anuket.io/display/HOME/VSPERF+CI>.
Acknowledgments
Thanks to Trevor Cooper, Sridhar Rao, and Martin Klozik of the VSPERF
project for many contributions to the early testing [VSPERF-b2b].
Yoshiaki Itou has also investigated the topic and made useful
suggestions.  Maciek Konstantynowicz and Vratko Polák also provided
many comments and suggestions based on extensive integration testing
and resulting search-algorithm proposals -- the most up-to-date
feedback possible. Tim Carlin also provided comments and support for
the document. Warren Kumari's review improved readability in several
key passages. David Black, Martin Duke, and Scott Bradner's comments
improved the clarity and configuration advice on trial duration.
Mališa Vučinić suggested additional text on DUT design cautions in
the Security Considerations section.
Author's Address

Al Morton
AT&T Labs
200 Laurel Avenue South
Middletown, NJ 07748
United States of America

Phone: +1 732 420 1571
Email: acmorton@att.com