IP Performance Working Group                                   M. Mathis
Internet-Draft                                               Google, Inc
Intended status: Experimental                                  A. Morton
Expires: January 4, 2015                                       AT&T Labs
                                                            July 3, 2014

                  Model Based Bulk Performance Metrics
              draft-ietf-ippm-model-based-metrics-03.txt
Abstract
We introduce a new class of model based metrics designed to determine
if an end-to-end Internet path can meet predefined transport
performance targets by applying a suite of IP diagnostic tests to
successive subpaths.  The subpath-at-a-time tests can be robustly
applied to key infrastructure, such as interconnects, to accurately
detect if it will prevent the full end-to-end paths that traverse it
from meeting the specified target performance.

Each IP diagnostic test consists of a precomputed traffic pattern and
statistical criteria for evaluating packet delivery.  The traffic
patterns are precomputed to mimic TCP or another transport protocol
over a long path, but are independent of the actual details of the
subpath under test.  Likewise the success criteria depend on the
target performance for the long path and not the details of the
subpath.  This makes the measurements open loop, which introduces
several important new properties and eliminates most of the
difficulties encountered by traditional bulk transport metrics.
This document does not define diagnostic tests, but provides a
framework for designing suites of diagnostic tests that are tailored
to confirming the target performance.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on January 4, 2015.
Copyright Notice
Copyright (c) 2014 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
skipping to change at page 3, line 10
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  5
   1.1. TODO  . . . . . . . . . . . . . . . . . . . . . . . . . .  7
2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . .  7
3. New requirements relative to RFC 2330 . . . . . . . . . . . . 11
4. Background  . . . . . . . . . . . . . . . . . . . . . . . . . 11
   4.1. TCP properties  . . . . . . . . . . . . . . . . . . . . . 12
   4.2. Diagnostic Approach . . . . . . . . . . . . . . . . . . . 14
5. Common Models and Parameters  . . . . . . . . . . . . . . . . 15
   5.1. Target End-to-end parameters  . . . . . . . . . . . . . . 15
   5.2. Common Model Calculations . . . . . . . . . . . . . . . . 16
   5.3. Parameter Derating  . . . . . . . . . . . . . . . . . . . 17
6. Common testing procedures . . . . . . . . . . . . . . . . . . 17
   6.1. Traffic generating techniques . . . . . . . . . . . . . . 17
      6.1.1. Paced transmission . . . . . . . . . . . . . . . . . 17
      6.1.2. Constant window pseudo CBR . . . . . . . . . . . . . 18
      6.1.3. Scanned window pseudo CBR  . . . . . . . . . . . . . 19
      6.1.4. Concurrent or channelized testing  . . . . . . . . . 19
      6.1.5. Intermittent Testing . . . . . . . . . . . . . . . . 19
      6.1.6. Intermittent Scatter Testing . . . . . . . . . . . . 20
   6.2. Interpreting the Results  . . . . . . . . . . . . . . . . 20
      6.2.1. Test outcomes  . . . . . . . . . . . . . . . . . . . 20
      6.2.2. Statistical criteria for measuring run_length  . . . 22
         6.2.2.1. Alternate criteria for measuring run_length . . 23
      6.2.3. Reordering Tolerance . . . . . . . . . . . . . . . . 25
   6.3. Test Preconditions  . . . . . . . . . . . . . . . . . . . 25
7. Diagnostic Tests  . . . . . . . . . . . . . . . . . . . . . . 26
   7.1. Basic Data Rate and Delivery Statistics Tests . . . . . . 26
      7.1.1. Delivery Statistics at Paced Full Data Rate  . . . . 27
      7.1.2. Delivery Statistics at Full Data Windowed Rate . . . 27
      7.1.3. Background Delivery Statistics Tests . . . . . . . . 27
   7.2. Standing Queue Tests  . . . . . . . . . . . . . . . . . . 28
      7.2.1. Congestion Avoidance . . . . . . . . . . . . . . . . 29
      7.2.2. Bufferbloat  . . . . . . . . . . . . . . . . . . . . 29
      7.2.3. Non excessive loss . . . . . . . . . . . . . . . . . 30
      7.2.4. Duplex Self Interference . . . . . . . . . . . . . . 30
   7.3. Slowstart tests . . . . . . . . . . . . . . . . . . . . . 30
      7.3.1. Full Window slowstart test . . . . . . . . . . . . . 31
      7.3.2. Slowstart AQM test . . . . . . . . . . . . . . . . . 31
   7.4. Sender Rate Burst tests . . . . . . . . . . . . . . . . . 31
   7.5. Combined Tests  . . . . . . . . . . . . . . . . . . . . . 32
      7.5.1. Sustained burst test . . . . . . . . . . . . . . . . 32
      7.5.2. Streaming Media  . . . . . . . . . . . . . . . . . . 33
8. An Example  . . . . . . . . . . . . . . . . . . . . . . . . . 34
9. Validation  . . . . . . . . . . . . . . . . . . . . . . . . . 35
10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 37
11. Informative References . . . . . . . . . . . . . . . . . . . 37
Appendix A. Model Derivations  . . . . . . . . . . . . . . . . . 40
   A.1. Queueless Reno  . . . . . . . . . . . . . . . . . . . . . 40
   A.2. CUBIC . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Appendix B. Complex Queueing . . . . . . . . . . . . . . . . . . 42
Appendix C. Version Control  . . . . . . . . . . . . . . . . . . 43
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 43
1. Introduction
Bulk performance metrics evaluate an Internet path's ability to carry
bulk data.  Model based bulk performance metrics rely on mathematical
TCP models to design a targeted diagnostic suite (TDS) of IP
performance tests which can be applied independently to each subpath
of the full end-to-end path.  These targeted diagnostic suites allow
independent tests of subpaths to accurately detect if any subpath
will prevent the full end-to-end path from delivering bulk data at
skipping to change at page 6, line 9
subpaths of the end-to-end path, the end-to-end statistical bounds
need to be apportioned as a separate bound for each subpath.  Note
that links that are expected to be bottlenecks will contribute more
packet loss and/or delay.  In compensation, other links have to be
constrained to contribute less packet loss and delay.  The criterion
for passing each test of a TDS is an apportioned share of the total
bound determined by the mathematical model from the end-to-end target
performance.
In addition to passing or failing, a test can be deemed to be
inconclusive for a number of reasons including: the precomputed
traffic pattern was not accurately generated; the measurement results
were not statistically significant; and other causes such as failing
to meet some required test preconditions.
This document describes a framework for deriving traffic patterns and
delivery statistics for model based metrics.  It does not fully
specify any measurement techniques.  Important details such as packet
type-p selection, sampling techniques, vantage selection, etc. are
not specified here.  We imagine Fully Specified Targeted Diagnostic
Suites (FSTDS) that define all of these details.  We use TDS to
refer to the subset of such a specification that is in scope for this
document.  A TDS includes the target parameters, documentation of the
models and assumptions used to derive the diagnostic test parameters,
skipping to change at page 6, line 38
It has been difficult to develop Bulk Transport Capacity [RFC3148]
metrics due to some overlooked requirements described in Section 3
and some intrinsic problems with using protocols for measurement,
described in Section 4.
In Section 5 we describe the models and common parameters used to
derive the targeted diagnostic suite.  In Section 6 we describe
common testing procedures.  Each subpath is evaluated using a suite
of far simpler and more predictable diagnostic tests described in
Section 7.  In Section 8 we present an example TDS that might be
representative of HD video, and illustrate how MBM can be used to
address difficult measurement situations, such as confirming that
intercarrier exchanges have sufficient performance and capacity to
deliver HD video between ISPs.
There exists a small risk that a model based metric itself might
yield a false pass result, in the sense that every subpath of an
end-to-end path passes every IP diagnostic test and yet a real
application fails to attain the performance target over the
end-to-end path.  If this happens, then the validation procedure
described in Section 9 needs to be used to prove and potentially
revise the models.
Future documents will define model based metrics for other traffic
classes and application types, such as real time streaming media.
1.1. TODO
Please send comments about this draft to ippm@ietf.org.  See
http://goo.gl/02tkD for more information including: interim drafts,
an up to date todo list and information on contributing.
2. Terminology
Terminology about paths, etc.  See [RFC2330] and
[I-D.ietf-ippm-lmap-path].

[data] sender  Host sending data and receiving ACKs.
[data] receiver  Host receiving data and sending ACKs.
subpath  A portion of the full path.  Note that there is no
   requirement that subpaths be non-overlapping.
Measurement Point  Measurement points as described in
   [I-D.ietf-ippm-lmap-path].
test path  A path between two measurement points that includes a
   subpath of the end-to-end path under test, and could include
   infrastructure between the measurement points and the subpath.
[Dominant] Bottleneck  The Bottleneck that generally dominates
   traffic statistics for the entire path.  It typically determines a
   flow's self clock timing, packet loss and ECN marking rate.  See
   Section 4.1.
front path  The subpath from the data sender to the dominant
   bottleneck.
back path  The subpath from the dominant bottleneck to the receiver.
return path  The path taken by the ACKs from the data receiver to the
   data sender.
cross traffic  Other, potentially interfering, traffic competing for
   network resources (bandwidth and/or queue capacity).
Properties determined by the end-to-end path and application.  They
are described in more detail in Section 5.1.

Application Data Rate  General term for the data rate as seen by the
   application above the transport layer.  This is the payload data
   rate, and excludes transport and lower level headers (TCP/IP or
   other protocols) as well as retransmissions and other data that
   does not contribute to the total quantity of data delivered to the
   application.
skipping to change at page 8, line 17
headers, retransmits and other transport layer overhead.  This
document is agnostic as to whether the link data rate includes or
excludes framing, MAC, or other lower layer overheads, except that
they must be treated uniformly.

end-to-end target parameters:  Application or transport performance
   goals for the end-to-end path.  They include the target data rate,
   RTT and MTU described below.
Target Data Rate:  The application data rate, typically the ultimate
   user's performance goal.
Target RTT (Round Trip Time):  The baseline (minimum) RTT of the
   longest end-to-end path over which the application expects to be
   able to meet the target performance.  The ability of TCP and other
   transport protocols to compensate for path problems is generally
   proportional to the number of round trips per second.  The Target
   RTT determines both key parameters of the traffic patterns (e.g.
   burst sizes) and the thresholds on acceptable traffic statistics.
   The Target RTT must be specified considering authentic packet
   sizes: MTU sized packets on the forward path, ACK sized packets
   (typically header_overhead) on the return path.
Target MTU (Maximum Transmission Unit):  The maximum MTU supported by
   the end-to-end path over which the application expects to meet the
   target performance.  Assume 1500 Byte packets unless otherwise
   specified.  If some subpath forces a smaller MTU, then it becomes
   the target MTU, and all model calculations and subpath tests must
   use the same smaller MTU.
Effective Bottleneck Data Rate:  This is the bottleneck data rate
   inferred from the ACK stream, by looking at how much data the ACK
   stream reports delivered per unit time.  If the path is thinning
   ACKs or batching packets the effective bottleneck rate can be much
skipping to change at page 9, line 14
pipe size  A general term for the number of packets needed in flight
   (the window size) to exactly fill some network path or subpath.
   This is the window size at which queueing normally begins.
target_pipe_size:  The number of packets in flight (the window size)
   needed to exactly meet the target rate, with a single stream and
   no cross traffic for the specified application target data rate,
   RTT, and MTU.  It is the amount of circulating data required to
   meet the target data rate, and implies the scale of the bursts
   that the network might experience.
Delivery Statistics  Raw or summary statistics about packet delivery,
   packet losses, ECN marks, reordering, or any other properties of
   packet delivery that may be germane to transport performance.
run length  A general term for the observed, measured, or specified
   number of packets that are (to be) delivered between losses or ECN
   marks.  Nominally one over the loss or ECN marking probability, if
   they are independently and identically distributed.
target_run_length  The target_run_length is an estimate of the
   minimum required headway between losses or ECN marks necessary to
   attain the target_data_rate over a path with the specified
   target_RTT and target_MTU, as computed by a mathematical model of
   TCP congestion control.  A reference calculation is shown in
   Section 5.2 and alternatives in Appendix A.
Ancillary parameters used for some tests
derating:  Under some conditions the standard models are too
   conservative.  The modeling framework permits some latitude in
   relaxing or "derating" some test parameters as described in
   Section 5.3 in exchange for more stringent TDS validation
   procedures, described in Section 9.
subpath_data_rate  The maximum IP data rate supported by a subpath.
   This typically includes TCP/IP overhead, including headers,
   retransmits, etc.
test_path_RTT  The RTT between two measurement points using
   appropriate data and ACK packet sizes.
test_path_pipe  The amount of data necessary to fill a test path.
   Nominally the test path RTT times the subpath_data_rate (which
   should be part of the end-to-end subpath).
skipping to change at page 14, line 31
likely to generate under normal operation at the target rate and RTT.
By opening the protocol control loops, we remove most sources of
temporal and spatial correlation in the traffic delivery statistics,
such that each subpath's contribution to the end-to-end statistics
can be assumed to be independent and stationary.  (The delivery
statistics depend on the fine structure of the data transmissions,
but not on long time scale state embedded in the sender, receiver or
other network components.)  Therefore each subpath's contribution to
the end-to-end delivery statistics can be assumed to be independent,
and spatial composition techniques such as [RFC5835] and [RFC6049]
apply.
In typical networks, the dominant bottleneck contributes the majority
of the packet loss and ECN marks.  Often the rest of the path makes
an insignificant contribution to these properties.  A TDS should
apportion the end-to-end budget for the specified parameters
(primarily packet loss and ECN marks) to each subpath or group of
subpaths.  For example the dominant bottleneck may be permitted to
contribute 90% of the loss budget, while the rest of the path is only
permitted to contribute 10%.
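
As a concrete illustration, the sketch below (all names and the
90%/10% split are assumptions for illustration, not normative values
from this document) apportions an end-to-end loss budget between the
dominant bottleneck and the rest of the path, assuming independent
losses so that subpath loss probabilities compose approximately
additively:

   # Sketch: apportioning an end-to-end loss budget (illustrative).
   # Assumes independent losses, so per-subpath loss probabilities
   # add (approximately) under spatial composition.
   def apportion(target_run_length, bottleneck_share=0.9):
       p_e2e = 1.0 / target_run_length        # end-to-end loss budget
       p_bottleneck = bottleneck_share * p_e2e
       p_rest = (1.0 - bottleneck_share) * p_e2e
       # Each subpath must exhibit a run length at least this long:
       return 1.0 / p_bottleneck, 1.0 / p_rest

   # e.g. apportion(21168) -> (23520.0, 211680.0): the non-bottleneck
   # portion of the path must show ten times fewer losses than the
   # dominant bottleneck.  (21168 is the Section 5.2 reference
   # target_run_length for 10 Mb/s, 100 ms RTT and 1500 Byte packets.)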
A TDS or FSTDS MUST apportion all relevant packet delivery statistics
between different subpaths, such that the spatial composition of the
apportioned metrics yields end-to-end statistics which are within the
bounds determined by the models.
A network is expected to be able to sustain a Bulk TCP flow of a
given data rate, MTU and RTT when the following conditions are met:

o  The raw link rate is higher than the target data rate.

o  The observed delivery statistics are better than required by a
   suitable TCP performance model (e.g. fewer losses).

o  There is sufficient buffering at the dominant bottleneck to absorb
   a slowstart rate burst large enough to get the flow out of
   slowstart at a suitable window size.

o  There is sufficient buffering in the front path to absorb and
   smooth sender interface rate bursts at all scales that are likely
   to be generated by the application, any channel arbitration in the
   ACK path or other mechanisms.

o  When there is a standing queue at a bottleneck for a shared media
   subpath, there are suitable bounds on how the data and ACKs
   interact, for example due to the channel arbitration mechanism.
skipping to change at page 15, line 44
sense to upper layers: payload bytes delivered to the application,
above TCP.  They exclude overheads associated with TCP and IP
headers, retransmits and other protocols (e.g. DNS).

Other end-to-end parameters defined in Section 2 include the
effective bottleneck data rate, the sender interface data rate and
the TCP/IP header sizes (overhead).
The target data rate must be smaller than all link data rates by
enough headroom to carry the transport protocol overhead, explicitly
including retransmissions and an allowance for fluctuations in the
actual data rate, needed to meet the specified average rate.
Specifying a target rate with insufficient headroom is likely to
result in brittle measurements having little predictive value.
Note that the target parameters can be specified for a hypothetical
path, for example to construct a TDS designed for bench testing in
the absence of a real application, or for a real physical test, for
in situ testing of production infrastructure.
The number of concurrent connections is explicitly not a parameter to
this model.  If a subpath requires multiple connections in order to
meet the specified performance, that must be stated explicitly and
the procedure described in Section 6.1.4 applies.
skipping to change at page 17, line 4
Times per increase.  To exactly fill the pipe, losses must be no
closer together than when the peak of the AIMD sawtooth reaches
exactly twice the target_pipe_size; otherwise the multiplicative
window reduction triggered by the loss would cause the network to be
underfilled.  Following [MSMO97] the number of packets between losses
must be the area under the AIMD sawtooth.  They must be no more
frequent than every 1 in
((3/2)*target_pipe_size)*(2*target_pipe_size) packets, which
simplifies to:

   target_run_length = 3*(target_pipe_size^2)
Note that this calculation is very conservative and is based on a
number of assumptions that may not apply.  Appendix A discusses these
assumptions and provides some alternative models.  If a different
model is used, a fully specified TDS or FSTDS MUST document the
actual method for computing target_run_length along with the
rationale for the underlying assumptions and the ratio of chosen
target_run_length to the reference target_run_length calculated
above.

These two parameters, target_pipe_size and target_run_length,
directly imply most of the individual parameters for the tests in
Section 7.
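
As a concrete (non-normative) sketch, the two reference calculations
can be written as follows, assuming target_data_rate in bits per
second, target_RTT in seconds and target_MTU in bytes:

   import math

   def target_pipe_size(target_data_rate, target_rtt, target_mtu):
       # Packets in flight needed to just sustain the target rate.
       return math.ceil(target_data_rate * target_rtt /
                        (8 * target_mtu))

   def target_run_length(pipe_size):
       # Reference model derived above, following [MSMO97].
       return 3 * pipe_size ** 2

   # e.g. 10 Mb/s at 100 ms RTT with 1500 Byte packets:
   # target_pipe_size(10e6, 0.1, 1500) -> 84 packets
   # target_run_length(84)             -> 21168 packets between losses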
5.3. Parameter Derating

Since some aspects of the models are very conservative, this
skipping to change at page 18, line 29
Repeated Slowstart bursts:  Slowstart bursts are typically part of a
   larger scale pattern of repeated bursts, such as sending
   target_pipe_size packets as slowstart bursts on a target_RTT
   headway (burst start to burst start).  Such a stream has three
   different average rates, depending on the averaging interval.  At
   the finest time scale the average rate is the same as the sender
   interface rate, at a medium scale the average rate is twice the
   effective bottleneck link rate and at the longest time scales the
   average rate is equal to the target data rate.
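
The following sketch (illustrative only; the names are not defined by
this document) generates nominal send times for such a repeated burst
pattern.  It models only the medium and long time scales by pacing
packets within each burst at twice the effective bottleneck rate; a
real sender would emit sub-bursts at the sender interface rate:

   def slowstart_burst_schedule(target_pipe_size, target_rtt,
                                bottleneck_rate, mtu, n_bursts):
       # Within a burst, send at twice the effective bottleneck rate,
       # mimicking slowstart (two packets per ACK).  Bursts start on
       # a target_rtt headway (burst start to burst start).
       packet_headway = (8 * mtu) / (2.0 * bottleneck_rate)
       times = []
       for b in range(n_bursts):
           burst_start = b * target_rtt
           for i in range(target_pipe_size):
               times.append(burst_start + i * packet_headway)
       return times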
Note that in conventional measurement theory, exponential
distributions are often used to eliminate many sorts of correlations.
For the procedures above, the correlations are created by the network
elements and accurately reflect their behavior.  At some point in the
future, it may be desirable to introduce noise sources into the above
pacing models, but they are not warranted at this time.
6.1.2. Constant window pseudo CBR
Implement pseudo constant bit rate by running a standard protocol
such as TCP with a fixed window size.  The rate is only maintained on
average over each RTT, and is subject to limitations of the transport
protocol.

The window size is computed from the target_data_rate and the actual
RTT of the test path.
If the transport protocol fails to maintain the test rate within
prescribed limits the test would typically be considered inconclusive
or failing, depending on what mechanism caused the reduced rate.  See
the discussion of test outcomes in Section 6.2.1.
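
A sketch of how such outcomes might be classified (the 5% rate
tolerance is an assumption for illustration; a FSTDS would specify
the actual limits):

   def classify_cbr_test(achieved_rate, target_rate, stats_ok,
                         tolerance=0.05):
       # stats_ok: did the delivery statistics meet their targets?
       if not stats_ok:
           # Failing delivery statistics fail the test outright,
           # regardless of the achieved rate (see Section 6.2.1).
           return "fail"
       if achieved_rate < target_rate * (1 - tolerance):
           # Rate shortfall with passing statistics: we cannot tell
           # a transport problem from a network problem.
           return "inconclusive"
       return "pass"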
6.1.3. Scanned window pseudo CBR
Same as the above, except the window is scanned across a range of
sizes designed to include two key events, the onset of queueing and
the onset of packet loss or ECN marks.  The window is scanned by
incrementing it by one packet for every 2*target_pipe_size delivered
packets.  This mimics the additive increase phase of standard TCP
congestion avoidance and normally separates the window increases by
approximately twice the target_RTT.
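
In pseudo-code the scan rule is simply (an illustrative sketch):

   def next_window(window, delivered_since_increase,
                   target_pipe_size):
       # Grow the window by one packet per 2*target_pipe_size
       # delivered packets, mimicking additive increase.
       if delivered_since_increase >= 2 * target_pipe_size:
           return window + 1, 0    # increase and reset the counter
       return window, delivered_since_increase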
There are two versions of this test: one built by applying a window
clamp to standard congestion control and the other built by
stiffening a non-standard transport protocol.  When standard
congestion control is in effect, any losses or ECN marks cause the
transport to revert to a window smaller than the clamp such that the
scanning clamp loses control of the window size.  The NPAD pathdiag
tool is an example of this class of algorithms [Pathdiag].
Alternatively a non-standard congestion control algorithm can respond
to losses by transmitting extra data, such that it maintains the
specified window size independent of losses or ECN marks.  Such a
stiffened transport explicitly violates mandatory Internet congestion
control and is not suitable for in situ testing.  It is only
appropriate for engineering testing under laboratory conditions.  The
Windowed Ping tool implemented such a test [WPING].  The tool
described in the paper has been updated [mpingSource].
The test procedures in Section 7.2 describe how to partition the
scans into regions and how to interpret the results.
6.1.4. Concurrent or channelized testing
The procedures described in this document are only directly
applicable to single stream performance measurement, e.g. one TCP
connection.  In an ideal world, we would disallow all performance
claims based on multiple concurrent streams, but this is not
practical due to at least two different issues.  First, many very
high rate link technologies are channelized and pin individual flows
to specific channels to minimize reordering or other problems, and
second, TCP itself has scaling limits.  Although the former problem
might be overcome through different design decisions, the latter
problem is more deeply rooted.
All standard [RFC5681] and de facto standard congestion control
algorithms [CUBIC] have scaling limits, in the sense that as a long
fast network (LFN) with a fixed RTT and MTU gets faster, these
congestion control algorithms get less accurate and as a consequence
have difficulty filling the network [CCscaling].  These properties
are a consequence of the original Reno AIMD congestion control design
and the requirement in [RFC5681] that all transport protocols have a
uniform response to congestion.
There are a number of reasons to want to specify performance in terms
of multiple concurrent flows; however, this approach is not
recommended for data rates below several megabits per second, which
can be attained with run lengths under 10000 packets.  Since the
required run length goes as the square of the data rate, at higher
rates the run lengths can be unreasonably large, and multiple
connections might be the only feasible approach.
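
To see the square law concretely, reuse the reference model from
Section 5.2 (the 100 ms RTT and 1500 Byte MTU are illustrative
assumptions):

   def required_run_length(rate_bps, rtt=0.1, mtu=1500):
       pipe = rate_bps * rtt / (8 * mtu)
       return 3 * pipe ** 2

   # required_run_length(5e6)  ~= 5.2e3 packets
   # required_run_length(50e6) ~= 5.2e5 packets: a 10x rate increase
   # requires a 100x longer run length, which is why multiple
   # connections may be the only feasible approach at very high rates.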
If multiple connections are deemed necessary to meet aggregate
performance targets then this MUST be stated both in the design of
the TDS and in any claims about network performance.  The tests MUST
be performed concurrently with the specified number of connections.
For the tests that use bursty traffic, the bursts should be
synchronized across flows.
6.1.5. Intermittent Testing
Any test which does not depend on queueing (e.g. the CBR tests) or
experiences periodic zero outstanding data during normal operation
(e.g. between bursts for the various burst tests), can be formulated
as an intermittent test, to reduce the perceived impact on other
traffic. The approach is to insert periodic pauses in the test at
any point when there is no expected queue occupancy.
Intermittent testing can be used for ongoing monitoring for changes
in subpath quality with minimal disruption to users.  However it is
not suitable in environments where there are reactive links
[REACTIVE].
6.1.6. Intermittent Scatter Testing
Intermittent scatter testing is a technique for non-disruptively
evaluating the front path from a sender to a subscriber aggregation
point within an ISP at full load by intermittently testing across a
pool of subscriber access links, such that each subscriber sees
tolerable test traffic loads. The load on the front path should be
limited to be no more than that which would be caused by a single
test to a subscriber known to otherwise be idle.  This test in
aggregate mimics a full load test from a content provider to the
aggregation point.
Intermittent scatter testing can be used to reduce the measurement
noise introduced by unknown traffic on customer access links.
6.2. Interpreting the Results

6.2.1. Test outcomes
To perform an exhaustive test of an end-to-end network path, each
test of the TDS is applied to each subpath of an end-to-end path.  If
any subpath fails any test then an application running over the end-
to-end path can also be expected to fail to attain the target
performance under some conditions.
In addition to passing or failing, a test can be deemed to be
inconclusive for a number of reasons.  Proper instrumentation and
treatment of inconclusive outcomes is critical to the accuracy and
robustness of Model Based Metrics.  Tests can be inconclusive if the
precomputed traffic pattern or data rates were not accurately
generated; the measurement results were not statistically
significant; or for other causes such as failing to meet some
required preconditions for the test.
For example consider a test that implements Constant Window Pseudo
CBR (Section 6.1.2) by adding rate controls and detailed traffic
instrumentation to TCP (e.g. [RFC4898]).  TCP includes built in
control systems which might interfere with the sending data rate.  If
such a test meets the required delivery statistics (e.g. run length)
while failing to attain the specified data rate it must be treated as
an inconclusive result, because we cannot a priori determine if the
reduced data rate was caused by a TCP problem or a network problem,
or if the reduced data rate had a material effect on the delivery
statistics themselves.
Note that for load tests such as this example, if the observed
delivery statistics fail to meet the targets, the test can be
considered to have failed, because it doesn't really matter that the
test didn't attain the required data rate.
The really important new properties of MBM, such as vantage
independence, are a direct consequence of opening the control loops
in the protocols, such that the test traffic does not depend on
network conditions or traffic received.  Any mechanism that
introduces feedback between the traffic measurements and the traffic
generation is at risk of introducing nonlinearities that spoil these
properties.  Any exceptional event that indicates that such feedback
has happened should cause the test to be considered inconclusive.
One way to view inconclusive tests is that they reflect situations
where a test outcome is ambiguous between limitations of the network
and some unknown limitation of the diagnostic test itself, which may
have been caused by some uncontrolled feedback from the network.
Note that procedures that attempt to sweep the target parameter space
to find the limits on some parameter (for example to find the highest
data rate for a subpath) are likely to break the location independent
properties of Model Based Metrics, because the boundary between
passing and inconclusive is generally sensitive to RTT.  This
interaction arises because TCP's ability to compensate for flaws in
the network scales with the number of round trips per second.
Repeating the same procedure from a different vantage point with a
larger RTT is likely to get a different result, because with the
larger RTT, TCP will less accurately control the data rate.
One of the goals for evolving TDS designs will be to keep sharpening
the distinction between inconclusive, passing and failing tests. The
criteria for passing, failing and inconclusive tests MUST be
explicitly stated for every test in the TDS or FSTDS.
One of the goals of evolving the testing process, procedures, tools
and measurement point selection should be to minimize the number of
inconclusive tests.
It may be useful to keep raw delivery statistics for deeper study of
the behavior of the network path and to evaluate the measurement
tools themselves. Raw delivery statistics can help to drive tool
evolution. Under some conditions it might be possible to reevaluate
the raw data against alternate performance targets. However it is
important to guard against sampling bias and other implicit feedback
which can cause false results and exhibit measurement point vantage
sensitivity.
6.2.2. Statistical criteria for measuring run_length
When evaluating the observed run_length, we need to determine
appropriate packet stream sizes and acceptable error levels for
efficient measurement. In practice, can we compare the empirically
estimated packet loss and ECN marking probabilities with the targets
as the sample size grows? How large a sample is needed to say that
the measurements of packet transfer indicate a particular run length
is present?
skipping to change at page 25, line 11
This algorithm allows accurate comparison of the observed failure
probability with the corresponding values predicted based on a fixed
target_failure_rate, which is equal to 1.0 / target_run_length.
6.2.3. Reordering Tolerance
All tests must be instrumented for packet level reordering [RFC4737].
However, there is no consensus for how much reordering should be
acceptable. Over the last two decades the general trend has been to
make protocols and applications more tolerant to reordering (see for
example [RFC4015]), in response to the gradual increase in reordering
in the network. This increase has been due to the gradual deployment
of technologies such as multithreaded routing lookups and Equal Cost
Multipath (ECMP) routing. These techniques increase parallelism in
the network and are critical to enabling overall Internet growth to
exceed Moore's Law.
Note that transport retransmission strategies can trade off
reordering tolerance versus how quickly they can repair losses versus
overhead from spurious retransmissions. In advance of new
retransmission strategies we propose the following strawman:
transport protocols should be able to adapt to reordering as long as
the reordering extent is no more than the maximum of one half window
or 1 mS, whichever is larger. Within this limit on reorder extent,
there should be no bound on reordering density.
By implication, reordering which is less than these bounds should not
be treated as a network impairment. However [RFC4737] still applies:
reordering should be instrumented and the maximum reordering that can
be properly characterized by the test (e.g. bound on history buffers)
should be recorded with the measurement results.

Reordering tolerance and diagnostic bounds must be specified in a
FSTDS.

6.3. Test Preconditions

Many tests have preconditions which are required to assure their
validity, for example the presence or absence of cross traffic on
specific subpaths, or appropriate preloading to put reactive network
elements into the proper states [I-D.ietf-ippm-2330-update]. If
preconditions are not properly satisfied for some reason, the tests
should be considered to be inconclusive. In general it is useful to
preserve diagnostic information about why the preconditions were not
met, and the test data that was collected, if any.

It is important to preserve the record that a test was scheduled,
because otherwise precondition enforcement mechanisms can introduce
sampling bias. For example, canceling tests due to load on
subscriber access links may introduce sampling bias for tests of the
rest of the network by reducing the number of tests during peak
network load.

Test preconditions and failure actions must be specified in a FSTDS.
7. Diagnostic Tests
The diagnostic tests below are organized by traffic pattern: basic
data rate and delivery statistics, standing queues, slowstart bursts,
and sender rate bursts. We also introduce some combined tests which
are more efficient when networks are expected to pass, but conflate
diagnostic signatures when they fail.
There are a number of test details which are not fully defined here.
They must be fully specified in a FSTDS. From a standardization
perspective, this lack of specificity will weaken this version of
Model Based Metrics; however it is anticipated that it will be more
than offset by the extent to which MBM suppresses the problems caused
by using transport protocols for measurement, e.g. non-specific MBM
metrics are likely to have better repeatability than many existing
BTC like metrics. Once we have good field experience, the missing
details can be fully specified.
7.1. Basic Data Rate and Delivery Statistics Tests

We propose several versions of the basic data rate and delivery
statistics test. All measure the number of packets delivered between
losses or ECN marks, using a data stream that is rate controlled at
or below the target_data_rate.
The tests below differ in how the data rate is controlled. The data
can be paced on a timer, or window controlled at full target data
rate. The first two tests implicitly confirm that sub_path has
sufficient raw capacity to carry the target_data_rate. They are
recommended for relatively infrequent testing, such as an
installation or periodic auditing process. The third, background
delivery statistics, is a low rate test designed for ongoing
monitoring for changes in subpath quality.
All rely on the receiver accumulating packet delivery statistics as
described in Section 6.2.2 to score the outcome:

Pass: it is statistically significant that the observed interval
between losses or ECN marks is larger than the target_run_length.

Fail: it is statistically significant that the observed interval
between losses or ECN marks is smaller than the target_run_length.

A test is considered to be inconclusive if it failed to meet the data
rate as specified below, failed to meet the preconditions defined in
Section 6.3, or if neither run length statistical hypothesis was
confirmed in the allotted test duration.
7.1.1. Delivery Statistics at Paced Full Data Rate

Confirm that the observed run length is at least the
target_run_length while relying on a timer to send data at the
target_rate using the procedure described in Section 6.1.1 with a
burst size of 1 (single packets) or 2 (packet pairs).

The test is considered to be inconclusive if the packet transmission
can not be accurately controlled for any reason.
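The following Python sketch (ours; the helper name and UDP framing
are assumptions, and receiver-side statistics collection is omitted)
illustrates one plausible way to pace such bursts on a timer:

   import socket
   import time

   def paced_send(dest, target_rate_bps, mtu=1500, duration_s=10.0,
                  burst_size=1):
       """Send burst_size-packet bursts with constant headway."""
       sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
       # 28 bytes of IP+UDP headers plus a 4-byte sequence number.
       payload = bytes(mtu - 32)
       headway = burst_size * mtu * 8.0 / target_rate_bps
       next_send = time.monotonic()
       end = next_send + duration_s
       seq = 0
       while next_send < end:
           # Sleep until the scheduled transmission time; if sending
           # falls persistently behind schedule, the test must be
           # declared inconclusive.
           delay = next_send - time.monotonic()
           if delay > 0:
               time.sleep(delay)
           for _ in range(burst_size):
               sock.sendto(seq.to_bytes(4, "big") + payload, dest)
               seq += 1
           next_send += headway
       return seq  # packets sent

   # Example: 2.5 Mb/s of single-packet bursts for one second, to the
   # discard service (port 9, [RFC0863]) on a TEST-NET address.
   # paced_send(("192.0.2.1", 9), 2.5e6, duration_s=1.0)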
RFC 6673 [RFC6673] is appropriate for measuring delivery statistics
at full data rate.

7.1.2. Delivery Statistics at Full Data Windowed Rate
Confirm that the observed run length is at least the
target_run_length while sending at an average rate approximately
equal to the target_data_rate, by controlling (or clamping) the
window size of a conventional transport protocol to a fixed value
computed from the properties of the test path, typically
test_window=target_data_rate*test_RTT/target_MTU. Note that if there
is any interaction between the forward and return path, test_window
may need to be adjusted slightly to compensate for the resulting
inflated RTT.
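A minimal Python sketch of the test_window computation above (the
helper name is ours):

   import math

   def test_window(target_data_rate_bps, test_rtt_s,
                   target_mtu_bytes=1500):
       # Fixed window, in packets, that yields the target data rate;
       # round up so the average rate is at least the target rate.
       return math.ceil(target_data_rate_bps * test_rtt_s /
                        (target_mtu_bytes * 8))

   # Example with the Section 8 parameters: 2.5 Mb/s over 50 mS.
   print(test_window(2.5e6, 0.050))   # -> 11 packets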
Since losses and ECN marks generally cause transport protocols to at
least temporarily reduce their data rates, this test is expected to
be less precise about controlling its data rate. It should not be
considered inconclusive as long as at least some of the round trips
reached the full target_data_rate without incurring losses or ECN
marks. To pass this test the network MUST deliver target_pipe_size
packets in target_RTT time without any losses or ECN marks at least
once per two target_pipe_size round trips, in addition to meeting the
run length statistical test.
7.1.3. Background Delivery Statistics Tests

The background run length test is a low rate version of the target
rate test above, designed for ongoing lightweight monitoring for
changes in the observed subpath run length without disrupting users.
It should be used in conjunction with one of the above full rate
tests because it does not confirm that the subpath can support the
raw data rate.
RFC 6673 [RFC6673] is appropriate for measuring background delivery
statistics.
7.2. Standing Queue Tests
These tests confirm that the bottleneck is well behaved across the
onset of packet loss, which typically follows after the onset of
queueing. Well behaved generally means lossless for transient
queues, but once the queue has been sustained for a sufficient period
of time (or reaches a sufficient queue depth) there should be a small
number of losses to signal to the transport protocol that it should
reduce its window. Losses that are too early can prevent the
transport from averaging at the target_data_rate. Losses that are
too late indicate that the queue might be subject to bufferbloat
[wikiBloat] and inflict excess queuing delays on all flows sharing
the bottleneck queue. Excess losses (more than a few per RTT) make
loss recovery problematic for the transport protocol. Non-linear or
erratic RTT fluctuations suggest poor interactions between the
channel acquisition algorithms and the transport self clock. All of
the tests in this section use the same basic scanning algorithm,
described here, but score the link on the basis of how well it avoids
each of these problems.
For some technologies the data might not be subject to increasing
delays, in which case the data rate will vary with the window size
all the way up to the onset of load induced losses or ECN marks. For
these technologies, the discussion of queueing does not apply, but it
is still required that the onset of losses (or ECN marks) be at an
appropriate point and progressive.
Use the procedure in Section 6.1.3 to sweep the window across the
onset of queueing and the onset of loss. The tests below all assume
that the scan emulates standard additive increase and delayed ACK by
incrementing the window by one packet for every 2*target_pipe_size
packets delivered. A scan can typically be divided into three
regions: below the onset of queueing, a standing queue, and at or
beyond the onset of loss.
Below the onset of queueing the RTT is typically fairly constant, and
the data rate varies in proportion to the window size. Once the data
rate reaches the link rate, the data rate becomes fairly constant,
and the RTT increases in proportion to the increase in window size.
The precise transition across the start of queueing can be identified
by the maximum network power, defined to be the ratio of the data
rate over the RTT. The network power can be computed at each window
size, and the window with the maximum power is taken as the start of
the queueing region.
For technologies that do not have conventional queues, start the scan
at a window equal to the test_window=target_data_rate*test_RTT/
target_MTU, i.e. starting at the target rate, instead of the power
point.
If there is random background loss (e.g. bit errors, etc), precise
determination of the onset of queue induced packet loss may require
multiple scans. Above the onset of queuing loss, all transport
protocols are expected to experience periodic losses determined by
the interaction between the congestion control and AQM algorithms.
For standard congestion control algorithms the periodic losses are
likely to be relatively widely spaced and the details are typically
dominated by the behavior of the transport protocol itself. For the
stiffened transport protocol case (with non-standard, aggressive
congestion control algorithms) the details of periodic losses will be
dominated by how the window increase function responds to loss.
7.2.1. Congestion Avoidance
A link passes the congestion avoidance standing queue test if more
than target_run_length packets are delivered between the onset of
queueing (as determined by the window with the maximum network power)
and the first loss or ECN mark. If this test is implemented using a
standard congestion control algorithm with a clamp, it can be used in
situ in the production internet as a capacity test. For an example
of such a test see [Pathdiag].

For technologies that do not have conventional queues, use the
test_window in place of the onset of queueing, i.e. a link passes the
congestion avoidance standing queue test if more than
target_run_length packets are delivered between the start of the scan
at test_window and the first loss or ECN mark.
7.2.2. Bufferbloat
This test confirms that there is some mechanism to limit buffer
occupancy (e.g. that prevents bufferbloat). Note that this is not
strictly a requirement for single stream bulk performance, however if
there is no mechanism to limit buffer queue occupancy then a single
stream with sufficient data to deliver is likely to cause the
problems described in [RFC2309] and [wikiBloat]. This may cause only
minor symptoms for the dominant flow, but has the potential to make
the link unusable for other flows and applications.
Pass if the onset of loss occurs before a standing queue has
introduced more delay than twice the target_RTT, or another well
defined and specified limit. Note that there is not yet a model for
how much standing queue is acceptable. The factor of two chosen here
reflects a rule of thumb. In conjunction with the previous test,
this test implies that the first loss should occur at a queueing
delay which is between one and two times the target_RTT.

Specified RTT limits that are larger than twice the target_RTT must
be fully justified in the FSTDS.
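A minimal Python sketch of this criterion (ours; parameter names are
assumptions):

   def bufferbloat_verdict(base_rtt_s, rtt_at_first_loss_s,
                           target_rtt_s, limit_factor=2.0):
       # Standing queue delay accumulated before the first loss.
       queue_delay = rtt_at_first_loss_s - base_rtt_s
       return ("pass" if queue_delay <= limit_factor * target_rtt_s
               else "fail")

   # Example: first loss after 180 mS of standing queue against a
   # 50 mS target_RTT exceeds the 100 mS (2x) rule of thumb.
   print(bufferbloat_verdict(0.050, 0.230, 0.050))   # -> "fail"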
7.2.3. Non excessive loss
This test confirms that the onset of loss is not excessive. Pass if
losses are equal to or less than the increase in the cross traffic
plus the test traffic window increase on the previous RTT. This
could be restated as non-decreasing link throughput at the onset of
loss, which is easy to meet as long as discarding packets is not more
expensive than delivering them. (Note that when there is a transient
drop in link throughput, outside of a standing queue test, a link
that passes the other queue tests in this document will have
sufficient queue space to hold one RTT worth of data.)
7.2.4. Duplex Self Interference

This engineering test confirms a bound on the interactions between
the forward data path and the ACK return path.

Some historical half duplex technologies had the property that each
direction held the channel until it had completely drained its queue.
When a self clocked transport protocol, such as TCP, has data and
ACKs passing in opposite directions through such a link, the behavior
often reverts to stop-and-wait. Each additional packet added to the
window raises the observed RTT by two forward path packet times, once
as it passes through the data path, and once for the additional delay
incurred by the ACK waiting on the return path.

The duplex self interference test fails if the RTT rises by more than
some fixed bound above the expected queueing time computed from the
excess window divided by the link data rate.
7.3. Slowstart tests
These tests mimic slowstart: data is sent at twice the effective
bottleneck rate to exercise the queue at the dominant bottleneck. In
general they are deemed inconclusive if the elapsed time to send the
data burst is not less than half of the time to receive the ACKs
(i.e. sending data too fast is ok, but sending it slower than twice
the actual bottleneck rate as indicated by the ACKs is deemed
inconclusive). Space the bursts such that the average data rate is
equal to the target_data_rate.
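A minimal Python sketch of the inconclusive criterion above (ours):

   def slowstart_burst_conclusive(send_elapsed_s, ack_elapsed_s):
       """True if the burst was sent in less than half the time
       taken to receive the ACKs, i.e. at no less than twice the
       bottleneck rate, so the test properly exercised the queue."""
       return send_elapsed_s < 0.5 * ack_elapsed_s

   # A burst sent in 4 mS whose ACKs arrive over 10 mS is valid;
   # the same burst spread over 6 mS would be inconclusive.
   print(slowstart_burst_conclusive(0.004, 0.010))   # -> True
   print(slowstart_burst_conclusive(0.006, 0.010))   # -> False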
7.3.1. Full Window slowstart test

This is a capacity test to confirm that slowstart is not likely to
exit prematurely. Send slowstart bursts that are target_pipe_size
total packets.
Accumulate packet delivery statistics as described in Section 6.2.2
to score the outcome. Pass if it is statistically significant that
the observed interval between losses or ECN marks is larger than the
target_run_length. Fail if it is statistically significant that the
observed interval between losses or ECN marks is smaller than the
target_run_length.
Note that these are the same parameters as the Sender Full Window
burst test, except the burst rate is at the slowstart rate, rather
than the sender interface rate.
7.3.2. Slowstart AQM test
Do a continuous slowstart (send data continuously at slowstart_rate)
until the first loss, stop, allow the network to drain and repeat,
gathering statistics on the last packet delivered before the loss,
the loss pattern, maximum observed RTT and window size. Justify the
results. There is not currently sufficient theory justifying
requiring any particular result, however design decisions that affect
the outcome of this test also affect how the network balances between
long and short flows (the "mice and elephants" problem). The
queueing delay at the time of the first loss should be at least one
half of the target_RTT.
This is an engineering test: it would be best performed on a
quiescent network or testbed, since cross traffic has the potential
to change the results.
7.4. Sender Rate Burst tests
These tests determine how well the network can deliver bursts sent at
the sender's interface rate. Note that this test most heavily
exercises the front path, and is likely to include infrastructure
that may be out of scope for a subscriber ISP.
Also, there are several details that are not precisely defined. For
starters there is not a standard server interface rate. 1 Gb/s and
10 Gb/s are very common today, but higher rates will become cost
effective and can be expected to be dominant some time in the future.
Current standards permit TCP to send full window bursts following an
application pause. (Congestion Window Validation [RFC2861] is not
required, but even if it was, it does not take effect until an
application pause is longer than an RTO.) Since full window bursts
are consistent with standard behavior, it is desirable that the
network be able to deliver such bursts, otherwise application pauses
will cause unwarranted losses. Note that the AIMD sawtooth requires
a peak window that is twice target_pipe_size, so the worst case burst
may be 2*target_pipe_size.
It is also understood in the application and serving community that
interface rate bursts have a cost to the network that has to be
balanced against other costs in the servers themselves. For example
TCP Segmentation Offload (TSO) reduces server CPU in exchange for
larger network bursts, which increase the stress on network buffer
memory.
There is not yet theory to unify these costs or to provide a
framework for trying to optimize global efficiency. We do not yet
have a model for how much the network should tolerate server rate
bursts. Some bursts must be tolerated by the network, but it is
probably unreasonable to expect the network to be able to efficiently
deliver all data as a series of bursts.
For this reason, this is the only test for which we explicitly
encourage derating. A TDS should include a table of pairs of
derating parameters: what burst size to use as a fraction of the
target_pipe_size, and how much each burst size is permitted to reduce
the run length, relative to the target_run_length.
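The following Python sketch (ours; the table values are purely
hypothetical and would have to come from the actual TDS) illustrates
the shape of such a derating table:

   # Each entry pairs a burst size, as a fraction of
   # target_pipe_size, with the fraction of target_run_length that
   # bursts of that size must still sustain.
   DERATING_TABLE = [
       (1.00, 0.50),   # full bursts may halve the required run length
       (0.50, 0.75),
       (0.25, 1.00),   # quarter bursts must meet the full run length
   ]

   def derated_requirements(target_pipe_size, target_run_length):
       for burst_frac, run_frac in DERATING_TABLE:
           burst = round(burst_frac * target_pipe_size)
           required = round(run_frac * target_run_length)
           yield burst, required

   # With the Section 8 parameters (11 packets, run length 363):
   for burst, run in derated_requirements(11, 363):
       print(f"{burst:2d}-packet bursts need run length >= {run}")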
7.5. Combined Tests
Combined tests efficiently confirm multiple network properties in a
single test, possibly as a side effect of production content
delivery. They require less measurement traffic than other testing
strategies at the cost of conflating diagnostic signatures when they
fail. These are by far the most efficient for testing networks that
are expected to pass all tests.
7.5.1. Sustained burst test
The sustained burst test implements a combined worst case version of
all of the capacity tests above. In its simplest form, send
target_pipe_size bursts of packets at the server interface rate with
target_RTT headway (burst start to burst start). Verify that the
observed delivery statistics meet the target_run_length. Key
observations:
o  The subpath under test is expected to go idle for some fraction of
   the time: (subpath_data_rate-target_rate)/subpath_data_rate.
   Failing to do so indicates a problem with the procedure and an
   inconclusive test result.
o  The burst sensitivity can be derated by sending smaller bursts
   more frequently, e.g. send target_pipe_size*derate packet bursts
   every target_RTT*derate.
o  When not derated this test is more strenuous than the slowstart
   capacity tests.
o  A link that passes this test is likely to be able to sustain
   higher rates (close to subpath_data_rate) for paths with RTTs
   significantly smaller than the target_RTT. Offsetting this
   performance underestimation is part of the rationale behind
   permitting derating in general.
o  This test can be implemented with instrumented TCP [RFC4898],
   using a specialized measurement application at one end [MBMSource]
   and a minimal service at the other end [RFC0863] [RFC0864]. A
   prototype tool exists and is under evaluation.
o  This test is efficient to implement, since it does not require
   per-packet timers, and can make use of TSO in modern NIC hardware.
o  This test is not completely sufficient: the standing window
   engineering tests are also needed to ensure that the link is well
   behaved at and beyond the onset of congestion. Links that exhibit
   punitive behaviors such as sudden high loss under overload may not
   interact well with TCP's self clock.
o  Assuming the link passes the relevant standing window engineering
   tests (particularly that it has a progressive onset of loss at an
   appropriate queue depth), a passing sustained burst test is
   believed to be sufficient to verify that the subpath will not
   impair a stream running at the target performance under all
   conditions. Proving this statement is the subject of ongoing
   research.
Note that this test is clearly independent of the subpath RTT, and of
other details of the measurement infrastructure, as long as the
measurement infrastructure can accurately and reliably deliver the
required bursts to the subpath under test.
7.5.2. Streaming Media
Model Based Metrics can be implemented as a side effect of serving
any non-throughput maximizing traffic*, such as streaming media, with
some additional controls and instrumentation in the servers. The
essential requirement is that the traffic be constrained such that
even with arbitrary application pauses, bursts and data rate
fluctuations, the traffic stays within the envelope defined by the
individual tests described above.
If the serving_data_rate is less than or equal to the
target_data_rate and the serving_RTT (the RTT between the sender and
client) is less than the target_RTT, this constraint is most easily
implemented by clamping the transport window size to be no larger
than:

serving_window_clamp=target_data_rate*serving_RTT/
(target_MTU-header_overhead)
Under the above constraints the serving_window_clamp will limit both
the serving data rate and burst sizes to be no larger than specified
by the procedures in Section 7.1.2 and Section 7.4 or Section 7.5.1.
Since the serving RTT is smaller than the target_RTT, the worst case
bursts that might be generated under these conditions will be smaller
than called for by Section 7.4, and the sender rate burst sizes are
implicitly derated by the serving_window_clamp divided by the
target_pipe_size at the very least. (The traffic might be smoother
than specified by the sender interface rate bursts test.)
Note that it is important that the target_data_rate be above the
actual average rate needed by the application so it can recover after
transient pauses caused by congestion or the application itself.

In an alternative implementation the data rate and bursts might be
explicitly controlled by a host shaper or pacing at the sender. This
would provide better control over transmissions but it is
substantially more complicated to implement and would be likely to
have a higher CPU overhead.

* Note that these techniques can be applied to any content delivery
that can be subjected to a reduced data rate in order to inhibit TCP
equilibrium behavior.

8. An Example

In this section we illustrate a TDS designed to confirm that an
access ISP can reliably deliver HD video from multiple content
providers to all of their customers. With modern codecs HD video
generally fits in 2.5 Mb/s [HDvideo]. Due to their geographical
size, network topology and modem designs the ISP determines that most
content is within a 50 mS RTT of their users (this is a sufficient
RTT to cover continental Europe or either US coast from a single
serving site).
2.5 Mb/s over a 50 ms path
+----------------------+-------+---------+
| End to End Parameter | Value | units   |
+----------------------+-------+---------+
| target_rate          | 2.5   | Mb/s    |
| target_RTT           | 50    | ms      |
| target_MTU           | 1500  | bytes   |
| header_overhead      | 64    | bytes   |
| target_pipe_size     | 11    | packets |
| target_run_length    | 363   | packets |
+----------------------+-------+---------+

Table 1
Table 1 shows the default TCP model with no derating, and as such is
quite conservative. The simplest TDS would be to use the sustained
burst test, described in Section 7.5.1. Such a test would send 11
packet bursts every 50mS, confirming that there was no more than 1
packet loss per 33 bursts (363 total packets in 1.650 seconds).

Since this number represents the entire end-to-end loss budget,
independent subpath tests could be implemented by apportioning the
loss rate across subpaths. For example 50% of the losses might be
allocated to the access or last mile link to the user, 40% to the
interconnects with other ISPs and 1% to each internal hop (assuming
no more than 10 internal hops). Then all of the subpaths can be
tested independently, and the spatial composition of passing subpaths
would be expected to be within the end-to-end loss budget.

Testing interconnects has generally been problematic: conventional
performance tests run between Measurement Points adjacent to either
side of the interconnect are not generally useful. Unconstrained TCP
tests, such as netperf tests [netperf], are typically overly
aggressive because the RTT is so small (often less than 1 mS). These
tools are likely to report inflated numbers by pushing other traffic
off of the network. As a consequence they are useless for predicting
actual user performance, and may themselves be quite disruptive.

Model Based Metrics solves this problem. The same test pattern as
used on other links can be applied to the interconnect. For our
example, when apportioned 40% of the losses, 11 packet bursts sent
every 50mS should have fewer than one loss per 82 bursts (902
packets).
9. Validation
Since some aspects of the models are likely to be too conservative,
Section 5.2 permits alternate protocol models and Section 5.3 permits
test parameter derating. If either of these techniques is used, we
require demonstrations that such a TDS can robustly detect links that
will prevent authentic applications using state-of-the-art protocol
implementations from meeting the specified performance targets. This
correctness criterion is potentially difficult to prove, because it
implicitly requires validating a TDS against all possible links and
subpaths. The procedures described here are still experimental.
We suggest two approaches, both of which should be applied: first,
publish a fully open description of the TDS, including what
assumptions were used and how it was derived, such that the research
community can evaluate the design decisions, test them and comment on
their applicability; and second, demonstrate that applications
running over an infinitesimally passing testbed do meet the
performance targets.
An infinitesimally passing testbed resembles an epsilon-delta proof
in calculus. Construct a test network such that all of the
individual tests of the TDS pass by only small (infinitesimal)
margins, and demonstrate that a variety of authentic applications
running over real TCP implementations (or other protocols as
appropriate) meet the end-to-end target parameters over such a
network. The workloads should include multiple types of streaming
media and transaction oriented short flows (e.g. synthetic web
traffic).
For example, for the HD streaming video TDS described in Section 8,
the link layer bottleneck data rate should be exactly the header
overhead above 2.5 Mb/s, the per packet random background loss
probability should be 1/363 (for a run length of 363 packets), the
bottleneck queue should be 11 packets and the front path should have
just enough buffering to withstand 11 packet interface rate bursts.
We want every one of the TDS tests to fail if we slightly increase
the relevant test parameter, so for example sending 12 packet bursts
should cause excess (possibly deterministic) packet drops at the
dominant queue at the bottleneck. On this infinitesimally passing
network it should be possible for a real application using a stock
TCP implementation in the vendor's default configuration to attain
2.5 Mb/s over a 50 mS path.
The most difficult part of setting up such a testbed is arranging for
it to infinitesimally pass the individual tests. Two approaches are
suggested: constraining the network devices not to use all available
resources (e.g. by limiting available buffer space or data rate); and
preloading subpaths with cross traffic. Note that it is important
that a single environment be constructed which infinitesimally passes
all tests at the same time, otherwise there is a chance that TCP can
exploit extra latitude in some parameters (such as data rate) to
partially compensate for constraints in other parameters (queue
space, or vice versa).
To the extent that a TDS is used to inform public dialog it should be
fully publicly documented, including the details of the tests, what
assumptions were used and how it was derived. All of the details of
the validation experiment should also be published with sufficient
detail for the experiments to be replicated by other researchers.
All components should either be open source or fully described
proprietary implementations that are available to the research
community.
10. Acknowledgements
Ganga Maguluri suggested the statistical test for measuring loss
probability in the target run length.  Alex Gilgur helped with the
statistics and contributed an alternate model.
Meredith Whittaker improved the clarity of the communications.
This work was inspired by Measurement Lab: open tools running on an
open platform, using open techniques to collect open data.  See
http://www.measurementlab.net/
11. Informative References
[RFC0863] Postel, J., "Discard Protocol", STD 21, RFC 863, May 1983.
[RFC0864] Postel, J., "Character Generator Protocol", STD 22,
RFC 864, May 1983.
[RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
Partridge, C., Peterson, L., Ramakrishnan, K., Shenker,
S., Wroclawski, J., and L. Zhang, "Recommendations on
Queue Management and Congestion Avoidance in the
Internet", RFC 2309, April 1998.
[RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
"Framework for IP Performance Metrics", RFC 2330,
May 1998.
[RFC2861] Handley, M., Padhye, J., and S. Floyd, "TCP Congestion
Window Validation", RFC 2861, June 2000.
[RFC3148] Mathis, M. and M. Allman, "A Framework for Defining
Empirical Bulk Transfer Capacity Metrics", RFC 3148,
July 2001.
[RFC3465] Allman, M., "TCP Congestion Control with Appropriate Byte
Counting (ABC)", RFC 3465, February 2003.
[RFC4015] Ludwig, R. and A. Gurtov, "The Eifel Response Algorithm
for TCP", RFC 4015, February 2005.
[RFC4737] Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
November 2006.
[RFC4898] Mathis, M., Heffner, J., and R. Raghunarayan, "TCP
Extended Statistics MIB", RFC 4898, May 2007.
[RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
Control", RFC 5681, September 2009.
[RFC5835] Morton, A. and S. Van den Berghe, "Framework for Metric
Composition", RFC 5835, April 2010.
[RFC6049] Morton, A. and E. Stephan, "Spatial Composition of
Metrics", RFC 6049, January 2011.
[RFC6673] Morton, A., "Round-Trip Packet Loss Metrics", RFC 6673,
August 2012.
[I-D.ietf-ippm-2330-update]
Fabini, J. and A. Morton, "Advanced Stream and Sampling
Framework for IPPM", draft-ietf-ippm-2330-update-05 (work
in progress), May 2014.
[I-D.ietf-ippm-lmap-path]
Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P., and
A. Morton, "A Reference Path and Measurement Points for
LMAP", draft-ietf-ippm-lmap-path-04 (work in progress),
June 2014.
[MSMO97] Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The
Macroscopic Behavior of the TCP Congestion Avoidance
Algorithm", Computer Communications Review volume 27,
number 3, July 1997.
[WPING] Mathis, M., "Windowed Ping: An IP Level Performance
Diagnostic", INET 94, June 1994.
[mpingSource]
Control - 2nd ed.", ISBN 0-471-51988-X, 1990.
[Rtool] R Development Core Team, "R: A language and environment
for statistical computing. R Foundation for Statistical
Computing, Vienna, Austria. ISBN 3-900051-07-0, URL
http://www.R-project.org/", 2011.
[CVST] Krueger, T. and M. Braun, "R package: Fast Cross-
Validation via Sequential Testing", version 0.1,
November 2012.
[CUBIC] Ha, S., Rhee, I., and L. Xu, "CUBIC: a new TCP-friendly
high-speed TCP variant", SIGOPS Oper. Syst. Rev. 42, 5,
July 2008.
[LMCUBIC] Ledesma Goyzueta, R. and Y. Chen, "A Deterministic Loss
Model Based Analysis of CUBIC", IEEE International
Conference on Computing, Networking and Communications
(ICNC), E-ISBN: 978-1-4673-5286-4, January 2013.
[AFD] Pan, R., Breslau, L., Prabhakar, B., and S. Shenker,
"Approximate fairness through differential dropping",
SIGCOMM Comput. Commun. Rev. 33, 2, April 2003.
[wikiBloat]
Wikipedia, "Bufferbloat", http://en.wikipedia.org/w/
index.php?title=Bufferbloat&oldid=608805474, June 2014.
[CCscaling]
Paganini, F., Doyle, J., and S. Low, "Scalable laws for
stable network congestion control", Proceedings of
Conference on Decision and Control,
http://www.ee.ucla.edu/~paganini, December 2001.
Appendix A. Model Derivations
The reference target_run_length described in Section 5.2 is based on
very conservative assumptions: that all window in excess of
target_pipe_size contributes to a standing queue that raises the RTT,
and that classic Reno congestion control with delayed ACKs is in
effect.  In this section we provide two alternative calculations
using different assumptions.
It may seem out of place to allow such latitude in a measurement
queueing delay, and losses are determined by monitoring the average
data rate, for example by the use of a virtual queue as in [AFD].  In
such a scheme the RTT is constant and TCP's AIMD congestion control
causes the data rate to fluctuate in a sawtooth.  If the traffic is
being controlled in a manner that is consistent with the metrics
here, the goal would be to make the actual average rate equal to the
target_data_rate.
We can derive a model for Reno TCP and delayed ACK under the above
set of assumptions: for some value of Wmin, the window will sweep
from Wmin packets to 2*Wmin packets in 2*Wmin round trip times.
Unlike the queueing case where Wmin = target_pipe_size, we want the
average of Wmin and 2*Wmin to be the target_pipe_size, so the average
rate is the target rate.  Thus we want
Wmin = (2/3)*target_pipe_size.
Between losses each sawtooth delivers (1/2)(Wmin+2*Wmin)(2*Wmin)
packets in 2*Wmin round trip times.
Substituting these together we get:
target_run_length = (4/3)(target_pipe_size^2)
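As a quick sanity check of this algebra (a sketch only; exact
fractions avoid rounding artifacts, and the sample pipe size is
arbitrary):

   from fractions import Fraction

   pipe = Fraction(11)              # any sample target_pipe_size
   Wmin = Fraction(2, 3) * pipe     # Wmin = (2/3)*target_pipe_size

   # One sawtooth delivers (1/2)(Wmin+2*Wmin)(2*Wmin) packets.
   packets_per_loss = Fraction(1, 2) * (Wmin + 2*Wmin) * (2*Wmin)

   assert packets_per_loss == Fraction(4, 3) * pipe**2
   # ... which is 4/9 (~44%) of the 3*pipe^2 reference run length.
   assert packets_per_loss / (3 * pipe**2) == Fraction(4, 9)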
Note that this is 44% of the reference run length.  This makes sense
because under the assumptions in Section 5.2 the AIMD sawtooth caused
authors transform the time to reach the maximum Window size in terms
of RTT and a parameter for the multiplicative rate decrease on
observing loss, beta (whose default value is 0.2 in CUBIC).  The
expected value of Window size, E[W], is also dependent on C, a
parameter of CUBIC that determines its window-growth aggressiveness
(values from 0.01 to 4).
E[W] = ( C*(RTT/p)^3 * ((4-beta)/beta) )^(1/4)
and, further assuming Poisson arrival, the mean throughput, x, is

x = E[W]/RTT
We note that under these conditions (deterministic single losses),
the value of E[W] is always greater than 0.8 of the maximum window
size ~= reference_run_length.
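For illustration only, the expressions above can be evaluated
numerically.  The sketch below assumes the 1/4 exponent reading of
E[W] given above, takes C = 0.4 (a common CUBIC default, assumed
here rather than taken from [LMCUBIC]), and uses arbitrary sample
values for RTT and p:

   C, beta = 0.4, 0.2     # C assumed; beta default 0.2 per the text
   RTT, p = 0.050, 1/363.0  # arbitrary sample path parameters

   EW = (C * (RTT / p)**3 * ((4 - beta) / beta)) ** 0.25  # packets
   x = EW / RTT                                           # packets/s
   print(EW, x)           # roughly 14.6 packets, ~292 packets/s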
Appendix B. Complex Queueing
For many network technologies simple queueing models do not apply:
the network schedules, thins or otherwise alters the timing of ACKs
and data, generally to raise the efficiency of the channel allocation
process when confronted with relatively widely spaced small ACKs.
These efficiency strategies are ubiquitous for half duplex, wireless
and broadcast media.
Altering the ACK stream generally has two consequences: it raises the
effective bottleneck data rate, causing slowstart to burst at higher
rates (possibly as high as the sender's interface rate), and it
effectively raises the RTT by the average time that the ACKs were
delayed.  The first effect can be partially mitigated by reclocking
ACKs once they are beyond the bottleneck on the return path to the
sender, however this further raises the effective RTT.
The most extreme example of this sort of behavior would be a half
duplex channel that is not released as long as the endpoint currently
holding the channel has queued traffic.  Such environments cause self
clocked protocols under full load to revert to extremely inefficient
stop and wait behavior, where they send an entire window of data as a
single burst, followed by the entire window of ACKs on the return
path.
If a particular end-to-end path contains a link or device that alters
the ACK stream, then the entire path from the sender up to the
bottleneck must be tested at the burst parameters implied by the ACK
scheduling algorithm.  The most important parameter is the Effective
Bottleneck Data Rate, which is the average rate at which the ACKs
advance snd.una.  Note that thinning the ACKs (relying on the
cumulative nature of seg.ack to permit discarding some ACKs) implies
an effectively infinite bottleneck data rate.  It is important to
note that due to the self clock, ill conceived channel
Holding data or ACKs for channel allocation or other reasons (such as
error correction) always raises the effective RTT relative to the
minimum delay for the path.  Therefore it may be necessary to replace
target_RTT in the calculation in Section 5.2 by an effective_RTT,
which includes the target_RTT reflecting the fixed part of the path
plus a term to account for the extra delays introduced by these
mechanisms.
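A minimal sketch of this substitution, assuming a purely illustrative
10 ms average ACK holding time (the delay value and variable names
are ours, not part of the model):

   from math import ceil

   target_RTT = 0.050    # fixed part of the path, seconds
   ack_hold = 0.010      # illustrative average ACK holding time
   effective_RTT = target_RTT + ack_hold

   rate, mtu = 2.5e6, 1500
   pipe = ceil(rate * effective_RTT / (mtu * 8))  # 11 -> 13 packets
   run_length = 3 * pipe**2                       # 363 -> 507

Note how even a modest holding time inflates both the pipe size and
the required run length, which is why the extra delay term cannot be
ignored when deriving test parameters for such subpaths.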
Appendix C. Version Control
Formatted: Thu Jul 3 20:19:04 PDT 2014
Authors' Addresses
Matt Mathis
Google, Inc
1600 Amphitheater Parkway
Mountain View, California 94043
USA

Email: mattmathis@google.com