IP Performance Working Group                                   M. Mathis
Internet-Draft                                               Google, Inc
Intended status: Experimental                                  A. Morton
Expires: August 31, 2017                                       AT&T Labs
                                                       February 27, 2017

          Model Based Metrics for Bulk Transport Capacity
             draft-ietf-ippm-model-based-metrics-09.txt
Abstract
We introduce a new class of Model Based Metrics designed to assess if
a complete Internet path can be expected to meet a predefined Target
Transport Performance by applying a suite of IP diagnostic tests to
successive subpaths.  The subpath-at-a-time tests can be robustly
applied to critical infrastructure, such as network interconnections
or even individual devices, to accurately detect if any part of the
infrastructure will prevent paths traversing it from meeting the
Target Transport Performance.
Model Based Metrics rely on peer-reviewed mathematical models to
specify a Targeted Suite of IP Diagnostic tests, designed to assess
whether common transport protocols can be expected to meet a
predetermined Target Transport Performance over an Internet path.
For Bulk Transport Capacity, the IP diagnostics are built on test
streams that mimic TCP over the complete path, and on statistical
criteria for evaluating the packet transfer statistics of those
streams.  The temporal structure of the test stream (bursts, etc.)
mimics TCP or other transport protocols carrying bulk data over a
long path.  However, it is constructed to be independent of the
details of the subpath under test, end systems or applications.
Likewise the success criteria evaluate the packet transfer statistics
of the subpath against criteria determined by protocol performance
models applied to the Target Transport Performance of the complete
path.  The success criteria also do not depend on the details of the
subpath, end systems or application.
Model Based Metrics exhibit several important new properties not
present in other Bulk Transport Capacity Metrics, including the
ability to reason about concatenated or overlapping subpaths.  The
results are vantage independent which is critical for supporting
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on August 31, 2017.
Copyright Notice
Copyright (c) 2017 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
   1.  Introduction
       1.1.  Version Control
   2.  Overview
   3.  Terminology
   4.  Background
       4.1.  TCP properties
       4.2.  Diagnostic Approach
       4.3.  New requirements relative to RFC 2330
   5.  Common Models and Parameters
       5.1.  Target End-to-end parameters
       5.2.  Common Model Calculations
       5.3.  Parameter Derating
       5.4.  Test Preconditions
   6.  Generating test streams
       6.1.  Mimicking slowstart
       6.2.  Constant window pseudo CBR
       6.3.  Scanned window pseudo CBR
       6.4.  Concurrent or channelized testing
   7.  Interpreting the Results
       7.1.  Test outcomes
       7.2.  Statistical criteria for estimating run_length
       7.3.  Reordering Tolerance
   8.  IP Diagnostic Tests
       8.1.  Basic Data Rate and Packet Transfer Tests
             8.1.1.  Delivery Statistics at Paced Full Data Rate
             8.1.2.  Delivery Statistics at Full Data Windowed Rate
             8.1.3.  Background Packet Transfer Statistics Tests
       8.2.  Standing Queue Tests
             8.2.1.  Congestion Avoidance
             8.2.2.  Bufferbloat
             8.2.3.  Non excessive loss
             8.2.4.  Duplex Self Interference
       8.3.  Slowstart tests
             8.3.1.  Full Window slowstart test
             8.3.2.  Slowstart AQM test
       8.4.  Sender Rate Burst tests
       8.5.  Combined and Implicit Tests
             8.5.1.  Sustained Bursts Test
             8.5.2.  Passive Measurements
   9.  An Example
       9.1.  Observations about applicability
   10. Validation
   11. Security Considerations
   12. Acknowledgments
   13. IANA Considerations
   14. References
       14.1.  Normative References
       14.2.  Informative References
   Appendix A.  Model Derivations
       A.1.  Queueless Reno
   Appendix B.  The effects of ACK scheduling
   Appendix C.  Version Control
   Authors' Addresses
1.  Introduction
Model Based Metrics (MBM) rely on peer-reviewed mathematical models
to specify a Targeted Suite of IP Diagnostic tests, designed to
assess whether common transport protocols can be expected to meet a
predetermined Target Transport Performance over an Internet path.
This note describes the modeling framework to derive the test
parameters for assessing an Internet path's ability to support a
predetermined Bulk Transport Capacity.
yield pass/fail evaluations of the ability of standard transport
protocols to meet the specific performance objective over some
network path.
In most cases, the IP diagnostic tests can be implemented by
combining existing IPPM metrics with additional controls for
generating test streams having a specified temporal structure (bursts
or standing queues caused by constant bit rate streams, etc.) and
statistical criteria for evaluating packet transfer.  The temporal
structure of the test streams mimics transport protocol behavior over
the complete path; the statistical criteria model the transport
protocol's response to less than ideal IP packet transfer.
This note addresses Bulk Transport Capacity.  It describes an
alternative to the approach presented in "A Framework for Defining
Empirical Bulk Transfer Capacity Metrics" [RFC3148].  Other Model
Based Metrics may cover other applications and transports, such as
VoIP over UDP and RTP, and new transport protocols.
The MBM approach, mapping Target Transport Performance to a Targeted
IP Diagnostic Suite (TIDS) of IP tests, solves some intrinsic
problems with using TCP or other throughput maximizing protocols for
measurement.  All throughput maximizing protocols (and TCP congestion
control in particular) cause some level of congestion in order to
detect when they have reached the available capacity limitation of
the network.  This self inflicted congestion obscures the network
properties of interest and introduces non-linear dynamic equilibrium
behaviors that make any resulting measurements useless as metrics
because they have no predictive value for conditions or paths
different from that of the measurement itself.  To prevent these
effects it is necessary to suppress the effects of TCP congestion
control in the measurement method.  These issues are discussed at
length in Section 4.  Readers who are unfamiliar with basic
properties of TCP and TCP-like congestion control may find it easier
to start at Section 4 or Section 4.1.
A Targeted IP Diagnostic Suite does not have such difficulties.  IP
diagnostics can be constructed such that they make strong statistical
statements about path properties that are independent of the
measurement details, such as vantage and choice of measurement
points.  Model Based Metrics are designed to bridge the gap between
empirical IP measurements and expected TCP performance for multiple
standardized versions of TCP.
1.1.  Version Control
RFC Editor: Please remove this entire subsection prior to
publication.

Please send comments about this draft to ippm@ietf.org.  See
http://goo.gl/02tkD for more information including: interim drafts,
an up to date todo list and information on contributing.
Formatted: Mon Feb 27 13:49:06 PST 2017
Changes since -08 draft:
o Language, spelling and usage nits.
o Expanded the abstract to describe the models.
o Removed superfluous standards-like language.
o Removed superfluous "future technology" language.
o Interconnects -> network interconnections.
o Added more labels to Figure 1.
o Defined Bulk Transport.
o Clarified "implied bottleneck IP capacity"
o Clarified the history of the BTC metrics.
o Clarified stochastic vs non-stochastic test traffic generation.
o Reworked Fig 2 and 6.1 "Mimicking slowstart"
o Described the unsynchronized parallel stream failure case.
o Discussed how to measure devices that use virtual queues.
o Changed section 8.5.2 (Streaming Media) to be Passive
Measurements.
Changes since -07 draft:
o Sharpened the use of "statistical criteria"
o Sharpened the definition of test_window, and removed related
redundant text in several places
o Clarified "equilibrium" as "dynamic equilibrium, similar to
processes observed in chemistry"
o Properly explained "Heisenberg" as "observer effect"
o Added the observation from RFC 6576 that HW and SW congestion
scope for this document.  This terminology is defined in Section 3.
Section 4 describes some key aspects of TCP behavior and what they
imply about the requirements for IP packet transfer.  Most of the IP
diagnostic tests needed to confirm that the path meets these
properties can be built on existing IPPM metrics, with the addition
of statistical criteria for evaluating packet transfer and, in a few
cases, new mechanisms to implement the required temporal structure.
(One group of tests, the standing queue tests described in
Section 8.2, don't correspond to existing IPPM metrics, but suitable
new IPPM metrics can be patterned after the existing definitions.)
Figure 1 shows the MBM modeling and measurement framework.  The
Target Transport Performance, at the top of the figure, is determined
by the needs of the user or application, outside the scope of this
document.  For Bulk Transport Capacity, the main performance
parameter of interest is the Target Data Rate.  However, since TCP's
ability to compensate for less than ideal network conditions is
fundamentally affected by the Round Trip Time (RTT) and the Maximum
Transmission Unit (MTU) of the complete path, these parameters must
also be specified in advance based on knowledge about the intended
[Figure 1, lower portion: the ASCII diagram shows the test pattern
generation and delivery evaluation stages exchanging a test stream
via the subpath under test; the generation side emits
fail/inconclusive (traffic generation status) and the evaluation side
emits pass/fail/inconclusive (test result).]

                      Overall Modeling Framework

                               Figure 1
The mathematical models are used to determine Traffic parameters and
subsequently to design traffic patterns that mimic TCP or other
transport protocols delivering bulk data and operating at the Target
Data Rate, MTU and RTT over a full range of conditions, including
flows that are bursty at multiple time scales.  The traffic patterns
are generated based on the three Target parameters of the complete
path and independent of the properties of individual subpaths, using
the techniques described in Section 6.  As much as possible the test
streams are generated deterministically (precomputed) to minimize the
extent to which test methodology, measurement points, measurement
vantage or path partitioning affect the details of the measurement
traffic.
Section 7 describes packet transfer statistics and methods to test
them against the statistical criteria provided by the mathematical
models.  Since the statistical criteria typically apply to the
complete path (a composition of subpaths) [RFC6049], in situ testing
requires that the end-to-end statistical criteria be apportioned as
separate criteria for each subpath.  Subpaths that are expected to be
bottlenecks would then be permitted to contribute a larger fraction
of the end-to-end packet loss budget.  In compensation, subpaths that
do not exhibit bottlenecks must be constrained to contribute less
packet loss.  Thus the statistical criteria for each subpath in each
test of a TIDS is an apportioned share of the end-to-end statistical
criteria for the complete path which was determined by the
mathematical model.
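
As an illustration, the following sketch (Python; the helper name and
weighting scheme are hypothetical, not part of this specification)
apportions an end-to-end loss ratio budget across subpaths in
proportion to assumed weights:

   def apportion_loss_budget(e2e_loss_ratio, weights):
       """Split an end-to-end packet loss budget across subpaths.

       Each subpath i is allowed weights[i]/sum(weights) of the
       budget, so the per-subpath allowances accumulate to exactly
       the end-to-end criterion.
       """
       total = sum(weights)
       return [e2e_loss_ratio * w / total for w in weights]

   # Example: a 0.01% end-to-end loss budget; the expected
   # bottleneck subpath is permitted half of it, two other
   # subpaths a quarter each.
   print(apportion_loss_budget(1e-4, [2, 1, 1]))
   # -> [5e-05, 2.5e-05, 2.5e-05]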
Section 8 describes the suite of individual tests needed to verify
all of the required IP delivery properties.  A subpath passes if and
only if all of the individual IP diagnostic tests pass.  Any subpath
that fails any test indicates that some users are likely to fail to
attain their Target Transport Performance under some conditions.  In
addition to passing or failing, a test can be deemed to be
inconclusive for a number of reasons including: the precomputed
traffic pattern was not accurately generated; the measurement results
were not statistically significant; and others such as failing to
meet some required test preconditions.  If all tests pass but some
are inconclusive, then the entire suite is deemed to be inconclusive.
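
The aggregation rule above reduces to a few lines of code; the
following Python sketch is purely illustrative:

   def suite_outcome(results):
       """Aggregate per-test outcomes into a TIDS verdict.

       results: iterable of 'pass', 'fail' or 'inconclusive'.
       Any failing test fails the entire suite; otherwise any
       inconclusive test makes the entire suite inconclusive.
       """
       if any(r == 'fail' for r in results):
           return 'fail'
       if any(r == 'inconclusive' for r in results):
           return 'inconclusive'
       return 'pass'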
In Section 9 we present an example TIDS that might be representative
of High Definition (HD) video, and illustrate how Model Based Metrics
can be used to address difficult measurement situations, such as
confirming that inter-carrier exchanges have sufficient performance
and capacity to deliver HD video between ISPs.
Since there is some uncertainty in the modeling process, Section 10
describes a validation procedure to diagnose and minimize false
positive and false negative results.
3.  Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119]. document are to be interpreted as described in [RFC2119].
Terms containing underscores (rather than spaces) appear in equations
and typically have algorithmic definitions.
General Terminology:
Target: A general term for any parameter specified by or derived
   from the user's application or transport performance requirements.

Target Transport Performance: Application or transport performance
   target values for the complete path.  For Bulk Transport Capacity
   defined in this note the Target Transport Performance includes the
   Target Data Rate, Target RTT and Target MTU as described below.
Target Data Rate: The specified application data rate required for
   an application's proper operation.  Conventional Bulk Transport
   Capacity (BTC) metrics are focused on the Target Data Rate,
   however these metrics had little or no predictive value because
   they do not consider the effects of the other two parameters of
   the Target Transport Performance, the RTT and MTU of the complete
   path.
Target RTT (Round Trip Time): The specified baseline (minimum) RTT
   of the longest complete path over which the user expects to be
   able to meet the target performance.  TCP and other transport
   protocols' ability to compensate for path problems is generally
   proportional to the number of round trips per second.  The Target
   RTT determines both key parameters of the traffic patterns (e.g.
   burst sizes) and the thresholds on acceptable IP packet transfer
   statistics.  The Target RTT must be specified considering
   appropriate packet sizes: MTU sized packets on the forward path,
   ACK sized packets (typically header_overhead) on the return path.
   Note that Target RTT is specified and not measured; MBM
   measurements derived for a given target_RTT will be applicable to
   any path with a smaller RTT.
Target MTU (Maximum Transmission Unit): The specified maximum MTU
   supported by the complete path over which the application expects
   to meet the target performance.  In this document assume a 1500
   Byte MTU unless otherwise specified.  If some subpath has a
   smaller MTU, then it becomes the Target MTU for the complete path,
   and all model calculations and subpath tests must use the same
   smaller MTU.
Targeted IP Diagnostic Suite (TIDS): A set of IP diagnostic tests
   designed to determine if an otherwise ideal complete path
   containing the subpath under test can sustain flows at a specific
   target_data_rate using target_MTU sized packets when the RTT of
   the complete path is target_RTT.
Fully Specified Targeted IP Diagnostic Suite (FS-TIDS): A TIDS
   together with additional specification such as measurement packet
   type ("type-p" [RFC2330]), etc. which are out of scope for this
   document, but need to be drawn from other standards documents.
Bulk Transport Capacity: Bulk Transport Capacity Metrics evaluate an
   Internet path's ability to carry bulk data, such as large files,
   streaming (non-real time) video, and under some conditions, web
   images and other content.  Prior efforts to define BTC metrics
   have been based on [RFC3148], which predates our understanding of
   TCP and the requirements described in Section 4.  In general "Bulk
   Transport" indicates that performance is determined by the
   interplay between the network, cross traffic and congestion
   control in the transport protocol.  It excludes situations where
   performance is dominated by the RTT alone (e.g. transactions) or
   bottlenecks elsewhere, such as in the application itself.
IP diagnostic tests: Measurements or diagnostics to determine if
   packet transfer statistics meet some precomputed target.
traffic patterns: The temporal patterns or burstiness of traffic
   generated by applications over transport protocols such as TCP.
   There are several mechanisms that cause bursts at various time
   scales as described in Section 4.1.  Our goal here is to mimic the
   range of common patterns (burst sizes and rates, etc), without
   tying our applicability to specific applications, implementations
   or technologies, which are sure to become stale.
packet transfer statistics: Raw, detailed or summary statistics
   about packet transfer properties of the IP layer including packet
   losses, ECN Congestion Experienced (CE) marks, reordering, or any
apportioned: To divide and allocate, for example budgeting packet
   loss across multiple subpaths such that the losses will accumulate
   to less than a specified end-to-end loss ratio.  Apportioning
   metrics is essentially the inverse of the process described in
   [RFC5835].
open loop: A control theory term used to describe a class of
   techniques where systems that naturally exhibit circular
   dependencies can be analyzed by suppressing some of the
   dependencies, such that the resulting dependency graph is acyclic.
Terminology about paths, etc.  See [RFC2330] and [RFC7398] for
existing terms and definitions.
data sender: Host sending data and receiving ACKs.

data receiver: Host receiving data and sending ACKs.

complete path: The end-to-end path from the data sender to the data
   receiver.
subpath: A portion of the complete path.  Note that there is no
   requirement that subpaths be non-overlapping.  A subpath can be as
   small as a single device, link or interface.
measurement point: Measurement points as described in [RFC7398].

test path: A path between two measurement points that includes a
   subpath of the complete path under test.  If the measurement
   points are off path, the test path may include "test leads"
   between the measurement points and the subpath.
dominant bottleneck: The bottleneck that generally determines most
   of packet transfer statistics for the entire path.  It typically
   timing of this data.  See Section 4.1 and Appendix B for more
   details.
implied bottleneck IP capacity: This is the bottleneck IP capacity
   implied by the ACKs returning from the receiver.  It is determined
   by looking at how much application data the ACK stream at the
   sender reports delivered to the data receiver per unit time at
   various time scales.  If the return path is thinning, batching or
   otherwise altering the ACK timing, the implied bottleneck IP
   capacity over short time scales might be substantially larger than
   the bottleneck IP capacity averaged over a full RTT.  Since TCP
   derives its clock from the data delivered through the bottleneck,
   the front path must have sufficient buffering to absorb any data
   bursts at the dimensions (size and IP rate) implied by the ACK
   stream, which are potentially doubled during slowstart.  If the
   return path is not altering the ACK stream, then the implied
   bottleneck IP capacity will be the same as the bottleneck IP
   capacity.  See Section 4.1 and Appendix B for more details; a
   sketch of this computation appears after these definitions.
sender interface rate: The IP rate which corresponds to the IP
   capacity of the data sender's interface.  Due to sender efficiency
   algorithms including technologies such as TCP segmentation offload
   (TSO), nearly all modern servers deliver data in bursts at full
   interface link rate.  Today 1 or 10 Gb/s are typical.
Header_overhead: The IP and TCP header sizes, which are the portion
   of each MTU not available for carrying application payload.
   Without loss of generality this is assumed to be the size for
   returning acknowledgments (ACKs).  For TCP, the Maximum Segment
   Size (MSS) is the Target MTU minus the header_overhead.
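
The following sketch shows one plausible way to estimate the implied
bottleneck IP capacity from a captured ACK stream, per the definition
above.  The function name and data layout are assumptions made for
illustration; this is not a normative algorithm:

   def implied_bottleneck_ip_capacity(acks, interval):
       """Estimate implied bottleneck IP capacity from an ACK stream.

       acks: list of (arrival_time, cumulative_bytes_acked) pairs
       observed at the data sender, sorted by time.  Returns the
       highest delivery rate (bytes/second) seen over any span of up
       to `interval` seconds; short intervals expose the burst
       dimensions that the self clock will reproduce.
       """
       best = 0.0
       j = 0
       for i in range(len(acks)):
           while acks[i][0] - acks[j][0] > interval:
               j += 1
           if i > j:
               span = acks[i][0] - acks[j][0]
               best = max(best, (acks[i][1] - acks[j][1]) / span)
       return best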
Basic parameters common to models and subpath tests are defined here
and are described in more detail in Section 5.2.  Note that these are
mixed between application transport performance (excludes headers)
and IP performance (which includes TCP headers and retransmissions as
part of the IP payload).
Window [size]: The total quantity of data carried by packets in-
   flight plus the data represented by ACKs circulating in the
   network is referred to as the window.  See Section 4.1.  Sometimes
   used with other qualifiers (congestion window, cwnd or receiver
   window) to indicate which mechanism is controlling the window.
pipe size: A general term for the number of packets needed in flight
   (the window size) to exactly fill some network path or subpath.
   It corresponds to the window size which maximizes network power,
   the observed data rate divided by the observed RTT.  Often used
   with additional qualifiers to specify which path, or under what
   conditions, etc.
target_window_size: The average number of packets in flight (the
   window size) needed to meet the Target Data Rate for the specified
   Target RTT and MTU.  It implies the scale of the bursts that the
   network might experience.
reference target_run_length: target_run_length computed precisely by
   the method in Section 5.2.  This is likely to be slightly more
   conservative than required by modern TCP implementations.
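
For concreteness, the following Python sketch computes both
quantities using the formulas of Section 5.2 (the ceiling of
rate*RTT/MSS for the window, and the Reno-derived 3*W^2 reference run
length); the helper names and example numbers are illustrative
assumptions:

   import math

   def target_window_size(target_rate, target_rtt, target_mtu,
                          header_overhead=40):
       """Average packets in flight needed to sustain target_rate.

       target_rate is in bytes/second, target_rtt in seconds,
       target_mtu and header_overhead in bytes
       (MSS = MTU - header_overhead).
       """
       mss = target_mtu - header_overhead
       return math.ceil(target_rate * target_rtt / mss)

   def reference_target_run_length(window):
       """Reno-derived reference run length: roughly 3*W^2 packets
       must be delivered between losses (see Appendix A)."""
       return 3 * window ** 2

   # Example: 10 Mb/s (1.25e6 B/s) over a 100 ms, 1500 B MTU path.
   w = target_window_size(1.25e6, 0.100, 1500)   # -> 86 packets
   print(reference_target_run_length(w))         # -> 22188 packets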
Ancillary parameters used for some tests:
derating: Under some conditions the standard models are too
   conservative.  The modeling framework permits some latitude in
   relaxing or "derating" some test parameters, as described in
   Section 5.3, in exchange for more stringent TIDS validation
   procedures, described in Section 10.  Models can be derated by
   including a multiplicative derating factor to make tests less
   stringent.
subpath_IP_capacity: The IP capacity of a specific subpath.

test path: A subpath of a complete path under test.
test_path_RTT: The RTT observed between two measurement points using
   packet sizes that are consistent with the transport protocol.
   This is generally MTU sized packets of the forward path,
   header_overhead sized packets on the return path.
test_path_pipe: The pipe size of a test path.  Nominally the
   test_path_RTT times the test path IP_capacity.
test_window: The smallest window sufficient to meet or exceed the
   target_rate when operating with a pure self clock over a test
   path.  The test_window is typically given by
   ceiling(target_data_rate*test_path_RTT/(target_MTU-
   header_overhead)) but see the discussion in Appendix B about the
   effects of channel scheduling on RTT.  On some test paths the
   test_window may need to be adjusted slightly to compensate for the
   RTT being inflated by the devices that schedule packets.
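
Note that this is the same calculation as target_window_size in the
sketch above, with test_path_RTT substituted for the Target RTT.
Continuing that illustrative example, a 30 ms test path at the same
10 Mb/s target gives target_window_size(1.25e6, 0.030, 1500) = 26
packets.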
The terminology below is used to define temporal patterns for test
streams.  These patterns are designed to mimic TCP behavior, as
described in Section 4.1.
packet headway: Time interval between packets, specified from the
   start of one to the start of the next.  e.g. If packets are sent
   with a 1 ms headway, there will be exactly 1000 packets per
   second.

burst headway: Time interval between bursts, specified from the
   start of the first packet of one burst to the start of the first
   packet of the next burst.  e.g. If 4 packet bursts are sent with a
   1 ms burst headway, there will be exactly 4000 packets per second.
paced single packets: Send individual packets at the specified rate
   or packet headway.
paced bursts: Send bursts on a timer.  Specify any 3 of: average
   data rate, packet size, burst size (number of packets) and burst
   headway (burst start to start); the fourth is then determined (see
   the sketch following these definitions).  By default the bursts
   are assumed to occur at full sender interface rate, such that the
   packet headway within each burst is the minimum supported by the
   sender's interface.  Under some conditions it is useful to
   explicitly specify the packet headway within each burst.
slowstart rate: Mimic TCP slowstart by sending 4 packet paced bursts
   at an average data rate equal to twice the implied bottleneck IP
   capacity (but not more than the sender interface rate).  This is a
   two level burst pattern described in more detail in Section 6.1.
   If the implied bottleneck IP capacity is more than half of the
   sender interface rate, slowstart rate becomes sender interface
   rate.
slowstart burst: Mimic one round of TCP slowstart by sending a
   specified number of packets in a two level burst pattern that
   resembles slowstart.
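
The relationships among the paced burst parameters, and the
slowstart rate cap described above, reduce to simple arithmetic.  The
following sketch is illustrative only; the helper names are not
drawn from this draft:

   def burst_headway(data_rate, packet_size, burst_size):
       """Burst start-to-start interval (seconds) that yields the
       requested average data rate: burst_size packets of
       packet_size bytes are sent every headway seconds."""
       return burst_size * packet_size / data_rate

   def slowstart_rate(implied_bottleneck_capacity,
                      sender_interface_rate):
       """Average rate for 4-packet slowstart-mimicking bursts:
       twice the implied bottleneck IP capacity, but never more
       than the sender interface rate."""
       return min(2 * implied_bottleneck_capacity,
                  sender_interface_rate)

   # Example: 4-packet bursts of 1500 B packets at an average
   # 48 Mb/s (6e6 B/s) must start every 1 ms.
   print(burst_headway(6e6, 1500, 4))   # -> 0.001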
   consequence of cross traffic, additional presented load or the
   actions of other network users.  By definition, capacity tests
   also consume significant network resources (data capacity and/or
   queue buffer space), and the test schedules must be balanced by
   their cost.
Monitoring tests: Monitoring tests are designed to capture the most
   important aspects of a capacity test, but without presenting
   excessive ongoing load themselves.  As such they may miss some
   details of the network's performance, but can serve as a useful
   reduced-cost proxy for a capacity test, for example to support
   continuous production network monitoring.
Engineering tests: Engineering tests evaluate how network algorithms
   (such as AQM and channel allocation) interact with TCP-style self
   clocked protocols and adaptive congestion control based on packet
   loss and ECN Congestion Experienced (CE) marks.  These tests are
   likely to have complicated interactions with cross traffic and
   under some conditions can be inversely sensitive to load.  For
   example a test to verify that an AQM algorithm causes ECN CE marks
   or packet drops early enough to limit queue occupancy may
   experience a false pass result in the presence of cross traffic.
   It is important that engineering tests be performed under a wide
   range of conditions, including both in situ and bench testing, and
   over a wide variety of load conditions.  Ongoing monitoring is
   less likely to be useful for engineering tests, although sparse in
   situ testing might be appropriate.
4.  Background
At the time the "Framework for IP Performance Metrics" [RFC2330] was
published (1998), sound Bulk Transport Capacity (BTC) measurement was
known to be well beyond our capabilities.  Even when Framework for
Empirical BTC Metrics [RFC3148] was published, we knew that we didn't
really understand the problem.  Now, in hindsight, we understand why
assessing BTC is such a hard problem:
o  TCP is a control system with circular dependencies - everything
   affects performance, including components that are explicitly not
   part of the test (for example, the host processing power is not
   in-scope of path performance tests).
o  Congestion control is a dynamic equilibrium process, similar to
   processes observed in chemistry and other fields.  The network and
   transport protocols find an operating point which balances between
   opposing forces: the transport protocol pushing harder (raising
   the data rate and/or window) while the network pushes back
   (raising packet loss ratio, RTT and/or ECN CE marks).  By design
   TCP congestion control keeps raising the data rate until the
   network gives some indication that its capacity has been exceeded
   by dropping packets or adding ECN CE marks.  If a TCP sender
   accurately fills a path to its IP capacity (e.g. the bottleneck is
   100% utilized), then packet losses and ECN CE marks are mostly
   determined by the TCP sender and how aggressively it seeks
   additional capacity, and not the network itself, since the network
   must send exactly the signals that TCP needs to set its rate.
o  TCP's ability to compensate for network impairments (such as loss,
   delay and delay variation, outside of those caused by TCP itself)
   is directly proportional to the number of send-ACK round trip
   exchanges per second (i.e. inversely proportional to the RTT).  As
   a consequence an impaired subpath may pass a short RTT local test
   even though it fails when the subpath is extended by an
   effectively perfect network to some larger RTT.
   measured particles.  For network measurement you cannot in general
   determine even the order of magnitude of the effect.  It is
   possible to construct measurement scenarios where the measurement
   traffic starves real user traffic, yielding an overly inflated
   measurement.  The inverse is also possible: the user traffic can
   fill the network, such that the measurement traffic detects only
   minimal available capacity.  You cannot in general determine which
   scenario might be in effect, so you cannot gauge the relative
   magnitude of the uncertainty introduced by interactions with other
   network traffic.
o  As a consequence of the properties listed above it is difficult,
   if not impossible, for two independent implementations (HW or SW)
   of TCP congestion control to produce equivalent performance
   results [RFC6576] under the same network conditions.
These properties are a consequence of the dynamic equilibrium
behavior intrinsic to how all throughput maximizing protocols
interact with the Internet.  These protocols rely on control systems
based on estimated network metrics to regulate the quantity of data
to send into the network.  The packet sending characteristics in turn
alter the network properties estimated by the control system metrics,
such that there are circular dependencies between every transmission
characteristic and every estimated metric.  Since some of these
dependencies are nonlinear, the entire system is nonlinear, and any
change anywhere causes a difficult-to-predict response in network
metrics.  As a consequence Bulk Transport Capacity metrics have
entirely thwarted the analytic framework envisioned in [RFC2330].
Model Based Metrics overcome these problems by making the measurement
system open loop: the packet transfer statistics (akin to the network
estimators) do not affect the traffic or traffic patterns (bursts),
which are computed on the basis of the Target Transport Performance.
A path or subpath meeting the Target Transport Performance
requirements would exhibit packet transfer statistics and estimated
metrics that would not cause the control system to slow the traffic
below the Target Data Rate.
4.1.  TCP properties
TCP and other self clocked protocols (e.g. SCTP) carry the vast
majority of all Internet data.  Their dominant bulk data transport
behavior is to have an approximately fixed quantity of data and
acknowledgments (ACKs) circulating in the network.  The data receiver
reports arriving data by returning ACKs to the data sender; the data
sender typically responds by sending approximately the same quantity
of data back into the network.  The total quantity of data plus the
data represented by ACKs circulating in the network is referred to as
the window.  The mandatory congestion control algorithms
incrementally adjust the window by sending slightly more or less data
in response to each ACK.  The fundamentally important property of
this system is that it is self clocked: the data transmissions are a
reflection of the ACKs that were delivered by the network, and the
ACKs are a reflection of the data arriving from the network.
A number of protocol features cause bursts of data, even in idealized
networks that can be modeled as simple queuing systems.
During slowstart the IP rate is doubled on each RTT by sending twice
as much data as was delivered to the receiver during the prior RTT.
Each returning ACK causes the sender to transmit twice the data the
ACK reported arriving at the receiver.  For slowstart to be able to
fill the pipe, the network must be able to tolerate slowstart bursts
up to the full pipe size inflated by the anticipated window reduction
on the first loss or ECN CE mark.
than the Target RTT and equal to or larger than the Target MTU
respectively, is expected to be able to attain a specified Bulk
Transport Capacity when all of the following conditions are met:
1.  The IP capacity is above the Target Data Rate by sufficient
    margin to cover all TCP/IP overheads.  This can be confirmed by
    the tests described in Section 8.1 or any number of IP capacity
    tests adapted to implement MBM.
2.  The observed packet transfer statistics are better than required
    by a suitable TCP performance model (e.g. fewer packet losses or
    ECN CE marks).  See Section 8.1 or any number of low or fixed
    rate packet loss tests outside of MBM.
3.  There is sufficient buffering at the dominant bottleneck to
    absorb a slowstart burst large enough to get the flow out of
    slowstart at a suitable window size.  See Section 8.3.
4.  There is sufficient buffering in the front path to absorb and
    smooth sender interface rate bursts at all scales that are likely
    to be generated by the application, any channel arbitration in
    the ACK path or any other mechanisms.  See Section 8.4.
5.  When there is a slowly rising standing queue at the bottleneck
    the onset of packet loss has to be at an appropriate point (time
    or queue depth) and progressive [RFC7567].  See Section 8.2.
6.  When there is a standing queue at a bottleneck for a shared media
    subpath (e.g. half duplex), there must be suitable bounds on the
    interaction between ACKs and data, for example due to the channel
    arbitration mechanism.  See Section 8.2.4.
Note that conditions 1 through 4 require capacity tests for
validation, and thus may need to be monitored on an ongoing basis.
Conditions 5 and 6 require engineering tests, which are best
performed in controlled environments such as a bench test.  They
won't generally fail due to load, but may fail in the field due to
configuration errors, etc., and should be spot checked.
A tool that can perform many of the tests is available from
[MBMSource].
4.3.  New requirements relative to RFC 2330
Model Based Metrics are designed to fulfill some additional
requirements that were not recognized at the time RFC 2330 was
written [RFC2330].  These missing requirements may have significantly
contributed to policy difficulties in the IP measurement space.  Some
additional requirements are:
o  IP metrics must be actionable by the ISP - they have to be
The Target Transport Performance is used to derive the
target_window_size and the reference target_run_length.
The target_window_size is the average window size in packets needed
to meet the target_rate, for the specified target_RTT and target_MTU.
It is given by:
target_window_size = ceiling( target_rate * target_RTT / ( target_MTU
- header_overhead ) )
Target_run_length is an estimate of the minimum required number of
unmarked packets that must be delivered between losses or ECN
Congestion Experienced (CE) marks, as computed by a mathematical
model of TCP congestion control.  The derivation here follows
[MSMO97], and by design is quite conservative.
Reference target_run_length is derived as follows: assume the
subpath_IP_capacity is infinitesimally larger than the
target_data_rate plus the required header_overhead.  Then
target_window_size also predicts the onset of queuing.  A larger
window will cause a standing queue at the bottleneck.
Assume the transport protocol is using standard Reno style Additive
Increase, Multiplicative Decrease (AIMD) congestion control [RFC5681]
(but not Appropriate Byte Counting [RFC3465]) and the receiver is
using standard delayed ACKs.  Reno increases the window by one packet
every pipe_size worth of ACKs.  With delayed ACKs this takes 2 Round
Trip Times per increase.  To exactly fill the pipe, the spacing of
losses must be no closer than when the peak of the AIMD sawtooth
reached exactly twice the target_window_size.  Otherwise, the
multiplicative window reduction triggered by the loss would cause the
network to be under-filled.  Following [MSMO97] the number of packets
between losses must be the area under the AIMD sawtooth.  They must
be no more frequent than every 1 in
((3/2)*target_window_size)*(2*target_window_size) packets, which
simplifies to:
target_run_length = 3*(target_window_size^2)
Note that this calculation is very conservative and is based on a
number of assumptions that may not apply.  Appendix A discusses these
assumptions and provides some alternative models.  If a different
model is used, a FS-TIDS must document the actual method for
computing target_run_length and the ratio between the alternate
target_run_length and the reference target_run_length calculated
above, along with a discussion of the rationale for the underlying
assumptions.
These two parameters, target_window_size and target_run_length,
directly imply most of the individual parameters for the tests in
Section 8.
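The derivation above is mechanical once the target parameters are
chosen.  The following sketch computes both parameters directly from
the formulas above.  It is illustrative only: the parameter values
are hypothetical, and the header_overhead depends on the actual
protocol options in use.

   import math

   # Hypothetical Target Transport Performance parameters:
   target_rate = 10e6       # Target Data Rate: 10 Mb/s
   target_RTT = 0.1         # Target RTT: 100 ms
   target_MTU = 1500        # Target MTU, bytes
   header_overhead = 52     # assumed TCP/IP header bytes per packet

   # Average window (packets) needed to meet target_rate at
   # target_RTT.
   payload_bits = (target_MTU - header_overhead) * 8
   target_window_size = math.ceil(target_rate * target_RTT
                                  / payload_bits)

   # Reference run length: the area under one Reno AIMD sawtooth.
   target_run_length = 3 * target_window_size ** 2

   print(target_window_size)   # 87 packets
   print(target_run_length)    # 22707 packets between marks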
5.3.  Parameter Derating
Since some aspects of the models are very conservative, the MBM
framework permits some latitude in derating test parameters.  Rather
than trying to formalize more complicated models we permit some test
parameters to be relaxed as long as they meet some additional
procedural constraints:
o  The FS-TIDS must document and justify the actual method used to
   compute the derated metric parameters.
o  The validation procedures described in Section 10 must be used to
   demonstrate the feasibility of meeting the Target Transport
   Performance with infrastructure that infinitesimally passes the
   derated tests.
o  The validation process for a FS-TIDS itself must be documented in
   such a way that other researchers can duplicate the validation
   experiments.
Except as noted, all tests below assume no derating.  Tests where
there is not currently a well established model for the required
parameters explicitly include derating as a way to indicate
flexibility in the parameters.
5.4.  Test Preconditions
Many tests have preconditions which are required to assure their
validity.  Examples include: the presence or non-presence of cross
traffic on specific subpaths; negotiating ECN; and an appropriate
preamble packet stream sent before testing to put reactive network
elements into the proper states [RFC7312].  If preconditions are not
properly satisfied for some reason, the tests should be considered
to be inconclusive.  In general it is useful to preserve diagnostic
information as to why the preconditions were not met, and any test
data that was collected even if it is not useful for the intended
test.  Such diagnostic information and partial test data may be
useful for improving the test or test procedures themselves.
It is important to preserve the record that a test was scheduled,
because otherwise precondition enforcement mechanisms can introduce
sampling bias.  For example, canceling tests due to cross traffic on
subscriber access links might introduce sampling bias in tests of the
rest of the network by reducing the number of tests during peak
network load.
Test preconditions and failure actions must be specified in a
FS-TIDS.
6.  Generating test streams
Many important properties of Model Based Metrics, such as vantage
independence, are a consequence of using test streams that have
temporal structures that mimic TCP or other transport protocols
running over a complete path.  As described in Section 4.1, self
clocked protocols naturally have burst structures related to the RTT
and pipe size of the complete path.  These bursts naturally get
larger (contain more packets) as either the Target RTT or Target Data
Rate get larger, or the Target MTU gets smaller.  An implication of
these relationships is that test streams generated by running self
clocked protocols over short subpaths may not adequately exercise the
queuing at any bottleneck to determine if the subpath can support the
full Target Transport Performance over the complete path.
Failing to authentically mimic TCP's temporal structure is part of
the reason why simple performance tools such as iPerf, netperf, nc,
etc have the reputation of yielding false pass results over short
test paths, even when some subpath has a flaw.
The definitions in Section 3 are sufficient for most test streams.
We describe the slowstart and standing queue test streams in more
detail.
In conventional measurement practice stochastic processes are used to
eliminate many unintended correlations and sample biases.  However
MBM tests are designed to explicitly mimic temporal correlations
caused by network or protocol elements themselves.  Some portions of
these systems, such as traffic arrival (test scheduling), are
naturally stochastic.  Other behaviors, such as back-to-back packet
transmissions, are dominated by implementation specific deterministic
effects.  Although these behaviors always contain non-deterministic
elements and might be modeled stochastically, these details typically
do not contribute significantly to the overall system behavior.
Furthermore, it is known that real protocols are subject to failures
caused by network property estimators suffering from bias due to
correlation in their own traffic.  For example TCP's RTT estimator
used to determine the Retransmit Time Out (RTO) can be fooled by
periodic cross traffic or start-stop applications.  For these reasons
many details of the test streams are specified deterministically.
It may prove useful to introduce fine grained noise sources into the
models used for generating test streams in an update of Model Based
Metrics, but the complexity is not warranted at the time this
document was written.
6.1.  Mimicking slowstart
TCP slowstart has a two level burst structure as shown in Figure 2.
The fine time structure is caused by efficiency algorithms that
deliberately batch work (CPU, channel allocation, etc) to better
amortize certain network and host overheads.  ACKs passing through
the return path typically cause the sender to transmit small bursts
of data at full sender interface rate.  For example TCP Segmentation
Offload (TSO) and Delayed Acknowledgment both contribute to this
effect.  During slowstart these bursts are at the same headway as the
returning ACKs, but are typically twice as large (e.g. having twice
as much data) as the ACK reported was delivered to the receiver.  Due
to variations in delayed ACK and algorithms such as Appropriate Byte
Counting [RFC3465], different pairs of senders and receivers produce
slightly different burst patterns.  Without loss of generality, we
assume each ACK causes 4 packet sender interface rate bursts at an
average headway equal to the ACK headway, and corresponding to
sending at an average rate equal to twice the effective bottleneck IP
rate.  Each slowstart burst consists of a series of 4 packet sender
interface rate bursts such that the total number of packets is the
current window size (as of the last packet in the burst).
The coarse time structure is due to each RTT being a reflection of
the prior RTT.  For real transport protocols, each slowstart burst is
twice as large (twice the window) as the previous burst but is spread
out in time by the network bottleneck, such that each successive RTT
exhibits the same effective bottleneck IP rate.  The slowstart phase
ends on the first lost packet or ECN mark, which is intended to
happen after successive slowstart bursts merge in time: the next
burst starts before the bottleneck queue is fully drained and the
prior burst is complete.

For diagnostic tests described below we preserve the fine time
structure but manipulate the coarse structure of the slowstart bursts
(burst size and headway) to measure the ability of the dominant
bottleneck to absorb and smooth slowstart bursts.
Note that a stream of repeated slowstart bursts has three different
average rates, depending on the averaging time interval.  At the
finest time scale (a few packet times at the sender interface) the
peak of the average IP rate is the same as the sender interface rate;
at a medium timescale (a few ACK times at the dominant bottleneck)
the peak of the average IP rate is twice the implied bottleneck IP
capacity; and at time scales longer than the target_RTT and when the
burst size is equal to the target_window_size, the average rate is
equal to the target_data_rate.  This pattern corresponds to repeating
the last RTT of TCP slowstart when delayed ACK and sender side byte
counting are present but without the limits specified in Appropriate
Byte Counting [RFC3465].
time ==> ( - equals one packet)

Fine time structure of the packet stream:

---- ---- ---- ---- ----

|<>| sender interface rate bursts (typically 3 or 4 packets)
|<===>| burst headway (from the ACK headway)

\____repeating sender______/
  rate bursts

Coarse (RTT level) time structure of the packet stream:

---- ---- ---- ---- ----                          ---- ---- ...

|<========================>| slowstart burst size (from the window)
|<==============================================>| slowstart headway
                                                    (from the RTT)
\__________________________/                      \_________ ...
 one slowstart burst                       Repeated slowstart bursts

Multiple levels of Slowstart Bursts

Figure 2
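The burst pattern in Figure 2 can also be expressed as a packet
departure schedule.  The sketch below is a hypothetical helper (not
part of any specified tool) that emits send times for one slowstart
burst: 4 packet sender interface rate bursts repeated at a headway
chosen so that the average rate is twice the implied bottleneck IP
capacity.

   def slowstart_burst_times(window, packet_size, interface_rate,
                             bottleneck_rate):
       """Departure times (seconds) for one slowstart burst of
       `window` packets: groups of 4 leave back-to-back at the
       sender interface rate; groups repeat at a headway giving an
       average rate of twice the bottleneck IP rate."""
       pkt_headway = packet_size * 8 / interface_rate
       group_headway = 4 * packet_size * 8 / (2 * bottleneck_rate)
       times = []
       for i in range(window):
           group, pos = divmod(i, 4)
           times.append(group * group_headway + pos * pkt_headway)
       return times

   # Hypothetical values: 1500 byte packets, 1 Gb/s sender
   # interface, 100 Mb/s implied bottleneck IP capacity.
   print(slowstart_burst_times(8, 1500, 1e9, 100e6))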
6.2.  Constant window pseudo CBR
Implement pseudo constant bit rate by running a standard self clocked
protocol such as TCP with a fixed window size.  If that window size
is test_window, the data rate will be slightly above the target_rate.
Since the test_window is constrained to be an integer number of
packets, for small RTTs or low data rates there may not be
sufficiently precise control over the data rate.  Rounding the
test_window up (the default) is likely to result in data rates that
are higher than the target rate, but reducing the window by one
packet may result in data rates that are too small.  Also cross
traffic potentially raises the RTT, implicitly reducing the rate.
Cross traffic that raises the RTT nearly always makes the test more
strenuous (more demanding for the network path).  A FS-TIDS
specifying a constant window CBR test must explicitly indicate under
what conditions errors in the data rate cause tests to be
inconclusive.
Since constant window pseudo CBR testing is sensitive to RTT
fluctuations it will be less accurate at controlling the data rate in
environments with fluctuating delays.  Conventional paced measurement
traffic may be more appropriate for these environments.
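The rounding issue above is easy to quantify.  The following sketch
uses hypothetical values and computes test_window by rounding up, as
is the default, to show how coarse the rate control becomes at small
RTTs:

   import math

   def pseudo_cbr_rate(target_rate, rtt, mtu=1500,
                       header_overhead=52):
       """test_window (packets) and the data rate actually implied
       by clamping a transport to that integer window."""
       payload_bits = (mtu - header_overhead) * 8
       test_window = math.ceil(target_rate * rtt / payload_bits)
       return test_window, test_window * payload_bits / rtt

   # Hypothetical: 4 Mb/s target over a 10 ms RTT subpath.
   w, rate = pseudo_cbr_rate(4e6, 0.010)
   print(w, rate)   # 4 packets -> ~4.63 Mb/s, ~16% above target

With a window of only a few packets, a one packet rounding error
changes the data rate by tens of percent, which is why a FS-TIDS has
to state when such errors make the test inconclusive.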
6.3.  Scanned window pseudo CBR
Scanned window pseudo CBR is similar to the constant window CBR
described above, except the window is scanned across a range of sizes
The test procedures in Section 8.2 describe how to partition the
scans into regions and how to interpret the results.
6.4.  Concurrent or channelized testing
The procedures described in this document are only directly
applicable to single stream measurement, e.g. one TCP connection or
measurement stream.  In an ideal world, we would disallow all
performance claims based on multiple concurrent streams, but this is
not practical due to at least two issues.  First, many very high rate
link technologies are channelized and at least partially pin the flow
to channel mapping to minimize packet reordering within flows.
Second, TCP itself has scaling limits.  Although the former problem
might be overcome through different design decisions, the latter
problem is more deeply rooted.
All congestion control algorithms that are philosophically aligned
with the standard [RFC5681] (e.g. claim some level of TCP
compatibility, friendliness or fairness) have scaling limits, in the
sense that as a long fast network (LFN) with a fixed RTT and MTU gets
faster, these congestion control algorithms get less accurate and as
a consequence have difficulty filling the network [CCscaling].  These
properties are a consequence of the original Reno AIMD congestion
control design and the requirement in [RFC5681] that all transport
protocols have similar responses to congestion.
There are a number of reasons to want to specify performance in terms
of multiple concurrent flows, however this approach is not
recommended for data rates below several megabits per second, which
can be attained with run lengths under 10000 packets on many paths.
Since the required run length goes as the square of the data rate, at
higher rates the run lengths can be unreasonably large, and multiple
flows might be the only feasible approach.
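Because target_window_size grows linearly with the data rate (at
fixed RTT and MTU) while target_run_length grows as its square, the
required run lengths escalate quickly.  A short sketch with
hypothetical parameter values illustrates the scaling:

   import math

   def run_length(rate_bps, rtt=0.1, mtu=1500, header_overhead=52):
       payload_bits = (mtu - header_overhead) * 8
       window = math.ceil(rate_bps * rtt / payload_bits)
       return 3 * window ** 2

   for mbps in (1, 10, 100, 1000):
       print(mbps, "Mb/s:", run_length(mbps * 1e6), "packets")
   # 1 Mb/s:         243   (well under 10000)
   # 10 Mb/s:      22707
   # 100 Mb/s:   2239488
   # 1000 Mb/s: 223586067  (impractical for a single flow)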
If multiple flows are deemed necessary to meet aggregate performance
targets then this MUST be stated in both the design of the TIDS and
in any claims about network performance.  The IP diagnostic tests
MUST be performed concurrently with the specified number of
connections.  For the tests that use bursty test streams, the bursts
should be synchronized across streams unless there is a priori
knowledge that the applications have some explicit mechanism to
stagger their own bursts.  In the absence of an explicit mechanism to
stagger bursts, many network and application artifacts will sometimes
implicitly synchronize bursts.  A test that does not control burst
synchronization may be prone to false pass results for some
applications.
7.  Interpreting the Results

7.1.  Test outcomes
To perform an exhaustive test of a complete network path, each test
of the TIDS is applied to each subpath of the complete path.  If any
subpath fails any test then a standard transport protocol running
over the complete path can also be expected to fail to attain the
Target Transport Performance under some conditions.
statistics meet the statistical criteria for failing (accepting
hypothesis H1 in Section 7.2), the test can be considered to have
failed because it doesn't really matter that the test didn't attain
the required data rate.
The really important new properties of MBM, such as vantage
independence, are a direct consequence of opening the control loops
in the protocols, such that the test stream does not depend on
network conditions or IP packets received.  Any mechanism that
introduces feedback between the path's measurements and the test
stream generation is at risk of introducing nonlinearities that spoil
these properties.  Any exceptional event that indicates that such
feedback has happened should cause the test to be considered
inconclusive.
One way to view inconclusive tests is that they reflect situations
where a test outcome is ambiguous between limitations of the network
and some unknown limitation of the IP diagnostic test itself, which
may have been caused by some uncontrolled feedback from the network.
Note that procedures that attempt to search the target parameter
space to find the limits on some parameter such as target_data_rate
are at risk of breaking the location independent properties of Model
Based Metrics, if any part of the boundary between passing and
inconclusive or failing results is sensitive to RTT (which is
normally the case).  For example the maximum data rate for a marginal
link (e.g. exhibiting excess errors) is likely to be sensitive to
the test_path_RTT.  The maximum observed data rate over the test path
has very little predictive value for the maximum rate over a
different path.
One of the goals for evolving TIDS designs will be to keep sharpening
the distinction between inconclusive, passing and failing tests.  The
criteria for passing, failing and inconclusive tests MUST be
explicitly stated for every test in the TIDS or FS-TIDS.
One of the goals of evolving the testing process, procedures, tools
and measurement point selection should be to minimize the number of
inconclusive tests.
It may be useful to keep raw packet transfer statistics and ancillary
metrics [RFC3148] for deeper study of the behavior of the network
path and to measure the tools themselves.  Raw packet transfer
statistics can help to drive tool evolution.  Under some conditions
it might be possible to re-evaluate the raw data for satisfying
alternate Target Transport Performance.
When evaluating the observed run_length, we need to determine
appropriate packet stream sizes and acceptable error levels for
efficient measurement.  In practice, can we compare the empirically
estimated packet loss and ECN Congestion Experienced (CE) marking
ratios with the targets as the sample size grows?  How large a sample
is needed to say that the measurements of packet transfer indicate a
particular run length is present?
The generalized measurement can be described as recursive testing:
send packets (individually or in patterns) and observe the packet
transfer performance (packet loss ratio or other metric, any marking
we define).
As each packet is sent and measured, we have an ongoing estimate of
the performance in terms of the ratio of packet loss or ECN CE mark
to total packets (i.e. an empirical probability).  We continue to
send until conditions support a conclusion or a maximum sending limit
has been reached.
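Section 7.2 specifies the actual statistical criteria; purely as an
illustration of the "send until conditions support a conclusion"
loop, the sketch below uses a generic Wald sequential probability
ratio test.  The constants alpha, beta and h1_factor are assumptions
chosen for this example, not values taken from this document.

   import math

   def sprt_mark_test(marks, target_run_length,
                      alpha=0.05, beta=0.05, h1_factor=4.0):
       """marks: iterable of per-packet booleans, True if the packet
       was lost or ECN CE marked.
       H0: mark probability at most 1/target_run_length (pass)
       H1: mark probability at least h1_factor/target_run_length
       (fail)"""
       p0 = 1.0 / target_run_length
       p1 = h1_factor / target_run_length
       upper = math.log((1 - beta) / alpha)   # accept H1 above this
       lower = math.log(beta / (1 - alpha))   # accept H0 below this
       llr = 0.0                              # running log likelihood
       for marked in marks:
           llr += (math.log(p1 / p0) if marked
                   else math.log((1 - p1) / (1 - p0)))
           if llr >= upper:
               return "fail"
           if llr <= lower:
               return "pass"
       return "inconclusive"   # sending limit reached first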
We have a target_mark_probability, 1 mark per target_run_length,
where a "mark" is defined as a lost packet or a packet with an ECN CE
mark.
Confirm that the observed run length is at least the
target_run_length while sending at an average rate approximately
equal to the target_data_rate, by controlling (or clamping) the
window size of a conventional transport protocol to test_window.

Since losses and ECN CE marks cause transport protocols to reduce
their data rates, this test is expected to be less precise about
controlling its data rate.  It should not be considered inconclusive
as long as at least some of the round trips reached the full
target_data_rate without incurring losses or ECN CE marks.  To pass
this test the network must deliver target_window_size packets in
target_RTT time without any losses or ECN CE marks at least once per
two target_window_size round trips, in addition to meeting the run
length statistical test.
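The additional pass criterion above can be checked mechanically from
per round trip delivery records.  The record format in this sketch is
hypothetical; it simply verifies that a full, unmarked window was
delivered at least once in every run of 2*target_window_size round
trips.

   def clean_window_criterion(rtt_records, target_window_size):
       """rtt_records: list of (packets_delivered, marks) tuples,
       one per round trip; a hypothetical record format used only
       for illustration."""
       period = 2 * target_window_size
       clean = [d >= target_window_size and m == 0
                for d, m in rtt_records]
       # every `period` consecutive round trips must include at
       # least one clean, full-window round trip
       return all(any(clean[i:i + period])
                  for i in range(max(1, len(clean) - period + 1)))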
8.1.3.  Background Packet Transfer Statistics Tests
The background run length is a low rate version of the target rate
test above, designed for ongoing lightweight monitoring for changes
in the observed subpath run length without disrupting users.  It
should be used in conjunction with one of the above full rate tests
because it does not confirm that the subpath can support the raw data
rate.
RFC 6673 [RFC6673] is appropriate for measuring background packet
transfer statistics.
8.2.  Standing Queue Tests
These engineering tests confirm that the bottleneck is well behaved
across the onset of packet loss, which typically follows after the
onset of queuing.  Well behaved generally means lossless for
transient queues, but once the queue has been sustained for a
sufficient period of time (or reaches a sufficient queue depth) there
should be a small number of losses or ECN CE marks to signal to the
transport protocol that it should reduce its window.  Losses that are
too early can prevent the transport from averaging at the
target_data_rate.  Losses that are too late indicate that the queue
might be subject to bufferbloat [wikiBloat] and inflict excess
queuing delays on all flows sharing the bottleneck queue.  Excess
losses (more than half of the window) at the onset of congestion make
loss recovery problematic for the transport protocol.  Non-linear,
erratic or excessive RTT increases suggest poor interactions between
the channel acquisition algorithms and the transport self clock.  All
of the tests in this section use the same basic scanning algorithm,
described here, but score the link or subpath on the basis of how
well it avoids each of these problems.
For some technologies the data might not be subject to increasing Some network technologies rely on virtual queues or other techniques
delays, in which case the data rate will vary with the window size to meter traffic without adding any queuing delay, in which case the
all the way up to the onset of load induced packet loss or ECN CE data rate will vary with the window size all the way up to the onset
marks. For these technologies, the discussion of queueing does not of load induced packet loss or ECN CE marks. For these
apply, but it is still required that the onset of losses or ECN CE technologies, the discussion of queuing in Section 6.3 does not
marks be at an appropriate point and progressive. Start the scan at apply, but it is still necessary to confirm that the onset of losses
a window equal to or slightly below the test_window. or ECN CE marks be at an appropriate point and progressive. If the
network bottleneck does not introduce significant queuing delay,
modify the procedure described in Section 6.3 to start the scan at a
window equal to or slightly smaller than the test_window.
Use the procedure in Section 6.3 to sweep the window across the onset Use the procedure in Section 6.3 to sweep the window across the onset
of queueing and the onset of loss. The tests below all assume that of queuing and the onset of loss. The tests below all assume that
the scan emulates standard additive increase and delayed ACK by the scan emulates standard additive increase and delayed ACK by
incrementing the window by one packet for every 2*target_window_size incrementing the window by one packet for every 2*target_window_size
packets delivered. A scan can typically be divided into three packets delivered. A scan can typically be divided into three
regions: below the onset of queueing, a standing queue, and at or regions: below the onset of queuing, a standing queue, and at or
beyond the onset of loss. beyond the onset of loss.
Below the onset of queueing the RTT is typically fairly constant, and Below the onset of queuing the RTT is typically fairly constant, and
the data rate varies in proportion to the window size. Once the data the data rate varies in proportion to the window size. Once the data
rate reaches the subpath IP rate, the data rate becomes fairly rate reaches the subpath IP rate, the data rate becomes fairly
constant, and the RTT increases in proportion to the increase in constant, and the RTT increases in proportion to the increase in
window size. The precise transition across the start of queueing can window size. The precise transition across the start of queuing can
be identified by the maximum network power, defined to be the ratio be identified by the maximum network power, defined to be the ratio
of the data rate over the RTT. The network power can be computed at each of the data rate over the RTT. The network power can be computed at each
window size, and the window with the maximum is taken as the start of window size, and the window with the maximum is taken as the start of
the queueing region. the queuing region.
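
As a non-normative illustration, the knee can be located directly
from the per window measurements gathered by the scan; the dictionary
format below is an assumption of this sketch, not part of the
procedure in Section 6.3.

    def onset_of_queueing(scan):
        # scan maps window size (packets) to (data_rate, rtt) measured
        # at that window; network power is the ratio of data rate to
        # RTT, and the window with maximum power marks the start of
        # the queueing region.
        return max(scan, key=lambda w: scan[w][0] / scan[w][1])

    # Below the onset the rate grows at near constant RTT; above it
    # the rate is constant while the RTT grows, so power peaks at the
    # knee (illustrative numbers only):
    scan = {8: (3200.0, 0.050), 10: (4000.0, 0.050),
            11: (4200.0, 0.052), 13: (4200.0, 0.061)}
    print(onset_of_queueing(scan))  # -> 11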
If there is random background loss (e.g. bit errors), precise If there is random background loss (e.g. bit errors), precise
determination of the onset of queue induced packet loss may require determination of the onset of queue induced packet loss may require
multiple scans. Above the onset of queuing loss, all transport multiple scans. Above the onset of queuing loss, all transport
protocols are expected to experience periodic losses determined by protocols are expected to experience periodic losses determined by
the interaction between the congestion control and AQM algorithms. the interaction between the congestion control and AQM algorithms.
For standard congestion control algorithms the periodic losses are For standard congestion control algorithms the periodic losses are
likely to be relatively widely spaced and the details are typically likely to be relatively widely spaced and the details are typically
dominated by the behavior of the transport protocol itself. For the dominated by the behavior of the transport protocol itself. For the
stiffened transport protocols case (with non-standard, aggressive stiffened transport protocols case (with non-standard, aggressive
congestion control algorithms) the details of periodic losses will be congestion control algorithms) the details of periodic losses will be
dominated by how the window increase function responds to loss. dominated by how the window increase function responds to loss.
8.2.1. Congestion Avoidance 8.2.1. Congestion Avoidance
A subpath passes the congestion avoidance standing queue test if more A subpath passes the congestion avoidance standing queue test if more
than target_run_length packets are delivered between the onset of than target_run_length packets are delivered between the onset of
queueing (as determined by the window with the maximum network power queuing (as determined by the window with the maximum network power
as described above) and the first loss or ECN CE mark. If this test as described above) and the first loss or ECN CE mark. If this test
is implemented using a standard congestion control algorithm with a is implemented using a standard congestion control algorithm with a
clamp, it can be performed in situ in the production Internet as a clamp, it can be performed in situ in the production Internet as a
capacity test. For an example of such a test see [Pathdiag]. capacity test. For an example of such a test see [Pathdiag].
For technologies that do not have conventional queues, use the For technologies that do not have conventional queues, use the
test_window in place of the onset of queueing; i.e., a subpath passes test_window in place of the onset of queuing; i.e., a subpath passes
the congestion avoidance standing queue test if more than the congestion avoidance standing queue test if more than
target_run_length packets are delivered between start of the scan at target_run_length packets are delivered between start of the scan at
test_window and the first loss or ECN CE mark. test_window and the first loss or ECN CE mark.
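
For illustration only, the pass criterion can be evaluated over a
per packet trace that starts at the onset of queueing (or at the
start of the scan at test_window for queueless technologies); the
trace format is an assumption of this sketch.

    def passes_congestion_avoidance(events, target_run_length):
        # events is an ordered per packet trace beginning at the onset
        # of queueing; True marks a loss or ECN CE mark.  Count
        # deliveries up to, but not including, the first mark.
        delivered = 0
        for marked in events:
            if marked:
                break
            delivered += 1
        return delivered > target_run_length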
8.2.2. Bufferbloat 8.2.2. Bufferbloat
This test confirms that there is some mechanism to limit buffer This test confirms that there is some mechanism to limit buffer
occupancy (e.g. that prevents bufferbloat). Note that this is not occupancy (e.g. that prevents bufferbloat). Note that this is not
strictly a requirement for single stream bulk transport capacity; strictly a requirement for single stream bulk transport capacity;
however, if there is no mechanism to limit buffer queue occupancy then however, if there is no mechanism to limit buffer queue occupancy then
a single stream with sufficient data to deliver is likely to cause a single stream with sufficient data to deliver is likely to cause
the problems described in [RFC7567], and [wikiBloat]. This may cause the problems described in [RFC7567], and [wikiBloat]. This may cause
only minor symptoms for the dominant flow, but has the potential to only minor symptoms for the dominant flow, but has the potential to
make the subpath unusable for other flows and applications. make the subpath unusable for other flows and applications.
Pass if the onset of loss occurs before a standing queue has Pass if the onset of loss occurs before a standing queue has
introduced more delay than twice the target_RTT, or other well introduced more delay than twice the target_RTT, or other well
defined and specified limit. Note that there is not yet a model for defined and specified limit. Note that there is not yet a model for
how much standing queue is acceptable. The factor of two chosen here how much standing queue is acceptable. The factor of two chosen here
reflects a rule of thumb. In conjunction with the previous test, reflects a rule of thumb. In conjunction with the previous test,
this test implies that the first loss should occur at a queueing this test implies that the first loss should occur at a queuing delay
delay which is between one and two times the target_RTT. which is between one and two times the target_RTT.
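
A non-normative sketch of this criterion, assuming base_rtt is the
RTT measured below the onset of queueing and all times are in
seconds:

    def passes_bufferbloat(rtt_at_first_loss, base_rtt, target_rtt,
                           limit=2.0):
        # Pass if the standing queue adds no more than limit times
        # target_RTT of delay before the onset of loss; the default
        # factor of two is the rule of thumb described above.
        standing_queue_delay = rtt_at_first_loss - base_rtt
        return standing_queue_delay <= limit * target_rtt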
Specified RTT limits that are larger than twice the target_RTT must Specified RTT limits that are larger than twice the target_RTT must
be fully justified in the FSTIDS. be fully justified in the FS-TIDS.
8.2.3. Non excessive loss 8.2.3. Non excessive loss
This test confirms that the onset of loss is not excessive. Pass if This test confirms that the onset of loss is not excessive. Pass if
losses are equal to or less than the increase in the cross traffic plus losses are equal to or less than the increase in the cross traffic plus
the test stream window increase since the previous RTT. This could the test stream window increase since the previous RTT. This could
be restated as non-decreasing total throughput of the subpath at the be restated as non-decreasing total throughput of the subpath at the
onset of loss. (Note that when there is a transient drop in subpath onset of loss. (Note that when there is a transient drop in subpath
throughput and there is not already a standing queue, a subpath that throughput and there is not already a standing queue, a subpath that
passes other queue tests in this document will have sufficient queue passes other queue tests in this document will have sufficient queue
space to hold one full RTT worth of data). space to hold one full RTT worth of data).
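
Stated as a check over per RTT counters (all in packets), this
becomes the sketch below; treating an unmeasurable cross traffic
increase as zero is an assumption that only makes the check
stricter.

    def passes_non_excessive_loss(losses, window_increase,
                                  cross_traffic_increase=0):
        # Pass if losses in the RTT at the onset of loss do not exceed
        # the growth in offered load since the previous RTT, i.e. the
        # total throughput of the subpath is non-decreasing at the
        # onset of loss.
        return losses <= window_increase + cross_traffic_increase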
Note that token bucket policers will not pass this test, which is as Note that token bucket policers will not pass this test, which is as
intended. TCP often stumbles badly if more than a small fraction of intended. TCP often stumbles badly if more than a small fraction of
the packets are dropped in one RTT. Many TCP implementations will the packets are dropped in one RTT. Many TCP implementations will
require a timeout and slowstart to recover their self clock. Even if require a timeout and slowstart to recover their self clock. Even if
they can recover from the massive losses the sudden change in they can recover from the massive losses the sudden change in
available capacity at the bottleneck waists serving and front path available capacity at the bottleneck wastes serving and front path
capacity until TCP can adapt to the new rate [Policing]. capacity until TCP can adapt to the new rate [Policing].
8.2.4. Duplex Self Interference 8.2.4. Duplex Self Interference
This engineering test confirms a bound on the interactions between This engineering test confirms a bound on the interactions between
the forward data path and the ACK return path. the forward data path and the ACK return path.
Some historical half duplex technologies had the property that each Some historical half duplex technologies had the property that each
direction held the channel until it completely drained its queue. direction held the channel until it completely drained its queue.
When a self clocked transport protocol, such as TCP, has data and When a self clocked transport protocol, such as TCP, has data and
ACKs passing in opposite directions through such a link, the behavior ACKs passing in opposite directions through such a link, the behavior
often reverts to stop-and-wait. Each additional packet added to the often reverts to stop-and-wait. Each additional packet added to the
window raises the observed RTT by two packet times, once as it passes window raises the observed RTT by two packet times, once as it passes
through the data path, and once for the additional delay incurred by through the data path, and once for the additional delay incurred by
the ACK waiting on the return path. the ACK waiting on the return path.
The duplex self interference test fails if the RTT rises by more than The duplex self interference test fails if the RTT rises by more than
a fixed bound above the expected queueing time computed from the a fixed bound above the expected queuing time computed from
excess window divided by the subpath IP Capacity. This bound must be the excess window divided by the subpath IP Capacity. This
smaller than target_RTT/2 to avoid reverting to stop and wait bound must be smaller than target_RTT/2 to avoid reverting to stop
behavior. (e.g. Data packets and ACKs both have to be released at and wait behavior. (e.g. Data packets and ACKs both have to be
least twice per RTT.) released at least twice per RTT.)
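
The bound can be checked as sketched below; taking test_window as
the reference point for the excess window is an assumption of this
sketch, and units are packets, packets per second and seconds.

    def passes_duplex_self_interference(observed_rtt, base_rtt,
                                        window, test_window,
                                        subpath_ip_capacity, bound):
        # The expected queueing time comes from the excess window
        # alone; the test fails if the RTT rises more than bound
        # above it.  bound must be smaller than target_RTT/2 to avoid
        # reverting to stop-and-wait behavior.
        excess_window = max(0, window - test_window)
        expected_queueing_time = excess_window / subpath_ip_capacity
        return (observed_rtt - base_rtt) - expected_queueing_time < bound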
8.3. Slowstart tests 8.3. Slowstart tests
These tests mimic slowstart: data is sent at twice the effective These tests mimic slowstart: data is sent at twice the effective
bottleneck rate to exercise the queue at the dominant bottleneck. bottleneck rate to exercise the queue at the dominant bottleneck.
8.3.1. Full Window slowstart test 8.3.1. Full Window slowstart test
This is a capacity test to confirm that slowstart is not likely to This is a capacity test to confirm that slowstart is not likely to
exit prematurely. Send slowstart bursts that are target_window_size exit prematurely. Send slowstart bursts that are target_window_size
skipping to change at page 37, line 31 skipping to change at page 39, line 6
or ECN CE marks is smaller than the target_run_length. or ECN CE marks is smaller than the target_run_length.
It is deemed inconclusive if the elapsed time to send the data burst It is deemed inconclusive if the elapsed time to send the data burst
is not less than half of the time to receive the ACKs. (i.e. It is is not less than half of the time to receive the ACKs. (i.e. It is
acceptable to send data too fast, but sending it slower than twice acceptable to send data too fast, but sending it slower than twice
the actual bottleneck rate as indicated by the ACKs is deemed the actual bottleneck rate as indicated by the ACKs is deemed
inconclusive). The headway for the slowstart bursts should be the inconclusive). The headway for the slowstart bursts should be the
target_RTT. target_RTT.
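
For illustration, the inconclusive condition reduces to a single
timing comparison (elapsed times in seconds):

    def slowstart_burst_conclusive(send_elapsed, ack_elapsed):
        # Conclusive only if the burst was sent in less than half the
        # time the ACKs took to return; sending too fast is
        # acceptable, sending slower than twice the actual bottleneck
        # rate is not.
        return send_elapsed < ack_elapsed / 2.0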
Note that these are the same parameters as the Sender Full Window Note that these are the same parameters as the Sender Full Window
burst test, except the burst rate is at slowestart rate, rather than burst test, except the burst rate is at slowstart rate, rather than
sender interface rate. sender interface rate.
8.3.2. Slowstart AQM test 8.3.2. Slowstart AQM test
Do a continuous slowstart (send data continuously at twice the Do a continuous slowstart (send data continuously at twice the
implied IP bottleneck capacity), until the first loss, stop, allow implied IP bottleneck capacity), until the first loss, stop, allow
the network to drain and repeat, gathering statistics on how many the network to drain and repeat, gathering statistics on how many
packets were delivered before the loss, the pattern of losses, packets were delivered before the loss, the pattern of losses,
maximum observed RTT and window size. Justify the results. There is maximum observed RTT and window size. Justify the results. There is
not currently sufficient theory justifying requiring any particular not currently sufficient theory justifying requiring any particular
skipping to change at page 38, line 14 skipping to change at page 39, line 36
8.4. Sender Rate Burst tests 8.4. Sender Rate Burst tests
These tests determine how well the network can deliver bursts sent at These tests determine how well the network can deliver bursts sent at
the sender's interface rate. Note that this test most heavily exercises the sender's interface rate. Note that this test most heavily exercises
the front path, and is likely to include infrastructure that may be out of the front path, and is likely to include infrastructure that may be out of
scope for an access ISP, even though the bursts might be caused by scope for an access ISP, even though the bursts might be caused by
ACK compression, thinning or channel arbitration in the access ISP. ACK compression, thinning or channel arbitration in the access ISP.
See Appendix B. See Appendix B.
Also, there are several details that are not precisely defined. Also, there are several details about sender interface rate bursts
For starters there is not a standard server interface rate. 1 Gb/s that are not fully defined here. These details, such as the assumed
and 10 Gb/s are common today, but higher rates will become cost sender interface rate, should be explicitly stated in a FS-TIDS.
effective and can be expected to be dominant some time in the future.
Current standards permit TCP to send a full window bursts following Current standards permit TCP to send full window bursts following an
an application pause. (Congestion Window Validation [RFC2861] application pause. (Congestion Window Validation [RFC2861] and
[RFC7661], is not required, but even if it was, it does not take effect updates to support Rate-Limited Traffic [RFC7661] are not required.)
until an application pause is longer than an RTO.) Since full window Since full window bursts are consistent with standard behavior, it is
bursts are consistent with standard behavior, it is desirable that desirable that the network be able to deliver such bursts, otherwise
the network be able to deliver such bursts, otherwise application application pauses will cause unwarranted losses. Note that the AIMD
pauses will cause unwarranted losses. Note that the AIMD sawtooth sawtooth requires a peak window that is twice target_window_size, so
requires a peak window that is twice target_window_size, so the worst the worst case burst may be 2*target_window_size.
case burst may be 2*target_window_size.
It is also understood in the application and serving community that It is also understood in the application and serving community that
interface rate bursts have a cost to the network that has to be interface rate bursts have a cost to the network that has to be
balanced against other costs in the servers themselves. For example balanced against other costs in the servers themselves. For example
TCP Segmentation Offload (TSO) reduces server CPU in exchange for TCP Segmentation Offload (TSO) reduces server CPU in exchange for
larger network bursts, which increase the stress on network buffer larger network bursts, which increase the stress on network buffer
memory. Some newer TCP implementations can pace traffic at scale memory. Some newer TCP implementations can pace traffic at scale
[TSO_pacing][TSO_fq_pacing]. It remains to be determined if and how [TSO_pacing][TSO_fq_pacing]. It remains to be determined if and how
quickly these changes will be deployed. quickly these changes will be deployed.
skipping to change at page 39, line 28 skipping to change at page 40, line 44
Send target_window_size bursts of packets at server interface rate Send target_window_size bursts of packets at server interface rate
with target_RTT burst headway (burst start to next burst start). with target_RTT burst headway (burst start to next burst start).
Verify that the observed packet transfer statistics meet the Verify that the observed packet transfer statistics meet the
target_run_length. target_run_length.
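
A hypothetical pacing loop for this test is sketched below; the
send_burst and observed_run_length callables stand in for the
measurement system's sender and its packet transfer statistics and
are not defined by this document.

    import time

    def sustained_burst_test(send_burst, observed_run_length,
                             target_window_size, target_rtt,
                             target_run_length, duration):
        # send_burst(n) transmits n packets back to back at sender
        # interface rate; observed_run_length() returns the measured
        # run length statistic at the end of the test.
        next_start = time.monotonic()
        deadline = next_start + duration
        while next_start < deadline:
            send_burst(target_window_size)
            next_start += target_rtt   # burst start to burst start
            time.sleep(max(0.0, next_start - time.monotonic()))
        return observed_run_length() >= target_run_length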
Key observations: Key observations:
o The subpath under test is expected to go idle for some fraction of o The subpath under test is expected to go idle for some fraction of
the time, determined by the difference between the time to drain the time, determined by the difference between the time to drain
the queue at the subpath IP capacity, and the target_RTT. If the the queue at the subpath_IP_capacity, and the target_RTT. If the
queue does not drain completely, it may be an indication that queue does not drain completely, it may be an indication that
the subpath has insufficient IP capacity or that there is some the subpath has insufficient IP capacity or that there is some
other problem with the test (e.g. inconclusive). other problem with the test (e.g. inconclusive).
o The burst sensitivity can be derated by sending smaller bursts o The burst sensitivity can be derated by sending smaller bursts
more frequently. E.g. send target_window_size*derate packet more frequently. E.g. send target_window_size*derate packet
bursts every target_RTT*derate, where "derate" is less than one. bursts every target_RTT*derate, where "derate" is less than one.
o When not derated, this test is the most strenuous capacity test. o When not derated, this test is the most strenuous capacity test.
o A subpath that passes this test is likely to be able to sustain o A subpath that passes this test is likely to be able to sustain
higher rates (close to subpath_IP_capacity) for paths with RTTs higher rates (close to subpath_IP_capacity) for paths with RTTs
significantly smaller than the target_RTT. significantly smaller than the target_RTT.
o This test can be implemented with instrumented TCP [RFC4898], o This test can be implemented with instrumented TCP [RFC4898],
using a specialized measurement application at one end [MBMSource] using a specialized measurement application at one end [MBMSource]
and a minimal service at the other end [RFC0863] [RFC0864]. and a minimal service at the other end [RFC0863] [RFC0864].
o This test is efficient to implement, since it does not require o This test is efficient to implement, since it does not require
per-packet timers, and can make use of TSO in modern NIC hardware. per-packet timers, and can make use of TSO in modern NIC hardware.
o If a subpath is known to pass the Standing Queue engineering tests o If a subpath is known to pass the Standing Queue engineering tests
(particularly that it has a progressive onset of loss at an (particularly that it has a progressive onset of loss at an
skipping to change at page 40, line 10 skipping to change at page 41, line 26
sufficient to assure that the subpath under test will not impair sufficient to assure that the subpath under test will not impair
Bulk Transport Capacity at the target performance under all Bulk Transport Capacity at the target performance under all
conditions. See Section 8.2 for a discussion of the standing conditions. See Section 8.2 for a discussion of the standing
queue tests. queue tests.
Note that this test is clearly independent of the subpath RTT, or Note that this test is clearly independent of the subpath RTT, or
other details of the measurement infrastructure, as long as the other details of the measurement infrastructure, as long as the
measurement infrastructure can accurately and reliably deliver the measurement infrastructure can accurately and reliably deliver the
required bursts to the subpath under test. required bursts to the subpath under test.
8.5.2. Streaming Media 8.5.2. Passive Measurements
Model Based Metrics can be implicitly implemented as a side effect of Any non-throughput maximizing application, such as fixed rate
any non-throughput maximizing application, such as streaming media, streaming media, can be used to implement passive or hybrid (defined
with some additional controls and instrumentation in the servers. in [RFC7799]) versions of Model Based Metrics with some additional
The essential requirement is that the data rate be constrained such instrumentation and possibly a traffic shaper or other controls in
that even with arbitrary application pauses and bursts, the data rate the servers. The essential requirement is that the data transmission
and burst sizes stay within the envelope defined by the individual be constrained such that even with arbitrary application pauses and
tests described above. bursts, the data rate and burst sizes stay within the envelope
defined by the individual tests described above.
If the application's serving data rate can be constrained to be less If the application's serving data rate can be constrained to be less
than or equal to the target_data_rate and the serving_RTT (the RTT than or equal to the target_data_rate and the serving_RTT (the RTT
between the sender and client) is less than the target_RTT, this between the sender and client) is less than the target_RTT, this
constraint is most easily implemented by clamping the transport constraint is most easily implemented by clamping the transport
window size to serving_window_clamp, set to the test_window, computed window size to serving_window_clamp, set to the test_window, computed
for the actual serving path. for the actual serving path.
Under the above constraints the serving_window_clamp will limit Under the above constraints the serving_window_clamp will limit
both the serving data rate and burst sizes to be no larger than the both the serving data rate and burst sizes to be no larger than the
skipping to change at page 40, line 42 skipping to change at page 42, line 11
called for by Section 8.4 and the sender rate burst sizes are called for by Section 8.4 and the sender rate burst sizes are
implicitly derated by the serving_window_clamp divided by the implicitly derated by the serving_window_clamp divided by the
target_window_size at the very least. (Depending on the application target_window_size at the very least. (Depending on the application
behavior, the data might be significantly smoother than specified by behavior, the data might be significantly smoother than specified by
any of the burst tests.) any of the burst tests.)
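
As an illustration only, the clamp might be computed as below; the
formula assumes test_window is constructed the same way as the
target_window_size (data rate times RTT, divided by the data payload
per packet, rounded up), which is an assumption of this sketch.

    import math

    def serving_window_clamp(target_data_rate, serving_rtt, mtu,
                             header_overhead):
        # test_window computed for the actual serving path: the
        # smallest window (in packets) sustaining target_data_rate
        # (bits/s) at serving_rtt (s), assuming serving_rtt is less
        # than the target_RTT.
        payload_bits = 8 * (mtu - header_overhead)
        return math.ceil(target_data_rate * serving_rtt / payload_bits)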
In an alternative implementation the data rate and bursts might be In an alternative implementation the data rate and bursts might be
explicitly controlled by a programmable traffic shaper or pacing at explicitly controlled by a programmable traffic shaper or pacing at
the sender. This would provide better control over transmissions but the sender. This would provide better control over transmissions but
is more complicated to implement, although the required technology is is more complicated to implement, although the required technology is
available[TSO_pacing][TSO_fq_pacing]. available [TSO_pacing][TSO_fq_pacing].
Note that these techniques can be applied to any content delivery Note that these techniques can be applied to any content delivery
that can be constrained to a reduced data rate in order to inhibit that can be operated at a constrained data rate to inhibit TCP
TCP equilibrium behavior. equilibrium behavior.
Furthermore note that Dynamic Adaptive Streaming over HTTP (DASH) is
generally in conflict with passive Model Based Metrics measurement,
because it is a rate maximizing protocol. It can still meet the
requirement here if the rate can be capped, for example by knowing a
priori the maximum rate needed to deliver a particular piece of
content.
9. An Example 9. An Example
In this section a we illustrate a TIDS designed to confirm that an In this section we illustrate a TIDS designed to confirm that an
access ISP can reliably deliver HD video from multiple content access ISP can reliably deliver HD video from multiple content
providers to all of their customers. With modern codecs, minimal HD providers to all of their customers. With modern codecs, minimal HD
video (720p) generally fits in 2.5 Mb/s. Due to their geographical video (720p) generally fits in 2.5 Mb/s. Due to their geographical
size, network topology and modem characteristics the ISP determines size, network topology and modem characteristics the ISP determines
that most content is within a 50 mS RTT of their users (This example that most content is within a 50 mS RTT of their users (This example
RTT is sufficient to cover the propagation delay to continental RTT is sufficient to cover the propagation delay to continental
Europe or either US coast with low delay modems or somewhat smaller Europe or either US coast with low delay modems or somewhat smaller
geographical regions if the modems require additional delay to geographical regions if the modems require additional delay to
implement advanced compression and error recovery). implement advanced compression and error recovery).
skipping to change at page 41, line 38 skipping to change at page 43, line 15
Table 1 shows the default TCP model with no derating, and as such is Table 1 shows the default TCP model with no derating, and as such is
quite conservative. The simplest TIDS would be to use the sustained quite conservative. The simplest TIDS would be to use the sustained
burst test, described in Section 8.5.1. Such a test would send 11 burst test, described in Section 8.5.1. Such a test would send 11
packet bursts every 50mS, and confirm that there was no more than packet bursts every 50mS, and confirm that there was no more than
1 packet loss per 33 bursts (363 total packets in 1.650 seconds). 1 packet loss per 33 bursts (363 total packets in 1.650 seconds).
Since this number represents the entire end-to-end loss budget, Since this number represents the entire end-to-end loss budget,
independent subpath tests could be implemented by apportioning the independent subpath tests could be implemented by apportioning the
packet loss ratio across subpaths. For example 50% of the losses packet loss ratio across subpaths. For example 50% of the losses
might be allocated to the access or last mile link to the user, 40% might be allocated to the access or last mile link to the user, 40%
to the interconnects with other ISPs and 1% to each internal hop to the network interconnections with other ISPs and 1% to each
(assuming no more than 10 internal hops). Then all of the subpaths internal hop (assuming no more than 10 internal hops). Then all of
can be tested independently, and the spatial composition of passing the subpaths can be tested independently, and the spatial composition
subpaths would be expected to be within the end-to-end loss budget. of passing subpaths would be expected to be within the end-to-end
loss budget.
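
The numbers in this example can be reproduced from the reference
model in Section 5.2 (target_run_length = 3*(target_window_size^2));
a header_overhead of 64 bytes is assumed here for a 1500 byte MTU.

    import math

    target_rate = 2.5e6     # 2.5 Mb/s HD video
    target_rtt = 0.050      # 50 mS
    mtu, header_overhead = 1500, 64   # bytes; overhead assumed

    payload_bits = 8 * (mtu - header_overhead)
    target_window_size = math.ceil(target_rate * target_rtt
                                   / payload_bits)
    target_run_length = 3 * target_window_size ** 2
    print(target_window_size, target_run_length)  # -> 11 363

At 11 packets per burst, 363 packets correspond to 33 bursts, i.e.
1.650 seconds at one burst per 50 mS.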
9.1. Observations about applicability 9.1. Observations about applicability
Guidance on deploying and using MBM belongs in a future document. Guidance on deploying and using MBM belongs in a future document.
However this example illustrates some of the issues that may need to be However this example illustrates some of the issues that may need to be
considered. considered.
Note that a another ISP, with different geographical coverage, Note that another ISP, with different geographical coverage, topology
topology or modem technology may need to assume a different or modem technology may need to assume a different target_RTT, and as
target_RTT, and as a consequence different target_window_size and a consequence different target_window_size and target_run_length,
target_run_length, even for the same target_data_rate. One of the even for the same target_data_rate. One of the implications of this
implications of this is that infrastructure shared by multiple ISPs, is that infrastructure shared by multiple ISPs, such as inter-
such as inter-exchange points (IXPs) and other interconnects may need exchange points (IXPs) and other interconnects may need to be
to be evaluated on the basis of the most stringent target_window_size evaluated on the basis of the most stringent target_window_size and
and target_run_length of any participating ISP. One way to do this target_run_length of any participating ISP. One way to do this might
might be to choose target parameters for evaluating such shared be to choose target parameters for evaluating such shared
infrastructure on the basis of a hypothetical reference path that infrastructure on the basis of a hypothetical reference path that
does not necessarily match any actual paths. does not necessarily match any actual paths.
Testing interconnects has generally been problematic: conventional Testing interconnects has generally been problematic: conventional
performance tests run between measurement points adjacent to either performance tests run between measurement points adjacent to either
side of the interconnect are not generally useful. Unconstrained TCP side of the interconnect are not generally useful. Unconstrained TCP
tests, such as iPerf [iPerf], are usually overly aggressive due to the tests, such as iPerf [iPerf], are usually overly aggressive due to the
small RTT (often less than 1 mS). With a short RTT these tools are small RTT (often less than 1 mS). With a short RTT these tools are
likely to report inflated data rates because on a short RTT these likely to report inflated data rates because on a short RTT these
tools can tolerate very high packet loss ratios and can push other tools can tolerate very high packet loss ratios and can push other
skipping to change at page 43, line 12 skipping to change at page 44, line 38
An infinitesimally passing testbed resembles an epsilon-delta proof in An infinitesimally passing testbed resembles an epsilon-delta proof in
calculus. Construct a test network such that all of the individual calculus. Construct a test network such that all of the individual
tests of the TIDS pass by only small (infinitesimal) margins, and tests of the TIDS pass by only small (infinitesimal) margins, and
demonstrate that a variety of authentic applications running over demonstrate that a variety of authentic applications running over
real TCP implementations (or other protocols as appropriate) meet the real TCP implementations (or other protocols as appropriate) meet the
Target Transport Performance over such a network. The workloads Target Transport Performance over such a network. The workloads
should include multiple types of streaming media and transaction should include multiple types of streaming media and transaction
oriented short flows (e.g. synthetic web traffic). oriented short flows (e.g. synthetic web traffic).
For example, for the HD streaming video TIDS described in Section 9, For example, for the HD streaming video TIDS described in Section 9,
the IP capacity should be exactly the header overhead above 2.5 Mb/s, the IP capacity should be exactly the header_overhead above 2.5 Mb/s,
the per packet random background loss ratio should be 1/363, for a the per packet random background loss ratio should be 1/363, for a
run length of 363 packets, the bottleneck queue should be 11 packets run length of 363 packets, the bottleneck queue should be 11 packets
and the front path should have just enough buffering to withstand 11 and the front path should have just enough buffering to withstand 11
packet interface rate bursts. We want every one of the TIDS tests to packet interface rate bursts. We want every one of the TIDS tests to
fail if we slightly increase the relevant test parameter, so for fail if we slightly increase the relevant test parameter, so for
example sending a 12 packet burst should cause excess (possibly example sending a 12 packet burst should cause excess (possibly
deterministic) packet drops at the dominant queue at the bottleneck. deterministic) packet drops at the dominant queue at the bottleneck.
On this infinitesimally passing network it should be possible for a On this infinitesimally passing network it should be possible for a
real application using a stock TCP implementation in the vendor's real application using a stock TCP implementation in the vendor's
default configuration to attain 2.5 Mb/s over a 50 mS path. default configuration to attain 2.5 Mb/s over a 50 mS path.
skipping to change at page 44, line 5 skipping to change at page 45, line 31
11. Security Considerations 11. Security Considerations
Measurement is often used to inform business and policy decisions, Measurement is often used to inform business and policy decisions,
and as a consequence is potentially subject to manipulation. Model and as a consequence is potentially subject to manipulation. Model
Based Metrics are expected to be a huge step forward because Based Metrics are expected to be a huge step forward because
equivalent measurements can be performed from multiple vantage equivalent measurements can be performed from multiple vantage
points, such that performance claims can be independently validated points, such that performance claims can be independently validated
by multiple parties. by multiple parties.
Much of the acrimony in the Net Neutrality debate is due by the Much of the acrimony in the Net Neutrality debate is due to the
historical lack of any effective vantage independent tools to historical lack of any effective vantage independent tools to
characterize network performance. Traditional methods for measuring characterize network performance. Traditional methods for measuring
Bulk Transport Capacity are sensitive to RTT and as a consequence Bulk Transport Capacity are sensitive to RTT and as a consequence
often yield very different results when run local to an ISP or often yield very different results when run local to an ISP or
interconnect and when run over a customer's complete path. Neither interconnect and when run over a customer's complete path. Neither
the ISP nor customer can repeat the other's measurements, leading to the ISP nor customer can repeat the other's measurements, leading to
high levels of distrust and acrimony. Model Based Metrics are high levels of distrust and acrimony. Model Based Metrics are
expected to greatly improve this situation. expected to greatly improve this situation.
This document only describes a framework for designing Fully This document only describes a framework for designing Fully
Specified Targeted IP Diagnostic Suites. Each FS-TIDS MUST include Specified Targeted IP Diagnostic Suites. Each FS-TIDS MUST include
its own security section. its own security section.
12. Acknowledgements 12. Acknowledgments
Ganga Maguluri suggested the statistical test for measuring loss Ganga Maguluri suggested the statistical test for measuring loss
probability in the target run length. Alex Gilgur helped with probability in the target run length. Alex Gilgur helped with
the statistics. the statistics.
Meredith Whittaker improved the clarity of the communications. Meredith Whittaker improved the clarity of the communications.
Ruediger Geib provided feedback which greatly improved the document. Ruediger Geib provided feedback which greatly improved the document.
This work was inspired by Measurement Lab: open tools running on an This work was inspired by Measurement Lab: open tools running on an
skipping to change at page 46, line 29 skipping to change at page 48, line 5
[RFC7661] Fairhurst, G., Sathiaseelan, A., and R. Secchi, "Updating [RFC7661] Fairhurst, G., Sathiaseelan, A., and R. Secchi, "Updating
TCP to Support Rate-Limited Traffic", RFC 7661, TCP to Support Rate-Limited Traffic", RFC 7661,
DOI 10.17487/RFC7661, October 2015, DOI 10.17487/RFC7661, October 2015,
<http://www.rfc-editor.org/info/rfc7661>. <http://www.rfc-editor.org/info/rfc7661>.
[RFC7680] Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton, [RFC7680] Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton,
Ed., "A One-Way Loss Metric for IP Performance Metrics Ed., "A One-Way Loss Metric for IP Performance Metrics
(IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January (IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January
2016, <http://www.rfc-editor.org/info/rfc7680>. 2016, <http://www.rfc-editor.org/info/rfc7680>.
[RFC7799] Morton, A., "Active and Passive Metrics and Methods (with
Hybrid Types In-Between)", RFC 7799, DOI 10.17487/RFC7799,
May 2016, <http://www.rfc-editor.org/info/rfc7799>.
[MSMO97] Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The [MSMO97] Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The
Macroscopic Behavior of the TCP Congestion Avoidance Macroscopic Behavior of the TCP Congestion Avoidance
Algorithm", Computer Communications Review volume 27, Algorithm", Computer Communications Review volume 27,
number 3, July 1997. number 3, July 1997.
[WPING] Mathis, M., "Windowed Ping: An IP Level Performance [WPING] Mathis, M., "Windowed Ping: An IP Level Performance
Diagnostic", INET 94, June 1994. Diagnostic", INET 94, June 1994.
[mpingSource] [mpingSource]
Fan, X., Mathis, M., and D. Hamon, "Git Repository for Fan, X., Mathis, M., and D. Hamon, "Git Repository for
skipping to change at page 48, line 8 skipping to change at page 49, line 35
[Policing] [Policing]
Flach, T., Papageorge, P., Terzis, A., Pedrosa, L., Cheng, Flach, T., Papageorge, P., Terzis, A., Pedrosa, L., Cheng,
Y., Karim, T., Katz-Bassett, E., and R. Govindan, "An Y., Karim, T., Katz-Bassett, E., and R. Govindan, "An
Internet-Wide Analysis of Traffic Policing", ACM SIGCOMM , Internet-Wide Analysis of Traffic Policing", ACM SIGCOMM ,
August 2016. August 2016.
Appendix A. Model Derivations Appendix A. Model Derivations
The reference target_run_length described in Section 5.2 is based on The reference target_run_length described in Section 5.2 is based on
very conservative assumptions: that all window above very conservative assumptions: that all excess data in flight
target_window_size contributes to a standing queue that raises the (window) above the target_window_size contributes to a standing queue
RTT, and that classic Reno congestion control with delayed ACKs are that raises the RTT, and that classic Reno congestion control with
in effect. In this section we provide two alternative calculations delayed ACKs are in effect. In this section we provide two
using different assumptions. alternative calculations using different assumptions.
It may seem out of place to allow such latitude in a measurement It may seem out of place to allow such latitude in a measurement
standard, but this section provides offsetting requirements. method, but this section provides offsetting requirements.
The estimates provided by these models make the most sense if network The estimates provided by these models make the most sense if network
performance is viewed logarithmically. In the operational Internet, performance is viewed logarithmically. In the operational Internet,
data rates span more than 8 orders of magnitude, RTT spans more than data rates span more than 8 orders of magnitude, RTT spans more than
3 orders of magnitude, and packet loss ratio spans at least 8 orders 3 orders of magnitude, and packet loss ratio spans at least 8 orders
of magnitude if not more. When viewed logarithmically (as in of magnitude if not more. When viewed logarithmically (as in
decibels), these correspond to 80 dB of dynamic range. On an 80 dB decibels), these correspond to 80 dB of dynamic range. On an 80 dB
scale, a 3 dB error is less than 4% of the scale, even though it scale, a 3 dB error is less than 4% of the scale, even though it
represents a factor of 2 in the untransformed parameter. represents a factor of 2 in the untransformed parameter.
skipping to change at page 49, line 9 skipping to change at page 50, line 36
not involve extra delay, for example by the use of a virtual queue as not involve extra delay, for example by the use of a virtual queue as
done in Approximate Fair Dropping [AFD]. A flow controlled by such a done in Approximate Fair Dropping [AFD]. A flow controlled by such a
bottleneck would have a constant RTT and a data rate that fluctuates bottleneck would have a constant RTT and a data rate that fluctuates
in a sawtooth due to AIMD congestion control. Assume the losses are in a sawtooth due to AIMD congestion control. Assume the losses are
being controlled to make the average data rate meet some goal which being controlled to make the average data rate meet some goal which
is equal to or greater than the target_rate. The necessary run length is equal to or greater than the target_rate. The necessary run length
to meet the target_rate can be computed as follows: to meet the target_rate can be computed as follows:
For some value of Wmin, the window will sweep from Wmin packets to For some value of Wmin, the window will sweep from Wmin packets to
2*Wmin packets in 2*Wmin RTT (due to delayed ACK). Unlike the 2*Wmin packets in 2*Wmin RTT (due to delayed ACK). Unlike the
queueing case where Wmin = target_window_size, we want the average of queuing case where Wmin = target_window_size, we want the average of
Wmin and 2*Wmin to be the target_window_size, so the average data Wmin and 2*Wmin to be the target_window_size, so the average data
rate is the target rate. Thus we want Wmin = rate is the target rate. Thus we want Wmin =
(2/3)*target_window_size. (2/3)*target_window_size.
Between losses each sawtooth delivers (1/2)(Wmin+2*Wmin)(2*Wmin) Between losses each sawtooth delivers (1/2)(Wmin+2*Wmin)(2*Wmin)
packets in 2*Wmin round trip times. packets in 2*Wmin round trip times.
Substituting these together we get: Substituting these together we get:
target_run_length = (4/3)(target_window_size^2) target_run_length = (4/3)(target_window_size^2)
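
A numeric check of this derivation, using the target_window_size of
11 packets from the example in Section 9:

    def sawtooth_run_length(target_window_size):
        # One loss per sawtooth: the window sweeps from Wmin to
        # 2*Wmin and delivers (1/2)(Wmin + 2*Wmin)(2*Wmin) = 3*Wmin^2
        # packets, with Wmin = (2/3)*target_window_size.
        wmin = (2.0 / 3.0) * target_window_size
        return 0.5 * (wmin + 2 * wmin) * (2 * wmin)

    print(sawtooth_run_length(11), (4.0 / 3.0) * 11 ** 2)
    # both -> approximately 161.3

This is roughly half of the 363 packets required by the reference
model, a factor of two difference that the logarithmic view above
treats as small.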
skipping to change at page 50, line 12 skipping to change at page 51, line 38
contiguous burst on the forward path, followed by the entire window contiguous burst on the forward path, followed by the entire window
of ACKs on the return path. of ACKs on the return path.
If a particular return path contains a subpath or device that alters If a particular return path contains a subpath or device that alters
the timing of the ACK stream, then the entire front path from the the timing of the ACK stream, then the entire front path from the
sender up to the bottleneck must be tested at the burst parameters sender up to the bottleneck must be tested at the burst parameters
implied by the ACK scheduling algorithm. The most important implied by the ACK scheduling algorithm. The most important
parameter is the Implied Bottleneck IP Capacity, which is the average parameter is the Implied Bottleneck IP Capacity, which is the average
rate at which the ACKs advance snd.una. Note that thinning the ACK rate at which the ACKs advance snd.una. Note that thinning the ACK
stream (relying on the cumulative nature of seg.ack to permit stream (relying on the cumulative nature of seg.ack to permit
discarding some ACKs) causes most TCP implementation to send discarding some ACKs) causes most TCP implementations to send
interface rate bursts to offset the longer times between ACKs in interface rate bursts to offset the longer times between ACKs in
order to maintain the average data rate. order to maintain the average data rate.
Note that due to ubiquitous self clocking in Internet protocols, ill Note that due to ubiquitous self clocking in Internet protocols, ill
conceived channel allocation mechanisms are likely to increase the conceived channel allocation mechanisms are likely to increase the
queuing stress on the front path because they cause larger full queuing stress on the front path because they cause larger full
sender rate data bursts. sender rate data bursts.
Holding data or ACKs for channel allocation or other reasons (such as Holding data or ACKs for channel allocation or other reasons (such as
forward error correction) always raises the effective RTT relative to forward error correction) always raises the effective RTT relative to
 End of changes. 112 change blocks. 
323 lines changed or deleted 405 lines changed or added

This html diff was produced by rfcdiff 1.45. The latest version is available from http://tools.ietf.org/tools/rfcdiff/