IP Performance Working Group                                  M. Mathis
Internet-Draft                                               Google, Inc
Intended status: Experimental                                  A. Morton
Expires: April 21, 2016                                        AT&T Labs
                                                            Oct 19, 2015

           Model Based Metrics for Bulk Transport Capacity
              draft-ietf-ippm-model-based-metrics-07.txt
Abstract

We introduce a new class of Model Based Metrics designed to assess if a complete Internet path can be expected to meet a predefined Target Transport Performance by applying a suite of IP diagnostic tests to successive subpaths.  The subpath-at-a-time tests can be robustly applied to key infrastructure, such as interconnects or even individual devices, to accurately detect if any part of the infrastructure will prevent paths traversing it from meeting the Target Transport Performance.

For Bulk Transport Capacity, the IP diagnostics are built on test streams that mimic TCP over the complete path, and on statistical criteria for evaluating the packet transfer statistics of those streams.  The temporal structure of the test streams (bursts, etc.) mimics TCP or other transport protocols carrying bulk data over a long path, but is constructed to be independent of the details of the subpath under test, end systems or applications.  Likewise the success criteria evaluate the packet transfer statistics of the subpath against criteria determined by protocol performance models applied to the Target Transport Performance of the complete path.  The success criteria also do not depend on the details of the subpath, end systems or application.

Model Based Metrics exhibit several important new properties not present in other Bulk Transport Capacity Metrics, including the ability to reason about concatenated or overlapping subpaths.  The results are vantage independent, which is critical for supporting independent validation of tests by comparing results from multiple measurement points.

This document does not define the IP diagnostic tests, but provides a framework for designing suites of IP diagnostic tests that are tailored to confirming that infrastructure can meet the predetermined Target Transport Performance.
Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 21, 2016.
Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Table of Contents

   1.  Introduction
     1.1.  Version Control
   2.  Overview
   3.  Terminology
   4.  Background
     4.1.  TCP properties
     4.2.  Diagnostic Approach
     4.3.  New requirements relative to RFC 2330
   5.  Common Models and Parameters
     5.1.  Target End-to-end parameters
     5.2.  Common Model Calculations
     5.3.  Parameter Derating
     5.4.  Test Preconditions
   6.  Generating test streams
     6.1.  Mimicking slowstart
     6.2.  Constant window pseudo CBR
     6.3.  Scanned window pseudo CBR
     6.4.  Concurrent or channelized testing
   7.  Interpreting the Results
     7.1.  Test outcomes
     7.2.  Statistical criteria for estimating run_length
     7.3.  Reordering Tolerance
   8.  IP Diagnostic Tests
     8.1.  Basic Data Rate and Packet Transfer Tests
       8.1.1.  Delivery Statistics at Paced Full Data Rate
       8.1.2.  Delivery Statistics at Full Data Windowed Rate
       8.1.3.  Background Packet Transfer Statistics Tests
     8.2.  Standing Queue Tests
       8.2.1.  Congestion Avoidance
       8.2.2.  Bufferbloat
       8.2.3.  Non excessive loss
       8.2.4.  Duplex Self Interference
     8.3.  Slowstart tests
       8.3.1.  Full Window slowstart test
       8.3.2.  Slowstart AQM test
     8.4.  Sender Rate Burst tests
     8.5.  Combined and Implicit Tests
       8.5.1.  Sustained Bursts Test
       8.5.2.  Streaming Media
   9.  An Example
   10.  Validation
   11.  Security Considerations
   12.  Acknowledgements
   13.  IANA Considerations
   14.  References
     14.1.  Normative References
     14.2.  Informative References
   Appendix A.  Model Derivations
     A.1.  Queueless Reno
   Appendix B.  The effects of ACK scheduling
   Appendix C.  Version Control
   Authors' Addresses
1.  Introduction

Model Based Metrics (MBM) rely on mathematical models to specify a Targeted Suite of IP Diagnostic tests, designed to assess whether common transport protocols can be expected to meet a predetermined Target Transport Performance over an Internet path.  This note describes the modeling framework used to derive the test parameters for assessing an Internet path's ability to support a predetermined Bulk Transport Capacity.

Each test in the Targeted IP Diagnostic Suite (TIDS) measures some aspect of IP packet transfer needed to meet the Target Transport Performance.  For Bulk Transport Capacity the TIDS includes IP diagnostic tests to verify that there is sufficient IP capacity (data rate); that there is sufficient queue space at bottlenecks to absorb and deliver typical transport bursts; that the background packet loss ratio is low enough not to interfere with congestion control; and other properties described below.  Unlike typical IPPM metrics, which yield measures of network properties, Model Based Metrics nominally yield pass/fail evaluations of the ability of standard transport protocols to meet the specific performance objective over some network path.

In most cases the IP diagnostic tests can be implemented by combining existing IPPM metrics with additional controls for generating test streams having a specified temporal structure (bursts or standing queues, etc.) and statistical criteria for evaluating packet transfer.  The temporal structure of the test streams mimics transport protocol behavior over the complete path; the statistical criteria model the transport protocol's response to less than ideal IP packet transfer.

This note describes an alternative to the approach presented in "A Framework for Defining Empirical Bulk Transfer Capacity Metrics" [RFC3148].  In the future, other Model Based Metrics may cover other applications and transports, such as VoIP over RTP.

The MBM approach, mapping Target Transport Performance to a Targeted IP Diagnostic Suite (TIDS) of IP tests, solves some intrinsic problems with using TCP or other throughput maximizing protocols for measurement.  In particular all throughput maximizing protocols (and TCP congestion control in particular) cause some level of congestion in order to detect when they have filled the network.  This self inflicted congestion obscures the network properties of interest and introduces non-linear equilibrium behaviors that make any resulting measurements useless as metrics because they have no predictive value for conditions or paths different than that of the measurement itself.  These problems are discussed at length in Section 4.

A Targeted IP Diagnostic Suite does not have such difficulties.  IP diagnostics can be constructed such that they make strong statistical statements about path properties that are independent of the measurement details, such as vantage and choice of measurement points.  Model Based Metrics are designed to bridge the gap between empirical IP measurements and expected TCP performance.
1.1.  Version Control

RFC Editor: Please remove this entire subsection prior to publication.

Please send comments about this draft to ippm@ietf.org.  See http://goo.gl/02tkD for more information including: interim drafts, an up to date todo list and information on contributing.

Formatted: Mon Oct 19 15:59:51 PDT 2015

Changes since -06 draft:

o  More language nits:
   *  "Targeted IP Diagnostic Suite (TIDS)" replaces "Targeted Diagnostic Suite (TDS)".
   *  "implied bottleneck IP capacity" replaces "implied bottleneck IP rate".
   *  Updated to ECN CE Marks.
   *  Added "specified temporal structure".
   *  "test stream" replaces "test traffic".
   *  "packet transfer" replaces "packet delivery".
   *  Reworked discussion of slowstart, bursts and pacing.
   *  RFC 7567 replaces RFC 2309.
Changes since -05 draft:

o  Wordsmithing on sections overhauled in -05 draft.
o  Reorganized the document:
   *  Relocated subsection "Preconditions".
   *  Relocated subsection "New Requirements relative to RFC 2330".
o  Addressed nits and not so nits by Ruediger Geib.  (Thanks!)
o  Substantially tightened the entire definitions section.
o  Many terminology changes, to better conform to other docs:
   *  IP rate and IP capacity (following RFC 5136) replaces various forms of link data rate.
   *  subpath replaces link.
   *  target_window_size replaces target_pipe_size.
   *  implied bottleneck IP rate replaces effective bottleneck link rate.
   *  Packet delivery statistics replaces delivery statistics.

Changes since -04 draft:

o  The introduction was heavily overhauled: split into a separate introduction and overview.
o  The new shorter introduction:
   *  Is a problem statement;
   *  This document provides a framework;
   *  That it replaces TCP measurement by IP tests;
   *  That the results are pass/fail.
o  Added a diagram of the framework to the overview
o  and introduces all of the elements of the framework.
o  Renumbered sections, reducing the depth of some section numbers.
o  Updated definitions to better agree with other documents:
   *  Reordered section 2
   *  Bulk [data] performance -> Bulk Transport Capacity, everywhere including the title.
   *  loss rate and loss probability -> packet loss ratio
   *  end-to-end path -> complete path
   *  [end-to-end][target] performance -> Target Transport Performance
   *  load test -> capacity test
2.  Overview

This document describes a modeling framework for deriving a Targeted IP Diagnostic Suite from a predetermined Target Transport Performance.  It is not a complete specification, and relies on other standards documents to define important details such as packet type-p selection, sampling techniques, vantage selection, etc.  We imagine Fully Specified Targeted IP Diagnostic Suites (FSTIDS) that define all of these details.  We use Targeted IP Diagnostic Suite (TIDS) to refer to the subset of such a specification that is in scope for this document.  This terminology is defined in Section 3.

Section 4 describes some key aspects of TCP behavior and what they imply about the requirements for IP packet transfer.  Most of the IP diagnostic tests needed to confirm that the path meets these properties can be built on existing IPPM metrics, with the addition of statistical criteria for evaluating packet transfer and, in a few cases, new mechanisms to implement the required temporal structure.  (One group of tests, the standing queue tests described in Section 8.2, does not correspond to existing IPPM metrics, but suitable metrics can be patterned after existing tools.)

Figure 1 shows the MBM modeling and measurement framework.  The Target Transport Performance, at the top of the figure, is determined by the needs of the user or application, and is outside the scope of this document.  For Bulk Transport Capacity, the main performance parameter of interest is the Target Data Rate.  However, since TCP's ability to compensate for less than ideal network conditions is fundamentally affected by the Round Trip Time (RTT) and the Maximum Transmission Unit (MTU) of the complete path, these parameters must also be specified in advance based on knowledge about the intended application setting.  They may reflect a specific application over a real path through the Internet, or an idealized application and hypothetical path representing a typical user community.  Section 5 describes the common parameters and models derived from the Target Transport Performance.
                Target Transport Performance
      (Target Data Rate, Target RTT and Target MTU)
                         |
                 ________V_________
                 |  mathematical  |
                 |     models     |
                 |                |
                 ------------------
    Traffic parameters |       | Statistical criteria
                       |       |
           ____________V_______V____Targeted_______
           |           | * * * |  Diagnostic Suite |
      _____|___________V_______V________________   |
    __|________________V_______V______________  |  |
   |           IP diagnostic test             | |  |
   |               |               |          | |  |
   |    ___________V___     _______V_______   | |  |
   |   |     test      |   |   Delivery    |  | |  |
   |   |    stream     |   |  Evaluation   |  | |  |
   |   |   generation  |   |               |  | |  |
   |    -------v-------     ------^--------   | |  |
   |   |   v   test stream via    ^     |     | |--
   |   |   -->====================>--   |     | |
   |   |      subpath under test        |     |-
    ---V---------------------------------V----
       |                                 |
       V                                 V
  fail/inconclusive         pass/fail/inconclusive

              Overall Modeling Framework

                       Figure 1
The mathematical models are used to design traffic patterns that mimic TCP or other transport protocols delivering bulk data and operating at the Target Data Rate, MTU and RTT over a full range of conditions, including flows that are bursty at multiple time scales.  The traffic patterns are generated based on the three target parameters of the complete path and independent of the properties of individual subpaths, using the techniques described in Section 6.  As much as possible the measurement traffic is generated deterministically (precomputed) to minimize the extent to which test methodology, measurement points, measurement vantage or path partitioning affect the details of the measurement traffic.
Section 7 describes packet transfer statistics and methods to test them against the bounds provided by the mathematical models.  Since these statistics are typically the composition of subpaths of the complete path [RFC6049], in situ testing requires that the end-to-end statistical bounds be apportioned as separate bounds for each subpath.  Subpaths that are expected to be bottlenecks may be expected to contribute a larger fraction of the total packet loss.  In compensation, non-bottlenecked subpaths have to be constrained to contribute less packet loss.  The criteria for passing each test of a TIDS is an apportioned share of the total bound determined by the mathematical model from the Target Transport Performance.
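For illustration only (not part of the framework defined here), the following sketch apportions an end-to-end packet loss ratio budget across subpaths.  The subpath names and weights are hypothetical, and the additive split is an approximation that holds only for small loss ratios.

   # Hypothetical sketch: apportion an end-to-end packet loss ratio budget
   # across subpaths so that the allocated shares sum to the end-to-end
   # bound.  Weights express which subpaths (e.g. expected bottlenecks)
   # are permitted to contribute more of the total loss.

   def apportion_loss_budget(end_to_end_loss_ratio, subpath_weights):
       """Return a per-subpath loss ratio bound for each named subpath."""
       total = sum(subpath_weights.values())
       return {name: end_to_end_loss_ratio * weight / total
               for name, weight in subpath_weights.items()}

   # Example: a 1e-4 end-to-end budget, with the access subpath expected
   # to be the bottleneck and therefore allotted most of the budget.
   weights = {"access": 8, "interconnect": 1, "backbone": 1}  # hypothetical
   for name, bound in apportion_loss_budget(1e-4, weights).items():
       print(name, "loss ratio bound:", bound)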
Section 8 describes the suite of individual tests needed to verify all of the required IP delivery properties.  A subpath passes if and only if all of the individual IP diagnostic tests pass.  Any subpath that fails any test indicates that some users are likely to fail to attain their Target Transport Performance under some conditions.  In addition to passing or failing, a test can be deemed to be inconclusive for a number of reasons, including: the precomputed traffic pattern was not accurately generated; the measurement results were not statistically significant; and others such as failing to meet some required test preconditions.  If all tests pass except some that are inconclusive, then the entire suite is deemed to be inconclusive.
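The pass/fail/inconclusive rule described above is simple enough to state directly in code.  The following is only an illustrative sketch of that rule, not a normative algorithm.

   # Illustrative sketch of the verdict-combining rule described above:
   # a subpath passes only if every individual IP diagnostic test passes;
   # any failure makes the whole suite fail; otherwise, if any test was
   # inconclusive, the suite as a whole is inconclusive.

   def combine_tids_outcomes(outcomes):
       """outcomes: iterable of "pass", "fail" or "inconclusive"."""
       outcomes = list(outcomes)
       if any(o == "fail" for o in outcomes):
           return "fail"
       if any(o == "inconclusive" for o in outcomes):
           return "inconclusive"
       return "pass"

   print(combine_tids_outcomes(["pass", "pass", "inconclusive"]))  # inconclusive
   print(combine_tids_outcomes(["pass", "fail", "inconclusive"]))  # fail
   print(combine_tids_outcomes(["pass", "pass", "pass"]))          # pass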
In Section 9 we present an example TIDS that might be representative of HD video, and illustrate how Model Based Metrics can be used to address difficult measurement situations, such as confirming that intercarrier exchanges have sufficient performance and capacity to deliver HD video between ISPs.

Since there is some uncertainty in the modeling process, Section 10 describes a validation procedure to diagnose and minimize false positive and false negative results.
3.  Terminology

Note that terms containing underscores (rather than spaces) appear in equations in the modeling sections.  In some cases both forms are used for aesthetic reasons; they do not have different meanings.

General Terminology:
Target:  A general term for any parameter specified by or derived from the user's application or transport performance requirements.

Target Transport Performance:  Application or transport performance goals for the complete path.  For the Bulk Transport Capacity defined in this note, the Target Transport Performance includes the Target Data Rate, Target RTT and Target MTU as described below.

Target Data Rate:  The specified application data rate required for an application's proper operation.  Conventional BTC metrics are focused on the Target Data Rate; however, these metrics had little or no predictive value because they do not consider the effects of the other two parameters of the Target Transport Performance, the RTT and MTU of the complete path.

Target RTT (Round Trip Time):  The specified baseline (minimum) RTT of the longest complete path over which the user expects to be able to meet the target performance.  The ability of TCP and other transport protocols to compensate for path problems is generally proportional to the number of round trips per second.  The Target RTT determines both key parameters of the traffic patterns (e.g. burst sizes) and the thresholds on acceptable IP packet transfer statistics.  The Target RTT must be specified considering appropriate packet sizes: MTU sized packets on the forward path, ACK sized packets (typically header_overhead) on the return path.  Note that the Target RTT is specified and not measured; it determines the applicability of MBM measurements for paths that are different than the measured path.

Target MTU (Maximum Transmission Unit):  The specified maximum MTU supported by the complete path over which the application expects to meet the target performance.  Assume 1500 Byte MTU unless otherwise specified.  If some subpath has a smaller MTU, then it becomes the Target MTU for the complete path, and all model calculations and subpath tests must use the same smaller MTU.
Targeted IP Diagnostic Suite (TIDS):  A set of IP diagnostic tests designed to determine if an otherwise ideal complete path containing the subpath under test can sustain flows at a specific target_data_rate using target_MTU sized packets when the RTT of the complete path is target_RTT.

Fully Specified Targeted IP Diagnostic Suite (FSTIDS):  A TIDS together with additional specifications such as "type-p", which are out of scope for this document but need to be drawn from other standards documents.

Bulk Transport Capacity:  Bulk Transport Capacity Metrics evaluate an Internet path's ability to carry bulk data, such as large files, streaming (non-real time) video, and under some conditions, web images and other content.  Prior efforts to define BTC metrics have been based on [RFC3148], which predates our understanding of TCP and the requirements described in Section 4.
IP diagnostic tests:  Measurements or diagnostics to determine if packet transfer statistics meet some precomputed target.

traffic patterns:  The temporal patterns or burstiness of traffic generated by applications over transport protocols such as TCP.  There are several mechanisms that cause bursts at various time scales, as described in Section 4.1.  Our goal here is to mimic the range of common patterns (burst sizes and rates, etc.), without tying our applicability to specific applications, implementations or technologies, which are sure to become stale.

packet transfer statistics:  Raw, detailed or summary statistics about packet transfer properties of the IP layer including packet losses, ECN Congestion Experienced (CE) marks, reordering, or any other properties that may be germane to transport performance.

packet loss ratio:  As defined in [I-D.ietf-ippm-2680-bis].

apportioned:  To divide and allocate, for example budgeting packet loss across multiple subpaths such that the losses will accumulate to less than a specified end-to-end loss ratio.  Apportioning metrics is essentially the inverse of the process described in [RFC5835].

open loop:  A control theory term used to describe a class of techniques where systems that naturally exhibit circular dependencies can be analyzed by suppressing some of the dependencies, such that the resulting dependency graph is acyclic.
Terminology about paths, etc.  See [RFC2330] and [RFC7398].

data sender:  Host sending data and receiving ACKs.

data receiver:  Host receiving data and sending ACKs.

complete path:  The end-to-end path from the data sender to the data receiver.

subpath:  A portion of the complete path.  Note that there is no requirement that subpaths be non-overlapping.  A subpath can be as small as a single device, link or interface.

measurement point:  Measurement points as described in [RFC7398].

test path:  A path between two measurement points that includes a subpath of the complete path under test.  If the measurement points are off path, the test path may include "test leads" between the measurement points and the subpath.

dominant bottleneck:  The bottleneck that generally determines most of the packet transfer statistics for the entire path.  It typically determines a flow's self clock timing, packet loss and ECN Congestion Experienced (CE) marking rate, with other potential bottlenecks having less effect on the packet transfer statistics.  See Section 4.1 on TCP properties.

front path:  The subpath from the data sender to the dominant bottleneck.

back path:  The subpath from the dominant bottleneck to the receiver.

return path:  The path taken by the ACKs from the data receiver to the data sender.

cross traffic:  Other, potentially interfering, traffic competing for network resources (bandwidth and/or queue capacity).
Properties determined by the complete path and application.  These are described in more detail in Section 5.1.

Application Data Rate:  General term for the data rate as seen by the application above the transport layer, in bytes per second.  This is the payload data rate, and explicitly excludes transport and lower level headers (TCP/IP or other protocols), retransmissions and other overhead that is not part of the total quantity of data delivered to the application.

IP rate:  The actual number of IP-layer bytes delivered through a subpath, per unit time, including TCP and IP headers, retransmits and other TCP/IP overhead.  Follows from IP-type-P Link Usage [RFC5136].

IP capacity:  The maximum number of IP-layer bytes that can be transmitted through a subpath, per unit time, including TCP and IP headers, retransmits and other TCP/IP overhead.  Follows from IP-type-P Link Capacity [RFC5136].

bottleneck IP capacity:  The IP capacity of the dominant bottleneck in the forward path.  All throughput maximizing protocols estimate this capacity by observing the IP rate delivered through the bottleneck.  Most protocols derive their self clocks from the timing of this data.  See Section 4.1 and Appendix B for more details.

implied bottleneck IP capacity:  The bottleneck IP capacity implied by the ACKs returning from the receiver.  It is determined by looking at how much application data the ACK stream at the sender reports delivered to the data receiver per unit time at various time scales.  If the return path is thinning, batching or otherwise altering the ACK timing, the implied bottleneck IP capacity over short time scales might be substantially larger than the bottleneck IP capacity averaged over a full RTT.  Since TCP derives its clock from the data delivered through the bottleneck, the front path must have sufficient buffering to absorb any data bursts at the dimensions (duration and IP rate) implied by the ACK stream, potentially doubled during slowstart.  If the return path is not altering the ACK stream, then the implied bottleneck IP capacity will be the same as the bottleneck IP capacity.  See Section 4.1 and Appendix B for more details, and the illustrative sketch following this list.

sender interface rate:  The IP rate which corresponds to the IP capacity of the data sender's interface.  Due to sender efficiency algorithms, including technologies such as TCP segmentation offload (TSO), nearly all modern servers deliver data in bursts at full interface link rate.  Today 1 or 10 Gb/s are typical.

Header_overhead:  The IP and TCP header sizes, which are the portion of each MTU not available for carrying application payload.  Without loss of generality this is assumed to be the size for returning acknowledgements (ACKs).  For TCP, the Maximum Segment Size (MSS) is the Target MTU minus the header_overhead.
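As promised above, here is a minimal, purely illustrative sketch of how the implied bottleneck IP capacity might be estimated from the ACK stream observed at the data sender.  The trace values and the window parameter are hypothetical examples, not recommendations.

   # Hypothetical sketch: estimate the implied bottleneck IP capacity from
   # the ACK stream as seen at the data sender.  Each sample is
   # (time_in_seconds, cumulative_bytes_reported_delivered_by_ACKs).  The
   # estimate is the delivered bytes per unit time over a chosen time
   # scale; comparing short-time-scale estimates against the full-RTT
   # average exposes ACK thinning or batching on the return path.

   def implied_bottleneck_ip_capacity(ack_samples, window):
       """Peak delivery rate (bytes/s) over any interval >= window seconds."""
       best = 0.0
       for i, (t0, b0) in enumerate(ack_samples):
           for t1, b1 in ack_samples[i + 1:]:
               if t1 - t0 >= window:
                   best = max(best, (b1 - b0) / (t1 - t0))
                   break
       return best

   # Hypothetical ACK trace: 1448 byte segments acknowledged in pairs.
   trace = [(0.000, 0), (0.010, 2896), (0.011, 5792),
            (0.020, 8688), (0.021, 11584)]
   print(implied_bottleneck_ip_capacity(trace, window=0.001))  # short time scale
   print(implied_bottleneck_ip_capacity(trace, window=0.020))  # ~ full RTT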
Basic parameters common to models and subpath tests are defined here and are described in more detail in Section 5.2.  Note that these are mixed between application transport performance (excludes headers) and IP performance (which includes TCP headers and retransmissions as part of the IP payload).

Window [size]:  The total quantity of data plus the data represented by ACKs circulating in the network is referred to as the window.  See Section 4.1.  Sometimes used with other qualifiers (congestion window, cwnd or receiver window) to indicate which mechanism is controlling the window.

pipe size:  A general term for the number of packets needed in flight (the window size) to exactly fill some network path or subpath.  It corresponds to the window size which maximizes network power, the observed data rate divided by the observed RTT.  Often used with additional qualifiers to specify which path, or under what conditions, etc.

target_window_size:  The average number of packets in flight (the window size) needed to meet the Target Data Rate, for the specified Target RTT and MTU.  It implies the scale of the bursts that the network might experience.

run length:  A general term for the observed, measured, or specified number of packets that are (expected to be) delivered between losses or ECN Congestion Experienced (CE) marks.  Nominally one over the sum of the loss and ECN CE marking probabilities, if they are independently and identically distributed.

target_run_length:  The target_run_length is an estimate of the minimum number of non-congestion marked packets needed between losses or ECN Congestion Experienced (CE) marks necessary to attain the target_data_rate over a path with the specified target_RTT and target_MTU, as computed by a mathematical model of TCP congestion control.  A reference calculation is shown in Section 5.2 and alternatives in Appendix A.

reference target_run_length:  target_run_length computed precisely by the method in Section 5.2.  This is likely to be slightly more conservative than required by modern TCP implementations.  (A sketch of this calculation follows this list.)
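The following sketch illustrates the two derived parameters above.  It assumes the commonly cited forms of the reference calculation, target_window_size = ceiling(target_data_rate * target_RTT / (target_MTU - header_overhead)) and reference target_run_length = 3 * target_window_size^2; Section 5.2 and Appendix A remain the authoritative definitions, and all numeric inputs below are hypothetical examples.

   import math

   # Hedged sketch of the reference calculation described in Section 5.2
   # (not reproduced in this excerpt).  Formulas are assumptions, as noted
   # in the lead-in above; consult Section 5.2 and Appendix A for the
   # authoritative derivation.

   def target_window_size(target_data_rate, target_rtt, target_mtu,
                          header_overhead):
       """Average packets in flight needed to meet the Target Data Rate."""
       mss = target_mtu - header_overhead          # payload bytes per packet
       return math.ceil(target_data_rate * target_rtt / mss)

   def reference_target_run_length(window_size):
       """Packets delivered between losses/CE marks (assumed model form)."""
       return 3 * window_size ** 2

   # Example: 10 Mb/s (1.25e6 bytes/s) over a 100 ms RTT path, 1500 byte
   # MTU and 52 bytes of header_overhead (hypothetical values).
   w = target_window_size(1.25e6, 0.100, 1500, 52)
   print(w, reference_target_run_length(w))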
Ancillary parameters used for some tests:

derating:  Under some conditions the standard models are too conservative.  The modeling framework permits some latitude in relaxing or "derating" some test parameters, as described in Section 5.3, in exchange for more stringent TIDS validation procedures, described in Section 10.

subpath_IP_capacity:  The IP capacity of a specific subpath.

test path:  A subpath of a complete path under test.

test_path_RTT:  The RTT observed between two measurement points using packet sizes that are consistent with the transport protocol.  Generally MTU sized packets on the forward path, header_overhead sized packets on the return path.

test_path_pipe:  The pipe size of a test path.  Nominally the test path RTT times the test path IP_capacity.

test_window:  The window necessary to meet the target_rate over a test path.  Typically test_window = target_data_rate * test_path_RTT / (target_MTU - header_overhead).  (See the sketch after this list.)

The terminology below is used to define temporal patterns for test
streams. These patterns are designed to mimic TCP behavior, as
described in Section 4.1.

packet headway: Time interval between packets, specified from the
start of one to the start of the next. e.g. If packets are sent
with a 1 ms headway, there will be exactly 1000 packets per
second.
burst headway: Time interval between bursts, specified from the
start of the first packet of one burst to the start of the first
packet of the next burst. e.g. If 4 packet bursts are sent with a
1 ms burst headway, there will be exactly 4000 packets per second.
paced single packets: Send individual packets at the specified rate
or packet headway.
paced bursts: Send bursts on a timer. Specify any 3 of: average
data rate, packet size, burst size (number of packets) and burst
headway (burst start to start); a small sketch of this
relationship follows this list. By default the bursts are assumed
to be sent at full sender interface rate, such that the packet
headway within each burst is the minimum supported by the sender's
interface. Under some conditions it is useful to explicitly
specify the packet headway within each burst.
slowstart rate: Mimic TCP slowstart by sending 4 packet paced bursts
at an average data rate equal to twice the implied bottleneck IP
capacity (but not more than the sender interface rate). This is a
two level burst pattern described in more detail in Section 6.1.
If the implied bottleneck IP capacity is more than half of the
sender interface rate, slowstart rate becomes sender interface
rate.
slowstart burst: Mimic one round of TCP slowstart by sending a
specified number of packets in a two level burst pattern that
resembles slowstart.
repeated slowstart bursts: Repeat Slowstart bursts once per
target_RTT. For TCP each burst would be twice as large as the
prior burst, and the sequence would end at the first ECN CE mark
or lost packet. For measurement, all slowstart bursts would be
the same size (nominally target_window_size but other sizes might
be specified), and the ECN CE marks and lost packets are counted.
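
The relationship among the paced burst parameters above can be
illustrated with a small, non-normative Python sketch. The function
name and example values (1500 byte packets, a 48 Mb/s average IP
rate) are illustrative assumptions and not part of any test
specification:

   def burst_headway(average_rate_bps, packet_size_bytes, burst_size_packets):
       # Given three of the four paced burst parameters, compute the
       # fourth: the burst headway (start to start), in seconds.
       bits_per_burst = burst_size_packets * packet_size_bytes * 8
       return bits_per_burst / average_rate_bps

   # Example: 4 packet bursts of 1500 byte packets at an average IP
   # rate of 48 Mb/s yield a 1 ms burst headway, i.e. 4000 packets
   # per second, matching the burst headway example above.
   print(burst_headway(48e6, 1500, 4))   # -> 0.001 seconds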

The tests described in this note can be grouped according to their
applicability.

Capacity tests: Capacity tests determine if a network subpath has
sufficient capacity to deliver the Target Transport Performance.
As long as the test stream is within the proper envelope for the
Target Transport Performance, the average packet losses or ECN
Congestion Experienced (CE) marks must be below the threshold
computed by the model. As such, capacity tests reflect parameters
that can transition from passing to failing as a consequence of
cross traffic, additional presented load or the actions of other
network users. By definition, capacity tests also consume
significant network resources (data capacity and/or queue buffer
space), and the test schedules must be balanced by their cost.
Monitoring tests: Monitoring tests are designed to capture the most
important aspects of a capacity test, but without presenting
excessive ongoing load themselves. As such they may miss some
details of the network's performance, but can serve as a useful
reduced-cost proxy for a capacity test, for example to support
ongoing monitoring.
Engineering tests: Engineering tests evaluate how network algorithms
(such as AQM and channel allocation) interact with TCP-style self
clocked protocols and adaptive congestion control based on packet
loss and ECN Congestion Experienced (CE) marks. These tests are
likely to have complicated interactions with cross traffic and
under some conditions can be inversely sensitive to load. For
example a test to verify that an AQM algorithm causes ECN CE marks
or packet drops early enough to limit queue occupancy may
experience a false pass result in the presence of cross traffic.
It is important that engineering tests be performed under a wide
range of conditions, including both in situ and bench testing, and
over a wide variety of load conditions. Ongoing monitoring is
less likely to be useful for engineering tests, although sparse in
situ testing might be appropriate.

4. Background

At the time the IPPM WG was chartered, sound Bulk Transport Capacity
measurement was known to be well beyond our capabilities. Even at
the time that the Framework for Defining Empirical Bulk Transfer
Capacity Metrics [RFC3148] was written we knew that we didn't fully
understand the problem. Now, by hindsight we understand why BTC is
such a hard problem:

o TCP is a control system with circular dependencies - everything
affects performance, including components that are explicitly not
part of the test.
o Congestion control is an equilibrium process, such that transport
protocols change the packet transfer statistics (raise the packet
loss ratio and/or RTT) to conform to their behavior. By design
TCP congestion control keeps raising the data rate until the
network gives some indication that it is full by dropping or ECN
CE marking packets. If TCP successfully fills the network (e.g.
the bottleneck is 100% utilized) the packet loss and ECN CE marks
are mostly determined by TCP and how hard TCP drives the network
at that rate and not by the network itself.
o TCP's ability to compensate for network flaws is directly
proportional to the number of roundtrips per second (i.e.
inversely proportional to the RTT). As a consequence a flawed
subpath may pass a short RTT local test even though it fails when
the subpath is extended by an effectively perfect network to some
larger RTT.
o TCP has an extreme form of the Heisenberg problem - Measurement
and cross traffic interact in unknown and ill defined ways. The
situation is actually worse than the traditional physics problem
where you can at least estimate bounds on the relative momentum of
the measurement and measured particles. For network measurement
you can not in general determine the relative "mass" of either the
test stream or the cross traffic, so you can not gauge the
relative magnitude of the uncertainty that might be introduced by
any interaction.

These properties are a consequence of the equilibrium behavior
intrinsic to how all throughput maximizing protocols interact with
the Internet. These protocols rely on control systems based on
estimated network parameters to regulate the quantity of data sent
into the network. The sent data in turn alters the network
properties observed by the estimators, such that there are circular
dependencies between every component and every property. Since some
of these properties are nonlinear, the entire system is nonlinear,
and any change anywhere causes difficult to predict changes in every
parameter.

Model Based Metrics overcome these problems by making the measurement
system open loop: the packet transfer statistics (akin to the network
estimators) do not affect the traffic or traffic patterns (bursts),
which are computed on the basis of the Target Transport Performance.
In order for a network to pass, the resulting packet transfer
statistics and corresponding network estimators have to be such that
they would not cause the control systems to slow the traffic below
the Target Data Rate.

4.1. TCP properties

TCP and SCTP are self clocked protocols that carry the vast majority
of all Internet data. Their dominant behavior is to have an
approximately fixed quantity of data and acknowledgements (ACKs)
circulating in the network. The data receiver reports arriving data
by returning ACKs to the data sender, the data sender typically
responds by sending exactly the same quantity of data back into the
network. The total quantity of data plus the data represented by
ACKs circulating in the network is referred to as the window. The
mandatory congestion control algorithms incrementally adjust the
window by sending slightly more or less data in response to each ACK.
The fundamentally important property of this system is that it is
self clocked: The data transmissions are a reflection of the ACKs
that were delivered by the network, the ACKs are a reflection of the
data arriving from the network.

A number of protocol features cause bursts of data, even in idealized
networks that can be modeled as simple queueing systems.

During slowstart the IP rate is doubled on each RTT by sending twice
as much data as was delivered to the receiver during the prior RTT.
Each returning ACK causes the sender to transmit twice the data the
ACK reported arriving at the receiver. For slowstart to be able to
fill a network the network must be able to tolerate slowstart bursts
up to the full pipe size inflated by the anticipated window reduction
on the first loss or ECN CE mark. For example, with classic Reno
congestion control, an optimal slowstart has to end with a burst that
is twice the bottleneck rate for one RTT in duration. This burst
causes a queue which is equal to the pipe size (i.e. the window is
twice the pipe size) so when the window is halved in response to the
first packet loss, the new window will be the pipe size.
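
The last-RTT geometry described above can be illustrated with a
small, non-normative Python sketch; the function name and example
values (10 Mb/s bottleneck, 100 ms RTT, 1500 byte packets) are
illustrative assumptions only:

   import math

   def reno_slowstart_exit(bottleneck_rate_bps, rtt_s, mtu_bytes):
       # Illustrate the last-RTT geometry described above: an optimal
       # Reno slowstart ends with roughly 2*pipe_size of data sent at
       # twice the bottleneck rate, leaving about pipe_size packets
       # queued at the dominant bottleneck.
       pipe_size = math.ceil(bottleneck_rate_bps * rtt_s / (mtu_bytes * 8))
       final_burst = 2 * pipe_size          # window is twice the pipe size
       queue_at_loss = final_burst - pipe_size
       return pipe_size, final_burst, queue_at_loss

   # Example: 10 Mb/s bottleneck, 100 ms RTT, 1500 byte packets
   # (illustrative values only).
   print(reno_slowstart_exit(10e6, 0.1, 1500))   # -> (84, 168, 84)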

Note that if the bottleneck IP rate is less than half of the capacity
of the front path (which is almost always the case), the slowstart
bursts will not by themselves cause significant queues anywhere else
along the front path; they primarily exercise the queue at the
dominant bottleneck.

Several common efficiency algorithms also cause bursts. The self
clock is typically applied to groups of packets: the receiver's
delayed ACK algorithm generally sends only one ACK per two data
segments. Furthermore, modern senders use TCP segmentation offload
(TSO) to reduce CPU overhead. The sender's software stack builds
supersized TCP segments that the TSO hardware splits into MTU sized
segments on the wire. The net effect of TSO, delayed ACK and other
efficiency algorithms is to send bursts of segments at full sender
interface rate.

Note that these efficiency algorithms are almost always in effect,
including during slowstart, such that slowstart typically has a two
level burst structure. Section 6.1 describes slowstart in more
detail.

Additional sources of bursts include TCP's initial window [RFC6928],
application pauses, channel allocation mechanisms and network devices
that schedule ACKs. Appendix B describes these last two items. If
the application pauses (stops reading or writing data) for some
fraction of an RTT, many TCP implementations catch up to their
earlier window size by sending a burst of data at the full sender
interface rate. To fill a network with a realistic application, the
network has to be able to tolerate sender interface rate bursts large
enough to restore the prior window following application pauses.

Although the sender interface rate bursts are typically smaller than
the last burst of a slowstart, they are at a higher IP rate so they
potentially exercise queues at arbitrary points along the front path
from the data sender up to and including the queue at the dominant
bottleneck. There is no model for how frequent or what sizes of
sender rate bursts the network should tolerate.

In conclusion, to verify that a path can meet a Target Transport
Performance, it is necessary to independently confirm that the path
can tolerate bursts at the scales that can be caused by these
mechanisms. Three cases are believed to be sufficient:

o Two level slowstart bursts sufficient to get connections started
properly.
o Ubiquitous sender interface rate bursts caused by efficiency
algorithms. We assume 4 packet bursts to be the most common case,
since it matches the effects of delayed ACK during slowstart.
These bursts should be assumed not to significantly affect packet
transfer statistics.
o Infrequent sender interface rate bursts that are full
target_window_size. Target_run_length may be derated for these
large fast bursts.

If a subpath can meet the required packet loss ratio for bursts at
all of these scales then it has sufficient buffering at all potential
bottlenecks to tolerate any of the bursts that are likely introduced
by TCP or other transport protocols.
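
A minimal, non-normative sketch of the three burst scales, assuming
the burst sizes suggested above and a target_window_size supplied by
the model in Section 5.2, might look like:

   def burst_scales(target_window_size):
       # Illustrative (non-normative) summary of the three burst scales
       # listed above, as (label, burst size in packets) pairs.
       return [
           ("two level slowstart burst", target_window_size),
           ("frequent sender interface rate burst", 4),
           ("infrequent sender interface rate burst", target_window_size),
       ]

   print(burst_scales(88))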

4.2. Diagnostic Approach

A complete path of a given RTT and MTU, which are equal to or smaller
than the Target RTT and equal to or larger than the Target MTU
respectively, is expected to be able to attain a specified Bulk
Transport Capacity when all of the following conditions are met:

1. The IP capacity is above the Target Data Rate by sufficient
margin to cover all TCP/IP overheads. This can be confirmed by
the tests described in Section 8.1 or any number of IP capacity
tests adapted to implement MBM.
2. The observed packet transfer statistics are better than required
by a suitable TCP performance model (e.g. fewer packet losses or
ECN CE marks). See Section 8.1 or any number of low rate packet
loss tests outside of MBM.
3. There is sufficient buffering at the dominant bottleneck to
absorb a slowstart burst large enough to get the flow out of
slowstart at a suitable window size. See Section 8.3.
4. There is sufficient buffering in the front path to absorb and
smooth sender interface rate bursts at all scales that are likely
to be generated by the application, any channel arbitration in
the ACK path or any other mechanisms. See Section 8.4.
5. When there is a slowly rising standing queue at the bottleneck
the onset of packet loss has to be at an appropriate point (time
or queue depth) and progressive [RFC7567]. See Section 8.2.
6. When there is a standing queue at a bottleneck for a shared media
subpath (e.g. half duplex), there must be suitable bounds on the
interaction between ACKs and data, for example due to the channel
arbitration mechanism. See Section 8.2.4.

Note that conditions 1 through 4 require capacity tests for
validation, and thus may need to be monitored on an ongoing basis.
Conditions 5 and 6 require engineering tests, which are best
performed in controlled environments such as a bench test. They
won't generally fail due to load, but may fail in the field due to
configuration errors, etc. and should be spot checked.

We are developing a tool that can perform many of the tests described
here [MBMSource].

4.3. New requirements relative to RFC 2330

Model Based Metrics are designed to fulfill some additional
requirements that were not recognized at the time RFC 2330 was
written [RFC2330]. These missing requirements may have significantly
contributed to policy difficulties in the IP measurement space. Some

skipping to change at page 20, line 20

o Metrics must be vantage point invariant over a significant range
of measurement point choices, including off path measurement
points. The only requirements on MP selection should be that the
RTT between the MPs is below some reasonable bound, and that the
effects of the "test leads" connecting MPs to the subpath under
test can be calibrated out of the measurements. The latter might
be accomplished if the test leads are effectively ideal or their
properties can be deduced from the measurements between the MPs.
While many of the tests require that the test leads have at least
as much IP capacity as the subpath under test, some do not, for
example the Background Packet Transfer Tests described in
Section 8.1.3.
o Metric measurements must be repeatable by multiple parties with no
specialized access to MPs or diagnostic infrastructure. It must
be possible for different parties to make the same measurement and
observe the same results. In particular it is specifically
important that both a consumer (or their delegate) and ISP be able
to perform the same measurement and get the same result. Note
that vantage independence is key to meeting this requirement.

5. Common Models and Parameters

5.1. Target End-to-end parameters

The target end-to-end parameters are the Target Data Rate, Target RTT
and Target MTU as defined in Section 3. These parameters are
determined by the needs of the application or the ultimate end user
and the complete Internet path over which the application is expected
to operate. The target parameters are in units that make sense to
upper layers: payload bytes delivered to the application, above TCP.
They exclude overheads associated with TCP and IP headers,
retransmits and other protocols (e.g. DNS).

Other end-to-end parameters defined in Section 3 include the
effective bottleneck data rate, the sender interface data rate and
the TCP and IP header sizes.

The target_data_rate must be smaller than all subpath IP capacities
by enough headroom to carry the transport protocol overhead,
explicitly including retransmissions and an allowance for
fluctuations in TCP's actual data rate. Specifying a
target_data_rate with insufficient headroom is likely to result in
brittle measurements having little predictive value.

Note that the target parameters can be specified for a hypothetical
path, for example to construct a TIDS designed for bench testing in
the absence of a real application; or for a live in situ test of
production infrastructure.

The number of concurrent connections is explicitly not a parameter to
this model. If a subpath requires multiple connections in order to
meet the specified performance, that must be stated explicitly and
the procedure described in Section 6.4 applies.

5.2. Common Model Calculations

skipping to change at page 21, line 30

target_window_size and the reference target_run_length.

The target_window_size is the average window size in packets needed
to meet the target_rate, for the specified target_RTT and target_MTU.
It is given by:

target_window_size = ceiling( target_rate * target_RTT / ( target_MTU
- header_overhead ) )
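
The same calculation can be expressed as a short, non-normative
Python sketch; the units (bits per second, seconds, bytes) and the
example values are illustrative assumptions:

   import math

   def target_window_size(target_rate_bps, target_rtt_s,
                          target_mtu_bytes, header_overhead_bytes):
       # target_window_size = ceiling( target_rate * target_RTT /
       #                               ( target_MTU - header_overhead ) )
       payload_bits = (target_mtu_bytes - header_overhead_bytes) * 8
       return math.ceil(target_rate_bps * target_rtt_s / payload_bits)

   # Example: 10 Mb/s over a 100 ms path, 1500 byte MTU and 64 bytes
   # of TCP/IP headers (all illustrative values).
   print(target_window_size(10e6, 0.1, 1500, 64))   # -> 88 packets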

Target_run_length is an estimate of the minimum required number of
unmarked packets that must be delivered between losses or ECN
Congestion Experienced (CE) marks, as computed by a mathematical
model of TCP congestion control. The derivation here follows
[MSMO97], and by design is quite conservative.

Reference target_run_length is derived as follows: assume the
subpath_IP_capacity is infinitesimally larger than the
target_data_rate plus the required header_overhead. Then
target_window_size also predicts the onset of queueing. A larger
window will cause a standing queue at the bottleneck.

Assume the transport protocol is using standard Reno style Additive
Increase, Multiplicative Decrease (AIMD) congestion control [RFC5681]
(but not Appropriate Byte Counting [RFC3465]) and the receiver is

skipping to change at page 22, line 12

Following [MSMO97] the number of packets between losses must be the
area under the AIMD sawtooth. They must be no more frequent than
every 1 in ((3/2)*target_window_size)*(2*target_window_size) packets,
which simplifies to:

target_run_length = 3*(target_window_size^2)
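
Continuing the same non-normative sketch, the reference
target_run_length follows directly from target_window_size:

   def reference_target_run_length(target_window_size_packets):
       # Reference model: at least 3 * target_window_size^2 packets
       # must be delivered between losses or ECN CE marks.
       return 3 * target_window_size_packets ** 2

   # Continuing the illustrative example above (88 packet window):
   print(reference_target_run_length(88))   # -> 23232 packets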

Note that this calculation is very conservative and is based on a
number of assumptions that may not apply. Appendix A discusses these
assumptions and provides some alternative models. If a different
model is used, a fully specified TIDS or FSTIDS MUST document the
actual method for computing target_run_length and the ratio between
the alternate target_run_length and the reference target_run_length
calculated above, along with a discussion of the rationale for the
underlying assumptions.

These two parameters, target_window_size and target_run_length,
directly imply most of the individual parameters for the tests in
Section 8.

5.3. Parameter Derating

Since some aspects of the models are very conservative, the MBM
framework permits some latitude in derating test parameters. Rather
than trying to formalize more complicated models we permit some test
parameters to be relaxed as long as they meet some additional
procedural constraints:

o The TIDS or FSTIDS MUST document and justify the actual method
used to compute the derated metric parameters.
o The validation procedures described in Section 10 must be used to
demonstrate the feasibility of meeting the Target Transport
Performance with infrastructure that infinitesimally passes the
derated tests.
o The validation process for a FSTIDS itself must be documented in
such a way that other researchers can duplicate the validation
experiments.

Except as noted, all tests below assume no derating. Tests where
there is not currently a well established model for the required
parameters explicitly include derating as a way to indicate
flexibility in the parameters.

5.4. Test Preconditions

Many tests have preconditions which are required to assure their
validity. Examples include: the presence or nonpresence of cross
traffic on specific subpaths; negotiating ECN; and appropriate
preloading to put reactive network elements into the proper states
[RFC7312]. If preconditions are not properly satisfied for some
reason, the tests should be considered to be inconclusive. In
general it is useful to preserve diagnostic information as to why the
preconditions were not met, and any test data that was collected even
if it is not useful for the intended test. Such diagnostic
information and partial test data may be useful for improving the
test in the future.

It is important to preserve the record that a test was scheduled,
because otherwise precondition enforcement mechanisms can introduce
sampling bias. For example, canceling tests due to cross traffic on
subscriber access links might introduce sampling bias in tests of the
rest of the network by reducing the number of tests during peak
network load.

Test preconditions and failure actions MUST be specified in a FSTIDS.

6. Generating test streams

Many important properties of Model Based Metrics, such as vantage
independence, are a consequence of using test streams that have
temporal structures that mimic TCP or other transport protocols
running over a complete path. As described in Section 4.1, self
clocked protocols naturally have burst structures related to the RTT
and pipe size of the complete path. These bursts naturally get
larger (contain more packets) as either the Target RTT or Target Data
Rate get larger, or the Target MTU gets smaller. An implication of
these relationships is that test streams generated by running self
clocked protocols over short subpaths may not adequately exercise the
queueing at any bottleneck to determine if the subpath can support
the full Target Transport Performance over the complete path.

Failing to authentically mimic TCP's temporal structure is part of
the reason why simple performance tools such as iperf, netperf, nc,
etc. have the reputation of yielding false pass results over short
test paths, even when some subpath has a flaw.

The definitions in Section 3 are sufficient for most test streams.
We describe the slowstart and standing queue test streams in more
detail.

In conventional measurement practice stochastic processes are used to
eliminate many unintended correlations and sample biases. However
MBM tests are designed to explicitly mimic temporal correlations
caused by network or protocol elements themselves and are intended to
accurately reflect implementation behavior. Some portions of the
system, such as traffic arrival (test scheduling), are naturally
stochastic. Other details, such as protocol processing times, are
technically nondeterministic and might be modeled stochastically, but
are only a tiny part of the overall behavior, which is dominated by
implementation specific deterministic effects. Furthermore, it is
known that sampling bias is a real problem for some protocol
implementations. For example, TCP's RTT estimator, used to determine
the Retransmit Time Out (RTO), can be fooled by periodic cross
traffic or start-stop applications.

At some point in the future it may make sense to introduce fine
grained noise sources into the models used for generating test
streams, but they are not warranted at this time.

6.1. Mimicking slowstart

TCP slowstart has a two level burst structure as shown in Figure 2.
The fine structure is caused by the interaction between the ACK clock
and TCP efficiency algorithms. Each ACK passing through the return
path triggers a small data burst. These bursts are typically full
sender interface rate, with the same headway as the returning ACKs,
but having twice as much data as the ACK reported was delivered to
the receiver. Due to variations in delayed ACK and algorithms such
as Appropriate Byte Counting [RFC3465], different pairs of senders
and receivers produce different burst patterns. Without loss of
generality, we assume each ACK causes 4 packet bursts at an average
headway equal to the ACK headway, corresponding to sending at an
average rate equal to twice the effective bottleneck IP rate. This
fine structure defines one slowstart rate burst.

For a transport protocol the slowstart bursts are repeated every
target_RTT. Each slowstart burst is twice as large as the previous
burst, and slowstart ends on the first lost packet or ECN mark. For
the diagnostic tests described below we preserve the fine structure
but manipulate the burst size and headway to measure the ability of
the dominant bottleneck to absorb and smooth slowstart bursts.

Note that a stream of repeated slowstart bursts has three different
average rates, depending on the averaging interval. At the finest
time scale (a few packet times at the sender interface) the peak of
the average IP rate is the same as the sender interface rate; at a
medium timescale (a few packet times at the dominant bottleneck) the
peak of the average IP rate is twice the implied bottleneck IP
capacity; and at time scales longer than the target_RTT and when the
burst size is equal to the target_window_size the average rate is
equal to the target_data_rate. This pattern corresponds to repeating
the last RTT of TCP slowstart when delayed ACK and sender side byte
counting are present but without the limits specified in Appropriate
Byte Counting [RFC3465].

time --> ( - = one packet)

Packet stream:

---- ---- ---- ---- ---- ---- ---- ...

|<>| sender interface rate bursts (typically 3 or 4 packets)
|<--->| burst headway (determined by ACK headway)
|<------------------------>| slowstart burst size (from the window)
|<---------------------------------------------->| slowstart headway
\____________ _____________/ \______ __ ...
V V
One slowstart burst Repeated slowstart bursts

Multiple levels of Slowstart Bursts

Figure 2
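
A non-normative sketch of a generator for this two level timing
structure is shown below; it assumes 4 packet sub-bursts and MTU
sized packets, and the function name and example values are
illustrative only:

   def slowstart_burst_times(burst_size, mtu_bytes, sender_rate_bps,
                             bottleneck_rate_bps, target_rtt_s, repeats):
       # Packet send times (seconds) for repeated slowstart bursts:
       # 4 packet sub-bursts at sender interface rate, spaced so that
       # the average rate within each slowstart burst is twice the
       # implied bottleneck IP capacity, repeated every target_RTT.
       pkt_time = mtu_bytes * 8 / sender_rate_bps        # back-to-back spacing
       sub_headway = 4 * mtu_bytes * 8 / (2 * bottleneck_rate_bps)
       sub_headway = max(sub_headway, 4 * pkt_time)      # cap at interface rate
       times = []
       for r in range(repeats):
           t0 = r * target_rtt_s
           for i in range(burst_size):
               times.append(t0 + (i // 4) * sub_headway + (i % 4) * pkt_time)
       return times

   # Example: 88 packet slowstart bursts of 1500 byte packets, 1 Gb/s
   # sender interface, 10 Mb/s implied bottleneck IP capacity, repeated
   # every 100 ms (illustrative values only).
   schedule = slowstart_burst_times(88, 1500, 1e9, 10e6, 0.1, 3)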

6.2. Constant window pseudo CBR

Implement pseudo constant bit rate by running a standard protocol
such as TCP with a fixed window size, such that it is self clocked.
Data packets arriving at the receiver trigger acknowledgements (ACKs)
which travel back to the sender where they trigger additional
transmissions. The window size is computed from the target_data_rate
and the actual RTT of the test path. The rate is only maintained in
average over each RTT, and is subject to limitations of the transport
protocol.

Since the window size is constrained to be an integer number of
packets, for small RTTs or low data rates there may not be
sufficiently precise control over the data rate. Rounding the window
size up (the default) is likely to result in data rates that are
higher than the target rate, but reducing the window by one packet
may result in data rates that are too small. Also cross traffic
potentially raises the RTT, implicitly reducing the rate. Cross
traffic that raises the RTT nearly always makes the test more
strenuous. A FSTIDS specifying a constant window CBR test MUST
explicitly indicate under what conditions errors in the data rate
cause tests to be inconclusive.

Since constant window pseudo CBR testing is sensitive to RTT
fluctuations it is less accurate at controlling the data rate in
environments with fluctuating delays.
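
The window computation and the rate quantization discussed above can
be illustrated with a small, non-normative Python sketch; the example
values are illustrative assumptions:

   import math

   def constant_window(target_rate_bps, test_rtt_s, mtu_bytes, header_bytes):
       # Fixed window (in packets) for constant window pseudo CBR, plus
       # the average data rates implied by rounding the window up (the
       # default) or down by one packet.
       payload_bits = (mtu_bytes - header_bytes) * 8
       window = math.ceil(target_rate_bps * test_rtt_s / payload_bits)
       rate = lambda w: w * payload_bits / test_rtt_s
       return window, rate(window), rate(window - 1)

   # Example: 5 Mb/s target over a 10 ms test path with 1500 byte MTU
   # and 64 byte headers (illustrative values): the one packet window
   # granularity causes a visible quantization of the achieved rate.
   print(constant_window(5e6, 0.010, 1500, 64))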

6.3. Scanned window pseudo CBR

Scanned window pseudo CBR is similar to the constant window CBR
described above, except the window is scanned across a range of sizes
designed to include two key events, the onset of queueing and the
onset of packet loss or ECN CE marks. The window is scanned by
incrementing it by one packet every 2*target_window_size delivered
packets. This mimics the additive increase phase of standard Reno
TCP congestion avoidance when delayed ACKs are in effect. Normally
the window increases are separated by intervals slightly longer than
twice the target_RTT.

There are two ways to implement this test: one built by applying a
window clamp to standard congestion control in a standard protocol
such as TCP and the other built by stiffening a non-standard
transport protocol. When standard congestion control is in effect,
any losses or ECN CE marks cause the transport to revert to a window
smaller than the clamp such that the scanning clamp loses control of
the window size. The NPAD pathdiag tool is an example of this class
of algorithms [Pathdiag].

Alternatively a non-standard congestion control algorithm can respond
to losses by transmitting extra data, such that it maintains the
specified window size independent of losses or ECN CE marks. Such a
stiffened transport explicitly violates mandatory Internet congestion
control [RFC5681] and is not suitable for in situ testing. It is
only appropriate for engineering testing under laboratory conditions.
The Windowed Ping tool implements such a test [WPING]. The tool
described in the paper has been updated [mpingSource].

The test procedures in Section 8.2 describe how to partition the
scans into regions and how to interpret the results.
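
A minimal, non-normative sketch of the scan schedule, assuming the
increment rule above, might look like:

   def scanned_window_schedule(start_window, end_window, target_window_size):
       # Sketch of the scan described above: the window is raised by one
       # packet after every 2*target_window_size delivered packets,
       # mimicking Reno additive increase with delayed ACKs.
       schedule = []
       delivered = 0
       for window in range(start_window, end_window + 1):
           schedule.append((delivered, window))
           delivered += 2 * target_window_size
       return schedule

   # Example: scan from just below the onset of queueing up past the
   # onset of loss (illustrative window range and target_window_size).
   print(scanned_window_schedule(80, 84, 88))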

6.4. Concurrent or channelized testing

The procedures described in this document are only directly

skipping to change at page 27, line 17

There are a number of reasons to want to specify performance in terms
of multiple concurrent flows, however this approach is not
recommended for data rates below several megabits per second, which
can be attained with run lengths under 10000 packets on many paths.
Since the required run length goes as the square of the data rate, at
higher rates the run lengths can be unreasonably large, and multiple
flows might be the only feasible approach.
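
As a rough illustration of this quadratic scaling, the following
sketch computes the required run length from the single-stream model
target_run_length = 3*(target_window_size)^2 (see Section 7.2),
assuming an example 50 ms RTT, 1500 byte MTU and 64 bytes of header
overhead; these are illustrative values, not normative parameters.

   # Sketch: target_run_length grows with the square of the data rate
   # under the single stream model.  The RTT, MTU and header overhead
   # below are assumed example values.
   from math import ceil

   def target_run_length(rate_bps, rtt_s=0.05, mtu=1500, overhead=64):
       payload = mtu - overhead                       # payload bytes per packet
       window = ceil(rate_bps / 8 * rtt_s / payload)  # target_window_size, packets
       return 3 * window ** 2                         # packets between marks

   for mbps in (2.5, 10, 100, 1000):
       print(mbps, "Mb/s ->", target_run_length(mbps * 1e6), "packets")

At 2.5 Mb/s this yields the 363 packet run length used in the example
of Section 9, while at 1 Gb/s it exceeds 50 million packets, which is
why multiple flows may be the only feasible approach at high rates.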

If multiple flows are deemed necessary to meet aggregate performance
targets then this MUST be stated both in the design of the TIDS and
in any claims about network performance. The IP diagnostic tests MUST
be performed concurrently with the specified number of connections.
For the tests that use bursty test streams, the bursts should be
synchronized across streams.

7. Interpreting the Results

7.1. Test outcomes

To perform an exhaustive test of a complete network path, each test
of the TIDS is applied to each subpath of the complete path. If any
subpath fails any test then a standard transport protocol running
over the complete path can also be expected to fail to attain the
Target Transport Performance under some conditions.

In addition to passing or failing, a test can be deemed to be
inconclusive for a number of reasons. Proper instrumentation and
treatment of inconclusive outcomes is critical to the accuracy and
robustness of Model Based Metrics. Tests can be inconclusive if the
precomputed traffic pattern or data rates were not accurately
generated, the measurement results were not statistically
significant, or for other causes such as failing to meet some
required preconditions for the test. See Section 5.4.

For example, consider a test that implements Constant Window Pseudo
CBR (Section 6.2) by adding rate controls and detailed IP packet
transfer instrumentation to TCP (e.g. [RFC4898]). TCP includes
built-in control systems which might interfere with the sending data
rate. If such a test meets the required packet transfer statistics
(e.g. run length) while failing to attain the specified data rate it
must be treated as an inconclusive result, because we cannot
determine a priori whether the reduced data rate was caused by a TCP
problem or a network problem, or whether the reduced data rate had a
material effect on the observed packet transfer statistics.

Note that for capacity tests, if the observed packet transfer
statistics meet the statistical criteria for failing (accepting
hypothesis H1 in Section 7.2), the test can be considered to have
failed because it doesn't really matter that the test didn't attain
the required data rate.

The really important new properties of MBM, such as vantage
independence, are a direct consequence of opening the control loops
in the protocols, such that the test stream does not depend on
network conditions or IP packets received. Any mechanism that
introduces feedback between the path's measurements and the test
stream generation is at risk of introducing nonlinearities that spoil
these properties. Any exceptional event that indicates that such
feedback has happened should cause the test to be considered
inconclusive.

One way to view inconclusive tests is that they reflect situations
where a test outcome is ambiguous between limitations of the network
and some unknown limitation of the IP diagnostic test itself, which
may have been caused by some uncontrolled feedback from the network.

Note that procedures that attempt to sweep the target parameter space
to find the limits on some parameter such as target_data_rate are at
risk of breaking the location independent properties of Model Based
Metrics, if any part of the boundary between passing and inconclusive
results is sensitive to RTT (which is normally the case).

One of the goals for evolving TIDS designs will be to keep sharpening
the distinction between inconclusive, passing and failing tests. The
criteria for passing, failing and inconclusive tests MUST be
explicitly stated for every test in the TIDS or FSTIDS.

One of the goals of evolving the testing process, procedures, tools
and measurement point selection should be to minimize the number of
inconclusive tests.

It may be useful to keep raw packet transfer statistics and ancillary
metrics [RFC3148] for deeper study of the behavior of the network
path and to measure the tools themselves. Raw packet transfer
statistics can help to drive tool evolution. Under some conditions
it might be possible to reevaluate the raw data for satisfying
alternate Target Transport Performance. However it is important to
guard against sampling bias and other implicit feedback which can
cause false results and exhibit measurement point vantage
sensitivity. Simply applying different delivery criteria based on a
different Target Transport Performance is insufficient if the test
traffic patterns (bursts, etc.) do not match the alternate Target
Transport Performance.

7.2. Statistical criteria for estimating run_length

When evaluating the observed run_length, we need to determine
appropriate packet stream sizes and acceptable error levels for
efficient measurement. In practice, can we compare the empirically
estimated packet loss and ECN Congestion Experienced (CE) marking
ratios with the targets as the sample size grows? How large a sample
is needed to say that the measurements of packet transfer indicate a
particular run length is present?

The generalized measurement can be described as recursive testing:
send packets (individually or in patterns) and observe the packet
delivery performance (packet loss ratio or other metric, including
any marking we define).

As each packet is sent and measured, we have an ongoing estimate of
the performance in terms of the ratio of packet losses or ECN CE
marks to total packets (i.e. an empirical probability). We continue
to send until conditions support a conclusion or a maximum sending
limit has been reached.

We have a target_mark_probability, 1 mark per target_run_length,
where a "mark" is defined as a lost packet, a packet with an ECN CE
mark, or other signal. This constitutes the null Hypothesis:

   H0: no more than one mark in target_run_length =
       3*(target_window_size)^2 packets

and we can stop sending packets if on-going measurements support
accepting H0 with the specified Type I error = alpha (= 0.05 for
example).

We also have an alternative Hypothesis to evaluate: if performance is
significantly lower than the target_mark_probability. Based on

skipping to change at page 30, line 10

preferring the alternate hypothesis H1.

H0 and H1 constitute the Success and Failure outcomes described
elsewhere in the memo, and while the ongoing measurements do not
support either hypothesis, the current status of measurements is
inconclusive.

The problem above is formulated to match the Sequential Probability
Ratio Test (SPRT) [StatQC]. Note that as originally framed the
events under consideration were all manufacturing defects. In
networking, ECN CE marks and lost packets are not defects but
signals, indicating that the transport protocol should slow down.

The Sequential Probability Ratio Test also starts with a pair of
hypotheses specified as above:

   H0: p0 = one defect in target_run_length
   H1: p1 = one defect in target_run_length/4

As packets are sent and measurements collected, the tester evaluates
the cumulative defect count against two boundaries representing H0
Acceptance or Rejection (and acceptance of H1):
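
For illustration only, the following sketch applies Wald's classical
SPRT decision rule to these two hypotheses, assuming Type I and Type
II error levels alpha = beta = 0.05; a real tool should use the
boundary formulas specified in the text and in [StatQC].

   # Illustrative Wald SPRT for the hypotheses above.
   # alpha = beta = 0.05 are assumed example error levels.
   import math

   def sprt_status(packets, marks, target_run_length,
                   alpha=0.05, beta=0.05):
       p0 = 1.0 / target_run_length   # H0: one mark per target_run_length
       p1 = 4.0 / target_run_length   # H1: one mark per target_run_length/4
       # log likelihood ratio of the observations under H1 vs H0
       llr = (marks * math.log(p1 / p0) +
              (packets - marks) * math.log((1 - p1) / (1 - p0)))
       if llr >= math.log((1 - beta) / alpha):
           return "fail"           # accept H1: run length worse than target
       if llr <= math.log(beta / (1 - alpha)):
           return "pass"           # accept H0: run length meets the target
       return "inconclusive"       # keep sending test packets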

skipping to change at page 31, line 26

technologies such as multi threaded routing lookups and Equal Cost
MultiPath (ECMP) routing. These techniques increase parallelism in
the network and are critical to enabling overall Internet growth to
exceed Moore's Law.

Note that transport retransmission strategies can trade off
reordering tolerance vs how quickly they can repair losses vs
overhead from spurious retransmissions. In advance of new
retransmission strategies we propose the following strawman:

Transport protocols should be able to adapt to reordering as long as
the reordering extent is not more than the maximum of one quarter
window or 1 ms, whichever is larger. Within this limit on reorder
extent, there should be no bound on reordering density.

By implication, reordering which is less than these bounds should not
be treated as a network impairment. However [RFC4737] still applies:
reordering should be instrumented and the maximum reordering that can
be properly characterized by the test (e.g. bound on history buffers)
should be recorded with the measurement results.

Reordering tolerance and diagnostic limitations, such as the size of
the history buffer used to diagnose packets that are way
out-of-order, MUST be specified in a FSTIDS.

8. IP Diagnostic Tests

The IP diagnostic tests below are organized by traffic pattern: basic
data rate and packet transfer statistics, standing queues, slowstart
bursts, and sender rate bursts. We also introduce some combined
tests which are more efficient when networks are expected to pass,
but conflate diagnostic signatures when they fail.

There are a number of test details which are not fully defined here.
They must be fully specified in a FSTIDS. From a standardization
perspective, this lack of specificity will weaken this version of
Model Based Metrics, however it is anticipated that this will be more
than offset by the extent to which MBM suppresses the problems caused
by using transport protocols for measurement. For example,
non-specific MBM metrics are likely to have better repeatability than
many existing BTC-like metrics. Once we have good field experience,
the missing details can be fully specified.

8.1. Basic Data Rate and Packet Transfer Tests

We propose several versions of the basic data rate and packet
transfer statistics test. All measure the number of packets
delivered between losses or ECN Congestion Experienced (CE) marks,
using a data stream that is rate controlled at or below the
target_data_rate.

The tests below differ in how the data rate is controlled. The data
can be paced on a timer, or window controlled at full Target Data
Rate. The first two tests implicitly confirm that sub_path has
sufficient raw capacity to carry the target_data_rate. They are
recommended for relatively infrequent testing, such as an
installation or periodic auditing process. The third, background
packet transfer statistics, is a low rate test designed for ongoing
monitoring for changes in subpath quality.

All rely on the data receiver accumulating packet transfer statistics
as described in Section 7.2 to score the outcome:

Pass: it is statistically significant that the observed interval
between losses or ECN CE marks is larger than the target_run_length.

Fail: it is statistically significant that the observed interval
between losses or ECN CE marks is smaller than the target_run_length.

A test is considered to be inconclusive if it failed to meet the data
rate as specified below, failed to meet the qualifications defined in
Section 5.4, or neither run length statistical hypothesis was
confirmed in the allotted test duration.

8.1.1. Delivery Statistics at Paced Full Data Rate

Confirm that the observed run length is at least the
target_run_length while relying on a timer to send data at the
target_rate using the procedure described in Section 6.1 with a
burst size of 1 (single packets) or 2 (packet pairs).

The test is considered to be inconclusive if the packet transmission
cannot be accurately controlled for any reason.

RFC 6673 [RFC6673] is appropriate for measuring packet transfer
statistics at full data rate.
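
A minimal sketch of such a paced sender, assuming a hypothetical
send_packet() primitive and treating target_data_rate as an IP-layer
rate in bits per second; real tools need much more careful
timekeeping to pace accurately at high rates.

   # Sketch: timer-paced test stream at target_data_rate with a burst
   # size of 1 or 2 packets.  send_packet() is a hypothetical primitive.
   import time

   def paced_stream(send_packet, rate_bps, mtu=1500, burst=1,
                    duration_s=10.0):
       headway = burst * mtu * 8 / rate_bps      # seconds per burst
       next_send = time.monotonic()
       end = next_send + duration_s
       while next_send < end:
           for _ in range(burst):
               send_packet(mtu)
           next_send += headway
           time.sleep(max(0.0, next_send - time.monotonic()))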

8.1.2. Delivery Statistics at Full Data Windowed Rate

Confirm that the observed run length is at least the
target_run_length while sending at an average rate approximately
equal to the target_data_rate, by controlling (or clamping) the
window size of a conventional transport protocol to a fixed value
computed from the properties of the test path, typically
test_window=target_data_rate*test_path_RTT/target_MTU. Note that if
there is any interaction between the forward and return path,
test_window may need to be adjusted slightly to compensate for the
resulting inflated RTT. However see the discussion in Section 8.2.4.
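
As an illustration, a sketch of the window clamp computation, under
the assumptions that the data rate is given in bits per second and
that the clamp is rounded up to a whole number of packets; a FSTIDS
would pin these details down.

   # Illustrative test_window computation for the windowed rate test.
   from math import ceil

   def test_window(rate_bps, test_path_rtt_s, mtu=1500):
       bytes_per_rtt = rate_bps / 8 * test_path_rtt_s
       return ceil(bytes_per_rtt / mtu)      # window clamp in packets

   # Example: 2.5 Mb/s over a 10 ms test path gives a clamp of 3 packets.
   print(test_window(2.5e6, 0.010))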

Since losses and ECN CE marks cause transport protocols to reduce
their data rates, this test is expected to be less precise about
controlling its data rate. It should not be considered inconclusive
as long as at least some of the round trips reached the full
target_data_rate without incurring losses or ECN CE marks. To pass
this test the network MUST deliver target_window_size packets in
target_RTT time without any losses or ECN CE marks at least once per
two target_window_size round trips, in addition to meeting the run
length statistical test.

8.1.3. Background Packet Transfer Statistics Tests

The background run length is a low rate version of the target rate
test above, designed for ongoing lightweight monitoring for changes
in the observed subpath run length without disrupting users. It
should be used in conjunction with one of the above full rate tests
because it does not confirm that the subpath can support the raw data
rate.

RFC 6673 [RFC6673] is appropriate for measuring background packet
transfer statistics.

8.2. Standing Queue Tests

These engineering tests confirm that the bottleneck is well behaved
across the onset of packet loss, which typically follows after the
onset of queueing. Well behaved generally means lossless for
transient queues, but once the queue has been sustained for a
sufficient period of time (or reaches a sufficient queue depth) there
should be a small number of losses to signal to the transport
protocol that it should reduce its window. Losses that are too early

skipping to change at page 34, line 15

the window) at the onset of congestion make loss recovery problematic
for the transport protocol. Non-linear, erratic or excessive RTT
increases suggest poor interactions between the channel acquisition
algorithms and the transport self clock. All of the tests in this
section use the same basic scanning algorithm, described here, but
score the link or subpath on the basis of how well it avoids each of
these problems.

For some technologies the data might not be subject to increasing
delays, in which case the data rate will vary with the window size
all the way up to the onset of load induced packet loss or ECN CE
marks. For these technologies, the discussion of queueing does not
apply, but it is still required that the onset of losses or ECN CE
marks be at an appropriate point and progressive.

Use the procedure in Section 6.3 to sweep the window across the onset
of queueing and the onset of loss. The tests below all assume that
the scan emulates standard additive increase and delayed ACK by
incrementing the window by one packet for every 2*target_window_size
packets delivered. A scan can typically be divided into three
regions: below the onset of queueing, a standing queue, and at or
beyond the onset of loss.
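
The following sketch shows only the scan schedule, assuming a
hypothetical deliver_window() primitive that transmits one window of
packets and reports how many were delivered together with the
measured RTT; scoring of the three regions is done separately.

   # Sketch of the standing queue scan: emulate additive increase with
   # delayed ACKs by growing the window one packet per
   # 2*target_window_size packets delivered.  deliver_window() is a
   # hypothetical primitive returning (packets_delivered, rtt_seconds).
   def standing_queue_scan(deliver_window, start_window, max_window,
                           target_window_size):
       window = start_window
       delivered_since_increase = 0
       samples = []                    # (window, rtt) pairs for scoring
       while window <= max_window:
           delivered, rtt = deliver_window(window)
           if delivered == 0:
               break                   # scan stalled; treat as inconclusive
           samples.append((window, rtt))
           delivered_since_increase += delivered
           if delivered_since_increase >= 2 * target_window_size:
               window += 1
               delivered_since_increase = 0
       return samples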

Below the onset of queueing the RTT is typically fairly constant, and

skipping to change at page 35, line 12

dominated by the behavior of the transport protocol itself. For the
stiffened transport protocol case (with non-standard, aggressive
congestion control algorithms) the details of periodic losses will be
dominated by how the window increase function responds to loss.

8.2.1. Congestion Avoidance

A subpath passes the congestion avoidance standing queue test if more
than target_run_length packets are delivered between the onset of
queueing (as determined by the window with the maximum network power)
and the first loss or ECN CE mark. If this test is implemented using
a standard congestion control algorithm with a clamp, it can be
performed in situ in the production Internet as a capacity test. For
an example of such a test see [Pathdiag].

For technologies that do not have conventional queues, use the
test_window in place of the onset of queueing. That is, a subpath
passes the congestion avoidance standing queue test if more than
target_run_length packets are delivered between the start of the scan
at test_window and the first loss or ECN CE mark.

8.2.2. Bufferbloat

This test confirms that there is some mechanism to limit buffer
occupancy (e.g. that prevents bufferbloat). Note that this is not
strictly a requirement for single stream bulk transport capacity,
however if there is no mechanism to limit buffer queue occupancy then
a single stream with sufficient data to deliver is likely to cause
the problems described in [RFC7567] and [wikiBloat]. This may cause
only minor symptoms for the dominant flow, but has the potential to
make the subpath unusable for other flows and applications.

Pass if the onset of loss occurs before a standing queue has
introduced more delay than twice the target_RTT, or another well
defined and specified limit. Note that there is not yet a model for
how much standing queue is acceptable. The factor of two chosen here
reflects a rule of thumb. In conjunction with the previous test,
this test implies that the first loss should occur at a queueing
delay which is between one and two times the target_RTT.

Specified RTT limits that are larger than twice the target_RTT must
be fully justified in the FSTIDS.
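
For illustration, a sketch of this pass criterion applied to the RTT
observed at the first loss during the scan, where base_rtt (the RTT
before any queue forms) is assumed to come from the early part of the
scan.

   # Sketch of the bufferbloat pass criterion: the first loss must occur
   # before the standing queue adds more than about 2*target_RTT of delay.
   def bufferbloat_pass(rtt_at_first_loss_s, base_rtt_s, target_rtt_s,
                        limit_factor=2.0):
       standing_queue_delay = rtt_at_first_loss_s - base_rtt_s
       return standing_queue_delay <= limit_factor * target_rtt_s

   # Example: first loss at 110 ms RTT on a 20 ms path with a 50 ms
   # target_RTT: 90 ms of standing queue is within the 100 ms limit.
   print(bufferbloat_pass(0.110, 0.020, 0.050))   # True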

8.2.3. Non excessive loss

This test confirms that the onset of loss is not excessive. Pass if
losses are equal to or less than the increase in the cross traffic
plus the test stream window increase on the previous RTT. This could
be restated as non-decreasing subpath throughput at the onset of
loss, which is easy to meet as long as discarding packets is not more
expensive than delivering them. (Note that when there is a transient
drop in subpath throughput, outside of a standing queue test, a
subpath that passes other queue tests in this document will have
sufficient queue space to hold one RTT worth of data.)

Note that conventional Internet policers will not pass this test,
which is correct. TCP often fails to come into equilibrium at more
than a small fraction of the available capacity, if the capacity is
enforced by a policer. [Citation Pending].

8.2.4. Duplex Self Interference

This engineering test confirms a bound on the interactions between
the forward data path and the ACK return path.

Some historical half duplex technologies had the property that each
direction held the channel until it completely drained its queue.
When a self clocked transport protocol, such as TCP, has data and
ACKs passing in opposite directions through such a link, the behavior
often reverts to stop-and-wait. Each additional packet added to the
window raises the observed RTT by two packet times, once as it passes
through the data path, and once for the additional delay incurred by
the ACK waiting on the return path.

The duplex self interference test fails if the RTT rises by more than
a fixed bound above the expected queueing time computed from the
excess window divided by the subpath IP Capacity. This bound must be
smaller than target_RTT/2 to avoid reverting to stop and wait
behavior. (e.g. Data packets and ACKs both have to be released at
least twice per RTT.)
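
A sketch of this check, assuming the excess window is expressed in
bytes, the subpath IP capacity in bytes per second, and that the
fixed bound is chosen by the FSTIDS:

   # Sketch of the duplex self interference criterion.  Units and the
   # choice of bound are assumptions; the bound must stay below
   # target_RTT/2.
   def duplex_self_interference_ok(observed_rtt_s, base_rtt_s,
                                   excess_window_bytes,
                                   subpath_ip_capacity_Bps,
                                   bound_s, target_rtt_s):
       assert bound_s < target_rtt_s / 2, "bound must be below target_RTT/2"
       expected_queueing_s = excess_window_bytes / subpath_ip_capacity_Bps
       return (observed_rtt_s - base_rtt_s) <= expected_queueing_s + bound_s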

8.3. Slowstart tests

These tests mimic slowstart: data is sent at twice the effective
bottleneck rate to exercise the queue at the dominant bottleneck.

8.3.1. Full Window slowstart test

This is a capacity test to confirm that slowstart is not likely to
exit prematurely. Send slowstart bursts that are target_window_size
total packets.

Accumulate packet transfer statistics as described in Section 7.2 to
score the outcome. Pass if it is statistically significant that the
observed number of good packets delivered between losses or ECN CE
marks is larger than the target_run_length. Fail if it is
statistically significant that the observed interval between losses
or ECN CE marks is smaller than the target_run_length.

It is deemed inconclusive if the elapsed time to send the data burst
is not less than half of the time to receive the ACKs. (i.e. sending
data too fast is ok, but sending it slower than twice the actual
bottleneck rate as indicated by the ACKs is deemed inconclusive).
The headway for the slowstart bursts should be the target_RTT.
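
A sketch of this timing check, assuming the tool records when the
burst transmission started, when it finished, and when the last ACK
for the burst arrived:

   # Sketch of the inconclusive-timing rule for slowstart bursts: the
   # burst must be sent in less than half the time it takes to receive
   # the ACKs, i.e. at no less than twice the rate implied by the ACKs.
   def slowstart_burst_conclusive(burst_start_t, burst_send_done_t,
                                  last_ack_t):
       send_time = burst_send_done_t - burst_start_t
       ack_time = last_ack_t - burst_start_t
       return send_time < 0.5 * ack_time   # otherwise inconclusive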

Note that these are the same parameters as the Sender Full Window
burst test, except the burst rate is at slowstart rate, rather than
sender interface rate.

8.3.2. Slowstart AQM test

Do a continuous slowstart (send data continuously at twice the
implied IP bottleneck capacity), until the first loss, stop, allow
the network to drain and repeat, gathering statistics on how many
packets were delivered before the loss, the pattern of losses,
maximum observed RTT and window size. Justify the results. There is
not currently sufficient theory justifying requiring any particular
result, however design decisions that affect the outcome of this test
also affect how the network balances between long and short flows
(the "mice vs elephants" problem). The queueing delay at the time of
the first loss should be at least one half of the target_RTT.

This is an engineering test: it must be performed on a quiescent
network or testbed, since cross traffic has the potential to change
the results.

8.4. Sender Rate Burst tests

These tests determine how well the network can deliver bursts sent at
the sender's interface rate. Note that this test most heavily
exercises the front path, and is likely to include infrastructure
that may be out of scope for an access ISP, even though the bursts
might be caused by ACK compression, thinning or channel arbitration
in the access ISP. See Appendix B.

Also, there are several details that are not precisely defined. For
starters there is not a standard server interface rate. 1 Gb/s and
10 Gb/s are common today, but higher rates will become cost effective
and can be expected to be dominant some time in the future.

Current standards permit TCP to send full window bursts following an
application pause. (Congestion Window Validation [RFC2861] is not
required, but even if it was, it does not take effect until an
application pause is longer than an RTO.) Since full window bursts
are consistent with standard behavior, it is desirable that the
network be able to deliver such bursts, otherwise application pauses
will cause unwarranted losses. Note that the AIMD sawtooth requires
a peak window that is twice target_window_size, so the worst case
burst may be 2*target_window_size.

It is also understood in the application and serving community that
interface rate bursts have a cost to the network that has to be
balanced against other costs in the servers themselves. For example
TCP Segmentation Offload (TSO) reduces server CPU in exchange for
larger network bursts, which increase the stress on network buffer
memory. Some newer TCP implementations can pace traffic at scale
[TSO_pacing][TSO_fq_pacing]. It remains to be determined if and how
quickly these changes will be deployed.

There is not yet theory to unify these costs or to provide a
framework for trying to optimize global efficiency. We do not yet
have a model for how much the network should tolerate server rate
bursts. Some bursts must be tolerated by the network, but it is
probably unreasonable to expect the network to be able to efficiently
deliver all data as a series of bursts.

For this reason, this is the only test for which we encourage
derating. A TIDS could include a table of pairs of derating
parameters: burst sizes and how much each burst size is permitted to
reduce the run length, relative to the target_run_length.

8.5. Combined and Implicit Tests

Combined tests efficiently confirm multiple network properties in a
single test, possibly as a side effect of normal content delivery.
They require less measurement traffic than other testing strategies
at the cost of conflating diagnostic signatures when they fail.
These are by far the most efficient for monitoring networks that are
nominally expected to pass all tests.

8.5.1. Sustained Bursts Test

The sustained burst test implements a combined worst case version of
all of the capacity tests above. It is simply:

Send target_window_size bursts of packets at server interface rate
with target_RTT burst headway (burst start to burst start). Verify
that the observed packet transfer statistics meet the
target_run_length.
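
A minimal sketch of the sender side of this test, assuming a
hypothetical send_burst() primitive that transmits back-to-back
packets at the server interface rate; the outcome is scored with the
statistics machinery of Section 7.2.

   # Sketch of the sustained burst sender: one target_window_size burst
   # at interface rate every target_RTT (burst start to burst start).
   import time

   def sustained_burst_sender(send_burst, target_window_size,
                              target_rtt_s, duration_s):
       next_burst = time.monotonic()
       end = next_burst + duration_s
       while next_burst < end:
           send_burst(target_window_size)   # back-to-back packets
           next_burst += target_rtt_s
           time.sleep(max(0.0, next_burst - time.monotonic()))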

Key observations:

o The subpath under test is expected to go idle for some fraction of
  the time: (subpath_IP_capacity - target_rate*target_MTU/
  (target_MTU-header_overhead))/subpath_IP_capacity. Failing to do
  so indicates a problem with the procedure and an inconclusive test
  result.

o The burst sensitivity can be derated by sending smaller bursts
  more frequently. E.g. send target_window_size*derate packet
  bursts every target_RTT*derate.

o When not derated, this test is the most strenuous capacity test.

o A subpath that passes this test is likely to be able to sustain
  higher rates (close to subpath_IP_capacity) for paths with RTTs
  significantly smaller than the target_RTT.

o This test can be implemented with instrumented TCP [RFC4898],
  using a specialized measurement application at one end [MBMSource]
  and a minimal service at the other end [RFC0863] [RFC0864].

o This test is efficient to implement, since it does not require
  per-packet timers, and can make use of TSO in modern NIC hardware.

o If a subpath is known to pass the Standing Queue engineering tests
  (particularly that it has a progressive onset of loss at an
  appropriate queue depth), then the Sustained Burst Test is
  sufficient to assure that the subpath under test will not impair
  Bulk Transport Capacity at the target performance under all
  conditions. See Section 8.2 for a discussion of the standing
  queue tests.

Note that this test is clearly independent of the subpath RTT, or
other details of the measurement infrastructure, as long as the
measurement infrastructure can accurately and reliably deliver the
required bursts to the subpath under test.

8.5.2. Streaming Media

Model Based Metrics can be implicitly implemented as a side effect of
serving any non-throughput maximizing application, such as streaming
media, with some additional controls and instrumentation in the
servers. The essential requirement is that the data rate be
constrained such that even with arbitrary application pauses and
bursts the data rate and burst sizes stay within the envelope defined
by the individual tests described above.

If the application's serving_data_rate is less than or equal to the
target_data_rate and the serving_RTT (the RTT between the sender and
client) is less than the target_RTT, this constraint is most easily
implemented by clamping the transport window size to be no larger
than:

   serving_window_clamp=target_data_rate*serving_RTT/
      (target_MTU-header_overhead)

Under the above constraints the serving_window_clamp will limit both
the serving data rate and burst sizes to be no larger than specified
by the procedures in Section 8.1.2 and Section 8.4 or Section 8.5.1.
Since the serving RTT is smaller than the target_RTT, the worst case
bursts that might be generated under these conditions will be smaller
than called for by Section 8.4 and the sender rate burst sizes are
implicitly derated by the serving_window_clamp divided by the
target_window_size at the very least. (Depending on the application
behavior, the data might be significantly smoother than specified by
any of the burst tests.)
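
For illustration, the clamp computed for a hypothetical 2.5 Mb/s
service delivered over a 20 ms serving RTT, again assuming the data
rate is in bits per second and rounding up to whole packets:

   # Illustrative serving_window_clamp computation.
   from math import ceil

   def serving_window_clamp(rate_bps, serving_rtt_s, mtu=1500,
                            overhead=64):
       payload = mtu - overhead
       return ceil(rate_bps / 8 * serving_rtt_s / payload)

   # 2.5 Mb/s over a 20 ms serving RTT gives a clamp of 5 packets, well
   # under the 11 packet target_window_size of the example in Section 9.
   print(serving_window_clamp(2.5e6, 0.020))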

In an alternative implementation the data rate and bursts might be
explicitly controlled by a programmable traffic shaper or pacing at
the sender. This would provide better control over transmissions but
it is substantially more complicated to implement and would be likely
to have a higher CPU overhead.

Note that these techniques can be applied to any content delivery
that can be subjected to a reduced data rate in order to inhibit TCP
equilibrium behavior.

9. An Example

In this section we illustrate a TIDS designed to confirm that an
access ISP can reliably deliver HD video from multiple content
providers to all of their customers. With modern codecs, minimal HD
video (720p) generally fits in 2.5 Mb/s. Due to their geographical
size, network topology and modem designs the ISP determines that most
content is within a 50 ms RTT of their users (this is sufficient to
cover continental Europe or either US coast from a single serving
site).

                    2.5 Mb/s over a 50 ms path

              +----------------------+-------+---------+
              | End-to-End Parameter | value | units   |
              +----------------------+-------+---------+
              | target_rate          | 2.5   | Mb/s    |
              | target_RTT           | 50    | ms      |
              | target_MTU           | 1500  | bytes   |
              | header_overhead      | 64    | bytes   |
              | target_window_size   | 11    | packets |
              | target_run_length    | 363   | packets |
              +----------------------+-------+---------+

                              Table 1

Table 1 shows the default TCP model with no derating, and as such is
quite conservative. The simplest TIDS would be to use the sustained
burst test, described in Section 8.5.1. Such a test would send 11
packet bursts every 50 ms, confirming that there is no more than 1
packet loss per 33 bursts (363 total packets in 1.650 seconds).
Since this number represents the entire end-to-end loss budget,
independent subpath tests could be implemented by apportioning the
packet loss ratio across subpaths.  For example 50% of the losses
might be allocated to the access or last mile link to the user, 40%
to the interconnects with other ISPs and 1% to each internal hop
(assuming no more than 10 internal hops).  Then all of the subpaths
can be tested independently, and the spatial composition of passing
subpaths would be expected to be within the end-to-end loss budget.
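A sketch of this apportionment, assuming the loss budget is split
exactly as above and that each subpath's required run length scales
inversely with its share of the end-to-end budget (illustration
only):

   end_to_end_run_length = 363      # packets per loss, from Table 1
   shares = {"access": 0.50, "interconnect": 0.40, "internal_hop": 0.01}

   for subpath, share in shares.items():
       # A subpath allowed only `share` of the losses must exhibit a
       # proportionally longer run length between losses.
       print(subpath, round(end_to_end_run_length / share), "packets per loss")

   # access ~726; interconnect ~908 (the text below rounds this to
   # whole bursts: 82 bursts of 11 packets = 902); each internal hop ~36300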
Testing interconnects has generally been problematic: conventional
performance tests run between measurement points adjacent to either
side of the interconnect are not generally useful.  Unconstrained
TCP tests, such as iperf [iperf], are usually overly aggressive
because the RTT is so small (often less than 1 ms).  With a short RTT
these tools are likely to report inflated numbers because for short
RTTs these tools can tolerate very high packet loss ratios and can
push other cross traffic off of the network.  As a consequence they
are useless for predicting actual user performance, and may
themselves be quite disruptive.  Model Based Metrics solve this
problem: the same test pattern as used on other subpaths can be
applied to the interconnect.  For our example, when apportioned 40%
of the losses, 11 packet bursts sent every 50 ms should have fewer
than one loss per 82 bursts (902 packets).
10.  Validation

Since some aspects of the models are likely to be too conservative,
Section 5.2 permits alternate protocol models and Section 5.3 permits
test parameter derating.  If either of these techniques is used, we
require demonstrations that such a TIDS can robustly detect subpaths
that will prevent authentic applications using state-of-the-art
protocol implementations from meeting the specified Target Transport
Performance.  This correctness criterion is potentially difficult to
prove, because it implicitly requires validating a TIDS against all
possible paths and subpaths.  The procedures described here are
still experimental.
We suggest two approaches, both of which should be applied: first,
publish a fully open description of the TIDS, including what
assumptions were used and how it was derived, such that the
research community can evaluate the design decisions, test them and
comment on their applicability; and second, demonstrate that
applications running over an infinitesimally passing testbed do meet
the performance targets.
An infinitesimally passing testbed resembles an epsilon-delta proof
in calculus.  Construct a test network such that all of the
individual tests of the TIDS pass by only small (infinitesimal)
margins, and demonstrate that a variety of authentic applications
running over real TCP implementations (or other protocols as
appropriate) meet the Target Transport Performance over such a
network.  The workloads should include multiple types of streaming
media and transaction oriented short flows (e.g. synthetic web
traffic).
For example, for the HD streaming video TIDS described in Section 9,
the IP capacity should be exactly the header overhead above 2.5 Mb/s,
the per packet random background loss ratio should be 1/363 (for a
run length of 363 packets), the bottleneck queue should be 11 packets
and the front path should have just enough buffering to withstand 11
packet interface rate bursts.  We want every one of the TIDS tests to
fail if we slightly increase the relevant test parameter, so for
example sending a 12 packet burst should cause excess (possibly
deterministic) packet drops at the dominant queue at the bottleneck.
On this infinitesimally passing network it should be possible for a
real application using a stock TCP implementation in the vendor's
default configuration to attain 2.5 Mb/s over a 50 ms path.
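The testbed parameters implied by the preceding paragraph can be
written out explicitly.  This sketch assumes the Table 1 parameters
and that the 64 byte header_overhead is carried on every 1500 byte
packet; the actual emulator knobs (rate limit, delay, random loss,
queue limit) depend on whatever network emulation tool is used and
are not specified here.

   target_rate = 2.5e6                  # bits/s of delivered payload
   mtu, overhead = 1500, 64             # bytes
   rtt = 0.050                          # seconds

   ip_capacity = target_rate * mtu / (mtu - overhead)  # ~2.61 Mb/s on the wire
   loss_ratio = 1.0 / 363                               # ~0.28% random background loss
   bottleneck_queue = 11                                 # packets
   one_way_delay = rtt / 2                               # split the 50 ms RTT

   print(round(ip_capacity), loss_ratio, bottleneck_queue, one_way_delay)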
The most difficult part of setting up such a testbed is arranging for
it to infinitesimally pass the individual tests.  Two approaches are
suggested: constraining the network devices not to use all available
resources (e.g. by limiting available buffer space or data rate); and
preloading subpaths with cross traffic.  Note that it is important
that a single environment be constructed which infinitesimally passes
all tests at the same time, otherwise there is a chance that TCP can
exploit extra latitude in some parameters (such as data rate) to
partially compensate for constraints in other parameters (queue
space, or vice versa).
To the extent that a TIDS is used to inform public dialog it should
be fully publicly documented, including the details of the tests,
what assumptions were used and how it was derived.  All of the
details of the validation experiment should also be published with
sufficient detail for the experiments to be replicated by other
researchers.  All components should either be open source or fully
described proprietary implementations that are available to the
research community.
11.  Security Considerations

Measurement is often used to inform business and policy decisions,
and as a consequence is potentially subject to manipulation.  Model
Based Metrics are expected to be a huge step forward because
equivalent measurements can be performed from multiple vantage
points, such that performance claims can be independently validated
by multiple parties.
Much of the acrimony in the Net Neutrality debate is due to the
historical lack of any effective vantage independent tools to
characterize network performance.  Traditional methods for measuring
Bulk Transport Capacity are sensitive to RTT and as a consequence
often yield very different results when run local to an ISP or
interconnect and when run over a customer's complete path.  Neither
the ISP nor customer can repeat the other's measurements, leading to
high levels of distrust and acrimony.  Model Based Metrics are
expected to greatly improve this situation.
This document only describes a framework for designing Fully
Specified Targeted IP Diagnostic Suites.  Each FSTIDS MUST include
its own security section.
12.  Acknowledgements

Ganga Maguluri suggested the statistical test for measuring loss
probability in the target run length.  Alex Gilgur helped with the
statistics.

Meredith Whittaker improved the clarity of the communications.

Ruediger Geib provided feedback which greatly improved the document.
14.1.  Normative References
[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.
14.2.  Informative References
[RFC0863]  Postel, J., "Discard Protocol", STD 21, RFC 863, May 1983.

[RFC0864]  Postel, J., "Character Generator Protocol", STD 22,
           RFC 864, May 1983.
[RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
           "Framework for IP Performance Metrics", RFC 2330,
           May 1998.
[RFC2861]  Handley, M., Padhye, J., and S. Floyd, "TCP Congestion
           Window Validation", RFC 2861, June 2000.
[RFC3148]  Mathis, M. and M. Allman, "A Framework for Defining
           Empirical Bulk Transfer Capacity Metrics", RFC 3148,
           July 2001.
[RFC4898]  Mathis, M., Heffner, J., and R. Raghunarayan, "TCP
           Extended Statistics MIB", RFC 4898, May 2007.
[RFC5136]  Chimento, P. and J. Ishac, "Defining Network Capacity",
           RFC 5136, February 2008.
[RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
           Control", RFC 5681, September 2009.
[RFC5835] Morton, A. and S. Van den Berghe, "Framework for Metric
Composition", RFC 5835, April 2010.
[RFC6049]  Morton, A. and E. Stephan, "Spatial Composition of
           Metrics", RFC 6049, January 2011.
[RFC6673]  Morton, A., "Round-Trip Packet Loss Metrics", RFC 6673,
           August 2012.
[RFC6928]  Chu, J., Dukkipati, N., Cheng, Y., and M. Mathis,
           "Increasing TCP's Initial Window", RFC 6928,
           DOI 10.17487/RFC6928, April 2013,
           <http://www.rfc-editor.org/info/rfc6928>.
[RFC7312]  Fabini, J. and A. Morton, "Advanced Stream and Sampling
           Framework for IP Performance Metrics (IPPM)", RFC 7312,
           August 2014.
[RFC7398]  Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P., and
           A. Morton, "A Reference Path and Measurement Points for
           Large-Scale Measurement of Broadband Performance",
           RFC 7398, February 2015.
[RFC7567] Baker, F., Ed. and G. Fairhurst, Ed., "IETF
Recommendations Regarding Active Queue Management",
BCP 197, RFC 7567, DOI 10.17487/RFC7567, July 2015,
<http://www.rfc-editor.org/info/rfc7567>.
[I-D.ietf-ippm-2680-bis]
           Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton, "A
           One-Way Loss Metric for IPPM", draft-ietf-ippm-2680-bis-05
           (work in progress), August 2015.
[MSMO97]   Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The
           Macroscopic Behavior of the TCP Congestion Avoidance
           Algorithm", Computer Communications Review volume 27,
           number 3, July 1997.
[WPING]    Mathis, M., "Windowed Ping: An IP Level Performance
           Diagnostic", INET 94, June 1994.
[mpingSource]
[wikiBloat]
           Wikipedia, "Bufferbloat", http://en.wikipedia.org/w/
           index.php?title=Bufferbloat&oldid=608805474, March 2015.
[CCscaling]
           Fernando, F., Doyle, J., and S. Steven, "Scalable laws for
           stable network congestion control", Proceedings of
           Conference on Decision and Control,
           http://www.ee.ucla.edu/~paganini, December 2001.
[TSO_pacing]
Corbet, J., "TSO sizing and the FQ scheduler",
LWN.net https://lwn.net/Articles/564978/, Aug 2013.
[TSO_fq_pacing]
Dumazet, E. and Y. Chen, "TSO, fair queuing, pacing:
three's a charm", Proceedings of IETF 88, TCPM WG https://
www.ietf.org/proceedings/88/slides/slides-88-tcpm-9.pdf,
Nov 2013.
Appendix A.  Model Derivations

The reference target_run_length described in Section 5.2 is based on
very conservative assumptions: that all window above
target_window_size contributes to a standing queue that raises the
RTT, and that classic Reno congestion control with delayed ACKs is
in effect.  In this section we provide two alternative calculations
using different assumptions.
It may seem out of place to allow such latitude in a measurement
specification, but this section provides offsetting requirements.
The estimates provided by these models make the most sense if network
performance is viewed logarithmically.  In the operational Internet,
data rates span more than 8 orders of magnitude, RTT spans more than
3 orders of magnitude, and packet loss ratio spans at least 8 orders
of magnitude if not more.  When viewed logarithmically (as in
decibels), these correspond to 80 dB of dynamic range.  On an 80 dB
scale, a 3 dB error is less than 4% of the scale, even though it
represents a factor of 2 in the untransformed parameter.
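A minimal check of this arithmetic (illustration only):

   import math

   range_dB = 10 * math.log10(1e8)     # 8 orders of magnitude -> 80 dB
   factor2_dB = 10 * math.log10(2)     # a factor of 2 -> ~3 dB
   print(range_dB, factor2_dB, factor2_dB / range_dB)   # 80.0, ~3.0, ~3.8%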
This document gives a lot of latitude for calculating
target_run_length; however, people designing a TIDS should consider
the effect of their choices on the ongoing tussle about the relevance
of "TCP friendliness" as an appropriate model for Internet capacity
allocation.  Choosing a target_run_length that is substantially
smaller than the reference target_run_length specified in Section 5.2
strengthens the argument that it may be appropriate to abandon "TCP
friendliness" as the Internet fairness model.  This gives developers
incentive and permission to develop even more aggressive applications
and protocols, for example by increasing the number of connections
that they open concurrently.
A.1.  Queueless Reno

In Section 5.2 models were derived based on the assumption that the
subpath IP rate matches the target rate plus overhead, such that the
excess window needed for the AIMD sawtooth causes a fluctuating queue
at the bottleneck.

An alternate situation would be a bottleneck where there is no
significant queue and losses are caused by some mechanism that does
not involve extra delay, for example by the use of a virtual queue as
done in Approximate Fair Dropping [AFD].  A flow controlled by such a
bottleneck would have a constant RTT and a data rate that fluctuates
in a sawtooth due to AIMD congestion control.  Assume the losses are
being controlled to make the average data rate meet some goal which
is equal to or greater than the target_rate.  The necessary run
length to meet the target_rate can be computed as follows:

For some value of Wmin, the window will sweep from Wmin packets to
2*Wmin packets in 2*Wmin RTT (due to delayed ACK).  Unlike the
queueing case where Wmin = target_window_size, we want the average of
Wmin and 2*Wmin to be the target_window_size, so the average data
rate is the target rate.  Thus we want Wmin =
(2/3)*target_window_size.

Between losses each sawtooth delivers (1/2)(Wmin+2*Wmin)(2*Wmin)
packets in 2*Wmin round trip times.

Substituting these together we get:

target_run_length = (4/3)(target_window_size^2)

Note that this is 44% of the reference_run_length computed earlier.
This makes sense because under the assumptions in Section 5.2 the
AIMD sawtooth caused a queue at the bottleneck, which raised the
effective RTT by 50%.
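A minimal numeric check of this derivation against the Section 9
example, simply restating the algebra above (illustration only):

   W = 11                                    # target_window_size from Table 1
   reference_run_length = 3 * W ** 2         # Section 5.2 model: 363 packets
   queueless_run_length = (4 / 3) * W ** 2   # this appendix: ~161 packets

   Wmin = (2 / 3) * W
   # packets delivered per sawtooth: (1/2)*(Wmin + 2*Wmin)*(2*Wmin) = 3*Wmin^2
   per_sawtooth = 0.5 * (Wmin + 2 * Wmin) * (2 * Wmin)
   assert abs(per_sawtooth - queueless_run_length) < 1e-9

   print(queueless_run_length / reference_run_length)   # ~0.44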
Appendix B.  The Effects of ACK Scheduling

For many network technologies simple queueing models don't apply: the
network schedules, thins or otherwise alters the timing of ACKs and
data, generally to raise the efficiency of the channel allocation
algorithms when confronted with relatively widely spaced small ACKs.
These efficiency strategies are ubiquitous for half duplex, wireless
and broadcast media.
Altering the ACK stream by holding or thinning ACKs typically has two
consequences: it raises the implied bottleneck IP capacity, making
the fine grained slowstart bursts either faster or larger, and it
raises the effective RTT by the average time that the ACKs and data
are delayed.  The first effect can be partially mitigated by
re-clocking ACKs once they are beyond the bottleneck on the return
path to the sender, however this further raises the effective RTT.
The most extreme example of this sort of behavior would be a half
duplex channel that is not released as long as the endpoint currently
holding the channel has more traffic (data or ACKs) to send.  Such
environments cause self clocked protocols under full load to revert
to extremely inefficient stop and wait behavior.  The channel
constrains the protocol to send an entire window of data as a single
contiguous burst on the forward path, followed by the entire window
of ACKs on the return path.
If a particular return path contains a subpath or device that alters
the timing of the ACK stream, then the entire front path from the
sender up to the bottleneck must be tested at the burst parameters
implied by the ACK scheduling algorithm.  The most important
parameter is the Implied Bottleneck IP Capacity, which is the average
rate at which the ACKs advance snd.una.  Note that thinning the ACK
stream (relying on the cumulative nature of seg.ack to permit
discarding some ACKs) requires larger sender interface bursts to
offset the longer times between ACKs in order to maintain the average
data rate.

It is important to note that due to ubiquitous self clocking in
Internet protocols, ill conceived channel allocation mechanisms
increase the queueing stress on the front path because they cause
larger full sender rate data bursts.
Holding data or ACKs for channel allocation or other reasons (such as
forward error correction) always raises the effective RTT relative to
the minimum delay for the path.  Therefore it may be necessary to
replace target_RTT in the calculation in Section 5.2 by an
effective_RTT, which includes the target_RTT plus a term to account
for the extra delays introduced by these mechanisms.
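As an illustration of this substitution, the sketch below recomputes
the Section 9 parameters with a hypothetical 20 ms of additional
ACK-holding delay; the 20 ms figure is purely illustrative and not
derived from any particular technology.

   import math

   target_rate = 2.5e6                  # bits/s
   target_RTT = 0.050                   # seconds
   ack_hold_delay = 0.020               # hypothetical extra delay from ACK scheduling
   payload_bits = (1500 - 64) * 8

   effective_RTT = target_RTT + ack_hold_delay
   window = math.ceil(target_rate * effective_RTT / payload_bits)
   run_length = 3 * window ** 2         # reference model with effective_RTT substituted

   print(window, run_length)            # 16 packets, 768 packets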
Appendix C.  Version Control

This section to be removed prior to publication.

Formatted: Mon Oct 19 15:59:51 PDT 2015
Authors' Addresses

   Matt Mathis
   Google, Inc
   1600 Amphitheater Parkway
   Mountain View, California  94043
   USA

   Email: mattmathis@google.com