draft-ietf-ippm-model-based-metrics-05.txt | draft-ietf-ippm-model-based-metrics-06.txt | |||
---|---|---|---|---|
IP Performance Working Group M. Mathis | IP Performance Working Group M. Mathis | |||
Internet-Draft Google, Inc | Internet-Draft Google, Inc | |||
Intended status: Experimental A. Morton | Intended status: Experimental A. Morton | |||
Expires: December 15, 2015 AT&T Labs | Expires: January 7, 2016 AT&T Labs | |||
June 13, 2015 | July 6, 2015 | |||
Model Based Metrics for Bulk Transport Capacity | Model Based Metrics for Bulk Transport Capacity | |||
draft-ietf-ippm-model-based-metrics-05.txt | draft-ietf-ippm-model-based-metrics-06.txt | |||
Abstract | Abstract | |||
We introduce a new class of model based metrics designed to determine | We introduce a new class of Model Based Metrics designed to assess if | |||
if a complete Internet path can meet predefined bulk transport | a complete Internet path can be expected to meet a predefined Bulk | |||
performance targets by applying a suite of IP diagnostic tests to | Transport Performance target by applying a suite of IP diagnostic | |||
successive subpaths. The subpath-at-a-time tests can be robustly | tests to successive subpaths. The subpath-at-a-time tests can be | |||
applied to key infrastructure, such as interconnects or even | robustly applied to key infrastructure, such as interconnects or even | |||
individual devices, to accurately detect if any part of the | individual devices, to accurately detect if any part of the | |||
infrastructure will prevent any path traversing it from meeting the | infrastructure will prevent any path traversing it from meeting the | |||
specified target performance. | specified Target Transport Performance. | |||
The diagnostic tests consist of precomputed traffic patterns and | The IP diagnostic tests consist of precomputed traffic patterns and | |||
statistical criteria for evaluating packet delivery. The traffic | statistical criteria for evaluating packet delivery. The traffic | |||
patterns are precomputed to mimic TCP or other transport protocol | patterns are precomputed to mimic TCP or other transport protocol | |||
over a long path but are constructed in such a way that they are | over a long path but are constructed in such a way that they are | |||
independent of the actual details of the subpath under test, end | independent of the actual details of the subpath under test, end | |||
systems or applications. Likewise the success criteria depend on | systems or applications. Likewise the success criteria depend on | |||
the packet delivery statistics of the subpath, as evaluated against a | the packet delivery statistics of the subpath, as evaluated against a | |||
protocol model applied to the target performance. The success | protocol model applied to the Target Transport Performance. The | |||
criteria also do not depend on the details of the subpath, end | success criteria also do not depend on the details of the subpath, | |||
systems or application. This makes the measurements open loop, | end systems or application. This makes the measurements open loop, | |||
eliminating most of the difficulties encountered by traditional bulk | eliminating most of the difficulties encountered by traditional bulk | |||
transport metrics. | transport metrics. | |||
Model based metrics exhibit several important new properties not | Model based metrics exhibit several important new properties not | |||
present in other Bulk Capacity Metrics, including the ability to | present in other Bulk Capacity Metrics, including the ability to | |||
reason about concatenated or overlapping subpaths. The results are | reason about concatenated or overlapping subpaths. The results are | |||
vantage independent which is critical for supporting independent | vantage independent which is critical for supporting independent | |||
validation of tests results from multiple Measurement Points. | validation of tests results from multiple Measurement Points. | |||
This document does not define diagnostic tests directly, but provides | This document does not define IP diagnostic tests directly, but | |||
a framework for designing suites of IP diagnostics tests that are | provides a framework for designing suites of IP diagnostics tests | |||
tailored to confirming that infrastructure can meet a predetermined | that are tailored to confirming that infrastructure can meet a | |||
target performance. | predetermined Target Transport Performance. | |||
Status of this Memo | Status of this Memo | |||
This Internet-Draft is submitted in full conformance with the | This Internet-Draft is submitted in full conformance with the | |||
provisions of BCP 78 and BCP 79. | provisions of BCP 78 and BCP 79. | |||
Internet-Drafts are working documents of the Internet Engineering | Internet-Drafts are working documents of the Internet Engineering | |||
Task Force (IETF). Note that other groups may also distribute | Task Force (IETF). Note that other groups may also distribute | |||
working documents as Internet-Drafts. The list of current Internet- | working documents as Internet-Drafts. The list of current Internet- | |||
Drafts is at http://datatracker.ietf.org/drafts/current/. | Drafts is at http://datatracker.ietf.org/drafts/current/. | |||
Internet-Drafts are draft documents valid for a maximum of six months | Internet-Drafts are draft documents valid for a maximum of six months | |||
and may be updated, replaced, or obsoleted by other documents at any | and may be updated, replaced, or obsoleted by other documents at any | |||
time. It is inappropriate to use Internet-Drafts as reference | time. It is inappropriate to use Internet-Drafts as reference | |||
material or to cite them other than as "work in progress." | material or to cite them other than as "work in progress." | |||
This Internet-Draft will expire on December 15, 2015. | This Internet-Draft will expire on January 7, 2016. | |||
Copyright Notice | Copyright Notice | |||
Copyright (c) 2015 IETF Trust and the persons identified as the | Copyright (c) 2015 IETF Trust and the persons identified as the | |||
document authors. All rights reserved. | document authors. All rights reserved. | |||
This document is subject to BCP 78 and the IETF Trust's Legal | This document is subject to BCP 78 and the IETF Trust's Legal | |||
Provisions Relating to IETF Documents | Provisions Relating to IETF Documents | |||
(http://trustee.ietf.org/license-info) in effect on the date of | (http://trustee.ietf.org/license-info) in effect on the date of | |||
publication of this document. Please review these documents | publication of this document. Please review these documents | |||
carefully, as they describe your rights and restrictions with respect | carefully, as they describe your rights and restrictions with respect | |||
to this document. Code Components extracted from this document must | to this document. Code Components extracted from this document must | |||
include Simplified BSD License text as described in Section 4.e of | include Simplified BSD License text as described in Section 4.e of | |||
the Trust Legal Provisions and are provided without warranty as | the Trust Legal Provisions and are provided without warranty as | |||
described in the Simplified BSD License. | described in the Simplified BSD License. | |||
Table of Contents | Table of Contents | |||
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 | 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 | |||
1.1. Version Control . . . . . . . . . . . . . . . . . . . . . 6 | 1.1. Version Control . . . . . . . . . . . . . . . . . . . . . 6 | |||
2. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 | 2. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 | |||
3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 10 | 3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 9 | |||
4. New requirements relative to RFC 2330 . . . . . . . . . . . . 14 | 4. Background . . . . . . . . . . . . . . . . . . . . . . . . . . 14 | |||
5. Background . . . . . . . . . . . . . . . . . . . . . . . . . . 15 | 4.1. TCP properties . . . . . . . . . . . . . . . . . . . . . . 16 | |||
5.1. TCP properties . . . . . . . . . . . . . . . . . . . . . . 16 | 4.2. Diagnostic Approach . . . . . . . . . . . . . . . . . . . 17 | |||
5.2. Diagnostic Approach . . . . . . . . . . . . . . . . . . . 17 | 4.3. New requirements relative to RFC 2330 . . . . . . . . . . 18 | |||
6. Common Models and Parameters . . . . . . . . . . . . . . . . . 19 | 5. Common Models and Parameters . . . . . . . . . . . . . . . . . 18 | |||
6.1. Target End-to-end parameters . . . . . . . . . . . . . . . 19 | 5.1. Target End-to-end parameters . . . . . . . . . . . . . . . 18 | |||
6.2. Common Model Calculations . . . . . . . . . . . . . . . . 19 | 5.2. Common Model Calculations . . . . . . . . . . . . . . . . 19 | |||
6.3. Parameter Derating . . . . . . . . . . . . . . . . . . . . 20 | 5.3. Parameter Derating . . . . . . . . . . . . . . . . . . . . 20 | |||
7. Traffic generating techniques . . . . . . . . . . . . . . . . 21 | 5.4. Test Preconditions . . . . . . . . . . . . . . . . . . . . 21 | |||
7.1. Paced transmission . . . . . . . . . . . . . . . . . . . . 21 | 6. Traffic generating techniques . . . . . . . . . . . . . . . . 21 | |||
7.2. Constant window pseudo CBR . . . . . . . . . . . . . . . . 22 | 6.1. Paced transmission . . . . . . . . . . . . . . . . . . . . 21 | |||
7.3. Scanned window pseudo CBR . . . . . . . . . . . . . . . . 23 | 6.2. Constant window pseudo CBR . . . . . . . . . . . . . . . . 23 | |||
7.4. Concurrent or channelized testing . . . . . . . . . . . . 23 | 6.3. Scanned window pseudo CBR . . . . . . . . . . . . . . . . 24 | |||
8. Interpreting the Results . . . . . . . . . . . . . . . . . . . 24 | 6.4. Concurrent or channelized testing . . . . . . . . . . . . 24 | |||
8.1. Test outcomes . . . . . . . . . . . . . . . . . . . . . . 24 | 7. Interpreting the Results . . . . . . . . . . . . . . . . . . . 25 | |||
8.2. Statistical criteria for estimating run_length . . . . . . 26 | 7.1. Test outcomes . . . . . . . . . . . . . . . . . . . . . . 25 | |||
8.3. Reordering Tolerance . . . . . . . . . . . . . . . . . . . 27 | 7.2. Statistical criteria for estimating run_length . . . . . . 27 | |||
9. Test Preconditions . . . . . . . . . . . . . . . . . . . . . . 28 | 7.3. Reordering Tolerance . . . . . . . . . . . . . . . . . . . 29 | |||
10. Diagnostic Tests . . . . . . . . . . . . . . . . . . . . . . . 29 | 8. Diagnostic Tests . . . . . . . . . . . . . . . . . . . . . . . 29 | |||
10.1. Basic Data Rate and Delivery Statistics Tests . . . . . . 29 | 8.1. Basic Data Rate and Packet Delivery Tests . . . . . . . . 30 | |||
10.1.1. Delivery Statistics at Paced Full Data Rate . . . . . 30 | 8.1.1. Delivery Statistics at Paced Full Data Rate . . . . . 30 | |||
10.1.2. Delivery Statistics at Full Data Windowed Rate . . . 30 | 8.1.2. Delivery Statistics at Full Data Windowed Rate . . . . 31 | |||
10.1.3. Background Delivery Statistics Tests . . . . . . . . 30 | 8.1.3. Background Packet Delivery Statistics Tests . . . . . 31 | |||
10.2. Standing Queue Tests . . . . . . . . . . . . . . . . . . . 31 | 8.2. Standing Queue Tests . . . . . . . . . . . . . . . . . . . 31 | |||
10.2.1. Congestion Avoidance . . . . . . . . . . . . . . . . 32 | 8.2.1. Congestion Avoidance . . . . . . . . . . . . . . . . . 33 | |||
10.2.2. Bufferbloat . . . . . . . . . . . . . . . . . . . . . 32 | 8.2.2. Bufferbloat . . . . . . . . . . . . . . . . . . . . . 33 | |||
10.2.3. Non excessive loss . . . . . . . . . . . . . . . . . 33 | 8.2.3. Non excessive loss . . . . . . . . . . . . . . . . . . 33 | |||
10.2.4. Duplex Self Interference . . . . . . . . . . . . . . 33 | 8.2.4. Duplex Self Interference . . . . . . . . . . . . . . . 34 | |||
10.3. Slowstart tests . . . . . . . . . . . . . . . . . . . . . 34 | 8.3. Slowstart tests . . . . . . . . . . . . . . . . . . . . . 34 | |||
10.3.1. Full Window slowstart test . . . . . . . . . . . . . 34 | 8.3.1. Full Window slowstart test . . . . . . . . . . . . . . 35 | |||
10.3.2. Slowstart AQM test . . . . . . . . . . . . . . . . . 34 | 8.3.2. Slowstart AQM test . . . . . . . . . . . . . . . . . . 35 | |||
10.4. Sender Rate Burst tests . . . . . . . . . . . . . . . . . 35 | 8.4. Sender Rate Burst tests . . . . . . . . . . . . . . . . . 35 | |||
10.5. Combined and Implicit Tests . . . . . . . . . . . . . . . 35 | 8.5. Combined and Implicit Tests . . . . . . . . . . . . . . . 36 | |||
10.5.1. Sustained Bursts Test . . . . . . . . . . . . . . . . 36 | 8.5.1. Sustained Bursts Test . . . . . . . . . . . . . . . . 36 | |||
10.5.2. Streaming Media . . . . . . . . . . . . . . . . . . . 37 | 8.5.2. Streaming Media . . . . . . . . . . . . . . . . . . . 37 | |||
11. An Example . . . . . . . . . . . . . . . . . . . . . . . . . . 37 | 9. An Example . . . . . . . . . . . . . . . . . . . . . . . . . . 38 | |||
12. Validation . . . . . . . . . . . . . . . . . . . . . . . . . . 39 | 10. Validation . . . . . . . . . . . . . . . . . . . . . . . . . . 40 | |||
13. Security Considerations . . . . . . . . . . . . . . . . . . . 40 | 11. Security Considerations . . . . . . . . . . . . . . . . . . . 41 | |||
14. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 41 | 12. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 41 | |||
15. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 41 | 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 42 | |||
16. References . . . . . . . . . . . . . . . . . . . . . . . . . . 41 | 14. References . . . . . . . . . . . . . . . . . . . . . . . . . . 42 | |||
16.1. Normative References . . . . . . . . . . . . . . . . . . . 41 | 14.1. Normative References . . . . . . . . . . . . . . . . . . . 42 | |||
16.2. Informative References . . . . . . . . . . . . . . . . . . 41 | 14.2. Informative References . . . . . . . . . . . . . . . . . . 42 | |||
Appendix A. Model Derivations . . . . . . . . . . . . . . . . . . 44 | Appendix A. Model Derivations . . . . . . . . . . . . . . . . . . 44 | |||
A.1. Queueless Reno . . . . . . . . . . . . . . . . . . . . . . 44 | A.1. Queueless Reno . . . . . . . . . . . . . . . . . . . . . . 45 | |||
Appendix B. Complex Queueing . . . . . . . . . . . . . . . . . . 45 | Appendix B. Complex Queueing . . . . . . . . . . . . . . . . . . 46 | |||
Appendix C. Version Control . . . . . . . . . . . . . . . . . . . 46 | Appendix C. Version Control . . . . . . . . . . . . . . . . . . . 47 | |||
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 46 | Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 47 | |||
1. Introduction | 1. Introduction | |||
Model Based Metrics (MBM) rely on mathematical models to specify a | Model Based Metrics (MBM) rely on mathematical models to specify a | |||
targeted diagnostic suite of IP diagnostic tests, designed to verify | targeted suite of IP diagnostic tests, designed to assess whether | |||
that common transport protocols can meet a predetermined performance | common transport protocols can be expected to meet a predetermined | |||
target over an Internet path. Each diagnostic in the suite measures | performance target over an Internet path. Each test in the Targeted | |||
some aspect of IP delivery that is required to meet the performance | Diagnostic Suite (TDS) measures some aspect of IP packet transfer | |||
target. For example a TDS may have separate diagnostic tests to | that is required to meet the Target Transport Performance. For | |||
verify that there is sufficient data rate and sufficient queueing | example a TDS may have separate diagnostic tests to verify that there | |||
buffer space to deliver typical transport bursts, and that the | is: sufficient IP capacity (rate); sufficient queue space to deliver | |||
background packet loss is small enough not to interfere with | typical transport bursts; and that the background packet loss ratio | |||
congestion control. Unlike other metrics which yield measures of | is small enough not to interfere with congestion control. Unlike | |||
network properties, Model Based Metrics nominally yield pass/fail | other metrics which yield measures of network properties, Model Based | |||
evaluations of the ability of transport protocols to meet a | Metrics nominally yield pass/fail evaluations of the ability of | |||
performance objective as need by a user application over a particular | standard transport protocols to meet a specific performance objective | |||
network path. | over some network path. | |||
This note describes the modeling framework to derive the IP | This note describes the modeling framework to derive the IP | |||
diagnostic test parameters from the target performance specified for | diagnostic test parameters from the Target Transport Performance | |||
TCP bulk transport capacity. In the future, other Model Based | specified for TCP Bulk Transport Capacity. Model Based Metrics is an | |||
Metrics may cover other applications and transports, such as VoIP | alternative to the approach described in [RFC3148]. In the future, | |||
over RTP. In most cases the IP diagnostic tests can be implemented | other Model Based Metrics may cover other applications and | |||
by combining existing IPPM metrics with additional controls for | transports, such as VoIP over RTP. In most cases the IP diagnostic | |||
precomputed traffic patterns and statistical criteria for evaluating | tests can be implemented by combining existing IPPM metrics with | |||
packet delivery. | additional controls for generating precomputed traffic patterns and | |||
statistical criteria for evaluating packet delivery. | ||||
This approach, mapping transport performance targets to a targeted | This approach, mapping Target Transport Performance to a targeted | |||
diagnostic suite (TDS) of IP diagnostic tests, solves an intrinsic | diagnostic suite (TDS) of IP tests, solves some intrinsic problems | |||
problem with using TCP or other throughput maximizing protocols for | with using TCP or other throughput maximizing protocols for | |||
measurement. In particular all throughput maximizing protocols (and | measurement. In particular all throughput maximizing protocols (and | |||
TCP congestion control in particular) cause some level of congestion | TCP congestion control in particular) cause some level of congestion | |||
in order to fill the network. This self inflicted congestion | in order to detect when they have filled the network. This self | |||
obscures the network properties of interest and introduces non-linear | inflicted congestion obscures the network properties of interest and | |||
equilibrium behaviors that make any resulting measurements useless as | introduces non-linear equilibrium behaviors that make any resulting | |||
metrics because they have no predictive value for conditions or paths | measurements useless as metrics because they have no predictive value | |||
different than the measurement itself. This problem is discussed in | for conditions or paths other than that of the measurement itself. | |||
Section 5. | These problems are discussed at length in Section 4. | |||
A targeted suite of IP diagnostic tests do not have such | A targeted suite of IP diagnostic tests does not have such | |||
difficulties. They can be constructed to make strong statistical | difficulties. They can be constructed such that they make strong | |||
statements about path properties that are independent of the | statistical statements about path properties that are independent of | |||
measurement details, such as vantage and choice of measurement | the measurement details, such as vantage and choice of measurement | |||
points. Model Based Metrics bridge the gap between empirical IP | points. Model Based Metrics bridge the gap between empirical IP | |||
measurements and expected TCP performance. | measurements and expected TCP performance. | |||
1.1. Version Control | 1.1. Version Control | |||
RFC Editor: Please remove this entire subsection prior to | RFC Editor: Please remove this entire subsection prior to | |||
publication. | publication. | |||
Please send comments about this draft to ippm@ietf.org. See | Please send comments about this draft to ippm@ietf.org. See | |||
http://goo.gl/02tkD for more information including: interim drafts, | http://goo.gl/02tkD for more information including: interim drafts, | |||
an up to date todo list and information on contributing. | an up to date todo list and information on contributing. | |||
Changes since -05 draft: | ||||
o Wordsmithing on sections overhauled in -05 draft. | ||||
o Reorganized the document: | ||||
* Relocated subsection "Preconditions". | ||||
* Relocated subsection "New Requirements relative to RFC 2330". | ||||
o Addressed nits and not so nits by Ruediger Geib. (Thanks!) | ||||
o Substantially tightened the entire definitions section. | ||||
o Many terminology changes, to better conform to other docs : | ||||
* IP rate and IP capacity (following RFC 5136) replaces various | ||||
forms of link data rate. | ||||
* subpath replaces link. | ||||
* target_window_size replaces target_pipe_size. | ||||
* Implied Bottleneck IP Rate replaces effective bottleneck link | ||||
rate. | ||||
* Packet delivery statistics replaces delivery statistics. | ||||
Changes since -04 draft: | Changes since -04 draft: | |||
o The introduction was heavily overhauled: split into a separate | o The introduction was heavily overhauled: split into a separate | |||
introduction and overview. | introduction and overview. | |||
o The new shorter introduction: | o The new shorter introduction: | |||
* Is a problem statement; | * Is a problem statement; | |||
* This document provides a framework; | * This document provides a framework; | |||
* That it replaces TCP measurement by IP tests; | * That it replaces TCP measurement by IP tests; | |||
* That the results are pass/fail. | * That the results are pass/fail. | |||
o Added a diagram of the framework to the overview | o Added a diagram of the framework to the overview | |||
o and introduces all of the elements of the framework. | o and introduces all of the elements of the framework. | |||
o Renumbered sections, reducing the depth of some section numbers. | o Renumbered sections, reducing the depth of some section numbers. | |||
o Updated definitions to better agree with other documents: | o Updated definitions to better agree with other documents: | |||
* Reordered section 2 | * Reordered section 2 | |||
* Bulk [data] performance -> Bulk Transport Capacity, everywhere | * Bulk [data] performance -> Bulk Transport Capacity, everywhere | |||
including the title. | including the title. | |||
* loss rate and loss probability -> loss ratio | * loss rate and loss probability -> packet loss ratio | |||
* end-to-end path -> complete path | * end-to-end path -> complete path | |||
* [end-to-end][target] performance -> target transport | * [end-to-end][target] performance -> Target Transport | |||
performance | Performance | |||
* load test -> capacity test | ||||
This interim draft is a partial update since the WGLC, to collect an | * load test -> capacity test | |||
additional round of feedback on the Introduction, overview, and | ||||
terminology sections. Note that some of the prior WGLC comments are | ||||
still pending. Later sections (4 and beyond) have only been updated | ||||
to track changes in the terminology section. We intend to produce an | ||||
additional draft prior to the IETF, incorporating still pending | ||||
comments from the WGLC and any additional comments on the | ||||
introduction and overview. | ||||
2. Overview | 2. Overview | |||
This document describes a modeling framework for deriving Target | This document describes a modeling framework for deriving a Targeted | |||
Diagnostic Suites to determine if an IP path can be expected to meet | Diagnostic Suite from a predetermined Target Transport Performance. | |||
a predetermined target performance. It relies on other standards | It is not a complete specification, and relies on other standards | |||
documents to define Important details such as packet type-p | documents to define important details such as packet type-p | |||
selection, sampling techniques, vantage selection, etc. which are not | selection, sampling techniques, vantage selection, etc. We imagine | |||
specified here. We imagine Fully Specified Targeted Diagnostic | Fully Specified Targeted Diagnostic Suites (FSTDS), that define all | |||
Suites (FSTDS), that define all of these details. We use TDS to | of these details. We use Targeted Diagnostic Suite (TDS) to refer to | |||
refer to the subset of such a specification that is in scope for this | the subset of such a specification that is in scope for this | |||
document. | document. This terminology is defined in Section 3. | |||
Figure 1 shows the MBM modeling and measurement framework. (See | Section 4 describes some key aspects of TCP behavior and what it | |||
Section 3 for terminology used throughout this document). The target | implies about the requirements for IP packet delivery. Most of the | |||
transport performance is determined by the needs of the user or | IP diagnostic tests needed to confirm that the path meets these | |||
application, outside the scope of this document. For bulk transport | properties can be built on existing IPPM metrics, with the addition | |||
capacity, the performance parameter of interest is the target data | of statistical criteria for evaluating packet delivery and in a few | |||
rate. However, since TCP's ability to compensate for less than ideal | cases, new mechanisms to implement precomputed traffic patterns. | |||
network conditions is fundamentally affected by the Round Trip Time | (One group of tests, the standing queue tests described in | |||
(RTT) and the Maximum Transmission Unit (MTU) of the complete path, | Section 8.2, don't correspond to existing IPPM metrics, but suitable | |||
these parameters must also be specified in advance using knowledge | metrics can be patterned after existing tools.) | |||
about the intended application setting. Section 6 describes the | ||||
common parameters and models used to derive a targeted diagnostic | ||||
suite. | ||||
The target transport performance may reflect a specific application | Figure 1 shows the MBM modeling and measurement framework. The | |||
over real path through the Internet or an idealized application and | Target Transport Performance, at the top of the figure, is determined | |||
path representing a typical user community. | by the needs of the user or application, outside the scope of this | |||
document. For Bulk Transport Capacity, the main performance | ||||
parameter of interest is the target data rate. However, since TCP's | ||||
ability to compensate for less than ideal network conditions is | ||||
fundamentally affected by the Round Trip Time (RTT) and the Maximum | ||||
Transmission Unit (MTU) of the complete path, these parameters must | ||||
also be specified in advance based on knowledge about the intended | ||||
application setting. They may reflect a specific application over | ||||
a real path through the Internet or an idealized application and | |||
hypothetical path representing a typical user community. Section 5 | ||||
describes the common parameters and models derived from the Target | ||||
Transport Performance. | ||||
target transport performance | Target Transport Performance | |||
(target data rate, target RTT and target MTU) | (target data rate, target RTT and target MTU) | |||
| | | | |||
________V_________ | ________V_________ | |||
| mathematical | | | mathematical | | |||
| models | | | models | | |||
| | | | | | |||
------------------ | ------------------ | |||
Traffic parameters | | Statistical criteria | Traffic parameters | | Statistical criteria | |||
| | | | | | |||
_______V____________V____Targeted_______ | _______V____________V____Targeted_______ | |||
skipping to change at page 8, line 38 | skipping to change at page 8, line 38 | |||
| | subpath under test | |- | | | subpath under test | |- | |||
----V----------------------------------V--- | | ----V----------------------------------V--- | | |||
| | | | | | | | | | | | | | |||
V V V V V V | V V V V V V | |||
fail/inconclusive pass/fail/inconclusive | fail/inconclusive pass/fail/inconclusive | |||
Overall Modeling Framework | Overall Modeling Framework | |||
Figure 1 | Figure 1 | |||
Section 5 describes some key aspects of TCP behavior and what they | The mathematical models are used to design traffic patterns that | |||
imply about the requirements for IP packet delivery. Most of the IP | mimic TCP or other bulk transport protocol operating at the target | |||
diagnostic tests needed to confirm that the path meets these | data rate, MTU and RTT over a full range of conditions, including | |||
properties can be built on existing IPPM metrics, with the addition | flows that are bursty at multiple time scales. The traffic patterns | |||
of statistical criteria for evaluating packet delivery and in some | are generated based on the three target parameters of complete path | |||
cases new mechanisms to implement precomputed traffic patterns. One | and independent of the properties of individual subpaths using the | |||
group of tests, the standing queue tests described in section | techniques described in Section 6. As much as possible the | |||
Section 10.2, don't correspond to existing IPPM metrics, but suitable | measurement traffic is generated deterministically (precomputed) to | |||
metrics can be patterned after existing tools. | minimize the extent to which test methodology, measurement points, | |||
measurement vantage or path partitioning affect the details of the | ||||
Mathematical models are used to design traffic patterns that mimic | measurement traffic. | |||
TCP or other bulk transport protocol operating at the target data | ||||
rate, MTU and RTT over a full range of conditions, including flows | ||||
that are bursty at multiple time scales. The traffic patterns are | ||||
generated based on the three target parameters of complete path and | ||||
independent of the properties of individual subpaths as described in | ||||
Section 7. As much as possible the measurement traffic is generated | ||||
deterministically to that minimize the extent to which test | ||||
methodology, measurement points, measurement vantage or path | ||||
partitioning affect the details of the measurement traffic. | ||||
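As a concrete illustration of what the models compute, the following
minimal Python sketch derives the two central model parameters from the
three target parameters. It is a sketch only, assuming the reference
calculations used elsewhere in this note (target_window_size as the
bandwidth-delay product expressed in packets, and the conservative
Reno-derived bound target_run_length = 3*(target_window_size^2)); the
header size and the numeric example values are hypothetical, not
recommendations.

   # Minimal sketch: derive model parameters from a Target Transport
   # Performance.  Formulas follow the reference calculations in the
   # modeling section of this note; the assumed header size and the
   # example numbers below are hypothetical.
   import math

   def derive_parameters(target_rate_bps, target_rtt_s, target_mtu_b,
                         header_overhead_b=52):
       mss = target_mtu_b - header_overhead_b     # payload per packet
       # Average packets in flight needed to sustain the target rate.
       target_window_size = math.ceil(
           target_rate_bps * target_rtt_s / (8.0 * mss))
       # Conservative reference: packets between losses or ECN marks.
       target_run_length = 3 * target_window_size ** 2
       return target_window_size, target_run_length

   # Hypothetical example: 10 Mb/s over a 100 ms RTT, 1500 Byte MTU path.
   w, r = derive_parameters(10e6, 0.1, 1500)
   print(w, r)      # 87 packets, 22707 packets between losses or marks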
Section 8 describes packet delivery statistics and methods to test them | Section 7 describes packet delivery statistics and methods to test them | |||
against the bounds provided by the mathematical models. Since these | against the bounds provided by the mathematical models. Since these | |||
statistics are typically aggregated from all subpaths of the complete | statistics are typically the composition of subpaths of the complete | |||
path, in situ testing requires that the end-to-end statistical bounds | path [RFC6049], in situ testing requires that the end-to-end | |||
be apportioned as a separate bound for each subpath. Links that are | statistical bounds be apportioned as separate bounds for each | |||
expected to be bottlenecks are expected to contribute a larger | subpath. Subpaths that are expected to be bottlenecks may be | |||
fraction of the total packet loss. In compensation, other links have | expected to contribute a larger fraction of the total packet loss. | |||
to be constrained to contribute less packet loss. The criteria for | In compensation, non-bottlenecked subpaths have to be constrained to | |||
passing each test of a TDS is an apportioned share of the total bound | contribute less packet loss. The criteria for passing each test of a | |||
determined by the mathematical model from the target transport | TDS is an apportioned share of the total bound determined by the | |||
performance . | mathematical model from the Target Transport Performance. | |||
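For example, apportioning can be as simple as assigning each subpath a
fraction of the end-to-end packet loss budget, which proportionally
lengthens the run length each subpath must sustain on its own. The
Python sketch below is a hypothetical illustration of that bookkeeping
(the subpath names, fractions and the target_run_length value, carried
over from the hypothetical example above, are invented), not a
normative procedure.

   # Hypothetical illustration of apportioning an end-to-end loss
   # budget.  A subpath budgeted a fraction f of the allowed loss must
   # deliver a proportionally longer run length: target_run_length / f.
   def apportion_run_length(target_run_length, loss_fractions):
       assert abs(sum(loss_fractions.values()) - 1.0) < 1e-9
       return {name: target_run_length / frac
               for name, frac in loss_fractions.items()}

   # Example: the expected bottleneck subpath gets 80% of the loss
   # budget; two other subpaths get 10% each (illustrative only).
   budgets = apportion_run_length(22707, {"access": 0.8,
                                          "interconnect": 0.1,
                                          "backbone": 0.1})
   # access must deliver roughly 28384 packets between losses;
   # interconnect and backbone roughly 227070 each.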
Section 10 describes the suite of individual tests needed to verify | Section 8 describes the suite of individual tests needed to verify | |||
all of the required IP delivery properties. A subpath passes if and only | all of the required IP delivery properties. A subpath passes if and only | |||
if all of the individual IP diagnostic tests pass. Any subpath that | if all of the individual IP diagnostic tests pass. Any subpath that | |||
fails any test indicates that some users are likely to fail to attain | fails any test indicates that some users are likely to fail to attain | |||
their target transport performance under some conditions. In | their Target Transport Performance under some conditions. In | |||
addition to passing or failing, a test can be deemed to be | addition to passing or failing, a test can be deemed to be | |||
inconclusive for a number of reasons including: the precomputed | inconclusive for a number of reasons including: the precomputed | |||
traffic pattern was not accurately generated; the measurement results | traffic pattern was not accurately generated; the measurement results | |||
were not statistically significant; and others such as failing to | were not statistically significant; and others such as failing to | |||
meet some required test preconditions. If all tests pass, except some | meet some required test preconditions. If all tests pass, except | |||
are inconclusive, then the entire suite is deemed to be inconclusive. | some are inconclusive, then the entire suite is deemed to be | |||
inconclusive. | ||||
Since there is some uncertainty in this process, Section 12, | ||||
describes a validation procedure to diagnose and minimize false | ||||
positive and false negative results. | ||||
In Section 11 we present an example TDS that might be representative | In Section 9 we present an example TDS that might be representative | |||
of HD video, and illustrate how Model Based Metrics can be used to | of HD video, and illustrate how Model Based Metrics can be used to | |||
address difficult measurement situations, such as confirming that | address difficult measurement situations, such as confirming that | |||
intercarrier exchanges have sufficient performance and capacity to | intercarrier exchanges have sufficient performance and capacity to | |||
deliver HD video between ISPs. | deliver HD video between ISPs. | |||
A TDS includes the target parameters, documentation of the models and | Since there is some uncertainty in the modeling process, Section 10 | |||
assumptions used to derive the IP diagnostic test parameters, | describes a validation procedure to diagnose and minimize false | |||
specifications for the traffic and delivery statistics for the tests | positive and false negative results. | |||
themselves, and a description of a test setup that can be used to | ||||
validate the tests and models. | ||||
3. Terminology | 3. Terminology | |||
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", | The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", | |||
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this | "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this | |||
document are to be interpreted as described in [RFC2119]. | document are to be interpreted as described in [RFC2119]. | |||
Note that terms containing underscores (rather than spaces) appear in | ||||
equations in the modeling sections. In some cases both forms are | ||||
used for aesthetic reasons, they do not have different meanings. | ||||
General Terminology: | General Terminology: | |||
Target: A general term for any parameter specified by or derived | Target: A general term for any parameter specified by or derived | |||
from the user's application or transport performance requirements. | from the user's application or transport performance requirements. | |||
Complete Path: From RFC 5835 | Target Transport Performance: Application or transport performance | |||
target transport performance: Application or transport performance | goals for the complete path. For Bulk Transport Capacity defined | |||
goals for the complete path. For bulk transport capacity defined | in this note the Target Transport Performance includes the target | |||
in this note the target transport performance includes the target | ||||
data rate, target RTT and target MTU as described below. | data rate, target RTT and target MTU as described below. | |||
Target Data Rate: The specified application data rate required for | Target Data Rate: The specified application data rate required for | |||
an application's proper operation. This is typically the | an application's proper operation. Conventional BTC metrics are | |||
performance goal as needed by the ultimate user. | focused on the target data rate; however, these metrics had little | |||
Target RTT (Round Trip Time): The baseline (minimum) RTT of the | or no predictive value because they do not consider the effects of | |||
longest complete path over which the application expects to be | the other two parameters of the Target Transport Performance, the | |||
able meet the target performance. TCP and other transport | RTT and MTU of the complete paths. | |||
Target RTT (Round Trip Time): The specified baseline (minimum) RTT | ||||
of the longest complete path over which the application expects to | ||||
be able to meet the target performance. TCP and other transport | |||
protocol's ability to compensate for path problems is generally | protocol's ability to compensate for path problems is generally | |||
proportional to the number of round trips per second. The Target | proportional to the number of round trips per second. The Target | |||
RTT determines both key parameters of the traffic patterns (e.g. | RTT determines both key parameters of the traffic patterns (e.g. | |||
burst sizes) and the thresholds on acceptable traffic statistics. | burst sizes) and the thresholds on acceptable traffic statistics. | |||
The Target RTT must be specified considering authentic packet | The Target RTT must be specified considering appropriate packet | |||
sizes: MTU sized packets on the forward path, ACK sized packets | sizes: MTU sized packets on the forward path, ACK sized packets | |||
(typically header_overhead) on the return path. | (typically header_overhead) on the return path. Note that target | |||
Target MTU (Maximum Transmission Unit): The maximum MTU supported by | RTT is specified and not measured; it determines the applicability of | |||
the complete path the over which the application expects to meet | MBM evaluations for paths that are different than the measured | |||
the target performance. Assume 1500 Byte MTU unless otherwise | path. | |||
specified. If some subpath forces a smaller MTU, then it becomes | Target MTU (Maximum Transmission Unit): The specified maximum MTU | |||
the target MTU, and all model calculations and subpath tests must | supported by the complete path over which the application | |||
use the same smaller MTU. | expects to meet the target performance. Assume 1500 Byte MTU | |||
Targeted Diagnostic Suite (TDS): A set of IP Diagnostics designed to | unless otherwise specified. If some subpath forces a smaller MTU, | |||
determine if an otherwise ideal complete path containing the | then it becomes the target MTU for the complete path, and all | |||
subpath under test can sustain flows at a specific | model calculations and subpath tests must use the same smaller | |||
MTU. | ||||
Targeted Diagnostic Suite (TDS): A set of IP diagnostic tests | ||||
designed to determine if an otherwise ideal complete path | ||||
containing the subpath under test can sustain flows at a specific | ||||
target_data_rate using target_MTU sized packets when the RTT of | target_data_rate using target_MTU sized packets when the RTT of | |||
the complete path is target_RTT. | the complete path is target_RTT. | |||
Fully Specified Targeted Diagnostic Suite: A TDS together with | Fully Specified Targeted Diagnostic Suite: A TDS together with | |||
additional specification such as "type-p", etc which are out of | additional specification such as "type-p", etc which are out of | |||
scope for this document, but need to be drawn from other standards | scope for this document, but need to be drawn from other standards | |||
documents. | documents. | |||
loss ratio: See "Packet Loss Ratio in [RFC2680bis] | ||||
apportioned: To divide and allocate, for example budgeting packet | ||||
loss ratio across multiple subpaths such that they will accumulate | ||||
to less than a specified end-to-end loss ratio. | ||||
open loop: A control theory term used to describe a class of | ||||
techniques where systems that naturally exhibit circular | ||||
dependencies can be analyzed by suppressing some of the | ||||
dependences, such that the resulting dependency graph is acyclic. | ||||
Bulk Transport Capacity: Bulk Transport Capacity Metrics evaluate an | Bulk Transport Capacity: Bulk Transport Capacity Metrics evaluate an | |||
Internet path's ability to carry bulk data, such as large files, | Internet path's ability to carry bulk data, such as large files, | |||
streaming (non-real time) video, and under some conditions, web | streaming (non-real time) video, and under some conditions, web | |||
images and other content. Prior efforts to define BTC metrics | images and other content. Prior efforts to define BTC metrics | |||
have been based on [RFC3148], which never succeeded due to some | have been based on [RFC3148], which predates our understanding of | |||
overlooked requirements described in Section 4 and problems | TCP and the requirements described in Section 4. | |||
described in The metrics presented in this document reflect an | ||||
entirely different approach to the problem outlined in [RFC3148]. | IP diagnostic tests: Measurements or diagnostic tests to determine | |||
if packet delivery statistics meet some precomputed target. | ||||
traffic patterns: The temporal patterns or statistics of traffic | traffic patterns: The temporal patterns or statistics of traffic | |||
generated by applications over transport protocols such as TCP. | generated by applications over transport protocols such as TCP. | |||
There are several mechanisms that cause bursts at various time | There are several mechanisms that cause bursts at various time | |||
scales. Our goal here is to mimic the range of common patterns | scales as described in Section 4.1. Our goal here is to mimic the | |||
(burst sizes and rates, etc), without tieing our applicability to | range of common patterns (burst sizes and rates, etc), without | |||
specific applications, implementations or technologies, which are | tying our applicability to specific applications, implementations | |||
sure to become stale. | or technologies, which are sure to become stale. | |||
delivery Statistics: Raw or summary statistics about packet delivery | packet delivery statistics: Raw, detailed or summary statistics | |||
properties of the IP layer including packet losses, ECN marks, | about packet delivery properties of the IP layer including packet | |||
reordering, or any other properties that may be germane to | losses, ECN marks, reordering, or any other properties that may be | |||
transport performance. | germane to transport performance. | |||
IP performance tests: Measurements or diagnostic tests to determine | packet loss ratio: As defined in [I-D.ietf-ippm-2680-bis]. | |||
delivery statistics. | apportioned: To divide and allocate, for example budgeting packet | |||
loss across multiple subpaths such that they will accumulate to | ||||
less than a specified end-to-end loss ratio. | ||||
open loop: A control theory term used to describe a class of | ||||
techniques where systems that naturally exhibit circular | ||||
dependencies can be analyzed by suppressing some of the | ||||
dependencies, such that the resulting dependency graph is acyclic. | ||||
Terminology about paths, etc. See [RFC2330] and [RFC7398]. | Terminology about paths, etc. See [RFC2330] and [RFC7398]. | |||
[data] sender: Host sending data and receiving ACKs. | [data] sender: Host sending data and receiving ACKs. | |||
[data] receiver: Host receiving data and sending ACKs. | [data] receiver: Host receiving data and sending ACKs. | |||
subpath: A portion of the full path. Note that there is no | complete path: The end-to-end path from the data sender to the data | |||
requirement that subpaths be non-overlapping. | receiver. | |||
subpath: A portion of the complete path. Note that there is no | ||||
requirement that subpaths be non-overlapping. A subpath can be as | |||
small as a single device, link or interface. | ||||
Measurement Point: Measurement points as described in [RFC7398]. | Measurement Point: Measurement points as described in [RFC7398]. | |||
test path: A path between two measurement points that includes a | test path: A path between two measurement points that includes a | |||
subpath of the complete path under test, and could include | subpath of the complete path under test, and if the measurement | |||
infrastructure between the measurement points and the subpath. | points are off path, may include "test leads" between the | |||
measurement points and the subpath. | ||||
[Dominant] Bottleneck: The Bottleneck that generally dominates | [Dominant] Bottleneck: The Bottleneck that generally dominates | |||
traffic statistics for the entire path. It typically determines a | packet delivery statistics for the entire path. It typically | |||
flow's self clock timing, packet loss and ECN marking rate. See | determines a flow's self clock timing, packet loss and ECN marking | |||
Section 5.1. | rate. See Section 4.1. | |||
front path: The subpath from the data sender to the dominant | front path: The subpath from the data sender to the dominant | |||
bottleneck. | bottleneck. | |||
back path: The subpath from the dominant bottleneck to the receiver. | back path: The subpath from the dominant bottleneck to the receiver. | |||
return path: The path taken by the ACKs from the data receiver to | return path: The path taken by the ACKs from the data receiver to | |||
the data sender. | the data sender. | |||
cross traffic: Other, potentially interfering, traffic competing for | cross traffic: Other, potentially interfering, traffic competing for | |||
network resources (bandwidth and/or queue capacity). | network resources (bandwidth and/or queue capacity). | |||
Properties determined by the complet path and application. They are | Properties determined by the complete path and application. They are | |||
described in more detail in Section 6.1. | described in more detail in Section 5.1. | |||
Application Data Rate: General term for the data rate as seen by the | Application Data Rate: General term for the data rate as seen by the | |||
application above the transport layer. This is the payload data | application above the transport layer in bytes per second. This | |||
rate, and explicitly excludes transport and lower level headers | is the payload data rate, and explicitly excludes transport and | |||
(TCP/IP or other protocols), retransmissions and other overhead | lower level headers (TCP/IP or other protocols), retransmissions | |||
that is not part of the total quantity of data delivered to the | and other overhead that is not part of the total quantity of data | |||
application. | delivered to the application. | |||
Link Data Rate: General term for the data rate as seen by the link | IP Rate: The actual number of IP-layer bytes delivered through a | |||
or lower layers. The link data rate includes transport and IP | subpath, per unit time, including TCP and IP headers, retransmits | |||
headers, retransmissions and other transport layer overhead. This | and other TCP/IP overhead. Follows from IP-type-P Link Usage | |||
document is agnostic as to whether the link data rate includes or | [RFC5136]. | |||
excludes framing, MAC, or other lower layer overheads, except that | IP Capacity: The maximum number of IP-layer bytes that can be | |||
they must be treated uniformly. | transmitted through a subpath, per unit time, including TCP and IP | |||
Effective Bottleneck Data Rate: This is the bottleneck data rate | headers, retransmits and other TCP/IP overhead. Follows from IP- | |||
implied by the returning ACKs, by looking at how much application | type-P Link Capacity [RFC5136]. | |||
data the ACK stream reports delivered per unit time. If the path | Bottleneck IP Rate: This is the IP rate of the data flowing through | |||
is thinning ACKs or batching ACKs the effective bottleneck rate | the dominant bottleneck in the forward path. TCP and other | |||
can be much higher than the average link rate. See Section 5.1 | protocols normally derive their self clocks from the timing of | |||
and Appendix B for more details. | this data. See Section 4.1 and Appendix B for more details. | |||
[sender | interface] rate: The burst data rate, constrained by the | Implied Bottleneck IP Rate: This is the bottleneck IP rate implied | |||
data sender's interface. Today 1 or 10 Gb/s are typical. | by the returning ACKs from the receiver. It is determined by | |||
looking at how much application data the ACK stream reports | ||||
delivered per unit time. If the return path is thinning, batching | ||||
or otherwise altering ACK timing, TCP will derive its clock from | |||
the implied bottleneck IP rate of the ACK stream, which in the | |||
short term might be much different than the actual bottleneck IP | |||
rate. In the case of thinned or batched ACKs, the front path must have | |||
sufficient buffering to smooth any data bursts to the IP capacity | ||||
of the bottleneck. If the return path is not altering the ACK | ||||
stream, then the Implied Bottleneck IP Rate will be the same as | ||||
the Bottleneck IP Rate. See Section 4.1 and Appendix B for more | ||||
details. | ||||
[sender | interface] rate: The IP rate which corresponds to the IP | ||||
Capacity of the data sender's interface. Due to issues of sender | ||||
efficiency and technologies such as TCP offload engines, nearly | ||||
all modern servers deliver data in bursts at full interface link | |||
rate. Today 1 or 10 Gb/s are typical. | ||||
Header_overhead: The IP and TCP header sizes, which are the portion | Header_overhead: The IP and TCP header sizes, which are the portion | |||
of each MTU not available for carrying application payload. | of each MTU not available for carrying application payload. | |||
Without loss of generality this is assumed to be the size for | Without loss of generality this is assumed to be the size for | |||
returning acknowledgements (ACKs). For TCP, the Maximum Segment | returning acknowledgements (ACKs). For TCP, the Maximum Segment | |||
Size (MSS) is the Target MTU minus the header_overhead. | Size (MSS) is the Target MTU minus the header_overhead. | |||
Basic parameters common to models and subpath tests are defined here | Basic parameters common to models and subpath tests are defined here | |||
and described in more detail in Section 6.2. Note that these are | and described in more detail in Section 5.2. Note that these are | |||
mixed between application transport performance (excludes headers) | mixed between application transport performance (excludes headers) | |||
and link IP performance (includes headers). | and IP performance (which includes TCP headers and retransmissions as | |||
part of the payload). | ||||
Window: The total quantity of data plus the data represented by ACKs | Window: The total quantity of data plus the data represented by ACKs | |||
circulating in the network is referred to as the window. See | circulating in the network is referred to as the window. See | |||
Section 5.1 | Section 4.1. Sometimes used with other qualifiers (congestion | |||
window, cwnd or receiver window) to indicate which mechanism is | ||||
controlling the window. | ||||
pipe size: A general term for number of packets needed in flight | pipe size: A general term for number of packets needed in flight | |||
(the window size) to exactly fill some network path or subpath. | (the window size) to exactly fill some network path or subpath. | |||
It corresponds to the window size which maximizes network power, | It corresponds to the window size which maximizes network power, | |||
the observed data rate divided by the observed RTT. Often used | the observed data rate divided by the observed RTT. Often used | |||
with additional qualifies to specify which path, etc. | with additional qualifiers to specify which path, or under what | |||
conditions, etc. | ||||
target_pipe_size: The number of packets in flight (the window size) | target_window_size: The average number of packets in flight (the | |||
needed to exactly meet the target rate, with a single stream and | window size) needed to meet the target data rate, for the | |||
no cross traffic for the specified application target data rate, | specified target RTT, and MTU. It implies the scale of the bursts | |||
RTT, and MTU. It is the amount of circulating data required to | ||||
meet the target data rate, and implies the scale of the bursts | ||||
that the network might experience. | that the network might experience. | |||
run length: A general term for the observed, measured, or specified | run length: A general term for the observed, measured, or specified | |||
number of packets that are (to be) delivered between losses or ECN | number of packets that are (expected to be) delivered between | |||
marks. Nominally one over the sum of the loss and ECN marking | losses or ECN marks. Nominally one over the sum of the loss and | |||
probabilities, if they are independently and identically | ECN marking probabilities, if they are independently and | |||
distributed. | identically distributed. | |||
target_run_length: The target_run_length is an estimate of the | target_run_length: The target_run_length is an estimate of the | |||
minimum number of non-congestion marked packets needed between | minimum number of non-congestion marked packets needed between | |||
losses or ECN marks necessary to attain the target_data_rate over | losses or ECN marks necessary to attain the target_data_rate over | |||
a path with the specified target_RTT and target_MTU, as computed | a path with the specified target_RTT and target_MTU, as computed | |||
by a mathematical model of TCP congestion control. A reference | by a mathematical model of TCP congestion control. A reference | |||
calculation is shown in Section 6.2 and alternatives in Appendix A | calculation is shown in Section 5.2 and alternatives in Appendix A | |||
reference target_run_length: target_run_length computed precisely by | reference target_run_length: target_run_length computed precisely by | |||
the method in Section 6.2. This is likely to be slightly more | the method in Section 5.2. This is likely to be slightly more | |||
conservative than required by modern TCP algorithms. | conservative than required by modern TCP implementations. | |||
Ancillary parameters used for some tests | Ancillary parameters used for some tests: | |||
derating: Under some conditions the standard models are too | derating: Under some conditions the standard models are too | |||
conservative. The modeling framework permits some latitude in | conservative. The modeling framework permits some latitude in | |||
relaxing or "derating" some test parameters as described in | relaxing or "derating" some test parameters as described in | |||
Section 6.3 in exchange for a more stringent TDS validation | Section 5.3 in exchange for a more stringent TDS validation | |||
procedures, described in Section 12. | procedures, described in Section 10. | |||
subpath_data_rate: The maximum data rate supported by a subpath. | subpath_IP_capacity: The IP capacity of a specific subpath. | |||
This typically includes TCP/IP overhead, including all headers and | ||||
retransmits, etc. | test path: A subpath of a complete path under test. | |||
test_path_RTT: The RTT observed between two measurement points using | test_path_RTT: The RTT observed between two measurement points using | |||
packet sizes that are consistent with the transport protocol. | packet sizes that are consistent with the transport protocol. | |||
Generally MTU sized packets of the forward path, header_overhead | Generally MTU sized packets of the forward path, header_overhead | |||
sized packets on the return path. | sized packets on the return path. | |||
test_path_pipe: The amount of data necessary to fill a test path. | test_path_pipe: The pipe size of a test path. Nominally the test | |||
Nominally the test path RTT times the subpath_data_rate. | path RTT times the test path IP_capacity. | |||
test_window: The window necessary to meet the target_rate over a | test_window: The window necessary to meet the target_rate over a | |||
subpath. Typically test_window=target_data_rate*test_RTT/ | test path. Typically test_window=target_data_rate*test_path_RTT/ | |||
(target_MTU - header_overhead). | (target_MTU - header_overhead). | |||
Tests can be grouped according to their applicability. | The tests described in this note can be grouped according to their | |||
applicability. | ||||
Capacity tests: determine if a network subpath has sufficient | capacity tests: determine if a network subpath has sufficient | |||
capacity to deliver the target performance. As long as the test | capacity to deliver the Target Transport Performance. As long as | |||
traffic is within the proper envelope for the target performance, | the test traffic is within the proper envelope for the Target | |||
the average packet losses or ECN marks must be below the threshold | Transport Performance, the average packet losses or ECN marks must | |||
computed by the model. As such, capacity tests reflect parameters | be below the threshold computed by the model. As such, capacity | |||
that can transition from passing to failing as a consequence of | tests reflect parameters that can transition from passing to | |||
cross traffic, additional presented load or the actions of other | failing as a consequence of cross traffic, additional presented | |||
network users. By definition, capacity tests also consume | load or the actions of other network users. By definition, | |||
significant network resources (data capacity and/or buffer space), | capacity tests also consume significant network resources (data | |||
and the test schedules must be balanced by their cost. | capacity and/or queue buffer space), and the test schedules must | |||
be balanced by their cost. | ||||
Monitoring tests: are designed to capture the most important aspects | Monitoring tests: are designed to capture the most important aspects | |||
of a capacity test, but without presenting excessive ongoing load | of a capacity test, but without presenting excessive ongoing load | |||
themselves. As such they may miss some details of the network's | themselves. As such they may miss some details of the network's | |||
performance, but can serve as a useful reduced-cost proxy for a | performance, but can serve as a useful reduced-cost proxy for a | |||
capacity test, for example to support ongoing monitoring. | capacity test, for example to support ongoing monitoring. | |||
Engineering tests: evaluate how network algorithms (such as AQM and | Engineering tests: evaluate how network algorithms (such as AQM and | |||
channel allocation) interact with TCP-style self clocked protocols | channel allocation) interact with TCP-style self clocked protocols | |||
and adaptive congestion control based on packet loss and ECN | and adaptive congestion control based on packet loss and ECN | |||
marks. These tests are likely to have complicated interactions | marks. These tests are likely to have complicated interactions | |||
with cross traffic and under some conditions can be inversely | with cross traffic and under some conditions can be inversely | |||
sensitive to load. For example a test to verify that an AQM | sensitive to load. For example a test to verify that an AQM | |||
algorithm causes ECN marks or packet drops early enough to limit | algorithm causes ECN marks or packet drops early enough to limit | |||
queue occupancy may experience a false pass result in the presence | queue occupancy may experience a false pass result in the presence | |||
of cross traffic. It is important that engineering tests be | of cross traffic. It is important that engineering tests be | |||
performed under a wide range of conditions, including both in situ | performed under a wide range of conditions, including both in situ | |||
and bench testing, and over a wide variety of load conditions. | and bench testing, and over a wide variety of load conditions. | |||
Ongoing monitoring is less likely to be useful for engineering | Ongoing monitoring is less likely to be useful for engineering | |||
tests, although sparse in situ testing might be appropriate. | tests, although sparse in situ testing might be appropriate. | |||
4. New requirements relative to RFC 2330 | 4. Background | |||
Model Based Metrics are designed to fulfill some additional | ||||
requirement that were not recognized at the time RFC 2330 was written | ||||
[RFC2330]. These missing requirements may have significantly | ||||
contributed to policy difficulties in the IP measurement space. Some | ||||
additional requirements are: | ||||
o IP metrics must be actionable by the ISP - they have to be | ||||
interpreted in terms of behaviors or properties at the IP or lower | ||||
layers, that an ISP can test, repair and verify. | ||||
o Metrics should be spatially composable, such that measures of | ||||
concatenated paths should be predictable from subpaths. | ||||
o Metrics must be vantage point invariant over a significant range | ||||
of measurement point choices, including off path measurement | ||||
points. The only requirements on MP selection should be that the | ||||
portion of the test path that is not under test between the MP and | ||||
the part that is under test is effectively ideal, or is non ideal | ||||
in ways that can be calibrated out of the measurements and the | ||||
test RTT between the MPs is below some reasonable bound. | ||||
o Metric measurements must be repeatable by multiple parties with no | ||||
specialized access to MPs or diagnostic infrastructure. It must | ||||
be possible for different parties to make the same measurement and | ||||
observe the same results. In particular it is specifically | ||||
important that both a consumer (or their delegate) and ISP be able | ||||
to perform the same measurement and get the same result. Note | ||||
that vantage independence is key to this requirement. | ||||
5. Background | ||||
At the time the IPPM WG was chartered, sound Bulk Transport Capacity | At the time the IPPM WG was chartered, sound Bulk Transport Capacity | |||
measurement was known to be well beyond our capabilities. Even at | measurement was known to be well beyond our capabilities. Even at | |||
the time [RFC3148] was written we knew that we didn't fully | the time [RFC3148] was written we knew that we didn't fully | |||
understand the problem. Now, by hindsight we understand why BTC is | understand the problem. Now, by hindsight we understand why BTC is | |||
such a hard problem: | such a hard problem: | |||
o TCP is a control system with circular dependencies - everything | o TCP is a control system with circular dependencies - everything | |||
affects performance, including components that are explicitly not | affects performance, including components that are explicitly not | |||
part of the test. | part of the test. | |||
o Congestion control is an equilibrium process, such that transport | o Congestion control is an equilibrium process, such that transport | |||
protocols change the network (raise the loss ratio and/or RTT) to | protocols change the network statistics (raise the packet loss | |||
conform to their behavior. By design TCP congestion control keeps | ratio and/or RTT) to conform to their behavior. By design TCP | |||
raising the data rate until the network gives some indication that | congestion control keeps raising the data rate until the network | |||
it is full by delaying, dropping or ECN marking packets. | gives some indication that it is full by dropping or ECN marking | |||
packets. If TCP successfully fills the network the packet loss | ||||
and ECN marks are mostly determined by TCP and how hard TCP drives | ||||
the network and not by the network itself. | ||||
o TCP's ability to compensate for network flaws is directly | o TCP's ability to compensate for network flaws is directly | |||
proportional to the number of roundtrips per second (i.e. | proportional to the number of roundtrips per second (i.e. | |||
inversely proportional to the RTT). As a consequence a flawed | inversely proportional to the RTT). As a consequence a flawed | |||
link may pass a short RTT local test even though it fails when the | subpath may pass a short RTT local test even though it fails when | |||
path is extended by a perfect network to some larger RTT. | the path is extended by a perfect network to some larger RTT. | |||
o TCP has a meta Heisenberg problem - Measurement and cross traffic | o TCP has an extreme form of the Heisenberg problem - Measurement | |||
interact in unknown and ill defined ways. The situation is | and cross traffic interact in unknown and ill defined ways. The | |||
actually worse than the traditional physics problem where you can | situation is actually worse than the traditional physics problem | |||
at least estimate bounds on the relative momentum of the | where you can at least estimate bounds on the relative momentum of | |||
measurement and measured particles. For network measurement you | the measurement and measured particles. For network measurement | |||
can not in general determine the relative "mass" of the | you can not in general determine the relative "mass" of either the | |||
measurement traffic and cross traffic, so you can not even gauge | measurement traffic or the cross traffic, so you can not gauge the | |||
the relative magnitude of their effects on each other. | relative magnitude of the uncertainty that might be introduced by | |||
any interaction. | ||||
These properties are a consequence of the equilibrium behavior | These properties are a consequence of the equilibrium behavior | |||
intrinsic to how all throughput optimizing protocols interact with | intrinsic to how all throughput maximizing protocols interact with | |||
the Internet. The protocols rely on control systems based on | the Internet. These protocols rely on control systems based on | |||
multiple network estimators to regulate the quantity of data traffic | estimated network parameters to regulate the quantity of data traffic | |||
sent into the network. The data traffic in turn alters the network and | sent into the network. The data traffic in turn alters the network and | |||
the properties observed by the estimators, such that there are | the properties observed by the estimators, such that there are | |||
circular dependencies between every component and every property. | circular dependencies between every component and every property. | |||
Since some of these properties are nonlinear, the entire system is | Since some of these properties are nonlinear, the entire system is | |||
nonlinear, and any change anywhere causes difficult to predict | nonlinear, and any change anywhere causes difficult to predict | |||
changes in every parameter. | changes in every parameter. | |||
Model Based Metrics overcome these problems by forcing the | Model Based Metrics overcome these problems by forcing the | |||
measurement system to be open loop: the delivery statistics (akin to | measurement system to be open loop: the packet delivery statistics | |||
the network estimators) do not affect the traffic or traffic patterns | (akin to the network estimators) do not affect the traffic or traffic | |||
(bursts), which are computed on the basis of the target performance. In | patterns (bursts), which are computed on the basis of the Target | |||
order for a network to pass, the resulting delivery statistics and | Transport Performance. In order for a network to pass, the resulting | |||
corresponding network estimators have to be such that they would not | packet delivery statistics and corresponding network estimators have | |||
cause the control systems to slow the traffic below the target rate. | to be such that they would not cause the control systems to slow the | |||
traffic below the target data rate. | ||||
5.1. TCP properties | 4.1. TCP properties | |||
TCP and SCTP are self clocked protocols. The dominant steady state | TCP and SCTP are self clocked protocols. The dominant steady state | |||
behavior is to have an approximately fixed quantity of data and | behavior is to have an approximately fixed quantity of data and | |||
acknowledgements (ACKs) circulating in the network. The receiver | acknowledgements (ACKs) circulating in the network. The receiver | |||
reports arriving data by returning ACKs to the data sender, the data | reports arriving data by returning ACKs to the data sender, the data | |||
sender typically responds by sending exactly the same quantity of | sender typically responds by sending exactly the same quantity of | |||
data back into the network. The total quantity of data plus the data | data back into the network. The total quantity of data plus the data | |||
represented by ACKs circulating in the network is referred to as the | represented by ACKs circulating in the network is referred to as the | |||
window. The mandatory congestion control algorithms incrementally | window. The mandatory congestion control algorithms incrementally | |||
adjust the window by sending slightly more or less data in response | adjust the window by sending slightly more or less data in response | |||
to each ACK. The fundamentally important property of this systems is | to each ACK. The fundamentally important property of this system is | |||
that it is entirely self clocked: The data transmissions are a | that it is self clocked: The data transmissions are a reflection of | |||
reflection of the ACKs that were delivered by the network, the ACKs | the ACKs that were delivered by the network, the ACKs are a | |||
are a reflection of the data arriving from the network. | reflection of the data arriving from the network. | |||
A number of phenomena can cause bursts of data, even in idealized | A number of phenomena can cause bursts of data, even in idealized | |||
networks that are modeled as simple queueing systems. | networks that can be modeled as simple queueing systems. | |||
During slowstart the data rate is doubled on each RTT by sending | During slowstart the data rate is doubled on each RTT by sending | |||
twice as much data as was delivered to the receiver on the prior RTT. | twice as much data as was delivered to the receiver on the prior RTT. | |||
For slowstart to be able to fill such a network the network must be | For slowstart to be able to fill such a network the network must be | |||
able to tolerate slowstart bursts up to the full pipe size inflated | able to tolerate slowstart bursts up to the full pipe size inflated | |||
by the anticipated window reduction on the first loss or ECN mark. | by the anticipated window reduction on the first loss or ECN mark. | |||
For example, with classic Reno congestion control, an optimal | For example, with classic Reno congestion control, an optimal | |||
slowstart has to end with a burst that is twice the bottleneck rate | slowstart has to end with a burst that is twice the bottleneck rate | |||
for exactly one RTT in duration. This burst causes a queue which is | for exactly one RTT in duration. This burst causes a queue which is | |||
exactly equal to the pipe size (i.e. the window is exactly twice the | exactly equal to the pipe size (i.e. the window is exactly twice the | |||
skipping to change at page 17, line 7 | skipping to change at page 17, line 5 | |||
Other sources of bursts include application pauses and channel | Other sources of bursts include application pauses and channel | |||
allocation mechanisms. Appendix B describes the treatment of channel | allocation mechanisms. Appendix B describes the treatment of channel | |||
allocation systems. If the application pauses (stops reading or | allocation systems. If the application pauses (stops reading or | |||
writing data) for some fraction of one RTT, state-of-the-art TCP | writing data) for some fraction of one RTT, state-of-the-art TCP | |||
catches up to the earlier window size by sending a burst of data at | catches up to the earlier window size by sending a burst of data at | |||
the full sender interface rate. To fill such a network with a | the full sender interface rate. To fill such a network with a | |||
realistic application, the network has to be able to tolerate | realistic application, the network has to be able to tolerate | |||
interface rate bursts from the data sender large enough to cover | interface rate bursts from the data sender large enough to cover | |||
application pauses. | application pauses. | |||
Although the interface rate bursts are typically smaller than last | Although the interface rate bursts are typically smaller than the | |||
burst of a slowstart, they are at a higher data rate so they | last burst of a slowstart, they are at a higher data rate so they | |||
potentially exercise queues at arbitrary points along the front path | potentially exercise queues at arbitrary points along the front path | |||
from the data sender up to and including the queue at the dominant | from the data sender up to and including the queue at the dominant | |||
bottleneck. There is no model for how frequent or what sizes of | bottleneck. There is no model for how frequent or what sizes of | |||
sender rate bursts should be tolerated. | sender rate bursts should be tolerated. | |||
To verify that a path can meet a performance target, it is necessary | To verify that a path can meet a Target Transport Performance, it is | |||
to independently confirm that the path can tolerate bursts in the | necessary to independently confirm that the path can tolerate bursts | |||
dimensions that can be caused by these mechanisms. Three cases are | in the dimensions that can be caused by these mechanisms. Three | |||
likely to be sufficient: | cases are likely to be sufficient: | |||
o Slowstart bursts sufficient to get connections started properly. | o Slowstart bursts sufficient to get connections started properly. | |||
o Frequent sender interface rate bursts that are small enough that | o Frequent sender interface rate bursts that are small enough that | |||
they can be assumed not to significantly affect delivery | they can be assumed not to significantly affect packet delivery | |||
statistics. (Implicitly derated by selecting the burst size). | statistics. (Implicitly derated by limiting the burst size). | |||
o Infrequent sender interface rate full target_pipe_size bursts that | o Infrequent sender interface rate full target_window_size bursts | |||
do affect the delivery statistics. (Target_run_length may be | that might affect the packet delivery statistics. | |||
derated). | (Target_run_length may be derated). | |||
5.2. Diagnostic Approach | ||||
The MBM approach is to open loop TCP by precomputing traffic patterns | ||||
that are typically generated by TCP operating at the given target | ||||
parameters, and evaluating delivery statistics (packet loss, ECN | ||||
marks and delay). In this approach the measurement software | ||||
explicitly controls the data rate, transmission pattern or cwnd | ||||
(TCP's primary congestion control state variables) to create | ||||
repeatable traffic patterns that mimic TCP behavior but are | ||||
independent of the actual behavior of the subpath under test. These | ||||
patterns are manipulated to probe the network to verify that it can | ||||
deliver all of the traffic patterns that a transport protocol is | ||||
likely to generate under normal operation at the target rate and RTT. | ||||
By opening the protocol control loops, we remove most sources of | ||||
temporal and spatial correlation in the traffic delivery statistics, | ||||
such that each subpath's contribution to the end-to-end delivery | ||||
statistics can be assumed to be independent and stationary (The | ||||
delivery statistics depend on the fine structure of the data | ||||
transmissions, but not on long time scale state imbedded in the | ||||
sender, receiver or other network components.) Therefore each | ||||
subpath's contribution to the end-to-end delivery statistics can be | ||||
assumed to be independent, and spatial composition techniques such as | ||||
[RFC5835] and [RFC6049] apply. | ||||
In typical networks, the dominant bottleneck contributes the majority | ||||
of the packet loss and ECN marks. Often the rest of the path makes | ||||
insignificant contribution to these properties. A TDS should | ||||
apportion the end-to-end budget for the specified parameters | ||||
(primarily packet loss and ECN marks) to each subpath or group of | ||||
subpaths. For example the dominant bottleneck may be permitted to | ||||
contribute 90% of the loss budget, while the rest of the path is only | ||||
permitted to contribute 10%. | ||||
A TDS or FSTDS MUST apportion all relevant packet delivery statistics | 4.2. Diagnostic Approach | |||
between successive subpaths, such that the spatial composition of the | ||||
apportioned metrics will yield end-to-end delivery statistics which | ||||
are within the bounds determined by the models. | ||||
A network is expected to be able to sustain a Bulk TCP flow of a | A complete path is expected to be able to sustain a Bulk TCP flow of | |||
given data rate, MTU and RTT when all of the following conditions are | a given data rate, MTU and RTT when all of the following conditions | |||
met: | are met: | |||
1. The raw link rate is higher than the target data rate. See | 1. The IP capacity is above the target data rate by sufficient | |||
Section 10.1 or any number of data rate tests outside of MBM. | margin to cover all TCP/IP overheads. See Section 8.1 or any | |||
number of data rate tests outside of MBM. | ||||
2. The observed packet delivery statistics are better than required | 2. The observed packet delivery statistics are better than required | |||
by a suitable TCP performance model (e.g. fewer losses or ECN | by a suitable TCP performance model (e.g. fewer losses or ECN | |||
marks). See Section 10.1 or any number of low rate packet loss | marks). See Section 8.1 or any number of low rate packet loss | |||
tests outside of MBM. | tests outside of MBM. | |||
3. There is sufficient buffering at the dominant bottleneck to | 3. There is sufficient buffering at the dominant bottleneck to | |||
absorb a slowstart rate burst large enough to get the flow out of | absorb a slowstart rate burst large enough to get the flow out of | |||
slowstart at a suitable window size. See Section 10.3. | slowstart at a suitable window size. See Section 8.3. | |||
4. There is sufficient buffering in the front path to absorb and | 4. There is sufficient buffering in the front path to absorb and | |||
smooth sender interface rate bursts at all scales that are likely | smooth sender interface rate bursts at all scales that are likely | |||
to be generated by the application, any channel arbitration in | to be generated by the application, any channel arbitration in | |||
the ACK path or any other mechanisms. See Section 10.4. | the ACK path or any other mechanisms. See Section 8.4. | |||
5. When there is a standing queue at a bottleneck for a shared media | 5. When there is a slowly rising standing queue at the bottleneck | |||
the onset of packet loss has to be at an appropriate point (time | ||||
or queue depth) and progressive. See Section 8.2. | ||||
6. When there is a standing queue at a bottleneck for a shared media | ||||
subpath (e.g. half duplex), there are suitable bounds on how the | subpath (e.g. half duplex), there are suitable bounds on how the | |||
data and ACKs interact, for example due to the channel | data and ACKs interact, for example due to the channel | |||
arbitration mechanism. See Section 10.2.4. | arbitration mechanism. See Section 8.2.4. | |||
6. When there is a slowly rising standing queue at the bottleneck | ||||
the onset of packet loss has to be at an appropriate point (time | ||||
or queue depth) and progressive. See Section 10.2. | ||||
Note that conditions 1 through 4 require capacity tests for | Note that conditions 1 through 4 require capacity tests for | |||
confirmation, and thus need to be monitored on an ongoing basis. | validation, and thus may need to be monitored on an ongoing basis. | |||
Conditions 5 and 6 require engineering tests. They won't generally | Conditions 5 and 6 require engineering tests best performed in | |||
controlled environments such as a bench test. They won't generally | ||||
fail due to load, but may fail in the field due to configuration | fail due to load, but may fail in the field due to configuration | |||
errors, etc. and should be spot checked. | errors, etc. and should be spot checked. | |||
We are developing a tool that can perform many of the tests described | We are developing a tool that can perform many of the tests described | |||
here[MBMSource]. | here [MBMSource]. | |||
6. Common Models and Parameters | 4.3. New requirements relative to RFC 2330 | |||
6.1. Target End-to-end parameters | Model Based Metrics are designed to fulfill some additional | |||
requirements that were not recognized at the time RFC 2330 was | ||||
written [RFC2330]. These missing requirements may have significantly | ||||
contributed to policy difficulties in the IP measurement space. Some | ||||
additional requirements are: | ||||
o IP metrics must be actionable by the ISP - they have to be | ||||
interpreted in terms of behaviors or properties at the IP or lower | ||||
layers, that an ISP can test, repair and verify. | ||||
o Metrics should be spatially composable, such that measures of | ||||
concatenated paths should be predictable from subpaths. | ||||
o Metrics must be vantage point invariant over a significant range | ||||
of measurement point choices, including off path measurement | ||||
points. The only requirements on MP selection should be that the | ||||
RTT between the MPs is below some reasonable bound, and that the | ||||
effects of the "test leads" connecting MPs to the subpath under | ||||
test can be can be calibrated out of the measurements. The latter | ||||
might be be accomplished if the test leads are effectively ideal | ||||
or their properties can be deducted from the measurements between | ||||
the MPs. While many of tests require that the test leads have at | ||||
least as much IP capacity as the subpath under test, some do not, | ||||
for example Background Packet Delivery Tests described in | ||||
Section 8.1.3. | ||||
o Metric measurements must be repeatable by multiple parties with no | ||||
specialized access to MPs or diagnostic infrastructure. It must | ||||
be possible for different parties to make the same measurement and | ||||
observe the same results. In particular it is specifically | ||||
important that both a consumer (or their delegate) and ISP be able | ||||
to perform the same measurement and get the same result. Note | ||||
that vantage independence is key to meeting this requirement. | ||||
5. Common Models and Parameters | ||||
5.1. Target End-to-end parameters | ||||
The target end-to-end parameters are the target data rate, target RTT | The target end-to-end parameters are the target data rate, target RTT | |||
and target MTU as defined in Section 3. These parameters are | and target MTU as defined in Section 3. These parameters are | |||
determined by the needs of the application or the ultimate end user | determined by the needs of the application or the ultimate end user | |||
and the complete Internet path over which the application is expected | and the complete Internet path over which the application is expected | |||
to operate. The target parameters are in units that make sense to | to operate. The target parameters are in units that make sense to | |||
upper layers: payload bytes delivered to the application, above TCP. | upper layers: payload bytes delivered to the application, above TCP. | |||
They exclude overheads associated with TCP and IP headers, | They exclude overheads associated with TCP and IP headers, | |||
retransmits and other protocols (e.g. DNS). | retransmits and other protocols (e.g. DNS). | |||
Other end-to-end parameters defined in Section 3 include the | Other end-to-end parameters defined in Section 3 include the | |||
effective bottleneck data rate, the sender interface data rate and | effective bottleneck data rate, the sender interface data rate and | |||
the TCP/IP header sizes (overhead). | the TCP and IP header sizes. | |||
The target data rate must be smaller than all link data rates by | The target_data_rate must be smaller than all subpath IP capacities | |||
enough headroom to carry the transport protocol overhead, explicitly | by enough headroom to carry the transport protocol overhead, | |||
including retransmissions and an allowance for fluctuations in the | explicitly including retransmissions and an allowance for | |||
actual data rate, needed to meet the specified average rate. | fluctuations in TCP's actual data rate. Specifying a | |||
Specifying a target rate with insufficient headroom is likely to | target_data_rate with insufficient headroom is likely to result in | |||
result in brittle measurements having little predictive value. | brittle measurements having little predictive value. | |||
Note that the target parameters can be specified for a hypothetical | Note that the target parameters can be specified for a hypothetical | |||
path, for example to construct TDS designed for bench testing in the | path, for example to construct TDS designed for bench testing in the | |||
absence of a real application, or for a real physical test for in | absence of a real application; or for a live in situ test of | |||
situ testing of production infrastructure. | production infrastructure. | |||
The number of concurrent connections is explicitly not a parameter to | The number of concurrent connections is explicitly not a parameter to | |||
this model. If a subpath requires multiple connections in order to | this model. If a subpath requires multiple connections in order to | |||
meet the specified performance, that must be stated explicitly and | meet the specified performance, that must be stated explicitly and | |||
the procedure described in Section 7.4 applies. | the procedure described in Section 6.4 applies. | |||
6.2. Common Model Calculations | 5.2. Common Model Calculations | |||
The target transport performance is used to derive the | The Target Transport Performance is used to derive the | |||
target_pipe_size and the reference target_run_length. | target_window_size and the reference target_run_length. | |||
The target_pipe_size, is the average window size in packets needed to | The target_window_size, is the average window size in packets needed | |||
meet the target rate, for the specified target RTT and MTU. It is | to meet the target_rate, for the specified target_RTT and target_MTU. | |||
given by: | It is given by: | |||
target_pipe_size = ceiling( target_rate * target_RTT / ( target_MTU - | target_window_size = ceiling( target_rate * target_RTT / ( target_MTU | |||
header_overhead ) ) | - header_overhead ) ) | |||
Target_run_length is an estimate of the minimum required number of | Target_run_length is an estimate of the minimum required number of | |||
unmarked packets that must be delivered between losses or ECN marks, | unmarked packets that must be delivered between losses or ECN marks, | |||
as computed by a mathematical model of TCP congestion control. The | as computed by a mathematical model of TCP congestion control. The | |||
derivation here follows [MSMO97], and by design is quite | derivation here follows [MSMO97], and by design is quite | |||
conservative. The alternate models described in Appendix A generally | conservative. | |||
yield smaller run_lengths (higher acceptable loss or ECN marking | ||||
rates), but may not apply in all situations. A FSTDS that uses an | ||||
alternate model MUST compare it to the reference target_run_length | ||||
computed here. | ||||
Reference target_run_length is derived as follows: assume the | Reference target_run_length is derived as follows: assume the | |||
subpath_data_rate is infinitesimally larger than the target_data_rate | subpath_IP_capacity is infinitesimally larger than the | |||
plus the required header_overhead. Then target_pipe_size also | target_data_rate plus the required header_overhead. Then | |||
predicts the onset of queueing. A larger window will cause a | target_window_size also predicts the onset of queueing. A larger | |||
standing queue at the bottleneck. | window will cause a standing queue at the bottleneck. | |||
Assume the transport protocol is using standard Reno style Additive | Assume the transport protocol is using standard Reno style Additive | |||
Increase, Multiplicative Decrease (AIMD) congestion control [RFC5681] | Increase, Multiplicative Decrease (AIMD) congestion control [RFC5681] | |||
(but not Appropriate Byte Counting [RFC3465]) and the receiver is | (but not Appropriate Byte Counting [RFC3465]) and the receiver is | |||
using standard delayed ACKs. Reno increases the window by one packet | using standard delayed ACKs. Reno increases the window by one packet | |||
every pipe_size worth of ACKs. With delayed ACKs this takes 2 Round | every pipe_size worth of ACKs. With delayed ACKs this takes 2 Round | |||
Trip Times per increase. To exactly fill the pipe, losses must be no | Trip Times per increase. To exactly fill the pipe, losses must be no | |||
closer than when the peak of the AIMD sawtooth reached exactly twice | closer than when the peak of the AIMD sawtooth reached exactly twice | |||
the target_pipe_size otherwise the multiplicative window reduction | the target_window_size otherwise the multiplicative window reduction | |||
triggered by the loss would cause the network to be underfilled. | triggered by the loss would cause the network to be underfilled. | |||
Following [MSMO97] the number of packets between losses must be the | Following [MSMO97] the number of packets between losses must be the | |||
area under the AIMD sawtooth. They must be no more frequent than | area under the AIMD sawtooth. They must be no more frequent than | |||
every 1 in ((3/2)*target_pipe_size)*(2*target_pipe_size) packets, | every 1 in ((3/2)*target_window_size)*(2*target_window_size) packets, | |||
which simplifies to: | which simplifies to: | |||
target_run_length = 3*(target_pipe_size^2) | target_run_length = 3*(target_window_size^2) | |||
Note that this calculation is very conservative and is based on a | Note that this calculation is very conservative and is based on a | |||
number of assumptions that may not apply. Appendix A discusses these | number of assumptions that may not apply. Appendix A discusses these | |||
assumptions and provides some alternative models. If a different | assumptions and provides some alternative models. If a different | |||
model is used, a fully specified TDS or FSTDS MUST document the | model is used, a fully specified TDS or FSTDS MUST document the | |||
actual method for computing target_run_length and ratio between | actual method for computing target_run_length and ratio between | |||
alternate target_run_length and the reference target_run_length | alternate target_run_length and the reference target_run_length | |||
calculated above, along with a discussion of the rationale for the | calculated above, along with a discussion of the rationale for the | |||
underlying assumptions. | underlying assumptions. | |||
These two parameters, target_pipe_size and target_run_length, | These two parameters, target_window_size and target_run_length, | |||
directly imply most of the individual parameters for the tests in | directly imply most of the individual parameters for the tests in | |||
Section 10. | Section 8. | |||
6.3. Parameter Derating | 5.3. Parameter Derating | |||
Since some aspects of the models are very conservative, the MBM | Since some aspects of the models are very conservative, the MBM | |||
framework permits some latitude in derating test parameters. Rather | framework permits some latitude in derating test parameters. Rather | |||
than trying to formalize more complicated models we permit some test | than trying to formalize more complicated models we permit some test | |||
parameters to be relaxed as long as they meet some additional | parameters to be relaxed as long as they meet some additional | |||
procedural constraints: | procedural constraints: | |||
o The TDS or FSTDS MUST document and justify the actual method used | o The TDS or FSTDS MUST document and justify the actual method used | |||
to compute the derated metric parameters. | to compute the derated metric parameters. | |||
o The validation procedures described in Section 12 must be used to | o The validation procedures described in Section 10 must be used to | |||
demonstrate the feasibility of meeting the performance targets | demonstrate the feasibility of meeting the Target Transport | |||
with infrastructure that infinitesimally passes the derated tests. | Performance with infrastructure that infinitesimally passes the | |||
o The validation process itself must be documented is such a way | derated tests. | |||
that other researchers can duplicate the validation experiments. | ||||
o The validation process for a FSTDS itself must be documented in | ||||
such a way that other researchers can duplicate the validation | ||||
experiments. | ||||
Except as noted, all tests below assume no derating. Tests where | Except as noted, all tests below assume no derating. Tests where | |||
there is not currently a well established model for the required | there is not currently a well established model for the required | |||
parameters explicitly include derating as a way to indicate | parameters explicitly include derating as a way to indicate | |||
flexibility in the parameters. | flexibility in the parameters. | |||
7. Traffic generating techniques | 5.4. Test Preconditions | |||
7.1. Paced transmission | Many tests have preconditions which are required to assure their | |||
validity. For example the presence or absence of cross traffic | |||
on specific subpaths, or appropriate preloading to put reactive | ||||
network elements into the proper states [RFC7312]. If preconditions | ||||
are not properly satisfied for some reason, the tests should be | ||||
considered to be inconclusive. In general it is useful to preserve | ||||
diagnostic information about why the preconditions were not met, and | ||||
any test data that was collected even if it is not useful for the | ||||
intended test. Such diagnostic information and partial test data may | ||||
be useful for improving the test in the future. | ||||
It is important to preserve the record that a test was scheduled, | ||||
because otherwise precondition enforcement mechanisms can introduce | ||||
sampling bias. For example, canceling tests due to cross traffic on | ||||
subscriber access links might introduce sampling bias in tests of the | ||||
rest of the network by reducing the number of tests during peak | ||||
network load. | ||||
Test preconditions and failure actions MUST be specified in a FSTDS. | ||||
6. Traffic generating techniques | ||||
6.1. Paced transmission | ||||
Paced (burst) transmissions: send bursts of data on a timer to meet a | Paced (burst) transmissions: send bursts of data on a timer to meet a | |||
particular target rate and pattern. In all cases the specified data | particular target rate and pattern. In all cases the specified data | |||
rate can be either the application or link rate. Header overheads | rate can be either the application or the IP rate. Header overheads | |||
must be included in the calculations as appropriate. | must be included in the calculations as appropriate. | |||
Packet Headway: Time interval between packets, specified from the | Packet Headway: Time interval between packets, specified from the | |||
start of one to the start of the next. e.g. If packets are sent | start of one to the start of the next. e.g. If packets are sent | |||
with a 1 ms headway, there will be exactly 1000 packets per | with a 1 ms headway, there will be exactly 1000 packets per | |||
second. | second. | |||
Burst Headway: Time interval between bursts, specified from the | Burst Headway: Time interval between bursts, specified from the | |||
start of the first packet of one burst to the start of the first | start of the first packet of one burst to the start of the first | |||
packet of the next burst. e.g. If 4 packet bursts are sent with a | packet of the next burst. e.g. If 4 packet bursts are sent with a | |||
1 ms headway, there will be exactly 4000 packets per second. | 1 ms burst headway, there will be exactly 4000 packets per second. | |||
Paced single packets: Send individual packets at the specified rate | Paced single packets: Send individual packets at the specified rate | |||
or packet headway. [@@@@ Site RFC 3432, update definition?] | or packet headway. | |||
Paced Bursts: Send sender interface rate bursts on a timer. Specify | Paced Bursts: Send sender interface rate bursts on a timer. Specify | |||
any 3 of: average rate, packet size, burst size (number of | any 3 of: average rate, packet size, burst size (number of | |||
packets) and burst headway (burst start to start). The packet | packets) and burst headway (burst start to start). The packet | |||
headway within a burst is typically assumed to be the minimum | headway within a burst is typically assumed to be the minimum | |||
supported by the tester's interface. i.e. Bursts are normally | supported by the tester's interface. i.e. Bursts are normally | |||
sent as back-to-back packets. The packet headway within the | sent as back-to-back packets. The packet headway within the | |||
bursts can be explicitly specified. | bursts can also be explicitly specified. | |||
Slowstart bursts: Send 4 packet paced bursts at an average data rate | Slowstart burst: Mimic TCP slowstart by sending 4 packet paced | |||
equal to twice effective bottleneck link rate (but not more than | bursts at an average data rate equal to twice the implied | |||
the sender interface rate). This corresponds to the average rate | bottleneck IP rate (but not more than the sender interface rate). | |||
during a TCP slowstart when Appropriate Byte Counting [RFC3465] is | If the implied bottleneck IP rate is more than half of the sender | |||
present or delayed ack is disabled. Note that if the effective | interface rate, slowstart rate bursts become sender interface rate | |||
bottleneck link rate is more than half of the sender interface | bursts. See the discussion and figure below. | |||
rate, slowstart rate bursts become sender interface rate bursts. | Repeated Slowstart bursts: Repeat Slowstart bursts once per | |||
target_RTT. For TCP each burst would be twice as large as the | ||||
prior burst, and the sequence would end at the first ECN mark or | ||||
lost packet. For measurement, all slowstart bursts would be the | ||||
same size (nominally target_window_size but other sizes might be | ||||
specified). See the discussion and figure below. | ||||
[@@@@ Add figure --MM]. | The slowstart bursts mimic TCP slowstart under a particular set of | |||
Repeated Slowstart bursts: Slowstart bursts are typically part of | implementation assumptions. The burst headway shown in Figure 2 | |||
larger scale pattern of repeated bursts, such as sending | reflects the TCP self clock derived from the data passing through the | |||
target_pipe_size packets as slowstart bursts on a target_RTT | target_window_size (so it might end with a burst that is less than 4 | |||
headway (burst start to burst start). Such a stream has three | target_window_size (so it might end with a bust that is less than 4 | |||
different average rates, depending on the averaging interval. At | packets). The slowstart bursts are repeated every target_RTT. Note | |||
the finest time scale the average rate is the same as the sender | that a stream of repeated slowstart bursts has three different | |||
interface rate, at a medium scale the average rate is twice the | average rates, depending on the averaging interval. At the finest | |||
effective bottleneck link rate and at the longest time scales the | time scale (a few packet times at the sender interface) the peak of | |||
average rate is equal to the target data rate. | the average rate is the same as the sender interface rate; at a | |||
medium scale (a few packet times at the dominant bottleneck) the peak | ||||
of the average rate is twice the implied bottleneck IP rate; and at | ||||
time scales longer than the target_RTT and when the burst size is | ||||
equal to the target_window_size the average rate is equal to the | ||||
target_data_rate. This pattern corresponds to repeating the last RTT | ||||
of TCP slowstart when delayed ACK and sender side byte counting are | ||||
present but without the limits specified in Appropriate Byte Counting | ||||
[RFC3465]. | ||||
Note that in conventional measurement theory, exponential | time --> ( - = one packet) | |||
distributions are often used to eliminate many sorts of correlations. | Packet stream: | |||
For the procedures above, the correlations are created by the network | ||||
elements and accurately reflect their behavior. At some point in the | ||||
future, it will be desirable to introduce noise sources into the | ||||
above pacing models, but they are not warranted at this time. | ||||
7.2. Constant window pseudo CBR | ---- ---- ---- ---- ---- ---- ---- ... | |||
|<>| 4 packet sender interface rate bursts | ||||
|<--->| Burst headway | ||||
|<------------------------>| slowstart burst size | ||||
|<---------------------------------------------->| slowstart headway | ||||
\____________ _____________/ \______ __ ... | ||||
V V | ||||
One slowstart burst Repeated slowstart bursts | ||||
Slowstart Burst Structure | ||||
Figure 2 | ||||
Note that in conventional measurement practice, exponentially | ||||
distributed intervals are often used to eliminate many sorts of | ||||
correlations. For the procedures above, the correlations are created | ||||
by the network or protocol elements and accurately reflect their | ||||
behavior. At some point in the future, it will be desirable to | ||||
introduce noise sources into the above pacing models, but they are | ||||
not warranted at this time. | ||||
6.2. Constant window pseudo CBR | ||||
Implement pseudo constant bit rate by running a standard protocol | Implement pseudo constant bit rate by running a standard protocol | |||
such as TCP with a fixed window size, such that it is self clocked. | such as TCP with a fixed window size, such that it is self clocked. | |||
Data packets arriving at the receiver trigger acknowledgements (ACKs) | Data packets arriving at the receiver trigger acknowledgements (ACKs) | |||
which travel back to the sender where they trigger additional | which travel back to the sender where they trigger additional | |||
transmissions. The window size is computed from the target_data_rate | transmissions. The window size is computed from the target_data_rate | |||
and the actual RTT of the test path. The rate is only maintained in | and the actual RTT of the test path. The rate is only maintained in | |||
average over each RTT, and is subject to limitations of the transport | average over each RTT, and is subject to limitations of the transport | |||
protocol. | protocol. | |||
Since the window size is constrained to be an integer number of | Since the window size is constrained to be an integer number of | |||
packets, for small RTTs or low data rates there may not be | packets, for small RTTs or low data rates there may not be | |||
sufficiently precise control over the data rate. Rounding the window | sufficiently precise control over the data rate. Rounding the window | |||
size up (the default) is likely to result in data rates that are | size up (the default) is likely to result in data rates that are | |||
higher than the target rate, but reducing the window by one packet | higher than the target rate, but reducing the window by one packet | |||
may result in data rates that are too small. Also cross traffic | may result in data rates that are too small. Also cross traffic | |||
potentially raises the RTT, implicitly reducing the rate. Cross | potentially raises the RTT, implicitly reducing the rate. Cross | |||
traffic that raises the RTT nearly always makes the test more | traffic that raises the RTT nearly always makes the test more | |||
strenuous. A FSTDS specifying a constant window CBR tests MUST | strenuous. A FSTDS specifying a constant window CBR tests MUST | |||
explicitly indicate under what conditions errors in the data cause | explicitly indicate under what conditions errors in the data cause | |||
tests to be inconclusive. See the discussion of test outcomes in | tests to be inconclusive. | |||
Section 8.1. | ||||
Since constant window pseudo CBR testing is sensitive to RTT | Since constant window pseudo CBR testing is sensitive to RTT | |||
fluctuations it can not accurately control the data rate in | fluctuations it is less accurate at controlling the data rate in | |||
environments with fluctuating delays. | environments with fluctuating delays. | |||
7.3. Scanned window pseudo CBR | 6.3. Scanned window pseudo CBR | |||
Scanned window pseudo CBR is similar to the constant window CBR | Scanned window pseudo CBR is similar to the constant window CBR | |||
described above, except the window is scanned across a range of sizes | described above, except the window is scanned across a range of sizes | |||
designed to include two key events, the onset of queueing and the | designed to include two key events, the onset of queueing and the | |||
onset of packet loss or ECN marks. The window is scanned by | onset of packet loss or ECN marks. The window is scanned by | |||
incrementing it by one packet every 2*target_pipe_size delivered | incrementing it by one packet every 2*target_window_size delivered | |||
packets. This mimics the additive increase phase of standard TCP | packets. This mimics the additive increase phase of standard TCP | |||
congestion avoidance when delayed ACKs are in effect. It normally | congestion avoidance when delayed ACKs are in effect. Normally the | |||
separates the window increases by approximately twice the | window increases are separated by intervals slightly longer than twice | |||
target_RTT. | the target_RTT. | |||
There are two ways to implement this test: one built by applying a | There are two ways to implement this test: one built by applying a | |||
window clamp to standard congestion control in a standard protocol | window clamp to standard congestion control in a standard protocol | |||
such as TCP and the other built by stiffening a non-standard | such as TCP and the other built by stiffening a non-standard | |||
transport protocol. When standard congestion control is in effect, | transport protocol. When standard congestion control is in effect, | |||
any losses or ECN marks cause the transport to revert to a window | any losses or ECN marks cause the transport to revert to a window | |||
smaller than the clamp such that the scanning clamp loses control of the | smaller than the clamp such that the scanning clamp loses control of the | |||
window size. The NPAD pathdiag tool is an example of this class of | window size. The NPAD pathdiag tool is an example of this class of | |||
algorithms [Pathdiag]. | algorithms [Pathdiag]. | |||
Alternatively a non-standard congestion control algorithm can respond | Alternatively a non-standard congestion control algorithm can respond | |||
to losses by transmitting extra data, such that it maintains the | to losses by transmitting extra data, such that it maintains the | |||
specified window size independent of losses or ECN marks. Such a | specified window size independent of losses or ECN marks. Such a | |||
stiffened transport explicitly violates mandatory Internet congestion | stiffened transport explicitly violates mandatory Internet congestion | |||
control and is not suitable for in situ testing. [RFC5681] It is | control and is not suitable for in situ testing. [RFC5681] It is | |||
only appropriate for engineering testing under laboratory conditions. | only appropriate for engineering testing under laboratory conditions. | |||
The Windowed Ping tool implements such a test [WPING]. The tool | The Windowed Ping tool implements such a test [WPING]. The tool | |||
described in the paper has been updated [mpingSource]. | described in the paper has been updated [mpingSource]. | |||
The test procedures in Section 10.2 describe how to partition the | The test procedures in Section 8.2 describe how to partition the | |||
scans into regions and how to interpret the results. | scans into regions and how to interpret the results. | |||
7.4. Concurrent or channelized testing | 6.4. Concurrent or channelized testing | |||
The procedures described in this document are only directly | The procedures described in this document are only directly | |||
applicable to single stream performance measurement, e.g. one TCP | applicable to single stream measurement, e.g. one TCP connection or | |||
connection. In an ideal world, we would disallow all performance | measurement stream. In an ideal world, we would disallow all | |||
claims based on multiple concurrent streams, but this is not practical | performance claims based on multiple concurrent streams, but this is not | |||
due to at least two different issues. First, many very high rate | practical due to at least two different issues. First, many very | |||
link technologies are channelized and pin individual flows to | high rate link technologies are channelized and at least partially pin | |||
specific channels to minimize reordering or other problems and | the flow-to-channel mapping to minimize packet reordering within | |||
second, TCP itself has scaling limits. Although the former problem | flows. Second, TCP itself has scaling limits. Although the former | |||
might be overcome through different design decisions, the latter | problem might be overcome through different design decisions, the | |||
problem is more deeply rooted. | latter problem is more deeply rooted. | |||
All congestion control algorithms that are philosophically aligned | All congestion control algorithms that are philosophically aligned | |||
with the standard [RFC5681] (e.g. claim some level of TCP | with the standard [RFC5681] (e.g. claim some level of TCP | |||
friendliness) have scaling limits, in the sense that as a long fast | compatibility, friendliness or fairness) have scaling limits, in the | |||
network (LFN) with a fixed RTT and MTU gets faster, these congestion | sense that as a long fast network (LFN) with a fixed RTT and MTU gets | |||
control algorithms get less accurate and as a consequence have | faster, these congestion control algorithms get less accurate and as | |||
difficulty filling the network [CCscaling]. These properties are a | a consequence have difficulty filling the network [CCscaling]. These | |||
consequence of the original Reno AIMD congestion control design and | properties are a consequence of the original Reno AIMD congestion | |||
the requirement in [RFC5681] that all transport protocols have | control design and the requirement in [RFC5681] that all transport | |||
uniform response to congestion. | protocols have similar responses to congestion. | |||
There are a number of reasons to want to specify performance in terms | There are a number of reasons to want to specify performance in terms | |||
of multiple concurrent flows; however, this approach is not | of multiple concurrent flows; however, this approach is not | |||
recommended for data rates below several megabits per second, which | recommended for data rates below several megabits per second, which | |||
can be attained with run lengths under 10000 packets. Since the | can be attained with run lengths under 10000 packets on many paths. | |||
required run length goes as the square of the data rate, at higher | Since the required run length goes as the square of the data rate, at | |||
rates the run lengths can be unreasonably large, and multiple | higher rates the run lengths can be unreasonably large, and multiple | |||
connections might be the only feasible approach. | flows might be the only feasible approach. | |||
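   To illustrate this scaling, the following is a sketch only: it uses
   the target_run_length = 3*(target_window_size)^2 heuristic applied
   elsewhere in this document, and the example rates, RTT and MTU are
   assumptions chosen for illustration.

      # Sketch: how the required run length scales with the data rate,
      # using target_run_length = 3 * target_window_size^2 and
      # target_window_size = ceil(rate * RTT / MTU).  The example
      # rates, RTT and MTU below are illustrative assumptions only.
      from math import ceil

      def required_run_length(rate_bps, rtt_s, mtu_bytes=1500):
          window = ceil(rate_bps * rtt_s / (8 * mtu_bytes))  # packets
          return 3 * window ** 2            # packets between losses/marks

      for mbps in (5, 50, 500):
          print(mbps, "Mb/s ->", required_run_length(mbps * 1e6, 0.1),
                "packets")

   Doubling the data rate with RTT and MTU held fixed roughly
   quadruples the required run length, which is why single stream
   targets become impractical at high rates.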
If multiple connections are deemed necessary to meet aggregate | If multiple flows are deemed necessary to meet aggregate performance | |||
performance targets then this MUST be stated both in the design of the | targets then this MUST be stated both in the design of the TDS and in | |||
TDS and in any claims about network performance. The tests MUST be | any claims about network performance. The IP diagnostic tests MUST | |||
performed concurrently with the specified number of connections. For | be performed concurrently with the specified number of connections. | |||
the tests that use bursty traffic, the bursts should be | For the tests that use bursty traffic, the bursts should be | |||
synchronized across flows. | synchronized across flows. | |||
8. Interpreting the Results | 7. Interpreting the Results | |||
8.1. Test outcomes | 7.1. Test outcomes | |||
To perform an exhaustive test of a complete network path, each test | To perform an exhaustive test of a complete network path, each test | |||
of the TDS is applied to each subpath of the complete path. If any | of the TDS is applied to each subpath of the complete path. If any | |||
subpath fails any test then an application running over the complete | subpath fails any test then a standard transport protocol running | |||
path can also be expected to fail to attain the target performance | over the complete path can also be expected to fail to attain the | |||
under some conditions. | Target Transport Performance under some conditions. | |||
In addition to passing or failing, a test can be deemed to be | In addition to passing or failing, a test can be deemed to be | |||
inconclusive for a number of reasons. Proper instrumentation and | inconclusive for a number of reasons. Proper instrumentation and | |||
treatment of inconclusive outcomes is critical to the accuracy and | treatment of inconclusive outcomes is critical to the accuracy and | |||
robustness of Model Based Metrics. Tests can be inconclusive if the | robustness of Model Based Metrics. Tests can be inconclusive if the | |||
precomputed traffic pattern or data rates were not accurately | precomputed traffic pattern or data rates were not accurately | |||
generated; the measurement results were not statistically | generated; the measurement results were not statistically | |||
significant; or for other causes such as failing to meet some required | significant; or for other causes such as failing to meet some required | |||
preconditions for the test. | preconditions for the test. See Section 5.4. | |||
For example consider a test that implements Constant Window Pseudo | For example consider a test that implements Constant Window Pseudo | |||
CBR (Section 7.2) by adding rate controls and detailed traffic | CBR (Section 6.2) by adding rate controls and detailed traffic | |||
instrumentation to TCP (e.g. [RFC4898]). TCP includes built in | instrumentation to TCP (e.g. [RFC4898]). TCP includes built in | |||
control systems which might interfere with the sending data rate. If | control systems which might interfere with the sending data rate. If | |||
such a test meets the required delivery statistics (e.g. run length) | such a test meets the required packet delivery statistics (e.g. run | |||
while failing to attain the specified data rate it must be treated as | length) while failing to attain the specified data rate it must be | |||
an inconclusive result, because we can not a priori determine if the | treated as an inconclusive result, because we can not a priori | |||
reduced data rate was caused by a TCP problem or a network problem, | determine if the reduced data rate was caused by a TCP problem or a | |||
or if the reduced data rate had a material effect on the observed | network problem, or if the reduced data rate had a material effect on | |||
delivery statistics. | the observed packet delivery statistics. | |||
Note that for capacity tests, if the observed delivery statistics | Note that for capacity tests, if the observed packet delivery | |||
fail to meet the targets, the test can be considered to have | statistics meet the statistical criteria for failing (accepting | |||
hypothesis H1 in Section 7.2), the test can be considered to have | |||
failed because it doesn't really matter that the test didn't attain | failed because it doesn't really matter that the test didn't attain | |||
the required data rate. | the required data rate. | |||
The really important new properties of MBM, such as vantage | The really important new properties of MBM, such as vantage | |||
independence, are a direct consequence of opening the control loops | independence, are a direct consequence of opening the control loops | |||
in the protocols, such that the test traffic does not depend on | in the protocols, such that the test traffic does not depend on | |||
network conditions or traffic received. Any mechanism that | network conditions or traffic received. Any mechanism that | |||
introduces feedback between the path's measurements and the traffic | introduces feedback between the path's measurements and the traffic | |||
generation is at risk of introducing nonlinearities that spoil these | generation is at risk of introducing nonlinearities that spoil these | |||
properties. Any exceptional event that indicates that such feedback | properties. Any exceptional event that indicates that such feedback | |||
has happened should cause the test to be considered inconclusive. | has happened should cause the test to be considered inconclusive. | |||
One way to view inconclusive tests is that they reflect situations | One way to view inconclusive tests is that they reflect situations | |||
where a test outcome is ambiguous between limitations of the network | where a test outcome is ambiguous between limitations of the network | |||
and some unknown limitation of the diagnostic test itself, which may | and some unknown limitation of the IP diagnostic test itself, which | |||
have been caused by some uncontrolled feedback from the network. | may have been caused by some uncontrolled feedback from the network. | |||
Note that procedures that attempt to sweep the target parameter space | Note that procedures that attempt to sweep the target parameter space | |||
to find the limits on some parameter such as target_data_rate are at | to find the limits on some parameter such as target_data_rate are at | |||
risk of breaking the location independent properties of Model Based | risk of breaking the location independent properties of Model Based | |||
Metrics, if the boundary between passing and inconclusive is at all | Metrics, if any part of the boundary between passing and inconclusive | |||
sensitive to RTT. | is sensitive to RTT (which is normally the case). | |||
One of the goals for evolving TDS designs will be to keep sharpening | One of the goals for evolving TDS designs will be to keep sharpening | |||
the distinction between inconclusive, passing and failing tests. The | the distinction between inconclusive, passing and failing tests. The | |||
criteria for passing, failing and inconclusive tests MUST be | criteria for passing, failing and inconclusive tests MUST be | |||
explicitly stated for every test in the TDS or FSTDS. | explicitly stated for every test in the TDS or FSTDS. | |||
One of the goals of evolving the testing process, procedures, tools | One of the goals of evolving the testing process, procedures, tools | |||
and measurement point selection should be to minimize the number of | and measurement point selection should be to minimize the number of | |||
inconclusive tests. | inconclusive tests. | |||
It may be useful to keep raw data delivery statistics for deeper | It may be useful to keep raw packet delivery statistics and ancillary | |||
study of the behavior of the network path and to measure the tools | metrics [RFC3148] for deeper study of the behavior of the network | |||
themselves. Raw delivery statistics can help to drive tool | path and to measure the tools themselves. Raw packet delivery | |||
evolution. Under some conditions it might be possible to reevaluate | statistics can help to drive tool evolution. Under some conditions | |||
the raw data for satisfying alternate performance targets. However | it might be possible to reevaluate the raw data for satisfying | |||
it is important to guard against sampling bias and other implicit | alternate Target Transport Performance. However it is important to | |||
feedback which can cause false results and exhibit measurement point | guard against sampling bias and other implicit feedback which can | |||
vantage sensitivity. | cause false results and exhibit measurement point vantage | |||
sensitivity. Simply applying different delivery criteria based on a | ||||
different Target Transport Performance is insufficient if the test | ||||
traffic patterns (bursts, etc.) do not match the alternate Target | |||
Transport Performance. | ||||
8.2. Statistical criteria for estimating run_length | 7.2. Statistical criteria for estimating run_length | |||
When evaluating the observed run_length, we need to determine | When evaluating the observed run_length, we need to determine | |||
appropriate packet stream sizes and acceptable error levels for | appropriate packet stream sizes and acceptable error levels for | |||
efficient measurement. In practice, can we compare the empirically | efficient measurement. In practice, can we compare the empirically | |||
estimated packet loss and ECN marking ratios with the targets as the | estimated packet loss and ECN marking ratios with the targets as the | |||
sample size grows? How large a sample is needed to say that the | sample size grows? How large a sample is needed to say that the | |||
measurements of packet transfer indicate a particular run length is | measurements of packet transfer indicate a particular run length is | |||
present? | present? | |||
The generalized measurement can be described as recursive testing: | The generalized measurement can be described as recursive testing: | |||
send packets (individually or in patterns) and observe the packet | send packets (individually or in patterns) and observe the packet | |||
delivery performance (loss ratio or other metric, any marking we | delivery performance (packet loss ratio or other metric, any marking | |||
define). | we define). | |||
As each packet is sent and measured, we have an ongoing estimate of | As each packet is sent and measured, we have an ongoing estimate of | |||
the performance in terms of the ratio of packet loss or ECN mark to | the performance in terms of the ratio of packet loss or ECN mark to | |||
total packets (i.e. an empirical probability). We continue to send | total packets (i.e. an empirical probability). We continue to send | |||
until conditions support a conclusion or a maximum sending limit has | until conditions support a conclusion or a maximum sending limit has | |||
been reached. | been reached. | |||
We have a target_mark_probability, 1 mark per target_run_length, | We have a target_mark_probability, 1 mark per target_run_length, | |||
where a "mark" is defined as a lost packet, a packet with ECN mark, | where a "mark" is defined as a lost packet, a packet with ECN mark, | |||
or other signal. This constitutes the null Hypothesis: | or other signal. This constitutes the null Hypothesis: | |||
H0: no more than one mark in target_run_length = | H0: no more than one mark in target_run_length = | |||
3*(target_pipe_size)^2 packets | 3*(target_window_size)^2 packets | |||
and we can stop sending packets if on-going measurements support | and we can stop sending packets if on-going measurements support | |||
accepting H0 with the specified Type I error = alpha (= 0.05 for | accepting H0 with the specified Type I error = alpha (= 0.05 for | |||
example). | example). | |||
We also have an alternative Hypothesis to evaluate: whether performance is | We also have an alternative Hypothesis to evaluate: whether performance is | |||
significantly lower than the target_mark_probability. Based on | significantly lower than the target_mark_probability. Based on | |||
analysis of typical values and practical limits on measurement | analysis of typical values and practical limits on measurement | |||
duration, we choose four times the H0 probability: | duration, we choose four times the H0 probability: | |||
skipping to change at page 27, line 49 | skipping to change at page 29, line 12 | |||
Analysis [Rtool], in the add-on package for Cross-Validation via | Analysis [Rtool], in the add-on package for Cross-Validation via | |||
Sequential Testing (CVST) [CVST] . | Sequential Testing (CVST) [CVST] . | |||
Using the equations above, we can calculate the minimum number of | Using the equations above, we can calculate the minimum number of | |||
packets (n) needed to accept H0 when x defects are observed. For | packets (n) needed to accept H0 when x defects are observed. For | |||
example, when x = 0: | example, when x = 0: | |||
Xa = 0 = -h1 + s*n | Xa = 0 = -h1 + s*n | |||
and n = h1 / s | and n = h1 / s | |||
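   The sequential decision procedure can be sketched as follows. This
   is a minimal illustration only: it assumes Wald's classical
   sequential probability ratio test boundaries with p0 =
   1/target_run_length and p1 = 4*p0, and uses illustrative alpha,
   beta and target_run_length values rather than the exact parameters
   a FSTDS would specify.

      # Sketch of the sequential run length test: count "marks"
      # (losses or ECN marks) as packets are sent, and stop as soon as
      # the count crosses the accept (pass) or reject (fail) line.
      # Wald SPRT boundaries are assumed; alpha, beta and p0 are
      # example values.
      from math import log

      def sprt_lines(p0, p1, alpha=0.05, beta=0.05):
          denom = log(p1 / p0) + log((1 - p0) / (1 - p1))
          s = log((1 - p0) / (1 - p1)) / denom   # slope of both lines
          h1 = log((1 - alpha) / beta) / denom   # accept H0: x <= -h1 + s*n
          h2 = log((1 - beta) / alpha) / denom   # reject H0: x >=  h2 + s*n
          return s, h1, h2

      def outcome(marks, packets, s, h1, h2):
          if marks <= -h1 + s * packets:
              return "pass"      # run_length is statistically >= target
          if marks >= h2 + s * packets:
              return "fail"      # run_length is statistically far below target
          return "continue"      # keep sending test traffic

      p0 = 1.0 / 5000            # one mark per target_run_length (example)
      s, h1, h2 = sprt_lines(p0, 4 * p0)
      print("packets needed to pass with zero marks:", int(h1 / s) + 1)

   With alpha equal to beta the accept and reject offsets coincide; a
   FSTDS would pin down the exact boundary constants and stopping
   rules.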
8.3. Reordering Tolerance | 7.3. Reordering Tolerance | |||
All tests must be instrumented for packet level reordering [RFC4737]. | All tests must be instrumented for packet level reordering [RFC4737]. | |||
However, there is no consensus for how much reordering should be | However, there is no consensus for how much reordering should be | |||
acceptable. Over the last two decades the general trend has been to | acceptable. Over the last two decades the general trend has been to | |||
make protocols and applications more tolerant to reordering (see for | make protocols and applications more tolerant to reordering (see for | |||
example [RFC4015]), in response to the gradual increase in reordering | example [RFC4015]), in response to the gradual increase in reordering | |||
in the network. This increase has been due to the deployment of | in the network. This increase has been due to the deployment of | |||
technologies such as multi threaded routing lookups and Equal Cost | technologies such as multi threaded routing lookups and Equal Cost | |||
MultiPath (ECMP) routing. These techniques increase parallelism in | MultiPath (ECMP) routing. These techniques increase parallelism in | |||
the network and are critical to enabling overall Internet growth to | the network and are critical to enabling overall Internet growth to | |||
skipping to change at page 28, line 31 | skipping to change at page 29, line 43 | |||
By implication, reordering which is less than these bounds should not | By implication, reordering which is less than these bounds should not | |||
be treated as a network impairment. However [RFC4737] still applies: | be treated as a network impairment. However [RFC4737] still applies: | |||
reordering should be instrumented and the maximum reordering that can | reordering should be instrumented and the maximum reordering that can | |||
be properly characterized by the test (e.g. bound on history buffers) | be properly characterized by the test (e.g. bound on history buffers) | |||
should be recorded with the measurement results. | should be recorded with the measurement results. | |||
Reordering tolerance and diagnostic limitations, such as history | Reordering tolerance and diagnostic limitations, such as history | |||
buffer size, MUST be specified in a FSTDS. | buffer size, MUST be specified in a FSTDS. | |||
9. Test Preconditions | 8. Diagnostic Tests | |||
Many tests have preconditions which are required to assure their | ||||
validity. For example the presence or absence of cross traffic | |||
on specific subpaths, or appropriate preloading to put reactive | |||
network elements into the proper states [RFC7312]. If preconditions | |||
are not properly satisfied for some reason, the tests should be | ||||
considered to be inconclusive. In general it is useful to preserve | ||||
diagnostic information about why the preconditions were not met, and | ||||
any test data that was collected even if it is not useful for the | ||||
intended test. Such diagnostic information and partial test data may | ||||
be useful for improving the test in the future. | ||||
It is important to preserve the record that a test was scheduled, | ||||
because otherwise precondition enforcement mechanisms can introduce | ||||
sampling bias. For example, canceling tests due to cross traffic on | ||||
subscriber access links might introduce sampling bias of tests of the | ||||
rest of the network by reducing the number of tests during peak | ||||
network load. | ||||
Test preconditions and failure actions MUST be specified in a FSTDS. | ||||
10. Diagnostic Tests | ||||
The diagnostic tests below are organized by traffic pattern: basic | The IP diagnostic tests below are organized by traffic pattern: basic | |||
data rate and delivery statistics, standing queues, slowstart bursts, | data rate and packet delivery statistics, standing queues, slowstart | |||
and sender rate bursts. We also introduce some combined tests which | bursts, and sender rate bursts. We also introduce some combined | |||
are more efficient when networks are expected to pass, but conflate | tests which are more efficient when networks are expected to pass, | |||
diagnostic signatures when they fail. | but conflate diagnostic signatures when they fail. | |||
There are a number of test details which are not fully defined here. | There are a number of test details which are not fully defined here. | |||
They must be fully specified in a FSTDS. From a standardization | They must be fully specified in a FSTDS. From a standardization | |||
perspective, this lack of specificity will weaken this version of | perspective, this lack of specificity will weaken this version of | |||
Model Based Metrics, however it is anticipated that it will be more | Model Based Metrics, however it is anticipated that it will be more | |||
than offset by the extent to which MBM suppresses the problems caused | than offset by the extent to which MBM suppresses the problems caused | |||
by using transport protocols for measurement; e.g. non-specific MBM | by using transport protocols for measurement; e.g. non-specific MBM | |||
metrics are likely to have better repeatability than many existing | metrics are likely to have better repeatability than many existing | |||
BTC like metrics. Once we have good field experience, the missing | BTC like metrics. Once we have good field experience, the missing | |||
details can be fully specified. | details can be fully specified. | |||
10.1. Basic Data Rate and Delivery Statistics Tests | 8.1. Basic Data Rate and Packet Delivery Tests | |||
We propose several versions of the basic data rate and delivery | We propose several versions of the basic data rate and packet | |||
statistics test. All measure the number of packets delivered between | delivery statistics test. All measure the number of packets | |||
losses or ECN marks, using a data stream that is rate controlled at | delivered between losses or ECN marks, using a data stream that is | |||
or below the target_data_rate. | rate controlled at or below the target_data_rate. | |||
The tests below differ in how the data rate is controlled. The data | The tests below differ in how the data rate is controlled. The data | |||
can be paced on a timer, or window controlled at full target data | can be paced on a timer, or window controlled at full target data | |||
rate. The first two tests implicitly confirm that sub_path has | rate. The first two tests implicitly confirm that sub_path has | |||
sufficient raw capacity to carry the target_data_rate. They are | sufficient raw capacity to carry the target_data_rate. They are | |||
recommended for relatively infrequent testing, such as an installation | recommended for relatively infrequent testing, such as an installation | |||
or periodic auditing process. The third, background delivery | or periodic auditing process. The third, background packet delivery | |||
statistics, is a low rate test designed for ongoing monitoring for | statistics, is a low rate test designed for ongoing monitoring for | |||
changes in subpath quality. | changes in subpath quality. | |||
All rely on the receiver accumulating packet delivery statistics as | All rely on the receiver accumulating packet delivery statistics as | |||
described in Section 8.2 to score the outcome: | described in Section 7.2 to score the outcome: | |||
Pass: it is statistically significant that the observed interval | Pass: it is statistically significant that the observed interval | |||
between losses or ECN marks is larger than the target_run_length. | between losses or ECN marks is larger than the target_run_length. | |||
Fail: it is statistically significant that the observed interval | Fail: it is statistically significant that the observed interval | |||
between losses or ECN marks is smaller than the target_run_length. | between losses or ECN marks is smaller than the target_run_length. | |||
A test is considered to be inconclusive if it failed to meet the data | A test is considered to be inconclusive if it failed to meet the data | |||
rate as specified below, failed to meet the qualifications defined in | rate as specified below, failed to meet the qualifications defined in | |||
Section 9, or if neither run length statistical hypothesis was | Section 5.4, or if neither run length statistical hypothesis was | |||
confirmed in the allotted test duration. | confirmed in the allotted test duration. | |||
10.1.1. Delivery Statistics at Paced Full Data Rate | 8.1.1. Delivery Statistics at Paced Full Data Rate | |||
Confirm that the observed run length is at least the | Confirm that the observed run length is at least the | |||
target_run_length while relying on a timer to send data at the | target_run_length while relying on a timer to send data at the | |||
target_rate using the procedure described in Section 7.1 with a | target_rate using the procedure described in Section 6.1 with a | |||
burst size of 1 (single packets) or 2 (packet pairs). | burst size of 1 (single packets) or 2 (packet pairs). | |||
The test is considered to be inconclusive if the packet transmission | The test is considered to be inconclusive if the packet transmission | |||
can not be accurately controlled for any reason. | can not be accurately controlled for any reason. | |||
RFC 6673 [RFC6673] is appropriate for measuring delivery statistics | RFC 6673 [RFC6673] is appropriate for measuring packet delivery | |||
at full data rate. | statistics at full data rate. | |||
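   As an illustration (not a normative procedure), a timer paced
   sender might look like the sketch below. The addresses, payload
   size and 2-packet burst are assumptions, and a real tool would also
   have to verify that its achieved timing is accurate enough to avoid
   an inconclusive result.

      # Sketch of timer paced test traffic: send bursts of 1 or 2
      # packets with a fixed headway so that the average rate equals
      # target_data_rate.  Address, port and payload size are
      # illustrative only.
      import socket, time

      def paced_send(dst, rate_bps, mtu=1500, burst=2, duration=10.0):
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          payload = b"\x00" * (mtu - 28)       # room for IPv4/UDP headers
          headway = burst * mtu * 8 / rate_bps # seconds between bursts
          t_next = time.monotonic()
          t_end = t_next + duration
          late = 0
          while time.monotonic() < t_end:
              for _ in range(burst):
                  sock.sendto(payload, dst)
              t_next += headway
              delay = t_next - time.monotonic()
              if delay > 0:
                  time.sleep(delay)
              else:
                  late += 1    # timing slipped; result may be inconclusive
          return late

      # late = paced_send(("192.0.2.1", 9000), 5_000_000)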
10.1.2. Delivery Statistics at Full Data Windowed Rate | 8.1.2. Delivery Statistics at Full Data Windowed Rate | |||
Confirm that the observed run length is at least the | Confirm that the observed run length is at least the | |||
target_run_length while sending at an average rate approximately | target_run_length while sending at an average rate approximately | |||
equal to the target_data_rate, by controlling (or clamping) the | equal to the target_data_rate, by controlling (or clamping) the | |||
window size of a conventional transport protocol to a fixed value | window size of a conventional transport protocol to a fixed value | |||
computed from the properties of the test path, typically | computed from the properties of the test path, typically | |||
test_window=target_data_rate*test_RTT/target_MTU. Note that if there | test_window=target_data_rate*test_path_RTT/target_MTU. Note that if | |||
is any interaction between the forward and return path, test_window | there is any interaction between the forward and return path, | |||
may need to be adjusted slightly to compensate for the resulting | test_window may need to be adjusted slightly to compensate for the | |||
inflated RTT. | resulting inflated RTT. | |||
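   A worked example follows; all values are assumed for illustration,
   and the factor of 8 converts a data rate in bits per second to the
   byte-based MTU.

      # Sketch: computing the window clamp for the windowed rate test.
      # target_data_rate, test_path_RTT and target_MTU are example values.
      from math import ceil

      target_data_rate = 5_000_000      # bits per second
      test_path_RTT    = 0.05           # seconds
      target_MTU       = 1500           # bytes

      test_window = ceil(target_data_rate * test_path_RTT / (8 * target_MTU))
      print("clamp the transport window to", test_window, "packets")  # -> 21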
Since losses and ECN marks generally cause transport protocols to at | Since losses and ECN marks generally cause transport protocols to at | |||
least temporarily reduce their data rates, this test is expected to | least temporarily reduce their data rates, this test is expected to | |||
be less precise about controlling its data rate. It should not be | be less precise about controlling its data rate. It should not be | |||
considered inconclusive as long as at least some of the round trips | considered inconclusive as long as at least some of the round trips | |||
reached the full target_data_rate without incurring losses or ECN | reached the full target_data_rate without incurring losses or ECN | |||
marks. To pass this test the network MUST deliver target_pipe_size | marks. To pass this test the network MUST deliver target_window_size | |||
packets in target_RTT time without any losses or ECN marks at least | packets in target_RTT time without any losses or ECN marks at least | |||
once per two target_pipe_size round trips, in addition to meeting the | once per two target_window_size round trips, in addition to meeting | |||
run length statistical test. | the run length statistical test. | |||
10.1.3. Background Delivery Statistics Tests | 8.1.3. Background Packet Delivery Statistics Tests | |||
The background run length is a low rate version of the target | The background run length is a low rate version of the target | |||
rate test above, designed for ongoing lightweight monitoring for | rate test above, designed for ongoing lightweight monitoring for | |||
changes in the observed subpath run length without disrupting users. | changes in the observed subpath run length without disrupting users. | |||
It should be used in conjunction with one of the above full rate | It should be used in conjunction with one of the above full rate | |||
tests because it does not confirm that the subpath can support raw | tests because it does not confirm that the subpath can support raw | |||
data rate. | data rate. | |||
RFC 6673 [RFC6673] is appropriate for measuring background delivery | RFC 6673 [RFC6673] is appropriate for measuring background packet | |||
statistics. | delivery statistics. | |||
10.2. Standing Queue Tests | 8.2. Standing Queue Tests | |||
These engineering tests confirm that the bottleneck is well behaved | These engineering tests confirm that the bottleneck is well behaved | |||
across the onset of packet loss, which typically follows after the | across the onset of packet loss, which typically follows after the | |||
onset of queueing. Well behaved generally means lossless for | onset of queueing. Well behaved generally means lossless for | |||
transient queues, but once the queue has been sustained for a | transient queues, but once the queue has been sustained for a | |||
sufficient period of time (or reaches a sufficient queue depth) there | sufficient period of time (or reaches a sufficient queue depth) there | |||
should be a small number of losses to signal to the transport | should be a small number of losses to signal to the transport | |||
protocol that it should reduce its window. Losses that are too early | protocol that it should reduce its window. Losses that are too early | |||
can prevent the transport from averaging at the target_data_rate. | can prevent the transport from averaging at the target_data_rate. | |||
Losses that are too late indicate that the queue might be subject to | Losses that are too late indicate that the queue might be subject to | |||
bufferbloat [wikiBloat] and inflict excess queuing delays on all | bufferbloat [wikiBloat] and inflict excess queuing delays on all | |||
flows sharing the bottleneck queue. Excess losses (more than half of | flows sharing the bottleneck queue. Excess losses (more than half of | |||
the window) at the onset of congestion make loss recovery problematic | the window) at the onset of congestion make loss recovery problematic | |||
for the transport protocol. Non-linear, erratic or excessive RTT | for the transport protocol. Non-linear, erratic or excessive RTT | |||
increases suggest poor interactions between the channel acquisition | increases suggest poor interactions between the channel acquisition | |||
algorithms and the transport self clock. All of the tests in this | algorithms and the transport self clock. All of the tests in this | |||
section use the same basic scanning algorithm, described here, but | section use the same basic scanning algorithm, described here, but | |||
score the link on the basis of how well it avoids each of these | score the link or subpath on the basis of how well it avoids each of | |||
problems. | these problems. | |||
For some technologies the data might not be subject to increasing | For some technologies the data might not be subject to increasing | |||
delays, in which case the data rate will vary with the window size | delays, in which case the data rate will vary with the window size | |||
all the way up to the onset of load induced losses or ECN marks. For | all the way up to the onset of load induced losses or ECN marks. For | |||
these technologies, the discussion of queueing does not apply, but | these technologies, the discussion of queueing does not apply, but | |||
it is still required that the onset of losses or ECN marks be at an | it is still required that the onset of losses or ECN marks be at an | |||
appropriate point and progressive. | appropriate point and progressive. | |||
Use the procedure in Section 7.3 to sweep the window across the onset | Use the procedure in Section 6.3 to sweep the window across the onset | |||
of queueing and the onset of loss. The tests below all assume that | of queueing and the onset of loss. The tests below all assume that | |||
the scan emulates standard additive increase and delayed ACK by | the scan emulates standard additive increase and delayed ACK by | |||
incrementing the window by one packet for every 2*target_pipe_size | incrementing the window by one packet for every 2*target_window_size | |||
packets delivered. A scan can typically be divided into three | packets delivered. A scan can typically be divided into three | |||
regions: below the onset of queueing, a standing queue, and at or | regions: below the onset of queueing, a standing queue, and at or | |||
beyond the onset of loss. | beyond the onset of loss. | |||
Below the onset of queueing the RTT is typically fairly constant, and | Below the onset of queueing the RTT is typically fairly constant, and | |||
the data rate varies in proportion to the window size. Once the data | the data rate varies in proportion to the window size. Once the data | |||
rate reaches the link rate, the data rate becomes fairly constant, | rate reaches the subpath IP rate, the data rate becomes fairly | |||
and the RTT increases in proportion to the increase in window size. | constant, and the RTT increases in proportion to the increase in | |||
The precise transition across the start of queueing can be identified | window size. The precise transition across the start of queueing can | |||
by the maximum network power, defined to be the ratio of data rate | be identified by the maximum network power, defined to be the ratio | |||
over the RTT. The network power can be computed at each window size, | of data rate over the RTT. The network power can be computed at each | |||
and the window with the maximum is taken as the start of the queueing | window size, and the window with the maximum is taken as the start | |||
region. | of the queueing region. | |||
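   The search for the onset of queueing can be sketched as follows;
   the scan samples are invented values used only for illustration.

      # Sketch: locate the onset of queueing in a window scan by
      # finding the window with the maximum "network power"
      # (data rate / RTT).  The (window, data_rate_bps, rtt_s) samples
      # below are invented for illustration.
      scan = [
          (10,  4.0e6, 0.030),
          (20,  8.0e6, 0.030),
          (30,  9.9e6, 0.033),   # near the subpath IP rate
          (40, 10.0e6, 0.044),   # standing queue is building
          (50, 10.0e6, 0.055),
      ]

      onset_window, _ = max(
          ((window, rate / rtt) for window, rate, rtt in scan),
          key=lambda item: item[1],
      )
      print("onset of queueing at window =", onset_window, "packets")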
For technologies that do not have conventional queues, start the scan | For technologies that do not have conventional queues, start the scan | |||
at a window equal to the test_window=target_data_rate*test_RTT/ | at a window equal to the test_window=target_data_rate*test_path_RTT/ | |||
target_MTU, i.e. starting at the target rate, instead of the power | target_MTU, i.e. starting at the target rate, instead of the power | |||
point. | point. | |||
If there is random background loss (e.g. bit errors, etc), precise | If there is random background loss (e.g. bit errors, etc), precise | |||
determination of the onset of queue induced packet loss may require | determination of the onset of queue induced packet loss may require | |||
multiple scans. Above the onset of queuing loss, all transport | multiple scans. Above the onset of queuing loss, all transport | |||
protocols are expected to experience periodic losses determined by | protocols are expected to experience periodic losses determined by | |||
the interaction between the congestion control and AQM algorithms. | the interaction between the congestion control and AQM algorithms. | |||
For standard congestion control algorithms the periodic losses are | For standard congestion control algorithms the periodic losses are | |||
likely to be relatively widely spaced and the details are typically | likely to be relatively widely spaced and the details are typically | |||
dominated by the behavior of the transport protocol itself. For the | dominated by the behavior of the transport protocol itself. For the | |||
stiffened transport protocols case (with non-standard, aggressive | stiffened transport protocols case (with non-standard, aggressive | |||
congestion control algorithms) the details of periodic losses will be | congestion control algorithms) the details of periodic losses will be | |||
dominated by how the window increase function responds to loss. | dominated by how the window increase function responds to loss. | |||
10.2.1. Congestion Avoidance | 8.2.1. Congestion Avoidance | |||
A link passes the congestion avoidance standing queue test if more | A subpath passes the congestion avoidance standing queue test if more | |||
than target_run_length packets are delivered between the onset of | than target_run_length packets are delivered between the onset of | |||
queueing (as determined by the window with the maximum network power) | queueing (as determined by the window with the maximum network power) | |||
and the first loss or ECN mark. If this test is implemented using a | and the first loss or ECN mark. If this test is implemented using a | |||
standard congestion control algorithm with a clamp, it can be | standard congestion control algorithm with a clamp, it can be | |||
performed in situ in the production Internet as a capacity test. For | performed in situ in the production Internet as a capacity test. For | |||
an example of such a test see [Pathdiag]. | an example of such a test see [Pathdiag]. | |||
For technologies that do not have conventional queues, use the | For technologies that do not have conventional queues, use the | |||
test_window in place of the onset of queueing, i.e. a link passes the | test_window in place of the onset of queueing, i.e. a subpath passes | |||
congestion avoidance standing queue test if more than | the congestion avoidance standing queue test if more than | |||
target_run_length packets are delivered between start of the scan at | target_run_length packets are delivered between start of the scan at | |||
test_window and the first loss or ECN mark. | test_window and the first loss or ECN mark. | |||
10.2.2. Bufferbloat | 8.2.2. Bufferbloat | |||
This test confirms that there is some mechanism to limit buffer | This test confirms that there is some mechanism to limit buffer | |||
occupancy (e.g. that prevents bufferbloat). Note that this is not | occupancy (e.g. that prevents bufferbloat). Note that this is not | |||
strictly a requirement for single stream bulk performance, however if | strictly a requirement for single stream bulk transport capacity, | |||
there is no mechanism to limit buffer queue occupancy then a single | however if there is no mechanism to limit buffer queue occupancy then | |||
stream with sufficient data to deliver is likely to cause the | a single stream with sufficient data to deliver is likely to cause | |||
problems described in [RFC2309], [I-D.ietf-aqm-recommendation] and | the problems described in [RFC2309], [I-D.ietf-aqm-recommendation] | |||
[wikiBloat]. This may cause only minor symptoms for the dominant | and [wikiBloat]. This may cause only minor symptoms for the dominant | |||
flow, but has the potential to make the link unusable for other flows | flow, but has the potential to make the subpath unusable for other | |||
and applications. | flows and applications. | |||
Pass if the onset of loss occurs before a standing queue has | Pass if the onset of loss occurs before a standing queue has | |||
introduced more delay than twice target_RTT, or other well | introduced more delay than twice target_RTT, or other well | |||
defined and specified limit. Note that there is not yet a model for | defined and specified limit. Note that there is not yet a model for | |||
how much standing queue is acceptable. The factor of two chosen here | how much standing queue is acceptable. The factor of two chosen here | |||
reflects a rule of thumb. In conjunction with the previous test, | reflects a rule of thumb. In conjunction with the previous test, | |||
this test implies that the first loss should occur at a queueing | this test implies that the first loss should occur at a queueing | |||
delay which is between one and two times the target_RTT. | delay which is between one and two times the target_RTT. | |||
Specified RTT limits that are larger than twice the target_RTT must | Specified RTT limits that are larger than twice the target_RTT must | |||
be fully justified in the FSTDS. | be fully justified in the FSTDS. | |||
10.2.3. Non excessive loss | 8.2.3. Non excessive loss | |||
This test confirms that the onset of loss is not excessive. Pass if | This test confirms that the onset of loss is not excessive. Pass if | |||
losses are equal to or less than the increase in the cross traffic plus | losses are equal to or less than the increase in the cross traffic plus | |||
the test traffic window increase on the previous RTT. This could be | the test traffic window increase on the previous RTT. This could be | |||
restated as non-decreasing link throughput at the onset of loss, | restated as non-decreasing subpath throughput at the onset of loss, | |||
which is easy to meet as long as discarding packets is not more | which is easy to meet as long as discarding packets is not more | |||
expensive than delivering them. (Note when there is a transient drop | expensive than delivering them. (Note when there is a transient drop | |||
in link throughput, outside of a standing queue test, a link that | in subpath throughput, outside of a standing queue test, a subpath | |||
passes other queue tests in this document will have sufficient queue | that passes other queue tests in this document will have sufficient | |||
space to hold one RTT worth of data). | queue space to hold one RTT worth of data). | |||
Note that conventional Internet traffic policers will not pass this | Note that conventional Internet traffic policers will not pass this | |||
test, which is correct. TCP often fails to come into equilibrium at | test, which is correct. TCP often fails to come into equilibrium at | |||
more than a small fraction of the available capacity, if the capacity | more than a small fraction of the available capacity, if the capacity | |||
is enforced by a policer. [Citation Pending]. | is enforced by a policer. [Citation Pending]. | |||
10.2.4. Duplex Self Interference | 8.2.4. Duplex Self Interference | |||
This engineering test confirms a bound on the interactions between | This engineering test confirms a bound on the interactions between | |||
the forward data path and the ACK return path. | the forward data path and the ACK return path. | |||
Some historical half duplex technologies had the property that each | Some historical half duplex technologies had the property that each | |||
direction held the channel until it completely drains its queue. | direction held the channel until it completely drained its queue. | |||
When a self clocked transport protocol, such as TCP, has data and | When a self clocked transport protocol, such as TCP, has data and | |||
acks passing in opposite directions through such a link, the behavior | ACKs passing in opposite directions through such a link, the behavior | |||
often reverts to stop-and-wait. Each additional packet added to the | often reverts to stop-and-wait. Each additional packet added to the | |||
window raises the observed RTT by two forward path packet times, once | window raises the observed RTT by two forward path packet times, once | |||
as it passes through the data path, and once for the additional delay | as it passes through the data path, and once for the additional delay | |||
incurred by the ACK waiting on the return path. | incurred by the ACK waiting on the return path. | |||
The duplex self interference test fails if the RTT rises by more than | The duplex self interference test fails if the RTT rises by more than | |||
some fixed bound above the expected queueing time computed from | some fixed bound above the expected queueing time computed from | |||
the excess window divided by the link data rate. This bound must be | the excess window divided by the subpath IP Capacity. This bound | |||
smaller than target_RTT/2 to avoid reverting to stop and wait | must be smaller than target_RTT/2 to avoid reverting to stop and wait | |||
behavior. (e.g. Packets have to be released at least twice per RTT, | behavior. (e.g. Data packets and ACKs have to be released at least | |||
to avoid stop and wait behavior.) | twice per RTT.) | |||
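   The pass criterion can be sketched as follows; the variable names,
   the inclusion of the baseline RTT, and the example values are
   assumptions, subject to the constraint above that the bound be
   smaller than target_RTT/2.

      # Sketch: duplex self interference check.  The observed RTT may
      # not exceed the baseline RTT plus the expected queueing delay
      # (excess window drained at the subpath IP capacity) by more
      # than a fixed bound, which must be smaller than target_RTT / 2.
      def duplex_ok(observed_rtt_s, base_rtt_s, excess_window_bytes,
                    subpath_ip_capacity_bps, bound_s):
          expected_queue_s = excess_window_bytes * 8 / subpath_ip_capacity_bps
          return observed_rtt_s <= base_rtt_s + expected_queue_s + bound_s

      # Example: 15 kB of excess window on a 10 Mb/s subpath adds 12 ms;
      # with a 30 ms base RTT and a 20 ms bound, a 40 ms observed RTT passes.
      print(duplex_ok(0.040, 0.030, 15000, 10_000_000, 0.020))   # True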
10.3. Slowstart tests | 8.3. Slowstart tests | |||
These tests mimic slowstart: data is sent at twice the effective | These tests mimic slowstart: data is sent at twice the effective | |||
bottleneck rate to exercise the queue at the dominant bottleneck. | bottleneck rate to exercise the queue at the dominant bottleneck. | |||
In general they are deemed inconclusive if the elapsed time to send | In general they are deemed inconclusive if the elapsed time to send | |||
the data burst is not less than half of the time to receive the ACKs. | the data burst is not less than half of the time to receive the ACKs. | |||
(i.e. sending data too fast is ok, but sending it slower than twice | (i.e. sending data too fast is ok, but sending it slower than twice | |||
the actual bottleneck rate as indicated by the ACKs is deemed | the actual bottleneck rate as indicated by the ACKs is deemed | |||
inconclusive). Space the bursts such that the average data rate is | inconclusive). Space the bursts such that the average data rate is | |||
equal to the target_data_rate. | equal to the target_data_rate. | |||
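   The timing bookkeeping can be sketched as follows; the helper names
   and example parameters are assumptions.

      # Sketch: headway and validity check for slowstart bursts.
      # Bursts are sent at roughly twice the effective bottleneck
      # rate; the average rate over many bursts must equal
      # target_data_rate, and a burst is inconclusive if sending it
      # took at least half as long as the ACK stream took to return.
      def slowstart_burst_headway(burst_packets, target_rate_bps, mtu=1500):
          burst_bits = burst_packets * mtu * 8
          return burst_bits / target_rate_bps  # burst start to burst start

      def burst_conclusive(send_time_s, ack_elapsed_s):
          # Sending slower than about twice the bottleneck rate, as
          # indicated by the ACKs, makes the result inconclusive.
          return send_time_s < ack_elapsed_s / 2

      print(slowstart_burst_headway(42, 5_000_000))  # ~0.1 s between bursts
      print(burst_conclusive(0.02, 0.05))            # True: fast enough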
10.3.1. Full Window slowstart test | 8.3.1. Full Window slowstart test | |||
This is a capacity test to confirm that slowstart is not likely to | This is a capacity test to confirm that slowstart is not likely to | |||
exit prematurely. Send slowstart bursts that are target_pipe_size | exit prematurely. Send slowstart bursts that are target_window_size | |||
total packets. | total packets. | |||
Accumulate packet delivery statistics as described in Section 8.2 to | Accumulate packet delivery statistics as described in Section 7.2 to | |||
score the outcome. Pass if it is statistically significant that the | score the outcome. Pass if it is statistically significant that the | |||
observed number of good packets delivered between losses or ECN marks | observed number of good packets delivered between losses or ECN marks | |||
is larger than the target_run_length. Fail if it is statistically | is larger than the target_run_length. Fail if it is statistically | |||
significant that the observed interval between losses or ECN marks is | significant that the observed interval between losses or ECN marks is | |||
smaller than the target_run_length. | smaller than the target_run_length. | |||
Note that these are the same parameters as the Sender Full Window | Note that these are the same parameters as the Sender Full Window | |||
burst test, except the burst rate is at slowstart rate, rather than | burst test, except the burst rate is at slowstart rate, rather than | |||
sender interface rate. | sender interface rate. | |||
10.3.2. Slowstart AQM test | 8.3.2. Slowstart AQM test | |||
Do a continuous slowstart (send data continuously at slowstart_rate), | Do a continuous slowstart (send data continuously at slowstart_rate), | |||
until the first loss, stop, allow the network to drain and repeat, | until the first loss, stop, allow the network to drain and repeat, | |||
gathering statistics on the last packet delivered before the loss, | gathering statistics on the last packet delivered before the loss, | |||
the loss pattern, maximum observed RTT and window size. Justify the | the loss pattern, maximum observed RTT and window size. Justify the | |||
results. There is not currently sufficient theory justifying | results. There is not currently sufficient theory justifying | |||
requiring any particular result, however design decisions that affect | requiring any particular result, however design decisions that affect | |||
the outcome of this test also affect how the network balances | the outcome of this test also affect how the network balances | |||
between long and short flows (the "mice and elephants" problem). The | between long and short flows (the "mice and elephants" problem). The | |||
queue at the time of the first loss should be at least one half of | queue at the time of the first loss should be at least one half of | |||
the target_RTT. | the target_RTT. | |||
This is an engineering test: It would be best performed on a | This is an engineering test: It would be best performed on a | |||
quiescent network or testbed, since cross traffic has the potential | quiescent network or testbed, since cross traffic has the potential | |||
to change the results. | to change the results. | |||
10.4. Sender Rate Burst tests | 8.4. Sender Rate Burst tests | |||
These tests determine how well the network can deliver bursts sent at | These tests determine how well the network can deliver bursts sent at | |||
the sender's interface rate. Note that this test most heavily exercises | the sender's interface rate. Note that this test most heavily exercises | |||
the front path, and is likely to include infrastructure that may be out of | the front path, and is likely to include infrastructure that may be out of | |||
scope for an access ISP, even though the bursts might be caused by | scope for an access ISP, even though the bursts might be caused by | |||
ACK compression, thinning or channel arbitration in the access ISP. | ACK compression, thinning or channel arbitration in the access ISP. | |||
See Appendix B. | See Appendix B. | |||
Also, there are several details that are not precisely defined. | Also, there are several details that are not precisely defined. | |||
For starters there is not a standard server interface rate. 1 Gb/s | For starters there is not a standard server interface rate. 1 Gb/s | |||
and 10 Gb/s are very common today, but higher rates will become cost | and 10 Gb/s are very common today, but higher rates will become cost | |||
effective and can be expected to be dominant some time in the future. | effective and can be expected to be dominant some time in the future. | |||
Current standards permit TCP to send full window bursts following | Current standards permit TCP to send full window bursts following | |||
an application pause. (Congestion Window Validation [RFC2861] is | an application pause. (Congestion Window Validation [RFC2861] is | |||
not required, but even if it was, it does not take effect until an | not required, but even if it was, it does not take effect until an | |||
application pause is longer than an RTO.) Since full window bursts | application pause is longer than an RTO.) Since full window bursts | |||
are consistent with standard behavior, it is desirable that the | are consistent with standard behavior, it is desirable that the | |||
network be able to deliver such bursts, otherwise application pauses | network be able to deliver such bursts, otherwise application pauses | |||
will cause unwarranted losses. Note that the AIMD sawtooth requires | will cause unwarranted losses. Note that the AIMD sawtooth requires | |||
a peak window that is twice target_pipe_size, so the worst case burst | a peak window that is twice target_window_size, so the worst case | |||
may be 2*target_pipe_size. | burst may be 2*target_window_size. | |||
It is also understood in the application and serving community that | It is also understood in the application and serving community that | |||
interface rate bursts have a cost to the network that has to be | interface rate bursts have a cost to the network that has to be | |||
balanced against other costs in the servers themselves. For example | balanced against other costs in the servers themselves. For example | |||
TCP Segmentation Offload (TSO) reduces server CPU in exchange for | TCP Segmentation Offload (TSO) reduces server CPU in exchange for | |||
larger network bursts, which increase the stress on network buffer | larger network bursts, which increase the stress on network buffer | |||
memory. | memory. | |||
There is not yet theory to unify these costs or to provide a | There is not yet theory to unify these costs or to provide a | |||
framework for trying to optimize global efficiency. We do not yet | framework for trying to optimize global efficiency. We do not yet | |||
have a model for how much the network should tolerate server rate | have a model for how much the network should tolerate server rate | |||
bursts. Some bursts must be tolerated by the network, but it is | bursts. Some bursts must be tolerated by the network, but it is | |||
probably unreasonable to expect the network to be able to efficiently | probably unreasonable to expect the network to be able to efficiently | |||
deliver all data as a series of bursts. | deliver all data as a series of bursts. | |||
For this reason, this is the only test for which we encourage | For this reason, this is the only test for which we encourage | |||
derating. A TDS could include a table of pairs of derating | derating. A TDS could include a table of pairs of derating | |||
parameters: what burst size to use as a fraction of the | parameters: what burst size to use as a fraction of the | |||
target_pipe_size, and how much each burst size is permitted to reduce | target_window_size, and how much each burst size is permitted to | |||
the run length, relative to the target_run_length. | reduce the run length, relative to the target_run_length. | |||
10.5. Combined and Implicit Tests | 8.5. Combined and Implicit Tests | |||
Combined tests efficiently confirm multiple network properties in a | Combined tests efficiently confirm multiple network properties in a | |||
single test, possibly as a side effect of normal content delivery. | single test, possibly as a side effect of normal content delivery. | |||
They require less measurement traffic than other testing strategies | They require less measurement traffic than other testing strategies | |||
at the cost of conflating diagnostic signatures when they fail. | at the cost of conflating diagnostic signatures when they fail. | |||
These are by far the most efficient for monitoring networks that are | These are by far the most efficient for monitoring networks that are | |||
nominally expected to pass all tests. | nominally expected to pass all tests. | |||
10.5.1. Sustained Bursts Test | 8.5.1. Sustained Bursts Test | |||
The sustained burst test implements a combined worst case version of | The sustained burst test implements a combined worst case version of | |||
all of the capacity tests above. It is simply: | all of the capacity tests above. It is simply: | |||
Send target_pipe_size bursts of packets at server interface rate with | Send target_window_size bursts of packets at server interface rate | |||
target_RTT burst headway (burst start to burst start). Verify that | with target_RTT burst headway (burst start to burst start). Verify | |||
the observed delivery statistics meet the target_run_length. | that the observed packet delivery statistics meet the | |||
target_run_length. | ||||
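   As a sketch (socket details, parameter values and the burst count
   are assumptions; the receiver must separately accumulate the packet
   delivery statistics described above), the traffic pattern is simply
   full rate bursts repeated once per target_RTT:

      # Sketch of the sustained burst pattern: target_window_size
      # packets sent back to back at the sender interface rate, with
      # target_RTT between burst starts.  Parameter values are
      # illustrative only.
      import socket, time

      def sustained_bursts(dst, target_window_size, target_RTT,
                           mtu=1500, count=100):
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          payload = b"\x00" * (mtu - 28)
          for _ in range(count):
              start = time.monotonic()
              for _ in range(target_window_size):
                  sock.sendto(payload, dst)   # back to back, interface rate
              idle = target_RTT - (time.monotonic() - start)
              if idle > 0:
                  time.sleep(idle)            # subpath should go idle here
              # else: headway not maintained; the result is inconclusive

      # sustained_bursts(("192.0.2.1", 9000), target_window_size=42,
      #                  target_RTT=0.1)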
Key observations: | Key observations: | |||
o The subpath under test is expected to go idle for some fraction of | o The subpath under test is expected to go idle for some fraction of | |||
the time: (subpath_data_rate-target_rate)/subpath_data_rate. | the time: (subpath_IP_capacity-target_rate/ | |||
(target_MTU-header_overhead)*target_MTU)/subpath_IP_capacity. | ||||
Failing to do so indicates a problem with the procedure and an | Failing to do so indicates a problem with the procedure and an | |||
inconclusive test result. | inconclusive test result. | |||
o The burst sensitivity can be derated by sending smaller bursts | o The burst sensitivity can be derated by sending smaller bursts | |||
more frequently. E.g. send target_pipe_size*derate packet bursts | more frequently. E.g. send target_window_size*derate packet | |||
every target_RTT*derate. | bursts every target_RTT*derate. | |||
o When not derated, this test is the most strenuous capacity test. | o When not derated, this test is the most strenuous capacity test. | |||
o A link that passes this test is likely to be able to sustain | o A subpath that passes this test is likely to be able to sustain | |||
higher rates (close to subpath_data_rate) for paths with RTTs | higher rates (close to subpath_IP_capacity) for paths with RTTs | |||
significantly smaller than the target_RTT. | significantly smaller than the target_RTT. | |||
o This test can be implemented with instrumented TCP [RFC4898], | o This test can be implemented with instrumented TCP [RFC4898], | |||
using a specialized measurement application at one end [MBMSource] | using a specialized measurement application at one end [MBMSource] | |||
and a minimal service at the other end [RFC0863] [RFC0864]. | and a minimal service at the other end [RFC0863] [RFC0864]. | |||
o This test is efficient to implement, since it does not require | o This test is efficient to implement, since it does not require | |||
per-packet timers, and can make use of TSO in modern NIC hardware. | per-packet timers, and can make use of TSO in modern NIC hardware. | |||
o This test by itself is not sufficient: the standing window | o This test by itself is not sufficient: the standing window | |||
engineering tests are also needed to ensure that the link is well | engineering tests are also needed to ensure that the subpath is | |||
behaved at and beyond the onset of congestion. | well behaved at and beyond the onset of congestion. | |||
o Assuming the link passes relevant standing window engineering | o Assuming the subpath passes relevant standing window engineering | |||
tests (particularly that it has a progressive onset of loss at an | tests (particularly that it has a progressive onset of loss at an | |||
appropriate queue depth) the passing sustained burst test is | appropriate queue depth) the passing sustained burst test is | |||
(believed to be) sufficient to verify that the subpath will not | (believed to be) sufficient to verify that the subpath will not | |||
impair streams running at the target performance under all conditions. | impair streams running at the target performance under all conditions. | |||
Proving this statement will be subject of ongoing research. | Proving this statement will be subject of ongoing research. | |||
Note that this test is clearly independent of the subpath RTT, or | Note that this test is clearly independent of the subpath RTT, or | |||
other details of the measurement infrastructure, as long as the | other details of the measurement infrastructure, as long as the | |||
measurement infrastructure can accurately and reliably deliver the | measurement infrastructure can accurately and reliably deliver the | |||
required bursts to the subpath under test. | required bursts to the subpath under test. | |||
8.5.2.  Streaming Media

Model Based Metrics can be implicitly implemented as a side effect of
serving any non-throughput maximizing traffic, such as streaming
media, with some additional controls and instrumentation in the
servers.  The essential requirement is that the traffic be
constrained such that even with arbitrary application pauses, bursts
and data rate fluctuations, the traffic stays within the envelope
defined by the individual tests described above.
If the application's serving_data_rate is less than or equal to the
target_data_rate and the serving_RTT (the RTT between the sender and
client) is less than the target_RTT, this constraint is most easily
implemented by clamping the transport window size to be no larger
than:

   serving_window_clamp = target_data_rate * serving_RTT /
                          (target_MTU - header_overhead)
Under the above constraints the serving_window_clamp will limit both
the serving data rate and burst sizes to be no larger than those
specified by the procedures in Section 8.1.2 and Section 8.4 or
Section 8.5.1.  Since the serving RTT is smaller than the target_RTT,
the worst case bursts that might be generated under these conditions
will be smaller than called for by Section 8.4, and the sender rate
burst sizes are implicitly derated by the serving_window_clamp
divided by the target_window_size at the very least.  (Depending on
the application behavior, the data traffic might be significantly
smoother than specified by any of the burst tests.)
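As a non-normative illustration of the clamp above, the following
sketch computes serving_window_clamp and the implied burst derating
for a hypothetical serving_RTT of 20 ms; the assumption that
target_data_rate is expressed in bits per second is introduced here
only for the example.

   # Non-normative sketch; the 20 ms serving_RTT is a hypothetical
   # example and target_data_rate is assumed to be in bits per second.

   def serving_window_clamp(target_data_rate, serving_RTT,
                            target_MTU, header_overhead):
       """Window clamp in packets, rounded down (illustrative choice)."""
       payload_bits = 8 * (target_MTU - header_overhead)
       return int(target_data_rate * serving_RTT / payload_bits)

   target_window_size = 11                       # packets, from Table 1
   clamp = serving_window_clamp(2.5e6, 0.020, 1500, 64)
   print(clamp)                                  # 4 packets
   print(clamp / target_window_size)             # implicit derating, ~0.36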
Note that it is important that the target_data_rate be above the
actual average rate needed by the application so it can recover after
transient pauses caused by congestion or the application itself.
In an alternative implementation the data rate and bursts might be
explicitly controlled by a host shaper or pacing at the sender.  This
would provide better control over transmissions but it is
substantially more complicated to implement and would be likely to
have a higher CPU overhead.

Note that these techniques can be applied to any content delivery
that can be subjected to a reduced data rate in order to inhibit TCP
equilibrium behavior.
9.  An Example

In this section we illustrate a TDS designed to confirm that an
access ISP can reliably deliver HD video from multiple content
providers to all of their customers.  With modern codecs, minimal HD
video (720p) generally fits in 2.5 Mb/s.  Due to their geographical
size, network topology and modem designs the ISP determines that most
content is within a 50 mS RTT of their users.  (This is sufficient to
cover continental Europe or either US coast from a single serving
site.)
                     2.5 Mb/s over a 50 ms path

             +----------------------+-------+---------+
             | End-to-End Parameter | value | units   |
             +----------------------+-------+---------+
             | target_rate          | 2.5   | Mb/s    |
             | target_RTT           | 50    | ms      |
             | target_MTU           | 1500  | bytes   |
             | header_overhead      | 64    | bytes   |
             | target_window_size   | 11    | packets |
             | target_run_length    | 363   | packets |
             +----------------------+-------+---------+

                              Table 1
Table 1 shows the default TCP model with no derating, and as such is
quite conservative.  The simplest TDS would be to use the sustained
burst test, described in Section 8.5.1.  Such a test would send 11
packet bursts every 50mS, and confirm that there was no more than 1
packet loss per 33 bursts (363 total packets in 1.650 seconds).

Since this number represents the entire end-to-end loss budget,
independent subpath tests could be implemented by apportioning the
packet loss ratio across subpaths.  For example 50% of the losses
might be allocated to the access or last mile link to the user, 40%
to the interconnects with other ISPs and 1% to each internal hop
(assuming no more than 10 internal hops).  Then all of the subpaths
can be tested independently, and the spatial composition of passing
subpaths would be expected to be within the end-to-end loss budget.
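As a non-normative check, the following sketch reproduces the Table 1
values and the per-subpath loss budgets described above.  It assumes
the reference model of Section 5.2, in which target_run_length is
3*(target_window_size^2), consistent with Table 1; the helper names
are introduced here for illustration.

   # Non-normative sketch reproducing Table 1 and the apportioned
   # per-subpath run lengths.  Assumes the Section 5.2 reference model:
   # target_run_length = 3 * target_window_size^2.
   from math import ceil

   target_rate = 2.5e6                  # bits/s
   target_RTT = 0.050                   # seconds
   target_MTU, header_overhead = 1500, 64

   payload_bits = 8 * (target_MTU - header_overhead)
   target_window_size = ceil(target_rate * target_RTT / payload_bits)
   target_run_length = 3 * target_window_size ** 2
   print(target_window_size, target_run_length)          # 11 363

   # A subpath allocated fraction f of the losses must exhibit a run
   # length of at least target_run_length / f.
   for subpath, f in [("access", 0.50), ("interconnect", 0.40),
                      ("internal hop", 0.01)]:
       print(subpath, ceil(target_run_length / f))
       # access 726, interconnect 908, each internal hop 36300

The interconnect figure quoted later in this section (902 packets)
appears to come from rounding to a whole number of 11 packet bursts
(82 bursts) rather than rounding the run length directly.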
Testing interconnects has generally been problematic: conventional
performance tests run between Measurement Points adjacent to either
side of the interconnect are not generally useful.  Unconstrained TCP
tests, such as iperf [iperf], are usually overly aggressive because
the RTT is so small (often less than 1 mS).  With a short RTT these
tools are likely to report inflated numbers because for short RTTs
these tools can tolerate very high packet loss ratios and can push
other cross traffic off of the network.  As a consequence they are
useless for predicting actual user performance, and may themselves be
quite disruptive.  Model Based Metrics solves this problem.  The same
test pattern as used on other subpaths can be applied to the
interconnect.  For our example, when apportioned 40% of the losses,
11 packet bursts sent every 50mS should have fewer than one loss per
82 bursts (902 packets).
10.  Validation

Since some aspects of the models are likely to be too conservative,
Section 5.2 permits alternate protocol models and Section 5.3 permits
test parameter derating.  If either of these techniques is used, we
require demonstrations that such a TDS can robustly detect subpaths
that will prevent authentic applications using state-of-the-art
protocol implementations from meeting the specified Target Transport
Performance.  This correctness criterion is potentially difficult to
prove, because it implicitly requires validating a TDS against all
possible paths and subpaths.  The procedures described here are still
experimental.
We suggest two approaches, both of which should be applied: first,
publish a fully open description of the TDS, including what
assumptions were used and how it was derived, such that the research
community can evaluate the design decisions, test them and comment on
their applicability; and second, demonstrate that applications
running over an infinitesimally passing testbed do meet the
performance targets.
An infinitesimally passing testbed resembles an epsilon-delta proof
in calculus.  Construct a test network such that all of the
individual tests of the TDS pass by only small (infinitesimal)
margins, and demonstrate that a variety of authentic applications
running over real TCP implementations (or other protocols as
appropriate) meet the Target Transport Performance over such a
network.  The workloads should include multiple types of streaming
media and transaction oriented short flows (e.g. synthetic web
traffic).
For example, for the HD streaming video TDS described in Section 9,
the IP capacity should be exactly the header overhead above 2.5 Mb/s,
the per packet random background loss ratio should be 1/363 (for a
run length of 363 packets), the bottleneck queue should be 11 packets
and the front path should have just enough buffering to withstand 11
packet interface rate bursts.  We want every one of the TDS tests to
fail if we slightly increase the relevant test parameter, so for
example sending a 12 packet burst should cause excess (possibly
deterministic) packet drops at the dominant queue at the bottleneck.
On this infinitesimally passing network it should be possible for a
real application using a stock TCP implementation in the vendor's
default configuration to attain 2.5 Mb/s over a 50 mS path.
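The testbed parameters implied by the paragraph above can be
collected as follows; this is a non-normative sketch, and the exact
bottleneck IP rate shown is simply the payload rate scaled up by the
header overhead.

   # Non-normative sketch of the infinitesimally passing testbed
   # parameters for the Section 9 example.
   target_rate = 2.5e6                       # bits/s (payload rate)
   target_MTU, header_overhead = 1500, 64

   testbed = {
       # "exactly the header overhead above 2.5 Mb/s"
       "bottleneck_IP_rate_bps":
           target_rate * target_MTU / (target_MTU - header_overhead),
       "random_loss_ratio": 1 / 363,         # run length of 363 packets
       "bottleneck_queue_packets": 11,       # target_window_size
       "front_path_buffer_packets": 11,      # absorbs interface rate bursts
   }
   for key, value in testbed.items():
       print(key, value)                     # IP rate is about 2.61 Mb/s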
The most difficult part of setting up such a testbed is arranging for
it to infinitesimally pass the individual tests.  Two approaches are
suggested: constraining the network devices not to use all available
resources (e.g. by limiting available buffer space or data rate); and
preloading subpaths with cross traffic.  Note that it is important
that a single environment be constructed which infinitesimally passes
all tests at the same time, otherwise there is a chance that TCP can
exploit extra latitude in some parameters (such as data rate) to
partially compensate for constraints in other parameters (queue
skipping to change at page 41, line 20
To the extent that a TDS is used to inform public dialog it should be
fully publicly documented, including the details of the tests, what
assumptions were used and how it was derived.  All of the details of
the validation experiment should also be published with sufficient
detail for the experiments to be replicated by other researchers.
All components should either be open source or fully described
proprietary implementations that are available to the research
community.
11.  Security Considerations

Measurement is often used to inform business and policy decisions,
and as a consequence is potentially subject to manipulation.  Model
Based Metrics are expected to be a huge step forward because
equivalent measurements can be performed from multiple vantage
points, such that performance claims can be independently validated
by multiple parties.
Much of the acrimony in the Net Neutrality debate is due to the
historical lack of any effective vantage independent tools to
characterize network performance.  Traditional methods for measuring
Bulk Transport Capacity are sensitive to RTT and as a consequence
often yield very different results when run local to an ISP or
interconnect and when run over a customer's complete path.  Neither
the ISP nor customer can repeat the other's measurements, leading to
high levels of distrust and acrimony.  Model Based Metrics are
expected to greatly improve this situation.
This document only describes a framework for designing a Fully
Specified Targeted Diagnostic Suite.  Each FSTDS MUST include its own
security section.
12.  Acknowledgements

Ganga Maguluri suggested the statistical test for measuring loss
probability in the target run length.  Alex Gilgur helped with the
statistics.

Meredith Whittaker improved the clarity of the communications.

Ruediger Geib provided feedback which greatly improved the document.
This work was inspired by Measurement Lab: open tools running on an
open platform, using open tools to collect open data.  See
http://www.measurementlab.net/
13.  IANA Considerations

This document has no actions for IANA.

14.  References

14.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

14.2.  Informative References

   [RFC0863]  Postel, J., "Discard Protocol", STD 21, RFC 863,
              May 1983.

   [RFC0864]  Postel, J., "Character Generator Protocol", STD 22,
              RFC 864, May 1983.

   [RFC2309]  Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
              S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
              Partridge, C., Peterson, L., Ramakrishnan, K., Shenker,
              S., Wroclawski, J., and L. Zhang, "Recommendations on
skipping to change at page 43, line 13
   [RFC4015]  Ludwig, R. and A. Gurtov, "The Eifel Response Algorithm
              for TCP", RFC 4015, February 2005.

   [RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
              S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
              November 2006.

   [RFC4898]  Mathis, M., Heffner, J., and R. Raghunarayan, "TCP
              Extended Statistics MIB", RFC 4898, May 2007.

   [RFC5136]  Chimento, P. and J. Ishac, "Defining Network Capacity",
              RFC 5136, February 2008.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, September 2009.

   [RFC5835]  Morton, A. and S. Van den Berghe, "Framework for Metric
              Composition", RFC 5835, April 2010.

   [RFC6049]  Morton, A. and E. Stephan, "Spatial Composition of
              Metrics", RFC 6049, January 2011.

   [RFC6673]  Morton, A., "Round-Trip Packet Loss Metrics", RFC 6673,
              August 2012.

   [RFC7312]  Fabini, J. and A. Morton, "Advanced Stream and Sampling
              Framework for IP Performance Metrics (IPPM)", RFC 7312,
              August 2014.

   [RFC7398]  Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P.,
              and A. Morton, "A Reference Path and Measurement Points
              for Large-Scale Measurement of Broadband Performance",
              RFC 7398, February 2015.

   [I-D.ietf-ippm-2680-bis]
              Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton,
              "A One-Way Loss Metric for IPPM",
              draft-ietf-ippm-2680-bis-02 (work in progress),
              June 2015.

   [I-D.ietf-aqm-recommendation]
              Baker, F. and G. Fairhurst, "IETF Recommendations
              Regarding Active Queue Management",
              draft-ietf-aqm-recommendation-11 (work in progress),
              February 2015.
   [MSMO97]   Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The
              Macroscopic Behavior of the TCP Congestion Avoidance
              Algorithm", Computer Communications Review volume 27,
              number 3, July 1997.
skipping to change at page 44, line 51
              index.php?title=Bufferbloat&oldid=608805474, March 2015.

   [CCscaling]
              Fernando, F., Doyle, J., and S. Steven, "Scalable laws
              for stable network congestion control", Proceedings of
              Conference on Decision and Control,
              http://www.ee.ucla.edu/~paganini, December 2001.
Appendix A.  Model Derivations
The reference target_run_length described in Section 5.2 is based on
very conservative assumptions: that all window above
target_window_size contributes to a standing queue that raises the
RTT, and that classic Reno congestion control with delayed ACKs is in
effect.  In this section we provide two alternative calculations
using different assumptions.
It may seem out of place to allow such latitude in a measurement
standard, but this section provides offsetting requirements.
The estimates provided by these models make the most sense if network
performance is viewed logarithmically.  In the operational Internet,
data rates span more than 8 orders of magnitude, RTT spans more than
3 orders of magnitude, and packet loss ratio spans at least 8 orders
of magnitude if not more.  When viewed logarithmically (as in
decibels), these correspond to 80 dB of dynamic range.  On an 80 dB
scale, a 3 dB error is less than 4% of the scale, even though it
represents a factor of 2 in the untransformed parameter.
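A quick, non-normative check of the decibel arithmetic above:

   from math import log10
   print(10 * log10(1e8))     # 8 orders of magnitude = 80 dB
   print(3 / 80)              # a 3 dB error is 3.75% of an 80 dB scale
   print(10 ** (3 / 10))      # 3 dB is very nearly a factor of 2 (~1.995)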
This document gives a lot of latitude for calculating
target_run_length; however, people designing a TDS should consider
the effect of their choices on the ongoing tussle about the relevance
of "TCP friendliness" as an appropriate model for Internet capacity
allocation.  Choosing a target_run_length that is substantially
smaller than the reference target_run_length specified in Section 5.2
strengthens the argument that it may be appropriate to abandon "TCP
friendliness" as the Internet fairness model.  This gives developers
incentive and permission to develop even more aggressive applications
and protocols, for example by increasing the number of connections
that they open concurrently.
A.1.  Queueless Reno

In Section 5.2 it was assumed that the subpath IP rate matches the
target rate plus overhead, such that the excess window needed for the
AIMD sawtooth causes a fluctuating queue at the bottleneck.
An alternate situation would be a bottleneck where there is no
significant queue and losses are caused by some mechanism that does
not involve extra delay, for example by the use of a virtual queue as
in Approximate Fair Dropping [AFD].  A flow controlled by such a
bottleneck would have a constant RTT and a data rate that fluctuates
in a sawtooth due to AIMD congestion control.  Assume the losses are
being controlled to make the average data rate meet some goal which
is equal to or greater than the target_rate.  The necessary run
length can be computed as follows:
For some value of Wmin, the window will sweep from Wmin packets to
2*Wmin packets in 2*Wmin RTT (due to delayed ACK).  Unlike the
queueing case where Wmin = target_window_size, we want the average of
Wmin and 2*Wmin to be the target_window_size, so the average rate is
the target rate.  Thus we want Wmin = (2/3)*target_window_size.

Between losses each sawtooth delivers (1/2)(Wmin + 2*Wmin)(2*Wmin)
packets in 2*Wmin round trip times.
Substituting these together we get:

   target_run_length = (4/3)(target_window_size^2)
Note that this is 44% of the reference_run_length computed earlier.
This makes sense because under the assumptions in Section 5.2 the
AIMD sawtooth caused a queue at the bottleneck, which raised the
effective RTT by 50%.
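A small, non-normative sketch verifying the algebra above for the
Table 1 window size:

   target_window_size = 11

   Wmin = (2 / 3) * target_window_size
   # the average of Wmin and 2*Wmin equals target_window_size
   assert abs((Wmin + 2 * Wmin) / 2 - target_window_size) < 1e-9

   # packets delivered per sawtooth: mean window over 2*Wmin RTTs
   run_length = 0.5 * (Wmin + 2 * Wmin) * (2 * Wmin)
   print(run_length)                                # (4/3)*11^2 = 161.33
   print(run_length / (3 * target_window_size**2))  # 0.444 of the reference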
Appendix B.  Complex Queueing

For many network technologies simple queueing models don't apply: the
network schedules, thins or otherwise alters the timing of ACKs and
data, generally to raise the efficiency of the channel allocation
when confronted with relatively widely spaced small ACKs.  These
efficiency strategies are ubiquitous for half duplex, wireless and
broadcast media.
Altering the ACK stream generally has two consequences: it raises the
implied bottleneck IP capacity, making the slowstart bursts come at
higher rates (possibly as high as the sender's interface rate), and
it effectively raises the RTT by the average time that the ACKs and
data were delayed.  The first effect can be partially mitigated by
reclocking ACKs once they are beyond the bottleneck on the return
path to the sender; however, this further raises the effective RTT.
The most extreme example of this sort of behavior would be a half
duplex channel that is not released as long as the end point
currently holding the channel has more traffic (data or ACKs) to
send.  Such environments cause self clocked protocols under full load
to revert to extremely inefficient stop and wait behavior, where they
send an entire window of data as a single burst on the forward path,
followed by the entire window of ACKs on the return path.  It is
important to note that due to self clocking, ill conceived channel
allocation mechanisms can increase the stress on upstream subpaths in
a long path: they cause larger and faster bursts.
If a particular return path contains a subpath or device that alters
the ACK stream, then the entire path from the sender up to the
bottleneck must be tested at the burst parameters implied by the ACK
scheduling algorithm.  The most important parameter is the Implied
Bottleneck IP Capacity, which is the average rate at which the ACKs
advance snd.una.  Note that thinning the ACKs (relying on the
cumulative nature of seg.ack to permit discarding some ACKs) implies
an effectively infinite Implied Bottleneck IP Capacity.
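As a non-normative illustration, the Implied Bottleneck IP Capacity
could be estimated from a sender-side trace of (timestamp, snd.una)
samples along the lines of the sketch below; the trace values are
fabricated for the example and scaling for header overhead is
omitted.

   # Non-normative sketch; the trace is fabricated for illustration
   # and header overhead scaling is omitted.
   def implied_bottleneck_ip_capacity(trace):
       """Average rate, in bits/s, at which ACKs advance snd.una."""
       (t0, una0), (t1, una1) = trace[0], trace[-1]
       return 8 * (una1 - una0) / (t1 - t0)

   trace = [(0.000, 0), (0.050, 16500), (0.100, 33000)]  # bytes acked
   print(implied_bottleneck_ip_capacity(trace))          # 2.64e6 bit/s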
Holding data or ACKs for channel allocation or other reasons (such as
forward error correction) always raises the effective RTT relative to
the minimum delay for the path.  Therefore it may be necessary to
replace target_RTT in the calculation in Section 5.2 by an
effective_RTT, which includes the target_RTT plus a term to account
for the extra delays introduced by these mechanisms.
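For example, with the Section 9 parameters and a purely hypothetical
10 ms channel hold time, the substitution would enlarge the model
parameters roughly as follows (a non-normative sketch, using the same
reference run length model as in the earlier examples):

   from math import ceil

   target_rate = 2.5e6                    # bits/s
   target_MTU, header_overhead = 1500, 64
   target_RTT = 0.050
   channel_hold_time = 0.010              # hypothetical extra delay

   effective_RTT = target_RTT + channel_hold_time
   window = ceil(target_rate * effective_RTT
                 / (8 * (target_MTU - header_overhead)))
   print(window, 3 * window ** 2)         # 14 packets, run length 588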
Appendix C.  Version Control

This section to be removed prior to publication.

Formatted: Mon Jul 6 13:49:30 PDT 2015
Authors' Addresses

   Matt Mathis
   Google, Inc
   1600 Amphitheater Parkway
   Mountain View, California  94043
   USA

   Email: mattmathis@google.com