--- 1/draft-ietf-ippm-model-based-metrics-10.txt 2017-06-29 20:13:14.964786244 -0700 +++ 2/draft-ietf-ippm-model-based-metrics-11.txt 2017-06-29 20:13:15.076788935 -0700 @@ -1,148 +1,146 @@ IP Performance Working Group M. Mathis Internet-Draft Google, Inc Intended status: Experimental A. Morton -Expires: September 1, 2017 AT&T Labs - February 28, 2017 +Expires: January 1, 2018 AT&T Labs + June 30, 2017 Model Based Metrics for Bulk Transport Capacity - draft-ietf-ippm-model-based-metrics-10.txt + draft-ietf-ippm-model-based-metrics-11.txt Abstract We introduce a new class of Model Based Metrics designed to assess if a complete Internet path can be expected to meet a predefined Target Transport Performance by applying a suite of IP diagnostic tests to successive subpaths. The subpath-at-a-time tests can be robustly applied to critical infrastructure, such as network interconnections or even individual devices, to accurately detect if any part of the infrastructure will prevent paths traversing it from meeting the Target Transport Performance. Model Based Metrics rely on peer-reviewed mathematical models to specify a Targeted Suite of IP Diagnostic tests, designed to assess whether common transport protocols can be expected to meet a predetermined Target Transport Performance over an Internet path. - For Bulk Transport Capacity, the IP diagnostics are built on test - streams that mimic TCP over the complete path and statistical - criteria for evaluating the packet transfer statistics of those - streams. The temporal structure of the test stream (bursts, etc) - mimic TCP or other transport protocol carrying bulk data over a long - path. However they are constructed to be independent of the details - of the subpath under test, end systems or applications. Likewise the - success criteria evaluates the packet transfer statistics of the - subpath against criteria determined by protocol performance models - applied to the Target Transport Performance of the complete path. 
- The success criteria also does not depend on the details of the - subpath, end systems or application. + For Bulk Transport Capacity IP diagnostics are built using test + streams and statistical criteria for evaluating the packet transfer + that mimic TCP over the complete path. The temporal structure of the + test stream (bursts, etc) mimic TCP or other transport protocol + carrying bulk data over a long path. However they are constructed to + be independent of the details of the subpath under test, end systems + or applications. Likewise the success criteria evaluates the packet + transfer statistics of the subpath against criteria determined by + protocol performance models applied to the Target Transport + Performance of the complete path. The success criteria also does not + depend on the details of the subpath, end systems or application. Model Based Metrics exhibit several important new properties not present in other Bulk Transport Capacity Metrics, including the ability to reason about concatenated or overlapping subpaths. The results are vantage independent which is critical for supporting independent validation of tests by comparing results from multiple measurement points. - This document does not define the IP diagnostic tests, but provides a - framework for designing suites of IP diagnostic tests that are - tailored to confirming that infrastructure can meet the predetermined - Target Transport Performance. + This document provides a framework for designing suites of IP + diagnostic tests that are tailored to confirming that infrastructure + can meet the predetermined Target Transport Performance. It does not + fully specify the IP diagnostics tests needed to assure any specific + target performance. Status of This Memo This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). 
Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet- Drafts is at http://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." - This Internet-Draft will expire on September 1, 2017. + This Internet-Draft will expire on January 1, 2018. Copyright Notice Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. Table of Contents 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 1.1. Version Control . . . . . . . . . . . . . . . . . . . . . 5 - 2. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 7 + 2. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 8 3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 10 - 4. Background . . . . . . . . . . . . . . . . . . . . . . . . . 16 + 4. Background . . . . . . . . . . . . . . . . . . . . . . . . . 17 4.1. TCP properties . . . . . . . . . . . . . . . . . . . . . 18 - 4.2. Diagnostic Approach . . . . . . . . . . . . . . . . . . . 19 - 4.3. New requirements relative to RFC 2330 . . . . . . . . . . 20 - 5. Common Models and Parameters . . . . . . . . . . . . . . . . 21 - 5.1. Target End-to-end parameters . . . . . 
. . . . . . . . . 21 - 5.2. Common Model Calculations . . . . . . . . . . . . . . . . 22 - 5.3. Parameter Derating . . . . . . . . . . . . . . . . . . . 23 - 5.4. Test Preconditions . . . . . . . . . . . . . . . . . . . 23 - 6. Generating test streams . . . . . . . . . . . . . . . . . . . 24 - 6.1. Mimicking slowstart . . . . . . . . . . . . . . . . . . . 25 + 4.2. Diagnostic Approach . . . . . . . . . . . . . . . . . . . 20 + 4.3. New requirements relative to RFC 2330 . . . . . . . . . . 21 + 5. Common Models and Parameters . . . . . . . . . . . . . . . . 22 + 5.1. Target End-to-end parameters . . . . . . . . . . . . . . 22 + 5.2. Common Model Calculations . . . . . . . . . . . . . . . . 23 + 5.3. Parameter Derating . . . . . . . . . . . . . . . . . . . 24 + 5.4. Test Preconditions . . . . . . . . . . . . . . . . . . . 24 + 6. Generating test streams . . . . . . . . . . . . . . . . . . . 25 + 6.1. Mimicking slowstart . . . . . . . . . . . . . . . . . . . 26 6.2. Constant window pseudo CBR . . . . . . . . . . . . . . . 27 - 6.3. Scanned window pseudo CBR . . . . . . . . . . . . . . . . 27 - 6.4. Concurrent or channelized testing . . . . . . . . . . . . 28 - 7. Interpreting the Results . . . . . . . . . . . . . . . . . . 29 - 7.1. Test outcomes . . . . . . . . . . . . . . . . . . . . . . 29 - 7.2. Statistical criteria for estimating run_length . . . . . 30 - 7.3. Reordering Tolerance . . . . . . . . . . . . . . . . . . 32 - 8. IP Diagnostic Tests . . . . . . . . . . . . . . . . . . . . . 33 - 8.1. Basic Data Rate and Packet Transfer Tests . . . . . . . . 33 - 8.1.1. Delivery Statistics at Paced Full Data Rate . . . . . 34 - 8.1.2. Delivery Statistics at Full Data Windowed Rate . . . 34 - 8.1.3. Background Packet Transfer Statistics Tests . . . . . 35 - 8.2. Standing Queue Tests . . . . . . . . . . . . . . . . . . 35 - 8.2.1. Congestion Avoidance . . . . . . . . . . . . . . . . 36 - 8.2.2. Bufferbloat . . . . . . . . . . . . . . . . . . . . . 37 - 8.2.3. 
Non excessive loss . . . . . . . . . . . . . . . . . 37 - 8.2.4. Duplex Self Interference . . . . . . . . . . . . . . 38 - 8.3. Slowstart tests . . . . . . . . . . . . . . . . . . . . . 38 - 8.3.1. Full Window slowstart test . . . . . . . . . . . . . 38 - 8.3.2. Slowstart AQM test . . . . . . . . . . . . . . . . . 39 - 8.4. Sender Rate Burst tests . . . . . . . . . . . . . . . . . 39 - 8.5. Combined and Implicit Tests . . . . . . . . . . . . . . . 40 - 8.5.1. Sustained Bursts Test . . . . . . . . . . . . . . . . 40 - 8.5.2. Passive Measurements . . . . . . . . . . . . . . . . 41 - 9. An Example . . . . . . . . . . . . . . . . . . . . . . . . . 42 - 9.1. Observations about applicability . . . . . . . . . . . . 43 - 10. Validation . . . . . . . . . . . . . . . . . . . . . . . . . 44 - 11. Security Considerations . . . . . . . . . . . . . . . . . . . 45 - 12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 45 - 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 46 - 14. References . . . . . . . . . . . . . . . . . . . . . . . . . 46 - 14.1. Normative References . . . . . . . . . . . . . . . . . . 46 - 14.2. Informative References . . . . . . . . . . . . . . . . . 46 - Appendix A. Model Derivations . . . . . . . . . . . . . . . . . 49 - A.1. Queueless Reno . . . . . . . . . . . . . . . . . . . . . 50 - Appendix B. The effects of ACK scheduling . . . . . . . . . . . 51 - Appendix C. Version Control . . . . . . . . . . . . . . . . . . 52 - Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 52 + 6.3. Scanned window pseudo CBR . . . . . . . . . . . . . . . . 28 + 6.4. Concurrent or channelized testing . . . . . . . . . . . . 29 + 7. Interpreting the Results . . . . . . . . . . . . . . . . . . 30 + 7.1. Test outcomes . . . . . . . . . . . . . . . . . . . . . . 30 + 7.2. Statistical criteria for estimating run_length . . . . . 31 + 7.3. Reordering Tolerance . . . . . . . . . . . . . . . . . . 34 + 8. IP Diagnostic Tests . . . . . . 
. . . . . . . . . . . . . . . 34 + 8.1. Basic Data Rate and Packet Transfer Tests . . . . . . . . 35 + 8.1.1. Delivery Statistics at Paced Full Data Rate . . . . . 35 + 8.1.2. Delivery Statistics at Full Data Windowed Rate . . . 36 + 8.1.3. Background Packet Transfer Statistics Tests . . . . . 36 + 8.2. Standing Queue Tests . . . . . . . . . . . . . . . . . . 36 + 8.2.1. Congestion Avoidance . . . . . . . . . . . . . . . . 38 + 8.2.2. Bufferbloat . . . . . . . . . . . . . . . . . . . . . 38 + 8.2.3. Non excessive loss . . . . . . . . . . . . . . . . . 38 + 8.2.4. Duplex Self Interference . . . . . . . . . . . . . . 39 + 8.3. Slowstart tests . . . . . . . . . . . . . . . . . . . . . 39 + 8.3.1. Full Window slowstart test . . . . . . . . . . . . . 39 + 8.3.2. Slowstart AQM test . . . . . . . . . . . . . . . . . 40 + 8.4. Sender Rate Burst tests . . . . . . . . . . . . . . . . . 40 + 8.5. Combined and Implicit Tests . . . . . . . . . . . . . . . 41 + 8.5.1. Sustained Bursts Test . . . . . . . . . . . . . . . . 41 + 8.5.2. Passive Measurements . . . . . . . . . . . . . . . . 42 + 9. An Example . . . . . . . . . . . . . . . . . . . . . . . . . 43 + 9.1. Observations about applicability . . . . . . . . . . . . 44 + 10. Validation . . . . . . . . . . . . . . . . . . . . . . . . . 45 + 11. Security Considerations . . . . . . . . . . . . . . . . . . . 46 + 12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 47 + 13. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 47 + 14. References . . . . . . . . . . . . . . . . . . . . . . . . . 47 + Appendix A. Model Derivations . . . . . . . . . . . . . . . . . 51 + A.1. Queueless Reno . . . . . . . . . . . . . . . . . . . . . 52 + Appendix B. The effects of ACK scheduling . . . . . . . . . . . 53 + Appendix C. Version Control . . . . . . . . . . . . . . . . . . 54 + Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 54 1. 
Introduction Model Based Metrics (MBM) rely on peer-reviewed mathematical models to specify a Targeted Suite of IP Diagnostic tests, designed to assess whether common transport protocols can be expected to meet a predetermined Target Transport Performance over an Internet path. This note describes the modeling framework to derive the test parameters for assessing an Internet path's ability to support a predetermined Bulk Transport Capacity. @@ -179,44 +177,68 @@ IP Diagnostic Suite (TIDS) of IP tests, solves some intrinsic problems with using TCP or other throughput maximizing protocols for measurement. In particular all throughput maximizing protocols (and TCP congestion control in particular) cause some level of congestion in order to detect when they have reached the available capacity limitation of the network. This self inflicted congestion obscures the network properties of interest and introduces non-linear dynamic equilibrium behaviors that make any resulting measurements useless as metrics because they have no predictive value for conditions or paths different than that of the measurement itself. In order to prevent - these effects it is necessary to suppress the effects of TCP - congestion control in the measurement method. These issues are - discussed at length in Section 4. Readers whom are unfamiliar with - basic properties of TCP and TCP-like congestion control may find it - easier to start at Section 4 or Section 4.1. + these effects it is necessary to avoid the effects of TCP congestion + control in the measurement method. These issues are discussed at + length in Section 4. Readers who are unfamiliar with basic + properties of TCP and TCP-like congestion control may find it easier + to start at Section 4 or Section 4.1. A Targeted IP Diagnostic Suite does not have such difficulties. 
IP diagnostics can be constructed such that they make strong statistical statements about path properties that are independent of the measurement details, such as vantage and choice of measurement - points. Model Based Metrics are designed to bridge the gap between - empirical IP measurements and expected TCP performance for multiple - standardized versions of TCP. + points. 1.1. Version Control RFC Editor: Please remove this entire subsection prior to publication. + RFC Editor: The reference to draft-ietf-tcpm-rack is to attribute an + idea. This document should not block waiting for the completion of + that one. + Please send comments about this draft to ippm@ietf.org. See http://goo.gl/02tkD for more information including: interim drafts, an up to date todo list and information on contributing. - Formatted: Tue Feb 28 14:24:28 PST 2017 + Formatted: Thu Jun 29 19:08:08 PDT 2017 + + Changes since -10 draft: + + o A few more nits from various sources. + o (From IETF LC review comments.) + o David Mandelberg: design metrics to prevent DDOS. + o From Robert Sparks: + + * Remove all legacy 2119 language. + * Fixed Xr notation inconsistency. + * Adjusted abstract: tests are only partially specified. + * Avoid rather than suppress the effects of congestion control + * Removed the unnecessary, excessively abstract and unclear + thought about IP vs TCP measurements. + * Changed "thwarted" to "not fulfilled". + * Qualified language about burst models. + * Replaced "infinitesimal" with other language. + * Added citations for the reordering strawman. + * Pointed out that pseudo CBR tests depend on self clock. + * Fixed some run on sentences. + o Update language to reflect RFC7567, AQM recommendations. + o Suggestion from Merry Mou (MIT). Changes since -09 draft: o Five last minute editing nits. Changes since -08 draft: o Language, spelling and usage nits. o Expanded the abstract to describe the models. 
o Remove superfluous standards-like language @@ -374,46 +397,46 @@ ----V----------------------------------V--- | | | | | | | V V V V V V fail/inconclusive pass/fail/inconclusive (traffic generation status) (test result) Overall Modeling Framework Figure 1 - The mathematical models are used to determine Traffic parameters and + Mathematical TCP models are used to determine Traffic parameters and subsequently to design traffic patterns that mimic TCP or other transport protocol delivering bulk data and operating at the Target Data Rate, MTU and RTT over a full range of conditions, including flows that are bursty at multiple time scales. The traffic patterns are generated based on the three Target parameters of the complete path and independent of the properties of individual subpaths using the techniques described in Section 6. As much as possible the test streams are generated deterministically (precomputed) to minimize the extent to which test methodology, measurement points, measurement vantage or path partitioning affect the details of the measurement traffic. Section 7 describes packet transfer statistics and methods to test them against the statistical criteria provided by the mathematical models. Since the statistical criteria typically apply to the complete path (a composition of subpaths) [RFC6049], in situ testing requires that the end-to-end statistical criteria be apportioned as separate criteria for each subpath. Subpaths that are expected to be bottlenecks would then be permitted to contribute a larger fraction of the end-to-end packet loss budget. In compensation, subpaths that - to not expected exhibit bottlenecks must be constrained to contribute - less packet loss. Thus the statistical criteria for each subpath in - each test of a TIDS is an apportioned share of the end-to-end - statistical criteria for the complete path which was determined by - the mathematical model. 
+ are not expected to exhibit bottlenecks must be constrained to + contribute less packet loss. Thus the statistical criteria for each + subpath in each test of a TIDS is an apportioned share of the end-to- + end statistical criteria for the complete path which was determined + by the mathematical model. Section 8 describes the suite of individual tests needed to verify all of the required IP delivery properties. A subpath passes if and only if all of the individual IP diagnostic tests pass. Any subpath that fails any test indicates that some users are likely to fail to attain their Target Transport Performance under some conditions. In addition to passing or failing, a test can be deemed to be inconclusive for a number of reasons including: the precomputed traffic pattern was not accurately generated; the measurement results were not statistically significant; and others such as failing to @@ -425,24 +448,20 @@ can be used to address difficult measurement situations, such as confirming that inter-carrier exchanges have sufficient performance and capacity to deliver HD video between ISPs. Since there is some uncertainty in the modeling process, Section 10 describes a validation procedure to diagnose and minimize false positive and false negative results. 3. Terminology - The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", - "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this - document are to be interpreted as described in [RFC2119]. - Terms containing underscores (rather than spaces) appear in equations and typically have algorithmic definitions. General Terminology: Target: A general term for any parameter specified by or derived from the user's application or transport performance requirements. Target Transport Performance: Application or transport performance target values for the complete path. 
For Bulk Transport Capacity defined in this note the Target Transport Performance includes the @@ -792,22 +810,22 @@ These properties are a consequence of the dynamic equilibrium behavior intrinsic to how all throughput maximizing protocols interact with the Internet. These protocols rely on control systems based on estimated network metrics to regulate the quantity of data to send into the network. The packet sending characteristics in turn alter the network properties estimated by the control system metrics, such that there are circular dependencies between every transmission characteristic and every estimated metric. Since some of these dependencies are nonlinear, the entire system is nonlinear, and any change anywhere causes a difficult to predict response in network - metrics. As a consequence Bulk Transport Capacity metrics have - entirely thwarted the analytic framework envisioned in [RFC2330] + metrics. As a consequence Bulk Transport Capacity metrics have not + fulfilled the analytic framework envisioned in [RFC2330] Model Based Metrics overcome these problems by making the measurement system open loop: the packet transfer statistics (akin to the network estimators) do not affect the traffic or traffic patterns (bursts), which are computed on the basis of the Target Transport Performance. A path or subpath meeting the Target Transfer Performance requirements would exhibit packet transfer statistics and estimated metrics that would not cause the control system to slow the traffic below the Target Data Rate. @@ -872,22 +890,24 @@ fraction of an RTT, many TCP implementations catch up to their earlier window size by sending a burst of data at the full sender interface rate. To fill a network with a realistic application, the network has to be able to tolerate sender interface rate bursts large enough to restore the prior window following application pauses. 
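To make the catch-up behavior concrete, here is a rough, hypothetical sketch of the resulting burst size. All numeric values are assumptions chosen purely for illustration; the draft does not specify them, and real implementations vary:

```python
import math

# Hypothetical illustration of the sender-interface-rate catch-up burst
# following an application pause. Every value below is an assumption
# for the example, not a parameter taken from this draft.
target_data_rate = 10e6    # bits/second (assumed Target Data Rate)
mtu = 1500                 # bytes per packet (assumed)
pause = 0.02               # seconds the application stalled (assumed)
window_packets = 100       # prior window, in packets (assumed)

# ACKs arriving during the pause open roughly rate * pause worth of
# window, which is then sent back-to-back at the full sender interface
# rate, capped by the prior window size.
ack_opened_packets = (target_data_rate / 8) * pause / mtu
burst_packets = min(window_packets, math.ceil(ack_opened_packets))
```

Under these assumed numbers the path would have to absorb a burst of about 17 back-to-back packets; longer pauses are capped by the full window.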
Although the sender interface rate bursts are typically smaller than the last burst of a slowstart, they are at a higher IP rate so they potentially exercise queues at arbitrary points along the front path from the data sender up to and including the queue at the dominant - bottleneck. There is no model for how frequent or what sizes of - sender rate bursts the network should tolerate. + bottleneck. It is known that these bursts can hurt network + performance, especially in conjunction with other queue pressure; + however, we are not aware of any models for how frequent sender rate + bursts the network should be able to tolerate at various burst sizes. In conclusion, to verify that a path can meet a Target Transport Performance, it is necessary to independently confirm that the path can tolerate bursts at the scales that can be caused by the above mechanisms. Three cases are believed to be sufficient: o Two level slowstart bursts sufficient to get connections started properly. o Ubiquitous sender interface rate bursts caused by efficiency algorithms. We assume 4 packet bursts to be the most common case, @@ -1074,21 +1095,21 @@ Since some aspects of the models are very conservative, the MBM framework permits some latitude in derating test parameters. Rather than trying to formalize more complicated models we permit some test parameters to be relaxed as long as they meet some additional procedural constraints: o The FS-TIDS must document and justify the actual method used to compute the derated metric parameters. o The validation procedures described in Section 10 must be used to demonstrate the feasibility of meeting the Target Transport - Performance with infrastructure that infinitesimally passes the + Performance with infrastructure that just barely passes the derated tests. o The validation process for a FS-TIDS itself must be documented in such a way that other researchers can duplicate the validation experiments. 
Except as noted, all tests below assume no derating. Tests where there is not currently a well established model for the required parameters explicitly include derating as a way to indicate flexibility in the parameters. @@ -1240,33 +1261,39 @@ 6.2. Constant window pseudo CBR Implement pseudo constant bit rate by running a standard self clocked protocol such as TCP with a fixed window size. If that window size is test_window, the data rate will be slightly above the target_rate. Since the test_window is constrained to be an integer number of packets, for small RTTs or low data rates there may not be sufficiently precise control over the data rate. Rounding the - test_window up (the default) is likely to result in data rates that - are higher than the target rate, but reducing the window by one + test_window up (as defined above) is likely to result in data rates + that are higher than the target rate, but reducing the window by one packet may result in data rates that are too small. Also cross traffic potentially raises the RTT, implicitly reducing the rate. + Cross traffic that raises the RTT nearly always makes the test more - strenuous (more demanding for the network path). A FS-TIDS - specifying a constant window CBR test must explicitly indicate under - what conditions errors in the data rate cause tests to inconclusive. + strenuous (more demanding for the network path). - Since constant window pseudo CBR testing is sensitive to RTT - fluctuations it will be less accurate at controlling the data rate in - environments with fluctuating delays. Conventional paced measurement - traffic may be more appropriate for these environments. + Note that Constant window pseudo CBR (and Scanned window pseudo CBR + in the next section) both rely on a self clock which is at least + partially derived from the properties of the subnet under test. 
This + introduces the possibility that the subnet under test exhibits + behaviors such as extreme RTT fluctuations that prevent these + algorithms from accurately controlling data rates. + + A FS-TIDS specifying a constant window CBR test must explicitly + indicate under what conditions errors in the data rate cause tests to + be inconclusive. Conventional paced measurement traffic may be more + appropriate for these environments. 6.3. Scanned window pseudo CBR Scanned window pseudo CBR is similar to the constant window CBR described above, except the window is scanned across a range of sizes designed to include two key events, the onset of queuing and the onset of packet loss or ECN CE marks. The window is scanned by incrementing it by one packet every 2*target_window_size delivered packets. This mimics the additive increase phase of standard Reno TCP congestion avoidance when delayed ACKs are in effect. Normally @@ -1319,23 +1346,23 @@ There are a number of reasons to want to specify performance in terms of multiple concurrent flows, however this approach is not recommended for data rates below several megabits per second, which can be attained with run lengths under 10000 packets on many paths. Since the required run length goes as the square of the data rate, at higher rates the run lengths can be unreasonably large, and multiple flows might be the only feasible approach. If multiple flows are deemed necessary to meet aggregate performance - targets then this MUST be stated in both the design of the TIDS and + targets then this must be stated in both the design of the TIDS and in any claims about network performance. The IP diagnostic tests - MUST be performed concurrently with the specified number of + must be performed concurrently with the specified number of connections. 
For the tests that use bursty test streams, the bursts should be synchronized across streams unless there is a priori knowledge that the applications have some explicit mechanism to stagger their own bursts. In the absences of an explicit mechanism to stagger bursts many network and application artifacts will sometimes implicitly synchronize bursts. A test that does not control burst synchronization may be prone to false pass results for some applications. 7. Interpreting the Results @@ -1390,26 +1417,26 @@ may have been caused by some uncontrolled feedback from the network. Note that procedures that attempt to search the target parameter space to find the limits on some parameter such as target_data_rate are at risk of breaking the location independent properties of Model Based Metrics, if any part of the boundary between passing and inconclusive or failing results is sensitive to RTT (which is normally the case). For example the maximum data rate for a marginal link (e.g. exhibiting excess errors) is likely to be sensitive to the test_path_RTT. The maximum observed data rate over the test path - has very little predictive value for the maximum rate over a + has very little value for predicting the maximum rate over a different path. One of the goals for evolving TIDS designs will be to keep sharpening distinction between inconclusive, passing and failing tests. The - criteria for for passing, failing and inconclusive tests MUST be + criteria for for passing, failing and inconclusive tests must be explicitly stated for every test in the TIDS or FS-TIDS. One of the goals of evolving the testing process, procedures, tools and measurement point selection should be to minimize the number of inconclusive tests. It may be useful to keep raw packet transfer statistics and ancillary metrics [RFC3148] for deeper study of the behavior of the network path and to measure the tools themselves. Raw packet transfer statistics can help to drive tool evolution. 
Under some conditions @@ -1464,31 +1491,31 @@ and we can stop sending packets if measurements support rejecting H0 with the specified Type II error = beta (= 0.05 for example), thus preferring the alternate hypothesis H1. H0 and H1 constitute the Success and Failure outcomes described elsewhere in the memo, and while the ongoing measurements do not support either hypothesis the current status of measurements is inconclusive. The problem above is formulated to match the Sequential Probability - Ratio Test (SPRT) [StatQC]. Note that as originally framed the - events under consideration were all manufacturing defects. In - networking, ECN CE marks and lost packets are not defects but - signals, indicating that the transport protocol should slow down. + Ratio Test (SPRT) [Wald45] and [Montgomery90]. Note that as + originally framed the events under consideration were all + manufacturing defects. In networking, ECN CE marks and lost packets + are not defects but signals, indicating that the transport protocol + should slow down. The Sequential Probability Ratio Test also starts with a pair of hypotheses specified as above: H0: p0 = one defect in target_run_length H1: p1 = one defect in target_run_length/4 - As packets are sent and measurements collected, the tester evaluates the cumulative defect count against two boundaries representing H0 Acceptance or Rejection (and acceptance of H1): Acceptance line: Xa = -h1 + s*n Rejection line: Xr = h2 + s*n where n increases linearly for each packet sent and h1 = { log((1-alpha)/beta) }/k @@ -1495,66 +1522,81 @@ h2 = { log((1-beta)/alpha) }/k k = log{ (p1(1-p0)) / (p0(1-p1)) } s = [ log{ (1-p0)/(1-p1) } ]/k for p0 and p1 as defined in the null and alternative Hypotheses statements above, and alpha and beta as the Type I and Type II errors. 
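The boundary calculation above takes only a few lines of code. The following is an illustrative sketch, not part of any TIDS specification; the parameter values (target_run_length = 10000 packets, alpha = beta = 0.05) are assumed example values consistent with the illustrations in this section:

```python
import math

# Sketch of the SPRT boundary computation described in this section.
# target_run_length, alpha and beta are assumed example values.
target_run_length = 10000
p0 = 1.0 / target_run_length     # H0: one defect in target_run_length
p1 = 4.0 / target_run_length     # H1: one defect in target_run_length/4
alpha = beta = 0.05              # Type I and Type II errors

k = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
h1 = math.log((1 - alpha) / beta) / k
h2 = math.log((1 - beta) / alpha) / k
s = math.log((1 - p0) / (1 - p1)) / k

def sprt_status(n, defect_count):
    """Evaluate the cumulative defect count against both boundaries
    after n packets have been sent."""
    Xa = -h1 + s * n             # acceptance line
    Xr = h2 + s * n              # rejection line
    if defect_count <= Xa:
        return "accept H0"       # run length long enough: pass
    if defect_count >= Xr:
        return "accept H1"       # defect ratio too high: fail
    return "continue"
```

With these example values h1/s comes to roughly 9,800, i.e. about one target_run_length worth of defect-free packets must be observed before H0 can be accepted.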
The SPRT specifies simple stopping rules: - o Xa < defect_count(n) < Xb: continue testing + o Xa < defect_count(n) < Xr: continue testing o defect_count(n) <= Xa: Accept H0 - o defect_count(n) >= Xb: Accept H1 + o defect_count(n) >= Xr: Accept H1 The calculations above are implemented in the R-tool for Statistical Analysis [Rtool], in the add-on package for Cross-Validation via Sequential Testing (CVST) [CVST]. Using the equations above, we can calculate the minimum number of packets (n) needed to accept H0 when x defects are observed. For example, when x = 0: Xa = 0 = -h1 + s*n and n = h1 / s + Note that the derivations in [Wald45] and [Montgomery90] differ. + Montgomery's simplified derivation of SPRT may assume a Bernoulli + process, where the packet loss probabilities are independent and + identically distributed, making the SPRT more accessible. Wald's + seminal paper showed that this assumption is not necessary. It helps + to remember that the goal of SPRT is not to estimate the value of the + packet loss rate, but only to decide whether the packet loss ratio is + likely low enough (when we accept the H0 null hypothesis), yielding + success, or too high (when we accept the H1 alternate hypothesis), + yielding failure. + 7.3. Reordering Tolerance - All tests MUST be instrumented for packet level reordering [RFC4737]. + All tests must be instrumented for packet level reordering [RFC4737]. However, there is no consensus for how much reordering should be acceptable. Over the last two decades the general trend has been to make protocols and applications more tolerant to reordering (see for example [RFC4015]), in response to the gradual increase in reordering in the network. This increase has been due to the deployment of technologies such as multi threaded routing lookups and Equal Cost MultiPath (ECMP) routing. These techniques increase parallelism in the network and are critical to enabling overall Internet growth to exceed Moore's Law. 
Note that transport retransmission strategies can trade off reordering tolerance vs how quickly they can repair losses vs overhead from spurious retransmissions. In advance of new retransmission strategies we propose the following strawman: Transport protocols should be able to adapt to reordering as long as the reordering extent is not more than the maximum of one quarter - window or 1 mS, whichever is larger. Within this limit on reorder - extent, there should be no bound on reordering density. + window or 1 mS, whichever is larger. (These values come from + experience prototyping Early Retransmit [RFC5827] and related + algorithms. They agree with the values being proposed for "RACK: a + time-based fast loss detection algorithm" [I-D.ietf-tcpm-rack].) + Within this limit on reorder extent, there should be no bound on + reordering density. By implication, reordering which is less than these bounds should not be treated as a network impairment. However [RFC4737] still applies: reordering should be instrumented and the maximum reordering that can be properly characterized by the test (because of the bound on history buffers) should be recorded with the measurement results. Reordering tolerance and diagnostic limitations, such as the size of the history buffer used to diagnose packets that are way out-of- - order, MUST be specified in a FSTIDS. + order, must be specified in a FSTIDS. 8. IP Diagnostic Tests The IP diagnostic tests below are organized by traffic pattern: basic data rate and packet transfer statistics, standing queues, slowstart bursts, and sender rate bursts. We also introduce some combined tests which are more efficient when networks are expected to pass, but conflate diagnostic signatures when they fail. There are a number of test details which are not fully defined here. @@ -1641,43 +1683,44 @@ transfer statistics. 8.2.
Standing Queue Tests These engineering tests confirm that the bottleneck is well behaved across the onset of packet loss, which typically follows after the onset of queuing. Well behaved generally means lossless for transient queues, but once the queue has been sustained for a sufficient period of time (or reaches a sufficient queue depth) there should be a small number of losses or ECN CE marks to signal to the - transport protocol that it should reduce its window. Losses that are - too early can prevent the transport from averaging at the - target_data_rate. Losses that are too late indicate that the queue - might be subject to bufferbloat [wikiBloat] and inflict excess - queuing delays on all flows sharing the bottleneck queue. Excess - losses (more than half of the window) at the onset of congestion make - loss recovery problematic for the transport protocol. Non-linear, - erratic or excessive RTT increases suggest poor interactions between - the channel acquisition algorithms and the transport self clock. All - of the tests in this section use the same basic scanning algorithm, - described here, but score the link or subpath on the basis of how - well it avoids each of these problems. + transport protocol that it should reduce its window or data rate. + Losses that are too early can prevent the transport from averaging at + the target_data_rate. Losses that are too late indicate that the + queue might not have an appropriate AQM [RFC7567] and as a + consequence be subject to bufferbloat [wikiBloat]. Queues without AQM + have the potential to inflict excess delays on all flows sharing the + bottleneck. Excess losses (more than half of the window) at the + onset of loss make loss recovery problematic for the transport + protocol. Non-linear, erratic or excessive RTT increases suggest + poor interactions between the channel acquisition algorithms and the + transport self clock.
All of the tests in this section use the same + basic scanning algorithm, described here, but score the link or + subpath on the basis of how well it avoids each of these problems. Some network technologies rely on virtual queues or other techniques to meter traffic without adding any queuing delay, in which case the data rate will vary with the window size all the way up to the onset of load induced packet loss or ECN CE marks. For these technologies, the discussion of queuing in Section 6.3 does not apply, but it is still necessary to confirm that the onset of losses or ECN CE marks is at an appropriate point and progressive. If the network bottleneck does not introduce significant queuing delay, modify the - procedure described in Section 6.3 to start scan at a window equal to - or slightly smaller than the test_window. + procedure described in Section 6.3 to start the scan at a window + equal to or slightly smaller than the test_window. Use the procedure in Section 6.3 to sweep the window across the onset of queuing and the onset of loss. The tests below all assume that the scan emulates standard additive increase and delayed ACK by incrementing the window by one packet for every 2*target_window_size packets delivered. A scan can typically be divided into three regions: below the onset of queuing, a standing queue, and at or beyond the onset of loss. Below the onset of queuing the RTT is typically fairly constant, and @@ -1755,30 +1798,31 @@ intended. TCP often stumbles badly if more than a small fraction of the packets are dropped in one RTT. Many TCP implementations will require a timeout and slowstart to recover their self clock. Even if they can recover from the massive losses the sudden change in available capacity at the bottleneck wastes serving and front path capacity until TCP can adapt to the new rate [Policing]. 8.2.4.
Duplex Self Interference This engineering test confirms a bound on the interactions between - the forward data path and the ACK return path. + the forward data path and the ACK return path when they share a half + duplex link. Some historical half duplex technologies had the property that each direction held the channel until it completely drained its queue. When a self clocked transport protocol, such as TCP, has data and ACKs passing in opposite directions through such a link, the behavior often reverts to stop-and-wait. Each additional packet added to the - window raises the observed RTT by two packet times, once as it passes - through the data path, and once for the additional delay incurred by - the ACK waiting on the return path. + window raises the observed RTT by two packet times, once as the + additional packet passes through the data path, and once for the + additional delay incurred by the ACK waiting on the return path. The duplex self interference test fails if the RTT rises by more than a fixed bound above the expected queuing time computed from the excess window divided by the subpath IP Capacity. This bound must be smaller than target_RTT/2 to avoid reverting to stop and wait behavior. (e.g. Data packets and ACKs both have to be released at least twice per RTT.) 8.3. Slowstart tests @@ -1853,22 +1897,22 @@ interface rate bursts have a cost to the network that has to be balanced against other costs in the servers themselves. For example TCP Segmentation Offload (TSO) reduces server CPU in exchange for larger network bursts, which increase the stress on network buffer memory. Some newer TCP implementations can pace traffic at scale [TSO_pacing][TSO_fq_pacing]. It remains to be determined if and how quickly these changes will be deployed. There is not yet theory to unify these costs or to provide a framework for trying to optimize global efficiency. We do not yet - have a model for how much the network should tolerate server rate - bursts. 
Some bursts must be tolerated by the network, but it is + have a model for the extent to which server rate bursts should be + tolerated by the network. Some bursts must be tolerated by the network, but it is probably unreasonable to expect the network to be able to efficiently deliver all data as a series of bursts. For this reason, this is the only test for which we encourage derating. A TIDS could include a table of pairs of derating parameters: burst sizes and how much each burst size is permitted to reduce the run length, relative to the target_run_length. 8.5. Combined and Implicit Tests @@ -2058,56 +2101,59 @@ protocol implementations from meeting the specified Target Transport Performance. This correctness criterion is potentially difficult to prove, because it implicitly requires validating a TIDS against all possible paths and subpaths. The procedures described here are still experimental. We suggest two approaches, both of which should be applied: first, publish a fully open description of the TIDS, including what assumptions were used and how it was derived, such that the research community can evaluate the design decisions, test them and - comment on their applicability; and second, demonstrate that an - applications running over an infinitesimally passing testbed do meet - the performance targets. + comment on their applicability; and second, demonstrate that + applications do meet the Target Transport Performance when running + over a network testbed which has the tightest possible constraints + that still allow the tests in the TIDS to pass. - An infinitesimally passing testbed resembles a epsilon-delta proof in - calculus. Construct a test network such that all of the individual - tests of the TIDS pass by only small (infinitesimal) margins, and - demonstrate that a variety of authentic applications running over - real TCP implementations (or other protocol as appropriate) meets the - Target Transport Performance over such a network.
The workloads - should include multiple types of streaming media and transaction - oriented short flows (e.g. synthetic web traffic). + This procedure resembles an epsilon-delta proof in calculus. + Construct a test network such that all of the individual tests of the + TIDS pass by only small (infinitesimal) margins, and demonstrate that + a variety of authentic applications running over real TCP + implementations (or other protocols as appropriate) meet the Target + Transport Performance over such a network. The workloads should + include multiple types of streaming media and transaction oriented + short flows (e.g. synthetic web traffic). For example, for the HD streaming video TIDS described in Section 9, the IP capacity should be exactly the header_overhead above 2.5 Mb/s, the per packet random background loss ratio should be 1/363, for a run length of 363 packets, the bottleneck queue should be 11 packets and the front path should have just enough buffering to withstand 11 packet interface rate bursts. We want every one of the TIDS tests to fail if we slightly increase the relevant test parameter, so for - example sending a 12 packet bursts should cause excess (possibly + example sending a 12 packet burst should cause excess (possibly deterministic) packet drops at the dominant queue at the bottleneck. - On this infinitesimally passing network it should be possible for a - real application using a stock TCP implementation in the vendor's - default configuration to attain 2.5 Mb/s over an 50 mS path. + This network has the tightest possible constraints that can be + expected to pass the TIDS, yet it should be possible for a real + application using a stock TCP implementation in the vendor's default + configuration to attain 2.5 Mb/s over a 50 mS path. The most difficult part of setting up such a testbed is arranging for - it to infinitesimally pass the individual tests.
Two approaches are - suggested: constraining the network devices not to use all available - resources (e.g. by limiting available buffer space or data rate); and - pre-loading subpaths with cross traffic. Note that is it important - that a single environment be constructed which infinitesimally passes - all tests at the same time, otherwise there is a chance that TCP can - exploit extra latitude in some parameters (such as data rate) to - partially compensate for constraints in other parameters (queue - space, or vice-versa). + it to have the tightest possible constraints that still allow it to + pass the individual tests. Two approaches are suggested: + constraining (configuring) the network devices not to use all + available resources (e.g. by limiting available buffer space or data + rate); and pre-loading subpaths with cross traffic. Note that it is + important that a single tightly constrained environment just barely + passes all tests, otherwise there is a chance that TCP can exploit + extra latitude in some parameters (such as data rate) to partially + compensate for constraints in other parameters (queue space, or vice- + versa). To the extent that a TIDS is used to inform public dialog it should be fully publicly documented, including the details of the tests, what assumptions were used and how it was derived. All of the details of the validation experiment should also be published with sufficient detail for the experiments to be replicated by other researchers. All components should either be open source or fully described proprietary implementations that are available to the research community. @@ -2123,51 +2169,51 @@ Much of the acrimony in the Net Neutrality debate is due to the historical lack of any effective vantage independent tools to characterize network performance.
Traditional methods for measuring Bulk Transport Capacity are sensitive to RTT and as a consequence often yield very different results when run local to an ISP or interconnect and when run over a customer's complete path. Neither the ISP nor customer can repeat the other's measurements, leading to high levels of distrust and acrimony. Model Based Metrics are expected to greatly improve this situation. + Note that in situ measurements sometimes require sending synthetic + measurement traffic between arbitrary locations in the network, and + as such measurement tools are potentially attractive platforms for + launching DDOS attacks. All active measurement tools and protocols + must be designed to minimize the opportunities for these misuses. + See the discussion in section 7 of [RFC7594]. This document only describes a framework for designing Fully - Specified Targeted IP Diagnostic Suite. Each FS-TIDS MUST include + Specified Targeted IP Diagnostic Suite. Each FS-TIDS must include its own security section. 12. Acknowledgments Ganga Maguluri suggested the statistical test for measuring loss - probability in the target run length. Alex Gilgur for helping with - the statistics. + probability in the target run length. Alex Gilgur and Merry Mou for + helping with the statistics. Meredith Whittaker for improving the clarity of the communications. Ruediger Geib provided feedback which greatly improved the document. This work was inspired by Measurement Lab: open tools running on an open platform, using open tools to collect open data. See http://www.measurementlab.net/ 13. IANA Considerations This document has no actions for IANA. 14. References -14.1. Normative References - - [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate - Requirement Levels", BCP 14, RFC 2119, March 1997. - -14.2. Informative References - [RFC0863] Postel, J., "Discard Protocol", STD 21, RFC 863, May 1983. [RFC0864] Postel, J., "Character Generator Protocol", STD 22, RFC 864, May 1983.
[RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, "Framework for IP Performance Metrics", RFC 2330, May 1998. [RFC2861] Handley, M., Padhye, J., and S. Floyd, "TCP Congestion @@ -2189,20 +2235,26 @@ [RFC4898] Mathis, M., Heffner, J., and R. Raghunarayan, "TCP Extended Statistics MIB", RFC 4898, May 2007. [RFC5136] Chimento, P. and J. Ishac, "Defining Network Capacity", RFC 5136, February 2008. [RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion Control", RFC 5681, September 2009. + [RFC5827] Allman, M., Avrachenkov, K., Ayesta, U., Blanton, J., and + P. Hurtig, "Early Retransmit for TCP and Stream Control + Transmission Protocol (SCTP)", RFC 5827, + DOI 10.17487/RFC5827, May 2010, + . + [RFC5835] Morton, A. and S. Van den Berghe, "Framework for Metric Composition", RFC 5835, April 2010. [RFC6049] Morton, A. and E. Stephan, "Spatial Composition of Metrics", RFC 6049, January 2011. [RFC6576] Geib, R., Ed., Morton, A., Fardid, R., and A. Steinmitz, "IP Performance Metrics (IPPM) Standard Advancement Testing", BCP 176, RFC 6576, DOI 10.17487/RFC6576, March 2012, . @@ -2222,34 +2274,45 @@ [RFC7398] Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P., and A. Morton, "A Reference Path and Measurement Points for Large-Scale Measurement of Broadband Performance", RFC 7398, February 2015. [RFC7567] Baker, F., Ed. and G. Fairhurst, Ed., "IETF Recommendations Regarding Active Queue Management", BCP 197, RFC 7567, DOI 10.17487/RFC7567, July 2015, . + [RFC7594] Eardley, P., Morton, A., Bagnulo, M., Burbridge, T., + Aitken, P., and A. Akhter, "A Framework for Large-Scale + Measurement of Broadband Performance (LMAP)", RFC 7594, + DOI 10.17487/RFC7594, September 2015, + . + [RFC7661] Fairhurst, G., Sathiaseelan, A., and R. Secchi, "Updating TCP to Support Rate-Limited Traffic", RFC 7661, DOI 10.17487/RFC7661, October 2015, . [RFC7680] Almes, G., Kalidindi, S., Zekauskas, M., and A. 
Morton, Ed., "A One-Way Loss Metric for IP Performance Metrics (IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January 2016, . [RFC7799] Morton, A., "Active and Passive Metrics and Methods (with Hybrid Types In-Between)", RFC 7799, DOI 10.17487/RFC7799, May 2016, . + [I-D.ietf-tcpm-rack] + Cheng, Y., Cardwell, N., and N. Dukkipati, "RACK: a time- + based fast loss detection algorithm for TCP", draft-ietf- + tcpm-rack-02 (work in progress), March 2017. + [MSMO97] Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm", Computer Communications Review volume 27, number 3, July 1997. [WPING] Mathis, M., "Windowed Ping: An IP Level Performance Diagnostic", INET 94, June 1994. [mpingSource] Fan, X., Mathis, M., and D. Hamon, "Git Repository for @@ -2264,21 +2327,28 @@ [Pathdiag] Mathis, M., Heffner, J., O'Neil, P., and P. Siemsen, "Pathdiag: Automated TCP Diagnosis", Passive and Active Measurement, June 2008. [iPerf] Wikipedia Contributors, "iPerf", Wikipedia, The Free Encyclopedia, cited March 2015, . - [StatQC] Montgomery, D., "Introduction to Statistical Quality + [Wald45] Wald, A., "Sequential Tests of Statistical Hypotheses", + The Annals of Mathematical Statistics, Vol. 16, No. 2, pp. + 117-186, Published by: Institute of Mathematical + Statistics, Stable URL: + http://www.jstor.org/stable/2235829, June 1945. + + [Montgomery90] + Montgomery, D., "Introduction to Statistical Quality Control - 2nd ed.", ISBN 0-471-51988-X, 1990. [Rtool] R Development Core Team, "R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org/", 2011. [CVST] Krueger, T. and M. Braun, "R package: Fast Cross- Validation via Sequential Testing", version 0.1, 11 2012.