--- 1/draft-ietf-ippm-model-based-metrics-02.txt 2014-07-03 21:14:34.223889752 -0700 +++ 2/draft-ietf-ippm-model-based-metrics-03.txt 2014-07-03 21:14:34.307891796 -0700 @@ -1,68 +1,62 @@ IP Performance Working Group M. Mathis Internet-Draft Google, Inc Intended status: Experimental A. Morton -Expires: August 18, 2014 AT&T Labs - February 14, 2014 +Expires: January 4, 2015 AT&T Labs + July 3, 2014 Model Based Bulk Performance Metrics - draft-ietf-ippm-model-based-metrics-02.txt + draft-ietf-ippm-model-based-metrics-03.txt Abstract We introduce a new class of model based metrics designed to determine if an end-to-end Internet path can meet predefined transport performance targets by applying a suite of IP diagnostic tests to - successive subpaths. The subpath-at-a-time tests are designed to - accurately detect if any subpath will prevent the full end-to-end - path from meeting the specified target performance. Each IP - diagnostic test consists of a precomputed traffic pattern and a - statistical criteria for evaluating packet delivery. + successive subpaths. The subpath-at-a-time tests can be robustly + applied to key infrastructure, such as interconnects, to accurately + detect if it will prevent the full end-to-end paths that traverse it + from meeting the specified target performance. - The IP diagnostics tests are based on traffic patterns that are - precomputed to mimic TCP or other transport protocol over a long path - but are independent of the actual details of the subpath under test. - Likewise the success criteria depends on the target performance and - not the actual performance of the subpath. This makes the - measurements open loop, eliminating nearly all of the difficulties - encountered by traditional bulk transport metrics. + Each IP diagnostic test consists of a precomputed traffic pattern and + a statistical criteria for evaluating packet delivery. The traffic + patterns are precomputed to mimic TCP or other transport protocol + over a long path but are independent of the actual details of the + subpath under test. Likewise the success criteria depends on the + target performance for the long path and not the details of the + subpath. This makes the measurements open loop, which introduces + several important new properties and eliminates most of the + difficulties encountered by traditional bulk transport metrics. - This document does not fully define diagnostic tests, but provides a + This document does not define diagnostic tests, but provides a framework for designing suites of diagnostics tests that are tailored the confirming the target performance. - By making the tests open loop, we eliminate standards congestion - control equilibrium behavior, which otherwise causes every measured - parameter to be sensitive to every component of the system. As an - open loop test, various measurable properties become independent, and - potentially subject to an algebra enabling several important new - uses. - - Interim DRAFT Formatted: Fri Feb 14 14:07:33 PST 2014 + Interim DRAFT Formatted: Thu Jul 3 20:19:04 PDT 2014 Status of this Memo This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet- Drafts is at http://datatracker.ietf.org/drafts/current/. 
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." - This Internet-Draft will expire on August 18, 2014. + This Internet-Draft will expire on January 4, 2015. Copyright Notice Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents @@ -70,73 +64,67 @@ to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. Table of Contents 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.1. TODO . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 7 - 3. New requirements relative to RFC 2330 . . . . . . . . . . . . 10 + 3. New requirements relative to RFC 2330 . . . . . . . . . . . . 11 4. Background . . . . . . . . . . . . . . . . . . . . . . . . . . 11 4.1. TCP properties . . . . . . . . . . . . . . . . . . . . . . 12 - 4.2. Diagnostic Approach . . . . . . . . . . . . . . . . . . . 13 + 4.2. Diagnostic Approach . . . . . . . . . . . . . . . . . . . 14 5. Common Models and Parameters . . . . . . . . . . . . . . . . . 15 5.1. Target End-to-end parameters . . . . . . . . . . . . . . . 15 - 5.2. Common Model Calculations . . . . . . . . . . . . . . . . 15 - 5.3. Parameter Derating . . . . . . . . . . . . . . . . . . . . 16 + 5.2. Common Model Calculations . . . . . . . . . . . . . . . . 16 + 5.3. Parameter Derating . . . . . . . . . . . . . . . . . . . . 17 6. Common testing procedures . . . . . . . . . . . . . . . . . . 17 6.1. Traffic generating techniques . . . . . . . . . . . . . . 17 6.1.1. Paced transmission . . . . . . . . . . . . . . . . . . 17 6.1.2. Constant window pseudo CBR . . . . . . . . . . . . . . 18 - 6.1.3. Scanned window pseudo CBR . . . . . . . . . . . . . . 18 + 6.1.3. Scanned window pseudo CBR . . . . . . . . . . . . . . 19 6.1.4. Concurrent or channelized testing . . . . . . . . . . 19 - 6.1.5. Intermittent Testing . . . . . . . . . . . . . . . . . 19 - 6.1.6. Intermittent Scatter Testing . . . . . . . . . . . . . 20 6.2. Interpreting the Results . . . . . . . . . . . . . . . . . 20 6.2.1. Test outcomes . . . . . . . . . . . . . . . . . . . . 20 6.2.2. Statistical criteria for measuring run_length . . . . 22 - 6.2.2.1. Alternate criteria for measuring run_length . . . 24 + 6.2.2.1. Alternate criteria for measuring run_length . . . 23 6.2.3. Reordering Tolerance . . . . . . . . . . . . . . . . . 25 - 6.3. Test Qualifications . . . . . . . . . . . . . . . . . . . 26 - 7. Diagnostic Tests . . . . . . . . . . . . . . . . . . . . . . . 27 - 7.1. Basic Data Rate and Run Length Tests . . . . . . . . . . . 27 - 7.1.1. Run Length at Paced Full Data Rate . . . . . . . . . . 27 - 7.1.2. Run Length at Full Data Windowed Rate . . . . . . . . 28 - 7.1.3. Background Run Length Tests . . . . . . . . . . . . . 28 - 7.2. Standing Queue tests . . . . . . . . . . . . . . . . . . . 28 + 6.3. Test Preconditions . . . . . . . . . . . . . . . . . . . . 25 + 7. 
Diagnostic Tests . . . . . . . . . . . . . . . . . . . . . . . 26 + 7.1. Basic Data Rate and Delivery Statistics Tests . . . . . . 26 + 7.1.1. Delivery Statistics at Paced Full Data Rate . . . . . 27 + 7.1.2. Delivery Statistics at Full Data Windowed Rate . . . . 27 + 7.1.3. Background Delivery Statistics Tests . . . . . . . . . 27 + 7.2. Standing Queue Tests . . . . . . . . . . . . . . . . . . . 28 7.2.1. Congestion Avoidance . . . . . . . . . . . . . . . . . 29 - 7.2.2. Bufferbloat . . . . . . . . . . . . . . . . . . . . . 30 + 7.2.2. Bufferbloat . . . . . . . . . . . . . . . . . . . . . 29 7.2.3. Non excessive loss . . . . . . . . . . . . . . . . . . 30 7.2.4. Duplex Self Interference . . . . . . . . . . . . . . . 30 7.3. Slowstart tests . . . . . . . . . . . . . . . . . . . . . 30 7.3.1. Full Window slowstart test . . . . . . . . . . . . . . 31 7.3.2. Slowstart AQM test . . . . . . . . . . . . . . . . . . 31 7.4. Sender Rate Burst tests . . . . . . . . . . . . . . . . . 31 7.5. Combined Tests . . . . . . . . . . . . . . . . . . . . . . 32 7.5.1. Sustained burst test . . . . . . . . . . . . . . . . . 32 - 7.5.2. Live Streaming Media . . . . . . . . . . . . . . . . . 33 - 8. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 - 8.1. Near serving HD streaming video . . . . . . . . . . . . . 34 - 8.2. Far serving SD streaming video . . . . . . . . . . . . . . 34 - 8.3. Bulk delivery of remote scientific data . . . . . . . . . 35 - + 7.5.2. Streaming Media . . . . . . . . . . . . . . . . . . . 33 + 8. An Example . . . . . . . . . . . . . . . . . . . . . . . . . . 34 9. Validation . . . . . . . . . . . . . . . . . . . . . . . . . . 35 10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 37 11. Informative References . . . . . . . . . . . . . . . . . . . . 37 - Appendix A. Model Derivations . . . . . . . . . . . . . . . . . . 39 - A.1. Queueless Reno . . . . . . . . . . . . . . . . . . . . . . 39 - A.2. CUBIC . . . . . . . . . . . . . . . . . . . . . . . . . . 40 - Appendix B. Complex Queueing . . . . . . . . . . . . . . . . . . 41 - Appendix C. Version Control . . . . . . . . . . . . . . . . . . . 42 - Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 42 + Appendix A. Model Derivations . . . . . . . . . . . . . . . . . . 40 + A.1. Queueless Reno . . . . . . . . . . . . . . . . . . . . . . 40 + A.2. CUBIC . . . . . . . . . . . . . . . . . . . . . . . . . . 41 + Appendix B. Complex Queueing . . . . . . . . . . . . . . . . . . 42 + Appendix C. Version Control . . . . . . . . . . . . . . . . . . . 43 + Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 43 1. Introduction Bulk performance metrics evaluate an Internet path's ability to carry bulk data. Model based bulk performance metrics rely on mathematical TCP models to design a targeted diagnostic suite (TDS) of IP performance tests which can be applied independently to each subpath of the full end-to-end path. These targeted diagnostic suites allow independent tests of subpaths to accurately detect if any subpath will prevent the full end-to-end path from delivering bulk data at @@ -177,24 +165,24 @@ subpaths of the end-to-end path, the end-to-end statistical bounds need to be apportioned as a separate bound for each subpath. Note that links that are expected to be bottlenecks are expected to contribute more packet loss and/or delay. In compensation, other links have to be constrained to contribute less packet loss and delay. 
The criteria for passing each test of a TDS is an apportioned share of the total bound determined by the mathematical model from the end-to-end target performance. In addition to passing or failing, a test can be deemed to be - inconclusive for a number of reasons including, the precomputed - traffic pattern was not accurately generated, measurement results - were not statistically significant, and others such as failing to - meet some test preconditions. + inconclusive for a number of reasons including: the precomputed + traffic pattern was not accurately generated; the measurement results + were not statistically significant; and others such as failing to + meet some required test preconditions. This document describes a framework for deriving traffic patterns and delivery statistics for model based metrics. It does not fully specify any measurement techniques. Important details such as packet type-p selection, sampling techniques, vantage selection, etc. are not specified here. We imagine Fully Specified Targeted Diagnostic Suites (FSTDS), that define all of these details. We use TDS to refer to the subset of such a specification that is in scope for this document. A TDS includes the target parameters, documentation of the models and assumptions used to derive the diagnostic test parameters, @@ -206,69 +194,69 @@ It has been difficult to develop Bulk Transport Capacity [RFC3148] metrics due to some overlooked requirements described in Section 3 and some intrinsic problems with using protocols for measurement, described in Section 4. In Section 5 we describe the models and common parameters used to derive the targeted diagnostic suite. In Section 6 we describe common testing procedures. Each subpath is evaluated using suite of far simpler and more predictable diagnostic tests described in - Section 7. In Section 8 we present three example TDS', one that - might be representative of HD video, when served fairly close to the - user, a second that might be representative of standard video, served - from a greater distance, and a third that might be representative of - high performance bulk data delivered over a transcontinental path. + Section 7. In Section 8 we present an example TDS that might be + representative of HD video, and illustrate how MBM can be used to + address difficult measurement situations, such as confirming that + intercarrier exchanges have sufficient performance and capacity to + deliver HD video between ISPs. There exists a small risk that model based metric itself might yield a false pass result, in the sense that every subpath of an end-to-end path passes every IP diagnostic test and yet a real application fails to attain the performance target over the end-to-end path. If this happens, then the validation procedure described in Section 9 needs to be used to prove and potentially revise the models. Future documents will define model based metrics for other traffic classes and application types, such as real time streaming media. 1.1. TODO - Please send comments on this draft to ippm@ietf.org. See + Please send comments about this draft to ippm@ietf.org. See http://goo.gl/02tkD for more information including: interim drafts, an up to date todo list and information on contributing. - Formatted: Fri Feb 14 14:07:33 PST 2014 + Formatted: Thu Jul 3 20:19:04 PDT 2014 2. Terminology Terminology about paths, etc. See [RFC2330] and - [I-D.morton-ippm-lmap-path]. + [I-D.ietf-ippm-lmap-path]. [data] sender Host sending data and receiving ACKs. 
[data] receiver Host receiving data and sending ACKs. subpath A portion of the full path. Note that there is no requirement that subpaths be non-overlapping. Measurement Point Measurement points as described in - [I-D.morton-ippm-lmap-path]. + [I-D.ietf-ippm-lmap-path]. test path A path between two measurement points that includes a subpath of the end-to-end path under test, and could include infrastructure between the measurement points and the subpath. [Dominant] Bottleneck The Bottleneck that generally dominates traffic statistics for the entire path. It typically determines a flow's self clock timing, packet loss and ECN marking rate. See Section 4.1. front path The subpath from the data sender to the dominant bottleneck. back path The subpath from the dominant bottleneck to the receiver. return path The path taken by the ACKs from the data receiver to the data sender. cross traffic Other, potentially interfering, traffic competing for - resources (network and/or queue capacity). + network resources (bandwidth and/or queue capacity). Properties determined by the end-to-end path and application. They are described in more detail in Section 5.1. Application Data Rate General term for the data rate as seen by the application above the transport layer. This is the payload data rate, and excludes transport and lower level headers (TCP/IP or other protocols) as well as retransmissions and other data that does not contribute to the total quantity of data delivered to the application. @@ -278,29 +266,29 @@ headers, retransmits and other transport layer overhead. This document is agnostic as to whether the link data rate includes or excludes framing, MAC, or other lower layer overheads, except that they must be treated uniformly. end-to-end target parameters: Application or transport performance goals for the end-to-end path. They include the target data rate, RTT and MTU described below. Target Data Rate: The application data rate, typically the ultimate user's performance goal. Target RTT (Round Trip Time): The baseline (minimum) RTT of the - longest end-to-end path over which the application expects to meet - the target performance. TCP and other transport protocol's - ability to compensate for path problems is generally proportional - to the number of round trips per second. The Target RTT - determines both key parameters of the traffic patterns (e.g. burst - sizes) and the thresholds on acceptable traffic statistics. The - Target RTT must be specified considering authentic packets sizes: - MTU sized packets on the forward path, ACK sized packets - (typically the header_overhead) on the return path. + longest end-to-end path over which the application expects to be + able to meet the target performance. TCP and other transport + protocol's ability to compensate for path problems is generally + proportional to the number of round trips per second. The Target + RTT determines both key parameters of the traffic patterns (e.g. + burst sizes) and the thresholds on acceptable traffic statistics. + The Target RTT must be specified considering authentic packet + sizes: MTU sized packets on the forward path, ACK sized packets + (typically header_overhead) on the return path. Target MTU (Maximum Transmission Unit): The maximum MTU supported by the end-to-end path over which the application expects to meet the target performance. Assume 1500 Byte packets unless otherwise specified.
If some subpath forces a smaller MTU, then it becomes the target MTU, and all model calculations and subpath tests must use the same smaller MTU. Effective Bottleneck Data Rate: This is the bottleneck data rate inferred from the ACK stream, by looking at how much data the ACK stream reports delivered per unit time. If the path is thinning ACKs or batching packets the effective bottleneck rate can be much @@ -321,36 +309,39 @@ pipe size A general term for number of packets needed in flight (the window size) to exactly fill some network path or subpath. This is the window size which is normally the onset of queueing. target_pipe_size: The number of packets in flight (the window size) needed to exactly meet the target rate, with a single stream and no cross traffic for the specified application target data rate, RTT, and MTU. It is the amount of circulating data required to meet the target data rate, and implies the scale of the bursts that the network might experience. + Delivery Statistics Raw or summary statistics about packet delivery, + packet losses, ECN marks, reordering, or any other properties of + packet delivery that may be germane to transport performance. run length A general term for the observed, measured, or specified number of packets that are (to be) delivered between losses or ECN marks. Nominally one over the loss or ECN marking probability, if they are independently and identically distributed. target_run_length The target_run_length is an estimate of the minimum required headway between losses or ECN marks necessary to attain the target_data_rate over a path with the specified target_RTT and target_MTU, as computed by a mathematical model of TCP congestion control. A reference calculation is shown in Section 5.2 and alternatives in Appendix A Ancillary parameters used for some tests derating: Under some conditions the standard models are too conservative. The modeling framework permits some latitude in - relaxing or derating some test parameters as described in + relaxing or "derating" some test parameters as described in Section 5.3 in exchange for more stringent TDS validation procedures, described in Section 9. subpath_data_rate The maximum IP data rate supported by a subpath. This typically includes TCP/IP overhead, including headers, retransmits, etc. test_path_RTT The RTT between two measurement points using appropriate data and ACK packet sizes. test_path_pipe The amount of data necessary to fill a test path. Nominally the test path RTT times the subpath_data_rate (which should be part of the end-to-end subpath). @@ -562,41 +556,43 @@ likely to generate under normal operation at the target rate and RTT. By opening the protocol control loops, we remove most sources of temporal and spatial correlation in the traffic delivery statistics, such that each subpath's contribution to the end-to-end statistics can be assumed to be independent and stationary (The delivery statistics depend on the fine structure of the data transmissions, but not on long time scale state embedded in the sender, receiver or other network components.) Therefore each subpath's contribution to the end-to-end delivery statistics can be assumed to be independent, - and spatial composition techniques such as [RFC5835] apply. + and spatial composition techniques such as [RFC5835] and [RFC6049] + apply. In typical networks, the dominant bottleneck contributes the majority of the packet loss and ECN marks. Often the rest of the path makes insignificant contribution to these properties.
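The composition step can be illustrated with a short, non-normative sketch (Python; the names and numbers are hypothetical): under the independence assumption above, per-subpath loss probabilities compose into an end-to-end loss probability, which can then be compared to the end-to-end budget of 1/target_run_length.

    # Non-normative sketch: spatial composition of independent
    # per-subpath loss probabilities, checked against the budget.
    def end_to_end_loss_probability(subpath_loss_probs):
        p_delivered = 1.0
        for p in subpath_loss_probs:
            p_delivered *= (1.0 - p)
        return 1.0 - p_delivered

    def within_budget(subpath_loss_probs, target_run_length):
        budget = 1.0 / target_run_length
        return end_to_end_loss_probability(subpath_loss_probs) <= budget

    # Hypothetical example: a dominant bottleneck allowed 90% of the
    # budget and two other subpaths allowed 5% each, for a
    # target_run_length of 30000 packets.
    budget = 1.0 / 30000
    print(within_budget([0.9 * budget, 0.05 * budget, 0.05 * budget], 30000))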
A TDS should apportion the end-to-end budget for the specified parameters (primarily packet loss and ECN marks) to each subpath or group of subpaths. For example the dominant bottleneck may be permitted to contribute 90% of the loss budget, while the rest of the path is only permitted to contribute 10%. A TDS or FSTDS MUST apportion all relevant packet delivery statistics between different subpaths, such that the spatial composition of the - metrics yields end-to-end statics which are within the bounds - determined by the models. + apportioned metrics yields end-to-end statistics which are within the + bounds determined by the models. A network is expected to be able to sustain a Bulk TCP flow of a given data rate, MTU and RTT when the following conditions are met: o The raw link rate is higher than the target data rate. - o The observed run length is larger than required by a suitable TCP - performance model + + o The observed delivery statistics are better than required by a + suitable TCP performance model (e.g. fewer losses). o There is sufficient buffering at the dominant bottleneck to absorb a slowstart rate burst large enough to get the flow out of slowstart at a suitable window size. o There is sufficient buffering in the front path to absorb and smooth sender interface rate bursts at all scales that are likely to be generated by the application, any channel arbitration in the ACK path or other mechanisms. o When there is a standing queue at a bottleneck for a shared media subpath, there are suitable bounds on how the data and ACKs interact, for example due to the channel arbitration mechanism. @@ -620,24 +616,24 @@ sense to upper layers: payload bytes delivered to the application, above TCP. They exclude overheads associated with TCP and IP headers, retransmits and other protocols (e.g. DNS). Other end-to-end parameters defined in Section 2 include the effective bottleneck data rate, the sender interface data rate and the TCP/IP header sizes (overhead). The target data rate must be smaller than all link data rates by enough headroom to carry the transport protocol overhead, explicitly - including retransmissions and an allowance fluctuations in the actual - data rate, needed to meet the specified average rate. Specifying a - target rate with insufficient headroom are likely to result in - brittle measurements having little predictive value. + including retransmissions and an allowance for fluctuations in the + actual data rate, needed to meet the specified average rate. + Specifying a target rate with insufficient headroom is likely to + result in brittle measurements having little predictive value. Note that the target parameters can be specified for a hypothetical path, for example to construct a TDS designed for bench testing in the absence of a real application, or for a real physical test, for in situ testing of production infrastructure. The number of concurrent connections is explicitly not a parameter to this model. If a subpath requires multiple connections in order to meet the specified performance, that must be stated explicitly and the procedure described in Section 6.1.4 applies. @@ -675,27 +672,26 @@ Times per increase. To exactly fill the pipe, losses must be no closer than when the peak of the AIMD sawtooth reached exactly twice the target_pipe_size, otherwise the multiplicative window reduction triggered by the loss would cause the network to be underfilled. Following [MSMO97] the number of packets between losses must be the area under the AIMD sawtooth.
They must be no more frequent than every 1 in ((3/2)*target_pipe_size)*(2*target_pipe_size) packets, which simplifies to: target_run_length = 3*(target_pipe_size^2) - Note that this calculation is very conservative and is based on a number of assumptions that may not apply. Appendix A discusses these - assumptions and provides some alternative models. If a less - conservative model is used, a fully specified TDS or FSTDS MUST - document the actual method for computing target_run_length along with - the rationale for the underlying assumptions and the ratio of chosen + assumptions and provides some alternative models. If a different + model is used, a fully specified TDS or FSTDS MUST document the + actual method for computing target_run_length along with the + rationale for the underlying assumptions and the ratio of chosen target_run_length to the reference target_run_length calculated above. These two parameters, target_pipe_size and target_run_length, directly imply most of the individual parameters for the tests in Section 7. 5.3. Parameter Derating Since some aspects of the models are very conservative, this @@ -743,212 +740,186 @@ Repeated Slowstart bursts: Slowstart bursts are typically part of a larger scale pattern of repeated bursts, such as sending target_pipe_size packets as slowstart bursts on a target_RTT headway (burst start to burst start). Such a stream has three different average rates, depending on the averaging interval. At the finest time scale the average rate is the same as the sender interface rate, at a medium scale the average rate is twice the effective bottleneck link rate and at the longest time scales the average rate is equal to the target data rate. - Note that in conventional measurement theory exponential + Note that in conventional measurement theory, exponential distributions are often used to eliminate many sorts of correlations. For the procedures above, the correlations are created by the network elements and accurately reflect their behavior. At some point in the future, it may be desirable to introduce noise sources into the above pacing models, but they are not warranted at this time. 6.1.2. Constant window pseudo CBR Implement pseudo constant bit rate by running a standard protocol - such as TCP with a fixed bound on the window size. The rate is only - maintained in average over each RTT, and is subject to limitations of - the transport protocol. + such as TCP with a fixed window size. The rate is only maintained on + average over each RTT, and is subject to limitations of the transport + protocol. - The bound on the window size is computed from the target_data_rate - and the actual RTT of the test path. + The window size is computed from the target_data_rate and the actual + RTT of the test path. If the transport protocol fails to maintain the test rate within prescribed limits the test would typically be considered inconclusive - or failing, depending depending on what mechanism caused the reduced - rate. See the discussion of test outcomes in Section 6.2.1. + or failing, depending on what mechanism caused the reduced rate. See + the discussion of test outcomes in Section 6.2.1. 6.1.3. Scanned window pseudo CBR Same as the above, except the window is scanned across a range of sizes designed to include two key events, the onset of queueing and the onset of packet loss or ECN marks. The window is scanned by incrementing it by one packet for every 2*target_pipe_size delivered - packets. This mimics the additive increase phase of standard + packets.
This mimics the additive increase phase of standard TCP congestion avoidance and normally separates the window increases by approximately twice the target_RTT. There are two versions of this test: one built by applying a window - clamp to standard congestion control and one one built by stiffening - a non-standard transport protocol. When standard congestion control - is in effect, any losses or ECN marks cause the transport to revert - to a window smaller than the clamp such that the scanning clamp loses - control the window size. The NPAD pathdiag tool is an example of - this class of algorithms [Pathdiag]. + clamp to standard congestion control and the other built by + stiffening a non-standard transport protocol. When standard + congestion control is in effect, any losses or ECN marks cause the + transport to revert to a window smaller than the clamp such that the + scanning clamp loses control of the window size. The NPAD pathdiag + tool is an example of this class of algorithms [Pathdiag]. Alternatively a non-standard congestion control algorithm can respond to losses by transmitting extra data, such that it maintains the specified window size independent of losses or ECN marks. Such a stiffened transport explicitly violates mandatory Internet congestion control and is not suitable for in situ testing. It is only appropriate for engineering testing under laboratory conditions. The - Windowed Ping tools implemented such a test [WPING]. This tool has - been updated and is under test.[mpingSource] + Windowed Ping tools implemented such a test [WPING]. The tool + described in the paper has been updated.[mpingSource] The test procedures in Section 7.2 describe how to partition the scans into regions and how to interpret the results. 6.1.4. Concurrent or channelized testing - The procedures described in his document are only directly applicable - to single stream performance measurement, e.g. one TCP connection. - In an ideal world, we would disallow all performance claims based - multiple concurrent streams but this is not practical due to at least - two different issues. First, many very high rate link technologies - are channelized and pin individual flows to specific channels to - minimize reordering or other problems and second, TCP itself has - scaling limits. Although the former problem might be overcome - through different design decisions, the later problem is more deeply - rooted. + The procedures described in this document are only directly + applicable to single stream performance measurement, e.g. one TCP + connection. In an ideal world, we would disallow all performance + claims based on multiple concurrent streams, but this is not + practical due to at least two different issues. First, many very + high rate link technologies are channelized and pin individual flows + to specific channels to minimize reordering or other problems and + second, TCP itself has scaling limits. Although the former problem + might be overcome through different design decisions, the latter + problem is more deeply rooted. All standard [RFC5681] and de facto standard congestion control algorithms [CUBIC] have scaling limits, in the sense that as a long - fast network (LFN) with a fixed RTT and MTU gets faster, all + fast network (LFN) with a fixed RTT and MTU gets faster, these congestion control algorithms get less accurate and as a consequence - have difficulty filling the network [SLowScaling].
These properties - are a consequence of the original Reno AIMD congestion control design - and the requirement in RFC 5681 that all transport protocols have + have difficulty filling the network [CCscaling]. These properties are + a consequence of the original Reno AIMD congestion control design and + the requirement in [RFC5681] that all transport protocols have uniform response to congestion. There are a number of reasons to want to specify performance in terms of multiple concurrent flows, however this approach is not - recommended for data rates below several Mb/s, which can be attained - with run lengths under 10000 packets. Since run length goes as the - square of the data rate, at higher rates the run lengths can be - unfeasibly large, and multiple connection might be the only feasible - approach. For an example of this problem see Section 8.3. + recommended for data rates below several megabits per second, which + can be attained with run lengths under 10000 packets. Since the + required run length goes as the square of the data rate, at higher + rates the run lengths can be unreasonably large, and multiple + connections might be the only feasible approach. If multiple connections are deemed necessary to meet aggregate performance targets then this MUST be stated both in the design of the TDS and in any claims about network performance. The tests MUST be performed concurrently with the specified number of connections. For - the the tests that using bursty traffic, the bursts should be + the tests that use bursty traffic, the bursts should be synchronized across flows. -6.1.5. Intermittent Testing - - Any test which does not depend on queueing (e.g. the CBR tests) or - experiences periodic zero outstanding data during normal operation - (e.g. between bursts for the various burst tests), can be formulated - as an intermittent test, to reduce the perceived impact on other - traffic. The approach is to insert periodic pauses in the test at - any point when there is no expected queue occupancy. - - Intermittent testing can be used for ongoing monitoring for changes - in subpath quality with minimal disruption users. However it is not - suitable in environments where there are reactive links[REACTIVE]. - -6.1.6. Intermittent Scatter Testing - - Intermittent scatter testing is a technique for non-disruptively - evaluating the front path from a sender to a subscriber aggregation - point within an ISP at full load by intermittently testing across a - pool of subscriber access links, such that each subscriber sees - tolerable test traffic loads. The load on the front path should be - limited to be no more than that which would be caused by a single - test to an known to otherwise be idle subscriber. This test in - aggregate mimics a full load test from a content provider to the - aggregation point. - - Intermittent scatter testing can be used to reduce the measurement - noise introduced by unknown traffic on customer access links. - 6.2. Interpreting the Results 6.2.1. Test outcomes To perform an exhaustive test of an end-to-end network path, each test of the TDS is applied to each subpath of an end-to-end path. If any subpath fails any test then an application running over the end- to-end path can also be expected to fail to attain the target performance under some conditions. In addition to passing or failing, a test can be deemed to be inconclusive for a number of reasons.
Proper instrumentation and - treatment of inclusive outcomes is critical to the accuracy and + treatment of inconclusive outcomes is critical to the accuracy and robustness of Model Based Metrics. Tests can be inconclusive if the - precomputed traffic pattern was not accurately generated; the - measurement results were not statistically significant; and others - causes such as failing to meet some required preconditions for the - test. + precomputed traffic pattern or data rates were not accurately + generated; the measurement results were not statistically + significant; and other causes such as failing to meet some required + preconditions for the test. For example consider a test that implements Constant Window Pseudo CBR (Section 6.1.2) by adding rate controls and detailed traffic instrumentation to TCP (e.g. [RFC4898]). TCP includes built in control systems which might interfere with the sending data rate. If - such a test meets the the run length specification while failing to - attain the specified data rate it must be treated as an inconclusive - result, because we can not a priori determine if the reduced data - rate was caused by a TCP problem or a network problem, or if the - reduced data rate had a material effect on the run length measurement - itself. + such a test meets the required delivery statistics (e.g. run length) + while failing to attain the specified data rate it must be treated as + an inconclusive result, because we can not a priori determine if the + reduced data rate was caused by a TCP problem or a network problem, + or if the reduced data rate had a material effect on the delivery + statistics themselves. - Note that for load tests such as this example, an observed run length - that is too small can be considered to have failed the test because - it doesn't really matter that the test didn't attain the required - data rate. + Note that for load tests such as this example, if the observed + delivery statistics fail to meet the targets, the test can be + considered to have failed because it doesn't really matter + that the test didn't attain the required data rate. The really important new properties of MBM, such as vantage independence, are a direct consequence of opening the control loops in the protocols, such that the test traffic does not depend on network conditions or traffic received. Any mechanism that introduces feedback between the traffic measurements and the traffic generation is at risk of introducing nonlinearities that spoil these properties. Any exceptional event that indicates that such feedback has happened should cause the test to be considered inconclusive. One way to view inconclusive tests is that they reflect situations where a test outcome is ambiguous between limitations of the network - and some unknown limitation of the diagnostic test itself, which was - presumably caused by some uncontrolled feedback from the network. + and some unknown limitation of the diagnostic test itself, which may + have been caused by some uncontrolled feedback from the network.
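The outcome logic described in this example can be illustrated with a short, non-normative sketch (Python; the flag names are hypothetical, and a FSTDS would define the actual statistical tests behind them, see Section 6.2.2):

    # Non-normative sketch of the outcome logic for a Constant Window
    # Pseudo CBR load test.  All names are hypothetical.
    def classify_outcome(met_target_rate, run_length_significantly_good,
                         run_length_significantly_bad):
        if run_length_significantly_bad:
            # Delivery statistics are significantly worse than required:
            # the subpath fails regardless of the achieved data rate.
            return "fail"
        if not met_target_rate:
            # Delivery statistics were acceptable but the required data
            # rate was not attained, so the cause of the shortfall
            # (network vs. measurement tool) is ambiguous.
            return "inconclusive"
        if run_length_significantly_good:
            return "pass"
        # Neither statistical hypothesis was confirmed in time.
        return "inconclusive"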
Note that procedures that attempt to sweep the target parameter space - to find the bounds on some parameter (for example to find the highest + to find the limits on some parameter (for example to find the highest data rate for a subpath) are likely to break the location independent properties of Model Based Metrics, because the boundary between - passing and inconclusive is sensitive to the RTT because TCP's - ability to compensate for problems scales with the number of round - trips per second. Repeating the same procedure from another vantage - point with a different RTT is likely get a different result, because - TCP will get lower performance on the path with the longer RTT. + passing and inconclusive is generally sensitive to RTT. This + interaction is because TCP's ability to compensate for flaws in the + network scales with the number of round trips per second. Repeating + the same procedure from a different vantage point with a larger RTT + is likely to get a different result, because with the larger RTT, TCP + will less accurately control the data rate. One of the goals for evolving TDS designs will be to keep sharpening the distinction between inconclusive, passing and failing tests. The - criteria for for passing, failing and inclusive tests MUST be + criteria for passing, failing and inconclusive tests MUST be explicitly stated for every test in the TDS or FSTDS. One of the goals of evolving the testing process, procedures, tools and measurement point selection should be to minimize the number of inconclusive tests. It may be useful to keep raw data delivery statistics for deeper study of the behavior of the network path and to measure the tools. - This can help to drive tool evolution. Under some conditions it - might be possible to reevaluate the raw data for satisfying alternate - performance targets. However such procedures are likely to introduce - sampling bias and other implicit feedback which can cause false - results and exhibit MP vantage sensitivity. + Raw delivery statistics can help to drive tool evolution. Under some + conditions it might be possible to reevaluate the raw data for + satisfying alternate performance targets. However it is important to + guard against sampling bias and other implicit feedback which can + cause false results and exhibit measurement point vantage + sensitivity. 6.2.2. Statistical criteria for measuring run_length When evaluating the observed run_length, we need to determine appropriate packet stream sizes and acceptable error levels for efficient measurement. In practice, can we compare the empirically estimated packet loss and ECN marking probabilities with the targets as the sample size grows? How large a sample is needed to say that the measurements of packet transfer indicate a particular run length is present? @@ -1083,573 +1054,613 @@ This algorithm allows accurate comparison of the observed failure probability with the corresponding values predicted based on a fixed target_failure_rate, which is equal to 1.0 / target_run_length. 6.2.3. Reordering Tolerance All tests must be instrumented for packet level reordering [RFC4737]. However, there is no consensus for how much reordering should be acceptable. Over the last two decades the general trend has been to - make protocols and applications more tolerant to reordering, in - response to the gradual increase in reordering in the network.
This - increase has been due to the gradual deployment of parallelism in the - network, as a consequence of such technologies as multithreaded route - lookups and Equal Cost Multipath (ECMP) routing. These techniques to - increase network parallelism are critical to enabling overall - Internet growth to exceed Moore's Law. - - Section 5 of [RFC4737] proposed a metric that may be sufficient to - designate isolated reordered packets as effectively lost, because - TCP's retransmission response would be the same. + make protocols and applications more tolerant to reordering (see for + example [RFC4015]), in response to the gradual increase in reordering + in the network. This increase has been due to the gradual deployment + of technologies such as multi-threaded routing lookups and Equal Cost + Multipath (ECMP) routing. These techniques increase parallelism in + the network and are critical to enabling overall Internet growth to + exceed Moore's Law. - TCP should be able to adapt to reordering as long as the reordering + Note that transport retransmission strategies can trade off + reordering tolerance vs how quickly they can repair losses vs + overhead from spurious retransmissions. In advance of new + retransmission strategies we propose the following strawman: + Transport protocols should be able to adapt to reordering as long as + the reordering extent is no more than the maximum of one half window + or 1 mS, - whichever is larger. Note that there is a fundamental tradeoff - between tolerance to reordering and how quickly algorithms such as - fast retransmit can repair losses. Within this limit on reorder - extent, there should be no bound on reordering density. - - NB: Traditional TCP implementations were not compatible with this - metric, however newer implementations still need to be evaluated - - Parameters: - Reordering displacement: the maximum of one half of target_pipe_size - or 1 mS. - -6.3. Test Qualifications + whichever is larger. Within this limit on reorder extent, there + should be no bound on reordering density. - This entire section need to be completely overhauled. @@@@ It might - be summarized as "needs to be specified in a FSTDS". + By implication, reordering which is less than these bounds should not + be treated as a network impairment. However [RFC4737] still applies: + reordering should be instrumented and the maximum reordering that can + be properly characterized by the test (e.g. bound on history buffers) + should be recorded with the measurement results. - Send pre-load traffic as needed to activate radios with a sleep mode, - or other "reactive network" elements (term defined in - [draft-morton-ippm-2330-update-01]). + Reordering tolerance and diagnostic bounds must be specified in a + FSTDS. - In general failing to accurately generate the test traffic has to be - treated as an inconclusive test, since it must be presumed that the - error in traffic generation might have affected the test outcome. To - the extent that the network itself had an effect on the the traffic - generation (e.g. in the standing queue tests) the possibility exists - that allowing too large of error margin in the traffic generation - might introduce feedback loops that comprise the vantage independents - properties of these tests. +6.3. Test Preconditions - The proper treatment of cross traffic is different for different - subpaths.
In general when testing infrastructure which is associated - with only one subscriber, the test should be treated as inconclusive - it that subscriber is active on the network. However, for shared - infrastructure managed by an ISP, the question at hand is likely to - be testing if ISP has sufficient total capacity. In such cases the - presence of cross traffic due to other subscribers is explicitly part - of the network conditions and its effects are explicitly part of the - test. + Many tests have preconditions which are required to assure their + validity. For example the presence or non-presence of cross traffic + on specific subpaths, or appropriate preloading to put reactive + network elements into the proper states [I-D.ietf-ippm-2330-update]. + If preconditions are not properly satisfied for some reason, the + tests should be considered to be inconclusive. In general it is + useful to preserve diagnostic information about why the preconditions + were not met, and the test data that was collected, if any. - These two cases do not cover all subpaths. For example, WiFI which - itself shares unmanaged channel space with other devices is unlikely - to be unsuitable for any prescriptive measurement. + It is important to preserve the record that a test was scheduled, + because otherwise precondition enforcement mechanisms can introduce + sampling bias. For example, canceling tests due to load on + subscriber access links may introduce sampling bias for tests of the + rest of the network by reducing the number of tests during peak + network load. - Note that canceling tests due to load on subscriber lines may - introduce sampling bias for testing other parts of the - infrastructure. For this reason tests that are scheduled but not run - due to load should be treated as a special case of "inconclusive". + Test preconditions and failure actions must be specified in a FSTDS. 7. Diagnostic Tests The diagnostic tests below are organized by traffic pattern: basic - data rate and run length, standing queues, slowstart bursts, and - sender rate bursts. We also introduce some combined tests which are - more efficient the expense of conflating the signatures of different - failures. + data rate and delivery statistics, standing queues, slowstart bursts, + and sender rate bursts. We also introduce some combined tests which + are more efficient when networks are expected to pass, but conflate + diagnostic signatures when they fail. -7.1. Basic Data Rate and Run Length Tests + There are a number of test details which are not fully defined here. + They must be fully specified in a FSTDS. From a standardization + perspective, this lack of specificity will weaken this version of + Model Based Metrics, however it is anticipated that this will be more + than offset by the extent to which MBM suppresses the problems caused + by using transport protocols for measurement, e.g. non-specific MBM + metrics are likely to have better repeatability than many existing + BTC like metrics. Once we have good field experience, the missing + details can be fully specified. - We propose several versions of the basic data rate and run length - test. All measure the number of packets delivered between losses or - ECN marks, using a data stream that is rate controlled at or below - the target_data_rate. +7.1. Basic Data Rate and Delivery Statistics Tests + + We propose several versions of the basic data rate and delivery + statistics test.
All measure the number of packets delivered between + losses or ECN marks, using a data stream that is rate controlled at + or below the target_data_rate. The tests below differ in how the data rate is controlled. The data can be paced on a timer, or window controlled at full target data rate. The first two tests implicitly confirm that sub_path has sufficient raw capacity to carry the target_data_rate. They are recommend for relatively infrequent testing, such as an installation - or auditing process. The third, background run length, is a low rate - test designed for ongoing monitoring for changes in subpath quality. + or periodic auditing process. The third, background delivery + statistics, is a low rate test designed for ongoing monitoring for + changes in subpath quality. All rely on the receiver accumulating packet delivery statistics as described in Section 6.2.2 to score the outcome: - Pass: it is statistically significant that the observed run length is - larger than the target_run_length. + Pass: it is statistically significant that the observed interval + between losses or ECN marks is larger than the target_run_length. - Fail: it is statistically significant that the observed run length is - smaller than the target_run_length. + Fail: it is statistically significant that the observed interval + between losses or ECN marks is smaller than the target_run_length. A test is considered to be inconclusive if it failed to meet the data rate as specified below, meet the qualifications defined in Section 6.3 or neither run length statistical hypothesis was confirmed in the allotted test duration. -7.1.1. Run Length at Paced Full Data Rate +7.1.1. Delivery Statistics at Paced Full Data Rate Confirm that the observed run length is at least the target_run_length while relying on timer to send data at the target_rate using the procedure described in in Section 6.1.1 with a - burst size of 1 (single packets). + burst size of 1 (single packets) or 2 (packet pairs). The test is considered to be inconclusive if the packet transmission can not be accurately controlled for any reason. -7.1.2. Run Length at Full Data Windowed Rate + RFC 6673 [RFC6673] is appropriate for measuring delivery statistics + at full data rate. + +7.1.2. Delivery Statistics at Full Data Windowed Rate Confirm that the observed run length is at least the - target_run_length while sending at an average rate equal to the - target_data_rate, by controlling (or clamping) the window size of a - conventional transport protocol to a fixed value computed from the - properties of the test path, typically - test_window=target_data_rate*test_RTT/target_MTU. + target_run_length while sending at an average rate approximately + equal to the target_data_rate, by controlling (or clamping) the + window size of a conventional transport protocol to a fixed value + computed from the properties of the test path, typically + test_window=target_data_rate*test_RTT/target_MTU. Note that if there + is any interaction between the forward and return path, test_window + may need to be adjusted slightly to compensate for the resulting + inflated RTT. Since losses and ECN marks generally cause transport protocols to at least temporarily reduce their data rates, this test is expected to be less precise about controlling its data rate. It should not be considered inconclusive as long as at least some of the round trips - reached the full target_data_rate, without incurring losses. 
To pass - this test the network MUST deliver target_pipe_size packets in - target_RTT time without any losses or ECN marks at least once per two - target_pipe_size round trips, in addition to meeting the run length - statistical test. + reached the full target_data_rate without incurring losses or ECN + marks. To pass this test the network MUST deliver target_pipe_size + packets in target_RTT time without any losses or ECN marks at least + once per two target_pipe_size round trips, in addition to meeting the + run length statistical test. -7.1.3. Background Run Length Tests +7.1.3. Background Delivery Statistics Tests The background run length is a low rate version of the target rate test above, designed for ongoing lightweight monitoring for changes in the observed subpath run length without disrupting users. It should be used in conjunction with one of the above full rate tests because it does not confirm that the subpath can support the raw data rate. - Existing loss metrics such as [RFC6673] might be appropriate for - measuring background run length. + RFC 6673 [RFC6673] is appropriate for measuring background delivery + statistics. -7.2. Standing Queue tests +7.2. Standing Queue Tests These tests confirm that the bottleneck is well behaved across the onset of packet loss, which typically follows after the onset of queueing. Well behaved generally means lossless for transient queues, but once the queue has been sustained for a sufficient period of time (or reaches a sufficient queue depth) there should be a small number of losses to signal to the transport protocol that it should reduce its window. Losses that are too early can prevent the transport from averaging at the target_data_rate. Losses that are too late indicate that the queue might be subject to bufferbloat - [Bufferbloat] and inflict excess queuing delays on all flows sharing - the bottleneck queue. Excess losses make loss recovery problematic - for the transport protocol. Non-linear or erratic RTT fluctuations - suggest poor interactions between the channel acquisition systems and - the transport self clock. All of the tests in this section use the - same basic scanning algorithm but score the link on the basis of how - well it avoids each of these problems. + [wikiBloat] and inflict excess queuing delays on all flows sharing + the bottleneck queue. Excess losses (more than a few per RTT) make + loss recovery problematic for the transport protocol. Non-linear or + erratic RTT fluctuations suggest poor interactions between the + channel acquisition algorithms and the transport self clock. All of + the tests in this section use the same basic scanning algorithm, + described here, but score the link on the basis of how well it avoids + each of these problems. For some technologies the data might not be subject to increasing delays, in which case the data rate will vary with the window size - all the way up to the onset of losses or ECN marks. For theses - technologies, the discussion of queueing does not apply, but it is - still required that the onset of losses (or ECN marks) be at an + all the way up to the onset of load induced losses or ECN marks. For + these technologies, the discussion of queueing does not apply, but + it is still required that the onset of losses (or ECN marks) be at an appropriate point and progressive. Use the procedure in Section 6.1.3 to sweep the window across the onset of queueing and the onset of loss.
The tests below all assume that the scan emulates standard additive increase and delayed ACK by incrementing the window by one packet for every 2*target_pipe_size - packets delivered. A scan can be divided into three regions: below - the onset of queueing, a standing queue, and at or beyond the onset - of loss. + packets delivered. A scan can typically be divided into three + regions: below the onset of queueing, a standing queue, and at or + beyond the onset of loss. Below the onset of queueing the RTT is typically fairly constant, and the data rate varies in proportion to the window size. Once the data rate reaches the link rate, the data rate becomes fairly constant, - and the RTT increases in proportion to the the window size. The - precise transition from one region to the other can be identified by - the maximum network power, defined to be the ratio data rate over the - RTT[POWER]. + and the RTT increases in proportion to the increase in window size. + The precise transition across the start of queueing can be identified + by the maximum network power, defined to be the ratio of the data + rate over the RTT. The network power can be computed at each window + size, and the window with the maximum is taken as the start of the + queueing region. For technologies that do not have conventional queues, start the scan - at a window equal to the test_window, i.e. starting at the target - rate, instead of the power point. + at a window equal to the test_window=target_data_rate*test_RTT/ + target_MTU, i.e. starting at the target rate, instead of the power + point. If there is random background loss (e.g. bit errors, etc), precise - determination of the onset of packet loss may require multiple scans. - Above the onset of loss, all transport protocols are expected to - experience periodic losses. For the stiffened transport case they - will be determined by the AQM algorithm in the network or the details - of how the the window increase function responds to loss. For the - standard transport case the details of periodic losses are typically - dominated by the behavior of the transport protocol itself. + determination of the onset of queue induced packet loss may require + multiple scans. Above the onset of queuing loss, all transport + protocols are expected to experience periodic losses determined by + the interaction between the congestion control and AQM algorithms. + For standard congestion control algorithms the periodic losses are + likely to be relatively widely spaced and the details are typically + dominated by the behavior of the transport protocol itself. For the + stiffened transport protocol case (with non-standard, aggressive + congestion control algorithms) the details of periodic losses will be + dominated by how the window increase function responds to loss. 7.2.1. Congestion Avoidance A link passes the congestion avoidance standing queue test if more - than target_run_length packets are delivered between the power point - (or test_window) and the first loss or ECN mark. If this test is - implemented using a standards congestion control algorithm with a - clamp, it can be used in situ in the production internet as a - capacity test. For an example of such a test see [NPAD]. + than target_run_length packets are delivered between the onset of + queueing (as determined by the window with the maximum network power) + and the first loss or ECN mark.
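The identification of the onset of queueing from a window scan can be illustrated with a short, non-normative sketch (Python; the sample format is hypothetical):

    # Non-normative sketch: locate the onset of queueing in a window
    # scan by maximum network power (data rate divided by RTT).
    # "samples" is a hypothetical list of (window, data_rate, rtt)
    # tuples gathered by the scan described in Section 6.1.3.
    def onset_of_queueing(samples):
        best_window, best_power = None, 0.0
        for window, data_rate, rtt in samples:
            power = data_rate / rtt
            if power > best_power:
                best_window, best_power = window, power
        return best_window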
If this test is implemented using a + standards congestion control algorithm with a clamp, it can be used + in situ in the production internet as a capacity test. For an + example of such a test see [Pathdiag]. + + For technologies that do not have conventional queues, use the + test_window in place of the onset of queueing, i.e. a link passes the + congestion avoidance standing queue test if more than + target_run_length packets are delivered between the start of the scan + at test_window and the first loss or ECN mark.

7.2.2. Bufferbloat

This test confirms that there is some mechanism to limit buffer occupancy (e.g. that prevents bufferbloat). Note that this is not strictly a requirement for single stream bulk performance, however if - there is no mechanism to limit buffer occupancy then a single stream - with sufficient data to deliver is likely to cause the problems - described in [RFC2309] and [Bufferbloat]. This may cause only minor - symptoms for the dominant flow, but has the potential to make the - link unusable for other flows and applications. + there is no mechanism to limit buffer queue occupancy then a single + stream with sufficient data to deliver is likely to cause the + problems described in [RFC2309] and [wikiBloat]. This may cause only + minor symptoms for the dominant flow, but has the potential to make + the link unusable for other flows and applications.

- Pass if the onset of loss is before a standing queue has introduced - more delay than than twice target_RTT, or other well defined limit. - Note that there is not yet a model for how much standing queue is - acceptable. The factor of two chosen here reflects a rule of thumb. - Note that in conjunction with the previous test, this test implies - that the first loss should occur at a queueing delay which is between - one and two times the target_RTT. + Pass if the onset of loss occurs before a standing queue has + introduced more delay than twice target_RTT, or other well defined + and specified limit. Note that there is not yet a model for how much + standing queue is acceptable. The factor of two chosen here reflects + a rule of thumb. In conjunction with the previous test, this test + implies that the first loss should occur at a queueing delay which is + between one and two times the target_RTT. + + Specified RTT limits that are larger than twice the target_RTT must + be fully justified in the FSTDS.

7.2.3. Non excessive loss

This test confirms that the onset of loss is not excessive. Pass if - losses are bound by the the fluctuations in the cross traffic, such - that transient load (bursts) do not cause dips in aggregate raw - throughput. e.g. pass as long as the losses are no more bursty than - are expected from a simple drop tail queue. Although this test could - be made more precise it is really included here for pedantic - completeness. + losses are equal to or less than the increase in the cross traffic + plus the test traffic window increase on the previous RTT. This + could be restated as non-decreasing link throughput at the onset of + loss, which is easy to meet as long as discarding packets is not more + expensive than delivering them. (Note that when there is a transient + drop in link throughput, outside of a standing queue test, a link + that passes the other queue tests in this document will have + sufficient queue space to hold one RTT worth of data).

7.2.4. Duplex Self Interference

This engineering test confirms a bound on the interactions between - the forward data path and the ACK return path.
Fail if the RTT rises - by more than some fixed bound above the expected queueing time - computed from trom the excess window divided by the link data rate. + the forward data path and the ACK return path. + + Some historical half duplex technologies had the property that each + direction held the channel until it completely drains its queue. + When a self clocked transport protocol, such as TCP, has data and + acks passing in opposite directions through such a link, the behavior + often reverts to stop-and-wait. Each additional packet added to the + window raises the observed RTT by two forward path packet times, once + as it passes through the data path, and once for the additional delay + incurred by the ACK waiting on the return path. + + The duplex self interference test fails if the RTT rises by more than + some fixed bound above the expected queueing time computed from trom + the excess window divided by the link data rate. 7.3. Slowstart tests These tests mimic slowstart: data is sent at twice the effective bottleneck rate to exercise the queue at the dominant bottleneck. - They are deemed inconclusive if the elapsed time to send the data - burst is not less than half of the time to receive the ACKs. (i.e. - sending data too fast is ok, but sending it slower than twice the - actual bottleneck rate as indicated by the ACKs is deemed + In general they are deemed inconclusive if the elapsed time to send + the data burst is not less than half of the time to receive the ACKs. + (i.e. sending data too fast is ok, but sending it slower than twice + the actual bottleneck rate as indicated by the ACKs is deemed inconclusive). Space the bursts such that the average data rate is equal to the target_data_rate. 7.3.1. Full Window slowstart test This is a capacity test to confirm that slowstart is not likely to exit prematurely. Send slowstart bursts that are target_pipe_size total packets. Accumulate packet delivery statistics as described in Section 6.2.2 to score the outcome. Pass if it is statistically significant that - the observed run length is larger than the target_run_length. Fail - if it is statistically significant that the observed run length is - smaller than the target_run_length. + the observed interval between losses or ECN marks is larger than the + target_run_length. Fail if it is statistically significant that the + observed interval between losses or ECN marks is smaller than the + target_run_length. Note that these are the same parameters as the Sender Full Window burst test, except the burst rate is at slowestart rate, rather than sender interface rate. 7.3.2. Slowstart AQM test Do a continuous slowstart (send data continuously at slowstart_rate), until the first loss, stop, allow the network to drain and repeat, gathering statistics on the last packet delivered before the loss, the loss pattern, maximum observed RTT and window size. Justify the results. There is not currently sufficient theory justifying requiring any particular result, however design decisions that affect the outcome of this tests also affect how the network balances - between long and short flows (the "mice and elephants" problem). + between long and short flows (the "mice and elephants" problem). The + queue at the time of the first loss should be at least one half of + the target_RTT. This is an engineering test: It would be best performed on a quiescent network or testbed, since cross traffic has the potential to change the results. 7.4. 
Sender Rate Burst tests

These tests determine how well the network can deliver bursts sent at the sender's interface rate. Note that this test most heavily exercises the front path, and is likely to include infrastructure that may be out of scope for a subscriber ISP. Also, there are several details that are not precisely defined. For starters there is not a standard server interface rate. 1 Gb/s and 10 Gb/s are very common today, but higher rates will become cost effective and can be expected to be dominant some time in the future.

Current standards permit TCP to send full window bursts following - an application pause. Congestion Window Validation [RFC2861], is not - required, but even if was it does not take effect until an - application pause is longer than an RTO. Since this is standard - behavior, it is desirable that the network be able to deliver such - bursts, otherwise application pauses will cause unwarranted losses. + an application pause. (Congestion Window Validation [RFC2861] is + not required, but even if it was, it does not take effect until an + application pause is longer than an RTO.) Since full window bursts + are consistent with standard behavior, it is desirable that the + network be able to deliver such bursts, otherwise application pauses + will cause unwarranted losses. Note that the AIMD sawtooth requires + a peak window that is twice target_pipe_size, so the worst case burst + may be 2*target_pipe_size.

It is also understood in the application and serving community that interface rate bursts have a cost to the network that has to be balanced against other costs in the servers themselves. For example - TCP Segmentation Offload [TSO] reduces server CPU in exchange for + TCP Segmentation Offload (TSO) reduces server CPU in exchange for larger network bursts, which increase the stress on network buffer memory. There is not yet a theory to unify these costs or to provide a framework for trying to optimize global efficiency. We do not yet have a model for how much the network should tolerate server rate bursts. Some bursts must be tolerated by the network, but it is probably unreasonable to expect the network to be able to efficiently deliver all data as a series of bursts.

For this reason, this is the only test for which we explicitly - encourage detrateing. A TDS should include a table of pairs of + encourage derating. A TDS should include a table of pairs of derating parameters: what burst size to use as a fraction of the target_pipe_size, and how much each burst size is permitted to reduce the run length, relative to the target_run_length.

7.5. Combined Tests

- These tests are more efficient from a deployment/operational - perspective, but may not be possible to diagnose if they fail. + Combined tests efficiently confirm multiple network properties in a + single test, possibly as a side effect of production content + delivery. They require less measurement traffic than other testing + strategies at the cost of conflating diagnostic signatures when they + fail. These are by far the most efficient for testing networks that + are expected to pass all tests.

7.5.1. Sustained burst test

- Send target_pipe_size*derate sender interface rate bursts every - target_RTT*derate, for derate between 0 and 1. Verify that the - observed run length meets target_run_length. Key observations: - o This test is subpath RTT invariant, as long as the tester can - generate the required pattern.
+ The sustained burst test implements a combined worst case version of + all of the capacity tests above. In its simplest form send + target_pipe_size bursts of packets at server interface rate with + target_RTT headway (burst start to burst start). Verify that the + observed delivery statistics meets the target_run_length. Key + observations: o The subpath under test is expected to go idle for some fraction of the time: (subpath_data_rate-target_rate)/subpath_data_rate. - Failing to do so suggests a problem with the procedure and an + Failing to do so indicates a problem with the procedure and an inconclusive test result. - o This test is more strenuous than the slowstart tests: they are not - needed if the link passes this test with derate=1. + o The burst sensitivity can be derated by sending smaller bursts + more frequently. E.g. send target_pipe_size*derate packet bursts + every target_RTT*derate. + o When not derated this test is more strenuous than the slowstart + capacity tests. o A link that passes this test is likely to be able to sustain higher rates (close to subpath_data_rate) for paths with RTTs - smaller than the target_RTT. Offsetting this performance - underestimation is part of the rationale behind permitting - derating in general. - - o This test can be implemented with standard instrumented - TCP[RFC4898], using a specialized measurement application at one - end and a minimal service at the other end [RFC 863, RFC 864]. It - may require tweaks to the TCP implementation. [MBMSource] + significantly smaller than the target_RTT. Offsetting this + performance underestimation is part of the rationale behind + permitting derating in general. + o This test can be implemented with instrumented TCP [RFC4898], + using a specialized measurement application at one end [MBMSource] + and a minimal service at the other end [RFC0863] [RFC0864]. A + prototype tool exists and is under evaluation . o This test is efficient to implement, since it does not require per-packet timers, and can make use of TSO in modern NIC hardware. - o This test is not totally sufficient: the standing window - engineering tests are also needed to be sure that the link is well - behaved at and beyond the onset of congestion. - o This one test can be proven to be the one capacity test to - supplant them all. + o This test is not completely sufficient: the standing window + engineering tests are also needed to ensure that the link is well + behaved at and beyond the onset of congestion. Links that exhibit + punitive behaviors such as sudden high loss under overload may not + interact well with TCP's self clock. + o Assuming the link passes relevant standing window engineering + tests (particularly that it has a progressive onset of loss at an + appropriate queue depth) the passing sustained burst test is + (believed to be) a sufficient verify that the subpath will not + impair stream at the target performance under all conditions. + Proving this statement is the subject of ongoing research. -7.5.2. Live Streaming Media + Note that this test is clearly independent of the subpath RTT, or + other details of the measurement infrastructure, as long as the + measurement infrastructure can accurately and reliably deliver the + required bursts to the subpath under test. + +7.5.2. Streaming Media Model Based Metrics can be implemented as a side effect of serving any non-throughput maximizing traffic*, such as streaming media, with some additional controls and instrumentation in the servers. 
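One such server side control is sketched below. This is an illustration only: it anticipates the serving_window_clamp constraint developed in the following paragraphs, set_sock_max_window() is a hypothetical hook for whatever mechanism the server uses to cap the transport window, and the bits-per-second unit convention (hence the factor of 8) is an assumption, since the formula below leaves units implicit.

   # Illustrative only: clamp a serving connection's window so that
   # its traffic stays within the test envelope, per the
   # serving_window_clamp formula given below.  set_sock_max_window()
   # is a hypothetical hook, not a real API.
   def clamp_serving_window(sock, target_data_rate, serving_RTT,
                            target_MTU, header_overhead,
                            set_sock_max_window):
       payload = target_MTU - header_overhead       # bytes per packet
       # Assumes target_data_rate in bits/s, so the result is packets.
       serving_window_clamp = (target_data_rate * serving_RTT /
                               (payload * 8))
       set_sock_max_window(sock, int(serving_window_clamp))
       return serving_window_clamp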
The essential requirement is that the traffic be constrained such that even with arbitrary application pauses, bursts and data rate fluctuations, the traffic stays within the envelope defined by the - individual tests described above, for a specific TDS. + individual tests described above. If the serving_data_rate is less than or equal to the target_data_rate and the serving_RTT (the RTT between the sender and client) is less than the target_RTT, this constraint is most easily - implemented by clamping the transport window size to: + implemented by clamping the transport window size to be no larger + than: serving_window_clamp=target_data_rate*serving_RTT/ (target_MTU-header_overhead) - The serving_window_clamp will limit the both the serving data rate - and burst sizes to be no larger than the procedures in Section 7.1.2 - and Section 7.4 or Section 7.5.1. Since the serving RTT is smaller - than the target_RTT, the worst case bursts that might be generated - under these conditions will be smaller than called for by Section 7.4 - and the sender rate burst sizes are implicitly derated by the - serving_window_clamp divided by the target_pipe_size at the very - least. (The traffic might be smoother than specified by the sender - interface rate bursts test.) - - Note that if the application tolerates fluctuations in its actual - data rate (say by use of a playout buffer) it is important that the - target_data_rate be above the actual average rate needed by the - application so it can recover after transient pauses caused by - congestion or the application itself. - - Alternatively the sender data rate and bursts might be explicitly - controlled by a host shaper or pacing at the sender. This would - provide better control and work for serving_RTTs that are larger than - the target_RTT, but it is substantially more complicated to - implement. With this technique, any traffic might be used for - measurement. - - * Note that this technique might be applied to any content, if users - are willing to tolerate reduced data rate to inhibit TCP equilibrium - behavior. - -8. Examples + Under the above constraints the serving_window_clamp will limit the + both the serving data rate and burst sizes to be no larger than the + procedures in Section 7.1.2 and Section 7.4 or Section 7.5.1. Since + the serving RTT is smaller than the target_RTT, the worst case bursts + that might be generated under these conditions will be smaller than + called for by Section 7.4 and the sender rate burst sizes are + implicitly derated by the serving_window_clamp divided by the + target_pipe_size at the very least. (The traffic might be smoother + than specified by the sender interface rate bursts test.) - In this section we present TDS for a couple of performance - specifications. + Note that it is important that the target_data_rate be above the + actual average rate needed by the application so it can recover after + transient pauses caused by congestion or the application itself. - Tentatively: 5 Mb/s*50 ms, 1 Mb/s*50ms, 250kbp*100mS + In an alternative implementation the data rate and bursts might be + explicitly controlled by a host shaper or pacing at the sender. This + would provide better control over transmissions but it is + substantially more complicated to implement and would be likely to + have a higher CPU overhead. -8.1. 
Near serving HD streaming video

- Today the best quality HD video requires slightly less than 5 Mb/s - [HDvideo]. Since it is desirable to serve such content locally, we - assume that the content will be within 50 mS, which is enough to - cover continental Europe or either US coast from a single site.

+8. An Example

- 5 Mb/s over a 50 ms path + In this section we illustrate a TDS designed to confirm that an + access ISP can reliably deliver HD video from multiple content + providers to all of their customers. With modern codecs HD video + generally fits in 2.5 Mb/s [@@HDvideo]. Due to their geographical + size, network topology and modem designs the ISP determines that most + content is within a 50 ms RTT of their users (this is a sufficient + RTT to cover continental Europe or either US coast from a single + serving site).

+ 2.5 Mb/s over a 50 ms path

   +----------------------+-------+---------+
 - | End to End Parameter | Value | units   |
 + | End to End Parameter | value | units   |
   +----------------------+-------+---------+
 - | target_rate          | 5     | Mb/s    |
 + | target_rate          | 2.5   | Mb/s    |
   | target_RTT           | 50    | ms      |
 - | traget_MTU           | 1500  | bytes   |
 - | target_pipe_size     | 22    | packets |
 - | target_run_length    | 1452  | packets |
 + | target_MTU           | 1500  | bytes   |
 + | header_overhead      | 64    | bytes   |
 + | target_pipe_size     | 11    | packets |
 + | target_run_length    | 363   | packets |
   +----------------------+-------+---------+

                  Table 1

- This example uses the most conservative TCP model and no derating. - -8.2. Far serving SD streaming video - - Standard Quality video typically fits in 1 Mb/s [SDvideo]. This can - be reasonably delivered via longer paths with larger. We assume - 100mS. - - 1 Mb/s over a 100 ms path - - +----------------------+-------+---------+ - | End to End Parameter | Value | units | - +----------------------+-------+---------+ - | target_rate | 1 | Mb/s | - | target_RTT | 100 | ms | - | traget_MTU | 1500 | bytes | - | target_pipe_size | 9 | packets | - | target_run_length | 243 | packets | - +----------------------+-------+---------+ - - Table 2 - - This example uses the most conservative TCP model and no derating. - -8.3. Bulk delivery of remote scientific data - - This example corresponds to 100 Mb/s bulk scientific data over a - moderately long RTT. Note that the target_run_length is infeasible - for most networks. - - 100 Mb/s over a 200 ms path

+ Table 1 shows the default TCP model with no derating, and as such is + quite conservative. The simplest TDS would be to use the sustained + burst test, described in Section 7.5.1. Such a test would send 11 + packet bursts every 50 ms, and confirm that there was no more than + 1 packet loss per 33 bursts (363 total packets in 1.650 seconds); the + arithmetic is sketched below.

- +----------------------+---------+---------+ - | End to End Parameter | Value | units | - +----------------------+---------+---------+ - | target_rate | 100 | Mb/s | - | target_RTT | 200 | ms | - | traget_MTU | 1500 | bytes | - | target_pipe_size | 1741 | packets | - | target_run_length | 9093243 | packets | - +----------------------+---------+---------+

+ Since this number represents the entire end-to-end loss budget, + independent subpath tests could be implemented by apportioning the + loss rate across subpaths.
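Before turning to how this loss budget might be apportioned, the following Python sketch reproduces the arithmetic behind Table 1 and the simple sustained burst schedule above. It is illustrative only: the unit conventions (rates in bits per second, times in seconds, sizes in bytes) and the use of the conservative reference model of Section 5.2 with no derating are assumptions, not additional requirements.

   # Sketch of the arithmetic behind Table 1 and the simplest
   # sustained burst test; illustrative only.
   from math import ceil

   target_rate = 2.5e6            # bits/s
   target_RTT = 0.050             # seconds
   target_MTU = 1500              # bytes
   header_overhead = 64           # bytes

   payload = target_MTU - header_overhead                   # 1436 bytes
   target_pipe_size = ceil(target_rate * target_RTT /
                           (payload * 8))                    # 11 packets
   # Conservative reference model (Section 5.2), no derating:
   target_run_length = 3 * target_pipe_size ** 2             # 363 packets

   # Simplest sustained burst test (Section 7.5.1): one
   # target_pipe_size burst per target_RTT, with no more than one
   # loss or ECN mark per target_run_length delivered packets.
   bursts_per_loss = target_run_length // target_pipe_size   # 33 bursts
   test_duration = bursts_per_loss * target_RTT               # 1.65 s

Apportioning this budget across subpaths, as described next, changes only the acceptable loss ratio for each subpath, not the traffic pattern.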
For example 50% of the losses might be + allocated to the access or last mile link to the user, 40% to the + interconnects with other ISPs and 1% to each internal hop (assuming + no more than 10 internal hops). Then all of the subpaths can be + tested independently, and the spatial composition of passing subpaths + would be expected to be within the end-to-end loss budget. - Table 3 + Testing interconnects has generally been problematic: conventional + performance tests run between Measurement Points adjacent to either + side of the interconnect, are not generally useful. Unconstrained + TCP tests, such as netperf tests [@@netperf] are typically overly + aggressive because the RTT is so small (often less than 1 mS). These + tools are likely to report inflated numbers by pushing other traffic + off of the network. As a consequence they are useless for predicting + actual user performance, and may themselves be quite disruptive. + Model Based Metrics solves this problem. The same test pattern as + used on other links can be applied to the interconnect. For our + example, when apportioned 40% of the losses, 11 packet bursts sent + every 50mS should have fewer than one loss per 82 bursts (902 + packets). 9. Validation Since some aspects of the models are likely to be too conservative, - Section 5.2 and Section 5.3 permit alternate protocol models and test - parameter derating. In exchange for this latitude in the modelling - process, we require demonstrations that such a TDS can robustly - detect links that will prevent authentic applications using state-of- - the-art protocol implementations from meeting the specified - performance targets. This correctness criteria is potentially - difficult to prove, because it implicitly requires validating a TDS - against all possible links and subpaths. + Section 5.2 permits alternate protocol models and Section 5.3 permits + test parameter derating. If either of these techniques are used, we + require demonstrations that such a TDS can robustly detect links that + will prevent authentic applications using state-of-the-art protocol + implementations from meeting the specified performance targets. This + correctness criteria is potentially difficult to prove, because it + implicitly requires validating a TDS against all possible links and + subpaths. The procedures described here are still experimental. - We suggest two strategies, both of which should be applied: first, + We suggest two approaches, both of which should be applied: first, publish a fully open description of the TDS, including what assumptions were used and and how it was derived, such that the - research community can evaluate these decisions, test them and - comment on there applicability; and second, demonstrate that an + research community can evaluate the design decisions, test them and + comment on their applicability; and second, demonstrate that an applications running over an infinitessimally passing testbed do meet the performance targets. An infinitessimally passing testbed resembles a epsilon-delta proof in calculus. Construct a test network such that all of the - individual tests of the TDS only pass by small (infinitesimal) + individual tests of the TDS pass by only small (infinitesimal) margins, and demonstrate that a variety of authentic applications running over real TCP implementations (or other protocol as appropriate) meets the end-to-end target parameters over such a network. 
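Continuing the sketch above (illustrative only, with the same assumed unit conventions), the marginal configuration of such an infinitesimally passing testbed for the Section 8 example could be derived as follows; the values match those discussed in the following paragraphs, and the reading of "exactly the header overhead above 2.5 Mb/s" as scaling the link rate by target_MTU/(target_MTU - header_overhead) is an assumption.

   # Sketch of a marginal ("infinitesimally passing") testbed
   # configuration for the Section 8 example; illustrative only.
   target_rate = 2.5e6              # bits/s of application payload
   target_MTU = 1500                # bytes
   header_overhead = 64             # bytes
   target_pipe_size = 11            # packets, from Table 1
   target_run_length = 363          # packets, from Table 1

   # Link rate just large enough to carry the payload rate once the
   # per packet header overhead is included (about 2.61 Mb/s).
   bottleneck_rate = target_rate * target_MTU / (target_MTU -
                                                 header_overhead)
   # Background (random) loss probability exactly at the loss budget.
   background_loss_probability = 1.0 / target_run_length     # ~1/363
   # Bottleneck queue holds exactly one target_pipe_size window, so a
   # burst one packet larger should overflow it.
   bottleneck_queue_packets = target_pipe_size                # 11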
The workloads should include multiple types of streaming media and transaction oriented short flows (e.g. synthetic web traffic).

- For example using our example in our HD streaming video TDS described - in Section 8.1, the bottleneck data rate should be 5 Mb/s, the per - packet random background loss probability should be 1/1453, for a run - length of 1452 packets, the bottleneck queue should be 22 packets and - the front path should have just enough buffering to withstand 22 - packet line rate bursts. We want every one of the TDS tests to fail - if we slightly increase the relevant test parameter, so for example - sending a 23 packet slowstart bursts should cause excess (possibly - deterministic) packet drops at the dominant queue at the bottleneck. - On this infinitessimally passing network it should be possible for a - real ral application using a stock TCP implementation in the vendor's - default configuration to attain 5 Mb/s over an 50 mS path. + For example, for the HD streaming video TDS described in Section 8, + the link layer bottleneck data rate should be exactly the header + overhead above 2.5 Mb/s, the per packet random background loss + probability should be 1/363, for a run length of 363 packets, the + bottleneck queue should be 11 packets and the front path should have + just enough buffering to withstand 11 packet interface rate bursts. + We want every one of the TDS tests to fail if we slightly increase + the relevant test parameter, so for example sending a 12 packet + burst should cause excess (possibly deterministic) packet drops at + the dominant queue at the bottleneck. On this infinitesimally + passing network it should be possible for a real application using a + stock TCP implementation in the vendor's default configuration to + attain 2.5 Mb/s over a 50 ms path.

- The most difficult part of setting up such a testbed is arranging to - infinitesimally pass the individual tests. We suggest two - approaches: constraining the network devices not to use all available - resources (limiting available buffer space or data rate); and + The most difficult part of setting up such a testbed is arranging for + it to infinitesimally pass the individual tests. Two approaches are + suggested: constraining the network devices not to use all available + resources (e.g. by limiting available buffer space or data rate); and preloading subpaths with cross traffic. Note that it is important that a single environment be constructed which infinitesimally passes all tests at the same time, otherwise there is a chance that TCP can exploit extra latitude in some parameters (such as data rate) to partially compensate for constraints in other parameters (queue space, or vice versa).

To the extent that a TDS is used to inform public dialog it should be fully publicly documented, including the details of the tests, what assumptions were used and how it was derived. All of the details of - the validation experiment should also be public with sufficient + the validation experiment should also be published with sufficient detail for the experiments to be replicated by other researchers. All components should either be open source or fully described proprietary implementations that are available to the research community.

- This work here is inspired by open tools running on an open platform, - using open techniques to collect open data. See Measurement Lab - [http://www.measurementlab.net/] - 10.
Acknowledgements Ganga Maguluri suggested the statistical test for measuring loss probability in the target run length. Alex Gilgur for helping with the statistics and contributing and alternate model. Meredith Whittaker for improving the clarity of the communications. + This work was inspired by Measurement Lab: open tools running on an + open platform, using open tools to collect open data. See + http://www.measurementlab.net/ + 11. Informative References + [RFC0863] Postel, J., "Discard Protocol", STD 21, RFC 863, May 1983. + + [RFC0864] Postel, J., "Character Generator Protocol", STD 22, + RFC 864, May 1983. + [RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering, S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, S., Wroclawski, J., and L. Zhang, "Recommendations on Queue Management and Congestion Avoidance in the Internet", RFC 2309, April 1998. [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, "Framework for IP Performance Metrics", RFC 2330, May 1998. @@ -1657,44 +1668,52 @@ [RFC2861] Handley, M., Padhye, J., and S. Floyd, "TCP Congestion Window Validation", RFC 2861, June 2000. [RFC3148] Mathis, M. and M. Allman, "A Framework for Defining Empirical Bulk Transfer Capacity Metrics", RFC 3148, July 2001. [RFC3465] Allman, M., "TCP Congestion Control with Appropriate Byte Counting (ABC)", RFC 3465, February 2003. - [RFC4898] Mathis, M., Heffner, J., and R. Raghunarayan, "TCP - Extended Statistics MIB", RFC 4898, May 2007. + [RFC4015] Ludwig, R. and A. Gurtov, "The Eifel Response Algorithm + for TCP", RFC 4015, February 2005. [RFC4737] Morton, A., Ciavattone, L., Ramachandran, G., Shalunov, S., and J. Perser, "Packet Reordering Metrics", RFC 4737, November 2006. + [RFC4898] Mathis, M., Heffner, J., and R. Raghunarayan, "TCP + Extended Statistics MIB", RFC 4898, May 2007. + [RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion Control", RFC 5681, September 2009. [RFC5835] Morton, A. and S. Van den Berghe, "Framework for Metric Composition", RFC 5835, April 2010. [RFC6049] Morton, A. and E. Stephan, "Spatial Composition of Metrics", RFC 6049, January 2011. [RFC6673] Morton, A., "Round-Trip Packet Loss Metrics", RFC 6673, August 2012. - [I-D.morton-ippm-lmap-path] + [I-D.ietf-ippm-2330-update] + Fabini, J. and A. Morton, "Advanced Stream and Sampling + Framework for IPPM", draft-ietf-ippm-2330-update-05 (work + in progress), May 2014. + + [I-D.ietf-ippm-lmap-path] Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P., and A. Morton, "A Reference Path and Measurement Points for - LMAP", draft-morton-ippm-lmap-path-00 (work in progress), - January 2013. + LMAP", draft-ietf-ippm-lmap-path-04 (work in progress), + June 2014. [MSMO97] Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm", Computer Communications Review volume 27, number3, July 1997. [WPING] Mathis, M., "Windowed Ping: An IP Level Performance Diagnostic", INET 94, June 1994. [mpingSource] @@ -1715,25 +1734,43 @@ Control - 2nd ed.", ISBN 0-471-51988-X, 1990. [Rtool] R Development Core Team, "R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org/", , 2011. [CVST] Krueger, T. and M. Braun, "R package: Fast Cross- Validation via Sequential Testing", version 0.1, 11 2012. + [CUBIC] Ha, S., Rhee, I., and L. 
Xu, "CUBIC: a new TCP-friendly + high-speed TCP variant", SIGOPS Oper. Syst. Rev. 42, 5, + July 2008. + [LMCUBIC] Ledesma Goyzueta, R. and Y. Chen, "A Deterministic Loss Model Based Analysis of CUBIC, IEEE International Conference on Computing, Networking and Communications (ICNC), E-ISBN : 978-1-4673-5286-4", January 2013. + [AFD] Pan, R., Breslau, L., Prabhakar, B., and S. Shenker, + "Approximate fairness through differential dropping", + SIGCOMM Comput. Commun. Rev. 33, 2, April 2003. + + [wikiBloat] + Wikipedia, "Bufferbloat", http://en.wikipedia.org/w/ + index.php?title=Bufferbloat&oldid=608805474, June 2014. + + [CCscaling] + Fernando, F., Doyle, J., and S. Steven, "Scalable laws for + stable network congestion control", Proceedings of + Conference on Decision and + Control, http://www.ee.ucla.edu/~paganini, December 2001. + Appendix A. Model Derivations The reference target_run_length described in Section 5.2 is based on very conservative assumptions: that all window above target_pipe_size contributes to a standing queue that raises the RTT, and that classic Reno congestion control with delayed ACKs are in effect. In this section we provide two alternative calculations using different assumptions. It may seem out of place to allow such latitude in a measurement @@ -1770,24 +1807,24 @@ queueing delay, and losses are determined monitoring the average data rate, for example by the use of a virtual queue as in [AFD]. In such a scheme the RTT is constant and TCP's AIMD congestion control causes the data rate to fluctuate in a sawtooth. If the traffic is being controlled in a manner that is consistent with the metrics here, goal would be to make the actual average rate equal to the target_data_rate. We can derive a model for Reno TCP and delayed ACK under the above set of assumptions: for some value of Wmin, the window will sweep - from Wmin to 2*Wmin in 2*Wmin RTT. Unlike the queueing case where - Wmin = Target_pipe_size, we want the average of Wmin and 2*Wmin to be - the target_pipe_size, so the average rate is the target rate. Thus - we want Wmin = (2/3)*target_pipe_size. + from Wmin packets to 2*Wmin packets in 2*Wmin RTT. Unlike the + queueing case where Wmin = Target_pipe_size, we want the average of + Wmin and 2*Wmin to be the target_pipe_size, so the average rate is + the target rate. Thus we want Wmin = (2/3)*target_pipe_size. Between losses each sawtooth delivers (1/2)(Wmin+2*Wmin)(2Wmin) packets in 2*Wmin round trip times. Substituting these together we get: target_run_length = (4/3)(target_pipe_size^2) Note that this is 44% of the reference run length. This makes sense because under the assumptions in Section 5.2 the AMID sawtooth caused @@ -1808,50 +1845,51 @@ authors transform the time to reach the maximum Window size in terms of RTT and a parameter for the multiplicative rate decrease on observing loss, beta (whose default value is 0.2 in CUBIC). The expected value of Window size, E[W], is also dependent on C, a parameter of CUBIC that determines its window-growth aggressiveness (values from 0.01 to 4). E[W] = ( C*(RTT/p)^3 * ((4-beta)/beta) )^-4 and, further assuming Poisson arrival, the mean throughput, x, is + x = E[W]/RTT We note that under these conditions (deterministic single losses), the value of E[W] is always greater than 0.8 of the maximum window - size ~= reference_run_length. (as far as I can tell) + size ~= reference_run_length. @@@@ Appendix B. 
Complex Queueing For many network technologies simple queueing models do not apply: the network schedules, thins or otherwise alters the timing of ACKs and data, generally to raise the efficiency of the channel allocation process when confronted with relatively widely spaced small ACKs. These efficiency strategies are ubiquitous for half duplex, wireless and broadcast media. Altering the ACK stream generally has two consequences: it raises the effective bottleneck data rate, making slowstart burst at higher rates (possibly as high as the sender's interface rate) and it effectively raises the RTT by the average time that the ACKs were delayed. The first effect can be partially mitigated by reclocking ACKs once they are beyond the bottleneck on the return path to the sender, however this further raises the effective RTT. The most extreme example of this sort of behavior would be a half duplex channel that is not released as long as end point currently - holding the channel has pending traffic. Such environments cause - self clocked protocols under full load to revert to extremely - inefficient stop and wait behavior, where they send an entire window - of data as a single burst, followed by the entire window of ACKs on - the return path. + holding the channel has queued traffic. Such environments cause self + clocked protocols under full load to revert to extremely inefficient + stop and wait behavior, where they send an entire window of data as a + single burst, followed by the entire window of ACKs on the return + path. If a particular end-to-end path contains a link or device that alters the ACK stream, then the entire path from the sender up to the bottleneck must be tested at the burst parameters implied by the ACK scheduling algorithm. The most important parameter is the Effective Bottleneck Data Rate, which is the average rate at which the ACKs advance snd.una. Note that thinning the ACKs (relying on the cumulative nature of seg.ack to permit discarding some ACKs) is implies an effectively infinite bottleneck data rate. It is important to note that due to the self clock, ill conceived channel @@ -1861,21 +1899,21 @@ Holding data or ACKs for channel allocation or other reasons (such as error correction) always raises the effective RTT relative to the minimum delay for the path. Therefore it may be necessary to replace target_RTT in the calculation in Section 5.2 by an effective_RTT, which includes the target_RTT reflecting the fixed part of the path plus a term to account for the extra delays introduced by these mechanisms. Appendix C. Version Control - Formatted: Fri Feb 14 14:07:33 PST 2014 + Formatted: Thu Jul 3 20:19:04 PDT 2014 Authors' Addresses Matt Mathis Google, Inc 1600 Amphitheater Parkway Mountain View, California 94043 USA Email: mattmathis@google.com