Network Working Group                                    B. Constantine
Internet Draft                                                      JDSU
Intended status: Informational                               R. Krishnan
Expires: February 2016                            Brocade Communications
                                                            June 2, 2015

                     Traffic Management Benchmarking
                draft-ietf-bmwg-traffic-management-05.txt
Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on December 2, 2015.

Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
skipping to change at page 3, line 13

with representative application traffic.
Table of Contents

1. Introduction...................................................4
1.1. Traffic Management Overview...............................4
1.2. DUT Lab Configuration and Testing Overview................5
2. Conventions used in this document..............................7
3. Scope and Goals................................................8
4. Traffic Benchmarking Metrics...................................9
4.1. Metrics for Stateless Traffic Tests......................10
4.2. Metrics for Stateful Traffic Tests.......................11
5. Tester Capabilities...........................................12
5.1. Stateless Test Traffic Generation........................13
5.1.1. Burst Hunt with Stateless Traffic...................13
5.2. Stateful Test Pattern Generation.........................13
5.2.1. TCP Test Pattern Definitions........................15
6. Traffic Benchmarking Methodology..............................16
6.1. Policing Tests...........................................16
6.1.1 Policer Individual Tests................................17
6.1.2 Policer Capacity Tests..............................18
6.1.2.1 Maximum Policers on Single Physical Port..........19
6.1.2.2 Single Policer on All Physical Ports..............20
6.1.2.3 Maximum Policers on All Physical Ports............21
6.2. Queue/Scheduler Tests....................................21
6.2.1 Queue/Scheduler Individual Tests........................21
6.2.1.1 Testing Queue/Scheduler with Stateless Traffic....21
6.2.1.2 Testing Queue/Scheduler with Stateful Traffic.....23
6.2.2 Queue / Scheduler Capacity Tests......................25
6.2.2.1 Multiple Queues / Single Port Active..............25
6.2.2.1.1 Strict Priority on Egress Port..................26
6.2.2.1.2 Strict Priority + Weighted Fair Queue (WFQ).....26
6.2.2.2 Single Queue per Port / All Ports Active..........27
6.2.2.3 Multiple Queues per Port, All Ports Active........27
6.3. Shaper tests.............................................28
6.3.1 Shaper Individual Tests...............................28
6.3.1.1 Testing Shaper with Stateless Traffic.............29
6.3.1.2 Testing Shaper with Stateful Traffic..............30
6.3.2 Shaper Capacity Tests.................................32
6.3.2.1 Single Queue Shaped, All Physical Ports Active....32
6.3.2.2 All Queues Shaped, Single Port Active.............32
6.3.2.3 All Queues Shaped, All Ports Active...............33
6.4. Concurrent Capacity Load Tests...........................34
7. Security Considerations.......................................34
8. IANA Considerations...........................................34
9. References....................................................35
9.1. Normative References.....................................35
9.2. Informative References...................................35
Appendix A: Open Source Tools for Traffic Management Testing.....36
Appendix B: Stateful TCP Test Patterns...........................37
Acknowledgments..................................................41
Authors' Addresses...............................................42
1. Introduction

Traffic management (e.g. policing, shaping) is an increasingly
important component when implementing network Quality of Service
(QoS).

There is currently no framework to benchmark these features, although
some standards address specific areas as described in Section 1.1.
skipping to change at page 4, line 50

device according to the traffic classification. If the traffic
exceeds the provisioned limits, the traffic is either dropped or
remarked and forwarded on to the next network device.

- Traffic Scheduling: provides traffic classification within the
network device by directing packets to various types of queues and
applies a dispatching algorithm to assign the forwarding sequence
of packets.

- Traffic shaping: a traffic control technique that actively buffers
and smooths the output rate in an attempt to adapt bursty traffic
to the configured limits.
- Active Queue Management (AQM): AQM involves monitoring the status
of internal queues and proactively dropping (or remarking) packets,
which causes hosts using congestion-aware protocols to back off and
in turn alleviates queue congestion [AQM-RECO]. In contrast, classic
traffic management techniques reactively drop (or remark) packets
based on a queue-full condition. The benchmarking scenarios for AQM
are different and are outside the scope of this testing framework.

Even though AQM is outside the scope of this framework, it should be
noted that the TCP metrics and TCP test patterns (defined in Sections
4.2 and 5.2, respectively) could be useful to test new AQM algorithms
(targeted to alleviate bufferbloat). Examples of these algorithms
include CoDel and PIE (draft-ietf-aqm-codel and draft-ietf-aqm-pie).
The following diagram is a generic model of the traffic management
capabilities within a network device. It is not intended to
represent all variations of manufacturer traffic management
capabilities, but to provide context for this test framework.
|----------|    |----------------|    |--------------|    |----------|
|          |    |                |    |              |    |          |
|Interface |    |Ingress Actions |    |Egress Actions|    |Interface |
|Input     |    |(classification,|    |(scheduling,  |    |Output    |

skipping to change at page 6, line 7

|              |<----|       |<----|          |<---|           |
|              |     |       |     |          |    |           |
+--------------+     +-------+     +----------+    +-----------+

As shown in the test diagram, the framework supports uni-directional
and bi-directional traffic management tests (where the transmitting
and receiving roles would be reversed on the return path).

This testing framework describes the tests and metrics for each of
the following traffic management functions:
- Classification
- Policing
- Queuing / Scheduling
- Shaping

The tests are divided into individual and rated capacity tests.
The individual tests are intended to benchmark the traffic management
functions according to the metrics defined in Section 4. The
capacity tests verify traffic management functions under the load of
many simultaneous individual tests and their flows.
skipping to change at page 6, line 42

Also note that the Network Delay Emulator (NDE) SHOULD be passive in
nature, such as a fiber spool. This is recommended to eliminate the
potential effects that an active delay element (i.e. a test impairment
generator) may have on the test flows. In the case where a fiber
spool is not practical due to the desired latency, an active NDE MUST
be independently verified to be capable of adding the configured
delay without loss. In other words, the DUT would be removed and the
NDE performance benchmarked independently.
Note that the NDE SHOULD be used only to add emulated delay. Most
NDEs allow for per-flow delay actions, emulating QoS prioritization.
For this framework, the NDE's sole purpose is simply to add delay to
all packets (emulate network latency). So to benchmark the performance
of the NDE, maximum offered load should be tested against the
following frame sizes: 128, 256, 512, 768, 1024, 1500, and 9600 bytes.
The delay accuracy at each of these packet sizes can then be used to
calibrate the range of expected Bandwidth Delay Product (BDP) for the
TCP stateful tests.
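The BDP arithmetic behind this calibration step can be sketched as follows. This is a minimal illustration only: the helper names and the example delay values are assumptions, not part of the framework; the frame sizes are those listed above.

```python
# Sketch of the NDE calibration math described above.  The frame sizes
# are those listed in the text; the function names and the example delay
# values are illustrative assumptions.

FRAME_SIZES = [128, 256, 512, 768, 1024, 1500, 9600]  # bytes

def bdp_bytes(bottleneck_bw_bps, rtt_seconds):
    """Bandwidth Delay Product in bytes: BB * RTT / 8."""
    return bottleneck_bw_bps * rtt_seconds / 8

def calibration_table(bottleneck_bw_bps, measured_delays):
    """measured_delays: frame size -> delay (seconds) measured through
    the NDE at maximum offered load.  Returns frame size -> expected BDP
    (bytes), i.e. the BDP range the stateful TCP tests can rely on.
    """
    return {size: bdp_bytes(bottleneck_bw_bps, delay)
            for size, delay in measured_delays.items()}

# Example: a 100 Mbps bottleneck with the NDE adding 5 ms at every size
delays = {size: 0.005 for size in FRAME_SIZES}
print(round(calibration_table(100e6, delays)[1500]))  # 62500 bytes
```

If the measured delay varies with frame size, the table immediately shows how the usable BDP range shifts across the calibration points.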
2. Conventions used in this document

skipping to change at page 8, line 43

methods and metrics to conduct repeatable testing, which will
provide the means to compare measured performance between DUTs.

As mentioned in section 1.2, these methods describe the individual
tests and metrics for several management functions. It is also within
scope that this framework will benchmark each function in terms of
overall rated capacity. This involves concurrent testing of multiple
interfaces with the specific traffic management function enabled, up
to the capacity limit of each interface.
It is not within the scope of this framework to specify the procedure
for testing multiple configurations of traffic management functions
concurrently. The multitude of possible combinations is almost
unbounded, and the ability to identify functional "break points"
would be almost impossible.

However, section 6.4 provides suggestions for some profiles of
concurrent functions that would be useful to benchmark. The key
requirement for any concurrent test function is that tests MUST
produce reliable and repeatable results.

Also, it is not within scope to perform conformance testing. Tests
defined in this framework benchmark the traffic management functions
according to the metrics defined in section 4 and do not address any
conformance to standards related to traffic management.

The current specifications don't specify exact behavior or
implementation, and the specifications that do exist (cited in
Section 1.1) allow implementations to vary with respect to short-term
rate accuracy and other factors. This is a primary driver for this
framework: to provide an objective means to compare vendor traffic
management functions.
Another goal is to devise methods that utilize flows with
congestion-aware transport (TCP) as part of the traffic load and
still produce repeatable results in the isolated test environment.
This framework will derive stateful test patterns (TCP or
application layer) that can also be used to further benchmark the
performance of applicable traffic management techniques such as
queuing / scheduling and traffic shaping. In cases where the
network device is stateful in nature (e.g. a firewall), stateful
test pattern traffic is important to test along with stateless UDP
traffic in specific test scenarios (e.g. applications using TCP
transport and UDP VoIP).

As mentioned earlier in the document, repeatability of test results
is critical, especially considering the nature of stateful TCP
traffic. To this end, the stateful tests will use TCP test patterns
to emulate applications. This framework also provides guidelines
for application modeling and open source tools to achieve the
repeatable stimulus. And finally, TCP metrics from [RFC6349] MUST
be measured for each stateful test and provide the means to compare
each repeated test.
Even though the scope is targeted to TCP applications (e.g. web,
email, database), the framework could be applied to SCTP in terms
of test patterns. WebRTC, SS7 signaling, and 3GPP interfaces are
examples of SCTP-based protocols that could be modeled with this
framework to benchmark SCTP's effect on traffic management
performance.

Also note that currently, this framework does not address tcpcrypt
(encrypted TCP) test patterns, although the metrics defined in
Section 4.2 can still be used, since the metrics are based on TCP
retransmission and RTT measurements (rather than the payload).
Thus, if tcpcrypt becomes popular, it would be natural for
benchmarkers to consider encrypted TCP patterns and include them
in test cases.
4. Traffic Benchmarking Metrics

The metrics to be measured during the benchmarks are divided into two
(2) sections: packet layer metrics used for the stateless traffic
testing and TCP layer metrics used for the stateful traffic testing.

4.1. Metrics for Stateless Traffic Tests

Stateless traffic measurements require that sequence number and

skipping to change at page 11, line 22

- Packet Delay Variation (PDV): the Packet Delay Variation metric is
the variation between the timestamps of the received egress port
packets, as specified in [RFC5481]. Note that per [RFC5481],
this PDV is the variation of one-way delay across many packets in
the traffic flow. Per the measurement formula in [RFC5481], select
the high percentile of 99%, and units of measure will be a real
number of seconds (a negative value is not possible for PDV and would
indicate a measurement error).
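The PDV selection described above can be sketched as a small computation, assuming per-packet one-way delay samples are already available from the tester; the function name and nearest-rank percentile choice are illustrative, not mandated by the framework.

```python
import math

def pdv_99th(one_way_delays):
    """Sketch of the PDV selection per the text: each packet's one-way
    delay minus the minimum delay seen in the flow, then the 99th
    percentile (nearest-rank) of that distribution.  The result is in
    seconds and cannot be negative; a negative value would indicate a
    measurement error.
    """
    d_min = min(one_way_delays)
    variations = sorted(d - d_min for d in one_way_delays)
    rank = max(0, math.ceil(0.99 * len(variations)) - 1)
    return variations[rank]

# Four delay samples (seconds); minimum is 10 ms, so PDV is about 5 ms
print(round(pdv_99th([0.010, 0.011, 0.012, 0.015]) * 1000, 3))  # 5.0
```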
- Shaper Rate (SR): the SR represents the average DUT output
rate (bps) over the test interval. The Shaper Rate is only
applicable to the traffic shaping tests.

- Shaper Burst Bytes (SBB): a traffic shaper will emit packets in
different size "trains"; these are frames sent back-to-back,
respecting only the mandatory inter-frame gap. This metric
characterizes the method by which the shaper emits traffic. Some
shapers transmit larger bursts per interval, and a burst of 1 packet
would apply to the extreme case of a shaper sending a CBR stream of
single packets. This metric SHALL be reported in units of bytes,
KBytes, or MBytes.
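One possible way to derive SBB from a packet capture is sketched below: group packets whose inter-arrival gap is at or below a threshold into back-to-back "trains" and report each train's byte size. The gap threshold is an assumption the tester must choose for its link speed; it is not specified by this framework.

```python
def burst_trains(packets, gap_threshold):
    """Group captured packets into back-to-back "trains" and return the
    size of each train in bytes.

    packets: list of (arrival_time_seconds, size_bytes) in arrival order.
    gap_threshold: largest inter-arrival gap (seconds) still treated as
    back-to-back, i.e. only the mandatory inter-frame gap between frames.
    """
    trains = []
    for i, (ts, size) in enumerate(packets):
        if i > 0 and ts - packets[i - 1][0] <= gap_threshold:
            trains[-1] += size   # same train: frames arrived back-to-back
        else:
            trains.append(size)  # larger gap seen: a new burst starts
    return trains

# Three 1500-byte frames back-to-back, a 10 ms pause, then one frame:
print(burst_trains([(0.0, 1500), (0.000012, 1500), (0.000024, 1500),
                    (0.010, 1500)], gap_threshold=0.0001))  # [4500, 1500]
```

A shaper emitting a CBR stream of single packets would produce trains of one packet each, matching the extreme case described above.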
skipping to change at page 16, line 15

6. Traffic Benchmarking Methodology

The traffic benchmarking methodology uses the test set-up from
section 2 and metrics defined in section 4.

Each test SHOULD compare the network device's internal statistics
(available via command line management interface, SNMP, etc.) to the
measured metrics defined in section 4. This evaluates the accuracy
of the internal traffic management counters under individual test
conditions and capacity test conditions that are defined in each
subsection. This comparison is not intended to compare real-time
statistics, but the cumulative statistics reported after the test
has completed and device counters have updated (it is common for
device counters to update after a 10 second or greater interval).
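That post-test comparison might be automated along these lines; the counter names and the 0.1% tolerance are illustrative assumptions, not requirements of the framework.

```python
def compare_counters(dut, tester, tolerance=0.001):
    """Compare cumulative DUT counters against tester measurements,
    taken after the test has completed and the DUT counters have
    settled.

    dut, tester: dicts such as {"frames": ..., "bytes": ...}.
    Returns {name: (dut_value, tester_value, within_tolerance)}.
    """
    return {name: (dut[name], tester[name],
                   abs(dut[name] - tester[name])
                   <= tolerance * max(abs(tester[name]), 1))
            for name in dut.keys() & tester.keys()}

# A 0.05% discrepancy on bytes passes a 0.1% tolerance; frames match:
report = compare_counters({"frames": 1000000, "bytes": 1499250000},
                          {"frames": 1000000, "bytes": 1500000000})
print(report["frames"][2], report["bytes"][2])  # True True
```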
From a device configuration standpoint, scheduling and shaping
functionality can be applied to logical ports, such as Link
Aggregation Groups (LAG). This would result in the same scheduling
and shaping configuration applied to all the member physical ports.
The focus of this draft is only on tests at a physical port level.

The following sections provide the objective, procedure, metrics, and
reporting format for each test. For all test steps, the following
global parameters must be specified:
skipping to change at page 18, line 38

********************************************************

6.1.2 Policer Capacity Tests

Objective:
The intent of the capacity tests is to verify the policer performance
in a scaled environment with multiple ingress customer policers on
multiple physical ports. This test will benchmark the maximum number
of active policers as specified by the device manufacturer.

Test Summary:
The specified policing function capacity is generally expressed in
terms of the number of policers active on each individual physical
port as well as the number of unique policer rates that are utilized.
For all of the capacity tests, the benchmarking test procedure and
report format described in Section 6.1.1 for a single policer MUST
be applied to each of the physical port policers.

As an example, a Layer 2 switching device may specify that each of
the 32 physical ports can be policed using a pool of policing service
policies. The device may carry a single customer's traffic on each
skipping to change at page 23, line 44

BB * RTT / 8 (in bytes)

The NDE must be configured to an RTT value which is large enough to
allow the BDP to be greater than QL. An example test scenario is
defined below:

- Ingress link = GigE
- Egress link = 100 Mbps (BB)
- QL = 32KB

RTT(min) = QL * 8 / BB and would equal 2.56 ms (and the BDP = 32KB)

In this example, one (1) TCP connection with window size / SSB of
32KB would be required to test the QL of 32KB. This Bulk Transfer
Test can be accomplished using iperf as described in Appendix A.
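The arithmetic in this example can be checked directly. Here QL is taken as 32,000 bytes, the decimal interpretation that yields the 2.56 ms figure in the text; the function name is illustrative.

```python
def rtt_min_seconds(ql_bytes, bb_bps):
    """Minimum RTT such that BDP >= QL:  RTT(min) = QL * 8 / BB."""
    return ql_bytes * 8 / bb_bps

ql = 32 * 1000   # QL = 32KB (decimal), per the example
bb = 100e6       # egress bottleneck bandwidth BB = 100 Mbps

rtt = rtt_min_seconds(ql, bb)
bdp = bb * rtt / 8            # BDP = BB * RTT / 8, in bytes

print(round(rtt * 1000, 2))   # 2.56 (milliseconds)
print(round(bdp))             # 32000 -> the BDP equals QL, as stated
```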
Two types of TCP tests MUST be performed: Bulk Transfer test and
Micro Burst Test Pattern as documented in Appendix B. The Bulk
Transfer Test only bursts during the TCP Slow Start (or Congestion
Avoidance) state, while the Micro Burst test emulates application
layer bursting which may occur any time during the TCP connection.

skipping to change at page 24, line 15
Test Metrics:

The test results will be recorded per the stateful metrics defined in
section 4.2, primarily the TCP Test Pattern Execution Time (TTPET),
TCP Efficiency, and Buffer Delay.

Procedure:

1. Configure the DUT queue length (QL) and scheduling technique
   (FIFO, SP, etc.) parameters

2. Configure the test generator* with a profile of an emulated
   application traffic mixture

   - The application mixture MUST be defined in terms of percentage
     of the total bandwidth to be tested

   - The rate of transmission for each application within the mixture
     MUST also be configurable

   * The test generator MUST be capable of generating precise TCP
     test patterns for each application specified, to ensure
     repeatable results.
3. Generate application traffic between the ingress (client side) and
   egress (server side) ports of the DUT and measure the application
   throughput metrics (TTPET, TCP Efficiency, and Buffer Delay),
   per application stream and at the ingress and egress port (across
   the entire Td, default 60 seconds duration).
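As an illustration of step 2, an application mixture profile can be expressed as percentage shares of the tested bandwidth. The application names and shares below are hypothetical, not mandated by this framework:

```python
# Hypothetical application traffic mixture for a stateful test.
# Each entry: application name -> percentage of total tested bandwidth.
MIXTURE = {
    "bulk-transfer": 50,
    "http-browsing": 30,
    "voip-signaling": 20,
}

def per_app_rates_bps(total_bw_bps, mixture):
    """Translate percentage shares into per-application rates.

    The mixture is defined as percentages of the total bandwidth under
    test, and each application's transmission rate is configurable."""
    if sum(mixture.values()) != 100:
        raise ValueError("mixture percentages must total 100")
    return {app: total_bw_bps * pct // 100 for app, pct in mixture.items()}

rates = per_app_rates_bps(100_000_000, MIXTURE)   # 100 Mbps under test
print(rates)   # e.g. bulk-transfer -> 50 Mbps, http-browsing -> 30 Mbps
```

A real test generator would additionally attach a precise TCP test pattern to each application, per the footnote above.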
Concerning application measurements, a couple of items require
clarification. An application session may be comprised of a single
skipping to change at page 31, line 18
Test Metrics:

The test results will be recorded per the stateful metrics defined in
section 4.2, primarily the TCP Test Pattern Execution Time (TTPET),
TCP Efficiency, and Buffer Delay.

Procedure:

1. Configure the DUT shaper ingress queue length (QL) and shaper
   egress rate (SR, Bc, Be) parameters

2. Configure the test generator* with a profile of an emulated
   application traffic mixture

   - The application mixture MUST be defined in terms of percentage
     of the total bandwidth to be tested

   - The rate of transmission for each application within the mixture
     MUST also be configurable

   * The test generator MUST be capable of generating precise TCP
     test patterns for each application specified, to ensure
     repeatable results.
3. Generate application traffic between the ingress (client side) and
   egress (server side) ports of the DUT and measure the metrics
   (TTPET, TCP Efficiency, and Buffer Delay) per application stream
   and at the ingress and egress port (across the entire Td, default
   30 seconds duration).
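For orientation, the shaper parameters in step 1 can be read as a token-bucket model: SR is the shaping (egress) rate, Bc the committed burst, and Be the excess burst. The sketch below is illustrative only and is not a normative definition of shaper behavior:

```python
# Illustrative token-bucket view of the shaper parameters:
#   SR = shaper (egress) rate, bits per second
#   Bc = committed burst size, in bits
#   Be = excess burst size, extra bits tolerated above Bc
# Simplified sketch, not a normative shaper definition.

class TokenBucketShaper:
    def __init__(self, sr_bps, bc_bits, be_bits):
        self.sr = sr_bps
        self.capacity = bc_bits + be_bits
        self.tokens = bc_bits        # start with one committed burst
        self.last_t = 0.0

    def allow(self, frame_bits, now_s):
        """Return True if the frame conforms (may be sent) at time now_s."""
        # Replenish tokens at the shaping rate SR, capped at Bc + Be.
        self.tokens = min(self.capacity,
                          self.tokens + (now_s - self.last_t) * self.sr)
        self.last_t = now_s
        if frame_bits <= self.tokens:
            self.tokens -= frame_bits
            return True
        return False        # non-conforming: queue/delay the frame

shaper = TokenBucketShaper(sr_bps=100_000_000, bc_bits=80_000, be_bits=0)
print(shaper.allow(64_000, 0.0))   # True: fits in the committed burst
print(shaper.allow(64_000, 0.0))   # False: bucket exhausted at t=0
```

In a real shaper, non-conforming frames are buffered in the ingress queue (QL) rather than dropped, which is what the TCP metrics above exercise.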
Reporting Format:

The Shaper Stateful Traffic individual report MUST contain all
results for each traffic scheduler and QL/SR test run and a
skipping to change at page 34, line 37
- Policers on ingress and queuing on egress

- Policers on ingress and shapers on egress (not intended for a
  flow to be policed then shaped, these would be two different
  flows tested at the same time)

- etc.

The test procedures and reporting formatting from the previous
sections may be modified to accommodate the capacity test profile.
7. Security Considerations
Documents of this type do not directly affect the security of the
Internet or of corporate networks as long as benchmarking is not
performed on devices or systems connected to production networks.
Further, benchmarking is performed on a "black-box" basis, relying
solely on measurements observable external to the DUT/SUT.
Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
benchmarking purposes. Any implications for network security arising
from the DUT/SUT SHOULD be identical in the lab and in production
networks.
8. IANA Considerations
This document does not REQUIRE an IANA registration for ports
dedicated to the TCP testing described in this document.
9. References
9.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC2119, March 1997.
[RFC1242] S. Bradner, "Benchmarking Terminology for Network
Interconnection Devices," RFC1242 July 1991
[RFC2544] S. Bradner, "Benchmarking Methodology for Network
Interconnect Devices," RFC2544 March 1999
[RFC3148] M. Mathis et al., "A Framework for Defining Empirical
Bulk Transfer Capacity Metrics," RFC3148 July 2001
[RFC5481] A. Morton et al., "Packet Delay Variation Applicability
Statement," RFC5481 March 2009
[RFC6703] A. Morton et al., "Reporting IP Network Performance
Metrics: Different Points of View," RFC 6703, August 2012
[RFC2680] G. Almes et al., "A One-way Packet Loss Metric for IPPM,"
RFC2680 September 1999
[RFC4689] S. Poretsky et al., "Terminology for Benchmarking
Network-layer Traffic Control Mechanisms," RFC4689,
October 2006
[RFC4737] A. Morton et al., "Packet Reordering Metrics," RFC4737,
February 2006
[RFC4115] O. Aboul-Magd et al., "A Differentiated Service Two-Rate,
Three-Color Marker with Efficient Handling of in-Profile Traffic,"
RFC4115 July 2005
[RFC6349] Barry Constantine et al., "Framework for TCP Throughput
Testing," RFC6349, August 2011
9.2. Informative References
[RFC2697] J. Heinanen et al., "A Single Rate Three Color Marker,"
RFC2697, September 1999
[RFC2698] J. Heinanen et al., "A Two Rate Three Color Marker,"
RFC2698, September 1999
[AQM-RECO] Fred Baker et al., "IETF Recommendations Regarding
Active Queue Management," August 2014,
https://datatracker.ietf.org/doc/draft-ietf-aqm-
recommendation/
[MEF-10.2] "MEF 10.2: Ethernet Services Attributes Phase 2," October
2009, http://metroethernetforum.org/PDF_Documents/
technical-specifications/MEF10.2.pdf
[MEF-12.1] "MEF 12.1: Carrier Ethernet Network Architecture
Framework --
Part 2: Ethernet Services Layer - Base Elements," April
2010, https://www.metroethernetforum.org/Assets/Technical
_Specifications/PDF/MEF12.1.pdf
[MEF-26] "MEF 26: External Network Network Interface (ENNI) -
Phase 1,"January 2010, http://www.metroethernetforum.org
/PDF_Documents/technical-specifications/MEF26.pdf
[MEF-14] "Abstract Test Suite for Traffic Management Phase 1,"
https://www.metroethernetforum.org/Assets
/Technical_Specifications/PDF/MEF_14.pdf
[MEF-19] "Abstract Test Suite for UNI Type 1",
https://www.metroethernetforum.org/Assets
/Technical_Specifications/PDF/MEF_19.pdf
[MEF-37] "Abstract Test Suite for ENNI",
https://www.metroethernetforum.org/Assets
/Technical_Specifications/PDF/MEF_37.pdf
Appendix A: Open Source Tools for Traffic Management Testing
This framework specifies that stateless and stateful behaviors SHOULD
both be tested. Some open source tools that can be used to
accomplish many of the tests proposed in this framework are:
iperf, netperf (with netperf-wrapper), uperf, TMIX,
TCP-incast-generator, and D-ITG (Distributed Internet Traffic
Generator).
Iperf can generate UDP or TCP based traffic; a client and server must
both run the iperf software in the same traffic mode. The server is
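As a non-normative illustration, the Bulk Transfer Test setup with iperf could be scripted as below. The server address is a hypothetical documentation address, and only the widely known iperf flags -s (server), -c (client), -w (TCP window size), and -t (duration) are assumed:

```python
# Build iperf command lines for the bulk TCP transfer test.
# The host address is hypothetical; -s, -c, -w, and -t are
# standard iperf flags (server, client, window size, duration).

def iperf_server_cmd():
    return ["iperf", "-s"]                 # run on the egress side

def iperf_client_cmd(server, window="32K", secs=60):
    # A 32K window matches the QL = 32KB example earlier in the text;
    # 60 s matches the default test duration Td.
    return ["iperf", "-c", server, "-w", window, "-t", str(secs)]

print(" ".join(iperf_client_cmd("192.0.2.1")))
# iperf -c 192.0.2.1 -w 32K -t 60
```

The client and server must both run in TCP mode for this test, per the text above.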
skipping to change at page 41, line 39
Block      Ave. = N/A
Size (Sb)* Min. = 16KB
           Max. = 128KB
*Depending upon the tested file size, the block size will be
transferred n times to complete the transfer. An example would be a
10 MB file test and a 64KB block size. In this case, 160 blocks would
be transferred after the control channel is opened between the client
and server.
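The block count in the example can be reproduced directly (binary units, 1 KB = 1024 bytes, as the 160-block figure implies):

```python
# Number of blocks needed to transfer a file of a given size.
# With a 10 MB file and 64KB blocks: 10 * 1024 KB / 64 KB = 160 blocks.

def blocks_to_transfer(file_bytes, block_bytes):
    # Round up so a final partial block is still counted.
    return -(-file_bytes // block_bytes)

print(blocks_to_transfer(10 * 1024 * 1024, 64 * 1024))  # 160
```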
Acknowledgments
We would like to thank Al Morton for his continuous review and
invaluable input to the document. We would also like to thank
Scott Bradner for providing guidance early in the draft's
conception in the area of benchmarking scope of traffic management
functions. Additionally, we would like to thank Tim Copley for his
original input and David Taht, Gory Erg, and Toke Hoiland-Jorgensen
for their review and input for the AQM group. And for the formal
reviews of this document, we would like to thank Gilles Forget,
Vijay Gurbani, Reinhard Schrage, and Bhuvaneswaran Vengainathan.
Authors' Addresses

Barry Constantine
JDSU, Test and Measurement Division
Germantown, MD 20876-7100, USA

Phone: +1 240 404 2227
Email: barry.constantine@jdsu.com

Ram Krishnan
Brocade Communications