Internet Engineering Task Force                               L. Avramov
INTERNET-DRAFT, Intended Status: Informational                    Google
Expires December 20, 2017                                        J. Rapp
June 18, 2017                                                     VMware

                  Data Center Benchmarking Methodology
                 draft-ietf-bmwg-dcbench-methodology-12
Abstract

The purpose of this informational document is to establish test and
evaluation methodology and measurement techniques for physical
network equipment in the data center. Many of these terms and methods
may be applicable beyond this publication's scope as the technologies
originally applied in the data center are deployed elsewhere.

Status of this Memo
skipping to change at page 3, line 24
3. Buffering Testing . . . . . . . . . . . . . . . . . . . . . .  7
   3.1 Objective . . . . . . . . . . . . . . . . . . . . . . . .  7
   3.2 Methodology . . . . . . . . . . . . . . . . . . . . . . .  8
   3.3 Reporting format  . . . . . . . . . . . . . . . . . . . . 10
4 Microburst Testing . . . . . . . . . . . . . . . . . . . . . . 11
   4.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . 11
   4.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . 11
   4.3 Reporting Format  . . . . . . . . . . . . . . . . . . . . 12
5. Head of Line Blocking . . . . . . . . . . . . . . . . . . . . 12
   5.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . 12
   5.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . 13
   5.3 Reporting Format  . . . . . . . . . . . . . . . . . . . . 14
6. Incast Stateful and Stateless Traffic . . . . . . . . . . . . 15
   6.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . 15
   6.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . 15
   6.3 Reporting Format  . . . . . . . . . . . . . . . . . . . . 16
7. Security Considerations  . . . . . . . . . . . . . . . . . . 16
8. IANA Considerations  . . . . . . . . . . . . . . . . . . . . 17
9. References . . . . . . . . . . . . . . . . . . . . . . . . . 17
   9.1. Normative References . . . . . . . . . . . . . . . . . . 18
   9.2. Informative References . . . . . . . . . . . . . . . . . 18
   9.3. Acknowledgements . . . . . . . . . . . . . . . . . . . . 18
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 19
1. Introduction

Traffic patterns in the data center are not uniform and are
constantly changing. They are dictated by the nature and variety of
applications utilized in the data center. Traffic can be largely
east-west in one data center and north-south in another, while other
data centers may combine both. Traffic patterns can be bursty in
nature and contain many-to-one, many-to-many, or one-to-many flows.
Each flow may also be small and latency sensitive or large and
throughput
skipping to change at page 11, line 24
The same formula is used for max and avg variations measured.

4 Microburst Testing

4.1 Objective

To find the maximum burst of packets that a DUT can sustain under
various configurations.
This test provides additional methodology to the other RFC tests:

-All bursts should be sent with 100% intensity

-All ports of the DUT must be used for this test

-All ports are recommended to be tested simultaneously
4.2 Methodology

A traffic generator MUST be connected to all ports on the DUT. In
order to cause congestion, two or more ingress ports MUST send bursts
of packets destined for the same egress port. The simplest of the
setups would be two ingress ports and one egress port (2-to-1).

The burst MUST be sent with an intensity of 100%, meaning the burst
of packets will be sent with a minimum inter-packet gap. The number
of packets contained in the burst is a trial variable and will
increase
skipping to change at page 12, line 38
on the DUT

- The repeatability of the test needs to be indicated: number of
iterations of the same test and percentage of variation between
results (min, max, avg)
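The burst-size trial described in the methodology above can be
sketched as a simple search for the largest burst with zero loss. The
toy DUT model below (a shared egress buffer absorbing the excess over
line rate) and all function names are illustrative assumptions, not
part of this methodology or of any real traffic-generator API:

```python
def simulated_dut_received(burst_size, n_ingress, buffer_packets):
    """Toy DUT model (an assumption for illustration): during the
    burst, the egress port transmits at line rate while the excess
    arrival is buffered; packets beyond the buffer are dropped."""
    total_sent = burst_size * n_ingress
    excess = total_sent - burst_size   # arrivals beyond egress line rate
    dropped = max(0, excess - buffer_packets)
    return total_sent - dropped

def max_sustained_burst(n_ingress, buffer_packets, upper_bound, measure):
    """Binary-search the largest per-port burst size with zero loss,
    each trial sent at 100% intensity (minimum inter-packet gap)."""
    lo, hi, best = 1, upper_bound, 0
    while lo <= hi:
        size = (lo + hi) // 2
        sent = size * n_ingress
        if measure(size, n_ingress, buffer_packets) == sent:
            best, lo = size, size + 1   # no loss observed: try larger
        else:
            hi = size - 1               # loss observed: shrink burst

    return best

# 2-to-1 setup with a hypothetical 1000-packet egress buffer:
print(max_sustained_burst(2, 1000, 10**6, simulated_dut_received))
```

In a real test the `measure` callback would be replaced by the
traffic generator's send/count operations; the search structure is
the same.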
5. Head of Line Blocking

5.1 Objective
Head-of-line blocking (HOLB) is a performance-limiting phenomenon
that occurs when packets are held up by the first packet ahead
waiting to be transmitted to a different output port. This is defined
in RFC 2889 section 5.5, Congestion Control. This section expands on
RFC 2889 in the context of Data Center Benchmarking.

The objective of this test is to understand the DUT behavior under a
head-of-line blocking scenario and to measure the packet loss.
Here are the differences between this HOLB test and RFC 2889:

-This HOLB test starts with eight ports in two groups of four,
instead of the four ports used in RFC 2889.

-This HOLB test shifts all the port numbers by one in a second
iteration of the test, which is new compared to RFC 2889. The port
numbers continue to shift until every port has been the first in its
group. The purpose is to make sure all permutations are tested, to
cover differences of behavior in the SoC of the DUT.

-Another test in this HOLB series expands the group of ports, such
that traffic is divided among four ports instead of two (25% instead
of 50% per port).

-Section 5.3 adds reporting requirements beyond those of Congestion
Control in RFC 2889.
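The port-shifting iteration above can be sketched as follows. The
concrete port numbering (1 through 8) is an illustrative assumption;
the point is that after one rotation per group position, every port
has been the first in a group:

```python
def holb_iterations(ports, group_size=4):
    """Yield one list of port groups per test iteration, rotating the
    port ordering by one each time, until each port has led a group."""
    n = len(ports)
    for shift in range(group_size):
        rotated = ports[shift:] + ports[:shift]
        yield [rotated[i:i + group_size] for i in range(0, n, group_size)]

# Eight ports in two groups of four, as described above:
for i, groups in enumerate(holb_iterations([1, 2, 3, 4, 5, 6, 7, 8]), 1):
    print(f"iteration {i}: {groups}")
```

Four iterations cover all eight ports as group leaders; the same
generator with a larger `group_size` covers the expanded four-egress
variant.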
5.2 Methodology
In order to cause congestion in the form of head-of-line blocking,
groups of four ports are used. A group has two ingress and two egress
ports. The first ingress port MUST have two flows configured, each
going to a different egress port. The second ingress port will
congest the second egress port by sending at line rate. The goal is
to measure whether there is loss on the flow for the first egress
port, which is not over-subscribed.
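The loss check for the group above reduces to comparing sent and
received counts on the uncongested egress; the packet counts in the
example are illustrative assumptions:

```python
def holb_loss_percent(sent_to_egress1, received_on_egress1):
    """Percentage loss on the flow toward the non-oversubscribed
    egress port; any non-zero value indicates head-of-line blocking,
    since that egress has no congestion of its own."""
    lost = sent_to_egress1 - received_on_egress1
    return 100.0 * lost / sent_to_egress1

# Hypothetical trial: ingress 1 sent 1,000,000 packets toward the
# uncongested egress 1 while ingress 2 congested egress 2 at line
# rate, and only 999,000 packets arrived on egress 1.
print(f"{holb_loss_percent(1_000_000, 999_000):.3f}% HOLB loss")
```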