Benchmarking Methodology Working Group                      B. Balarajah
Internet-Draft
Intended status: Informational                           C. Rossenhoevel
Expires: November 22, 2021                                      EANTC AG
                                                              B. Monkman
                                                              NetSecOPEN
                                                            May 21, 2021

   Benchmarking Methodology for Network Security Device Performance
                 draft-ietf-bmwg-ngfw-performance-09
Abstract

This document provides benchmarking terminology and methodology for
next-generation network security devices including next-generation
firewalls (NGFW), next-generation intrusion detection and prevention
systems (NGIDS/NGIPS) and unified threat management (UTM)
implementations.  This document aims to significantly improve the
applicability, reproducibility, and transparency of benchmarks and to
align the test methodology with today's increasingly complex layer 7
skipping to change at page 1, line 43

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on November 22, 2021.

Copyright Notice

Copyright (c) 2021 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents

skipping to change at page 2, line 27
1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   4
2.  Requirements  . . . . . . . . . . . . . . . . . . . . . . . .   4
3.  Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . .   4
4.  Test Setup  . . . . . . . . . . . . . . . . . . . . . . . . .   4
  4.1.  Test Bed Configuration  . . . . . . . . . . . . . . . . .   4
  4.2.  DUT/SUT Configuration . . . . . . . . . . . . . . . . . .   6
    4.2.1.  Security Effectiveness Configuration  . . . . . . . .  12
  4.3.  Test Equipment Configuration  . . . . . . . . . . . . . .  12
    4.3.1.  Client Configuration  . . . . . . . . . . . . . . . .  12
    4.3.2.  Backend Server Configuration  . . . . . . . . . . . .  15
    4.3.3.  Traffic Flow Definition . . . . . . . . . . . . . . .  17
    4.3.4.  Traffic Load Profile  . . . . . . . . . . . . . . . .  17
5.  Test Bed Considerations . . . . . . . . . . . . . . . . . . .  18
6.  Reporting . . . . . . . . . . . . . . . . . . . . . . . . . .  19
  6.1.  Introduction  . . . . . . . . . . . . . . . . . . . . . .  19
  6.2.  Detailed Test Results . . . . . . . . . . . . . . . . . .  21
  6.3.  Benchmarks and Key Performance Indicators . . . . . . . .  21
7.  Benchmarking Tests  . . . . . . . . . . . . . . . . . . . . .  22
  7.1.  Throughput Performance with Application Traffic Mix  . .   23
    7.1.1.  Objective . . . . . . . . . . . . . . . . . . . . . .  23
    7.1.2.  Test Setup  . . . . . . . . . . . . . . . . . . . . .  23
    7.1.3.  Test Parameters . . . . . . . . . . . . . . . . . . .  23
    7.1.4.  Test Procedures and Expected Results  . . . . . . . .  25
  7.2.  TCP/HTTP Connections Per Second . . . . . . . . . . . . .  26
    7.2.1.  Objective . . . . . . . . . . . . . . . . . . . . . .  26
    7.2.2.  Test Setup  . . . . . . . . . . . . . . . . . . . . .  26
    7.2.3.  Test Parameters . . . . . . . . . . . . . . . . . . .  26
    7.2.4.  Test Procedures and Expected Results  . . . . . . . .  27
  7.3.  HTTP Throughput . . . . . . . . . . . . . . . . . . . . .  29
    7.3.1.  Objective . . . . . . . . . . . . . . . . . . . . . .  29
    7.3.2.  Test Setup  . . . . . . . . . . . . . . . . . . . . .  29
    7.3.3.  Test Parameters . . . . . . . . . . . . . . . . . . .  29
    7.3.4.  Test Procedures and Expected Results  . . . . . . . .  31
  7.4.  HTTP Transaction Latency  . . . . . . . . . . . . . . . .  32
    7.4.1.  Objective . . . . . . . . . . . . . . . . . . . . . .  32
    7.4.2.  Test Setup  . . . . . . . . . . . . . . . . . . . . .  32
    7.4.3.  Test Parameters . . . . . . . . . . . . . . . . . . .  32
    7.4.4.  Test Procedures and Expected Results  . . . . . . . .  34
  7.5.  Concurrent TCP/HTTP Connection Capacity . . . . . . . . .  35
    7.5.1.  Objective . . . . . . . . . . . . . . . . . . . . . .  35
    7.5.2.  Test Setup  . . . . . . . . . . . . . . . . . . . . .  35

skipping to change at page 3, line 20

    7.5.4.  Test Procedures and Expected Results  . . . . . . . .  37
  7.6.  TCP/HTTPS Connections per Second  . . . . . . . . . . . .  38
    7.6.1.  Objective . . . . . . . . . . . . . . . . . . . . . .  38
    7.6.2.  Test Setup  . . . . . . . . . . . . . . . . . . . . .  38
    7.6.3.  Test Parameters . . . . . . . . . . . . . . . . . . .  38
    7.6.4.  Test Procedures and Expected Results  . . . . . . . .  40
  7.7.  HTTPS Throughput  . . . . . . . . . . . . . . . . . . . .  41
    7.7.1.  Objective . . . . . . . . . . . . . . . . . . . . . .  41
    7.7.2.  Test Setup  . . . . . . . . . . . . . . . . . . . . .  41
    7.7.3.  Test Parameters . . . . . . . . . . . . . . . . . . .  42
    7.7.4.  Test Procedures and Expected Results  . . . . . . . .  43
  7.8.  HTTPS Transaction Latency . . . . . . . . . . . . . . . .  44
    7.8.1.  Objective . . . . . . . . . . . . . . . . . . . . . .  44
    7.8.2.  Test Setup  . . . . . . . . . . . . . . . . . . . . .  44
    7.8.3.  Test Parameters . . . . . . . . . . . . . . . . . . .  44
    7.8.4.  Test Procedures and Expected Results  . . . . . . . .  46
  7.9.  Concurrent TCP/HTTPS Connection Capacity  . . . . . . . .  47
    7.9.1.  Objective . . . . . . . . . . . . . . . . . . . . . .  47
    7.9.2.  Test Setup  . . . . . . . . . . . . . . . . . . . . .  47
    7.9.3.  Test Parameters . . . . . . . . . . . . . . . . . . .  47
    7.9.4.  Test Procedures and Expected Results  . . . . . . . .  49
8.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  50
9.  Security Considerations . . . . . . . . . . . . . . . . . . .  50
10. Contributors  . . . . . . . . . . . . . . . . . . . . . . . .  50
11. Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  51
12. References  . . . . . . . . . . . . . . . . . . . . . . . . .  51
  12.1.  Normative References . . . . . . . . . . . . . . . . . .  51
  12.2.  Informative References . . . . . . . . . . . . . . . . .  51
Appendix A.  Test Methodology - Security Effectiveness Evaluation  52
  A.1.  Test Objective  . . . . . . . . . . . . . . . . . . . . .  52
  A.2.  Test Bed Setup  . . . . . . . . . . . . . . . . . . . . .  52
  A.3.  Test Parameters . . . . . . . . . . . . . . . . . . . . .  53
    A.3.1.  DUT/SUT Configuration Parameters  . . . . . . . . . .  53
    A.3.2.  Test Equipment Configuration Parameters . . . . . . .  53
  A.4.  Test Results Validation Criteria  . . . . . . . . . . . .  53
  A.5.  Measurement . . . . . . . . . . . . . . . . . . . . . . .  54
  A.6.  Test Procedures and Expected Results  . . . . . . . . . .  55
    A.6.1.  Step 1: Background Traffic  . . . . . . . . . . . . .  55
    A.6.2.  Step 2: CVE Emulation . . . . . . . . . . . . . . . .  55
Appendix B.  DUT/SUT Classification . . . . . . . . . . . . . . .  55
Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  56
1.  Introduction

Fifteen years have passed since the IETF initially recommended test
methodology and terminology for firewalls ([RFC3511]).  The
requirements for network security element performance and
effectiveness have increased tremendously since then.  Security
function implementations have evolved to more advanced areas and
have diversified into intrusion detection and prevention, threat
management, analysis of
skipping to change at page 5, line 6

The test bed configuration MUST ensure that any performance
implications that are discovered during the benchmark testing are
not due to the inherent physical network limitations such as the
number of physical links and forwarding performance capabilities
(throughput and latency) of the network devices in the test bed.
For this reason, this document recommends avoiding external devices
such as switches and routers in the test bed wherever possible.
In some deployment scenarios, the network security devices (Device
Under Test/System Under Test) are connected to routers and switches,
which will reduce the number of entries in MAC or ARP tables of the
Device Under Test/System Under Test (DUT/SUT).  If MAC or ARP tables
have many entries, this may impact the actual DUT/SUT performance
due to MAC and ARP/ND (Neighbor Discovery) table lookup processes.
This document also recommends using test equipment with the
capability of emulating layer 3 routing functionality instead of
adding external routers in the test bed.

The test bed setup Option 1 (Figure 1) is the RECOMMENDED test bed
setup for the benchmarking test.

skipping to change at page 9, line 22
+------------------+------------------------------------------------+
| Anti-Botnet      | DUT/SUT detects traffic to or from botnets.    |
+------------------+------------------------------------------------+
| Anti-Evasion     | DUT/SUT detects and mitigates attacks that have|
|                  | been obfuscated in some manner.                |
+------------------+------------------------------------------------+
| Web Filtering    | DUT/SUT detects and blocks malicious websites, |
|                  | including defined classifications of websites  |
|                  | across the monitored network.                  |
+------------------+------------------------------------------------+
| DLP              | DUT/SUT detects and prevents data breaches and |
|                  | data exfiltration, or it detects and blocks the|
|                  | transmission of sensitive data across the      |
|                  | monitored network.                             |
+------------------+------------------------------------------------+
| Certificate      | DUT/SUT validates certificates used in         |
| Validation       | encrypted communications across the monitored  |
|                  | network.                                       |
+------------------+------------------------------------------------+
| Logging and      | DUT/SUT logs and reports all traffic at the    |
| Reporting        | flow level across the monitored network.       |
+------------------+------------------------------------------------+
| Application      | DUT/SUT detects known applications as defined  |
| Identification   | within the traffic mix selected across         |
|                  | the monitored network.                         |
+------------------+------------------------------------------------+

                Table 3: Security Feature Description
In summary, a DUT/SUT SHOULD be configured as follows:

o  All RECOMMENDED security inspections enabled

o  Disposition of all flows of traffic is logged - Logging to an
   external device is permissible

o  Geographical location filtering and Application Identification and
   Control configured to be triggered based on a site or application
   from the defined traffic mix

In addition, a realistic number of access control rules (ACLs) SHOULD
be configured on the DUT/SUT where ACLs are configurable and also
skipping to change at page 13, line 19

be set to 1460 bytes and 1440 bytes respectively and a TX and RX
initial receive windows of 64 KByte.  The client initial congestion
window SHOULD NOT exceed 10 times the MSS.  Delayed ACKs are
permitted, and the maximum client delayed ACK SHOULD NOT exceed 10
times the MSS before a forced ACK.  Up to three retries SHOULD be
allowed before a timeout event is declared.  All traffic MUST set
the TCP PSH flag to high.  Source ports SHOULD be in the range of
1024 - 65535.  Internal timeout SHOULD be dynamically scalable per
RFC 793.  The client SHOULD initiate and close TCP connections.  The
TCP connection MUST be initiated via a TCP three way handshake (SYN,
SYN/ACK, ACK), and it MUST be closed via either a TCP three way
close (FIN, FIN/ACK, ACK) or a TCP four way close (FIN, ACK, FIN,
ACK).
4.3.1.2.  Client IP Address Space

The sum of the client IP space SHOULD contain the following
attributes.

o  The IP blocks SHOULD consist of multiple unique, discontinuous
   static address blocks.

o  A default gateway is permitted.

o  The DSCP (Differentiated Services Code Point) marking is set to DF
   (Default Forwarding) '000000' in the IPv4 Type of Service (ToS)
   field and the IPv6 traffic class field.
The following equation can be used to define the total number of
client IP addresses that will be configured on the test equipment.

Desired total number of client IP = Target throughput [Mbit/s] /
Average throughput per IP address [Mbit/s]
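
As a minimal illustration of this equation (the 10 Gbit/s target and
4 Mbit/s per-address figures below are placeholders, not
recommendations):

    import math

    def required_client_ips(target_throughput_mbps, avg_per_ip_mbps):
        # Round up so the configured address pool can carry the full
        # target load.
        return math.ceil(target_throughput_mbps / avg_per_ip_mbps)

    print(required_client_ips(10_000, 4))   # -> 2500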
As shown in the example list below, the value for "Average throughput
per IP address" can be varied depending on the deployment and use
case scenario.

skipping to change at page 14, line 20

(Option 2) 80% IPv4, 20% IPv6

(Option 3) 50% IPv4, 50% IPv6

(Option 4) 20% IPv4, 80% IPv6

(Option 5) no IPv4, 100% IPv6
Note: IANA has assigned an IP address range for testing purposes, as
described in Section 8.  If the test scenario requires more IP
addresses or subnets than IANA has assigned, this document recommends
using non-routable private IPv4 address ranges or Unique Local
Address (ULA) IPv6 address ranges for the testing.
4.3.1.3.  Emulated Web Browser Attributes

The emulated web client contains attributes that will materially
affect how traffic is loaded.  The objective is to emulate modern,
typical browser attributes to improve realism of the result set.
For HTTP traffic emulation, the emulated browser MUST negotiate HTTP
version 1.1 or higher.  Depending on the test scenario and the chosen
HTTP version, the browser MAY open multiple TCP connections per
server endpoint IP at any time, depending on how many sequential
transactions need to be processed.  For HTTP/2 or HTTP/3, the browser
MAY open multiple concurrent streams per connection (multiplexing).
If HTTP/3 is used, the browser MUST open QUIC (Quick UDP Internet
Connections) connections.  HTTP settings such as the number of
connections per server IP, the number of requests per connection, and
the number of streams per connection MUST be documented.  This
document refers to [RFC7540] for HTTP/2.  The browser SHOULD
advertise a User-Agent header.  The browser SHOULD enforce content
length validation.  Depending on the test scenario and the selected
HTTP version, HTTP header compression MAY be enabled or disabled.
This setting (compression enabled or disabled) MUST be documented in
the report.
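
One possible way to record these emulated-browser settings for the
report is sketched below; the field names and values are illustrative
placeholders, not a defined schema or a recommended configuration.

    # Hypothetical record of the browser attributes that MUST be
    # documented (HTTP version, connections, requests, streams,
    # header compression).
    EMULATED_BROWSER_PROFILE = {
        "http_version": "2",                  # 1.1 or higher
        "connections_per_server_ip": 6,
        "requests_per_connection": 10,
        "streams_per_connection": 100,        # HTTP/2 / HTTP/3 multiplexing
        "transport": "TCP",                   # QUIC if HTTP/3 is used
        "user_agent": "ExampleTestClient/1.0",
        "header_compression_enabled": True,   # MUST be stated in the report
        "content_length_validation": True,
    }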
For encrypted traffic, the following attributes SHALL define the
negotiated encryption parameters.  The test clients MUST use TLS
version 1.2 or higher.  TLS record size MAY be optimized for the
HTTPS response object size up to a record size of 16 KByte.  If
Server Name Indication (SNI) is required in the traffic mix profile,
the client endpoint MUST send the SNI extension when opening a
security tunnel.  Each client connection MUST perform a full
handshake with the server certificate and MUST NOT use session reuse
or resumption.
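
A minimal sketch of an emulated HTTPS client that follows these
attributes (TLS 1.2 or higher, SNI, full handshake with no saved
session) is shown below; the FQDN is a placeholder, and a real test
tool would scale this out across many concurrent connections.

    import socket, ssl

    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # TLS 1.2 or higher

    server_fqdn = "server.example.test"            # placeholder FQDN
    with socket.create_connection((server_fqdn, 443)) as raw:
        # server_hostname carries the SNI value; no saved TLS session is
        # supplied, so this connection performs a full handshake.
        with ctx.wrap_socket(raw, server_hostname=server_fqdn) as tls:
            print(tls.version(), tls.cipher())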
The following TLS 1.2 supported ciphers and keys are RECOMMENDED for
the HTTPS based benchmarking tests defined in Section 7.

1.  ECDHE-ECDSA-AES128-GCM-SHA256 with Prime256v1 (Signature Hash
    Algorithm: ecdsa_secp256r1_sha256 and Supported group: secp256r1)

2.  ECDHE-RSA-AES128-GCM-SHA256 with RSA 2048 (Signature Hash
    Algorithm: rsa_pkcs1_sha256 and Supported group: secp256r1)

skipping to change at page 16, line 22
4.3.2.2.  Server Endpoint IP Addressing

The sum of the server IP space SHOULD contain the following
attributes.

o  The server IP blocks SHOULD consist of unique, discontinuous
   static address blocks with one IP per server Fully Qualified
   Domain Name (FQDN) endpoint per test port.
o  A default gateway is permitted.  The DSCP (Differentiated Services
   Code Point) marking is set to DF (Default Forwarding) '000000' in
   the IPv4 Type of Service (ToS) field and the IPv6 traffic class
   field.

o  The server IP addresses SHOULD be distributed between IPv4 and
   IPv6 with a ratio identical to the client distribution ratio.

Note: IANA has assigned an IP address range for testing purposes, as
described in Section 8.  If the test scenario requires more IP
addresses or subnets than IANA has assigned, this document recommends
using non-routable private IPv4 address ranges or Unique Local
Address (ULA) IPv6 address ranges for the testing.
4.3.2.3.  HTTP / HTTPS Server Pool Endpoint Attributes

The server pool for HTTP SHOULD listen on TCP port 80 and emulate the
same HTTP version and settings chosen by the client (emulated web
browser).  The server MUST advertise the server type in the Server
response header [RFC2616].  For the HTTPS server, TLS 1.2 or higher
MUST be used with a maximum record size of 16 KByte and MUST NOT use
ticket resumption or Session ID reuse.  The server SHOULD listen on
TCP port 443.  The server SHALL serve a certificate to the client.
The HTTPS server MUST check Host SNI information against the FQDN if
SNI is in use.  Cipher suite and key size on the server side MUST be
configured similarly to the client side configuration described in
Section 4.3.1.3.
4.3.3.  Traffic Flow Definition

This section describes the traffic pattern between client and server
endpoints.  At the beginning of the test, the server endpoint
initializes and will be ready to accept connection states including
initialization of the TCP stack as well as bound HTTP and HTTPS
servers.  When a client endpoint is needed, it will initialize and be
given attributes such as a MAC and IP address.  The behavior of the
client is to sweep through the given server IP space, generating a
recognizable service by the DUT.  Sequential and pseudorandom sweep
methods are acceptable.  The method used MUST be stated in the final
report.  Thus, a balanced mesh between client endpoints and server
endpoints will be generated in a client IP/port to server IP/port
combination.  Each client endpoint performs the same actions as other
endpoints, with the difference being the source IP of the client
endpoint and the target server IP pool.  The client MUST use the
server's IP address or Fully Qualified Domain Name (FQDN) in Host
Headers [RFC2616].
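
A minimal sketch of the sweep behavior described above, supporting
either a sequential or a pseudorandom order (whichever is used must be
stated in the final report); the addresses are placeholders from the
benchmarking address range:

    import itertools, random

    def server_sweep(server_ips, order="sequential", seed=0):
        targets = list(server_ips)
        if order == "pseudorandom":
            # A fixed seed keeps the pseudorandom sweep reproducible.
            random.Random(seed).shuffle(targets)
        return itertools.cycle(targets)   # each client sweeps the full pool

    sweep = server_sweep(["198.18.1.1", "198.18.1.2", "198.18.1.3"])
    next_target = next(sweep)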
4.3.3.1.  Description of Intra-Client Behavior

Client endpoints are independent of other clients that are
concurrently executing.  This section describes how a client endpoint
steps through different services when it initiates traffic.  Once the
test is initialized, the client endpoints randomly hold (perform no
operation) for a few milliseconds to allow for better randomization
of the start of client traffic.  Each client will either open a new
TCP connection or connect to a TCP persistence

skipping to change at page 18, line 16
    server endpoints should negotiate layer 2-3 connectivity such as
    MAC learning and ARP.  Only after successful MAC learning or ARP/
    ND resolution SHALL the test iteration move to the next phase.
    No measurements are made in this phase.  The minimum RECOMMENDED
    time for the Init phase is 5 seconds.  During this phase, the
    emulated clients SHOULD NOT initiate any sessions with the DUT/
    SUT; in contrast, the emulated servers should be ready to accept
    requests from the DUT/SUT or from the emulated clients.
2.  In the ramp up phase, the test equipment SHOULD start to generate
    the test traffic.  It SHOULD actively use an approximate set of
    unique client IP addresses to generate traffic.  The traffic
    SHOULD ramp up from zero to the desired target objective.  The
    target objective will be defined for each benchmarking test.  The
    duration of the ramp up phase MUST be configured long enough that
    the test equipment does not overwhelm the DUT/SUT's stated
    performance metrics defined in Section 6.3, namely TCP
    Connections Per Second, Inspected Throughput, Concurrent TCP
    Connections, and Application Transactions Per Second.  No
    measurements are made in this phase.
3.  The sustain phase starts when all required clients are active and
    operating at their desired load condition.  In the sustain phase,
    the test equipment SHOULD continue generating traffic at a
    constant target value for a constant number of active clients.
    The minimum RECOMMENDED time duration for the sustain phase is
    300 seconds.  This is the phase where measurements occur.  The
    test equipment SHOULD measure and record statistics continuously.
    The sampling interval for collecting the raw results and
    calculating the statistics SHOULD be less than 2 seconds (see the
    sketch after this list).
4.  In the ramp down/close phase, no new connections are established,
    and no measurements are made.  The time durations for the ramp up
    and ramp down phases SHOULD be the same.
5.  The last phase is administrative and will occur when the test
    equipment merges and collates the report data.
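
A minimal sketch of such a load profile, with continuous statistics
sampling restricted to the sustain phase, is shown below; the phase
durations, the sampling interval, and the read_counters() hook are
placeholders rather than recommended values or a real tool's API.

    import time

    PHASES = [("init", 5), ("ramp_up", 60), ("sustain", 300),
              ("ramp_down", 60)]            # placeholder durations (seconds)
    SAMPLING_INTERVAL = 1.0                 # SHOULD be less than 2 seconds

    def run_profile(read_counters):
        """read_counters() is a hypothetical hook returning current KPIs."""
        samples = []
        for name, duration in PHASES:
            end = time.monotonic() + duration
            while time.monotonic() < end:
                if name == "sustain":       # measurements occur only here
                    samples.append(read_counters())
                time.sleep(SAMPLING_INTERVAL)
        return samples                      # collated during the report phase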
5.  Test Bed Considerations

This section recommends steps to control the test environment and
test equipment, specifically focusing on virtualized environments and
virtualized test equipment.

1.  Ensure that any ancillary switching or routing functions between
    the system under test and the test equipment do not limit the
    performance of the traffic generator.  This is specifically
    important for virtualized components (vSwitches, vRouters).

2.  Verify that the performance of the test equipment matches and
    reasonably exceeds the expected maximum performance of the DUT/
    SUT.
3.  Assert that the test bed characteristics are stable during the
    entire test session.  Several factors might influence stability,
    specifically for virtualized test beds, for example, additional
    workloads in a virtualized system, load balancing, and movement
    of virtual machines during the test, or simple issues such as
    additional heat created by high workloads leading to an emergency
    CPU performance reduction.
Test bed reference pre-tests help to ensure that the test equipment
can achieve the maximum desired traffic generator aspects, such as
throughput, transactions per second, connections per second,
concurrent connections, and latency.

Test bed preparation may be performed either by configuring the DUT
in the most trivial setup (fast forwarding) or without the presence
of the DUT.
6.  Reporting

This section describes how the final report should be formatted and
presented.  The final test report MAY have two major sections: an
introduction section and a detailed test results section.
6.1.  Introduction

The following attributes SHOULD be present in the introduction
section of the test report.

1.  The time and date of the execution of the test MUST be prominent.

2.  Summary of test bed software and hardware details

    A.  DUT/SUT hardware/virtual configuration
        +  This section SHOULD clearly identify the make and model of
           the DUT/SUT

        +  The port interfaces, including speed and link information,
           MUST be documented.

        +  If the DUT/SUT is a Virtual Network Function (VNF), host
           (server) hardware and software details, interface
           acceleration type such as DPDK and SR-IOV, used CPU cores,
           used RAM, and the resource sharing (e.g., pinning details
           and NUMA node) configuration MUST be documented.  The
           virtual components such as hypervisor and virtual switch
           version MUST also be documented.

        +  Any additional hardware relevant to the DUT/SUT such as
           controllers MUST be documented
    B.  DUT/SUT software

        +  The operating system name MUST be documented

        +  The version MUST be documented

        +  The specific configuration MUST be documented
    C.  DUT/SUT enabled features

        +  Configured DUT/SUT features (see Table 1 and Table 2) MUST
           be documented

        +  Attributes of those features MUST be documented

        +  Any additional relevant information about features MUST be
           documented
    D.  Test equipment hardware and software

        +  Test equipment vendor name

        +  Hardware details including model number, interface type

        +  Test equipment firmware and test application software

skipping to change at page 21, line 23
3.  Results Summary / Executive Summary

    A.  Results SHOULD resemble a pyramid in how they are reported,
        with the introduction section documenting the summary of
        results in a prominent, easy-to-read block.
6.2.  Detailed Test Results

In the results section of the test report, the following attributes
SHOULD be present for each benchmarking test.

a.  KPIs MUST be documented separately for each benchmarking test.
    The format of the KPI metrics SHOULD be presented as described in
    Section 6.3.

b.  The next level of details SHOULD be graphs showing each of these
    metrics over the duration (sustain phase) of the test.  This
    allows the user to see the measured performance stability changes
    over time.
6.3.  Benchmarks and Key Performance Indicators

This section lists key performance indicators (KPIs) for overall

skipping to change at page 22, line 20
all data must have been transferred in its entirety.  In the case of
an HTTP(S) transaction, it must have a valid status code, and the
appropriate FIN, FIN/ACK sequence must have been completed.
o  TLS Handshake Rate

   The average number of successfully established TLS connections per
   second between hosts across the DUT/SUT, or between hosts and the
   DUT/SUT.

o  Inspected Throughput
   The number of bits per second of examined and allowed traffic a
   network security device is able to transmit to the correct
   destination interface(s) in response to a specified offered load.
   The throughput benchmarking tests defined in Section 7 SHOULD
   measure the average Layer 2 throughput value when the DUT/SUT is
   "inspecting" traffic.  This document recommends presenting the
   inspected throughput value in Gbit/s rounded to two places of
   precision with a more specific Kbit/s in parentheses.
o  Time to First Byte (TTFB)

   TTFB is the elapsed time between the start of sending the TCP SYN
   packet from the client and the client receiving the first packet
   of application data from the server or DUT/SUT.  The benchmarking
   tests HTTP Transaction Latency (Section 7.4) and HTTPS Transaction
   Latency (Section 7.8) measure the minimum, average, and maximum
   TTFB.  The value SHOULD be expressed in milliseconds.
o  URL Response Time / Time to Last Byte (TTLB)

   URL response time / TTLB is the elapsed time between the start of
   sending the TCP SYN packet from the client and the client
   receiving the last packet of application data from the server or
   DUT/SUT.  The benchmarking tests HTTP Transaction Latency
   (Section 7.4) and HTTPS Transaction Latency (Section 7.8) measure
   the minimum, average, and maximum TTLB.  The value SHOULD be
   expressed in milliseconds.
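
The sketch below shows how TTFB and TTLB could be derived for a single
transaction from three timestamps captured by the test equipment; the
timestamp values are placeholders.

    def ttfb_ttlb(t_syn_sent, t_first_app_byte, t_last_app_byte):
        # All inputs in seconds; results in milliseconds.
        ttfb_ms = (t_first_app_byte - t_syn_sent) * 1000.0
        ttlb_ms = (t_last_app_byte - t_syn_sent) * 1000.0
        return ttfb_ms, ttlb_ms

    # The report carries the minimum, average, and maximum over all
    # transactions measured during the sustain phase.
    print(ttfb_ttlb(0.000, 0.012, 0.045))   # placeholder timestamps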
7.  Benchmarking Tests

7.1.  Throughput Performance with Application Traffic Mix

7.1.1.  Objective

Using a relevant application traffic mix, determine the sustainable
inspected throughput supported by the DUT/SUT.

Based on customer use case, users can choose the application traffic
mix for this test.  The details about the traffic mix MUST be
documented in the report.  At least the following traffic mix details

skipping to change at page 25, line 25
Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.
Configure the traffic load profile of the test equipment to generate
test traffic at the "Initial inspected throughput" rate as described
in the parameters section (Section 7.1.3.2).  The test equipment
SHOULD follow the traffic load profile definition as described in
Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial inspected
throughput" during the sustain phase.  Measure all KPIs as defined in
Section 7.1.3.5.  The measured KPIs during the sustain phase MUST
meet all the test results validation criteria defined in
Section 7.1.3.4.
If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to step 2.

7.1.4.2.  Step 2: Test Run with Target Objective
Configure the test equipment to generate traffic at the "Target
inspected throughput" rate defined in the parameter table.  The test
equipment SHOULD follow the traffic load profile definition as
described in Section 4.3.4.  The test equipment SHOULD start to
measure and record all specified KPIs.  Continue the test until all
traffic profile phases are completed.
Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective ("Target inspected
throughput") in the sustain phase.  Follow step 3 if the measured
value does not meet the target value or does not fulfill the test
results validation criteria.
7.1.4.3.  Step 3: Test Iteration

Determine the achievable average inspected throughput within the test

skipping to change at page 27, line 4

Traffic distribution ratio between IPv4 and IPv6 defined in
Section 4.3.1.2

Target connections per second: Initial value from product datasheet
or the value defined based on requirement for a specific deployment
scenario

Initial connections per second: 10% of "Target connections per
second" (an optional parameter for documentation)
The client SHOULD negotiate HTTP and close the connection with FIN
immediately after completion of one transaction.  In each test
iteration, the client MUST send a GET command requesting a fixed HTTP
response object size.

The RECOMMENDED response object sizes are 1, 2, 4, 16, and 64 KByte.
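
A minimal sketch of one such client transaction - a single GET for a
fixed-size object with the connection closed immediately afterwards -
is shown below; the host name and URL path are placeholders.

    import http.client

    def one_transaction(server, object_size_kbyte):
        conn = http.client.HTTPConnection(server, 80)
        conn.request("GET", "/objects/%dk.bin" % object_size_kbyte,
                     headers={"Connection": "close"})  # close after one GET
        resp = conn.getresponse()
        body = resp.read()
        conn.close()
        return resp.status == 200 and len(body) == object_size_kbyte * 1024

    one_transaction("server.example.test", 16)   # 16-KByte response object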
7.2.3.3.  Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria.  The test results validation criteria MUST be monitored
during the whole sustain phase of the traffic load profile.

a.  The number of failed application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of total attempted transactions.
b.  The number of terminated TCP connections due to unexpected TCP
    RST sent by the DUT/SUT MUST be less than 0.001% (1 out of
    100,000 connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered constant if any deviation of the
    traffic forwarding rate is less than 5%).
d. Concurrent TCP connections MUST be constant during steady state d. Concurrent TCP connections MUST be constant during steady state
and any deviation of concurrent TCP connections SHOULD be less and any deviation of concurrent TCP connections SHOULD be less
than 10%. This confirms the DUT opens and closes TCP connections than 10%. This confirms the DUT opens and closes TCP connections
almost at the same rate. almost at the same rate.
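The criteria above can be evaluated mechanically from the KPIs
recorded during the sustain phase.  The following informative Python
sketch (not part of the methodology) shows one possible check; the
sample structure, the variable names, and the interpretation of
"deviation" as deviation from the sustain phase mean are assumptions
of this example.

   # Informative sketch: evaluate the validation criteria of
   # Section 7.2.3.3 from recorded sustain phase KPIs.

   def validate_sustain_phase(attempted_transactions,
                              failed_transactions,
                              initiated_tcp, unexpected_rst,
                              forwarding_rate_samples,
                              concurrent_conn_samples):
       """Return a dict mapping criterion 'a'..'d' to True/False."""

       def max_deviation(samples):
           # Largest relative deviation from the sample mean.
           mean = sum(samples) / len(samples)
           return max(abs(s - mean) / mean for s in samples)

       return {
           # a. failed transactions < 0.001% of attempted transactions
           "a": failed_transactions < 0.00001 * attempted_transactions,
           # b. unexpected TCP RST < 0.001% of initiated connections
           "b": unexpected_rst < 0.00001 * initiated_tcp,
           # c. forwarding rate deviation less than 5%
           "c": max_deviation(forwarding_rate_samples) < 0.05,
           # d. concurrent TCP connection deviation less than 10%
           "d": max_deviation(concurrent_conn_samples) < 0.10,
       }

   if __name__ == "__main__":
       print(validate_sustain_phase(
           attempted_transactions=10_000_000, failed_transactions=50,
           initiated_tcp=10_000_000, unexpected_rst=20,
           forwarding_rate_samples=[9.8, 10.0, 10.1, 9.9],
           concurrent_conn_samples=[498_000, 502_000, 500_500]))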
7.2.3.4.  Measurement

TCP Connections Per Second MUST be reported for each test iteration
(for each object size).

skipping to change at page 28, line 11
This test procedure MAY be repeated multiple times with different IP
types: IPv4 only, IPv6 only, and IPv4 and IPv6 mixed traffic
distribution.

7.2.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure the traffic load profile of the test equipment to establish
"Initial connections per second" as defined in the parameters
Section 7.2.3.2.  The traffic load profile SHOULD be defined as
described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before
the sustain phase.  The measured KPIs during the sustain phase MUST
meet all the test results validation criteria defined in
Section 7.2.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".
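As an informative illustration (not part of the methodology), the
gating logic of Step 1 can be expressed as follows; the
"run_load_profile" and "evaluate_criteria" hooks, as well as the
target value, are assumptions of this example.

   # Informative sketch: Step 1 qualification gate.  The harness
   # hooks used here are assumed, not defined by this document.

   TARGET_CPS = 100_000              # from datasheet or deployment need
   INITIAL_CPS = TARGET_CPS // 10    # 10% of the target objective

   def step1_qualification(run_load_profile, evaluate_criteria):
       """Return True only if the test may proceed to Step 2."""
       kpis = run_load_profile(connections_per_second=INITIAL_CPS)
       criteria = evaluate_criteria(kpis)   # per-criterion booleans,
                                            # as in Section 7.2.3.3
       # Step 2 MUST NOT be started unless every criterion is met.
       return all(criteria.values())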
7.2.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish the target objective ("Target
connections per second") defined in the parameters table.  The test
equipment SHOULD follow the traffic load profile definition as

skipping to change at page 28, line 42
application transactions per second MUST NOT reach the maximum value
the DUT/SUT can support.  The test results for specific test
iterations SHOULD NOT be reported, if the above-mentioned KPI
(especially inspected throughput) reaches the maximum value.
(Example: If the test iteration with 64 KByte of HTTP response object
size reached the maximum inspected throughput limitation of the DUT,
the test iteration MAY be interrupted and the result for 64 KByte
SHOULD NOT be reported).

The test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all traffic profile phases are
completed.
Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective ("Target
connections per second") in the sustain phase.  Follow step 3, if the
measured value does not meet the target value or does not fulfill the
test results validation criteria.
7.2.4.3.  Step 3: Test Iteration

Determine the achievable TCP connections per second within the test

skipping to change at page 30, line 35
       +---------------------+---------------------+
       |         26          |          1          |
       +---------------------+---------------------+
       |         35          |          1          |
       +---------------------+---------------------+
       |         59          |          1          |
       +---------------------+---------------------+
       |         347         |          1          |
       +---------------------+---------------------+

                   Table 5: Mixed Objects
7.3.3.3.  Test Results Validation Criteria

The following test criteria are defined as test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.
a.  Number of failed Application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  Traffic SHOULD be forwarded at a constant rate (considered as a
    constant rate if any deviation of traffic forwarding rate is less
    than 5%).

c.  Concurrent TCP connections MUST be constant during steady state
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms the DUT opens and closes TCP connections
    almost at the same rate.
7.3.3.4.  Measurement

Inspected Throughput and HTTP Transactions per Second MUST be
reported for each object size.

skipping to change at page 31, line 37
Configure traffic load profile of the test equipment to establish
"Initial inspected throughput" as defined in the parameters
Section 7.3.3.2.

The traffic load profile SHOULD be defined as described in
Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial inspected
throughput" during the sustain phase.  Measure all KPIs as defined in
Section 7.3.3.4.

The measured KPIs during the sustain phase MUST meet the test results
validation criteria "a" defined in Section 7.3.3.3.  The test results
validation criteria "b" and "c" are OPTIONAL for step 1.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".
7.3.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish the target objective ("Target
inspected throughput") defined in the parameters table.  The test
equipment SHOULD start to measure and record all specified KPIs.
Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow step 3, if the measured value does not meet the target
value or does not fulfill the test results validation criteria.
7.3.4.3.  Step 3: Test Iteration

Determine the achievable inspected throughput within the test results

skipping to change at page 32, line 26
7.4.1.  Objective

Using HTTP traffic, determine the HTTP transaction latency when the
DUT/SUT is running at the sustainable HTTP transactions per second it
supports, under different HTTP response object sizes.

Test iterations MUST be performed with different HTTP response object
sizes in two different scenarios: one with a single transaction and
the other with multiple transactions within a single TCP connection.
For consistency, both the single and multiple transaction tests MUST
be configured with the same HTTP version.

Scenario 1: The client MUST negotiate HTTP and close the connection
with FIN immediately after completion of a single transaction (GET
and RESPONSE).

Scenario 2: The client MUST negotiate HTTP and close the connection
with FIN immediately after completion of 10 transactions (GET and
RESPONSE) within a single TCP connection.
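As an informative illustration (not part of the methodology), the
following Python sketch shows the Scenario 2 client behaviour and one
way to capture per-transaction TTFB and TTLB; the host name, port,
and URL path are assumptions of this example, and TTFB is
approximated as the time until the response headers are received.

   # Informative sketch: 10 GET transactions on one TCP connection,
   # recording time to first byte (TTFB) and time to last byte (TTLB)
   # for each transaction, then closing the connection.
   import http.client
   import time

   def scenario2_latencies(host="server.test", port=80,
                           path="/16k.bin", transactions=10):
       latencies = []
       conn = http.client.HTTPConnection(host, port)
       try:
           for _ in range(transactions):
               start = time.monotonic()
               conn.request("GET", path)
               resp = conn.getresponse()        # headers received
               ttfb = time.monotonic() - start
               resp.read()                      # drain the object
               ttlb = time.monotonic() - start
               if resp.status != 200:
                   raise RuntimeError("failed transaction")
               latencies.append((ttfb, ttlb))
       finally:
           conn.close()                         # FIN after the 10th GET
       return latencies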
7.4.2.  Test Setup

Test bed setup SHOULD be configured as defined in Section 4.  Any
specific test bed configuration changes such as number of interfaces
and interface type, etc. MUST be documented.

7.4.3.  Test Parameters

In this section, benchmarking test specific parameters SHOULD be

skipping to change at page 33, line 34

Initial objective for scenario 1: 10% of "Target objective for
scenario 1" (an optional parameter for documentation)

Initial objective for scenario 2: 10% of "Target objective for
scenario 2" (an optional parameter for documentation)

HTTP transactions per TCP connection: test scenario 1 with a single
transaction and test scenario 2 with 10 transactions

HTTP with GET command requesting a single object.  The RECOMMENDED
object sizes are 1, 16, and 64 KByte.  For each test iteration, the
client MUST request a single HTTP response object size.
7.4.3.3.  Test Results Validation Criteria

The following test criteria are defined as test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.  Ramp up and
ramp down phases SHOULD NOT be considered.

a.  Number of failed Application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  Number of Terminated TCP connections due to unexpected TCP RST
    sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered as a constant rate if any deviation of
    traffic forwarding rate is less than 5%).

d.  Concurrent TCP connections MUST be constant during steady state
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms the DUT opens and closes TCP connections
    almost at the same rate.

e.  After ramp up, the DUT MUST achieve the "Target objective" defined
    in the parameter Section 7.4.3.2 and remain in that state for the
    entire test duration (sustain phase).
skipping to change at page 34, line 43

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure traffic load profile of the test equipment to establish
"Initial objective" as defined in the parameters Section 7.4.3.2.
The traffic load profile can be defined as described in
Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain
phase.  The measured KPIs during the sustain phase MUST meet all the
test results validation criteria defined in Section 7.4.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".
7.4.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target objective" defined in
the parameters table.  The test equipment SHOULD follow the traffic
load profile definition as described in Section 4.3.4.

The test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all traffic profile phases are
completed.

Within the test results validation criteria, the DUT/SUT MUST reach
the desired value of the target objective in the sustain phase.
Measure the minimum, average and maximum values of TTFB and TTLB.
7.5.  Concurrent TCP/HTTP Connection Capacity

7.5.1.  Objective

skipping to change at page 36, line 26

HTTP Connections per second (Section 7.2)

Ramp up time (in traffic load profile for "Target concurrent
connection"): "Target concurrent connection" / "Maximum connections
per second during ramp up phase"

Ramp up time (in traffic load profile for "Initial concurrent
connection"): "Initial concurrent connection" / "Maximum connections
per second during ramp up phase"

The client MUST negotiate HTTP and each client MAY open multiple
concurrent TCP connections per server endpoint IP.

Each client sends 10 GET commands requesting a 1 KByte HTTP response
object in the same TCP connection (10 transactions/TCP connection)
and the delay (think time) between each transaction MUST be X
seconds.

X = ("Ramp up time" + "steady state time") / 10
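As an informative worked example (the numbers are assumptions of the
example, not recommended values):

   # Informative worked example for the ramp up time and think time
   # formulas above; all values are illustrative assumptions.

   target_concurrent_connections = 1_000_000
   max_cps_during_ramp_up = 50_000      # from the test in Section 7.2
   steady_state_time = 300              # seconds of sustain phase

   # Ramp up time = "Target concurrent connection" /
   #     "Maximum connections per second during ramp up phase"
   ramp_up_time = target_concurrent_connections / max_cps_during_ramp_up
   # -> 20 seconds

   # X = ("Ramp up time" + "steady state time") / 10
   think_time = (ramp_up_time + steady_state_time) / 10
   # -> 32.0 seconds between the 10 transactions of each connection

   print(ramp_up_time, think_time)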
The established connections SHOULD remain open until the ramp down
phase of the test.  During the ramp down phase, all connections

skipping to change at page 37, line 9
the whole sustain phase of the traffic load profile.

a.  Number of failed Application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of total attempted transactions.

b.  Number of Terminated TCP connections due to unexpected TCP RST
    sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered as a constant rate if any deviation of
    traffic forwarding rate is less than 5%).
7.5.3.4.  Measurement

Average Concurrent TCP Connections MUST be reported for this
benchmarking test.

7.5.4.  Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT at the sustaining period of

skipping to change at page 37, line 37

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure test equipment to establish "Initial concurrent TCP
connections" defined in Section 7.5.3.2.  Except ramp up time, the
traffic load profile SHOULD be defined as described in Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent TCP connections".  The measured KPIs during the sustain
phase MUST meet all the test results validation criteria defined in
Section 7.5.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".
7.5.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish the target objective ("Target
concurrent TCP connections").  The test equipment SHOULD follow the
traffic load profile definition (except ramp up time) as described in
Section 4.3.4.

During the ramp up and sustain phase, the other KPIs such as
inspected throughput, TCP connections per second and application
transactions per second MUST NOT reach the maximum value that the
DUT/SUT can support.

The test equipment SHOULD start to measure and record KPIs defined in
Section 7.5.3.4.  Continue the test until all traffic profile phases
are completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow step 3, if the measured value does not meet the target
value or does not fulfill the test results validation criteria.
7.5.4.3.  Step 3: Test Iteration

Determine the achievable concurrent TCP connections capacity within
the test results validation criteria.
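The methodology does not mandate a particular search strategy for
finding the achievable value.  As an informative illustration only,
the following Python sketch narrows in on the highest load that still
meets all validation criteria; the "run_load_profile" and
"criteria_met" hooks and the stop resolution are assumptions of this
example.

   # Informative sketch: one possible Step 3 iteration strategy
   # (binary search between the last passing and failing loads).

   def find_achievable(target, run_load_profile, criteria_met,
                       resolution=0.01):
       """Return the highest load that still meets all validation
       criteria, searched to within 'resolution' * 'target'."""
       low = 0.0              # highest load known to pass
       high = float(target)   # upper bound of the search
       best = 0.0
       while (high - low) > resolution * target:
           trial = (low + high) / 2.0
           kpis = run_load_profile(trial)
           if criteria_met(kpis):
               best = trial
               low = trial    # passed: try a higher load
           else:
               high = trial   # failed: try a lower load
       return best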
skipping to change at page 39, line 33

Target connections per second: Initial value from product datasheet
or the value defined based on the requirement for a specific
deployment scenario.

Initial connections per second: 10% of "Target connections per
second" (an optional parameter for documentation)

RECOMMENDED ciphers and keys defined in Section 4.3.1.3

The client MUST negotiate HTTPS and close the connection with FIN
immediately after completion of one transaction.  In each test
iteration, the client MUST send a GET command requesting a fixed
HTTPS response object size.  The RECOMMENDED object sizes are 1, 2,
4, 16, and 64 KByte.
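As an informative illustration (not part of the methodology), the
following Python sketch shows the per-connection client behaviour
described above: one TLS handshake, a single GET of a fixed-size
object, then an immediate close.  The host name, port, URL path, and
the relaxed certificate checking are assumptions of this example;
cipher and key selection SHOULD follow Section 4.3.1.3.

   # Informative sketch: one HTTPS connection carrying exactly one
   # GET transaction, then closed by the client.
   import http.client
   import ssl

   def one_https_transaction(host="server.test", port=443,
                             path="/4k.bin"):
       ctx = ssl.create_default_context()
       ctx.check_hostname = False       # test bed only
       ctx.verify_mode = ssl.CERT_NONE  # test bed only
       conn = http.client.HTTPSConnection(host, port, context=ctx)
       try:
           conn.request("GET", path)    # fixed response object size
           resp = conn.getresponse()
           resp.read()                  # drain the object
           # Any status other than 200 OK counts as a failed
           # application transaction.
           return resp.status == 200
       finally:
           conn.close()                 # FIN after one transaction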
7.6.3.3.  Test Results Validation Criteria

The following test criteria are defined as test results validation
criteria:

a.  Number of failed Application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  Number of Terminated TCP connections due to unexpected TCP RST
    sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered as a constant rate if any deviation of
    traffic forwarding rate is less than 5%).

d.  Concurrent TCP connections MUST be constant during steady state
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms the DUT opens and closes TCP connections
    almost at the same rate.

7.6.3.4.  Measurement

TCP Connections Per Second MUST be reported for each test iteration
(for each object size).
skipping to change at page 40, line 41

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure traffic load profile of the test equipment to establish
"Initial connections per second" as defined in Section 7.6.3.2.  The
traffic load profile MAY be defined as described in Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial connections per second" before
the sustain phase.  The measured KPIs during the sustain phase MUST
meet all the test results validation criteria defined in
Section 7.6.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".
7.6.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target connections per second"
defined in the parameters table.  The test equipment SHOULD follow
the traffic load profile definition as described in Section 4.3.4.

During the ramp up and sustain phase, other KPIs such as inspected
throughput, concurrent TCP connections and application transactions
per second MUST NOT reach the maximum value that the DUT/SUT can
support.  The test results for specific test iterations SHOULD NOT be
reported, if the above-mentioned KPI (especially inspected
throughput) reaches the maximum value.  (Example: If the test
iteration with 64 KByte of HTTPS response object size reached the
maximum inspected throughput limitation of the DUT, the test
iteration MAY be interrupted and the result for 64 KByte SHOULD NOT
be reported).

The test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all traffic profile phases are
completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective ("Target
connections per second") in the sustain phase.  Follow step 3, if the
measured value does not meet the target value or does not fulfill the
test results validation criteria.
7.6.4.3.  Step 3: Test Iteration

Determine the achievable connections per second within the test

skipping to change at page 42, line 42

Initial inspected throughput: 10% of "Target inspected throughput"
(an optional parameter for documentation)

Number of HTTPS response object requests (transactions) per
connection: 10

RECOMMENDED ciphers and keys defined in Section 4.3.1.3

RECOMMENDED HTTPS response object sizes: 1, 16, 64, 256 KByte, and
mixed objects defined in Table 5 in Section 7.3.3.2.
7.7.3.3.  Test Results Validation Criteria

The following test criteria are defined as test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.

a.  Number of failed Application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  Traffic SHOULD be forwarded at a constant rate (considered as a
    constant rate if any deviation of traffic forwarding rate is less
    than 5%).

c.  Concurrent TCP connections MUST be constant during steady state
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms the DUT opens and closes TCP connections
    almost at the same rate.

7.7.3.4.  Measurement

Inspected Throughput and HTTP Transactions per Second MUST be
reported for each object size.
skipping to change at page 43, line 31

The test procedure consists of three major steps.  This test
procedure MAY be repeated multiple times with different IPv4 and IPv6
traffic distribution and HTTPS response object sizes.

7.7.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure traffic load profile of the test equipment to establish
"Initial inspected throughput" as defined in the parameters
Section 7.7.3.2.

The traffic load profile SHOULD be defined as described in
Section 4.3.4.  The DUT/SUT SHOULD reach the "Initial inspected
throughput" during the sustain phase.  Measure all KPIs as defined in
Section 7.7.3.4.

The measured KPIs during the sustain phase MUST meet the test results
validation criteria "a" defined in Section 7.7.3.3.  The test results
validation criteria "b" and "c" are OPTIONAL for step 1.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".
7.7.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish the target objective ("Target
inspected throughput") defined in the parameters table.  The test
equipment SHOULD start to measure and record all specified KPIs.
Continue the test until all traffic profile phases are completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow step 3, if the measured value does not meet the target
value or does not fulfill the test results validation criteria.

7.7.4.3.  Step 3: Test Iteration

Determine the achievable average inspected throughput within the test
results validation criteria.  Final test iteration MUST be performed
skipping to change at page 45, line 36

Initial objective for scenario 1: 10% of "Target objective for
scenario 1" (an optional parameter for documentation)

Initial objective for scenario 2: 10% of "Target objective for
scenario 2" (an optional parameter for documentation)

HTTPS transactions per TCP connection: test scenario 1 with a single
transaction and test scenario 2 with 10 transactions

HTTPS with GET command requesting a single object.  The RECOMMENDED
object sizes are 1, 16, and 64 KByte.  For each test iteration, the
client MUST request a single HTTPS response object size.
7.8.3.3.  Test Results Validation Criteria

The following test criteria are defined as test results validation
criteria.  Test results validation criteria MUST be monitored during
the whole sustain phase of the traffic load profile.  Ramp up and
ramp down phases SHOULD NOT be considered.

a.  Number of failed Application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of attempted transactions.

b.  Number of Terminated TCP connections due to unexpected TCP RST
    sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered as a constant rate if any deviation of
    traffic forwarding rate is less than 5%).

d.  Concurrent TCP connections MUST be constant during steady state
    and any deviation of concurrent TCP connections SHOULD be less
    than 10%.  This confirms the DUT opens and closes TCP connections
    almost at the same rate.

e.  After ramp up, the DUT MUST achieve the "Target objective" defined
    in the parameter Section 7.8.3.2 and remain in that state for the
    entire test duration (sustain phase).

7.8.3.4.  Measurement

TTFB (minimum, average and maximum) and TTLB (minimum, average and
maximum) MUST be reported for each object size.
skipping to change at page 46, line 47

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure traffic load profile of the test equipment to establish
"Initial objective" as defined in the parameters Section 7.8.3.2.
The traffic load profile can be defined as described in
Section 4.3.4.

The DUT/SUT SHOULD reach the "Initial objective" before the sustain
phase.  The measured KPIs during the sustain phase MUST meet all the
test results validation criteria defined in Section 7.8.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".

7.8.4.2.  Step 2: Test Run with Target Objective

Configure test equipment to establish "Target objective" defined in
the parameters table.  The test equipment SHOULD follow the traffic
load profile definition as described in Section 4.3.4.

The test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all traffic profile phases are
completed.

Within the test results validation criteria, the DUT/SUT MUST reach
the desired value of the target objective in the sustain phase.
Measure the minimum, average and maximum values of TTFB and TTLB.
7.9.  Concurrent TCP/HTTPS Connection Capacity

7.9.1.  Objective

skipping to change at page 49, line 9

the whole sustain phase of the traffic load profile.

a.  Number of failed Application transactions (receiving any HTTP
    response code other than 200 OK) MUST be less than 0.001% (1 out
    of 100,000 transactions) of total attempted transactions.

b.  Number of Terminated TCP connections due to unexpected TCP RST
    sent by DUT/SUT MUST be less than 0.001% (1 out of 100,000
    connections) of total initiated TCP connections.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (considered as a constant rate if any deviation of
    traffic forwarding rate is less than 5%).

7.9.3.4.  Measurement

Average Concurrent TCP Connections MUST be reported for this
benchmarking test.
7.9.4.  Test Procedures and Expected Results

The test procedure is designed to measure the concurrent TCP
connection capacity of the DUT/SUT at the sustaining period of the
traffic load profile.  The test procedure consists of three major
steps.  This test procedure MAY be repeated multiple times with
different IPv4 and IPv6 traffic distribution.

7.9.4.1.  Step 1: Test Initialization and Qualification

Verify the link status of all connected physical interfaces.  All
interfaces are expected to be in "UP" status.

Configure test equipment to establish "Initial concurrent TCP
connections" defined in Section 7.9.3.2.  Except ramp up time, the
traffic load profile SHOULD be defined as described in Section 4.3.4.

During the sustain phase, the DUT/SUT SHOULD reach the "Initial
concurrent TCP connections".  The measured KPIs during the sustain
phase MUST meet the test results validation criteria "a" and "b"
defined in Section 7.9.3.3.

If the KPI metrics do not meet the test results validation criteria,
the test procedure MUST NOT be continued to "Step 2".
skipping to change at page 50, line 8

concurrent TCP connections").  The test equipment SHOULD follow the
traffic load profile definition (except ramp up time) as described in
Section 4.3.4.

During the ramp up and sustain phase, the other KPIs such as
inspected throughput, TCP connections per second and application
transactions per second MUST NOT reach the maximum value that the
DUT/SUT can support.

The test equipment SHOULD start to measure and record KPIs defined in
Section 7.9.3.4.  Continue the test until all traffic profile phases
are completed.

Within the test results validation criteria, the DUT/SUT is expected
to reach the desired value of the target objective in the sustain
phase.  Follow step 3, if the measured value does not meet the target
value or does not fulfill the test results validation criteria.
7.9.4.3.  Step 3: Test Iteration

Determine the achievable concurrent TCP connections within the test
results validation criteria.
8.  IANA Considerations

The IANA has assigned IPv4 and IPv6 Address Blocks in [RFC6890] that
have been registered for special purposes.  The IPv6 Address Block
2001:2::/48 has been allocated for the purpose of IPv6 Benchmarking
[RFC5180] and the IPv4 Address Block 198.18.0.0/15 has been allocated
for the purpose of IPv4 Benchmarking [RFC2544].  This assignment was
made to minimize the chance of conflict in case a testing device were
to be accidentally connected to part of the Internet.
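As an informative illustration only, a test tool can derive
non-overlapping client and server address ranges from these
registered blocks; the half-and-half split below is an assumption of
this example and is not required by this document.

   # Informative sketch: carving client and server test ranges out of
   # the benchmarking address blocks named above.
   import ipaddress

   V4_BENCH = ipaddress.ip_network("198.18.0.0/15")   # IPv4 benchmarking
   V6_BENCH = ipaddress.ip_network("2001:2::/48")     # IPv6 benchmarking

   # Split each block into two halves: one for clients, one for servers.
   v4_clients, v4_servers = V4_BENCH.subnets(prefixlen_diff=1)
   v6_clients, v6_servers = V6_BENCH.subnets(prefixlen_diff=1)

   print(v4_clients, v4_servers)   # 198.18.0.0/16 198.19.0.0/16
   print(v6_clients, v6_servers)   # 2001:2::/49 2001:2:0:8000::/49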
9.  Security Considerations

The primary goal of this document is to provide benchmarking
terminology and methodology for next-generation network security
devices.  However, readers should be aware that there is some overlap
between performance and security issues.  Specifically, the optimal
configuration for network security device performance may not be the
most secure, and vice-versa.  The Cipher suites recommended in this
document are for test purposes only.  The Cipher suite

skipping to change at page 52, line 5
[RFC2647]  Newman, D., "Benchmarking Terminology for Firewall
           Performance", RFC 2647, DOI 10.17487/RFC2647, August 1999,
           <https://www.rfc-editor.org/info/rfc2647>.

[RFC3511]  Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
           "Benchmarking Methodology for Firewall Performance",
           RFC 3511, DOI 10.17487/RFC3511, April 2003,
           <https://www.rfc-editor.org/info/rfc3511>.

[RFC5180]  Popoviciu, C., Hamza, A., Van de Velde, G., and D.
           Dugatkin, "IPv6 Benchmarking Methodology for Network
           Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180,
           May 2008, <https://www.rfc-editor.org/info/rfc5180>.

[RFC6815]  Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
           "Applicability Statement for RFC 2544: Use on Production
           Networks Considered Harmful", RFC 6815,
           DOI 10.17487/RFC6815, November 2012,
           <https://www.rfc-editor.org/info/rfc6815>.

[RFC6890]  Cotton, M., Vegoda, L., Bonica, R., Ed., and B. Haberman,
           "Special-Purpose IP Address Registries", BCP 153,
           RFC 6890, DOI 10.17487/RFC6890, April 2013,
           <https://www.rfc-editor.org/info/rfc6890>.

[RFC8446]  Rescorla, E., "The Transport Layer Security (TLS) Protocol
           Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018,
           <https://www.rfc-editor.org/info/rfc8446>.
Appendix A.  Test Methodology - Security Effectiveness Evaluation

A.1.  Test Objective

This test methodology verifies that the DUT/SUT is able to detect,
prevent, and report vulnerabilities.

skipping to change at page 53, line 17

In this section, the benchmarking test specific parameters SHOULD be
defined.
A.3.1.  DUT/SUT Configuration Parameters

DUT/SUT configuration parameters MUST conform to the requirements
defined in Section 4.2.  The same DUT configuration MUST be used for
the Security effectiveness test as well as for the benchmarking test
cases defined in Section 7.  The DUT/SUT MUST be configured in inline
mode; all detected attack traffic MUST be dropped and the session
SHOULD be reset.
A.3.2. Test Equipment Configuration Parameters

Test equipment configuration parameters MUST conform to the
requirements defined in Section 4.3.  The same client and server IP
ranges MUST be configured as used in the benchmarking test cases.  In
addition, the following parameters MUST be documented for this
benchmarking test (a short calculation sketch for the background
traffic load follows this list):

o  Background Traffic: 45% of maximum HTTP throughput and 45% of

skipping to change at page 53, line 48

   cipher configured for HTTPS traffic related benchmarking tests
   (Section 7.6 - Section 7.9)

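The background traffic load is derived from the maximum HTTP and HTTPS
throughput values measured in the corresponding benchmarking tests.
The following Python sketch only illustrates that derivation; the
function name, variable names, and example figures are hypothetical,
and only the 45% factor is taken from the parameter list above.

   # Illustrative only: derive background traffic targets from the
   # previously measured maximum throughput values (Gbit/s).
   BACKGROUND_FRACTION = 0.45  # 45% of maximum throughput, per this appendix

   def background_targets(max_http_gbps, max_https_gbps):
       """Return the HTTP and HTTPS background traffic rates in Gbit/s."""
       return (max_http_gbps * BACKGROUND_FRACTION,
               max_https_gbps * BACKGROUND_FRACTION)

   # Example with hypothetical measured maxima of 9.2 and 6.0 Gbit/s.
   http_bg, https_bg = background_targets(9.2, 6.0)
   print(f"HTTP background: {http_bg:.2f} Gbit/s, "
         f"HTTPS background: {https_bg:.2f} Gbit/s")
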
A.4. Test Results Validation Criteria

The following criteria are defined as the test results validation
criteria; an illustrative check is sketched after the list.  The test
results validation criteria MUST be monitored during the whole test
duration.

a.  The number of failed application transactions in the background
    traffic MUST be less than 0.01% of attempted transactions.

b.  The number of terminated TCP connections in the background
    traffic (due to unexpected TCP RSTs sent by the DUT/SUT) MUST be
    less than 0.01% of the total initiated TCP connections in the
    background traffic.

c.  During the sustain phase, traffic SHOULD be forwarded at a
    constant rate (the rate is considered constant if any deviation
    of the traffic forwarding rate is less than 5%).

d.  False positives MUST NOT occur in the background traffic.

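The criteria above can be evaluated continuously from the test
equipment counters.  The following Python sketch shows one possible
check, assuming the counters and rate samples are collected for the
whole test duration; all names are illustrative, and reading criterion
c as deviation from the mean sustain-phase rate is an assumption, not
part of the methodology.

   # Illustrative check of validation criteria a-d (hypothetical names).
   def validate(failed_txn, attempted_txn, reset_conns, initiated_conns,
                rate_samples_gbps, false_positives):
       # a. Failed application transactions < 0.01% of attempted ones.
       a = failed_txn < 0.0001 * attempted_txn
       # b. Unexpectedly terminated TCP connections < 0.01% of initiated.
       b = reset_conns < 0.0001 * initiated_conns
       # c. Sustain-phase forwarding rate deviation below 5% of the mean.
       mean = sum(rate_samples_gbps) / len(rate_samples_gbps)
       c = all(abs(s - mean) / mean < 0.05 for s in rate_samples_gbps)
       # d. No false positives in the background traffic.
       d = false_positives == 0
       return {"a": a, "b": b, "c": c, "d": d}
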
A.5. Measurement

The following KPI metrics MUST be reported for this test scenario (an
illustrative calculation is sketched after the list):

Mandatory KPIs:

o  Blocked CVEs: It SHOULD be represented in the following ways:

   *  Number of blocked CVEs out of total CVEs

   *  Percentage of blocked CVEs

o  Unblocked CVEs: It SHOULD be represented in the following ways:

   *  Number of unblocked CVEs out of total CVEs

   *  Percentage of unblocked CVEs

o  Background traffic behavior: It SHOULD be represented in one of the
   following ways:

   *  No impact: Considered as "no impact" if any deviation of the
      traffic forwarding rate is less than or equal to 5% (constant
      rate).

   *  Minor impact: Considered as "minor impact" if any deviation of
      the traffic forwarding rate is greater than 5% and less than or
      equal to 10% (e.g., small spikes).

   *  Heavily impacted: Considered as "heavily impacted" if any
      deviation of the traffic forwarding rate is greater than 10%
      (e.g., large spikes) or the background HTTP(S) throughput is
      reduced by more than 10%.

o  DUT/SUT reporting accuracy: DUT/SUT MUST report all detected
   vulnerabilities.

Optional KPIs:

o  List of unblocked CVEs

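The mandatory KPIs above reduce to simple ratios over the emulated CVE
set plus a label for the background traffic behavior.  The following
Python sketch shows one way to compute them; the function names and
example values are hypothetical, while the 5% and 10% boundaries are
taken from the list above.

   # Illustrative KPI report for the security effectiveness test.
   def cve_kpis(total_cves, blocked_cves):
       unblocked = total_cves - blocked_cves
       return {
           "blocked":   (blocked_cves, 100.0 * blocked_cves / total_cves),
           "unblocked": (unblocked,    100.0 * unblocked / total_cves),
       }

   def background_impact(max_rate_deviation_pct, throughput_drop_pct=0.0):
       """Label background traffic behavior from the worst observed
       forwarding-rate deviation and any HTTP(S) throughput reduction."""
       if max_rate_deviation_pct > 10 or throughput_drop_pct > 10:
           return "Heavily impacted"
       if max_rate_deviation_pct > 5:
           return "Minor impact"
       return "No impact"

   print(cve_kpis(total_cves=500, blocked_cves=487))  # hypothetical numbers
   print(background_impact(3.2))                      # -> "No impact"
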
A.6. Test Procedures and Expected Results

skipping to change at page 55, line 22

MAY be repeated multiple times with different IPv4 and IPv6 traffic
distribution.

A.6.1. Step 1: Background Traffic

Generate the background traffic at the transmission rate defined in
the parameter section.

The DUT/SUT MUST reach the target objective (HTTP(S) throughput) in
the sustain phase.  The measured KPIs during the sustain phase MUST
meet all the test results validation criteria defined in
Appendix A.4.

If the KPI metrics do not meet the acceptance criteria, the test
procedure MUST NOT continue to "Step 2".

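This requirement acts as a gate between the two steps.  As a minimal
sketch, assuming a step-1 result structure such as the one returned by
the validation check sketched in Appendix A.4 above, the control flow
could look like this; the function name and error handling are
illustrative only.

   # Illustrative gate: do not continue to Step 2 unless Step 1 passed.
   def proceed_to_step2(step1_results):
       """step1_results maps criterion id ('a'..'d') to True/False."""
       failed = [k for k, ok in step1_results.items() if not ok]
       if failed:
           raise RuntimeError(f"Step 1 validation failed for {failed}; "
                              "the test MUST NOT continue to Step 2")
       print("Step 1 passed; starting CVE emulation (Step 2)")

   proceed_to_step2({"a": True, "b": True, "c": True, "d": True})
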
A.6.2. Step 2: CVE Emulation

While generating the background traffic (in the sustain phase), send
the CVE traffic as defined in the parameter section.

The test equipment SHOULD start to measure and record all specified
KPIs.  Continue the test until all CVEs are sent.

The measured KPIs MUST meet all the test results validation criteria
defined in Appendix A.4.

In addition, the DUT/SUT SHOULD report the vulnerabilities correctly.

Appendix B. DUT/SUT Classification

This document attempts to classify the DUT/SUT into four different
categories based on its maximum supported firewall throughput
performance number defined in the vendor datasheet.  This
classification MAY help the user determine the specific configuration
scale (e.g., number of ACL entries), traffic profiles, and attack
traffic profiles, scaling those proportionally to the DUT/SUT sizing
category.

The four different categories are Extra Small, Small, Medium, and
Large.  The RECOMMENDED throughput values for these categories are as
follows (an illustrative classification sketch follows the list):

Extra Small (XS) - supported throughput less than or equal to 1Gbit/s

Small (S) - supported throughput greater than 1Gbit/s and less than
or equal to 5Gbit/s

Medium (M) - supported throughput greater than 5Gbit/s and less than
or equal to 10Gbit/s

Large (L) - supported throughput greater than 10Gbit/s

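A small helper can map a datasheet throughput figure onto these
categories.  The following Python sketch is one possible reading of
the boundaries listed above (values in Gbit/s, upper bounds
inclusive); it is illustrative only and not part of the methodology.

   # Illustrative DUT/SUT sizing classification by datasheet firewall
   # throughput in Gbit/s, using the RECOMMENDED boundaries above.
   def dut_category(throughput_gbps: float) -> str:
       if throughput_gbps <= 1:
           return "Extra Small (XS)"
       if throughput_gbps <= 5:
           return "Small (S)"
       if throughput_gbps <= 10:
           return "Medium (M)"
       return "Large (L)"

   for value in (0.5, 3, 10, 40):  # example datasheet figures
       print(value, "Gbit/s ->", dut_category(value))
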
Authors' Addresses

Balamuhunthan Balarajah
Berlin
Germany

Email: bm.balarajah@gmail.com
 End of changes. 104 change blocks. 