Internet-Draft                              Bhuvaneswaran Vengainathan
Network Working Group                                      Anton Basil
Intended Status: Informational                      Veryx Technologies
Expires: August 25, 2018                                Mark Tassinari
                                                       Hewlett-Packard
                                                        Vishwas Manral
                                                              Nano Sec
                                                           Sarah Banks
                                                        VSS Monitoring
                                                     February 25, 2018

        Benchmarking Methodology for SDN Controller Performance
           draft-ietf-bmwg-sdn-controller-benchmark-meth-08
Abstract

   This document defines the methodologies for benchmarking control
   plane performance of SDN controllers. An SDN controller is a core
   component in the software-defined networking architecture that
   controls the network behavior. Terminology related to benchmarking
   SDN controllers is described in the companion terminology document
   [I-D.sdn-controller-benchmark-term]. SDN controllers have been
   implemented with many varying designs in order to achieve their
   intended network functionality. Hence, the authors have taken the
   approach of considering an SDN controller as a black box, defining
   the methodology in a manner that is agnostic to protocols and
   network services supported by controllers. The intent of this
   document is to provide a standard mechanism to measure the
   performance of all controller implementations.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF). Note that other groups may also distribute
   working documents as Internet-Drafts. The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 25, 2018.
Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1. Introduction...................................................4
   2. Scope..........................................................4
   3. Test Setup.....................................................4
      3.1. Test setup - Controller working in Standalone Mode........5
      3.2. Test setup - Controller working in Cluster Mode...........6
   4. Test Considerations............................................7
      4.1. Network Topology..........................................7
      4.2. Test Traffic..............................................7
      4.3. Test Emulator Requirements................................7
      4.4. Connection Setup..........................................7
      4.5. Measurement Point Specification and Recommendation........8
      4.6. Connectivity Recommendation...............................8
      4.7. Test Repeatability........................................8
   5. Benchmarking Tests.............................................9
      5.1. Performance...............................................9
         5.1.1. Network Topology Discovery Time......................9
         5.1.2. Asynchronous Message Processing Time................11
         5.1.3. Asynchronous Message Processing Rate................13
         5.1.4. Reactive Path Provisioning Time.....................15
         5.1.5. Proactive Path Provisioning Time....................16
         5.1.6. Reactive Path Provisioning Rate.....................18
         5.1.7. Proactive Path Provisioning Rate....................19
         5.1.8. Network Topology Change Detection Time..............21
      5.2. Scalability..............................................23
         5.2.1. Control Session Capacity............................23
         5.2.2. Network Discovery Size..............................23
         5.2.3. Forwarding Table Capacity...........................24
      5.3. Security.................................................26
         5.3.1. Exception Handling..................................26
         5.3.2. Denial of Service Handling..........................27
      5.4. Reliability..............................................29
         5.4.1. Controller Failover Time............................29
         5.4.2. Network Re-Provisioning Time........................30
   6. References....................................................32
      6.1. Normative References.....................................32
      6.2. Informative References...................................32
   7. IANA Considerations...........................................32
   8. Security Considerations.......................................32
   9. Acknowledgments...............................................33
   Appendix A. Example Test Topology................................34
      A.1. Leaf-Spine Topology......................................34
   Appendix B. Benchmarking Methodology using OpenFlow Controllers..35
      B.1. Protocol Overview........................................35
      B.2. Messages Overview........................................35
      B.3. Connection Overview......................................35
      B.4. Performance Benchmarking Tests...........................36
         B.4.1. Network Topology Discovery Time.....................36
         B.4.2. Asynchronous Message Processing Time................37
         B.4.3. Asynchronous Message Processing Rate................38
         B.4.4. Reactive Path Provisioning Time.....................39
         B.4.5. Proactive Path Provisioning Time....................40
         B.4.6. Reactive Path Provisioning Rate.....................41
         B.4.7. Proactive Path Provisioning Rate....................42
         B.4.8. Network Topology Change Detection Time..............43
      B.5. Scalability..............................................44
         B.5.1. Control Sessions Capacity...........................44
         B.5.2. Network Discovery Size..............................44
         B.5.3. Forwarding Table Capacity...........................45
      B.6. Security.................................................47
         B.6.1. Exception Handling..................................47
         B.6.2. Denial of Service Handling..........................48
      B.7. Reliability..............................................50
         B.7.1. Controller Failover Time............................50
         B.7.2. Network Re-Provisioning Time........................51
   Authors' Addresses...............................................54
1. Introduction

   This document provides generic methodologies for benchmarking SDN
   controller performance. An SDN controller may support many
   northbound and southbound protocols, implement a wide range of
   applications, and work alone or as a group to achieve the desired
   functionality. This document considers an SDN controller as a black
   box, regardless of design and implementation. The tests defined in
   this document can be used to benchmark an SDN controller for
   performance, scalability, reliability and security, independent of
   northbound and southbound protocols. These tests can be performed on
   an SDN controller running as a virtual machine (VM) instance or on a
   bare metal server. This document is intended for those who want to
   measure SDN controller performance as well as to compare the
   performance of various SDN controllers.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.
2. Scope

   This document defines a methodology to measure the networking
   metrics of SDN controllers. For the purpose of this memo, the SDN
   controller is a function that manages and controls Network Devices.
   Any SDN controller without a control capability is out of scope for
   this memo. The tests defined in this document enable benchmarking of
   SDN Controllers in two ways: as a standalone controller and as a
   cluster of homogeneous controllers. These tests are recommended for
   execution in lab environments rather than in live network
   deployments. Performance benchmarking of a federation of controllers
   (i.e., a set of SDN controllers managing different domains) is
   beyond the scope of this document.
3. Test Setup

   The tests defined in this document enable measurement of an SDN
   controller's performance in standalone mode and cluster mode. This
   section defines common reference topologies that are later referred
   to in individual tests.
3.1. Test setup - Controller working in Standalone Mode

+-----------------------------------------------------------+
|               Application Plane Test Emulator             |
|                                                           |
|       +-----------------+     +-------------+             |
|       |   Application   |     |   Service   |             |
|       +-----------------+     +-------------+             |
|                                                           |
+-----------------------------+(I2)-------------------------+
                              |
                              | (Northbound interfaces)
              +-------------------------------+
              |       +----------------+      |
              |       | SDN Controller |      |
              |       +----------------+      |
              |                               |
              |    Device Under Test (DUT)    |
              +-------------------------------+
                              | (Southbound interfaces)
                              |
+-----------------------------+(I1)-------------------------+
|                             |                             |
|   +-----------+                 +-----------+             |
|   |  Network  |                 |  Network  |             |
|   | Device 2  |------..---------| Device n-1|             |
|   +-----------+                 +-----------+             |
|        /   \                        /   \                 |
|       /     \                      /     \                |
|  l0  /       \         X          /       \  ln           |
|     /         \                  /         \              |
|   +-----------+                 +-----------+             |
|   |  Network  |                 |  Network  |             |
|   | Device 1  |.................| Device n  |             |
|   +-----------+                 +-----------+             |
|         |                             |                   |
|   +---------------+           +---------------+           |
|   | Test Traffic  |           | Test Traffic  |           |
|   |  Generator    |           |  Generator    |           |
|   |    (TP1)      |           |    (TP2)      |           |
|   +---------------+           +---------------+           |
|                                                           |
|            Forwarding Plane Test Emulator                 |
+-----------------------------------------------------------+

                           Figure 1
3.2. Test setup - Controller working in Cluster Mode

+-----------------------------------------------------------+
|               Application Plane Test Emulator             |
|                                                           |
|       +-----------------+     +-------------+             |
|       |   Application   |     |   Service   |             |
|       +-----------------+     +-------------+             |
|                                                           |
+-----------------------------+(I2)-------------------------+
                              |
                              | (Northbound interfaces)
 +---------------------------------------------------------+
 |                                                         |
 |  ------------------             ------------------      |
 | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
 |  ------------------             ------------------      |
 |                                                         |
 |                Device Under Test (DUT)                  |
 +---------------------------------------------------------+
                              | (Southbound interfaces)
                              |
+-----------------------------+(I1)-------------------------+
|                             |                             |
|   +-----------+                 +-----------+             |
|   |  Network  |                 |  Network  |             |
|   | Device 2  |------..---------| Device n-1|             |
|   +-----------+                 +-----------+             |
|        /   \                        /   \                 |
|       /     \                      /     \                |
|  l0  /       \         X          /       \  ln           |
|     /         \                  /         \              |
|   +-----------+                 +-----------+             |
|   |  Network  |                 |  Network  |             |
|   | Device 1  |.................| Device n  |             |
|   +-----------+                 +-----------+             |
|         |                             |                   |
|   +---------------+           +---------------+           |
|   | Test Traffic  |           | Test Traffic  |           |
|   |  Generator    |           |  Generator    |           |
|   |    (TP1)      |           |    (TP2)      |           |
|   +---------------+           +---------------+           |
|                                                           |
|            Forwarding Plane Test Emulator                 |
+-----------------------------------------------------------+

                           Figure 2
4. Test Considerations

4.1. Network Topology

   The test cases SHOULD use Leaf-Spine topology with at least 1
   Network Device in the topology for benchmarking. The test traffic
   generators TP1 and TP2 SHOULD be connected to the first and the last
   leaf Network Device. If a test case uses a test topology with 1
   Network Device, the test traffic generators TP1 and TP2 SHOULD be
   connected to the same node. However, to achieve a complete
   performance characterization of the SDN controller, it is
   recommended that the controller be benchmarked for many network
   topologies and a varying number of Network Devices. This document
   includes a sample test topology, defined in Appendix A, for
   reference. Further, care should be taken to make sure that a loop
   prevention mechanism is enabled either in the SDN controller, or in
   the network, when the topology contains redundant network paths.
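   The Leaf-Spine arrangement described above can also be expressed
   programmatically when scripting the emulated forwarding plane. The
   following Python sketch is illustrative only; the function name and
   the assumption that every leaf connects to every spine are examples
   for this sketch, not requirements of the methodology. It builds a
   two-tier Leaf-Spine connectivity map and records where the test
   traffic generators TP1 and TP2 attach (the first and the last leaf):

      # Illustrative only: describe a two-tier Leaf-Spine test topology
      # as an adjacency map, attaching TP1/TP2 to first and last leaf.

      def leaf_spine(num_spines, num_leaves):
          spines = ["spine-%d" % i for i in range(1, num_spines + 1)]
          leaves = ["leaf-%d" % i for i in range(1, num_leaves + 1)]
          # Assumed full mesh between tiers: each leaf links to each spine.
          links = [(leaf, spine) for leaf in leaves for spine in spines]
          tps = {"TP1": leaves[0], "TP2": leaves[-1]}
          return {"nodes": spines + leaves, "links": links, "tps": tps}

      # Example: 2 spines and 4 leaves; TP1 -> leaf-1, TP2 -> leaf-4.
      topology = leaf_spine(num_spines=2, num_leaves=4)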
4.2. Test Traffic

   Test traffic is used to notify the controller about the asynchronous
   arrival of new flows. The test cases SHOULD use frame sizes of 128,
   512 and 1508 bytes for benchmarking. Tests using jumbo frames are
   optional.
4.3. Test Emulator Requirements

   The test emulator SHOULD timestamp the transmitted and received
   control messages to/from the controller on the established network
   connections. The test cases use these values to compute the
   controller processing time.
4.4. Connection Setup

   There may be controller implementations that support unencrypted and
   encrypted network connections with Network Devices. Further, the
   controller may have backward compatibility with Network Devices
   running older versions of southbound protocols. It may be useful to
   measure the controller performance with one or more applicable
   connection setup methods defined below. For cases with encrypted
   communications between the controller and the switch, key management
   and key exchange MUST take place before any performance or benchmark
   measurements.

   1. Unencrypted connection with Network Devices, running same
      protocol version.
   2. Unencrypted connection with Network Devices, running different
      protocol versions.
      Example:
      a. Controller running current protocol version and switch
         running older protocol version
      b. Controller running older protocol version and switch
         running current protocol version
   3. Encrypted connection with Network Devices, running same
      protocol version
   4. Encrypted connection with Network Devices, running different
      protocol versions.
      Example:
      a. Controller running current protocol version and switch
   parameters and controller settings parameters MUST be reflected in
   the test report.

   Test Configuration Parameters:

   1. Controller name and version
   2. Northbound protocols and versions
   3. Southbound protocols and versions
   4. Controller redundancy mode (Standalone or Cluster Mode)
   5. Connection setup (Unencrypted or Encrypted)
   6. Network Device Type (Physical or Virtual or Emulated)
   7. Number of Nodes
   8. Number of Links
   9. Dataplane Test Traffic Type
   10. Controller System Configuration (e.g., Physical or Virtual
       Machine, CPU, Memory, Caches, Operating System, Interface
       Speed, Storage)
   11. Reference Test Setup (e.g., Section 3.1)
   Controller Settings Parameters:

   1. Topology re-discovery timeout
   2. Controller redundancy mode (e.g., active-standby)
   3. Controller state persistence enabled/disabled

   To ensure the repeatability of the test, the following capabilities
   of the test emulator SHOULD be reported:

   1. Maximum number of Network Devices that the forwarding plane
      the deployed network topology, or when the discovered topology
      information returns the same details for 3 consecutive queries.
   6. Record the time of the last discovery message (Tmn) sent to the
      controller from the forwarding plane test emulator interface (I1)
      when the trial completed successfully (e.g., the topology
      matches).

Measurement:

   Topology Discovery Time Tr1 = Tmn - Tm1.

                                              Tr1 + Tr2 + Tr3 .. Trn
   Average Topology Discovery Time (TDm) = ---------------------------
                                                   Total Trials

                                             SUM[SQUAREOF(Tri - TDm)]
   Topology Discovery Time Variance (TDv) = --------------------------
                                                 Total Trials - 1
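   The statistics above can be computed directly from the recorded
   per-trial values. The following Python sketch is illustrative only
   and assumes the per-trial Topology Discovery Times (Tr1..Trn) have
   already been measured and collected into a list:

      # Illustrative only: average (TDm) and sample variance (TDv) of
      # the per-trial Topology Discovery Times, as defined above.

      def discovery_time_stats(trial_times):
          n = len(trial_times)
          if n < 2:
              raise ValueError("at least two trials are needed for TDv")
          tdm = sum(trial_times) / n                                # TDm
          tdv = sum((t - tdm) ** 2 for t in trial_times) / (n - 1)  # TDv
          return tdm, tdv

      # Hypothetical example: three trials measured in seconds.
      tdm, tdv = discovery_time_stats([4.7, 5.1, 4.9])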
Reporting Format:

   The Topology Discovery Time results MUST be reported in the format
   of a table, with a row for each successful iteration. The last row
   of the table indicates the Topology Discovery Time variance and the
   previous row indicates the average Topology Discovery Time.

   If this test is repeated with a varying number of nodes over the
   same topology, the results SHOULD be reported in the form of a
   graph. The X coordinate SHOULD be the Number of nodes (N), the Y
   coordinate SHOULD be the average Topology Discovery Time.
5.1.2. Asynchronous Message Processing Time

Objective:

   The time taken by controller(s) to process an asynchronous message,
   defined as the interval starting with an asynchronous message from a
   network device after the discovery of all the devices by the
   controller(s), ending with a response message from the controller(s)
   at its Southbound interface.
Reference Test Setup:

   This test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

Prerequisite:

   1. The controller MUST have successfully completed the network
      topology discovery for the connected Network Devices.

Procedure:

   1. Generate asynchronous messages from every connected Network
      Device, to the SDN controller, one at a time in series from the
      forwarding plane test emulator for the trial duration.

   2. Record every request transmit time (T1) and the corresponding
      response received time (R1) at the forwarding plane test emulator
      interface (I1) for every successful message exchange.

Measurement:

                                              (R1-T1) + (R2-T2)..(Rn-Tn)
   Asynchronous Message Processing Time Tr1 = --------------------------
                                                         Nrx

   Where Nrx is the total number of successful messages exchanged

                                                   Tr1 + Tr2 + Tr3..Trn
   Average Asynchronous Message Processing Time = ---------------------
                                                       Total Trials

   Asynchronous Message Processing Time Variance (TAMv) =

                            SUM[SQUAREOF(Tri - TAMm)]
                           ---------------------------
                                Total Trials - 1

   Where TAMm is the Average Asynchronous Message Processing Time.
Reporting Format:

   The Asynchronous Message Processing Time results MUST be reported in
   the format of a table with a row for each iteration. The last row of
   the table indicates the Asynchronous Message Processing Time
   variance and the previous row indicates the average Asynchronous
   Message Processing Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Successful messages exchanged (Nrx)

   - Percentage of unsuccessful messages exchanged, computed using the
     formula ((1 - Nrx/Ntx) * 100), where Ntx is the total number of
     messages transmitted to the controller.
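   For illustration only, the per-trial value and the unsuccessful-
   message percentage can be derived from the emulator's timestamps as
   in the Python sketch below; the (transmit, response) time pairs and
   the transmit count Ntx are assumed to be taken from the test
   emulator's records:

      # Illustrative only: per-trial Asynchronous Message Processing
      # Time (Tr) and percentage of unsuccessful messages, as defined
      # above. 'exchanges' holds (transmit_time, response_time) pairs
      # for the successful exchanges of one trial; 'ntx' is the total
      # number of asynchronous messages transmitted in that trial.

      def async_processing_trial(exchanges, ntx):
          nrx = len(exchanges)                            # Nrx
          tr = sum(r - t for t, r in exchanges) / nrx     # Tr
          loss_pct = (1 - nrx / ntx) * 100                # loss %
          return tr, nrx, loss_pct

      # Hypothetical trial: three successful exchanges of four sent.
      tr, nrx, loss = async_processing_trial(
          [(0.000, 0.004), (0.010, 0.013), (0.020, 0.025)], ntx=4)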
   If this test is repeated with a varying number of nodes on the same
   topology, the results SHOULD be reported in the form of a graph. The
   X coordinate SHOULD be the Number of nodes (N), the Y coordinate
   SHOULD be the average Asynchronous Message Processing Time.
5.1.3. Asynchronous Message Processing Rate

Objective:

   Measure the number of responses to asynchronous messages (such as
   new flow arrival notification message, etc.) for which the
   controller(s) performed processing and replied with a valid and
   productive (non-trivial) response message.

   This test will measure two benchmarks on Asynchronous Message

   The results MAY be presented in the form of a graph. The X axis
   SHOULD be the Offered rate, and dual Y axes would represent
   Asynchronous Message Processing Rate and Loss Ratio, respectively.

   If this test is repeated with a varying number of nodes over the
   same topology, the results SHOULD be reported in the form of a
   graph. The X axis SHOULD be the Number of nodes (N), the Y axis
   SHOULD be the Asynchronous Message Processing Rate. Both the Maximum
   and the Loss-Free Rates should be plotted for each N.
5.1.4. Reactive Path Provisioning Time

Objective:

   The time taken by the controller to setup a path reactively between
   source and destination node, defined as the interval starting with
   the first flow provisioning request message received by the
   controller(s) at its Southbound interface, ending with the last flow
   provisioning response message sent from the controller(s) at its
   Southbound interface.

   forwarding plane test emulator interface (I1).

Measurement:

   Reactive Path Provisioning Time Tr1 = Tdf1 - Tsf1.

                                               Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Time = --------------------------
                                                    Total Trials

                                                    SUM[SQUAREOF(Tri-TRPm)]
   Reactive Path Provisioning Time Variance (TRPv) = ----------------------
                                                         Total Trials - 1

   Where TRPm is the Average Reactive Path Provisioning Time.

Reporting Format:

   The Reactive Path Provisioning Time results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Reactive Path Provisioning Time variance and the
   previous row indicates the Average Reactive Path Provisioning Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Number of Network Devices in the path
5.1.5. Proactive Path Provisioning Time

Objective:

   interface I1.

Measurement:

   Proactive Flow Provisioning Time Tr1 = Tdf1 - Tsf1.

                                                Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Time = --------------------------
                                                     Total Trials

                                                     SUM[SQUAREOF(Tri-TPPm)]
   Proactive Path Provisioning Time Variance (TPPv) = ----------------------
                                                          Total Trials - 1

   Where TPPm is the Average Proactive Path Provisioning Time.

Reporting Format:

   The Proactive Path Provisioning Time results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Proactive Path Provisioning Time variance and
   the previous row indicates the Average Proactive Path Provisioning
   Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Number of Network Devices in the path
5.1.6. Reactive Path Provisioning Rate

Objective:

   The maximum number of independent paths a controller can
   concurrently establish per second between source and destination
   nodes reactively, defined as the number of paths provisioned per
   second by the controller(s) at its Southbound interface for the flow
   provisioning requests received for path provisioning at its
   Southbound interface between the start of the test and the expiry of
   the given trial duration.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.

Measurement:

                                            Ndf
   Reactive Path Provisioning Rate Tr1 = -------
                                            Td

                                              Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Rate = -------------------------
                                                   Total Trials

                                                    SUM[SQUAREOF(Tri-RPPm)]
   Reactive Path Provisioning Rate Variance (RPPv) = ----------------------
                                                         Total Trials - 1

   Where RPPm is the Average Reactive Path Provisioning Rate.
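   As an illustration, the per-trial rate and the summary statistics
   can be computed as in the following Python sketch, assuming Ndf (the
   number of paths observed as provisioned on the dataplane) and Td
   (the trial duration in seconds) are supplied by the test emulator
   for each trial:

      # Illustrative only: Reactive Path Provisioning Rate per trial
      # and its average (RPPm) and sample variance (RPPv) across
      # trials. Each trial is given as (ndf, td).

      def provisioning_rate_stats(trials):
          rates = [ndf / td for ndf, td in trials]              # Tr
          n = len(rates)
          rppm = sum(rates) / n                                 # RPPm
          rppv = sum((r - rppm) ** 2 for r in rates) / (n - 1)  # RPPv
          return rates, rppm, rppv

      # Hypothetical example: three 60-second trials.
      rates, rppm, rppv = provisioning_rate_stats([(600, 60),
                                                   (630, 60),
                                                   (615, 60)])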
Reporting Format:

   The Reactive Path Provisioning Rate results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Reactive Path Provisioning Rate variance and the
   previous row indicates the Average Reactive Path Provisioning Rate.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Number of Network Devices in the path

   - Offered rate
5.1.7. Proactive Path Provisioning Rate

Objective:

   Measure the maximum number of independent paths a controller can
   concurrently establish per second between source and destination
   nodes proactively, defined as the number of paths provisioned per
   second by the controller(s) at its Southbound interface for the
   paths requested in its Northbound interface between the start of the
   test and the expiry of the given trial duration. The measurement is
   based on dataplane observations of successful path activation.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

Prerequisite:

   1. The controller MUST contain the network topology information for
      the deployed network topology.

Measurement:

                                             Ndf
   Proactive Path Provisioning Rate Tr1 = -------
                                             Td

                                               Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Rate = -------------------------
                                                    Total Trials

                                                     SUM[SQUAREOF(Tri-PPPm)]
   Proactive Path Provisioning Rate Variance (PPPv) = ----------------------
                                                          Total Trials - 1

   Where PPPm is the Average Proactive Path Provisioning Rate.

Reporting Format:

   The Proactive Path Provisioning Rate results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Proactive Path Provisioning Rate variance and
   the previous row indicates the Average Proactive Path Provisioning
   Rate.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Number of Network Devices in the path

   - Offered rate
5.1.8. Network Topology Change Detection Time

   emulator interface (I1)

Measurement:

   Network Topology Change Detection Time Tr1 = Tcd - Tcn.

                                                     Tr1 + Tr2 + Tr3 .. Trn
   Average Network Topology Change Detection Time = -----------------------
                                                          Total Trials

   Network Topology Change Detection Time Variance (NTDv) =

                            SUM[SQUAREOF(Tri - NTDm)]
                           ---------------------------
                                Total Trials - 1

   Where NTDm is the Average Network Topology Change Detection Time.

Reporting Format:

   The Network Topology Change Detection Time results MUST be reported
   in the format of a table with a row for each iteration. The last row
   of the table indicates the Network Topology Change Detection Time
   variance and the previous row indicates the average Network Topology
   Change Detection Time.
5.2. Scalability

5.2.1. Control Session Capacity

Objective:

   Measure the maximum number of control sessions the controller can
   maintain, defined as the number of sessions that the controller can
   accept from network devices, starting with the first control

   information either through the controller's management interface or
   northbound interface.

Procedure:

   1. Establish the network connections between the controller and the
      network nodes.
   2. Query the controller for the discovered network topology
      information and compare it with the deployed network topology
      information.
   3. If the comparison is successful, increase the number of nodes by
      1 and repeat the trial.
      If the comparison is unsuccessful, decrease the number of nodes
      by 1 and repeat the trial.
   4. Continue the trial until the comparison of step 3 is successful.
   5. Record the number of nodes for the last trial (Ns) where the
      topology comparison was successful.

Measurement:

   Network Discovery Size = Ns.
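   The step-up/step-down search in the procedure above lends itself to
   automation. The Python sketch below is illustrative only;
   deploy_topology() and topologies_match() are assumed helpers
   supplied by the test environment (they are not defined by this
   document) that emulate a topology of a given size and compare the
   discovered topology against the deployed one:

      # Illustrative only: automate the Network Discovery Size search.
      # deploy_topology(n) emulates n Network Devices;
      # topologies_match(n) queries the controller and compares the
      # discovered topology with the deployed one, returning True on
      # a successful comparison.

      def network_discovery_size(start_nodes, deploy_topology,
                                 topologies_match):
          n = start_nodes
          last_success = None
          while n > 0:
              deploy_topology(n)
              if topologies_match(n):
                  last_success = n    # candidate Ns
                  n += 1              # comparison succeeded: grow by 1
              elif last_success is not None:
                  return last_success # first failure after a success
              else:
                  n -= 1              # shrink until a comparison succeeds
          raise RuntimeError("no topology size could be discovered")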
Reporting Format:

   The Network Discovery Size results MUST be reported in addition to
   the configuration parameters captured in section 5.

Procedure:

   1. Perform the listed tests and launch a DoS attack towards the
      controller while the trial is running.

   Note:

   DoS attacks can be launched on one of the following interfaces.

   a. Northbound (e.g., Query for flow entries continuously on the
      northbound interface)
   b. Management (e.g., Ping requests to the controller's management
      interface)
   c. Southbound (e.g., TCP SYN messages on the southbound interface)

Measurement:

   Measurement MUST be done as per the equation defined in the
   corresponding test's measurement section.

Reporting Format:

   The DoS Attacks Handling results MUST be reported in the format of a
   table with a column for each of the below parameters and a row for

   - Network Re-Provisioning Time

   - Forward Direction Packet Loss

   - Reverse Direction Packet Loss
6. References

6.1. Normative References

   [RFC2119]  S. Bradner, "Key words for use in RFCs to Indicate
              Requirement Levels", RFC 2119, March 1997.

   [RFC8174]  B. Leiba, "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", RFC 8174, May 2017.

   [I-D.sdn-controller-benchmark-term]  Bhuvaneswaran.V, Anton Basil,
              Mark.T, Vishwas Manral, Sarah Banks, "Terminology for
              Benchmarking SDN Controller Performance",
              draft-ietf-bmwg-sdn-controller-benchmark-term-08
              (Work in progress), February 25, 2018.

6.2. Informative References

   [OpenFlow Switch Specification]  ONF, "OpenFlow Switch
              Specification" Version 1.4.0 (Wire Protocol 0x05),
              October 14, 2013.
7. IANA Considerations

   This document does not have any IANA requests.

8. Security Considerations

   Special capabilities SHOULD NOT exist in the controller specifically
   for benchmarking purposes. Any implications for network security
   arising from the controller SHOULD be identical in the lab and in
   production networks.

9. Acknowledgments

   The authors would like to thank the following individuals for
   providing their valuable comments to the earlier versions of this
   document: Al Morton (AT&T), Sandeep Gangadharan (HP), M. Georgescu
   (NAIST), Andrew McGregor (Google), Scott Bradner, Jay Karthik
   (Cisco), Ramakrishnan (Dell), Khasanov Boris (Huawei), and Brian
   Castelli (Spirent).

   This document was prepared using 2-Word-v2.0.template.dot.
Appendix A. Example Test Topology

A.1. Leaf-Spine Topology

          +------+            +------+
          | SDN  |            | SDN  |          (Spine)
          | Node | ..         | Node |
          +------+            +------+
             / \                 / \
            /   \               /   \
       l1  /     \             /     \  ln
          /       \           /       \
      +--------+        +--------+
      |  SDN   |        |  SDN   |
      |  Node  | ..     |  Node  |              (Leaf)
      +--------+        +--------+
Appendix B. Benchmarking Methodology using OpenFlow Controllers

   This section gives an overview of the OpenFlow protocol and provides
   a test methodology to benchmark SDN controllers supporting OpenFlow