Internet-Draft                                Bhuvaneswaran Vengainathan
Network Working Group                                        Anton Basil
Intended Status: Informational                        Veryx Technologies
Expires: April 01, 2018                                   Mark Tassinari
                                                         Hewlett-Packard
                                                          Vishwas Manral
                                                                Nano Sec
                                                             Sarah Banks
                                                          VSS Monitoring
                                                        October 01, 2017

        Benchmarking Methodology for SDN Controller Performance
            draft-ietf-bmwg-sdn-controller-benchmark-meth-05
Abstract

   This document defines the methodologies for benchmarking control
   plane performance of SDN controllers. Terminology related to
   benchmarking SDN controllers is described in the companion
   terminology document. SDN controllers have been implemented with
   many varying designs in order to achieve their intended network
   functionality. Hence, the authors have taken the approach of
   considering an SDN controller as a black box, defining the
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF). Note that other groups may also distribute
   working documents as Internet-Drafts. The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 01, 2018.
Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1. Introduction
   2. Scope
   3. Test Setup
      3.1. Test setup - Controller working in Standalone Mode
      3.2. Test setup - Controller working in Cluster Mode
   4. Test Considerations
      4.1. Network Topology
      4.2. Test Traffic
      4.3. Test Emulator Requirements
      4.4. Connection Setup
      4.5. Measurement Point Specification and Recommendation
      4.6. Connectivity Recommendation
      4.7. Test Repeatability
   5. Benchmarking Tests
      5.1. Performance
         5.1.1. Network Topology Discovery Time
         5.1.2. Asynchronous Message Processing Time
         5.1.3. Asynchronous Message Processing Rate
         5.1.4. Reactive Path Provisioning Time
         5.1.5. Proactive Path Provisioning Time
         5.1.6. Reactive Path Provisioning Rate
         5.1.7. Proactive Path Provisioning Rate
         5.1.8. Network Topology Change Detection Time
      5.2. Scalability
         5.2.1. Control Session Capacity
         5.2.2. Network Discovery Size
         5.2.3. Forwarding Table Capacity
      5.3. Security
         5.3.1. Exception Handling
         5.3.2. Denial of Service Handling
      5.4. Reliability
         5.4.1. Controller Failover Time
         5.4.2. Network Re-Provisioning Time
   6. References
      6.1. Normative References
      6.2. Informative References
   7. IANA Considerations
   8. Security Considerations
   9. Acknowledgments
   Appendix A. Example Test Topologies
      A.1. Leaf-Spine Topology - Three Tier Network Architecture
      A.2. Leaf-Spine Topology - Two Tier Network Architecture
   Appendix B. Benchmarking Methodology using OpenFlow Controllers
      B.1. Protocol Overview
      B.2. Messages Overview
      B.3. Connection Overview
      B.4. Performance Benchmarking Tests
         B.4.1. Network Topology Discovery Time
         B.4.2. Asynchronous Message Processing Time
         B.4.3. Asynchronous Message Processing Rate
         B.4.4. Reactive Path Provisioning Time
         B.4.5. Proactive Path Provisioning Time
         B.4.6. Reactive Path Provisioning Rate
         B.4.7. Proactive Path Provisioning Rate
         B.4.8. Network Topology Change Detection Time
      B.5. Scalability
         B.5.1. Control Sessions Capacity
         B.5.2. Network Discovery Size
         B.5.3. Forwarding Table Capacity
      B.6. Security
         B.6.1. Exception Handling
         B.6.2. Denial of Service Handling
      B.7. Reliability
         B.7.1. Controller Failover Time
         B.7.2. Network Re-Provisioning Time
   Authors' Addresses
1. Introduction

   This document provides generic methodologies for benchmarking SDN
   controller performance. An SDN controller may support many
   northbound and southbound protocols, implement a wide range of
   applications, and work solely, or as a group to achieve the desired
   functionality. This document considers an SDN controller as a black
   box, regardless of design and implementation. The tests defined in
   the document can be used to benchmark SDN controller for
   SDN controllers' performance.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.
2. Scope

   This document defines methodology to measure the networking metrics
   of SDN controllers. For the purpose of this memo, the SDN controller
   is a function that manages and controls Network Devices. Any SDN
   controller without a control capability is out of scope for this
   memo. The tests defined in this document enable benchmarking of SDN
   Controllers in two ways; as a standalone controller and as a cluster
   of homogeneous controllers. These tests are recommended for
   execution in lab environments rather than in live network
   deployments. Performance benchmarking of a federation of controllers
   is beyond the scope of this document.

3. Test Setup

   The tests defined in this document enable measurement of an SDN
   controller's performance in standalone mode and cluster mode. This
   section defines common reference topologies that are later referred
   to in individual tests (additional forwarding plane topologies are
   provided in Appendix A).
3.1. Test setup - Controller working in Standalone Mode

     +-----------------------------------------------------------+
     |               Application Plane Test Emulator              |
     |                                                            |
     |        +-----------------+       +-------------+           |
     |        |   Application   |       |   Service   |           |
     |        +-----------------+       +-------------+           |
     |                                                            |
     +-----------------------------+(I2)-------------------------+
      running.

   2. Establish the network connections between controller and Network
      Devices.

   3. Record the time for the first discovery message (Tm1) received
      from the controller at forwarding plane test emulator interface
      I1.

   4. Query the controller every 3 seconds to obtain the discovered
      network topology information through the northbound interface or
      the management interface and compare it with the deployed network
      topology information.

   5. Stop the trial when the discovered topology information matches
      the deployed network topology, or when the discovered topology
      information for 3 consecutive queries returns the same details.

   6. Record the time of the last discovery message (Tmn) sent to the
      controller from the forwarding plane test emulator interface (I1)
      when the trial completed successfully (e.g., the topology
      matches).
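   The stop condition in steps 4 and 5 above can be sketched as
   follows. This is an illustrative Python fragment, not part of the
   methodology; `get_discovered_topology` stands in for a hypothetical
   helper that performs the northbound/management query.

```python
import time

def wait_for_topology_discovery(get_discovered_topology, deployed_topology,
                                poll_interval=3.0, stable_queries=3,
                                timeout=120.0):
    """Poll the controller until the discovered topology matches the
    deployed one, or the same (possibly partial) topology is returned
    for `stable_queries` consecutive queries. Returns the final
    discovered topology, or None if the trial duration expires."""
    history = []
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        topo = get_discovered_topology()  # hypothetical northbound query
        if topo == deployed_topology:
            return topo                   # exact match: stop the trial
        history.append(topo)
        if (len(history) >= stable_queries and
                all(t == history[-1] for t in history[-stable_queries:])):
            return topo                   # stable for 3 queries: stop
        time.sleep(poll_interval)
    return None
```

   A real harness would compare node/link sets obtained from the
   controller's topology API rather than plain Python objects.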
   Measurement:

      Topology Discovery Time Tr1 = Tmn - Tm1.

                                        Tr1 + Tr2 + Tr3 .. Trn
      Average Topology Discovery Time = ----------------------
                                            Total Trials
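   As a worked illustration of the formulas above (illustrative Python
   only; timestamps are assumed to be in seconds):

```python
def topology_discovery_time(tm1, tmn):
    """Tr = Tmn - Tm1 for one trial."""
    return tmn - tm1

def average_topology_discovery_time(trial_times):
    """(Tr1 + Tr2 + ... + Trn) / Total Trials."""
    return sum(trial_times) / len(trial_times)
```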
   Reporting Format:

   The Topology Discovery Time results MUST be reported in the format
   of a table, with a row for each successful iteration. The last row
   of the table indicates the average Topology Discovery Time.

   If this test is repeated with varying number of nodes over the same
   topology, the results SHOULD be reported in the form of a graph. The
   X coordinate SHOULD be the Number of nodes (N), the Y coordinate
   Prerequisite:

   1. The controller MUST have successfully completed the network
      topology discovery for the connected Network Devices.

   Procedure:

   1. Generate asynchronous messages from every connected Network
      Device, to the SDN controller, one at a time in series from the
      forwarding plane test emulator for the trial duration.

   2. Record every request transmit (T1) timestamp and the
      corresponding response (R1) received timestamp at the forwarding
      plane test emulator interface (I1) for every successful message
      exchange.
   Measurement:

                                                 (R1-T1) + (R2-T2)..(Rn-Tn)
      Asynchronous Message Processing Time Tr1 = --------------------------
                                                            Nrx

      Where Nrx is the total number of successful messages exchanged

                                                     Tr1 + Tr2 + Tr3..Trn
      Average Asynchronous Message Processing Time = --------------------
                                                         Total Trials
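   The per-trial formula above can be sketched as follows (illustrative
   Python only; each sample is a (transmit, response) timestamp pair
   for one successful exchange):

```python
def async_message_processing_time(samples):
    """Per-trial mean: sum of (Ri - Ti) over all successful
    exchanges, divided by Nrx (the number of exchanges)."""
    nrx = len(samples)
    return sum(r - t for t, r in samples) / nrx
```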
   Reporting Format:

   The Asynchronous Message Processing Time results MUST be reported in
   the format of a table with a row for each iteration. The last row of
   the table indicates the average Asynchronous Message Processing
   Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Successful
   If this test is repeated with the same number of nodes using
   different topologies, the results SHOULD be reported in the form of
   a graph. The X coordinate SHOULD be the Topology Type, the Y
   coordinate SHOULD be the average Asynchronous Message Processing
   Time.
5.1.3. Asynchronous Message Processing Rate

   Objective:

   Measure the number of responses to asynchronous messages (such as
   new flow arrival notification message, etc.) for which the
   controller(s) performed processing and replied with a valid and
   productive (non-trivial) response message.

   This test will measure two benchmarks on Asynchronous Message
   Processing Rate using a single procedure. The two benchmarks are
   (see section 2.3.1.3 of [I-D.sdn-controller-benchmark-term]):

   1. Loss-free Asynchronous Message Processing Rate

   2. Maximum Asynchronous Message Processing Rate

   The two benchmarks are determined through a series of trials where
   messages are sent to the controller(s), and the responses from the
   controller(s) are counted over the trial duration. The message
   response rate and the message loss ratio are calculated for each
   trial.
   Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.
   Prerequisite:

   1. The controller(s) MUST have successfully completed the network
      topology discovery for the connected Network Devices.

   2. Choose and record the Trial Duration (Td), the sending rate step-
      size (STEP), the tolerance on equality for two consecutive trials
      (P%), and the maximum possible message sending rate (Ntx1/Td).
   Procedure:

   1. Generate asynchronous messages continuously at the maximum
      possible rate on the established connections from all the
      emulated/simulated Network Devices for the given Trial Duration
      (Td).

   2. Record the total number of responses received from the controller
      (Nrx1) as well as the number of messages sent (Ntx1) to the
      controller within the trial duration (Td).

   3. Calculate the Asynchronous Message Processing Rate (Tr1) and the
      Message Loss Ratio (Lr1). Ensure that the controller(s) have
      returned to normal operation.

   4. Repeat the trial by reducing the asynchronous message sending
      rate used in the last trial by the STEP size.

   5. Continue repeating the trials and reducing the sending rate until
      both the maximum value of Nrxn and the Nrxn corresponding to zero
      loss ratio have been found.

   6. The trials corresponding to the benchmark levels MUST be repeated
      using the same asynchronous message rates until the responses
      received from the controller are equal (+/-P%) for two
      consecutive trials.

   7. Record the number of responses received from the controller
      (Nrxn) as well as the number of messages sent (Ntxn) to the
      controller in the last trial.
   Measurement:

                                                  Nrxn
      Asynchronous Message Processing Rate Trn = ------
                                                    Td

      Maximum Asynchronous Message Processing Rate = MAX(Trn) for all n

                                                  Nrxn
      Asynchronous Message Loss Ratio Lrn = 1 - ------
                                                  Ntxn

      Loss-free Asynchronous Message Processing Rate = MAX(Trn) given
      Lrn = 0
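   The step-down trial procedure and the two derived benchmarks can be
   sketched as follows. This is an illustrative Python fragment;
   `run_trial` is a hypothetical driver that offers messages at the
   given rate for Td seconds and returns (Ntx, Nrx), and the +/-P%
   repeatability check of step 6 is omitted for brevity.

```python
def benchmark_async_rates(run_trial, max_rate, step, td):
    """Step-down search for the Maximum and Loss-free Asynchronous
    Message Processing Rates. Rates are in messages/second."""
    max_processing_rate = 0.0
    loss_free_rate = None
    rate = max_rate
    while rate > 0:
        ntx, nrx = run_trial(rate)       # one trial at this offered rate
        trn = nrx / td                   # Asynchronous Message Processing Rate
        lrn = 1.0 - nrx / ntx            # Asynchronous Message Loss Ratio
        max_processing_rate = max(max_processing_rate, trn)
        if lrn == 0.0:
            # The first zero-loss trial in a descending sweep yields the
            # largest loss-free rate.
            loss_free_rate = trn
            break
        rate -= step
    return max_processing_rate, loss_free_rate
```

   In practice the STEP size trades search time against the precision
   of the loss-free benchmark; a binary search over the offered rate is
   an equally valid refinement.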
   Reporting Format:

   The Asynchronous Message Processing Rate results MUST be reported in
   the format of a table with a row for each trial.

   The table should report the following information in addition to the
   configuration parameters captured in section 5, with columns:

   - Offered rate (Ntxn/Td)

   - Asynchronous Message Processing Rate (Trn)

   - Loss Ratio (Lr)

   - Benchmark at this trial (blank for none, Maximum, Loss-Free)

   The results MAY be presented in the form of a graph. The X axis
   SHOULD be the Offered rate, and dual Y axes would represent
   Asynchronous Message Processing Rate and Loss Ratio, respectively.

   If this test is repeated with varying number of nodes over the same
   topology, the results SHOULD be reported in the form of a graph. The
   X axis SHOULD be the Number of nodes (N), the Y axis SHOULD be the
   Asynchronous Message Processing Rate. Both the Maximum and the Loss-
   Free Rates should be plotted for each N.

   If this test is repeated with the same number of nodes over
   different topologies, the results SHOULD be reported in the form of
   a graph. The X axis SHOULD be the Topology Type, the Y axis SHOULD
   be the Asynchronous Message Processing Rate. Both the Maximum and
   the Loss-Free Rates should be plotted for each topology.
5.1.4. Reactive Path Provisioning Time

   Objective:

   The time taken by the controller to setup a path reactively between
   source and destination node, defined as the interval starting with
   the first flow provisioning request message received by the
   controller(s) at its Southbound interface, ending with the last flow
   provisioning response message sent from the controller(s) at its
   to make the forwarding decision while paving the entire path.

   Procedure:

   1. Send a single traffic stream from the test traffic generator TP1
      to test traffic generator TP2.

   2. Record the time of the first flow provisioning request message
      sent to the controller (Tsf1) from the Network Device at the
      forwarding plane test emulator interface (I1).

   3. Wait for the arrival of the first traffic frame at the Traffic
      Endpoint TP2 or the expiry of the trial duration (Td).

   4. Record the time of the last flow provisioning response message
      received from the controller (Tdf1) to the Network Device at the
      forwarding plane test emulator interface (I1).
   Measurement:

      Reactive Path Provisioning Time Tr1 = Tdf1 - Tsf1.

                                                Tr1 + Tr2 + Tr3 .. Trn
      Average Reactive Path Provisioning Time = ----------------------
                                                     Total Trials
   Reporting Format:

   The Reactive Path Provisioning Time results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Average Reactive Path Provisioning Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.
skipping to change at page 16, line 33 skipping to change at page 17, line 16
'drop'. 'drop'.
Procedure: Procedure:
1. Send a single traffic stream from test traffic generator TP1 to 1. Send a single traffic stream from test traffic generator TP1 to
TP2. TP2.
2. Install the flow entries to reach from test traffic generator TP1 2. Install the flow entries to reach from test traffic generator TP1
to the test traffic generator TP2 through controller's northbound to the test traffic generator TP2 through controller's northbound
or management interface. or management interface.
3. Wait for the arrival of first traffic frame at the test traffic 3. Wait for the arrival of first traffic frame at the test traffic
generator TP2 or the expiry of test duration (Td). generator TP2 or the expiry of trial duration (Td).
4. Record the time when the proactive flow is provisioned in the 4. Record the time when the proactive flow is provisioned in the
Controller (Tsf1) at the management plane test emulator interface Controller (Tsf1) at the management plane test emulator interface
I2. I2.
5. Record the time of the last flow provisioning message received 5. Record the time of the last flow provisioning message received
from the controller (Tdf1) at the forwarding plane test emulator from the controller (Tdf1) at the forwarding plane test emulator
interface I1. interface I1.
Measurement: Measurement:
Proactive Flow Provisioning Time Tr1 = Tdf1-Tsf1. Proactive Flow Provisioning Time Tr1 = Tdf1-Tsf1.
Tr1 + Tr2 + Tr3 .. Trn Tr1 + Tr2 + Tr3 .. Trn
Average Proactive Path Provisioning Time = ----------------------- Average Proactive Path Provisioning Time = -----------------------
Total Test Iterations Total Trials
Reporting Format: Reporting Format:
The Proactive Path Provisioning Time results MUST be reported in the The Proactive Path Provisioning Time results MUST be reported in the
format of a table with a row for each iteration. The last row of the format of a table with a row for each iteration. The last row of the
table indicates the Average Proactive Path Provisioning Time. table indicates the Average Proactive Path Provisioning Time.
The report should capture the following information in addition to The report should capture the following information in addition to
the configuration parameters captured in section 5. the configuration parameters captured in section 5.
skipping to change at page 17, line 25 skipping to change at page 18, line 5
5.1.6. Reactive Path Provisioning Rate 5.1.6. Reactive Path Provisioning Rate
Objective: Objective:
The maximum number of independent paths a controller can The maximum number of independent paths a controller can
concurrently establish between source and destination nodes concurrently establish between source and destination nodes
reactively, defined as the number of paths provisioned by the reactively, defined as the number of paths provisioned by the
controller(s) at its Southbound interface for the flow provisioning controller(s) at its Southbound interface for the flow provisioning
requests received for path provisioning at its Southbound interface requests received for path provisioning at its Southbound interface
between the start of the test and the expiry of given test duration. between the start of the test and the expiry of given trial
duration.
Reference Test Setup: Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1 The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document in combination with Appendix A. or section 3.2 of this document in combination with Appendix A.
Prerequisite: Prerequisite:
1. The controller MUST contain the network topology information for 1. The controller MUST contain the network topology information for
the deployed network topology. the deployed network topology.
skipping to change at page 17, line 51 skipping to change at page 18, line 32
is configured to 'send to controller'. is configured to 'send to controller'.
4. Ensure that each Network Device in a path requires the controller 4. Ensure that each Network Device in a path requires the controller
to make the forwarding decision while provisioning the entire to make the forwarding decision while provisioning the entire
path. path.
Procedure: Procedure:
1. Send traffic with unique source and destination addresses from 1. Send traffic with unique source and destination addresses from
test traffic generator TP1. test traffic generator TP1.
2. Record total number of unique traffic frames (Ndf) received at the 2. Record total number of unique traffic frames (Ndf) received at the
test traffic generator TP2 within the test duration (Td). test traffic generator TP2 within the trial duration (Td).
Measurement: Measurement:
Ndf Ndf
Reactive Path Provisioning Rate Tr1 = ------ Reactive Path Provisioning Rate Tr1 = ------
Td Td
Tr1 + Tr2 + Tr3 .. Trn Tr1 + Tr2 + Tr3 .. Trn
Average Reactive Path Provisioning Rate = ------------------------ Average Reactive Path Provisioning Rate = ------------------------
Total Test Iterations Total Trials
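A minimal sketch of the rate calculation, again for illustration only; the per-trial frame counts and the 10-second trial duration below are assumed example values:

```python
def reactive_path_provisioning_rate(ndf, td):
    # Per-trial rate: unique frames received at TP2 (Ndf) / trial duration Td
    return ndf / td

def average_rate(ndf_per_trial, td):
    # Average over all trials: (Tr1 + Tr2 + ... + Trn) / n
    rates = [reactive_path_provisioning_rate(n, td) for n in ndf_per_trial]
    return sum(rates) / len(rates)

# Hypothetical per-trial unique frame counts with a 10-second trial duration
avg = average_rate([1000, 1100, 1050], td=10.0)
```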
Reporting Format: Reporting Format:
The Reactive Path Provisioning Rate results MUST be reported in the The Reactive Path Provisioning Rate results MUST be reported in the
format of a table with a row for each iteration. The last row of the format of a table with a row for each iteration. The last row of the
table indicates the Average Reactive Path Provisioning Rate. table indicates the Average Reactive Path Provisioning Rate.
The report should capture the following information in addition to The report should capture the following information in addition to
the configuration parameters captured in section 5. the configuration parameters captured in section 5.
skipping to change at page 18, line 37 skipping to change at page 19, line 21
5.1.7. Proactive Path Provisioning Rate 5.1.7. Proactive Path Provisioning Rate
Objective: Objective:
Measure the maximum rate of independent paths a controller can Measure the maximum rate of independent paths a controller can
concurrently establish between source and destination nodes concurrently establish between source and destination nodes
proactively, defined as the number of paths provisioned by the proactively, defined as the number of paths provisioned by the
controller(s) at its Southbound interface for the paths requested in controller(s) at its Southbound interface for the paths requested in
its Northbound interface between the start of the test and the its Northbound interface between the start of the test and the
expiry of given test duration. The measurement is based on expiry of given trial duration. The measurement is based on
dataplane observations of successful path activation. dataplane observations of successful path activation.
Reference Test Setup: Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1 The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document in combination with Appendix A. or section 3.2 of this document in combination with Appendix A.
Prerequisite: Prerequisite:
1. The controller MUST contain the network topology information for 1. The controller MUST contain the network topology information for
skipping to change at page 19, line 24 skipping to change at page 20, line 8
1. Send traffic continuously with unique source and destination 1. Send traffic continuously with unique source and destination
addresses from test traffic generator TP1. addresses from test traffic generator TP1.
2. Install corresponding flow entries to reach from simulated 2. Install corresponding flow entries to reach from simulated
sources at the test traffic generator TP1 to the simulated sources at the test traffic generator TP1 to the simulated
destinations at test traffic generator TP2 through controller's destinations at test traffic generator TP2 through controller's
northbound or management interface. northbound or management interface.
3. Record total number of unique traffic frames received (Ndf) at the 3. Record total number of unique traffic frames received (Ndf) at the
test traffic generator TP2 within the test duration (Td). test traffic generator TP2 within the trial duration (Td).
Measurement: Measurement:
Ndf Ndf
Proactive Path Provisioning Rate Tr1 = ------ Proactive Path Provisioning Rate Tr1 = ------
Td Td
Tr1 + Tr2 + Tr3 .. Trn Tr1 + Tr2 + Tr3 .. Trn
Average Proactive Path Provisioning Rate = ----------------------- Average Proactive Path Provisioning Rate = -----------------------
Total Test Iterations Total Trials
Reporting Format: Reporting Format:
The Proactive Path Provisioning Rate results MUST be reported in the The Proactive Path Provisioning Rate results MUST be reported in the
format of a table with a row for each iteration. The last row of the format of a table with a row for each iteration. The last row of the
table indicates the Average Proactive Path Provisioning Rate. table indicates the Average Proactive Path Provisioning Rate.
The report should capture the following information in addition to The report should capture the following information in addition to
the configuration parameters captured in section 5. the configuration parameters captured in section 5.
skipping to change at page 20, line 26 skipping to change at page 21, line 11
The test SHOULD use one of the test setups described in section 3.1 The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document in combination with Appendix A. or section 3.2 of this document in combination with Appendix A.
Prerequisite: Prerequisite:
1. The controller MUST have successfully discovered the network 1. The controller MUST have successfully discovered the network
topology information for the deployed network topology. topology information for the deployed network topology.
2. The periodic network discovery operation should be configured to 2. The periodic network discovery operation should be configured to
twice the Test duration (Td) value. twice the Trial duration (Td) value.
Procedure: Procedure:
1. Trigger a topology change event by bringing down an active 1. Trigger a topology change event by bringing down an active
Network Device in the topology. Network Device in the topology.
2. Record the time when the first topology change notification is 2. Record the time when the first topology change notification is
sent to the controller (Tcn) at the forwarding plane test emulator sent to the controller (Tcn) at the forwarding plane test emulator
interface (I1). interface (I1).
3. Stop the test when the controller sends the first topology re- 3. Stop the trial when the controller sends the first topology re-
discovery message to the Network Device or the expiry of test discovery message to the Network Device or the expiry of trial
interval (Td). duration (Td).
4. Record the time when the first topology re-discovery message is 4. Record the time when the first topology re-discovery message is
received from the controller (Tcd) at the forwarding plane test received from the controller (Tcd) at the forwarding plane test
emulator interface (I1) emulator interface (I1)
Measurement: Measurement:
Network Topology Change Detection Time Tr1 = Tcd-Tcn. Network Topology Change Detection Time Tr1 = Tcd-Tcn.
Tr1 + Tr2 + Tr3 .. Trn Tr1 + Tr2 + Tr3 .. Trn
Average Network Topology Change Detection Time = ------------------ Average Network Topology Change Detection Time = ------------------
Total Test Iterations Total Trials
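The per-trial detection time, including the case where the trial duration expires before any re-discovery message arrives, can be sketched as below; this is an illustrative helper, not a normative definition, and the timestamp values are assumed:

```python
def topology_change_detection_time(tcn, tcd, td):
    # Tr = Tcd - Tcn, where Tcn is the first topology change
    # notification and Tcd the first topology re-discovery message.
    # A trial yields no result if the re-discovery message does not
    # arrive before the trial duration (Td) expires.
    if tcd is None or (tcd - tcn) > td:
        return None
    return tcd - tcn

# Hypothetical timestamps (seconds) recorded at interface I1
tr = topology_change_detection_time(tcn=1.0, tcd=1.2, td=5.0)
```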
Reporting Format: Reporting Format:
The Network Topology Change Detection Time results MUST be reported The Network Topology Change Detection Time results MUST be reported
in the format of a table with a row for each iteration. The last in the format of a table with a row for each iteration. The last
row of the table indicates the average Network Topology Change Time. row of the table indicates the average Network Topology Change Time.
5.2. Scalability 5.2. Scalability
5.2.1. Control Session Capacity 5.2.1. Control Session Capacity
skipping to change at page 21, line 32 skipping to change at page 22, line 26
Reference Test Setup: Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1 The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document in combination with Appendix A. or section 3.2 of this document in combination with Appendix A.
Procedure: Procedure:
1. Establish control connection with controller from every Network 1. Establish control connection with controller from every Network
Device emulated in the forwarding plane test emulator. Device emulated in the forwarding plane test emulator.
2. Stop the test when the controller starts dropping the control 2. Stop the trial when the controller starts dropping the control
connections. connections.
3. Record the number of successful connections established with the 3. Record the number of successful connections established with the
controller (CCn) at the forwarding plane test emulator. controller (CCn) at the forwarding plane test emulator.
Measurement: Measurement:
Control Sessions Capacity = CCn. Control Sessions Capacity = CCn.
Reporting Format: Reporting Format:
skipping to change at page 22, line 34 skipping to change at page 23, line 24
information either through controller's management interface or information either through controller's management interface or
northbound interface. northbound interface.
Procedure: Procedure:
1. Establish the network connections between controller and network 1. Establish the network connections between controller and network
nodes. nodes.
2. Query the controller for the discovered network topology 2. Query the controller for the discovered network topology
information and compare it with the deployed network topology information and compare it with the deployed network topology
information. information.
3. 3a. Increase the number of nodes by 1 when the comparison is 3. Increase the number of nodes by 1 when the comparison is
successful and repeat the test. successful and repeat the trial.
4. 3b. Decrease the number of nodes by 1 when the comparison fails 4. Decrease the number of nodes by 1 when the comparison fails and
and repeat the test. repeat the trial.
5. Continue the test until the comparison of step 3b is successful. 5. Continue the trial until the comparison of step 4 is successful.
6. Record the number of nodes for the last iteration (Ns) where the 6. Record the number of nodes for the last trial (Ns) where the
topology comparison was successful. topology comparison was successful.
Measurement: Measurement:
Network Discovery Size = Ns. Network Discovery Size = Ns.
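The grow/shrink procedure above can be sketched as a simple loop; `topology_matches` is a hypothetical callback that deploys a topology of n nodes, queries the controller, and compares the discovered topology with the deployed one:

```python
def network_discovery_size(topology_matches, start=1, limit=10000):
    # Step 3: grow the emulated topology one node at a time while the
    # controller's discovered topology matches the deployed one.
    n = start
    while n < limit and topology_matches(n):
        n += 1
    # Steps 4-5: on a mismatch, shrink one node at a time until the
    # comparison succeeds again.
    while n > 0 and not topology_matches(n):
        n -= 1
    # Step 6: Ns, the largest size successfully discovered
    return n

# Hypothetical controller that can discover topologies of up to 57 nodes
ns = network_discovery_size(lambda n: n <= 57)
```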
Reporting Format: Reporting Format:
The Network Discovery Size results MUST be reported in addition to The Network Discovery Size results MUST be reported in addition to
the configuration parameters captured in section 5. the configuration parameters captured in section 5.
skipping to change at page 23, line 36 skipping to change at page 24, line 29
Procedure: Procedure:
Reactive Flow Provisioning Mode: Reactive Flow Provisioning Mode:
1. Send bi-directional traffic continuously with unique source and/or 1. Send bi-directional traffic continuously with unique source and/or
destination addresses from test traffic generators TP1 and TP2 at destination addresses from test traffic generators TP1 and TP2 at
the asynchronous message processing rate of controller. the asynchronous message processing rate of controller.
2. Query the controller at a regular interval (e.g., 5 seconds) for 2. Query the controller at a regular interval (e.g., 5 seconds) for
the number of learnt flow entries from its northbound interface. the number of learnt flow entries from its northbound interface.
3. Stop the test when the retrieved value is constant for three 3. Stop the trial when the retrieved value is constant for three
consecutive iterations and record the value received from the last consecutive iterations and record the value received from the last
query (Nrp). query (Nrp).
Proactive Flow Provisioning Mode: Proactive Flow Provisioning Mode:
1. Install unique flows continuously through controller's northbound 1. Install unique flows continuously through controller's northbound
or management interface until a failure response is received from or management interface until a failure response is received from
the controller. the controller.
2. Record the total number of successful responses (Nrp). 2. Record the total number of successful responses (Nrp).
skipping to change at page 26, line 34 skipping to change at page 27, line 26
or section 3.2 of this document in combination with Appendix A. or section 3.2 of this document in combination with Appendix A.
Prerequisite: Prerequisite:
This test MUST be performed after obtaining the baseline measurement This test MUST be performed after obtaining the baseline measurement
results for the above tests. results for the above tests.
Procedure: Procedure:
1. Perform the listed tests and launch a DoS attack towards 1. Perform the listed tests and launch a DoS attack towards
controller while the test is running. controller while the trial is running.
Note: Note:
DoS attacks can be launched on one of the following interfaces. DoS attacks can be launched on one of the following interfaces.
a. Northbound (e.g., Sending a huge number of requests on a. Northbound (e.g., Sending a huge number of requests on
northbound interface) northbound interface)
b. Management (e.g., Ping requests to controller's management b. Management (e.g., Ping requests to controller's management
interface) interface)
c. Southbound (e.g., TCP SYN messages on southbound interface) c. Southbound (e.g., TCP SYN messages on southbound interface)
skipping to change at page 28, line 13 skipping to change at page 28, line 46
test traffic generator TP2. test traffic generator TP2.
Procedure: Procedure:
1. Send uni-directional traffic continuously with incremental 1. Send uni-directional traffic continuously with incremental
sequence number and source addresses from test traffic generator sequence number and source addresses from test traffic generator
TP1 at the rate that the controller processes without any drops. TP1 at the rate that the controller processes without any drops.
2. Ensure that there are no packet drops observed at the test traffic 2. Ensure that there are no packet drops observed at the test traffic
generator TP2. generator TP2.
3. Bring down the active controller. 3. Bring down the active controller.
4. Stop the test when the first frame is received on TP2 after 4. Stop the trial when the first frame is received on TP2 after
failover operation. failover operation.
5. Record the time at which the last valid frame is received (T1) 5. Record the time at which the last valid frame is received (T1)
at test traffic generator TP2 before the sequence error and the at test traffic generator TP2 before the sequence error and the
first valid frame is received (T2) after the sequence error at first valid frame is received (T2) after the sequence error at
TP2. TP2.
Measurement: Measurement:
Controller Failover Time = (T2 - T1) Controller Failover Time = (T2 - T1)
Packet Loss = Number of missing packet sequences. Packet Loss = Number of missing packet sequences.
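Both measurements can be derived from a single ordered capture of (timestamp, sequence number) pairs at TP2; the sketch below is illustrative only, and the capture values are assumed:

```python
def failover_metrics(frames):
    # frames: (timestamp, sequence_number) tuples received at TP2, in
    # arrival order. Failover Time is T2 - T1 across the first gap in
    # sequence numbers; Packet Loss is the count of missing sequences.
    for (t1, s1), (t2, s2) in zip(frames, frames[1:]):
        if s2 != s1 + 1:
            return t2 - t1, s2 - s1 - 1
    return 0.0, 0

# Hypothetical capture: sequences 4-6 are lost during the failover
failover_time, loss = failover_metrics(
    [(0.000, 1), (0.001, 2), (0.002, 3), (0.250, 7)])
```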
skipping to change at page 29, line 24 skipping to change at page 30, line 16
of test traffic generators TP1 and TP2. of test traffic generators TP1 and TP2.
3. Ensure that the controller does not pre-provision the alternate 3. Ensure that the controller does not pre-provision the alternate
path in the emulated Network Devices at the forwarding plane test path in the emulated Network Devices at the forwarding plane test
emulator. emulator.
Procedure: Procedure:
1. Send bi-directional traffic continuously with unique sequence 1. Send bi-directional traffic continuously with unique sequence
number from TP1 and TP2. number from TP1 and TP2.
2. Bring down a link or switch in the traffic path. 2. Bring down a link or switch in the traffic path.
3. Stop the test after receiving the first frame after network re- 3. Stop the trial after receiving the first frame after network re-
convergence. convergence.
4. Record the time of last received frame prior to the frame loss at 4. Record the time of last received frame prior to the frame loss at
TP2 (TP2-Tlfr) and the time of first frame received after the TP2 (TP2-Tlfr) and the time of first frame received after the
frame loss at TP2 (TP2-Tffr). There must be a gap in sequence frame loss at TP2 (TP2-Tffr). There must be a gap in sequence
numbers of these frames. numbers of these frames.
5. Record the time of last received frame prior to the frame loss at 5. Record the time of last received frame prior to the frame loss at
TP1 (TP1-Tlfr) and the time of first frame received after the TP1 (TP1-Tlfr) and the time of first frame received after the
frame loss at TP1 (TP1-Tffr). frame loss at TP1 (TP1-Tffr).
Measurement: Measurement:
skipping to change at page 30, line 24 skipping to change at page 31, line 14
- Network Re-Provisioning Time - Network Re-Provisioning Time
- Forward Direction Packet Loss - Forward Direction Packet Loss
- Reverse Direction Packet Loss - Reverse Direction Packet Loss
6. References 6. References
6.1. Normative References 6.1. Normative References
[RFC2544] S. Bradner, J. McQuaid, "Benchmarking Methodology for
Network Interconnect Devices",RFC 2544, March 1999.
[RFC2330] V. Paxson, G. Almes, J. Mahdavi, M. Mathis,
"Framework for IP Performance Metrics",RFC 2330,
May 1998.
[RFC6241] R. Enns, M. Bjorklund, J. Schoenwaelder, A. Bierman,
"Network Configuration Protocol (NETCONF)",RFC 6241,
July 2011.
[RFC6020] M. Bjorklund, "YANG - A Data Modeling Language for
the Network Configuration Protocol (NETCONF)", RFC 6020,
October 2010
[RFC5440] JP. Vasseur, JL. Le Roux, "Path Computation Element (PCE)
Communication Protocol (PCEP)", RFC 5440, March 2009.
[OpenFlow Switch Specification] ONF,"OpenFlow Switch Specification"
Version 1.4.0 (Wire Protocol 0x05), October 14, 2013.
[I-D.sdn-controller-benchmark-term] Bhuvaneswaran.V, Anton Basil, [I-D.sdn-controller-benchmark-term] Bhuvaneswaran.V, Anton Basil,
Mark.T, Vishwas Manral, Sarah Banks, "Terminology for Mark.T, Vishwas Manral, Sarah Banks, "Terminology for
Benchmarking SDN Controller Performance", Benchmarking SDN Controller Performance",
draft-ietf-bmwg-sdn-controller-benchmark-term-04 draft-ietf-bmwg-sdn-controller-benchmark-term-05
(Work in progress), June 28, 2017 (Work in progress), October 01, 2017
6.2. Informative References 6.2. Informative References
[I-D.i2rs-architecture] A. Atlas, J. Halpern, S. Hares, D. Ward, [OpenFlow Switch Specification] ONF,"OpenFlow Switch Specification"
T. Nadeau, "An Architecture for the Interface to the Version 1.4.0 (Wire Protocol 0x05), October 14, 2013.
Routing System", draft-ietf-i2rs-architecture-09
(Work in progress), March 6, 2015
[OpenContrail] Ankur Singla, Bruno Rijsman, "OpenContrail
Architecture Documentation",
http://opencontrail.org/opencontrail-architecture-documentation
[OpenDaylight] OpenDaylight Controller:Architectural Framework,
https://wiki.opendaylight.org/view/OpenDaylight_Controller
7. IANA Considerations 7. IANA Considerations
This document does not have any IANA requests. This document does not have any IANA requests.
8. Security Considerations 8. Security Considerations
Benchmarking tests described in this document are limited to the Benchmarking tests described in this document are limited to the
performance characterization of controller in lab environment with performance characterization of controller in lab environment with
isolated network. isolated network.
skipping to change at page 34, line 14 skipping to change at page 34, line 14
Appendix B. Benchmarking Methodology using OpenFlow Controllers Appendix B. Benchmarking Methodology using OpenFlow Controllers
This section gives an overview of OpenFlow protocol and provides This section gives an overview of OpenFlow protocol and provides
test methodology to benchmark SDN controllers supporting OpenFlow test methodology to benchmark SDN controllers supporting OpenFlow
southbound protocol. southbound protocol.
B.1. Protocol Overview B.1. Protocol Overview
OpenFlow is an open standard protocol defined by Open Networking OpenFlow is an open standard protocol defined by Open Networking
Foundation (ONF), used for programming the forwarding plane of Foundation (ONF) [OpenFlow Switch Specification], used for
network switches or routers via a centralized controller. programming the forwarding plane of network switches or routers via
a centralized controller.
B.2. Messages Overview B.2. Messages Overview
OpenFlow protocol supports three message types, namely controller- OpenFlow protocol supports three message types, namely controller-
to-switch, asynchronous and symmetric. to-switch, asynchronous and symmetric.
Controller-to-switch messages are initiated by the controller and Controller-to-switch messages are initiated by the controller and
used to directly manage or inspect the state of the switch. These used to directly manage or inspect the state of the switch. These
messages allow controllers to query/configure the switch (Features, messages allow controllers to query/configure the switch (Features,
Configuration messages), collect information from switch (Read-State Configuration messages), collect information from switch (Read-State
skipping to change at page 35, line 42 skipping to change at page 35, line 42
| rcvd from switch-2| | | rcvd from switch-2| |
|--------------------------->| | |--------------------------->| |
| . | | | . | |
| . | | | . | |
| | | | | |
| PACKET_IN with LLDP| | | PACKET_IN with LLDP| |
| rcvd from switch-n| | | rcvd from switch-n| |
(Tmn)|--------------------------->| | (Tmn)|--------------------------->| |
| | | | | |
| | <Wait for the expiry | | | <Wait for the expiry |
| | of Test Duration (Td)>| | | of Trial duration (Td)>|
| | | | | |
| | Query the controller for| | | Query the controller for|
| | discovered n/w topo.(Di)| | | discovered n/w topo.(Di)|
| |<--------------------------| | |<--------------------------|
| | | | | |
| | <Compare the discovered | | | <Compare the discovered |
| | & offered n/w topology>| | | & offered n/w topology>|
| | | | | |
Legend: Legend:
skipping to change at page 36, line 48 skipping to change at page 36, line 48
| | | | | |
|PACKET_IN with single OFP | | |PACKET_IN with single OFP | |
|match header | | |match header | |
(Tn)|--------------------------->| | (Tn)|--------------------------->| |
| | | | | |
| PACKET_OUT with single OFP | | | PACKET_OUT with single OFP | |
| action header| | | action header| |
(Rn)|<---------------------------| | (Rn)|<---------------------------| |
| | | | | |
|<Wait for the expiry of | | |<Wait for the expiry of | |
|Test Duration> | | |Trial duration> | |
| | | | | |
|<Record the number of | | |<Record the number of | |
|PACKET_INs/PACKET_OUTs | | |PACKET_INs/PACKET_OUTs | |
|Exchanged (Nrx)> | | |Exchanged (Nrx)> | |
| | | | | |
Legend: Legend:
T0,T1, ..Tn are PACKET_IN messages transmit timestamps. T0,T1, ..Tn are PACKET_IN messages transmit timestamps.
R0,R1, ..Rn are PACKET_OUT messages receive timestamps. R0,R1, ..Rn are PACKET_OUT messages receive timestamps.
skipping to change at page 37, line 27 skipping to change at page 37, line 27
The Asynchronous Message Processing Time will be obtained by sum of The Asynchronous Message Processing Time will be obtained by sum of
((R0-T0),(R1-T1)..(Rn - Tn))/ Nrx. ((R0-T0),(R1-T1)..(Rn - Tn))/ Nrx.
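The averaging formula above can be sketched as follows; this is an illustrative helper, and the transmit/receive timestamps are assumed example values:

```python
def async_message_processing_time(tx, rx):
    # Mean PACKET_IN -> PACKET_OUT latency:
    # ((R0-T0) + (R1-T1) + ... + (Rn-Tn)) / Nrx
    return sum(r - t for t, r in zip(tx, rx)) / len(rx)

# Hypothetical PACKET_IN transmit (T) and PACKET_OUT receive (R) times
apt = async_message_processing_time([0.000, 1.000], [0.002, 1.004])
```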
B.4.3. Asynchronous Message Processing Rate B.4.3. Asynchronous Message Processing Rate
Procedure: Procedure:
Network Devices OpenFlow SDN Network Devices OpenFlow SDN
Controller Application Controller Application
| | | | | |
|PACKET_IN with multiple OFP | | |PACKET_IN with single OFP | |
|match headers | | |match headers | |
|--------------------------->| | |--------------------------->| |
| | | | | |
| PACKET_OUT with multiple | | | PACKET_OUT with single | |
| OFP action headers| | | OFP action headers| |
|<---------------------------| | |<---------------------------| |
| | | | | |
|PACKET_IN with multiple OFP | |
|match headers | |
|--------------------------->| |
| | |
| PACKET_OUT with multiple | |
| OFP action headers| |
|<---------------------------| |
| . | | | . | |
| . | | | . | |
| . | | | . | |
| | | | | |
|PACKET_IN with multiple OFP | | |PACKET_IN with single OFP | |
|match headers | | |match headers | |
|--------------------------->| | |--------------------------->| |
| | | | | |
| PACKET_OUT with multiple | | | PACKET_OUT with single | |
| OFP action headers| | | OFP action headers| |
|<---------------------------| | |<---------------------------| |
| | | | | |
|<Wait for the expiry of | | |<Repeat the steps until the | |
|Test Duration> | | |expiry of Trial Duration> | |
| | | | | |
|<Record the number of OFP | | |<Record the number of OFP | |
(Nrx)|action headers rcvd> | | (Ntx1)|match headers sent> | |
| | |
|<Record the number of OFP | |
(Nrx1)|action headers rcvd> | |
| | | | | |
Note: Ntx1 on the initial trials should be greater than Nrx1, and
the trials should be repeated until Nrxn for two consecutive trials
is equal (within +/-P%).
Discussion: Discussion:
The Asynchronous Message Processing Rate will be obtained by This test will measure two benchmarks using a single procedure. 1)
calculating the number of OFP action headers received in all The Maximum Asynchronous Message Processing Rate will be obtained
PACKET_OUT messages during the test duration. by calculating the maximum PACKET_OUTs (Nrxn) received from the
controller(s) across n trials. 2) The Loss-free Asynchronous Message
Processing Rate will be obtained by calculating the maximum
PACKET_OUTs received from the controller(s) when the Loss Ratio
equals zero. The Loss Ratio is obtained by 1 - Nrxn/Ntxn.
B.4.4. Reactive Path Provisioning Time B.4.4. Reactive Path Provisioning Time
Procedure: Procedure:
Test Traffic Test Traffic Network Devices OpenFlow Test Traffic Test Traffic Network Devices OpenFlow
Generator TP1 Generator TP2 Controller Generator TP1 Generator TP2 Controller
| | | | | | | |
| |G-ARP (D1) | | | |G-ARP (D1) | |
| |--------------------->| | | |--------------------->| |
skipping to change at page 41, line 21 skipping to change at page 41, line 21
G-ARP: Gratuitous ARP G-ARP: Gratuitous ARP
D1..Dn: Destination Endpoint 1, Destination Endpoint 2 .... D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
Destination Endpoint n Destination Endpoint n
S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
Endpoint n Endpoint n
Discussion: Discussion:
The Reactive Path Provisioning Rate can be obtained by finding the The Reactive Path Provisioning Rate can be obtained by finding the
total number of frames received at TP2 after the test duration. total number of frames received at TP2 after the trial duration.
B.4.7. Proactive Path Provisioning Rate B.4.7. Proactive Path Provisioning Rate
Procedure: Procedure:
Test Traffic Test Traffic Network Devices OpenFlow SDN Test Traffic Test Traffic Network Devices OpenFlow SDN
Generator TP1 Generator TP2 Controller Application Generator TP1 Generator TP2 Controller Application
| | | | | | | | | |
| |G-ARP (D1..Dn) | | | | |G-ARP (D1..Dn) | | |
| |-------------->| | | | |-------------->| | |
skipping to change at page 42, line 28 skipping to change at page 42, line 28
G-ARP: Gratuitous ARP G-ARP: Gratuitous ARP
D1..Dn: Destination Endpoint 1, Destination Endpoint 2 .... D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
Destination Endpoint n Destination Endpoint n
S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
Endpoint n Endpoint n
Discussion: Discussion:
The Proactive Path Provisioning Rate can be obtained by finding the The Proactive Path Provisioning Rate can be obtained by finding the
total number of frames received at TP2 after the test duration total number of frames received at TP2 after the trial duration.
B.4.8. Network Topology Change Detection Time B.4.8. Network Topology Change Detection Time
Procedure: Procedure:
Network Devices OpenFlow SDN Network Devices OpenFlow SDN
Controller Application Controller Application
| | | | | |
| | <Bring down a link in | | | <Bring down a link in |
| | switch S1>| | | switch S1>|
skipping to change at page 44, line 22 skipping to change at page 44, line 22
| rcvd from switch-2| | | rcvd from switch-2| |
|--------------------------->| | |--------------------------->| |
| . | | | . | |
| . | | | . | |
| | | | | |
| PACKET_IN with LLDP| | | PACKET_IN with LLDP| |
| rcvd from switch-n| | | rcvd from switch-n| |
|--------------------------->| | |--------------------------->| |
| | | | | |
| | <Wait for the expiry | | | <Wait for the expiry |
| | of Test Duration (Td)>| | | of Trial Duration (Td)>|
| | | | | |
| | Query the controller for| | | Query the controller for|
| | discovered n/w topo.(N1)| | | discovered n/w topo.(N1)|
| |<--------------------------| | |<--------------------------|
| | | | | |
| | <If N1==N repeat Step 1 | | | <If N1==N repeat Step 1 |
| |with N+1 nodes until N1<N >| | |with N+1 nodes until N1<N >|
| | | | | |
| | <If N1<N repeat Step 1 | | | <If N1<N repeat Step 1 |
| | with N=N1 nodes once and | | | with N=N1 nodes once and |
| | exit> | | | exit> |
| | | | | |
Legend: Legend:
n/w topo: Network Topology n/w topo: Network Topology
OF: OpenFlow OF: OpenFlow
Discussion: Discussion:
The value of N1 provides the Network Discovery Size value. The test The value of N1 provides the Network Discovery Size value. The trial
duration can be set to the stipulated time within which the user duration can be set to the stipulated time within which the user
expects the controller to complete the discovery process. expects the controller to complete the discovery process.
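The iterative procedure above (grow the topology by one node while the
controller discovers all of it; once it falls short, rerun once with
N = N1) can be sketched as follows. Here discover(n) is a hypothetical
helper standing in for one full trial that returns the discovered size
N1; it is not part of the methodology:

```python
def network_discovery_size(discover, n_start):
    """discover(n): run one trial with an n-node topology and return the
    number of nodes (N1) the controller discovered within the trial
    duration. Assumes discovery eventually saturates, so the loop ends."""
    n = n_start
    while True:
        n1 = discover(n)
        if n1 == n:
            n += 1               # N1 == N: repeat with one more node
        else:
            return discover(n1)  # N1 < N: repeat once with N = N1, exit
```

For example, a controller that saturates at 50 nodes
(discover = lambda n: min(n, 50)), started at 48 nodes, reports a
Network Discovery Size of 50.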
B.5.3. Forwarding Table Capacity B.5.3. Forwarding Table Capacity
Procedure: Procedure:
Test Traffic Network Devices OpenFlow SDN Test Traffic Network Devices OpenFlow SDN
Generator TP1 Controller Application Generator TP1 Controller Application
| | | | | | | |
| | | | | | | |
skipping to change at page 52, line 36 skipping to change at page 52, line 36
| | | | | | | | | |
| | | |<Stop the test| | | | |<Stop the test|
| | | | after recv. | | | | | after recv. |
| | | | traffic upon| | | | | traffic upon|
| | | | failover> | | | | | failover> |
Legend: Legend:
G-ARP: Gratuitous ARP message. G-ARP: Gratuitous ARP message.
Seq.no: Sequence number. Seq.no: Sequence number.
Sa: Neighbour switch of the switch that was brought down. Sa: Neighbor switch of the switch that was brought down.
Discussion: Discussion:
The time difference between the last valid frame received before the The time difference between the last valid frame received before the
traffic loss (Packet number with sequence number x) and the first traffic loss (Packet number with sequence number x) and the first
frame received after the traffic loss (packet with sequence number frame received after the traffic loss (packet with sequence number
n) will provide the network path re-provisioning time. n) will provide the network path re-provisioning time.
Note that the test is valid only when the controller provisions the Note that the trial is valid only when the controller provisions the
alternate path upon network failure. alternate path upon network failure.
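The subtraction described above can be sketched as follows; arrival
times and sequence numbers are illustrative, and the code assumes at
most one gap in the received sequence numbers (the traffic loss caused
by the failover):

```python
def path_reprovisioning_time(arrivals):
    """arrivals: dict mapping received sequence number -> arrival time
    (e.g. in milliseconds). The re-provisioning time is the arrival-time
    difference across the first gap in sequence numbers."""
    seqs = sorted(arrivals)
    for prev, nxt in zip(seqs, seqs[1:]):
        if nxt != prev + 1:   # gap: frames lost during failover
            return arrivals[nxt] - arrivals[prev]
    return 0                  # no loss observed
```

For instance, if frame 3 arrives at t=20 ms and the next frame received
is number 7 at t=150 ms, the re-provisioning time is 130 ms.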
Authors' Addresses Authors' Addresses
Bhuvaneswaran Vengainathan Bhuvaneswaran Vengainathan
Veryx Technologies Inc. Veryx Technologies Inc.
1 International Plaza, Suite 550 1 International Plaza, Suite 550
Philadelphia Philadelphia
PA 19113 PA 19113
 End of changes. 66 change blocks. 
197 lines changed or deleted 210 lines changed or added

This html diff was produced by rfcdiff 1.45. The latest version is available from http://tools.ietf.org/tools/rfcdiff/