Internet-Draft                              Bhuvaneswaran Vengainathan
Network Working Group                                      Anton Basil
Intended Status: Informational                      Veryx Technologies
Expires: May 16, 2018                                    Mark Tassinari
                                                        Hewlett-Packard
                                                         Vishwas Manral
                                                                Nano Sec
                                                            Sarah Banks
                                                         VSS Monitoring
                                                      November 16, 2017

        Benchmarking Methodology for SDN Controller Performance
           draft-ietf-bmwg-sdn-controller-benchmark-meth-06
Abstract

   This document defines the methodologies for benchmarking control
   plane performance of SDN controllers. Terminology related to
   benchmarking SDN controllers is described in the companion
   terminology document. SDN controllers have been implemented with
   many varying designs in order to achieve their intended network
   functionality. Hence, the authors have taken the approach of
   considering an SDN controller as a black box, defining the
skipping to change at page 1, line 45

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF). Note that other groups may also distribute
   working documents as Internet-Drafts. The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 16, 2018.
Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents

skipping to change at page 10, line 40
   I1.

   4. Query the controller every 3 seconds to obtain the discovered
      network topology information through the northbound interface or
      the management interface and compare it with the deployed network
      topology information.

   5. Stop the trial when the discovered topology information matches
      the deployed network topology, or when the discovered topology
      information returned by 3 consecutive queries is identical.

   6. Record the time at which the last discovery message (Tmn) is sent
      to the controller from the forwarding plane test emulator
      interface (I1) when the trial completes successfully (e.g., the
      topology matches).
Measurement:

   Topology Discovery Time Tr1 = Tmn-Tm1.

                                       Tr1 + Tr2 + Tr3 .. Trn
   Average Topology Discovery Time = -----------------------
                                            Total Trials
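
   The computation above can be automated in a test tool. The following
   is a minimal, non-normative sketch in Python of how the per-trial and
   average Topology Discovery Time could be derived from the recorded
   timestamps; the function names and the (Tm1, Tmn) input format are
   illustrative assumptions, not part of this methodology. The same
   per-trial/average pattern also applies to the other time measurements
   in this section (e.g., Reactive and Proactive Path Provisioning Time,
   Network Topology Change Detection Time), with the corresponding
   timestamps substituted.

      # Non-normative example: computing Topology Discovery Time.
      # Assumes the tool records, per trial, the timestamp of the first
      # discovery message (Tm1) and of the last discovery message (Tmn)
      # observed at interface I1, in seconds.

      def topology_discovery_time(tm1, tmn):
          """Per-trial Topology Discovery Time: Tr = Tmn - Tm1."""
          return tmn - tm1

      def average_topology_discovery_time(trials):
          """(Tr1 + Tr2 + ... + Trn) / Total Trials."""
          times = [topology_discovery_time(tm1, tmn) for (tm1, tmn) in trials]
          return sum(times) / len(times)

      # Example with three hypothetical trials (Tm1, Tmn), in seconds:
      trials = [(0.00, 4.82), (0.00, 5.10), (0.00, 4.95)]
      print(average_topology_discovery_time(trials))   # ~4.96 seconds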
Reporting Format:

   The Topology Discovery Time results MUST be reported in the format
   of a table, with a row for each successful iteration. The last row
   of the table indicates the average Topology Discovery Time.

   If this test is repeated with a varying number of nodes over the
   same topology, the results SHOULD be reported in the form of a
   graph. The X coordinate SHOULD be the Number of nodes (N), the Y
   coordinate

skipping to change at page 11, line 45
Prerequisite:

   1. The controller MUST have successfully completed the network
      topology discovery for the connected Network Devices.

Procedure:

   1. Generate asynchronous messages from every connected Network
      Device to the SDN controller, one at a time in series, from the
      forwarding plane test emulator for the trial duration.

   2. Record every request transmit (T1) timestamp and the
      corresponding response (R1) received timestamp at the forwarding
      plane test emulator interface (I1) for every successful message
      exchange.
Measurement:

                                              (R1-T1) + (R2-T2)..(Rn-Tn)
   Asynchronous Message Processing Time Tr1 = --------------------------
                                                          Nrx

   Where Nrx is the total number of successful messages exchanged

                                                   Tr1 + Tr2 + Tr3..Trn
   Average Asynchronous Message Processing Time = --------------------
                                                       Total Trials
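
   As an informal illustration only, the two-level averaging above
   (first over the Nrx message exchanges within a trial, then over all
   trials) could be computed as in the sketch below; the list-of-(T, R)
   data structure is an assumption of this example, not a requirement
   of the methodology.

      # Non-normative example: Asynchronous Message Processing Time.
      # Each trial is a list of (T, R) pairs: request transmit timestamp
      # T and the corresponding response receive timestamp R at
      # interface I1, in seconds.

      def processing_time_per_trial(exchanges):
          """((R1-T1) + ... + (Rn-Tn)) / Nrx for one trial."""
          nrx = len(exchanges)
          return sum(r - t for (t, r) in exchanges) / nrx

      def average_processing_time(all_trials):
          """(Tr1 + Tr2 + ... + Trn) / Total Trials."""
          per_trial = [processing_time_per_trial(t) for t in all_trials]
          return sum(per_trial) / len(per_trial)

      # Two hypothetical trials with three message exchanges each:
      all_trials = [
          [(0.000, 0.004), (0.010, 0.013), (0.020, 0.025)],
          [(0.000, 0.005), (0.010, 0.014), (0.020, 0.024)],
      ]
      print(average_processing_time(all_trials))   # ~0.0042 s (4.2 ms)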
Reporting Format:

   The Asynchronous Message Processing Time results MUST be reported in
   the format of a table with a row for each iteration. The last row of
   the table indicates the average Asynchronous Message Processing
   Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

   - Successful

skipping to change at page 15, line 45
   to make the forwarding decision while paving the entire path.

Procedure:

   1. Send a single traffic stream from the test traffic generator TP1
      to test traffic generator TP2.

   2. Record the time of the first flow provisioning request message
      sent to the controller (Tsf1) from the Network Device at the
      forwarding plane test emulator interface (I1).

   3. Wait for the arrival of the first traffic frame at the Traffic
      Endpoint TP2 or the expiry of the trial duration (Td).

   4. Record the time of the last flow provisioning response message
      received from the controller (Tdf1) to the Network Device at the
      forwarding plane test emulator interface (I1).
Measurement:

   Reactive Path Provisioning Time Tr1 = Tdf1-Tsf1.

                                               Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Time = -----------------------
                                                    Total Trials
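
   A test tool implementing this procedure needs to bound each trial by
   the trial duration (Td) and use only trials in which the first frame
   actually arrived at TP2. The sketch below shows one possible,
   non-normative way to structure that bookkeeping in Python; the
   wait_for_first_frame_at_tp2() and get_tdf1() hooks are placeholders
   for emulator-specific operations and are assumptions of this example.

      # Non-normative sketch: one Reactive Path Provisioning Time trial,
      # bounded by the trial duration Td. Emulator-specific hooks are
      # passed in as plain functions (assumptions of this example).

      def run_reactive_trial(tsf1, wait_for_first_frame_at_tp2,
                             get_tdf1, td):
          """tsf1: timestamp of the first flow provisioning request at I1.
          wait_for_first_frame_at_tp2(timeout): True if a frame reached
              TP2 before the timeout expired, else False.
          get_tdf1(): timestamp of the last flow provisioning response
              observed at I1.
          td: trial duration in seconds.
          Returns Tr = Tdf1 - Tsf1, or None for an invalid trial."""
          if not wait_for_first_frame_at_tp2(timeout=td):
              return None          # no path was provisioned within Td
          return get_tdf1() - tsf1

      def average_reactive_provisioning_time(valid_trials):
          """(Tr1 + ... + Trn) / Total Trials over the valid trials."""
          return sum(valid_trials) / len(valid_trials)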
Reporting Format:

   The Reactive Path Provisioning Time results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Average Reactive Path Provisioning Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

skipping to change at page 17, line 16
   'drop'.

Procedure:

   1. Send a single traffic stream from test traffic generator TP1 to
      TP2.

   2. Install the flow entries to reach from test traffic generator
      TP1 to the test traffic generator TP2 through the controller's
      northbound or management interface.

   3. Wait for the arrival of the first traffic frame at the test
      traffic generator TP2 or the expiry of the trial duration (Td).

   4. Record the time when the proactive flow is provisioned in the
      Controller (Tsf1) at the management plane test emulator interface
      I2.

   5. Record the time of the last flow provisioning message received
      from the controller (Tdf1) at the forwarding plane test emulator
      interface I1.
Measurement:

   Proactive Flow Provisioning Time Tr1 = Tdf1-Tsf1.

                                                Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Time = -----------------------
                                                     Total Trials
Reporting Format:

   The Proactive Path Provisioning Time results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Average Proactive Path Provisioning Time.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

skipping to change at page 18, line 5
5.1.6. Reactive Path Provisioning Rate

Objective:

   The maximum number of independent paths a controller can
   concurrently establish between source and destination nodes
   reactively, defined as the number of paths provisioned by the
   controller(s) at its Southbound interface for the flow provisioning
   requests received for path provisioning at its Southbound interface
   between the start of the test and the expiry of the given trial
   duration.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

Prerequisite:

   1. The controller MUST contain the network topology information for

skipping to change at page 18, line 32
      is configured to 'send to controller'.

   4. Ensure that each Network Device in a path requires the controller
      to make the forwarding decision while provisioning the entire
      path.

Procedure:

   1. Send traffic with unique source and destination addresses from
      test traffic generator TP1.

   2. Record the total number of unique traffic frames (Ndf) received
      at the test traffic generator TP2 within the trial duration (Td).
Measurement:

                                            Ndf
   Reactive Path Provisioning Rate Tr1 = ------
                                            Td

                                               Tr1 + Tr2 + Tr3 .. Trn
   Average Reactive Path Provisioning Rate = ------------------------
                                                    Total Trials
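
   The rate computation above lends itself to a short helper. The
   sketch below is illustrative only and assumes the tool can report,
   per trial, the count of unique frames received at TP2 (Ndf) and the
   trial duration (Td); the same computation applies unchanged to the
   Proactive Path Provisioning Rate measurement in section 5.1.7.

      # Non-normative example: Reactive Path Provisioning Rate.
      # Each trial is recorded as (Ndf, Td): unique frames received at
      # TP2 and the trial duration in seconds.

      def provisioning_rate(ndf, td):
          """Per-trial rate: Ndf / Td (paths per second)."""
          return ndf / td

      def average_provisioning_rate(trials):
          """(Tr1 + Tr2 + ... + Trn) / Total Trials."""
          rates = [provisioning_rate(ndf, td) for (ndf, td) in trials]
          return sum(rates) / len(rates)

      # Three hypothetical 60-second trials:
      print(average_provisioning_rate([(5400, 60), (5700, 60), (5520, 60)]))
      # -> approximately 92.3 paths per second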
Reporting Format:

   The Reactive Path Provisioning Rate results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Average Reactive Path Provisioning Rate.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

skipping to change at page 19, line 21
5.1.7. Proactive Path Provisioning Rate

Objective:

   Measure the maximum rate of independent paths a controller can
   concurrently establish between source and destination nodes
   proactively, defined as the number of paths provisioned by the
   controller(s) at its Southbound interface for the paths requested in
   its Northbound interface between the start of the test and the
   expiry of the given trial duration. The measurement is based on
   dataplane observations of successful path activation.

Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

Prerequisite:

   1. The controller MUST contain the network topology information for

skipping to change at page 20, line 8
   1. Send traffic continuously with unique source and destination
      addresses from test traffic generator TP1.

   2. Install corresponding flow entries to reach from simulated
      sources at the test traffic generator TP1 to the simulated
      destinations at test traffic generator TP2 through the
      controller's northbound or management interface.

   3. Record the total number of unique traffic frames (Ndf) received
      at the test traffic generator TP2 within the trial duration (Td).
Measurement:

                                             Ndf
   Proactive Path Provisioning Rate Tr1 = ------
                                             Td

                                                Tr1 + Tr2 + Tr3 .. Trn
   Average Proactive Path Provisioning Rate = -----------------------
                                                     Total Trials
Reporting Format:

   The Proactive Path Provisioning Rate results MUST be reported in the
   format of a table with a row for each iteration. The last row of the
   table indicates the Average Proactive Path Provisioning Rate.

   The report should capture the following information in addition to
   the configuration parameters captured in section 5.

skipping to change at page 21, line 11
   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

Prerequisite:

   1. The controller MUST have successfully discovered the network
      topology information for the deployed network topology.

   2. The periodic network discovery operation should be configured to
      twice the Trial duration (Td) value.

Procedure:

   1. Trigger a topology change event by bringing down an active
      Network Device in the topology.

   2. Record the time when the first topology change notification is
      sent to the controller (Tcn) at the forwarding plane test
      emulator interface (I1).

   3. Stop the trial when the controller sends the first topology
      re-discovery message to the Network Device or when the trial
      duration (Td) expires.

   4. Record the time when the first topology re-discovery message is
      received from the controller (Tcd) at the forwarding plane test
      emulator interface (I1).
Measurement:

   Network Topology Change Detection Time Tr1 = Tcd-Tcn.

                                                     Tr1 + Tr2 + Tr3 .. Trn
   Average Network Topology Change Detection Time = ----------------------
                                                          Total Trials
Reporting Format:

   The Network Topology Change Detection Time results MUST be reported
   in the format of a table with a row for each iteration. The last
   row of the table indicates the average Network Topology Change
   Detection Time.

5.2. Scalability

5.2.1. Control Session Capacity

skipping to change at page 22, line 26
Reference Test Setup:

   The test SHOULD use one of the test setups described in section 3.1
   or section 3.2 of this document in combination with Appendix A.

Procedure:

   1. Establish a control connection with the controller from every
      Network Device emulated in the forwarding plane test emulator.

   2. Stop the trial when the controller starts dropping the control
      connections.

   3. Record the number of successful connections established with the
      controller (CCn) at the forwarding plane test emulator.
Measurement:

   Control Sessions Capacity = CCn.

Reporting Format:

skipping to change at page 23, line 25
   northbound interface.

Procedure:

   1. Establish the network connections between controller and network
      nodes.

   2. Query the controller for the discovered network topology
      information and compare it with the deployed network topology
      information.

   3. Increase the number of nodes by 1 when the comparison is
      successful and repeat the trial.

   4. Decrease the number of nodes by 1 when the comparison fails and
      repeat the trial.

   5. Continue the trial until the comparison of step 4 is successful.

   6. Record the number of nodes for the last trial (Ns) where the
      topology comparison was successful.
Measurement:

   Network Discovery Size = Ns.
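
   The increase-by-one / decrease-by-one search in the procedure above
   can be captured in a short loop. The following non-normative sketch
   assumes two tool-specific hooks, deploy_topology(n) and
   discovered_topology_matches(n), which are placeholders for emulator
   and controller operations and are not defined by this methodology.

      # Non-normative sketch of the Network Discovery Size search.
      # deploy_topology(n): emulate a topology of n Network Devices and
      #     connect them to the controller (placeholder hook).
      # discovered_topology_matches(n): query the controller's
      #     northbound or management interface and compare the
      #     discovered topology against the deployed one (placeholder).
      # Assumes the controller eventually fails to discover some larger
      # topology, so the loop terminates.

      def network_discovery_size(start_nodes, deploy_topology,
                                 discovered_topology_matches):
          n = start_nodes
          last_success = None
          while True:
              deploy_topology(n)
              if discovered_topology_matches(n):
                  last_success = n
                  n += 1        # comparison successful: try one more node
              else:
                  n -= 1        # comparison failed: back off by one node
                  if last_success is not None and n <= last_success:
                      break     # largest correctly discovered size found
          return last_success   # Network Discovery Size (Ns)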
Reporting Format: Reporting Format:
The Network Discovery Size results MUST be reported in addition to The Network Discovery Size results MUST be reported in addition to
the configuration parameters captured in section 5. the configuration parameters captured in section 5.
skipping to change at page 24, line 29 skipping to change at page 24, line 29
Procedure:

   Reactive Flow Provisioning Mode:

   1. Send bi-directional traffic continuously with unique source
      and/or destination addresses from test traffic generators TP1
      and TP2 at the asynchronous message processing rate of the
      controller.

   2. Query the controller at a regular interval (e.g., 5 seconds) for
      the number of learnt flow entries from its northbound interface.

   3. Stop the trial when the retrieved value is constant for three
      consecutive iterations and record the value received from the
      last query (Nrp).

   Proactive Flow Provisioning Mode:

   1. Install unique flows continuously through the controller's
      northbound or management interface until a failure response is
      received from the controller.

   2. Record the total number of successful responses (Nrp).
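
   A non-normative sketch of the reactive-mode polling loop is shown
   below; query_learnt_flow_count() stands in for whatever northbound
   query the test tool uses and is an assumption of this example.

      import time

      # Non-normative sketch: Reactive Flow Provisioning Mode polling
      # loop for the Forwarding Table Capacity test.

      def forwarding_table_capacity(query_learnt_flow_count, interval=5):
          """Poll the controller until the learnt-flow count is constant
          for three consecutive queries, then return that value (Nrp)."""
          history = []
          while True:
              history.append(query_learnt_flow_count())
              if (len(history) >= 3
                      and history[-1] == history[-2] == history[-3]):
                  return history[-1]   # stable for three iterations
              time.sleep(interval)     # e.g., query every 5 seconds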
skipping to change at page 27, line 26

   or section 3.2 of this document in combination with Appendix A.

Prerequisite:

   This test MUST be performed after obtaining the baseline measurement
   results for the above tests.

Procedure:

   1. Perform the listed tests and launch a DoS attack towards the
      controller while the trial is running.

   Note:

   DoS attacks can be launched on one of the following interfaces.

   a. Northbound (e.g., sending a huge number of requests on the
      northbound interface)
   b. Management (e.g., ping requests to the controller's management
      interface)
   c. Southbound (e.g., TCP SYN messages on the southbound interface)
skipping to change at page 28, line 46

   test traffic generator TP2.

Procedure:

   1. Send uni-directional traffic continuously with incremental
      sequence numbers and source addresses from test traffic generator
      TP1 at the rate that the controller processes without any drops.

   2. Ensure that there are no packet drops observed at the test
      traffic generator TP2.

   3. Bring down the active controller.

   4. Stop the trial when the first frame is received on TP2 after the
      failover operation.

   5. Record the time at which the last valid frame is received (T1) at
      test traffic generator TP2 before the sequence error, and the
      time at which the first valid frame is received (T2) after the
      sequence error at TP2.
Measurement:

   Controller Failover Time = (T2 - T1)
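
   Offline analysis of the TP2 capture can recover T1 and T2 from the
   gap in sequence numbers. The sketch below is a non-normative
   illustration; the (timestamp, sequence number) record format is an
   assumption of this example.

      # Non-normative example: deriving the Controller Failover Time
      # from a TP2 capture, where each received frame is recorded as a
      # (timestamp, seq_no) pair and sequence numbers are incremental.

      def controller_failover_time(frames):
          """Return T2 - T1, where T1 is the last valid frame before the
          sequence-number gap and T2 is the first valid frame after it.
          Returns None if no gap (no loss) is observed."""
          for (t1, seq1), (t2, seq2) in zip(frames, frames[1:]):
              if seq2 != seq1 + 1:      # gap in the sequence numbers
                  return t2 - t1        # failover time (T2 - T1)
          return None

      # Hypothetical capture: frames lost between seq 1002 and seq 1050.
      capture = [(10.000, 1001), (10.001, 1002),
                 (10.250, 1050), (10.251, 1051)]
      print(controller_failover_time(capture))   # ~0.249 seconds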
skipping to change at page 30, line 16

   of test traffic generators TP1 and TP2.

   3. Ensure that the controller does not pre-provision the alternate
      path in the emulated Network Devices at the forwarding plane
      test emulator.

Procedure:

   1. Send bi-directional traffic continuously with unique sequence
      numbers from TP1 and TP2.

   2. Bring down a link or switch in the traffic path.

   3. Stop the trial after receiving the first frame after network
      re-convergence.

   4. Record the time of the last received frame prior to the frame
      loss at TP2 (TP2-Tlfr) and the time of the first frame received
      after the frame loss at TP2 (TP2-Tffr). There must be a gap in
      the sequence numbers of these frames.

   5. Record the time of the last received frame prior to the frame
      loss at TP1 (TP1-Tlfr) and the time of the first frame received
      after the frame loss at TP1 (TP1-Tffr).
Measurement:

skipping to change at page 31, line 17

   - Reverse Direction Packet Loss
6. References

6.1. Normative References

   [I-D.sdn-controller-benchmark-term] Bhuvaneswaran.V, Anton Basil,
              Mark.T, Vishwas Manral, Sarah Banks, "Terminology for
              Benchmarking SDN Controller Performance",
              draft-ietf-bmwg-sdn-controller-benchmark-term-06
              (Work in progress), November 16, 2017

6.2. Informative References

   [OpenFlow Switch Specification] ONF, "OpenFlow Switch Specification",
              Version 1.4.0 (Wire Protocol 0x05), October 14, 2013.
7. IANA Considerations

   This document does not have any IANA requests.

skipping to change at page 35, line 42
        |         rcvd from switch-2 |                            |
        |--------------------------->|                            |
        |              .             |                            |
        |              .             |                            |
        |                            |                            |
        |        PACKET_IN with LLDP |                            |
        |         rcvd from switch-n |                            |
   (Tmn)|--------------------------->|                            |
        |                            |                            |
        |                            |    <Wait for the expiry    |
        |                            |  of Trial duration (Td)>   |
        |                            |                            |
        |                            |  Query the controller for  |
        |                            |  discovered n/w topo.(Di)  |
        |                            |<---------------------------|
        |                            |                            |
        |                            |  <Compare the discovered   |
        |                            |   & offered n/w topology>  |
        |                            |                            |

   Legend:
skipping to change at page 36, line 48

        |                            |                            |
        | PACKET_IN with single OFP  |                            |
        | match header               |                            |
    (Tn)|--------------------------->|                            |
        |                            |                            |
        | PACKET_OUT with single OFP |                            |
        |              action header |                            |
    (Rn)|<---------------------------|                            |
        |                            |                            |
        | <Wait for the expiry of    |                            |
        |  Trial duration>           |                            |
        |                            |                            |
        | <Record the number of      |                            |
        |  PACKET_INs/PACKET_OUTs    |                            |
        |  Exchanged (Nrx)>          |                            |
        |                            |                            |

   Legend:

     T0,T1, ..Tn are PACKET_IN messages transmit timestamps.
     R0,R1, ..Rn are PACKET_OUT messages receive timestamps.
skipping to change at page 41, line 21

     G-ARP: Gratuitous ARP

     D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
             Destination Endpoint n

     S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
             Endpoint n

   Discussion:

   The Reactive Path Provisioning Rate can be obtained by finding the
   total number of frames received at TP2 after the trial duration.
B.4.7. Proactive Path Provisioning Rate

   Procedure:

    Test Traffic    Test Traffic     Network Devices   OpenFlow    SDN
    Generator TP1   Generator TP2                      Controller  Application
         |               |                  |              |          |
         |               | G-ARP (D1..Dn)   |              |          |
         |               |----------------->|              |          |

skipping to change at page 42, line 28
     G-ARP: Gratuitous ARP

     D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ....
             Destination Endpoint n

     S1..Sn: Source Endpoint 1, Source Endpoint 2 .., Source
             Endpoint n

   Discussion:

   The Proactive Path Provisioning Rate can be obtained by finding the
   total number of frames received at TP2 after the trial duration.
B.4.8. Network Topology Change Detection Time

   Procedure:

      Network Devices           OpenFlow SDN
                                Controller           Application
            |                        |                     |
            |                        |  <Bring down a link in  |
            |                        |              switch S1> |

skipping to change at page 44, line 22
        |         rcvd from switch-2 |                            |
        |--------------------------->|                            |
        |              .             |                            |
        |              .             |                            |
        |                            |                            |
        |        PACKET_IN with LLDP |                            |
        |         rcvd from switch-n |                            |
        |--------------------------->|                            |
        |                            |                            |
        |                            |    <Wait for the expiry    |
        |                            |  of Trial duration (Td)>   |
        |                            |                            |
        |                            |  Query the controller for  |
        |                            |  discovered n/w topo.(N1)  |
        |                            |<---------------------------|
        |                            |                            |
        |                            | <If N1==N repeat Step 1    |
        |                            |  with N+1 nodes until N1<N>|
        |                            |                            |
        |                            | <If N1<N repeat Step 1     |
        |                            |  with N=N1 nodes once and  |
        |                            |  exit>                     |
        |                            |                            |

   Legend:

     n/w topo: Network Topology
     OF: OpenFlow

   Discussion:

   The value of N1 provides the Network Discovery Size value. The trial
   duration can be set to the stipulated time within which the user
   expects the controller to complete the discovery process.
B.5.3. Forwarding Table Capacity

   Procedure:

    Test Traffic      Network Devices      OpenFlow        SDN
    Generator TP1                          Controller      Application
         |                   |                 |                |
         |                   |                 |                |

skipping to change at page 52, line 45
     Seq.no: Sequence number.
     Sa: Neighbor switch of the switch that was brought down.

   Discussion:

   The time difference between the last valid frame received before the
   traffic loss (packet with sequence number x) and the first frame
   received after the traffic loss (packet with sequence number n) will
   provide the network path re-provisioning time.

   Note that the trial is valid only when the controller provisions the
   alternate path upon network failure.
Authors' Addresses

   Bhuvaneswaran Vengainathan
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia
   PA 19113