Internet-Draft                                Bhuvaneswaran Vengainathan
Network Working Group                                        Anton Basil
Intended Status: Informational                        Veryx Technologies
Expires: January 8, 2017                                  Mark Tassinari
                                                         Hewlett-Packard
                                                          Vishwas Manral
                                                                Nano Sec
                                                             Sarah Banks
                                                          VSS Monitoring
                                                            July 8, 2016

        Terminology for Benchmarking SDN Controller Performance
             draft-ietf-bmwg-sdn-controller-benchmark-term-02

Abstract

This document defines terminology for benchmarking an SDN
controller's control plane performance. It extends the terminology
already defined in RFC 7426 for the purpose of benchmarking SDN
controllers. The terms provided in this document help to benchmark
an SDN controller's performance independent of the controller's
supported protocols and/or network services. A mechanism for
benchmarking the performance of SDN controllers is defined in the
accompanying methodology document.

skipping to change at page 1, line 44

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current.

Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 8, 2017.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
2. Term Definitions
   2.1. SDN Terms
      2.1.1. Flow
      2.1.2. Northbound Interface
      2.1.3. Controller Forwarding Table
      2.1.4. Proactive Flow Provisioning Mode
      2.1.5. Reactive Flow Provisioning Mode
      2.1.6. Path
      2.1.7. Standalone Mode
      2.1.8. Cluster/Redundancy Mode
      2.1.9. Asynchronous Message
      2.1.10. Test Traffic Generator
   2.2. Test Configuration/Setup Terms
      2.2.1. Number of Network Devices
      2.2.2. Test Iterations
      2.2.3. Test Duration
      2.2.4. Number of Cluster nodes
   2.3. Benchmarking Terms
      2.3.1. Performance
         2.3.1.1. Network Topology Discovery Time
         2.3.1.2. Asynchronous Message Processing Time
         2.3.1.3. Asynchronous Message Processing Rate
         2.3.1.4. Reactive Path Provisioning Time
         2.3.1.5. Proactive Path Provisioning Time
         2.3.1.6. Reactive Path Provisioning Rate
         2.3.1.7. Proactive Path Provisioning Rate
         2.3.1.8. Network Topology Change Detection Time
      2.3.2. Scalability
         2.3.2.1. Control Sessions Capacity
         2.3.2.2. Network Discovery Size
         2.3.2.3. Forwarding Table Capacity
      2.3.3. Security
         2.3.3.1. Exception Handling
         2.3.3.2. Denial of Service Handling
      2.3.4. Reliability
         2.3.4.1. Controller Failover Time
         2.3.4.2. Network Re-Provisioning Time
3. Test Setup
   3.1. Test setup - Controller working in Standalone Mode
   3.2. Test setup - Controller working in Cluster Mode
4. Test Coverage
5. References
   5.1. Normative References
   5.2. Informative References
6. IANA Considerations
7. Security Considerations
8. Acknowledgements
9. Authors' Addresses

1. Introduction

Software Defined Networking (SDN) is a networking architecture in
which network control is decoupled from the underlying forwarding
function and is placed in a centralized location called the SDN
controller. The SDN controller abstracts the underlying network and
offers a global view of the overall network to applications and
business logic. Thus, an SDN controller provides the flexibility to
program, control, and manage network behaviour dynamically through

skipping to change at page 9, line 38

This section defines metrics for benchmarking the SDN controller.
The procedures for measuring the defined metrics are given in the
accompanying methodology document.

2.3.1. Performance

2.3.1.1. Network Topology Discovery Time

Definition:
The time taken by controller(s) to determine the complete network
topology, defined as the interval starting with the first discovery
message from the controller(s) at its Southbound interface, ending
with all features of the static topology determined.

Discussion:
Network topology discovery is key for the SDN controller to
provision and manage the network, so it is important to measure how
quickly the controller discovers the topology and learns the current
network state. This benchmark is obtained by presenting a network
topology (Tree, Mesh or Linear) with a given number of nodes to the
controller and waiting for the discovery process to complete. It is
expected that the controller supports a network discovery mechanism
and uses protocol messages for its discovery process.

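As an illustration only (this sketch is not part of the terminology,
and all names and values are hypothetical), the benchmark could be
derived from timestamps captured over repeated trials along the
following lines:

   # Python sketch: Network Topology Discovery Time (milliseconds).
   # Each trial pairs the time the first discovery message is seen at
   # the controller's Southbound interface with the time all features
   # of the static topology are determined.
   from statistics import mean

   def discovery_time_ms(trials):
       """trials: list of (t_first_discovery_msg, t_topology_complete),
       both in seconds."""
       return mean((end - start) * 1000.0 for start, end in trials)

   print(discovery_time_ms([(0.00, 1.42), (0.00, 1.38), (0.00, 1.40)]))
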
Measurement Units:
milliseconds

See Also:
None

2.3.1.2. Asynchronous Message Processing Time

Definition:
The time taken by controller(s) to process an asynchronous message,
defined as the interval starting with an asynchronous message from a
network device after the discovery of all the devices by the
controller(s), ending with a response message from the controller(s)
at its Southbound interface.

Discussion:
For SDN to support dynamic network provisioning, it is important to
measure how quickly the controller responds to an event triggered
from the network. The event could be any notification message
generated by a Network Device upon arrival of a new flow, link down,
etc. This benchmark is obtained by sending asynchronous messages
from every connected Network Device, one at a time, for the defined
test duration. This test assumes that the controller will respond to
the received asynchronous message.

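For illustration, a minimal sketch of the computation from
hypothetical per-message timestamps captured at the test tool; it is
not part of the benchmark definition:

   # Python sketch: Asynchronous Message Processing Time (ms).
   # Each record pairs the time an asynchronous message was sent by a
   # Network Device with the time the controller's response was seen;
   # unanswered messages (response is None) are excluded.
   from statistics import mean

   records = [(0.000, 0.004), (0.010, 0.013), (0.020, None)]  # seconds
   answered = [(tx, rx) for tx, rx in records if rx is not None]
   print(mean((rx - tx) * 1000.0 for tx, rx in answered))
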
Measurement Units:
milliseconds

See Also:
None

2.3.1.3. Asynchronous Message Processing Rate

Definition:
The maximum number of asynchronous messages that the controller(s)
can process, defined as the number of asynchronous messages the
controller(s) can process at its Southbound interface between the
start of the test and the expiry of the given test duration.

Discussion:
As SDN assures flexible network and agile provisioning, it is
important to measure how many network events the controller can
handle at a time. This benchmark is obtained by sending asynchronous
messages from every connected Network Device at full connection
capacity for the given test duration. This test assumes that the
controller will respond to all the received asynchronous messages.

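Purely as an example with hypothetical numbers (not part of the
definition), the rate reduces to a division over the test duration:

   # Python sketch: Asynchronous Message Processing Rate
   # (messages processed per second) over a fixed test duration.
   def processing_rate(responses_received, test_duration_s):
       return responses_received / test_duration_s

   # e.g. 45,000 responses observed during a 30-second test
   print(processing_rate(45000, 30.0))   # -> 1500.0 messages/second
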
Measurement Units:
Messages processed per second.

See Also:
None

2.3.1.4. Reactive Path Provisioning Time

Definition:
The time taken by the controller to set up a path reactively between
source and destination node, defined as the interval starting with
the first flow provisioning request message received by the
controller(s), ending with the last flow provisioning response
message sent from the controller(s) at its Southbound interface.

Discussion:
As SDN supports agile provisioning, it is important to measure how
fast the controller provisions an end-to-end flow in the dataplane.
The benchmark is obtained by sending traffic from a source endpoint
to the destination endpoint, and finding the time difference between
the first and the last flow provisioning message exchanged between
the controller and the Network Devices for the traffic path.

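For illustration only, assuming the timestamps of the provisioning
messages for one traffic path have been captured by the test tool
(all names here are hypothetical):

   # Python sketch: Reactive Path Provisioning Time (ms), taken as the
   # difference between the first flow provisioning request received by
   # the controller and the last flow provisioning response it sends.
   def reactive_provisioning_time_ms(msg_timestamps_s):
       return (max(msg_timestamps_s) - min(msg_timestamps_s)) * 1000.0

   # timestamps (seconds) of all provisioning messages for one path
   print(reactive_provisioning_time_ms([10.000, 10.002, 10.006, 10.009]))
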
Measurement Units:
milliseconds.

See Also:
None

2.3.1.5. Proactive Path Provisioning Time

Definition:
The time taken by the controller to set up a path proactively
between source and destination node, defined as the interval
starting with the first proactive flow provisioned in the
controller(s) at its Northbound interface, ending with the last flow
provisioning response message sent from the controller(s) at its
Southbound interface.

Discussion:
For SDN to support pre-provisioning of traffic paths from the
application, it is important to measure how fast the controller
provisions an end-to-end flow in the dataplane. The benchmark is
obtained by provisioning a flow on the controller's northbound
interface for the traffic to reach from a source to a destination
endpoint, and finding the time difference between the first and the
last flow provisioning message exchanged between the controller and
the Network Devices for the traffic path.

Measurement Units:
milliseconds.

See Also:
None

2.3.1.6. Reactive Path Provisioning Rate

Definition:
The maximum number of independent paths a controller can
concurrently establish between source and destination nodes
reactively, defined as the number of paths provisioned by the
controller(s) at its Southbound interface for the flow provisioning
requests received for path provisioning at its Southbound interface
between the start of the test and the expiry of the given test
duration.

Discussion:
For SDN to support agile traffic forwarding, it is important to
measure how many end-to-end flows the controller can set up in the
dataplane. This benchmark is obtained by sending traffic, each with
unique source and destination pairs, from the source Network Device
and determining the number of frames received at the destination
Network Device.

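As a hypothetical sketch (not part of the terminology), the rate
could be counted from the unique source/destination pairs observed
at the destination within the test duration:

   # Python sketch: Reactive Path Provisioning Rate (paths/second),
   # counting distinct source/destination pairs for which traffic was
   # received at the destination within the test duration.
   def provisioning_rate(received_pairs, test_duration_s):
       return len(set(received_pairs)) / test_duration_s

   pairs = [("h1", "h2"), ("h1", "h3"), ("h2", "h3"), ("h1", "h2")]
   print(provisioning_rate(pairs, 2.0))   # -> 1.5 paths/second
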
Measurement Units:
Paths provisioned per second.

See Also:
None

2.3.1.7. Proactive Path Provisioning Rate

Definition:
The maximum number of independent paths a controller can
concurrently establish between source and destination nodes
proactively, defined as the number of paths provisioned by the
controller(s) at its Southbound interface for the paths provisioned
in its Northbound interface between the start of the test and the
expiry of the given test duration.

Discussion:
For SDN to support pre-provisioning of traffic paths for a larger
network from the application, it is important to measure how many
end-to-end flows the controller can set up in the dataplane. This
benchmark is obtained by sending traffic, each with unique source
and destination pairs, from the source Network Device, programming
the flows on the controller's northbound interface for traffic to
reach from each of the unique source and destination pairs, and
determining the number of frames received at the destination Network
Device.

Measurement Units:
Paths provisioned per second.

See Also:
None

2.3.1.8. Network Topology Change Detection Time

Definition:
The amount of time required for the controller to detect any changes
in the network topology, defined as the interval starting with the
notification message received by the controller(s) at its Southbound
interface, ending with the first topology rediscovery message sent
from the controller(s) at its Southbound interface.

Discussion:
In order for the controller to support fast network failure
recovery, it is critical to measure how fast the controller is able
to detect any network-state change events. This benchmark is
obtained by triggering a topology change event and measuring the
time the controller takes to detect it and initiate a topology
re-discovery process.

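For illustration, with hypothetical timestamps, the detection time
is the difference between the two events named in the definition:

   # Python sketch: Network Topology Change Detection Time (ms),
   # from the change notification received by the controller to the
   # first topology re-discovery message it sends.
   def detection_time_ms(t_notification_s, t_first_rediscovery_s):
       return (t_first_rediscovery_s - t_notification_s) * 1000.0

   print(detection_time_ms(5.000, 5.035))   # about 35 ms
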
Measurement Units:
milliseconds

See Also:
None

2.3.2. Scalability

2.3.2.1. Control Sessions Capacity

Definition:
The maximum number of control sessions the controller can maintain,
defined as the number of sessions that the controller can accept
from network devices, starting with the first control session,
ending with the last control session that the controller(s) accepts
at its Southbound interface.

Discussion:
Measuring the controller's control sessions capacity is important to
determine the controller's system and bandwidth resource
requirements. This benchmark is obtained by establishing a control
session with the controller from each of the Network Devices until
session establishment fails. The number of sessions that were
successfully established provides the Control Sessions Capacity.

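A hypothetical sketch of the counting procedure described above;
try_establish_session stands in for the real session setup and is
not defined by this document:

   # Python sketch: Control Sessions Capacity, counted as the number
   # of consecutive control sessions accepted before the first failure.
   def control_sessions_capacity(try_establish_session, max_attempts):
       count = 0
       for _ in range(max_attempts):
           if not try_establish_session():
               break
           count += 1
       return count

   print(control_sessions_capacity(lambda: True, max_attempts=100))  # -> 100
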
Measurement Units:
N/A

See Also:
None

2.3.2.2. Network Discovery Size

Definition:
The network size (number of nodes, links and hosts) that a
controller can discover, defined as the size of a network that the
controller(s) can discover, starting from a network topology given
by the user for discovery, ending with the topology that the
controller(s) could successfully discover.

Discussion:
For optimal network planning, it is key to measure the maximum
network size that the controller can discover. This benchmark is
obtained by presenting an initial set of Network Devices for
discovery to the controller. Based on the initial discovery, the
number of Network Devices is increased or decreased to determine the
maximum number of nodes that the controller can discover.

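As an illustrative sketch only, the increase/decrease procedure
could be approximated as follows; discovers_all is a hypothetical
stand-in for one discovery trial at a given network size:

   # Python sketch: Network Discovery Size, grown from an initial set
   # of devices until the controller can no longer discover the full
   # topology.
   def network_discovery_size(discovers_all, initial, step, ceiling):
       size = initial
       while size + step <= ceiling and discovers_all(size + step):
           size += step
       return size

   # toy trial: pretend discovery succeeds up to 480 nodes
   print(network_discovery_size(lambda n: n <= 480, 100, 20, 1000))  # -> 480
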
Measurement Units:
N/A

skipping to change at page 16, line 32

See Also:
None

2.3.4. Reliability

2.3.4.1. Controller Failover Time

Definition:
The time taken to switch from an active controller to the backup
controller, when the controllers work in redundancy mode and the
active controller fails, defined as the interval starting when the
active controller is brought down, ending with the first
re-discovery message received from the new controller at its
Southbound interface.

Discussion:
This benchmark determines the impact on provisioning new flows when
controllers are teamed and the active controller fails.

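For illustration, with hypothetical timestamps, the failover time is
the difference between the two events named in the definition:

   # Python sketch: Controller Failover Time (ms), from the instant
   # the active controller is brought down to the first re-discovery
   # message received from the new (backup) controller.
   def failover_time_ms(t_active_down_s, t_first_rediscovery_s):
       return (t_first_rediscovery_s - t_active_down_s) * 1000.0

   print(failover_time_ms(12.000, 12.250))   # -> 250.0 ms
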
Measurement Units:
milliseconds.

See Also:
None

2.3.4.2. Network Re-Provisioning Time

Definition:
The time taken by the Controller to re-route traffic when there is a
failure in existing traffic paths, defined as the interval starting
from the first failure notification message received by the
controller, ending with the last flow re-provisioning message sent
by the controller at its Southbound interface.

Discussion:
This benchmark determines the controller's re-provisioning ability
upon network failures. This benchmark test assumes the following:

i.  Network topology supports redundant path between
    source and destination endpoints.
ii. Controller does not pre-provision the redundant path.

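As an illustrative sketch with hypothetical values (not part of the
benchmark definition):

   # Python sketch: Network Re-Provisioning Time (ms), from the first
   # failure notification received by the controller to the last flow
   # re-provisioning message it sends on the Southbound interface.
   def reprovisioning_time_ms(t_first_failure_s, reprovision_times_s):
       return (max(reprovision_times_s) - t_first_failure_s) * 1000.0

   print(reprovisioning_time_ms(20.000, [20.010, 20.018, 20.025]))  # about 25 ms
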
Measurement Units:
milliseconds.

See Also:
None

3. Test Setup

This section provides common reference topologies that are later

skipping to change at page 21, line 28

[RFC2330]  V. Paxson, G. Almes, J. Mahdavi, M. Mathis, "Framework
           for IP Performance Metrics", RFC 2330, May 1998.

[OpenFlow Switch Specification] ONF, "OpenFlow Switch Specification"
           Version 1.4.0 (Wire Protocol 0x05), October 14, 2013.

[I-D.sdn-controller-benchmark-meth] Bhuvaneswaran.V, Anton Basil,
           Mark.T, Vishwas Manral, Sarah Banks, "Benchmarking
           Methodology for SDN Controller Performance",
           draft-ietf-bmwg-sdn-controller-benchmark-meth-02
           (Work in progress), July 8, 2016.

5.2. Informative References

[OpenContrail] Ankur Singla, Bruno Rijsman, "OpenContrail
           Architecture Documentation",
           http://opencontrail.org/opencontrail-architecture-documentation

[OpenDaylight] OpenDaylight Controller: Architectural Framework,
           https://wiki.opendaylight.org/view/OpenDaylight_Controller