Internet-Draft                             Bhuvaneswaran Vengainathan
Network Working Group                                     Anton Basil
Intended Status: Informational                      Veryx Technologies
Expires: June 10, 2018                                  Mark Tassinari
                                                       Hewlett-Packard
                                                        Vishwas Manral
                                                               Nano Sec
                                                           Sarah Banks
                                                         VSS Monitoring
                                                       January 10, 2018

        Terminology for Benchmarking SDN Controller Performance
           draft-ietf-bmwg-sdn-controller-benchmark-term-07
Abstract

This document defines terminology for benchmarking an SDN
controller's control plane performance. It extends the terminology
already defined in RFC 7426 for the purpose of benchmarking SDN
controllers. The terms provided in this document help to benchmark
an SDN controller's performance independent of the controller's
supported protocols and/or network services. A mechanism for
benchmarking the performance of SDN controllers is defined in the
skipping to change at page 1, line 44
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current
Internet-Drafts is at http://datatracker.ietf.org/drafts/current.

Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on June 10, 2018.
Copyright Notice

Copyright (c) 2018 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
skipping to change at page 2, line 37
      2.1.3. Controller Forwarding Table..........................5
      2.1.4. Proactive Flow Provisioning Mode.....................5
      2.1.5. Reactive Flow Provisioning Mode......................6
      2.1.6. Path.................................................6
      2.1.7. Standalone Mode......................................6
      2.1.8. Cluster/Redundancy Mode..............................7
      2.1.9. Asynchronous Message.................................7
      2.1.10. Test Traffic Generator..............................8
   2.2. Test Configuration/Setup Terms............................8
      2.2.1. Number of Network Devices............................8
      2.2.2. Trial Repetition.....................................8
      2.2.3. Trial Duration.......................................9
      2.2.4. Number of Cluster nodes..............................9
   2.3. Benchmarking Terms.......................................10
      2.3.1. Performance.........................................10
         2.3.1.1. Network Topology Discovery Time................10
         2.3.1.2. Asynchronous Message Processing Time...........10
         2.3.1.3. Asynchronous Message Processing Rate...........11
         2.3.1.4. Reactive Path Provisioning Time................12
         2.3.1.5. Proactive Path Provisioning Time...............12
         2.3.1.6. Reactive Path Provisioning Rate................13
skipping to change at page 3, line 16
         2.3.2.1. Control Sessions Capacity......................15
         2.3.2.2. Network Discovery Size.........................15
         2.3.2.3. Forwarding Table Capacity......................16
      2.3.3. Security............................................16
         2.3.3.1. Exception Handling.............................16
         2.3.3.2. Denial of Service Handling.....................17
      2.3.4. Reliability.........................................17
         2.3.4.1. Controller Failover Time.......................17
         2.3.4.2. Network Re-Provisioning Time...................18
3. Test Setup....................................................18
   3.1. Test setup - Controller working in Standalone Mode.......19
   3.2. Test setup - Controller working in Cluster Mode..........20
4. Test Coverage.................................................21
5. References....................................................22
   5.1. Normative References.....................................22
   5.2. Informative References...................................22
6. IANA Considerations...........................................22
7. Security Considerations.......................................22
8. Acknowledgements..............................................23
9. Authors' Addresses............................................23
1. Introduction

Software Defined Networking (SDN) is a networking architecture in
which network control is decoupled from the underlying forwarding
function and is placed in a centralized location called the SDN
controller. The SDN controller provides an abstraction of the
underlying network and offers a global view of the overall network
to applications and business logic. Thus, an SDN controller provides
the flexibility to program, control, and manage network behaviour
dynamically through standard interfaces. Since the network controls
are logically centralized, the need to benchmark the SDN controller
performance becomes significant. This document defines terms to
benchmark various controller designs for performance, scalability,
reliability and security, independent of northbound and southbound
protocols.
Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.

2. Term Definitions

2.1. SDN Terms
skipping to change at page 5, line 27

N/A

See Also:
None

2.1.3. Controller Forwarding Table

Definition:
A controller forwarding table contains flow entries learned in one
of two ways: first, entries could be learned from traffic received
through the data plane, or second, these entries could be statically
provisioned on the controller and distributed to devices via the
southbound interface.
Discussion:
The controller forwarding table has an aging mechanism, which is
applied only to dynamically learned entries.
Measurement Units:
N/A

See Also:
None

2.1.4. Proactive Flow Provisioning Mode

Definition:

skipping to change at page 7, line 47

2.1.9. Asynchronous Message

Definition:
Any message from the Network Device that is generated for network
events.

Discussion:
Control messages such as flow setup request and response messages
are classified as asynchronous messages. The controller has to
return a response message. Note that the Network Device will not be
in blocking mode and continues to send/receive other control
messages.
Measurement Units:
N/A

See Also:
None

2.1.10. Test Traffic Generator

Definition:
Test Traffic Generator is an entity that generates/receives network
traffic.

Discussion:
Test Traffic Generator typically connects with Network Devices to
send/receive real-time network traffic.
Measurement Units:
N/A

See Also:
None

2.2. Test Configuration/Setup Terms

2.2.1. Number of Network Devices

skipping to change at page 8, line 43

Discussion:
The Network Devices defined in the test topology can be deployed
using real hardware or emulated in hardware platforms.

Measurement Units:
N/A

See Also:
None

2.2.2. Trial Repetition

Definition:
The number of times the test needs to be repeated.

Discussion:
The test needs to be repeated for multiple iterations to obtain a
reliable metric. It is recommended that this test SHOULD be
performed for at least 10 iterations to increase the confidence in
the measured result.
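
The following Python sketch is informative only and is not part of
this terminology; it merely illustrates how results from repeated
trials might be aggregated. The run_trial() callable and the choice
of summary statistics are assumptions of the illustration.

   # Illustrative only: aggregate one benchmark metric over repeated
   # trials. run_trial() stands in for any single trial defined in the
   # methodology document and is assumed to return one measurement.
   import statistics

   def repeat_trials(run_trial, iterations=10):
       results = [run_trial() for _ in range(iterations)]
       # Report central tendency and spread to convey confidence.
       return {"mean": statistics.mean(results),
               "stdev": statistics.pstdev(results),
               "min": min(results),
               "max": max(results)}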
skipping to change at page 9, line 23

See Also:
None

2.2.3. Trial Duration

Definition:
Defines the duration of test trials for each iteration.

Discussion:
Trial duration forms the basis for stop criteria for benchmarking
tests. A trial not completed within this time interval is considered
incomplete.
Measurement Units:
seconds

See Also:
None

2.2.4. Number of Cluster nodes

skipping to change at page 10, line 9

Measurement Units:
N/A

See Also:
None

2.3. Benchmarking Terms
This section defines metrics for benchmarking the SDN controller.
The procedures to measure the defined metrics are specified in the
accompanying methodology document [I-D.sdn-controller-benchmark-meth].
2.3.1. Performance

2.3.1.1. Network Topology Discovery Time

Definition:
The time taken by controller(s) to determine the complete network
topology, defined as the interval starting with the first discovery
message from the controller(s) at its Southbound interface, ending
with all features of the static topology determined.
Discussion:
Network topology discovery is key for the SDN controller to
provision and manage the network, so it is important to measure how
quickly the controller discovers the topology to learn the current
network state. This benchmark is obtained by presenting a network
topology (Tree, Mesh or Linear) with the given number of nodes to
the controller and waiting for the discovery process to complete. It
is expected that the controller supports a network discovery
mechanism and uses protocol messages for its discovery process.
Measurement Units:
milliseconds

See Also:
None
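
As an informative illustration (not part of the formal definition),
the discovery time could be derived from captured southbound message
timestamps roughly as sketched below. The record fields and the way
the topology-complete instant is supplied are assumptions of this
sketch.

   # Illustrative sketch: compute Network Topology Discovery Time from
   # a capture of controller southbound messages. Each record is
   # assumed to carry a timestamp in milliseconds and a flag marking
   # discovery messages sent by the controller.
   def topology_discovery_time_ms(messages, topology_complete_at_ms):
       # The first discovery message sent by the controller on its
       # Southbound interface starts the measurement interval.
       first_discovery = min(m["ts_ms"] for m in messages
                             if m["is_discovery_from_controller"])
       # The interval ends when all features of the static topology
       # have been determined (instant supplied by the test harness).
       return topology_complete_at_ms - first_discovery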
2.3.1.2. Asynchronous Message Processing Time

skipping to change at page 11, line 26

2.3.1.3. Asynchronous Message Processing Rate

Definition:
The number of responses to asynchronous messages (such as new flow
arrival notification messages, etc.) for which the controller(s)
performed processing and replied with a valid and productive
(non-trivial) response message.
Discussion:
As SDN assures a flexible network and agile provisioning, it is
important to measure how many network events the controller can
handle at a time. This benchmark is obtained by sending asynchronous
messages from every connected Network Device at the rate that the
controller processes (without dropping them). This test assumes that
the controller responds to all the received asynchronous messages
(the messages can be designed to elicit individual responses).
When sending asynchronous messages to the controller(s) at high
rates, some messages or responses may be discarded or corrupted and
require retransmission to the controller(s). Therefore, a useful
qualification on the Asynchronous Message Processing Rate is whether
the incoming message count equals the response count in each trial.
This is called the Loss-free Asynchronous Message Processing Rate.
Note that several of the early controller benchmarking tools did not
consider lost messages, and instead reported the maximum response
rate. This is called the Maximum Asynchronous Message Processing
Rate.
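
For illustration only, the two rates could be derived from per-trial
records as sketched below; the record fields are assumptions of this
sketch, not requirements of this document.

   # Illustrative sketch: derive the Loss-free and Maximum Asynchronous
   # Message Processing Rates from trial records. Each record is
   # assumed to hold the count of messages sent, the count of valid
   # responses received, and the measured response rate for the trial.
   def summarize_rates(trials):
       # Maximum rate: highest response rate observed in any trial,
       # regardless of message loss.
       maximum_rate = max(t["response_rate"] for t in trials)
       # Loss-free rate: highest response rate among trials in which
       # the incoming message count equals the response count.
       loss_free = [t["response_rate"] for t in trials
                    if t["sent"] == t["responses"]]
       loss_free_rate = max(loss_free) if loss_free else None
       return maximum_rate, loss_free_rate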
To characterize both the Loss-free and Maximum Rates, a test could
begin the first trial by sending asynchronous messages to the
controller(s) at the maximum possible rate and record the message
reply rate and the message loss rate. The message sending rate is
then decreased by the step-size. The message reply rate and the
message loss rate are recorded. The test ends with a trial where the
controller(s) processes all the asynchronous messages sent without
loss. This is the Loss-free Asynchronous Message Processing Rate.
The trial where the controller(s) produced the maximum response rate
is the Maximum Asynchronous Message Processing Rate. Of course, the
first trial could begin at a low sending rate with zero lost
responses, and increase until the Loss-free and Maximum Rates are
skipping to change at page 12, line 48

Measurement Units:
milliseconds.

See Also:
None

2.3.1.5. Proactive Path Provisioning Time

Definition:
The time taken by the controller to proactively set up a path
between source and destination nodes, defined as the interval
starting with the first proactive flow provisioned in the
controller(s) at its Northbound interface, ending with the last flow
provisioning command message sent from the controller(s) at its
Southbound interface.
Discussion:
For SDN to support pre-provisioning of a traffic path from the
application, it is important to measure how fast the controller
provisions an end-to-end flow in the dataplane. The benchmark is
obtained by provisioning a flow on the controller's northbound
interface for the traffic to reach from a source to a destination
endpoint, and finding the time difference between the first and the
last flow provisioning message exchanged between the controller and
the Network Devices for the traffic path.
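
A minimal, informative sketch of that time-difference computation
follows; the timestamp collections shown are assumptions of the
sketch, not requirements of this document.

   # Illustrative sketch: Proactive Path Provisioning Time as the
   # interval between the first flow provisioned on the northbound
   # interface and the last flow provisioning command sent on the
   # southbound interface for the same path.
   def proactive_provisioning_time_ms(nb_event_ts_ms, sb_event_ts_ms):
       # nb_event_ts_ms: timestamps (ms) of flow provisioning requests
       # received on the controller's northbound interface.
       # sb_event_ts_ms: timestamps (ms) of flow provisioning commands
       # sent on the controller's southbound interface.
       return max(sb_event_ts_ms) - min(nb_event_ts_ms)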
skipping to change at page 13, line 32

2.3.1.6. Reactive Path Provisioning Rate

Definition:
The maximum number of independent paths a controller can
concurrently establish between source and destination nodes
reactively, defined as the number of paths provisioned by the
controller(s) at its Southbound interface for the flow provisioning
requests received for path provisioning at its Southbound interface
between the start of the trial and the expiry of the given trial
duration.
Discussion:
For SDN to support agile traffic forwarding, it is important to
measure how many end-to-end flows the controller can set up in the
dataplane. This benchmark is obtained by sending traffic flows, each
with a unique source and destination pair, from the source Network
Device and determining the number of frames received at the
destination Network Device.
Measurement Units:

skipping to change at page 14, line 13

None

2.3.1.7. Proactive Path Provisioning Rate

Definition:
The maximum number of independent paths a controller can
concurrently establish between source and destination nodes
proactively, defined as the number of paths provisioned by the
controller(s) at its Southbound interface for the paths provisioned
in its Northbound interface between the start of the trial and the
expiry of the given trial duration.
Discussion:
For SDN to support pre-provisioning of traffic paths for a larger
network from the application, it is important to measure how many
end-to-end flows the controller can set up in the dataplane. This
benchmark is obtained by sending traffic flows, each with a unique
source and destination pair, from the source Network Device. Program
the flows on the controller's northbound interface for traffic to
reach from each of the unique source and destination pairs, and
determine the number of frames received at the destination Network
Device.
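
For illustration only, the number of successfully provisioned paths
within a trial could be counted from the frames received at the
destination as sketched below; the frame fields and trial
bookkeeping are assumptions of the sketch.

   # Illustrative sketch: count paths that carried traffic end to end
   # within the trial. A path is counted when at least one frame for
   # its unique (source, destination) pair arrives at the destination
   # Network Device before the trial duration expires.
   def provisioned_path_count(received_frames, trial_end_ms):
       delivered_pairs = {(f["src"], f["dst"]) for f in received_frames
                          if f["ts_ms"] <= trial_end_ms}
       return len(delivered_pairs)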
skipping to change at page 22, line 25
"Terminology for Benchmarking Network-layer Traffic "Terminology for Benchmarking Network-layer Traffic
Control Mechanisms", RFC 4689, October 2006. Control Mechanisms", RFC 4689, October 2006.
[RFC2330] V. Paxson, G. Almes, J. Mahdavi, M. Mathis, [RFC2330] V. Paxson, G. Almes, J. Mahdavi, M. Mathis,
"Framework for IP Performance Metrics", RFC 2330, "Framework for IP Performance Metrics", RFC 2330,
May 1998. May 1998.
[I-D.sdn-controller-benchmark-meth] Bhuvaneswaran.V, Anton Basil, [I-D.sdn-controller-benchmark-meth] Bhuvaneswaran.V, Anton Basil,
Mark.T, Vishwas Manral, Sarah Banks "Benchmarking Mark.T, Vishwas Manral, Sarah Banks "Benchmarking
Methodology for SDN Controller Performance", Methodology for SDN Controller Performance",
draft-ietf-bmwg-sdn-controller-benchmark-meth-06 draft-ietf-bmwg-sdn-controller-benchmark-meth-07
(Work in progress), November 16, 2017 (Work in progress), January 10, 2018
5.2. Informative References

[OpenFlow Switch Specification] ONF, "OpenFlow Switch Specification",
          Version 1.4.0 (Wire Protocol 0x05), October 14, 2013.

6. IANA Considerations

This document does not have any IANA requests.
skipping to change at page 23, line 47

Email: mark.tassinari@hpe.com

Vishwas Manral
Nano Sec, CA
Email: vishwas.manral@gmail.com

Sarah Banks
VSS Monitoring
Email: sbanks@encrypted.net