draft-thornburgh-adobe-rtmfp-04.txt   draft-thornburgh-adobe-rtmfp-05.txt 
Network Working Group                                     M. Thornburgh
Internet-Draft                                                    Adobe
Intended status: Informational                            April 5, 2013
Expires: October 7, 2013

             Adobe's Secure Real-Time Media Flow Protocol
                    draft-thornburgh-adobe-rtmfp-05

Abstract

This memo describes the Secure Real-Time Media Flow Protocol (RTMFP),
an endpoint-to-endpoint communication protocol designed to securely
transport parallel flows of real-time video, audio, and data
messages, as well as bulk data, over IP networks.  RTMFP has features
making it effective for peer-to-peer (P2P) as well as client-server
communications, even when Network Address Translators (NATs) are
used.
skipping to change at page 1, line 39
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on October 7, 2013.
Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

   1.  Introduction
     1.1.  Design Highlights of RTMFP
     1.2.  Terminology
   2.  Syntax
     2.1.  Common Elements
       2.1.1.  Elementary Types and Constructs
       2.1.2.  Variable Length Unsigned Integer (VLU)
       2.1.3.  Option
       2.1.4.  Option List
       2.1.5.  Internet Socket Address (Address)
     2.2.  Network Layer
       2.2.1.  Encapsulation
       2.2.2.  Multiplex
       2.2.3.  Encryption
       2.2.4.  Packet
     2.3.  Chunks
       2.3.1.  Packet Fragment Chunk
       2.3.2.  Initiator Hello Chunk (IHello)
       2.3.3.  Forwarded Initiator Hello Chunk (FIHello)
       2.3.4.  Responder Hello Chunk (RHello)
       2.3.5.  Responder Redirect Chunk (Redirect)
       2.3.6.  RHello Cookie Change Chunk
       2.3.7.  Initiator Initial Keying Chunk (IIKeying)
       2.3.8.  Responder Initial Keying Chunk (RIKeying)
       2.3.9.  Ping Chunk
       2.3.10. Ping Reply Chunk
       2.3.11. User Data Chunk
         2.3.11.1.  Options for User Data
           2.3.11.1.1.  User's Per-Flow Metadata
           2.3.11.1.2.  Return Flow Association
       2.3.12. Next User Data Chunk
       2.3.13. Data Acknowledgement Bitmap Chunk (Bitmap Ack)
       2.3.14. Data Acknowledgement Ranges Chunk (Range Ack)
       2.3.15. Buffer Probe Chunk
       2.3.16. Flow Exception Report Chunk
       2.3.17. Session Close Request Chunk (Close)
       2.3.18. Session Close Acknowledgement Chunk (Close Ack)
   3.  Operation
     3.1.  Overview
     3.2.  Endpoint Identity
     3.3.  Packet Multiplex
     3.4.  Packet Fragmentation
     3.5.  Sessions
       3.5.1.  Startup
         3.5.1.1.  Normal Handshake
           3.5.1.1.1.  Initiator
           3.5.1.1.2.  Responder
         3.5.1.2.  Cookie Change
         3.5.1.3.  Glare
         3.5.1.4.  Redirector
         3.5.1.5.  Forwarder
         3.5.1.6.  Redirector and Forwarder with NAT
         3.5.1.7.  Load Distribution and Fault Tolerance
       3.5.2.  Congestion Control
         3.5.2.1.  Time Critical Reverse Notification
         3.5.2.2.  Retransmission Timeout
         3.5.2.3.  Burst Avoidance
       3.5.3.  Address Mobility
       3.5.4.  Ping
         3.5.4.1.  Keepalive
         3.5.4.2.  Address Mobility
         3.5.4.3.  Path MTU Discovery
       3.5.5.  Close
     3.6.  Flows
       3.6.1.  Overview
         3.6.1.1.  Identity
         3.6.1.2.  Messages and Sequencing
         3.6.1.3.  Lifetime
       3.6.2.  Sender
         3.6.2.1.  Startup
         3.6.2.2.  Queuing Data
         3.6.2.3.  Sending Data
           3.6.2.3.1.  Startup Options
           3.6.2.3.2.  Send Next Data
         3.6.2.4.  Processing Acknowledgements
         3.6.2.5.  Negative Acknowledgement and Loss
         3.6.2.6.  Timeout
         3.6.2.7.  Abandoning Data
           3.6.2.7.1.  Forward Sequence Number Update
         3.6.2.8.  Examples
         3.6.2.9.  Flow Control
           3.6.2.9.1.  Buffer Probe
         3.6.2.10. Exception
         3.6.2.11. Close
       3.6.3.  Receiver
         3.6.3.1.  Startup
         3.6.3.2.  Receiving Data
         3.6.3.3.  Buffering and Delivering Data
         3.6.3.4.  Acknowledging Data
           3.6.3.4.1.  Timing
           3.6.3.4.2.  Size and Truncation
           3.6.3.4.3.  Constructing
           3.6.3.4.4.  Delayed Acknowledgement
           3.6.3.4.5.  Obligatory Acknowledgement
           3.6.3.4.6.  Opportunistic Acknowledgement
           3.6.3.4.7.  Example
         3.6.3.5.  Flow Control
         3.6.3.6.  Receiving a Buffer Probe
         3.6.3.7.  Rejecting a Flow
         3.6.3.8.  Close
   4.  IANA Considerations
   5.  Security Considerations
   6.  Acknowledgements
   7.  References
     7.1.  Normative References
     7.2.  Informative References
   Appendix A.  Example Congestion Control Algorithm
     A.1.  Discussion
     A.2.  Algorithm
   Author's Address

1.  Introduction

Adobe's Secure Real-Time Media Flow Protocol (RTMFP) is intended for
use as an endpoint-to-endpoint data transport service in IP networks.
It has features that make it well suited to the transport of real-
time media (such as low-delay video, audio, and data) as well as bulk
data, and for client-server as well as peer-to-peer (P2P)
communication.  These features include independent parallel message
flows which may have different delivery priorities, variable message
skipping to change at page 48, line 12
o The role of this end of the session, which is either Initiator or
Responder.

Note: this diagram is only a summary of state transitions and their
causing events, and is not a complete operational specification.
[Figure: session state diagram (summary), showing transitions among
the S_IHELLO_SENT, S_KEYING_SENT, S_OPEN_FAILED, and S_OPEN states on
events including rcv RHello, rcv IIKeying (glare, far prevails), the
duplicate-session check, rcv RIKeying, ultimate open timeout, CLOSE,
rcv Close Request, and rcv Close Ack.]
skipping to change at page 61, line 5
At the point in the diagram marked (*), Responder's RHello from the
FIHello might arrive at Initiator's NAT before or after Initiator's
IHello is sent outbound to Responder's public NAT address.  If it
arrives before, it may be dropped by the NAT.  If it arrives after,
it will transit the NAT and trigger keying without waiting for
another round trip time.  The timing of this race depends, among
other factors, on the relative distances of Initiator and Responder
to each other and the introduction service.
3.5.1.7. Load Distribution and Fault Tolerance
+---+       IHello/RHello       +-------------+
| I |<------------------------->| Responder 1 |
| n |                           +-------------+
| i |          SESSION          +-------------+
| t |<=========================>| Responder 2 |
| i |                           +-------------+
| a |         IHello...             +----------------+
| t |----------------------------> X | Dead Responder |
| o |                               +----------------+
| r |       IHello/RHello       +-------------+
|   |<------------------------->| Responder N |
+---+                           +-------------+
Figure 17: Parallel Open to Multiple Endpoints
Section 3.2 allows more than one endpoint to be selected by one
Endpoint Discriminator. This will typically be the case for a set of
servers, any of which could accommodate a connecting client.
Section 3.5.1.1.1 allows an Initiator to use multiple candidate
endpoint addresses when starting a session, and specifies that the
sender of the first acceptable RHello chunk to be received is
selected to complete the session, with later responses ignored.  An
Initiator can start with multiple candidate endpoint addresses,
or it may learn them during startup from one or more Redirectors
(Section 3.5.1.4).
Parallel open to multiple endpoints for the same Endpoint
Discriminator combined with selection by earliest RHello can be used
for load distribution and fault tolerance. The cost at each endpoint
that is not selected is limited to receiving and processing an
IHello, and generating and sending an RHello.
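
As a non-normative illustration (not part of the protocol
specification), the following Python sketch shows the shape of such a
parallel open: an IHello datagram is sent to every candidate address,
and the sender of the first acceptable reply is selected.  The
helpers encode_ihello() and looks_like_rhello() are hypothetical
stand-ins for a real RTMFP packet encoder and parser, not APIs
defined by this memo.

   # Illustrative only: parallel open to multiple candidate addresses,
   # selecting the sender of the first acceptable reply.
   import select
   import socket

   def encode_ihello(epd: bytes, tag: bytes) -> bytes:
       # Placeholder; a real implementation builds an IHello chunk
       # (Section 2.3.2) inside a startup packet.
       return b"IHELLO" + epd + tag

   def looks_like_rhello(packet: bytes, tag: bytes) -> bool:
       # Placeholder; a real implementation parses an RHello chunk
       # (Section 2.3.4) and checks that it echoes our tag.
       return packet.startswith(b"RHELLO") and tag in packet

   def parallel_open(candidates, epd: bytes, tag: bytes, timeout=5.0):
       """candidates: iterable of (host, port); returns the selected
       address, or None if nobody answered in time."""
       sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
       sock.setblocking(False)
       ihello = encode_ihello(epd, tag)
       for addr in candidates:
           sock.sendto(ihello, addr)          # open in parallel to all
       readable, _, _ = select.select([sock], [], [], timeout)
       while readable:
           packet, addr = sock.recvfrom(4096)
           if looks_like_rhello(packet, tag): # earliest acceptable RHello wins
               return addr                    # later responses are ignored
           readable, _, _ = select.select([sock], [], [], timeout)
       return None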
In one circumstance, multiple servers of similar processing and
networking capacity may be located in near proximity to each other,
such as in a data center. In this circumstance, a less heavily
loaded server can respond to an IHello more quickly than more heavily
loaded servers, and will tend to be selected by a client.
In another circumstance, multiple servers may be located in different
physical locations, such as different data centers. In this
circumstance, a server that is located nearer (in terms of network
distance) to the client can respond more quickly than more distant
servers, and will tend to be selected by the client.
Multiple servers, whether near to or distant from one another, can form
a redundant pool of servers.  A client can perform a parallel open to
the multiple servers.  In normal operation, the multiple servers will
all respond, and the client will select one of them as described
above.  If one of the multiple servers fails, other servers in the
pool can still respond to the client, allowing the client to
successfully establish an S_OPEN session with one of them.
3.5.2. Congestion Control
An RTMFP MUST implement congestion control and avoidance algorithms
that are "TCP compatible", in accordance with Internet best current
practice [RFC2914].  The algorithms SHOULD NOT be more aggressive
than those described in TCP Congestion Control [RFC5681], and MUST
NOT be more aggressive than the "slow start algorithm" described in
RFC 5681 Section 3.1.

An endpoint maintains a transmission budget in the session
information context of each S_OPEN session (Section 3.5), controlling
the rate at which the endpoint sends data into the network.

For window-based congestion control and avoidance algorithms, the
transmission budget is the congestion window, which is the amount of
user data that is allowed to be outstanding, or in flight, in the
network.  Transmission is allowed when S_OUTSTANDING_BYTES
(Section 3.5) is less than the congestion window (Section 3.6.2.3).
See Appendix A for an experimental window-based congestion control
algorithm for real-time and bulk data.
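
As a non-normative sketch of the window gate described above (the
variable names follow this section; the class and method names are
illustrative only, not part of this specification):

   # Illustrative only: a window-based transmission budget for a session.
   # New user data may be sent only while S_OUTSTANDING_BYTES is less
   # than the congestion window (Section 3.6.2.3).
   from dataclasses import dataclass

   @dataclass
   class SessionBudget:
       cwnd: int                      # congestion window, in bytes
       s_outstanding_bytes: int = 0   # user data currently in flight

       def may_send(self) -> bool:
           return self.s_outstanding_bytes < self.cwnd

       def on_sent(self, nbytes: int) -> None:
           self.s_outstanding_bytes += nbytes

       def on_acked_or_abandoned(self, nbytes: int) -> None:
           self.s_outstanding_bytes = max(0, self.s_outstanding_bytes - nbytes)

   # Example: with a 4380-byte window, sending pauses once that much data
   # is unacknowledged and resumes as acknowledgements drain the count.
   budget = SessionBudget(cwnd=4380)
   while budget.may_send():
       budget.on_sent(1460)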
An endpoint avoids sending large bursts of data or packets into the
network (Section 3.5.2.3).

A sending endpoint increases and decreases its transmission budget in
response to acknowledgements (Section 3.6.2.4) and loss according to
the congestion control and avoidance algorithms.  Loss is detected by
negative acknowledgement (Section 3.6.2.5) and timeout
(Section 3.6.2.6).
skipping to change at page 74, line 30
                 v
        +-----------------+
        |F_COMPLETE_LINGER|
        +-----------------+
                 | 130 seconds
                 v
            +--------+
            |F_CLOSED|
            +--------+

          Figure 18: Sending flow state diagram
3.6.2.1. Startup
The application opens a new sending flow to the other end in an
S_OPEN session.  The implementation chooses a new flow ID that is not
assigned to any other sending flow in that session in the F_OPEN,
F_CLOSING, or F_COMPLETE_LINGER states.  The flow starts in the
F_OPEN state.  The STARTUP_OPTIONS for the new flow is set with the
User's Per-Flow Metadata (Section 2.3.11.1.1).  If this flow is in
return (or response) to an RF_OPEN receiving flow from the other end,
skipping to change at page 80, line 5
initially unset.

On sending a packet containing at least one User Data chunk, set or
reset TIMEOUT_ALARM to fire in ERTO.

On receiving a packet containing at least one acknowledgement, reset
TIMEOUT_ALARM (if already set) to fire in ERTO.

When TIMEOUT_ALARM fires:
1. Set WAS_LOSS = false;

2. For each sending flow in the session, and for each entry in that
flow's SEND_QUEUE:

   1. If entry.IN_FLIGHT is true: set WAS_LOSS = true; and

   2. Set entry.IN_FLIGHT to false.

3. If WAS_LOSS is true: perform ERTO backoff (Section 3.5.2.2); and

4. Notify the congestion control and avoidance algorithms of the
timeout and, if WAS_LOSS is true, that there was loss.
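
A minimal, non-normative sketch of this procedure, assuming the
surrounding implementation exposes the session's sending flows, their
SEND_QUEUE entries with an IN_FLIGHT flag, the ERTO backoff of
Section 3.5.2.2, and a congestion-controller hook (all names here are
illustrative):

   # Illustrative only: handling a fired TIMEOUT_ALARM for a session.
   def on_timeout_alarm(session) -> None:
       was_loss = False
       for flow in session.sending_flows:
           for entry in flow.send_queue:
               if entry.in_flight:
                   was_loss = True          # outstanding data presumed lost
               entry.in_flight = False      # make it eligible to resend
       if was_loss:
           session.erto_backoff()           # Section 3.5.2.2
       session.congestion_controller.on_timeout(was_loss)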
3.6.2.7. Abandoning Data
The application can abandon queued messages at any time and for any
reason.  Example reasons include (but are not limited to): one or
more fragments of a message have remained in the SEND_QUEUE for
longer than a specified message lifetime; a fragment has been
retransmitted more than a specified retransmission limit; a prior
message on which this message depends (such as a key frame in a
prediction chain) was abandoned and not delivered.
skipping to change at page 81, line 23
 1 |<--- Ack ID=2, seq:0-16
 2 |---> Data ID=2, seq#=25, fsnOff=9 (fsn=16)
 3 |---> Data ID=2, seq#=26, fsnOff=10 (fsn=16)
 4 |<--- Ack ID=2, seq:0-18
 5 |---> Data ID=2, seq#=27, fsnOff=9 (fsn=18)
 6 |---> Data ID=2, seq#=28, fsnOff=10 (fsn=18)
   |  :

There are 9 sequence numbers in flight with delayed acknowledgements.
Figure 19: Normal flow with no loss
Sender
   |  :
 1 |<--- Ack ID=3, seq:0-30
 2 |---> Data ID=3, seq#=45, fsnOff=15 (fsn=30)
 3 |<--- Ack ID=3, seq:0-30, 32 (nack 31:1)
 4 |---> Data ID=3, seq#=46, fsnOff=16 (fsn=30)
 5 |<--- Ack ID=3, seq:0-30, 32, 34 (nack 31:2, 33:1)
 6 |<--- Ack ID=3, seq:0-30, 32, 34-35 (nack 31:3=lost, 33:2)
 7 |---> Data ID=3, seq#=47, fsnOff=15 (fsn=32, abandon 31)
skipping to change at page 82, line 44
24 |<--- Ack ID=3, seq:0-59, 61-63 (nack 60:3=lost)
25 |---> Data ID=3, ABN=1, seq#=60, fsnOff=0 (fsn=60, abandon 60)
26 |<--- Ack ID=3, seq:0-59, 61-64
   |  :
27 |<--- Ack ID=3, seq:0-64

Flow with sequence numbers 31, 33, and 60 lost in transit, and a
pause at 64.  33 is retransmitted, 31 and 60 are abandoned.  Note line
25 is a Forward Sequence Number Update (Section 3.6.2.7.1).
Figure 20: Flow with loss
3.6.2.9. Flow Control
The flow receiver advertises the amount of new data it's willing to
accept from the flow sender with the bufferBytesAvailable derived
field of an acknowledgement (Section 2.3.13, Section 2.3.14).

The flow sender MUST NOT send new data into the network if
flow.F_OUTSTANDING_BYTES is greater than or equal to the most
recently received buffer advertisement, unless flow.EXCEPTION is true
skipping to change at page 86, line 39
     v                v
   +------------------+
   |RF_COMPLETE_LINGER|
   +------------------+
            | 120 seconds
            v
        +---------+
        |RF_CLOSED|
        +---------+
Figure 21: Receiving flow state diagram
3.6.3.1. Startup
A new receiving flow starts on receipt of a User Data chunk
(Section 2.3.11) encoding a flow ID not belonging to any other
receiving flow in the same session in the RF_OPEN, RF_REJECTED, or
RF_COMPLETE_LINGER states.

On receipt of such a User Data chunk:
skipping to change at page 95, line 27
12 |<--- Data ID=3, seq#=33, fsnOff=1 (fsn=32)
13 |---> Ack ID=3, seq#=0-47
14 |<--- Data ID=3, seq#=48, fsnOff=16 (fsn=32)
15 |<--- Data ID=3, seq#=49, fsnOff=17 (fsn=32)
16 |---> Ack ID=3, seq#=0-49
   |  :

Flow with sequence numbers 31 and 33 lost in transit, 31 abandoned
and 33 retransmitted.
Figure 22
3.6.3.5. Flow Control
The flow receiver maintains a buffer for reassembling and reordering
messages for delivery to the user (Section 3.6.3.3).  The
implementation and the user may wish to limit the amount of resources
(including buffer memory) that a flow is allowed to use.

RTMFP provides a means for each receiving flow to govern the amount
of data sent by the sender, by way of the bufferBytesAvailable
skipping to change at page 100, line 26
[RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
Control", RFC 5681, September 2009.

7.2. Informative References

[RFC5389] Rosenberg, J., Mahy, R., Matthews, P., and D. Wing,
"Session Traversal Utilities for NAT (STUN)", RFC 5389,
October 2008.
[ScalableTCP]
Kelly, T., "Scalable TCP: Improving Performance in
Highspeed Wide Area Networks", December 2002, <http://
datatag.web.cern.ch/datatag/papers/pfldnet2003-ctk.pdf>.
Appendix A. Example Congestion Control Algorithm
Section 3.5.2 mandates that an RTMFP use TCP-compatible congestion
control, but allows flexibility in exact implementation within
certain limits. This section describes an experimental window-based
congestion control algorithm that is appropriate for real-time and
bulk data transport in RTMFP. The algorithm includes slow-start and
congestion avoidance phases including modified increase and decrease
parameters. These parameters are further adjusted according to
whether real-time data is being sent and whether time-critical
reverse notifications are received.
A.1. Discussion
RFC 5681 defines the standard window-based congestion control
algorithms for TCP. These algorithms are appropriate for delay-
insensitive bulk data transport, but have undesirable behaviors for
delay- and loss-sensitive applications. Among the undesirable
behaviors are the cutting of the congestion window in half during a
loss event, and the rapidity of the slow start algorithm's
exponential growth. Cutting the congestion window in half requires a
large channel headroom to support a real-time application, and can
cause a large amount of jitter from sender-side buffering. Doubling
the congestion window during the slow-start phase can lead to the
congestion window temporarily growing to twice the size it should be,
causing a period of excessive loss in the path.
We found that a number of deployed TCP implementations use the method
of equation 3 from RFC 5681 Section 3.1, which, when combined with
the recommended behavior of acknowledging every other packet, causes
the congestion window to grow at approximately half the rate that the
recommended method specifies.  In order to compete fairly with these
deployed TCPs, we choose 768 bytes per round trip as the increment
during the normal congestion avoidance phase, which is approximately
half of the typical maximum segment size of 1500 bytes while also
being easily subdivided.
The sender may be sending real-time data to the far end. When
sending real-time data, a smoother response to congestion is desired
while still competing with reasonable fairness to other flows in the
Internet. In order to scale the sending rate quickly, the slow start
algorithm is desired, but slow start's normal rate of increase can
cause excessive loss in the last round trip. Accordingly, slow
start's exponential increase rate is adjusted to double approximately
every 3 round trips instead of every round trip.  The
multiplicative decrease cuts the congestion window by one eighth on
loss to maintain a smoother sending rate. The additive increase is
done at half the normal rate (incrementing at 384 bytes per round
trip), both to compensate for the less aggressive loss response and
to probe the path capacity more gently.
The far end may report that it is receiving real-time data from other
peers, or the sender may be sending real-time data to other far ends.
In these circumstances (if not sending real-time data to this far
end) it is desirable to respond differently than the standard TCP
algorithms specify, both to yield capacity to the real-time flows and
to avoid excessive losses while probing the path capacity. Slow
start's exponential increase is disabled and the additive increase is
done at half the normal rate (incrementing at 384 bytes per round
trip). Multiplicative decrease is left at cutting by half to yield
to other flows.
Since real-time messages may be small, and sent regularly, it is
advantageous to spread congestion window increases out across the
round-trip time instead of doing them all at once. We divide the
round-trip into 16 segments with an additive increase of a useful
size (48 bytes) per segment.
Scalable TCP [ScalableTCP] describes experimental methods of
modifying the additive increase and multiplicative decrease of the
congestion window in large delay-bandwidth scenarios. The congestion
window is increased by 1% each round trip and decreased by one eighth
on loss in the congestion avoidance phase in certain circumstances
(specifically, when a 1% increase is larger than the normal additive-
increase amount). Those methods are adapted here. The scalable
increase amount is 48 bytes for every 4800 bytes acknowledged, to
spread the increase out over the round-trip. The congestion window
is decreased by one eighth on loss when it is at least 67200 bytes
per round trip, which is seven eighths of 76800 (the point at which
1% is greater than 768 bytes per round trip). When sending real-time
data to the far end, the scalable increase is 1% or 384 bytes per
round trip, whichever is greater. Otherwise, when notified that the
far end is receiving real-time data from other peers, the scalable
increase is adjusted to 0.5% or 384 bytes per round trip, whichever
is greater.
A.2. Algorithm
Let SMSS denote the Sender Maximum Segment Size [RFC5681], for
example 1460 bytes. Let CWND_INIT denote the Initial Congestion
Window (IW) according to RFC 5681 Section 3.1, for example 4380
bytes. Let CWND_TIMEDOUT denote the congestion window after a
timeout indicating lost data, being 1*SMSS (for example 1460 bytes).
Let the session information context contain additional variables:
o CWND: the Congestion Window, initialized to CWND_INIT;
o SSTHRESH: the Slow Start Threshold, initialized to positive
infinity;
o ACKED_BYTES_ACCUMULATOR: a count of acknowledged bytes,
initialized to 0;
o ACKED_BYTES_THIS_PACKET: a count of acknowledged bytes observed in
the current packet;
o PRE_ACK_OUTSTANDING: the number of bytes outstanding in the
network before processing any acknowledgements in the current
packet;
o ANY_LOSS: an indication of whether any loss has been detected in
the current packet;
o ANY_NAKS: an indication of whether any negative acknowledgements
have been detected in the current packet;
o ANY_ACKS: an indication of whether any acknowledgement chunks have
been received in the current packet.
Let FASTGROW_ALLOWED indicate whether the congestion window is
allowed to grow at the normal rate versus a slower rate, being FALSE
if a Time Critical Reverse Notification has been received on this
session within the last 800 milliseconds (Section 2.2.4,
Section 3.5.2.1) or if a Time Critical Forward Notification has been
sent on ANY session in the last 800 milliseconds, and otherwise being
TRUE.
Let TC_SENT indicate whether a Time Critical Forward Notification has
been sent on this session within the last 800 milliseconds.
Implement the method of Section 3.6.2.6 to manage transmission
timeouts, including setting the TIMEOUT_ALARM.
On being notified that the TIMEOUT_ALARM has fired, perform the
function in Figure 23:
   on TimeoutNotification(WAS_LOSS):
       SSTHRESH <- MAX(SSTHRESH, CWND * 3/4);
       ACKED_BYTES_ACCUMULATOR <- 0;
       if WAS_LOSS is TRUE:
           CWND <- CWND_TIMEDOUT;
       else:
           CWND <- CWND_INIT;

   Figure 23: Pseudocode for handling a timeout notification
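
A non-normative Python rendering of Figure 23 is sketched below; "cc"
stands for any object carrying the variables defined above, and the
constants use the example values given earlier:

   # Illustrative only: Figure 23 as Python.  'cc' carries CWND,
   # SSTHRESH, and ACKED_BYTES_ACCUMULATOR as plain attributes.
   CWND_INIT = 4380        # example Initial Window (RFC 5681 Section 3.1)
   CWND_TIMEDOUT = 1460    # 1 * SMSS, per the definition above

   def on_timeout_notification(cc, was_loss: bool) -> None:
       cc.ssthresh = max(cc.ssthresh, cc.cwnd * 3 // 4)
       cc.acked_bytes_accumulator = 0
       cc.cwnd = CWND_TIMEDOUT if was_loss else CWND_INIT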
Before processing each received packet in this session:

1. Set ANY_LOSS to FALSE;
2. Set ANY_NAKS to FALSE;
3. Set ANY_ACKS to FALSE;
4. Set ACKED_BYTES_THIS_PACKET to 0; and
5. Set PRE_ACK_OUTSTANDING to S_OUTSTANDING_BYTES.
On notification of loss (Section 3.6.2.5): set ANY_LOSS to TRUE.
On notification of negative acknowledgement (Section 3.6.2.5): set
ANY_NAKS to TRUE.
On notification of acknowledgement of data (Section 3.6.2.4): set
ANY_ACKS to TRUE, and add the count of acknowledged bytes to
ACKED_BYTES_THIS_PACKET.
After processing all chunks in each received packet for this session,
perform the function in Figure 24:
   if ANY_LOSS is TRUE:
       if (TC_SENT is TRUE) OR (PRE_ACK_OUTSTANDING > 67200 AND \
               FASTGROW_ALLOWED is TRUE):
           SSTHRESH <- MAX(PRE_ACK_OUTSTANDING * 7/8, CWND_INIT);
       else:
           SSTHRESH <- MAX(PRE_ACK_OUTSTANDING * 1/2, CWND_INIT);
       CWND <- SSTHRESH;
       ACKED_BYTES_ACCUMULATOR <- 0;

   else if (ANY_ACKS is TRUE) AND (ANY_NAKS is FALSE) AND \
           (PRE_ACK_OUTSTANDING >= CWND):
       var INCREASE <- 0;
       var AITHRESH;

       if FASTGROW_ALLOWED is TRUE:
           if CWND < SSTHRESH:
               INCREASE <- ACKED_BYTES_THIS_PACKET;
           else:
               ACKED_BYTES_ACCUMULATOR += ACKED_BYTES_THIS_PACKET;
               AITHRESH <- MIN(MAX(FLOOR(CWND / 16), 64), 4800);
               while ACKED_BYTES_ACCUMULATOR >= AITHRESH:
                   ACKED_BYTES_ACCUMULATOR -= AITHRESH;
                   INCREASE += 48;

       else (FASTGROW_ALLOWED is FALSE):
           if CWND < SSTHRESH AND TC_SENT is TRUE:
               INCREASE <- CEIL(ACKED_BYTES_THIS_PACKET / 4);
           else:
               var AITHRESH_CAP;
               if TC_SENT is TRUE:
                   AITHRESH_CAP <- 2400;
               else:
                   AITHRESH_CAP <- 4800;
               ACKED_BYTES_ACCUMULATOR += ACKED_BYTES_THIS_PACKET;
               AITHRESH <- MIN(MAX(FLOOR(CWND / 16), 64), AITHRESH_CAP);
               while ACKED_BYTES_ACCUMULATOR >= AITHRESH:
                   ACKED_BYTES_ACCUMULATOR -= AITHRESH;
                   INCREASE += 24;

       CWND <- MAX(CWND + MIN(INCREASE, SMSS), CWND_INIT);

   Figure 24: Pseudocode for congestion window adjustment after
   processing a packet
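
For concreteness, a non-normative Python transcription of this
procedure is sketched below (FLOOR and CEIL become integer floor
division and negated floor division; "cc" and "pkt" are simple
stand-ins for the session variables and per-packet indications
defined above, not an API of any particular implementation):

   # Illustrative only: Figure 24 as Python.  'cc' carries CWND, SSTHRESH,
   # and ACKED_BYTES_ACCUMULATOR; 'pkt' carries the per-packet indications
   # ANY_LOSS, ANY_NAKS, ANY_ACKS, ACKED_BYTES_THIS_PACKET, and
   # PRE_ACK_OUTSTANDING gathered while processing the packet's chunks.
   from types import SimpleNamespace

   SMSS = 1460        # example Sender Maximum Segment Size
   CWND_INIT = 4380   # example Initial Window (RFC 5681 Section 3.1)

   def adjust_cwnd_after_packet(cc, pkt, fastgrow_allowed: bool,
                                tc_sent: bool) -> None:
       if pkt.any_loss:
           if tc_sent or (pkt.pre_ack_outstanding > 67200 and fastgrow_allowed):
               cc.ssthresh = max(pkt.pre_ack_outstanding * 7 // 8, CWND_INIT)
           else:
               cc.ssthresh = max(pkt.pre_ack_outstanding // 2, CWND_INIT)
           cc.cwnd = cc.ssthresh
           cc.acked_bytes_accumulator = 0
       elif pkt.any_acks and not pkt.any_naks and pkt.pre_ack_outstanding >= cc.cwnd:
           increase = 0
           if fastgrow_allowed:
               if cc.cwnd < cc.ssthresh:              # slow start
                   increase = pkt.acked_bytes_this_packet
               else:                                  # congestion avoidance
                   cc.acked_bytes_accumulator += pkt.acked_bytes_this_packet
                   aithresh = min(max(cc.cwnd // 16, 64), 4800)
                   while cc.acked_bytes_accumulator >= aithresh:
                       cc.acked_bytes_accumulator -= aithresh
                       increase += 48
           else:                                      # FASTGROW_ALLOWED is FALSE
               if cc.cwnd < cc.ssthresh and tc_sent:  # gentler slow start
                   increase = -(-pkt.acked_bytes_this_packet // 4)  # CEIL(x/4)
               else:
                   aithresh_cap = 2400 if tc_sent else 4800
                   cc.acked_bytes_accumulator += pkt.acked_bytes_this_packet
                   aithresh = min(max(cc.cwnd // 16, 64), aithresh_cap)
                   while cc.acked_bytes_accumulator >= aithresh:
                       cc.acked_bytes_accumulator -= aithresh
                       increase += 24
           cc.cwnd = max(cc.cwnd + min(increase, SMSS), CWND_INIT)

   # Example: a no-loss acknowledgement during slow start grows CWND by
   # at most one SMSS per received packet.
   cc = SimpleNamespace(cwnd=CWND_INIT, ssthresh=float("inf"),
                        acked_bytes_accumulator=0)
   pkt = SimpleNamespace(any_loss=False, any_naks=False, any_acks=True,
                         acked_bytes_this_packet=2920,
                         pre_ack_outstanding=4380)
   adjust_cwnd_after_packet(cc, pkt, fastgrow_allowed=True, tc_sent=False)
   assert cc.cwnd == CWND_INIT + SMSS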
Author's Address

Michael C. Thornburgh
Adobe Systems Incorporated
345 Park Avenue
San Jose, CA  95110-2704
US

Phone: +1 408 536 6000
Email: mthornbu@adobe.com