LWIG Working Group                                          M. Kovatsch
Internet-Draft                                                ETH Zurich
Intended status: Informational                               O. Bergmann
Expires: January 7, 2016                                 C. Bormann, Ed.
                                                  Universitaet Bremen TZI
                                                            July 06, 2015

                      CoAP Implementation Guidance
                        draft-ietf-lwig-coap-03

Abstract

The Constrained Application Protocol (CoAP) is designed for
resource-constrained nodes and networks such as sensor nodes in a
low-power lossy network (LLN). Yet, to implement this Internet
protocol on Class 1 devices (as per RFC 7228, ~ 10 KiB of RAM and
~ 100 KiB of ROM), lightweight implementation techniques are also
necessary. This document provides lessons learned from implementing
CoAP for tiny, battery-operated networked embedded systems. In
particular, it provides guidance on correct implementation of the
CoAP specification RFC 7252, memory optimizations, and customized
protocol parameters.

Status of This Memo

skipping to change at page 1, line 40

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 7, 2016.

Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents

skipping to change at page 2, line 19

described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Protocol Implementation
      2.1. Client/Server Model
      2.2. Message Processing
         2.2.1. On-the-fly Processing
         2.2.2. Internal Data Structure
      2.3. Message ID Usage
         2.3.1. Duplicate Rejection
         2.3.2. MID Namespaces
         2.3.3. Relaxation on the Server
         2.3.4. Relaxation on the Client
      2.4. Token Usage
         2.4.1. Tokens for Observe
         2.4.2. Tokens for Blockwise Transfers
      2.5. Transmission States
         2.5.1. Request/Response Layer
         2.5.2. Message Layer
      2.6. Out-of-band Information
      2.7. Programming Model
         2.7.1. Client
         2.7.2. Server
   3. Optimizations
      3.1. Message Buffers
      3.2. Retransmissions
      3.3. Observable Resources
      3.4. Blockwise Transfers
      3.5. Deduplication with Sequential MIDs
   4. Alternative Configurations
      4.1. Transmission Parameters
      4.2. CoAP over IPv4
   5. Binding to specific lower-layer APIs
      5.1. Berkeley Socket Interface
         5.1.1. Responding from the right address
      5.2. Java
      5.3. Multicast detection
      5.4. DTLS
   6. CoAP on various transports
      6.1. CoAP over reliable transports
      6.2. Translating between transports
         6.2.1. Transport translation by proxies
         6.2.2. One-to-one Transport translation
   7. IANA considerations
   8. Security considerations
   9. Acknowledgements
   10. References
      10.1. Normative References
      10.2. Informative References
   Authors' Addresses

1. Introduction

The Constrained Application Protocol [RFC7252] has been designed
specifically for machine-to-machine communication in networks with
very constrained nodes. Typical application scenarios therefore
include building automation, process optimization, and the Internet
of Things. The major design objectives have been set on small
protocol overhead, robustness against packet loss, and against high
latency induced by small bandwidth shares or slow request processing
in end nodes. To leverage integration of constrained nodes with the
world-wide Internet, the protocol design was led by the REST
architectural style that accounts for the scalability and robustness
of the Hypertext Transfer Protocol [RFC7230].

Lightweight implementations benefit from this design in many
respects: First, the use of Uniform Resource Identifiers (URIs) for
naming resources and the transparent forwarding of their
representations in a server-stateless request/response protocol make
protocol translation to HTTP a straightforward task. Second, the set
of protocol elements that are unavoidable for the core protocol, and
thus must be implemented on every node, has been kept very small,
minimizing the unnecessary accumulation of "optional" features.
Options that - when present - are critical for message processing are
explicitly marked as such to force immediate rejection of messages
with unknown critical options. Third, the syntax of protocol data
units is easy to parse and is carefully defined to avoid creation of
state in servers where possible.

Although these features enable lightweight implementations of the
Constrained Application Protocol, there is still a tradeoff between
robustness and latency of constrained nodes on one hand and resource

skipping to change at page 4, line 30

must be supported by each buffer. Often the maximum message size is
set far below the 1280-byte MTU of 6LoWPAN to allow more than one
open Confirmable transmission at a time (in particular for parallel
observe notifications [I-D.ietf-core-observe]). Note that
implementations on constrained platforms often do not even support
the full MTU. Larger messages must then use blockwise transfers
[I-D.ietf-core-block], while a good tradeoff between 6LoWPAN
fragmentation and CoAP header overhead must be found. Usually the
amount of available free RAM dominates this decision. For Class 1
devices, the maximum message size is typically 128 or 256 bytes
(blockwise) payload plus an estimate of the maximum header size for
the worst case option setting.

2.1. Client/Server Model

In general, CoAP servers can be implemented more efficiently than
clients. REST allows them to keep the communication stateless and
piggy-backed responses are not stored for retransmission, saving
buffer space. The use of idempotent requests also makes it possible
to relax deduplication, which further decreases memory usage. It is
also easy to estimate the required maximum size of message buffers,
since URI paths, supported options, and maximum payload sizes of the
application are known at compile time. Hence, when the application
is distributed over constrained and unconstrained nodes, the
constrained ones should preferably have the server role.

HTTP-based applications have established an inverse model because of
the need for simple push notifications: A constrained client uses
POST requests to update resources on an unconstrained server whenever
an event (e.g., a new sensor reading) is triggered. This requirement
is solved by the Observe option [I-D.ietf-core-observe] of CoAP. It
allows servers to initiate communication and send push notifications
to interested client nodes. This allows a more efficient and also
more natural model for CoAP-based applications, where the information
source is an origin server, which can also benefit from caching.

2.2. Message Processing

Apart from the required buffers, message processing is symmetric for
clients and servers. First the 4-byte base header has to be parsed
and checked to see whether it is a CoAP message. Since the encoding
is very dense, only a wrong version or a datagram size smaller than
four bytes identifies non-CoAP datagrams. These need to be silently
ignored. Other message format errors, such as an incomplete datagram
or the usage of reserved values, may need to be rejected with a Reset
(RST) message (see Sections 4.2 and 4.3 of [RFC7252] for details).
Next the Token is read based on the TKL field. For the options that
follow, there are two alternatives: either process them on the fly
when an option is accessed or initially parse all values into an
internal data structure.

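The following non-normative C sketch illustrates the initial base
header check described above; all identifiers are invented for
illustration and do not refer to any particular implementation:

   #include <stddef.h>
   #include <stdint.h>

   #define COAP_VERSION 1

   typedef struct {
     uint8_t  version;  /* 2-bit version, must be 1        */
     uint8_t  type;     /* CON=0, NON=1, ACK=2, RST=3      */
     uint8_t  tkl;      /* Token length, 0..8              */
     uint8_t  code;     /* request method or response code */
     uint16_t mid;      /* Message ID                      */
   } coap_hdr_t;

   /* Returns 0 on success, -1 if the datagram is not CoAP (silently
    * ignore), -2 on a message format error (may reject with RST). */
   int coap_parse_header(const uint8_t *buf, size_t len, coap_hdr_t *hdr)
   {
     if (len < 4 || ((buf[0] >> 6) & 0x03) != COAP_VERSION)
       return -1;                      /* not a CoAP datagram        */
     hdr->version = (buf[0] >> 6) & 0x03;
     hdr->type    = (buf[0] >> 4) & 0x03;
     hdr->tkl     =  buf[0]       & 0x0F;
     hdr->code    =  buf[1];
     hdr->mid     = ((uint16_t)buf[2] << 8) | buf[3];
     if (hdr->tkl > 8 || len < (size_t)4 + hdr->tkl)
       return -2;                      /* reserved TKL or truncated  */
     return 0;
   }
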
2.2.1. On-the-fly Processing

The advantage of on-the-fly processing is that no additional memory
needs to be allocated to store the option values, which are stored
efficiently inline in the buffer for incoming messages. Once the
message is accepted for further processing, the set of options
contained in the received message must be decoded to check for
unknown critical options. To avoid multiple passes through the
option list, the option parser might maintain a bit-vector where each

skipping to change at page 5, line 47

every option (a direct pointer) can be added to a sparse list (e.g.,
a one-dimensional array) for fast retrieval.

This particularly enables efficient handling of options that might
occur more than once such as Uri-Path. In this implementation
strategy, the delta is zero for any subsequent path segment, hence
the stored byte index for this option (e.g., 11 for Uri-Path) would
be overwritten to hold a pointer to only the last occurrence of that
option. The Uri-Path can be resolved on the fly, though, and a
pointer to the targeted resource stored directly in the sparse list.
In simpler cases, conditionals can preselect one of the repeated
option values.

Once the option list has been processed, all known critical options
and all elective options can be masked out in the bit-vector to
determine if any unknown critical option was present. If this is the
case, this information can be used to create a 4.02 response
accordingly. Note that full processing must only be done up to the
highest supported option number. Beyond that, only the least
significant bit (Critical or Elective) needs to be checked.
Otherwise, if all critical options are supported, the sparse list of
option pointers is used for further handling of the message.

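As a non-normative illustration of this check, the following C sketch
uses a bit-vector of the options seen while parsing; the supported
option numbers and the KNOWN_OPTIONS mask are examples only:

   #include <stdint.h>

   #define MAX_OPT 28   /* highest supported option number (example) */

   /* example mask: Uri-Path(11), Content-Format(12), Uri-Query(15),
    * Block2(23) */
   static const uint32_t KNOWN_OPTIONS =
       (1u << 11) | (1u << 12) | (1u << 15) | (1u << 23);

   /* seen: bit per option number, filled while parsing the options;
    * above_max: one option number seen above MAX_OPT, or 0 if none.
    * Returns 1 if an unknown critical option was present (4.02). */
   int has_unknown_critical(uint32_t seen, uint16_t above_max)
   {
     uint32_t unknown = seen & ~KNOWN_OPTIONS;
     for (uint16_t num = 1; num <= MAX_OPT; num++)
       if ((unknown & (1u << num)) && (num & 1))
         return 1;            /* odd option numbers are critical */
     /* beyond MAX_OPT only the least significant bit is checked */
     if (above_max > MAX_OPT && (above_max & 1))
       return 1;
     return 0;
   }
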
2.2.2. Internal Data Structure

Using an internal data structure for all parsed options has an
advantage when working on the option values, as they are already in a
variable of corresponding type (e.g., an integer in host byte order).
The incoming payload and byte strings of the header can be accessed
directly in the buffer for incoming messages using pointers (similar
to on-the-fly processing). This approach also benefits from a
bitmap. Otherwise special values must be reserved to encode an unset
option, which might require a larger type than required for the
actual value range (e.g., a 32-bit integer instead of 16-bit).

Many of the byte strings (e.g., the URI) are usually not required
when generating the response. When all important values are copied
(e.g., the Token, which needs to be mirrored), the internal data
structure facilitates using the buffer for incoming messages also for
the assembly of outgoing messages - which can be the shared IP buffer
provided by the OS.

Setting options for outgoing messages is also easier with an internal
data structure. Application developers can set options independently
of the option number and do not need to care about the order for the
delta encoding. The CoAP encoding is applied in a serialization step
before sending. In contrast, assembling outgoing messages with
on-the-fly processing requires either extensive memmove operations to
insert new options, or restrictions for developers to set options in
their correct order.

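Such an internal data structure could, as a non-normative sketch,
look like the following C struct; the selection of fields is an
example, and the bitmap records which options are present so that no
"unset" values need to be reserved:

   #include <stddef.h>
   #include <stdint.h>

   typedef struct {
     uint32_t option_map;     /* one bit per known option number   */

     uint16_t mid;
     uint8_t  token_len;
     uint8_t  token[8];       /* copied: mirrored in the response  */

     uint16_t content_format; /* already in host byte order        */
     uint32_t observe;
     const uint8_t *uri_path; /* points into the incoming buffer   */
     size_t   uri_path_len;
     const uint8_t *payload;  /* points into the incoming buffer   */
     size_t   payload_len;
   } coap_packet_t;

   #define IS_OPTION(pkt, num)  (((pkt)->option_map >> (num)) & 1u)
   #define SET_OPTION(pkt, num) ((pkt)->option_map |= (1u << (num)))
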
2.3. Message ID Usage

Many applications of CoAP use unreliable transports, in particular
UDP, which can lose, reorder, and duplicate messages. Although
DTLS's replay protection deals with duplication by the network,
losses are addressed with DTLS retransmissions only for the handshake
protocol and not for the application data protocol. Furthermore,
CoAP implementations usually send CON retransmissions in new DTLS
records, which are not considered duplicates at the DTLS layer.

2.3.1. Duplicate Rejection
CoAP's messaging sub-layer has been designed with protocol
functionality such that rejection of duplicate messages is always
possible. It is realized through the Message IDs (MIDs) and their
lifetimes with regard to the message type.

Duplicate detection is at the discretion of the recipient (see
Section 4.5 of [RFC7252], Section 2.3.3, Section 2.3.4). Where it is
desired, the receiver needs to keep track of MIDs to filter the
duplicates for at least NON_LIFETIME (145 s). This time also holds
for CON messages, since it equals the possible reception window of
MAX_TRANSMIT_SPAN + MAX_LATENCY.

On the sender side, MIDs of CON messages must not be re-used within
the EXCHANGE_LIFETIME; MIDs of NONs respectively within the
NON_LIFETIME. In typical scenarios, however, senders will re-use
MIDs with intervals far larger than these lifetimes: with sequential
assignment of MIDs, coming close to them would require 250 messages
per second, much more than the bandwidth of constrained networks
would usually allow for.
In cases where senders might come closer to the maximum message rate,
it is recommended to use more conservative timings for the re-use of
MIDs. Otherwise, opposite inaccuracies in the clocks of sender and
recipient may lead to obscure message loss. If needed, higher rates
can be achieved by using multiple endpoints for sending requests and
managing the local MID per remote endpoint instead of a single
counter per system (essentially extending the 16-bit message ID by a
16-bit port number and/or a 128-bit IP address). In controlled
scenarios, such as real-time applications over industrial Ethernet,
the protocol parameters can also be tweaked to achieve higher message
rates (Section 4.1).
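
As a non-normative sketch in C, sequential MID assignment with a
single counter per system could look as follows (the initial value is
randomized, as recommended in Section 4.4 of RFC 7252; all names are
examples):

   #include <stdint.h>
   #include <stdlib.h>

   static uint16_t next_mid;

   void coap_mid_init(void)
   {
     next_mid = (uint16_t)rand();   /* randomize the initial MID */
   }

   uint16_t coap_mid_next(void)
   {
     return next_mid++;             /* wraps naturally at 0xFFFF */
   }

For higher message rates, the same counter would simply be kept per
remote endpoint instead of per system, as described above.
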
2.3.2. MID Namespaces
MIDs are assigned under the control of the originator of CON and NON
messages, and they do not mix with the MIDs assigned by the peer for
CON and NON in the opposite direction. Hence, CoAP implementors need
to make sure to manage different namespaces for the MIDs used for
deduplication. MIDs of outgoing CONs and NONs belong to the local
endpoint; so do the MIDs of incoming ACKs and RSTs. Accordingly,
MIDs of incoming CONs and NONs and outgoing ACKs and RSTs belong to
the corresponding remote endpoint. Figure 1 depicts a scenario where
mixing the namespaces would cause erroneous filtering.

                         Client             Server
                            |                  |
                            |   CON [0x1234]   |
                            +----------------->|
                            |                  |
                            |   ACK [0x1234]   |
                            |<-----------------+
                            |                  |
                            |   CON [0x4711]   |
                            |<-----------------+  Separate response
                            |                  |
                            |   ACK [0x4711]   |
                            +----------------->|
                            |                  |

  A request follows that uses the same MID as the last separate response

                            |                  |
                            |   CON [0x4711]   |
                            +----------------->|
     Response is filtered   |                  |
     because MID 0x4711     |   ACK [0x4711]   |
     is still in the        X<-----------------+  Piggy-backed response
     deduplication list     |                  |

Figure 1: Deduplication must manage the MIDs in different namespaces
corresponding to their origin endpoints.

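A non-normative C sketch of a receiver-side deduplication list that
keys entries by the remote endpoint (the owner of the MID namespace)
and keeps them for NON_LIFETIME could look as follows; the table size
and all names are examples:

   #include <stdint.h>
   #include <string.h>
   #include <time.h>

   #define NON_LIFETIME  145      /* seconds, RFC 7252 default value */
   #define DEDUP_ENTRIES 32       /* example table size              */

   typedef struct {
     uint8_t  remote_addr[16];    /* peer IP address                 */
     uint16_t remote_port;        /* peer UDP port                   */
     uint16_t mid;                /* MID assigned by the peer        */
     time_t   expires;            /* 0 = slot unused                 */
   } dedup_entry_t;

   static dedup_entry_t dedup[DEDUP_ENTRIES];

   /* Returns 1 if the incoming CON/NON is a duplicate, 0 otherwise. */
   int coap_is_duplicate(const uint8_t addr[16], uint16_t port,
                         uint16_t mid)
   {
     time_t now = time(NULL);
     int free_slot = -1;

     for (int i = 0; i < DEDUP_ENTRIES; i++) {
       if (dedup[i].expires <= now) {         /* expired or unused   */
         free_slot = i;
         continue;
       }
       if (dedup[i].mid == mid && dedup[i].remote_port == port &&
           memcmp(dedup[i].remote_addr, addr, 16) == 0)
         return 1;                            /* duplicate           */
     }
     if (free_slot >= 0) {                    /* remember this MID   */
       memcpy(dedup[free_slot].remote_addr, addr, 16);
       dedup[free_slot].remote_port = port;
       dedup[free_slot].mid = mid;
       dedup[free_slot].expires = now + NON_LIFETIME;
     }
     /* if the table is full, the MID is not remembered; a larger
      * table or an eviction strategy would be needed in practice */
     return 0;
   }
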
2.3.3. Relaxation on the Server
Using the de-duplication functionality is at the discretion of the
receiver: Processing of duplicate messages comes at a cost, but so
does the management of the state associated with duplicate rejection.
The number of remote endpoints that need to be managed might be vast.
This can be costly in particular for less constrained nodes that have
throughput on the order of hundreds of thousands of requests per
second (which needs about 16 GiB of RAM just for duplicate
rejection). Deduplication is also heavy for servers on Class 1
devices, as piggy-backed responses also need to be stored for the
case that the ACK
message is lost. Hence, a receiver may have good reasons to decide
not to perform deduplication. This behavior is possible when the
application is designed with idempotent operations only and makes
good use of the If-Match/If-None-Match options.
If duplicate rejection is indeed necessary (e.g., for non-idempotent
requests), it is important to control the amount of state that needs
to be stored. It can be reduced, for instance, by deduplication at
resource level: Knowledge of the application and supported
representations can minimize the amount of state that needs to be
kept.

2.3.4. Relaxation on the Client

Duplicate rejection on the client side can be simplified by choosing
clever Tokens that are virtually not re-used (e.g., through an
obfuscated sequence number in the Token value) and only filter based
on the list of open Tokens. If a client wants to re-use Tokens
(e.g., the empty Token for optimizations), it requires strict
duplicate rejection based on MIDs to avoid the scenario outlined in
Figure 2.

                         Client             Server
                            |                  |
                            |   CON [0x7a10]   |
                            |   GET /temp      |
                            |   (Token 0x23)   |
                            +----------------->|
                            |                  |
                            |   ACK [0x7a10]   |
                            |<-----------------+
                            |                  |
                              ... Time Passes ...
                            |                  |
                            |   CON [0x23bb]   |
                            |   4.04 Not Found |
                            |   (Token 0x23)   |
                            |<-----------------+
                            |                  |
                            |   ACK [0x23bb]   |
                            +--------X         |
                            |                  |
                            |   CON [0x7a11]   |
                            |   GET /resource  |
                            |   (Token 0x23)   |
                            +----------------->|
                            |                  |
                            |   CON [0x23bb]   |
     Causing an implicit    |   4.04 Not Found |
     acknowledgement if     |   (Token 0x23)   |
     not filtered through   X<-----------------+  Retransmission
     duplicate rejection    |                  |

Figure 2: Re-using Tokens requires strict duplicate rejection.
2.4. Token Usage

Tokens are chosen by the client and help to identify request/response
pairs that span several message exchanges (e.g., a separate response,
which has a new MID). Servers do not generate Tokens and only mirror
what they receive from the clients. Tokens must be unique within the
namespace of a client throughout their lifetime. This begins when
being assigned to a request and ends when the open request is closed
by receiving and matching the final response. Neither empty ACKs nor
notifications (i.e., responses carrying the Observe option) terminate
the lifetime of a Token.

As already mentioned, a clever assignment of Tokens can help to
simplify duplicate rejection. Yet this is also important for coping
with client crashes. When a client restarts during an open request
and (unknowingly) re-uses the same Token, it might match the response
from the previous request to the current one. Hence, when only the
Token is used for matching, which is always the case for separate
responses, randomized Tokens with enough entropy should be used. The
8-byte range for Tokens can even allow for one-time usage throughout
the lifetime of a client node. When DTLS is used, client crashes/
restarts will lead to a new security handshake, thereby solving the
problem of mismatching responses and/or notifications.

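One possible scheme, shown here as a non-normative C sketch, derives
Tokens from a 64-bit sequence number that is obscured with a random
mask chosen at startup, so that Tokens are virtually never re-used
during the lifetime of the client (all names are examples):

   #include <stdint.h>

   static uint64_t token_counter;
   static uint64_t token_mask;      /* random bits chosen at startup */

   void coap_token_init(uint64_t random_mask)
   {
     token_counter = 0;
     token_mask = random_mask;
   }

   /* writes an 8-byte Token into buf */
   void coap_new_token(uint8_t buf[8])
   {
     uint64_t t = (token_counter++) ^ token_mask;
     for (int i = 0; i < 8; i++)
       buf[i] = (uint8_t)(t >> (8 * i));
   }

Note that such a scheme mainly ensures uniqueness; where spoofing is
a concern (NoSec mode), Tokens with real entropy are preferable, as
discussed below.
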
2.4.1. Tokens for Observe

In the case of Observe [I-D.ietf-core-observe], a request will be
answered with multiple notifications and it is important to continue
keeping track of the Token that was used for the request - its
lifetime will end much later. Upon establishing an Observe
relationship, the Token is registered at the server. Hence, the
client's use of that specific Token is now limited to controlling the
Observation relationship. A client can use it to cancel the
relationship, which frees the Token upon success (i.e., the message
with an Observe Option with the value set to 'deregister' (1) is
confirmed with a response; see [I-D.ietf-core-observe] section 3.6).
However, the client might never receive the response due to a
temporary network outage or worse, a server crash. Although a
network outage will also affect notifications so that the Observe
garbage collection could apply, the server might simply happen not to
send CON notifications during that time. Alternative Observe
lifetime models such as Stubbornness(tm) might also keep
relationships alive for longer periods.

Thus, it is best to carefully choose the Token value used with
Observe requests. (The empty value will rarely be applicable.) One
option is to assign and re-use dedicated Tokens for each Observe
relationship the client will establish. The choice of Token values
also is critical in NoSec mode, to limit the effectiveness of
spoofing attacks. Here, the recommendation is to use randomized
Tokens with a length of at least four bytes (see Section 5.3.1 of
[RFC7252]). Thus, dedicated ranges within the 8-byte Token space
should be used when in NoSec mode. This also solves the problem of
mismatching notifications after a client crash/restart.

When the client wishes to reinforce its interest in a resource, maybe
not really being sure whether the server has forgotten it or not, the

skipping to change at page 12, line 12

response layer and the message layer. These layers are linked
through actions. The M_CMD() action triggers a corresponding
transition at the message layer and the RR_EVT() action triggers a
transition at the request/response layer. The FSMs also use guard
conditions to distinguish between information that is only available
through the other layer (e.g., whether a request was sent using a CON
or NON message).

2.5.1. Request/Response Layer

Figure 3 depicts the two states at the request/response layer of a
CoAP client. When a request is issued, a "reliable_send" or
"unreliable_send" is triggered at the message layer. The WAITING
state can be left through three transitions: Either the client
cancels the request and triggers cancellation of a CON transmission
at the message layer, the client receives a failure event from the
message layer, or a receive event containing a response.

      +------------CANCEL-------------------------------+
      |       / M_CMD(cancel)                           |
      |                                                 V
      |                                              +------+
    +-------+ -------RR_EVT(fail)--------------------> |    |
    |WAITING|                                          | IDLE |
    +-------+ -------RR_EVT(rx)[is Response]---------> |    |
        ^       / M_CMD(accept)                        +------+
        |                                                 |
        +--------------------REQUEST----------------------+
               / M_CMD((un)reliable_send)

Figure 3: CoAP Client Request/Response Layer FSM

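A non-normative C sketch of how the client states of Figure 3 could
be represented per open request is shown below; event names follow
the figure, everything else is an example:

   #include <stdint.h>

   typedef enum { RR_IDLE, RR_WAITING } rr_state_t;
   typedef enum { RR_EVT_RX, RR_EVT_FAIL } rr_event_t;

   typedef struct {
     rr_state_t state;
     uint8_t    token[8];   /* Token of the open request */
     uint8_t    token_len;
   } rr_exchange_t;

   void rr_handle_event(rr_exchange_t *ex, rr_event_t evt,
                        int is_response, int came_in_con)
   {
     if (ex->state != RR_WAITING)
       return;                     /* REQUEST/CANCEL come from the
                                      application, not from events */
     if (evt == RR_EVT_FAIL || (evt == RR_EVT_RX && is_response)) {
       if (evt == RR_EVT_RX && came_in_con) {
         /* M_CMD(accept): tell the message layer to ACK the CON */
       }
       ex->state = RR_IDLE;        /* close the exchange           */
     }
   }
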
A server resource can decide at the request/response layer whether to
respond with a piggy-backed or a separate response. Thus, there are
two busy states in Figure 4, SERVING and SEPARATE. An incoming
receive event with a NON request directly triggers the transition to
the SEPARATE state.

    +--------+ <----------RR_EVT(rx)[is NON]---------- +------+
    |SEPARATE|                                         |      |
    +--------+ ----------------RESPONSE--------------> | IDLE |
        ^       / M_CMD((un)reliable_send)             |      |
        |                                        +---> +------+
        |EMPTY_ACK                               |         |
        |/M_CMD(accept)                          |         |
        |                                        |         |
        |                                        |         |
    +--------+                                   |         |
    |SERVING | --------------RESPONSE------------+         |
    +--------+  / M_CMD(accept)                            |
        ^                                                  |
        +------------------------RR_EVT(rx)[is CON]--------+

Figure 4: CoAP Server Request/Response Layer FSM

2.5.2. Message Layer

Figure 5 shows the different states of a CoAP endpoint per message
exchange. Besides the linking action RR_EVT(), the message layer has
a TX action to send a message. For sending and receiving NONs, the
endpoint remains in its CLOSED state. When sending a CON, the
endpoint remains in RELIABLE_TX and keeps retransmitting until the
transmission times out, it receives a matching RST, the request/
response layer cancels the transmission, or the endpoint receives an
implicit acknowledgement through a matching NON or CON. Whenever the
endpoint receives a CON, it transitions into the ACK_PENDING state,
which can be left by sending the corresponding ACK.

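The retransmission behavior in the RELIABLE_TX state can be sketched
as follows (non-normative C; the constants are the RFC 7252 default
transmission parameters, all other names are examples):

   #include <stdint.h>

   #define ACK_TIMEOUT_MS    2000
   #define ACK_RANDOM_FACTOR 1.5
   #define MAX_RETRANSMIT    4

   typedef struct {
     uint8_t  retransmit_count;
     uint32_t timeout_ms;      /* current timeout, doubled each time */
   } reliable_tx_t;

   void reliable_tx_start(reliable_tx_t *tx, uint32_t rand_0_to_1000)
   {
     tx->retransmit_count = 0;
     /* initial timeout in [ACK_TIMEOUT, ACK_TIMEOUT*ACK_RANDOM_FACTOR] */
     tx->timeout_ms = ACK_TIMEOUT_MS +
         (uint32_t)(ACK_TIMEOUT_MS * (ACK_RANDOM_FACTOR - 1.0) *
                    rand_0_to_1000 / 1000);
   }

   /* Called when the retransmission timer fires.
    * Returns 1 to retransmit (TX(con)), 0 to give up (RR_EVT(fail)). */
   int reliable_tx_timeout(reliable_tx_t *tx)
   {
     if (tx->retransmit_count >= MAX_RETRANSMIT)
       return 0;
     tx->retransmit_count++;
     tx->timeout_ms *= 2;      /* exponential back-off */
     return 1;
   }
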
skipping to change at page 14, line 35

     +----RX_CON----> |               | / RR_EVT(rx)             |
    / RR_EVT(rx)      +---------------+ ---------M_CMD(accept)---+
                                           / TX(ack)

   *1: TIMEOUT(RETX_TIMEOUT) / TX(con)
   *2: M_CMD(unreliable_send) / TX(non)
   *3: RX_NON / RR_EVT(rx)
   *4: RX_RST / REMOVE_OBSERVER
   *5: RX_ACK

Figure 5: CoAP Message Layer FSM

T.B.D.: (i) Rejecting messages (can be triggered at message and
request/response layer). (ii) ACKs can also be triggered at both
layers.

2.6. Out-of-band Information

The CoAP implementation can also leverage out-of-band information,
which might also trigger some of the transitions shown in
Section 2.5. In particular ICMP messages can inform about
unreachable remote