LWIG Working Group                                           M. Kovatsch
Internet-Draft                                                ETH Zurich
Intended status: Informational                               O. Bergmann
Expires: January 5, 2015                         Universitaet Bremen TZI
                                                                  E. Dijk
                                                         Philips Research
                                                                    X. He
                                               Hitachi (China) R&D Corp.
                                                          C. Bormann, Ed.
                                                 Universitaet Bremen TZI
                                                            July 04, 2014

                      CoAP Implementation Guidance
                        draft-ietf-lwig-coap-01
Abstract

   The Constrained Application Protocol (CoAP) is designed for
   resource-constrained nodes and networks, e.g., sensor nodes in a
   low-power lossy network (LLN).  Yet to implement this Internet
   protocol on Class 1 devices (as per RFC 7228, ~ 10 KiB of RAM and
   ~ 100 KiB of ROM), lightweight implementation techniques are also
   necessary.  This document provides lessons learned from implementing
   CoAP for tiny, battery-operated networked embedded systems.  In
   particular, it provides guidance on correct implementation of the
   CoAP specification RFC 7252, memory optimizations, and customized
   protocol parameters.
Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 5, 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   [skipping to change at page 2, line 48]

       2.7.2.  Server
   3.  Optimizations
     3.1.  Message Buffers
     3.2.  Retransmissions
     3.3.  Observable Resources
     3.4.  Blockwise Transfers
     3.5.  Deduplication with Sequential MIDs
   4.  Alternative Configurations
     4.1.  Transmission Parameters
     4.2.  CoAP over IPv4
   5.  IANA considerations
   6.  Security considerations
   7.  References
     7.1.  Normative References
     7.2.  Informative References
   Authors' Addresses
1.  Introduction

   The Constrained Application Protocol [RFC7252] has been designed
   specifically for machine-to-machine communication in networks with
   very constrained nodes.  Typical application scenarios therefore
   include building automation, process optimization, and the Internet
   of Things.  The major design objectives are small protocol overhead,
   robustness against packet loss, and tolerance of high latency
   induced by small bandwidth shares or slow request processing in end
   nodes.  To leverage integration of constrained nodes with the
   world-wide Internet, the protocol design was led by the REST
   architectural style that accounts for the scalability and robustness
   of the Hypertext Transfer Protocol [RFC7230].
   Lightweight implementations benefit from this design in many
   respects: First, the use of Uniform Resource Identifiers (URIs) for
   naming resources and the transparent forwarding of their
   representations in a server-stateless request/response protocol make
   protocol translation to HTTP a straightforward task.  Second, the
   set of protocol elements that are unavoidable for the core protocol
   and thus must be implemented on every node has been kept very small,
   minimizing the unnecessary accumulation of "optional" features.
   Options that - when present - are critical for message processing
   are explicitly marked as such to force immediate rejection of
   messages with unknown critical options.  Third, the syntax of
   protocol data units is easy to parse and is carefully defined to
   avoid creation of state in servers where possible.
   Although these features enable lightweight implementations of the
   Constrained Application Protocol, there is still a tradeoff between
   robustness and latency of constrained nodes on one hand and resource
   demands on the other.  For constrained nodes of Class 1 or even
   Class 2 [RFC7228], the most limiting factors usually are dynamic
   memory needs, static code size, and energy.  Most implementations
   therefore need to optimize internal buffer usage, omit idle protocol
   features, and maximize sleeping cycles.
   The present document gives possible strategies to solve this
   tradeoff for very constrained nodes (i.e., Class 1).  For this, it
   provides guidance on correct implementation of the CoAP
   specification [RFC7252], memory optimizations, and customized
   protocol parameters.
2.  Protocol Implementation

   In the programming styles supported by very simple operating systems
   as found on constrained nodes, preemptive multi-threading is not an
   option.  Instead, all operations are triggered by an event loop
   system, e.g., in a send-receive-dispatch cycle.  It is also common
   practice to allocate memory statically to ensure stable behavior, as
   no memory management unit (MMU) or other abstractions are available.
   For a CoAP node, the two key parameters for memory usage are the
   number of (re)transmission buffers and the maximum message size that
   must be supported by each buffer.  Often the maximum message size is
   set far below the 1280-byte MTU of 6LoWPAN to allow more than one
   open Confirmable transmission at a time (in particular for parallel
   observe notifications [I-D.ietf-core-observe]).  Note that
   implementations on constrained platforms often do not even support
   the full MTU.  Larger messages must then use blockwise transfers
   [I-D.ietf-core-block], while a good tradeoff between 6LoWPAN
   fragmentation and CoAP header overhead must be found.  Usually the
   amount of available free RAM dominates this decision.  For Class 1
   devices, the maximum message size is typically 128 or 256 bytes of
   (blockwise) payload plus an estimate of the maximum header size with
   a worst-case option setting.
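
   As an illustration only, such a statically sized transmission buffer
   pool could be configured at compile time roughly as in the following
   sketch; all macro, struct, and field names here are made up for this
   example and are not taken from any particular implementation:

      #include <stdint.h>

      /* Illustrative compile-time sizing of CoAP message buffers on a
       * Class 1 device. */
      #define COAP_MAX_BLOCK_SIZE    128   /* blockwise payload         */
      #define COAP_MAX_HEADER_SIZE    64   /* worst-case header+options */
      #define COAP_MAX_MESSAGE_SIZE  (COAP_MAX_BLOCK_SIZE + \
                                      COAP_MAX_HEADER_SIZE)
      #define COAP_OPEN_TRANSACTIONS   4   /* parallel CON exchanges    */

      struct coap_transaction {
        uint16_t mid;                  /* Message ID to match ACK/RST   */
        uint8_t  retransmit_counter;   /* retransmissions done so far   */
        uint32_t retransmit_timeout;   /* next timeout, platform ticks  */
        uint16_t length;               /* serialized message length     */
        uint8_t  buffer[COAP_MAX_MESSAGE_SIZE]; /* (re)transmission buf */
      };

      /* Statically allocated, as no dynamic memory management is
       * assumed on the constrained node. */
      static struct coap_transaction transactions[COAP_OPEN_TRANSACTIONS];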
2.1.  Client/Server Model

   In general, CoAP servers can be implemented more efficiently than
   clients.  REST allows them to keep the communication stateless and
   piggy-backed responses are not stored for retransmission, saving
   buffer space.  The use of idempotent requests also allows relaxing
   deduplication, which further decreases memory usage.  It is also
   easy to estimate the required maximum size of message buffers, since
   URI paths, supported options, and maximum payload sizes of the

   [skipping to change at page 5, line 11]

   to interested client nodes.  This allows a more efficient and also
   more natural model for CoAP-based applications, where the
   information source is in server role and can benefit from caching.
2.2.  Message Processing

   Apart from the required buffers, message processing is symmetric for
   clients and servers.  First the 4-byte base header has to be parsed
   and thereby checked whether it is a CoAP message.  Since the
   encoding is very dense, only a wrong Version or a datagram size
   smaller than four bytes identify non-CoAP datagrams.  These need to
   be silently ignored.  Other message format errors, such as an
   incomplete datagram length or the usage of reserved values, may need
   to be rejected with a Reset (RST) message (see Sections 4.2 and 4.3
   of [RFC7252] for details).  Next the Token is read based on the TKL
   field.  For the following header options, there are two
   alternatives: Either process the header on the fly when an option is
   accessed or initially parse all values into an internal data
   structure.
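
   A minimal sketch of the base header check is shown below; the field
   layout follows Section 3 of [RFC7252], while the function name,
   return-value convention, and constant are chosen only for this
   example:

      #include <stddef.h>
      #include <stdint.h>

      #define COAP_VERSION 1

      /* Returns 0 on success, -1 for non-CoAP datagrams that are to be
       * silently ignored, and 1 for message format errors that may be
       * rejected with an RST.  Illustrative sketch only. */
      static int coap_parse_base_header(const uint8_t *data, size_t len,
                                        uint8_t *type, uint8_t *tkl,
                                        uint8_t *code, uint16_t *mid)
      {
        if (len < 4 || ((data[0] >> 6) & 0x03) != COAP_VERSION) {
          return -1;          /* wrong Version or too short: not CoAP */
        }
        *type = (data[0] >> 4) & 0x03;      /* CON, NON, ACK, RST     */
        *tkl  =  data[0]       & 0x0F;      /* Token Length           */
        *code =  data[1];                   /* request/response code  */
        *mid  = ((uint16_t)data[2] << 8) | data[3];  /* Message ID    */
        if (*tkl > 8) {
          return 1;           /* reserved TKL value: format error     */
        }
        return 0;
      }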
2.2.1.  On-the-fly Processing

   The advantage of on-the-fly processing is that no additional memory
   needs to be allocated to store the option values, which are stored
   efficiently inline in the buffer for incoming messages.  Once the
   message is accepted for further processing, the set of options
   contained in the received message must be decoded to check for
   unknown critical options.  To avoid multiple passes through the
   option list, the option parser might maintain a bit-vector where
   each

   [skipping to change at page 6, line 11]

   determine if any unknown critical option was present.  If this is
   the case, this information can be used to create a 4.02 response
   accordingly.  Note that full processing must only be done up to the
   highest supported option number.  Beyond that, only the least
   significant bit (Critical or Elective) needs to be checked.
   Otherwise, if all critical options are supported, the sparse list of
   option pointers is used for further handling of the message.
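
   The following sketch illustrates this check with a bit-vector of
   supported option numbers; the set of supported options and all names
   are hypothetical, and the Critical/Elective distinction via the
   least significant bit follows Section 5.4.1 of [RFC7252]:

      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical bit-vector of supported option numbers (bit n set
       * means option number n is supported).  Here: Uri-Path (11),
       * Content-Format (12), Uri-Query (15), Accept (17), Block2 (23). */
      #define HIGHEST_SUPPORTED_OPTION 23
      static const uint32_t supported_options =
          (1ul << 11) | (1ul << 12) | (1ul << 15) | (1ul << 17) |
          (1ul << 23);

      /* Returns true if the option can be handled or safely ignored. */
      static bool coap_option_ok(unsigned option_number)
      {
        if (option_number <= HIGHEST_SUPPORTED_OPTION &&
            ((supported_options >> option_number) & 1u)) {
          return true;                      /* supported option */
        }
        /* Unsupported: only Elective options (even numbers) may be
         * ignored; unknown Critical options lead to 4.02 (Bad Option). */
        return (option_number & 1u) == 0;
      }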
2.2.2.  Internal Data Structure

   Using an internal data structure for all parsed options has an
   advantage when working on the option values, as they are already in
   a variable of corresponding type, e.g., an integer in host byte
   order.  The incoming payload and byte strings of the header can be
   accessed directly in the buffer for incoming messages using pointers
   (similar to on-the-fly processing).  This approach also benefits
   from a bitmap.  Otherwise special values must be reserved to encode
   an unset option, which might require a larger type than required for
   the actual value range (e.g., a 32-bit integer instead of 16-bit).
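
   A possible shape of such a data structure is sketched below; the
   selection of fields and all names are illustrative.  The bitmap
   records which options are actually present, so no in-band "unset"
   values have to be reserved:

      #include <stddef.h>
      #include <stdint.h>

      /* Bit positions in 'set_options'; one bit per parsed option. */
      #define OPT_SET_CONTENT_FORMAT  (1u << 0)
      #define OPT_SET_URI_PATH        (1u << 1)
      #define OPT_SET_OBSERVE         (1u << 2)
      #define OPT_SET_BLOCK2          (1u << 3)

      struct coap_parsed_message {
        uint32_t set_options;     /* bitmap of options present           */
        uint16_t content_format;  /* host byte order, valid if bit set   */
        uint32_t observe;         /* valid if bit set                    */
        uint32_t block2_num;      /* Block2 components, valid if bit set */
        uint8_t  block2_szx;
        uint8_t  block2_more;
        const uint8_t *uri_path;  /* points into the incoming buffer     */
        size_t   uri_path_len;
        const uint8_t *payload;   /* points into the incoming buffer     */
        size_t   payload_len;
      };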
   The byte strings (e.g., the URI) are usually not required when
   generating the response.  And since all important values were
   copied, this alternative facilitates using the buffer for incoming
   messages also for the assembly of outgoing messages - which can be
   the shared

   [skipping to change at page 6, line 49]

   needs to cope with the fact that the UDP datagram transport can
   reorder and duplicate messages.  (In contrast to UDP, DTLS has its
   own duplicate detection.)  CoAP has been designed with protocol
   functionality such that rejection of duplicate messages is always
   possible.  It is at the discretion of the receiver if it actually
   wants to make use of this functionality.  Processing of duplicate
   messages comes at a cost, but so does the management of the state
   associated with duplicate rejection.  The number of remote endpoints
   that need to be managed might be vast.  This can be costly in
   particular for unconstrained nodes that have throughput in the order
   of one hundred thousand requests per second (which might need about
   16 GiB of RAM just for duplicate rejection).  Deduplication is also
   heavy for servers on Class 1 devices, as also piggy-backed responses
   need to be stored for the case that the ACK message is lost.  Hence,
   a receiver may have good reasons to decide not to do the
   deduplication.
   If duplicate rejection is indeed necessary, e.g., for non-idempotent
   requests, it is important to control the amount of state that needs
   to be stored.  It can be reduced for instance by deduplication at
   resource level: Knowledge of the application and supported
   representations can minimize the amount of state that needs to be
   kept.  Duplicate rejection on the client side can be simplified by
   choosing clever Tokens and filtering based only on this information
   (e.g., a list of Tokens currently in use or an obscured counter in
   the Token value).
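
   As one possible interpretation of the "obscured counter" idea, a
   client could XOR a per-request counter with a per-power-up mask and
   accept only responses whose Token falls into the window of currently
   open requests.  The sketch below assumes 4-byte Tokens; all names
   and the windowing scheme are invented for this illustration:

      #include <stdbool.h>
      #include <stdint.h>
      #include <string.h>

      static uint32_t token_counter; /* incremented per request        */
      static uint32_t token_mask;    /* set from entropy at power-up,
                                        obscures the counter value     */

      /* Produce a 4-byte Token for the next request. */
      static void next_token(uint8_t token[4])
      {
        uint32_t t = (token_counter++) ^ token_mask;
        memcpy(token, &t, 4);
      }

      /* Accept only responses whose Token lies within the window of
       * requests that are currently open. */
      static bool token_is_open(const uint8_t token[4],
                                uint32_t open_window)
      {
        uint32_t t;
        memcpy(&t, token, 4);
        t ^= token_mask;
        return (uint32_t)(token_counter - 1 - t) < open_window;
      }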
2.4.  Token Usage

   Tokens are chosen by the client and help to identify request/
   response pairs that span several message exchanges (e.g., a separate
   response, which has a new MID).  Servers do not generate Tokens and
   only mirror what they receive from the clients.  Tokens must be
   unique within the namespace of a client throughout their lifetime.
   This begins when being assigned to a request and ends when the open
   request is closed by receiving and matching the final response.
   Neither empty ACKs nor notifications (i.e., responses carrying the
   Observe option) terminate the lifetime of a Token.

   As already mentioned, a clever assignment of Tokens can help to
   simplify duplicate rejection.  Yet this is also important for coping
   with client crashes.  When a client restarts during an open request
   and (unknowingly) re-uses the same Token, it might match the
   response

   [skipping to change at page 7, line 50]

   problem of mismatching responses and/or notifications.
2.4.1.  Tokens for Observe

   In the case of Observe [I-D.ietf-core-observe], a request will be
   answered with multiple notifications and it can become hard to
   determine the end of a Token lifetime.  When establishing an Observe
   relationship, the Token is registered at the server.  Hence, the
   client partially loses control of the used Token.  A client can
   attempt to cancel the relationship, which frees the Token upon
   success (i.e., the message with an Observe Option with the value set
   to 'deregister' (1) is acknowledged; see [I-D.ietf-core-observe]
   section 3.6).  However, the client might never receive the ACK due
   to a temporary network outage or, worse, a server crash.  Although a
   network outage will also affect notifications so that the Observe
   garbage collection could apply, the server might simply not send CON
   notifications during that time.  Alternative Observe lifetime models
   such as Stubbornness(tm) might also keep relationships alive for
   longer periods.
   Thus, Observe requests should carefully choose the Token value (and
   the empty value will rarely be applicable).  One option is to assign
   and re-use dedicated Tokens for each Observe relationship the client
   will establish.  This is, however, critical for spoofing attacks in
   NoSec mode.  The recommendation is to use randomized Tokens with a
   length of at least four bytes (see Section 5.3.1 of [RFC7252]).
   Thus, dedicated ranges within the 8-byte Token space should be used
   when in NoSec mode.  This also solves the problem of mismatching
   notifications after a client crash/restart.
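
   One way to combine randomized Tokens with a dedicated range is
   sketched below; the reserved bit, the 4-byte length, and the use of
   rand() as a stand-in for the platform entropy source are assumptions
   of this example only:

      #include <stddef.h>
      #include <stdint.h>
      #include <stdlib.h>

      /* Illustrative only: 4-byte randomized Tokens, with the most
       * significant bit of the first byte reserved to mark Observe
       * Tokens, i.e., a dedicated range of the Token space. */
      static void make_token(uint8_t token[4], int for_observe)
      {
        for (size_t i = 0; i < 4; i++) {
          token[i] = (uint8_t)(rand() & 0xFF); /* use real entropy here */
        }
        if (for_observe) {
          token[0] |= 0x80;   /* Observe range: first byte >= 0x80 */
        } else {
          token[0] &= 0x7F;   /* regular requests stay below 0x80  */
        }
      }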
2.4.2.  Tokens for Blockwise Transfers

   In general, blockwise transfers are independent from the Token and
   are correlated through client endpoint address and server address
   and resource path (destination URI).  Thus, each block may be
   transferred using a different Token.  Still it can be beneficial to
   use the same Token (it is freed upon reception of a response block)
   for all blocks, e.g., to easily route received blocks to the same
   response handler.

   When Block2 is combined with Observe, notifications only carry the
   first block and it is up to the client to retrieve the remaining
   ones.  These GET requests do not carry the Observe option and need
   to use a different Token, since the Token from the notification is
   still in use.
2.5.  Transmission States

   CoAP endpoints must keep transmission state to manage open requests,
   to handle the different response modes, and to implement reliable
   delivery at the message layer.  The following finite state machines
   (FSMs) model the transmissions of a CoAP exchange at the request/
   response layer and the message layer.  These layers are linked
   through actions.  The M_CMD() action triggers a corresponding
   transition at the message layer and the RR_EVT() action triggers a

   [skipping to change at page 11, line 7]
   [Figure 3: CoAP Message Layer FSM]

   T.B.D.: (i) Rejecting messages (can be triggered at message and
   request/response layer).  (ii) ACKs can also be triggered at both
   layers.
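
   Although the FSM figures are not reproduced in this extract, the
   structure they imply can be sketched as two small state variables
   per exchange that are linked through the M_CMD() and RR_EVT()
   actions; the state and action names below are illustrative and not
   the exact ones from the figures:

      /* Illustrative sketch of linked per-exchange state. */
      enum rr_state  { RR_IDLE, RR_WAITING_FOR_RESPONSE, RR_DONE };
      enum msg_state { M_CLOSED, M_RELIABLE_TX, M_ACK_PENDING };

      struct coap_exchange {
        enum rr_state  rr;    /* request/response layer */
        enum msg_state msg;   /* message layer          */
      };

      /* The request/response layer drives the message layer ...      */
      static void m_cmd_reliable_send(struct coap_exchange *ex)
      {
        ex->msg = M_RELIABLE_TX;  /* start CON transmission and timer */
      }

      /* ... and the message layer reports events back up, e.g. when
       * an empty ACK announces a separate response. */
      static void rr_evt_rx_empty_ack(struct coap_exchange *ex)
      {
        ex->rr = RR_WAITING_FOR_RESPONSE;
      }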
2.6.  Out-of-band Information

   The CoAP implementation can also leverage out-of-band information,
   which might also trigger some of the transitions shown in
   Section 2.5.  In particular, ICMP messages can inform about
   unreachable remote endpoints or whole network outages.  This
   information can be used to pause or cancel ongoing transmissions to
   conserve energy.  Providing ICMP information to the CoAP
   implementation is easier in constrained environments, where
   developers usually can adapt the underlying OS (or firmware).  This
   is not the case on general purpose platforms that have full-fledged
   OSes and make use of high-level programming frameworks.
   The most important ICMP messages are host, network, port, or
   protocol unreachable errors.  After appropriate vetting (cf.
   [RFC5927]), they should cause the cancellation of ongoing CON
   transmissions and clearing (or deferral) of Observe relationships.
   Requests to this destination should be paused for a sensible
   interval.  In addition, the device could indicate this error through
   a notification to a management endpoint or an external status
   indicator, since the cause could be a misconfiguration or general
   unavailability of the required service.  Problems reported through
   the Parameter Problem message are usually caused by a similar
   fundamental problem.
   The CoAP specification recommends ignoring Source Quench and Time
   Exceeded ICMP messages, though.  Source Quench messages were
   originally intended to inform the sender to reduce the rate of
   packets.  However, this mechanism has been deprecated through
   [RFC6633].  CoAP also comes with its own congestion control
   mechanism, which is already designed conservatively.  One advanced
   mechanism that can be employed for better network utilization is
   CoCoA [I-D.bormann-core-cocoa].  Time Exceeded messages often occur
   during transient routing loops (unless they are caused by a too
   small initial Hop Limit value).
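
   Where the underlying stack can be adapted, this handling can boil
   down to a small error callback; the callback interface, enumerators,
   and helper functions below are invented for illustration and do not
   correspond to any particular IP stack:

      #include <stdint.h>

      /* Hypothetical ICMP error callback from the IP stack. */
      enum icmp_error { ICMP_UNREACHABLE, ICMP_PARAM_PROBLEM,
                        ICMP_SOURCE_QUENCH, ICMP_TIME_EXCEEDED };

      /* Hypothetical internal helpers of the CoAP implementation. */
      void coap_cancel_transmissions(const uint8_t *peer_addr);
      void coap_defer_observe(const uint8_t *peer_addr);

      void coap_icmp_error(const uint8_t *peer_addr, enum icmp_error err)
      {
        switch (err) {
        case ICMP_UNREACHABLE:
        case ICMP_PARAM_PROBLEM:
          /* After vetting the ICMP message (cf. RFC 5927): cancel open
           * CON transmissions to this peer, clear or defer its Observe
           * relationships, and back off further requests for a while. */
          coap_cancel_transmissions(peer_addr);
          coap_defer_observe(peer_addr);
          break;
        case ICMP_SOURCE_QUENCH:
        case ICMP_TIME_EXCEEDED:
          /* Ignored, as recommended by the CoAP specification. */
          break;
        }
      }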
2.7.  Programming Model

   The event-driven approach, which is common in event-loop-based
   firmware, has also proven very efficient for embedded operating
   systems [TinyOS], [Contiki].  Note that an OS is not necessarily
   required and a traditional firmware approach can suffice for Class 1
   devices.  Event-driven systems use split-phase operations (i.e.,
   there are no blocking functions, but functions return and an event
   handler is called once a long-lasting operation completes) to enable

   [skipping to change at page 15, line 27]
3.4.  Blockwise Transfers

   Blockwise transfers have the main purpose of providing fragmentation
   at the application layer, where partial information can be
   processed.  This is not possible at lower layers such as 6LoWPAN, as
   only assembled packets can be passed up the stack.  While
   [I-D.ietf-core-block] also anticipates atomic handling of blocks,
   i.e., only fully received CoAP messages, this is not possible on
   Class 1 devices.

   When receiving a blockwise transfer, each block is usually passed to
   a handler function that for instance performs stream processing or
   writes the blocks to external memory such as flash.  Although there
   are no restrictions in [I-D.ietf-core-block], it is beneficial for
   Class 1 devices to only allow ordered transmission of blocks.
   Otherwise on-the-fly processing would not be possible.
   When sending a blockwise transfer out of dynamically generated
   information, Class 1 devices usually do not have sufficient memory
   to print the full message into a buffer, and slice and send it in a
   second step.  For instance, if the CoRE Link Format at /.well-known/
   core is dynamically generated, a generator function is required that
   generates slices of a large string with a specific offset and length
   (a 'sonprintf()').  This functionality is required recurrently and
   should be included in a library.
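
   A minimal sketch of such a generator is given below: it logically
   "prints" a large string piece by piece but physically keeps only the
   bytes that fall into the requested block window.  The context
   structure and function names are illustrative only:

      #include <stddef.h>
      #include <string.h>

      /* State for generating one block of a larger, dynamically
       * created string: only bytes in [offset, offset + length) are
       * stored. */
      struct slice_ctx {
        char  *buf;      /* block buffer                      */
        size_t length;   /* block size, e.g., 64 or 128 bytes */
        size_t offset;   /* absolute offset of this block     */
        size_t pos;      /* absolute write position so far    */
      };

      /* "sonprintf"-style append: logically appends 'str', physically
       * stores only the part that overlaps the current block. */
      static void slice_append(struct slice_ctx *ctx, const char *str)
      {
        size_t len = strlen(str);
        for (size_t i = 0; i < len; i++, ctx->pos++) {
          if (ctx->pos >= ctx->offset &&
              ctx->pos < ctx->offset + ctx->length) {
            ctx->buf[ctx->pos - ctx->offset] = str[i];
          }
        }
      }

      /* Afterwards ctx->pos holds the total generated size; if it
       * exceeds offset + length, the Block2 'more' bit has to be set. */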
3.5.  Deduplication with Sequential MIDs

   CoAP's duplicate rejection functionality can be straightforwardly
   implemented in a CoAP endpoint by storing, for each remote CoAP
   endpoint ("peer") that it communicates with, a list of recently
   received CoAP Message IDs (MIDs) along with some timing information.
   A CoAP message from a peer with a MID that is in the list for that
   peer can simply be discarded.

   The timing information in the list can then be used to time out
   entries that are older than the _expected extent of the
   re-ordering_, an upper bound for which can be estimated by adding
   the _potential retransmission window_ ([RFC7252] section "Reliable
   Messages") and the time packets can stay alive in the network.
   Such a straightforward implementation is suitable in case other CoAP
   endpoints generate random MIDs.  However, this storage method may
   consume substantial RAM in specific cases, such as:

   o  many clients are making periodic, non-idempotent requests to a
      single CoAP server;

   o  one client makes periodic requests to a large number of CoAP
      servers and/or requests a large number of resources; where
      servers happen to mostly generate separate CoAP responses (not
      piggy-backed);
   For example, consider the first case where the expected extent of
   re-ordering is 50 seconds, and N clients are sending periodic POST
   requests to a single CoAP server during a period of high system
   activity, each on average sending one client request per second.
   The server would need 100 * N bytes of RAM to store the MIDs only
   (50 MIDs of 2 bytes each per client).  This amount of RAM may be
   significant on a RAM-constrained platform.  On a number of
   platforms, it may be easier to allocate some extra program memory
   (e.g.  Flash or ROM) to the CoAP protocol handler process than to
   allocate extra RAM.  Therefore, one may try to reduce RAM usage of a
   CoAP implementation at the cost of some additional program memory
   usage and implementation complexity.
   Some CoAP clients generate MID values by using a Message ID variable
   [RFC7252] that is incremented by one each time a new MID needs to be
   generated.  (After the maximum value 65535 it wraps back to 0.)  We
   call this behavior "sequential" MIDs.  One approach to reduce RAM
   use exploits the redundancy in sequential MIDs for a more efficient
   MID storage in CoAP servers.

   Naturally such an approach requires, in order to actually reduce RAM
   usage in an implementation, that a large part of the peers follow
   the sequential MID behavior.  To realize this optimization, the
   authors therefore RECOMMEND that CoAP endpoint implementers employ
   the "sequential MID" scheme if there are no reasons to prefer
   another scheme, such as randomly generated MID values.
   Security considerations might call for a choice for
   (pseudo)randomized MIDs.  Note however that with truly randomly

   [skipping to change at page 17, line 45]
     Table 1: A per-peer table for storing MIDs based on MID_i

   The presence of a table row with base MID_i (regardless of the
   bitfield values) indicates that a value MID_i has been received at a
   time t_i.  Subsequently, each bitfield bit k (0...K-1) in a row i
   corresponds to a received MID value of MID_i + k + 1.  If a bit k is
   0, it means a message with corresponding MID has not yet been
   received.  A bit 1 indicates such a message has been received
   already at approximately time t_i.  This storage structure allows
   e.g. with K=64 to store in the best case up to 130 MID values using
   20 bytes, as opposed to 260 bytes that would be needed for a
   non-sequential storage scheme.
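
   A row of this per-peer table might be declared as in the following
   sketch; the field widths (K=64 bits of bitfield, a 2-byte base MID,
   and a 4-byte timestamp) are one possible choice and the names are
   made up for this example:

      #include <stdint.h>

      #define MID_BITFIELD_BITS 64   /* K in the description above */

      /* One row of the per-peer MID table: a base MID, a bitfield
       * covering the next K MIDs, and one timestamp for the row. */
      struct mid_row {
        uint16_t base_mid;   /* MID_i                                   */
        uint64_t bitfield;   /* bit k set => MID_i + k + 1 seen at ~t_i */
        uint32_t t_i;        /* receive time of MID_i (platform ticks)  */
      };

      /* Returns nonzero if 'mid' is recorded in this row. */
      static int mid_row_contains(const struct mid_row *row, uint16_t mid)
      {
        uint16_t delta = (uint16_t)(mid - row->base_mid); /* mod 65536 */
        if (delta == 0) {
          return 1;                            /* the base MID itself  */
        }
        if (delta <= MID_BITFIELD_BITS) {
          return (int)((row->bitfield >> (delta - 1)) & 1u);
        }
        return 0;
      }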
   The time values t_i are used for removing rows from the table after
   a preset timeout period, to keep the MID store small in size and
   enable these MIDs to be safely re-used in future communications.
   (Note that the table only stores one time value per row, which
   therefore needs to be updated on receipt of another MID that is
   stored as a single bit in this row.  As a consequence of only
   storing one time value per

   [skipping to change at page 18, line 40]

   endpoint uses sequential MIDs and in response improve efficiency by
   switching its mode to the bitfield based storage.
4.  Alternative Configurations

4.1.  Transmission Parameters

   When a constrained network of CoAP nodes is not communicating over
   the Internet, for instance because it is shielded by a proxy or a
   closed deployment, alternative transmission parameters can be used.
   Consequently, the derived time values provided in [RFC7252] section
   4.8.2 will also need to be adjusted, since most implementations will
   encode their absolute values.
   Static adjustments require a fixed deployment with a constant number
   or upper bound for the number of nodes, number of hops, and expected
   concurrent transmissions.  Furthermore, the stability of the
   wireless links should be evaluated.  ACK_TIMEOUT should be chosen
   above the xx% percentile of the round-trip time distribution.
   ACK_RANDOM_FACTOR depends on the number of nodes on the network.
   MAX_RETRANSMIT should be chosen suitably for the targeted
   application.  A lower bound for LEISURE can be calculated as

      lb_Leisure = S * G / R

   where S is the estimated response size, G the group size, and R the
   target data transfer rate (see [RFC7252] section 8.2).  NSTART and
   PROBING_RATE depend on estimated network utilization.  If the main
   cause of loss is weak links, higher values can be chosen.
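
   For illustration, the dependency of the derived time values on the
   base parameters could be captured as compile-time constants as
   sketched below.  The formulas are those of Section 4.8.2 of
   [RFC7252]; the shortened parameter values themselves are example
   assumptions for a shielded deployment, not recommendations:

      /* Example alternative transmission parameters and the derived
       * time values per Section 4.8.2 of RFC 7252. */
      #define ACK_TIMEOUT_MS        1000   /* instead of default 2000 ms */
      #define ACK_RANDOM_FACTOR      1.5
      #define MAX_RETRANSMIT           3   /* instead of default 4       */

      /* MAX_TRANSMIT_SPAN = ACK_TIMEOUT * ((2 ** MAX_RETRANSMIT) - 1)
       *                     * ACK_RANDOM_FACTOR */
      #define MAX_TRANSMIT_SPAN_MS \
        ((unsigned)(ACK_TIMEOUT_MS * ((1u << MAX_RETRANSMIT) - 1) * \
                    ACK_RANDOM_FACTOR))

      /* MAX_TRANSMIT_WAIT = ACK_TIMEOUT * ((2 ** (MAX_RETRANSMIT+1)) - 1)
       *                     * ACK_RANDOM_FACTOR */
      #define MAX_TRANSMIT_WAIT_MS \
        ((unsigned)(ACK_TIMEOUT_MS * ((1u << (MAX_RETRANSMIT + 1)) - 1) * \
                    ACK_RANDOM_FACTOR))

      /* With these example values: MAX_TRANSMIT_SPAN = 10.5 s and
       * MAX_TRANSMIT_WAIT = 22.5 s, instead of 45 s and 93 s with the
       * default parameters. */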
   Dynamic adjustments will be performed by advanced congestion control
   mechanisms such as [I-D.bormann-core-cocoa].  They are required if
   the main cause for message loss is network or endpoint congestion.
   Semi-dynamic adjustments could be implemented by disseminating new
   static transmission parameters to all nodes when the network
   configuration changes (e.g., new nodes are added or long-lasting
   interference is detected).
4.2.  CoAP over IPv4

   CoAP was designed for the properties of IPv6, which is dominating in
   constrained environments because of the 6LoWPAN adaptation layer
   [RFC6282].  In particular, the size limitations of CoAP are tailored
   to the minimal MTU of 1280 bytes.  Until the transition towards IPv6
   converges, CoAP nodes might also communicate over IPv4, though.
   Sections 4.2 and 4.6 of the base specification [RFC7252] already
   provide guidance and implementation notes to handle the smaller
   minimal MTUs of IPv4.
5.  IANA considerations

   This document has no actions for IANA.

6.  Security considerations

   TBD

7.  References

7.1.  Normative References
   [I-D.bormann-core-cocoa]
              Bormann, C., Betzler, A., Gomez, C., and I. Demirkol,
              "CoAP Simple Congestion Control/Advanced", draft-bormann-
              core-cocoa-02 (work in progress), July 2014.

   [I-D.ietf-core-block]
              Bormann, C. and Z. Shelby, "Blockwise transfers in CoAP",
              draft-ietf-core-block-14 (work in progress), October
              2013.

   [I-D.ietf-core-observe]
              Hartke, K., "Observing Resources in CoAP", draft-ietf-
              core-observe-14 (work in progress), June 2014.

   [RFC6282]  Hui, J. and P. Thubert, "Compression Format for IPv6
              Datagrams over IEEE 802.15.4-Based Networks", RFC 6282,
              September 2011.

   [RFC6570]  Gregorio, J., Fielding, R., Hadley, M., Nottingham, M.,
              and D. Orchard, "URI Template", RFC 6570, March 2012.

   [RFC6633]  Gont, F., "Deprecation of ICMP Source Quench Messages",
              RFC 6633, May 2012.

   [RFC7230]  Fielding, R. and J. Reschke, "Hypertext Transfer Protocol
              (HTTP/1.1): Message Syntax and Routing", RFC 7230, June
              2014.

   [RFC7252]  Shelby, Z., Hartke, K., and C. Bormann, "The Constrained
              Application Protocol (CoAP)", RFC 7252, June 2014.

7.2.  Informative References

   [Contiki]  Dunkels, A., Groenvall, B., and T. Voigt, "Contiki - a
              Lightweight and Flexible Operating System for Tiny
              Networked Sensors", Proceedings of the First IEEE
              Workshop on Embedded Networked Sensors, November 2004.

   [RFC5927]  Gont, F., "ICMP Attacks against TCP", RFC 5927, July
              2010.

   [RFC7228]  Bormann, C., Ersue, M., and A. Keranen, "Terminology for
              Constrained-Node Networks", RFC 7228, May 2014.

   [TinyOS]   Levis, P., Madden, S., Polastre, J., Szewczyk, R.,
              Whitehouse, K., Woo, A., Gay, D., Woo, A., Hill, J.,
              Welsh, M., Brewer, E., and D. Culler, "TinyOS: An
              Operating System for Sensor Networks", Ambient
              Intelligence, Springer (Berlin Heidelberg), ISBN
              978-3-540-27139-0, 2005.
Authors' Addresses

   Matthias Kovatsch
   ETH Zurich
   Universitaetstrasse 6
   CH-8092 Zurich
   Switzerland

   Email: kovatsch@inf.ethz.ch

   Olaf Bergmann
   Universitaet Bremen TZI
   Postfach 330440
   D-28359 Bremen
   Germany

   Email: bergmann@tzi.org

   Esko Dijk
   Philips Research