Homenet Working Group                                        M. Stenberg
Internet-Draft                                                  S. Barth
Intended status: Standards Track                             Independent
Expires: April 16, 2016                                 October 14, 2015

                   Distributed Node Consensus Protocol
                       draft-ietf-homenet-dncp-11
Abstract

   This document describes the Distributed Node Consensus Protocol
   (DNCP), a generic state synchronization protocol that uses the
   Trickle algorithm and hash trees.  DNCP is an abstract protocol, and
   must be combined with a specific profile to make a complete
   implementable protocol.

Status of This Memo
skipping to change at page 1, line 34
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   This Internet-Draft will expire on April 16, 2016.
Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1.  Introduction
     1.1.  Applicability
   2.  Terminology
     2.1.  Requirements Language
   3.  Overview
   4.  Operation
     4.1.  Hash Tree
       4.1.1.  Calculating network state and node data hashes
       4.1.2.  Updating network state and node data hashes
     4.2.  Data Transport
     4.3.  Trickle-Driven Status Updates
     4.4.  Processing of Received TLVs
     4.5.  Discovering, Adding and Removing Peers
     4.6.  Data Liveliness Validation
   5.  Data Model
   6.  Optional Extensions
     6.1.  Keep-Alives
       6.1.1.  Data Model Additions
       6.1.2.  Per-Endpoint Periodic Keep-Alives
       6.1.3.  Per-Peer Periodic Keep-Alives
       6.1.4.  Received TLV Processing Additions
       6.1.5.  Peer Removal
     6.2.  Support For Dense Multicast-Enabled Links
   7.  Type-Length-Value Objects
     7.1.  Request TLVs
       7.1.1.  Request Network State TLV
       7.1.2.  Request Node State TLV
     7.2.  Data TLVs
       7.2.1.  Node Endpoint TLV
       7.2.2.  Network State TLV
       7.2.3.  Node State TLV
     7.3.  Data TLVs within Node State TLV
       7.3.1.  Peer TLV
       7.3.2.  Keep-Alive Interval TLV
   8.  Security and Trust Management
     8.1.  Pre-Shared Key Based Trust Method
     8.2.  PKI Based Trust Method
     8.3.  Certificate Based Trust Consensus Method
       8.3.1.  Trust Verdicts
       8.3.2.  Trust Cache
       8.3.3.  Announcement of Verdicts
       8.3.4.  Bootstrap Ceremonies
   9.  DNCP Profile-Specific Definitions
   10. Security Considerations
   11. IANA Considerations
   12. References
     12.1.  Normative references
     12.2.  Informative references
   Appendix A.  Alternative Modes of Operation
     A.1.  Read-only Operation
     A.2.  Forwarding Operation
   Appendix B.  DNCP Profile Additional Guidance
     B.1.  Unicast Transport - UDP or TCP?
     B.2.  (Optional) Multicast Transport
     B.3.  (Optional) Transport Security
   Appendix C.  Example Profile
   Appendix D.  Some Questions and Answers [RFC Editor: please remove]
   Appendix E.  Changelog [RFC Editor: please remove]
   Appendix F.  Draft Source [RFC Editor: please remove]
   Appendix G.  Acknowledgements
   Authors' Addresses
1.  Introduction

   DNCP is designed to provide a way for each participating node to
   publish a small set of TLV (Type-Length-Value) tuples (at most 64
   KB), and to provide a shared and common view about the data
   published by every currently bidirectionally reachable DNCP node in
   a network.
   For state synchronization a hash tree is used.  It is formed by
   first calculating a hash for the dataset published by each node,
   called node data, and then calculating another hash over those node
   data hashes.  The single resulting hash, called network state hash,
   is transmitted using the Trickle algorithm [RFC6206] to ensure that
   all nodes share the same view of the current state of the published
   data within the network.  The use of Trickle with only short
   network state hashes sent infrequently (in steady state, once the
   maximum Trickle interval per link or unicast connection has been
   reached) makes DNCP very thrifty when updates happen rarely.
   For maintaining liveliness of the topology and the data within it, a
   combination of Trickled network state, keep-alives, and "other"
   means of ensuring reachability are used.  The core idea is that if
   every node ensures its peers are present, transitively, the whole
   network state also stays up-to-date.
1.1.  Applicability
   DNCP is useful for cases like autonomous bootstrapping, discovery
   and negotiation of embedded network devices like routers.
   Furthermore it can be used as a basis to run distributed algorithms
   like [I-D.ietf-homenet-prefix-assignment] or use cases as described
   in Appendix C.  The topology of the devices is not restricted and is
   discovered automatically.  If globally scoped addresses are used,
   DNCP peers do not necessarily need to be on the same link.
   Autonomous discovery features are usually used in local network
   scenarios; however, with security enabled, DNCP can also be used
   over unsecured public networks.  Network size is restricted merely
   by the capabilities of the devices, i.e., each DNCP node needs to be
   able to store the entirety of the data published by all nodes.
   DNCP is most suitable for data that changes only infrequently to
   gain the maximum benefit from using Trickle.  As the network of
   nodes grows, or the frequency of data changes per node increases,
   Trickle is eventually used less and less and the benefit of using
   DNCP diminishes.  In these cases Trickle just provides extra
   complexity within the specification and little added value.
   The suitability of DNCP for a particular application can roughly be
   evaluated by considering the expected average network-wide state
   change interval A_NC_I; it is computed by dividing the mean interval
skipping to change at page 6, line 48
   Effective trust    the trust verdict with the highest priority
   verdict            within the set of trust verdicts announced for
                      the certificate in the DNCP network.

   Topology graph     the undirected graph of DNCP nodes produced by
                      retaining only bidirectional peer relationships
                      between nodes.

   Bidirectionally    a peer is locally unidirectionally reachable if a
   reachable          consistent multicast or any unicast DNCP message
                      has been received by the local node (see Section
                      4.5).  If said peer in return also considers the
                      local node unidirectionally reachable, then
                      bidirectional reachability is established.  As
                      this process is based on publishing peer
                      relationships and evaluating the resulting
                      topology graph as described in Section 4.6, this
                      information is available to the whole DNCP
                      network.

   Trickle Instance   a distinct Trickle [RFC6206] algorithm state kept
                      by a node (Section 5) and related to an endpoint
                      or a particular (peer, endpoint) tuple with
                      Trickle variables I, t and c.  See Section 4.3.
2.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   RFC 2119 [RFC2119].
3.  Overview

   DNCP operates primarily using unicast exchanges between nodes, and
   may use multicast for Trickle-based shared state dissemination and
   topology discovery.  If used in pure unicast mode with unreliable
   transport, Trickle is also used between peers.
   DNCP is based on exchanging TLVs (Section 7) and defines a set of
   mandatory and optional ones for its operation.  They are categorized
   into TLVs for requesting information (Section 7.1), transmitting
   data (Section 7.2) and being published as data (Section 7.3).
   DNCP-based protocols usually specify additional ones to extend the
   capabilities.
   DNCP discovers the topology of the nodes in the DNCP network and
   maintains the liveliness of published node data by ensuring that the
   publishing node is bidirectionally reachable.  New potential peers
   can be discovered autonomously on multicast-enabled links, their
   addresses may be manually configured or they may be found by some
   other means defined in the particular DNCP profile.  The DNCP
   profile may specify, for example, a well-known anycast address or
   provisioning the remote address to contact via some other protocol
   such as DHCPv6 [RFC3315].
skipping to change at page 8, line 29
   updated (e.g., due to its own or another node's node state changing
   or due to a peer being added or removed) its Trickle instances are
   reset which eventually causes any update to be propagated to all of
   its peers.
4.  Operation

4.1.  Hash Tree
   Each DNCP node maintains an arbitrary width hash tree of height 1.
   The root of the tree represents the overall network state hash and
   is used to determine whether the view of the network of two or more
   nodes is consistent and shared.  Each leaf represents one
   bidirectionally reachable DNCP node.  Every time a node is added or
   removed from the topology graph (Section 4.6) it is likewise added
   or removed as a leaf.  At any time the leaves of the tree are
   ordered in ascending order of the node identifiers of the nodes
   they represent.
4.1.1.  Calculating network state and node data hashes

   The network state hash and the node data hashes are calculated using
   the hash function defined in the DNCP profile (Section 9) and
   truncated to the number of bits specified therein.

   Individual node data hashes are calculated by applying the function
   and truncation on the respective node's node data as published in
   the Node State TLV.  Such node data sets are always ordered as
   defined in Section 7.2.3.

   The network state hash is calculated by applying the function and
   truncation on the concatenated network state.  This state is formed
   by first concatenating each node's sequence number (in network byte
   order) with its node data hash to form a per-node datum for each
   node.  These per-node data are then concatenated in ascending order
   of the respective node's node identifier, i.e., in the order that
   the nodes appear in the hash tree.
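
   The following minimal sketch (in Python) illustrates the two
   calculations above.  The hash function, the truncation length, and
   the sequence number width are DNCP profile parameters and are merely
   assumed here (SHA-256 truncated to 64 bits, 4-byte sequence numbers,
   as in the example profile of Appendix C); node identifiers are
   assumed to be fixed-length byte strings so that lexicographic
   ordering matches ascending identifier order.

      import hashlib
      import struct

      TRUNCATE_TO = 8  # assumed: hashes truncated to 64 bits

      def profile_hash(data):
          # assumed profile hash function: SHA-256, truncated
          return hashlib.sha256(data).digest()[:TRUNCATE_TO]

      def node_data_hash(node_data):
          # node_data: the node's ordered, encoded node data TLVs (bytes)
          return profile_hash(node_data)

      def network_state_hash(nodes):
          # nodes: iterable of (node_identifier, sequence_number, node_data)
          state = b""
          for node_id, seq, data in sorted(nodes, key=lambda n: n[0]):
              # per-node datum: sequence number (network byte order)
              # concatenated with the node data hash
              state += struct.pack(">I", seq) + node_data_hash(data)
          return profile_hash(state)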
4.1.2. Updating network state and node data hashes
The network state hash and the node data hashes are updated on-demand
and whenever any locally stored per-node state changes. This
includes local unidirectional reachability encoded in the published
Peer TLVs (Section 7.3.1) and - when combined with remote data -
results in awareness of bidirectional reachability changes.
4.2.  Data Transport

   DNCP has few requirements for the underlying transport; it requires
   some way of transmitting either unicast datagram or stream data to a
   peer and, if used in multicast mode, a way of sending multicast
   datagrams.  As multicast is used only to identify potential new DNCP
   nodes and to send status messages which merely notify that a unicast
   exchange should be triggered, the multicast transport does not have
   to be secured.  If unicast security is desired and one of the built-
   in security methods is to be used, support for some TLS-derived
   transport scheme - such as TLS [RFC5246] on top of TCP or DTLS
   [RFC6347] on top of UDP - is also required.  They provide for
   integrity protection and confidentiality of the node data, as well
   as authentication and authorization using the schemes defined in
   Security and Trust Management (Section 8).  A specific definition of
   the transport(s) in use and their parameters MUST be provided by the
   DNCP profile.
   TLVs (Section 7) are sent across the transport as is, and they
   SHOULD be sent together where, e.g., MTU considerations do not
   recommend sending them in multiple batches.  DNCP does not fragment
   or reassemble TLVs; thus it MUST be ensured that the underlying
   transport performs these operations should they be necessary.  If
   this document indicates sending one or more TLVs, then the sending
   node does not need to keep track of the packets sent after handing
   them over to the respective transport, i.e., reliable DNCP operation
   is ensured merely by the explicitly defined timers and state
   machines such as Trickle (Section 4.3).  TLVs in general are handled
   individually and statelessly (and thus do not need to be sent in any
   particular order) with one exception: To form bidirectional peer
   relationships DNCP requires identification of the endpoints used for
   communication.  As bidirectional peer relationships are required for
   validating liveliness of published node data as described in Section
   4.6, a DNCP node MUST send a Node Endpoint TLV (Section 7.2.1).
   When it is sent varies, depending on the underlying transport, but
   conceptually it should be available whenever processing a Network
   State TLV:
   o  If using a stream transport, the TLV MUST be sent at least once
      per connection, but SHOULD NOT be sent more than once.

   o  If using a datagram transport, it MUST be included in every
      datagram that also contains a Network State TLV (Section 7.2.2)
      and MUST be located before any such TLV.  It SHOULD also be
      included in any other datagram, to speed up initial peer
      detection.
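
   As a small illustration of the datagram rule above, the sketch below
   assembles an outgoing datagram so that the Node Endpoint TLV always
   precedes any Network State TLV.  The function and argument names are
   assumptions; TLV encoding itself is only sketched in Section 7.

      def build_datagram(node_endpoint_tlv, network_state_tlv=None,
                         other_tlvs=()):
          # all arguments are already-encoded TLVs (bytes)
          parts = [node_endpoint_tlv]          # always first in the datagram
          if network_state_tlv is not None:
              parts.append(network_state_tlv)  # Network State TLV after it
          parts.extend(other_tlvs)             # e.g., Node State TLVs
          return b"".join(parts)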
   Given the assorted transport options as well as potential endpoint
   configuration, a DNCP endpoint may be used in various transport
   modes:

   Unicast:

      *  If only reliable unicast transport is used, Trickle is not
         used at all.  Whenever the locally calculated network state
         hash changes, a single Network State TLV (Section 7.2.2) is
         sent instead to every unicast peer.  Additionally, recently
         changed Node State TLVs (Section 7.2.3) MAY be included.

      *  If only unreliable unicast transport is used, Trickle state is
         kept per peer and it is used to send Network State TLVs
         intermittently, as specified in Section 4.3.

   Multicast+Unicast:  If multicast datagram transport is available on
      an endpoint, Trickle state is only maintained for the endpoint as
      a whole.  It is used to send Network State TLVs periodically, as
      specified in Section 4.3.  Additionally, per-endpoint keep-alives
      MAY be defined in the DNCP profile, as specified in Section
      6.1.2.

   MulticastListen+Unicast:  Just like Unicast, except multicast
      transmissions are listened to in order to detect changes of the
      highest node identifier.  This mode is used only if the DNCP
      profile supports dense multicast-enabled link optimization
      (Section 6.2).
4.3.  Trickle-Driven Status Updates

   The Trickle algorithm [RFC6206] is used to ensure protocol
   reliability over unreliable multicast or unicast transports.  For
   reliable unicast transports, its actual algorithm is unnecessary and
   omitted (Section 4.2).  DNCP maintains multiple Trickle states as
   defined in Section 5.  Each such state can be based on different
   parameters (see below) and is responsible for ensuring that a
   specific peer or all peers on the respective endpoint are regularly
   provided with the node's current locally calculated network state
   hash for state comparison, i.e., to detect potential divergence in
   the perceived network state.

   Trickle defines 3 parameters: Imin, Imax and k.  Imin and Imax
   represent the minimum value for I and the maximum number of
   doublings of Imin, where I is the time interval during which at
   least k Trickle updates must be seen on an endpoint to prevent local
   state transmission.  The actual suggested Trickle algorithm
   parameters are DNCP profile specific, as described in Section 9.
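
   For illustration only, the sketch below models one Trickle instance
   following the rules of [RFC6206], together with the DNCP-specific
   reset and transmission behavior described in this section.  The
   parameter values and the send callback are assumptions; actual
   values are profile specific (Section 9), and the timers would be
   driven by the implementation's event loop.

      import random

      class TrickleInstance:
          def __init__(self, send, imin=0.2, imax_doublings=7, k=1):
              # imin [seconds], number of doublings and k: assumed values
              self.send = send                 # sends a Network State TLV
              self.imin = imin
              self.imax = imin * (2 ** imax_doublings)
              self.k = k
              self.reset()

          def reset(self):
              # invoked if and only if the local network state hash changes
              self.i = self.imin
              self._begin_interval()

          def _begin_interval(self):
              self.c = 0                                   # consistent messages heard
              self.t = random.uniform(self.i / 2, self.i)  # send time in interval

          def heard_consistent(self):
              # a Network State TLV matching the local hash was received
              self.c += 1

          def at_time_t(self):
              if self.c < self.k:
                  self.send()

          def at_interval_end(self):
              self.i = min(self.i * 2, self.imax)
              self._begin_interval()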
   The Trickle state for all Trickle instances defined in Section 5 is
   considered inconsistent and reset if and only if the locally
   calculated network state hash changes.  This occurs either due to a
   change in the local node's own node data, or due to receipt of more
   recent data from another node as explained in Section 4.1.  A node
   MUST NOT reset its Trickle state merely based on receiving a Network
   State TLV (Section 7.2.2) with a network state hash which is
   different from its locally calculated one.

   Every time a particular Trickle instance indicates that an update
   should be sent, the node MUST send a Network State TLV
   (Section 7.2.2) if and only if:

   o  the endpoint is in Multicast+Unicast transport mode, in which
      case the TLV MUST be sent over multicast.

   o  the endpoint is NOT in Multicast+Unicast transport mode, and the
      unicast transport is unreliable, in which case the TLV MUST be
skipping to change at page 11, line 46
   DNCP profile, or to avoid exposure of the node state TLVs by
   transmitting them within insecure multicast when using secure
   unicast.
4.4.  Processing of Received TLVs

   This section describes how received TLVs are processed.  The DNCP
   profile may specify when to ignore particular TLVs, e.g., to modify
   security properties - see Section 9 for what may be safely defined
   to be ignored in a profile.  Any 'reply' mentioned in the steps
   below denotes sending of the specified TLV(s) to the originator of
   the TLV being processed.  All such replies MUST be sent using
   unicast.  If the TLV being replied to was received via multicast and
   it was sent to a multiple access link, the reply MUST be delayed by
   a random timespan in [0, Imin/2], to avoid potential simultaneous
   replies that may cause problems on some links, unless specified
   differently in the DNCP profile.  Sending of replies MAY also be
   rate-limited or omitted for a short period of time by an
   implementation.  However, if the TLV is not forbidden by the DNCP
   profile, an implementation MUST reply to retransmissions of the TLV
   with a non-zero probability to avoid starvation which would break
   the state synchronization.
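
   A minimal sketch of the reply-delay rule above, assuming Imin is
   known in seconds and the reply itself is prepared by the caller; the
   timer facility used here is just one possible choice:

      import random
      import threading

      def schedule_reply(send_reply, via_multicast_on_shared_link, imin):
          # send_reply: callable that unicasts the reply TLV(s) to the
          # originator of the TLV being processed
          delay = 0.0
          if via_multicast_on_shared_link:
              delay = random.uniform(0, imin / 2)
          threading.Timer(delay, send_reply).start()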
   A DNCP node MUST process TLVs received from any valid (e.g.,
   correctly scoped) address, as specified by the DNCP profile and the
   configuration of a particular endpoint, whether this address is
   known to be the address of a peer or not.  This provision satisfies
   the needs of monitoring or other host software that needs to
   discover the DNCP topology without adding to the state in the
   network.

   Upon receipt of:
skipping to change at page 14, line 12
      and (a & b) represents bitwise conjunction of a and b is
      RECOMMENDED unless the DNCP profile defines another.

   o  Any other TLV: TLVs not recognized by the receiver MUST be
      silently ignored unless they are sent within another TLV (for
      example, TLVs within the Node Data field of a Node State TLV).

   If secure unicast transport is configured for an endpoint, any Node
   State TLVs received over insecure multicast MUST be silently
   ignored.
4.5.  Discovering, Adding and Removing Peers
   Peer relations are established between neighbors using one or more
   mutually connected endpoints.  Such neighbors exchange information
   about network state and published data directly, and through
   transitivity this information then propagates throughout the
   network.

   New peers are discovered using the regular unicast or multicast
   transport defined in the DNCP profile (Section 9).  This process is
   not distinguished from peer addition, i.e., an unknown peer is
   simply discovered by receiving regular DNCP protocol TLVs from it;
   dedicated discovery messages or TLVs do not exist.  For unicast-only
   transports, the individual node's transport addresses are
   preconfigured or obtained using an external service discovery
   protocol.  In the presence of a multicast transport, messages from
   unknown peers are handled in the same way as multicast messages from
   peers that are already known; thus new peers are simply discovered
   when sending their regular DNCP protocol TLVs using multicast.
   When receiving a Node Endpoint TLV (Section 7.2.1) on an endpoint
   from an unknown peer:

   o  If received over unicast, the remote node MUST be added as a peer
      on the endpoint and a Peer TLV (Section 7.3.1) MUST be created
      for it.

   o  If received over multicast, the node MAY be sent a (possibly
      rate-limited) unicast Request Network State TLV (Section 7.1.1).
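
   The sketch below illustrates the two cases above for a Node Endpoint
   TLV arriving from a previously unknown peer.  The endpoint and node
   object model, the method names, and the request_network_state_tlv()
   helper are assumptions made for illustration only.

      def on_node_endpoint_tlv(local_node, endpoint, src_address, tlv,
                               via_multicast):
          peer = endpoint.find_peer(tlv.node_identifier,
                                    tlv.endpoint_identifier)
          if peer is not None:
              return  # already a known peer on this endpoint
          if not via_multicast:
              # received over unicast: add as peer and publish a Peer TLV
              peer = endpoint.add_peer(tlv.node_identifier,
                                       tlv.endpoint_identifier, src_address)
              local_node.publish_peer_tlv(peer)
          else:
              # received over multicast: optionally probe over unicast
              # (request_network_state_tlv() is an assumed helper)
              endpoint.send_unicast(src_address, request_network_state_tlv())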
   If keep-alives specified in Section 6.1 are NOT sent by the peer
   (either the DNCP profile does not specify the use of keep-alives or
   the particular peer chooses not to send keep-alives), some other
   existing local transport-specific means (such as Ethernet carrier-
   detection or TCP keep-alive) MUST be used to ensure its presence.
   If the peer does not send keep-alives, and no means to verify
   presence of the peer are available, the peer MUST be considered no
   longer present and it SHOULD NOT be added back as a peer until it
   starts sending keep-alives again.  When the peer is no longer
   present, the Peer TLV and the local DNCP peer state MUST be removed.
   DNCP does not define an explicit message or TLV for indicating the
   termination of DNCP operation by the terminating node; however, a
   derived protocol could specify an extension, if the need arises.
   If the local endpoint is in the Multicast-Listen+Unicast transport
   mode, a Peer TLV (Section 7.3.1) MUST NOT be published for the peers
   not having the highest node identifier.
4.6.  Data Liveliness Validation

   Maintenance of the hash tree (Section 4.1) and thereby network state
   hash updates depend on up-to-date information on bidirectional node
   reachability derived from the contents of a topology graph.  This
   graph changes whenever nodes are added to or removed from the
   network or when bidirectional connectivity between existing nodes is
   established or lost.  Therefore the graph MUST be updated either
   immediately or with a small delay shorter than the DNCP profile-
   defined Trickle Imin, whenever:

   o  A Peer TLV or a whole node is added or removed, or

   o  the origination time (in milliseconds) of some node's node data
      is less than current time - 2^32 + 2^15.

   The artificial upper limit for the origination time is used to
   gracefully avoid overflows of the origination time and allow for the
   node to republish its data as noted in Section 7.2.3.

   The topology graph update starts with the local node marked as
   reachable and all other nodes marked as unreachable.  Other nodes
   are then iteratively marked as reachable using the following
   algorithm: A candidate not-yet-reachable node N with an endpoint NE
   is marked as reachable if there is a reachable node R with an
   endpoint RE that meet all of the following criteria:

   o  The origination time (in milliseconds) of R's node data is
      greater than current time - 2^32 + 2^15.

   o  R publishes a Peer TLV with:

      *  Peer Node Identifier = N's node identifier

      *  Peer Endpoint Identifier = NE's endpoint identifier
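
   An illustrative sketch of this marking procedure is given below.  It
   models only the criteria visible above plus a simplified reverse
   Peer TLV check implied by the bidirectional reachability definition
   in Section 2; the remaining criteria, including the matching of
   endpoint identifiers in the reverse direction, are part of the full
   criteria list and are omitted here, and all attribute names are
   assumptions.

      def update_reachability(nodes, local_node_id, now_ms):
          # nodes: dict of node_id -> node object carrying .node_id,
          # .origination_time_ms and .peer_tlvs (each with .peer_node_id)
          reachable = {local_node_id}
          changed = True
          while changed:
              changed = False
              for n in nodes.values():
                  if n.node_id in reachable:
                      continue
                  for r_id in list(reachable):
                      r = nodes[r_id]
                      if r.origination_time_ms <= now_ms - 2**32 + 2**15:
                          continue          # R's node data is too old
                      forward = any(p.peer_node_id == n.node_id
                                    for p in r.peer_tlvs)
                      # assumed reverse check for bidirectional reachability
                      backward = any(p.peer_node_id == r.node_id
                                     for p in n.peer_tlvs)
                      if forward and backward:
                          reachable.add(n.node_id)
                          changed = True
                          break
          return reachable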
skipping to change at page 18, line 7
   parameters I, T, and c (only on an endpoint in Unicast mode, when
   using an unreliable unicast transport).
6.  Optional Extensions

   This section specifies extensions to the core protocol that a DNCP
   profile may specify to be used.

6.1.  Keep-Alives
   While DNCP provides mechanisms for discovery and adding of new peers
   on an endpoint (Section 4.5), as well as state change notifications,
   another mechanism may be needed to get rid of old, no longer valid
   peers if the transport or lower layers do not provide one, as noted
   in Section 4.6.
   If keep-alives are not specified in the DNCP profile, the rest of
   this subsection MUST be ignored.

   A DNCP profile MAY specify either per-endpoint (sent using multicast
   to all DNCP nodes connected to a multicast-enabled link) or per-peer
   (sent using unicast to each peer individually) keep-alive support.

   For every endpoint that a keep-alive is specified for in the DNCP
   profile, the endpoint-specific keep-alive interval MUST be
skipping to change at page 18, line 46
   o  Last sent: a timestamp which indicates the last time a Network
      State TLV (Section 7.2.2) was sent over that interface.
   For each remote (peer, endpoint) pair detected on a local endpoint,
   a DNCP node has:

   o  Last contact timestamp: a timestamp which indicates the last time
      a consistent Network State TLV (Section 7.2.2) was received from
      the peer over multicast, or anything was received over unicast.
      Failing to update it for a certain amount of time as specified in
      Section 6.1.5 results in the removal of the peer.  When adding a
      new peer, it is initialized to the current time.

   o  Last sent: If per-peer keep-alives are enabled, a timestamp which
      indicates the last time a Network State TLV (Section 7.2.2) was
      sent to that point-to-point peer.  When adding a new peer, it is
      initialized to the current time.
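
   A trivial sketch of the per-(peer, endpoint) additions above, with
   field names chosen here purely for illustration:

      import time
      from dataclasses import dataclass, field

      @dataclass
      class PeerKeepaliveState:
          # both initialized to the current time when the peer is added
          last_contact: float = field(default_factory=time.time)
          last_sent: float = field(default_factory=time.time)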
6.1.2.  Per-Endpoint Periodic Keep-Alives

   If per-endpoint keep-alives are enabled on an endpoint in
   Multicast+Unicast transport mode, and if no traffic containing a
skipping to change at page 19, line 34
   peer, and a new Trickle interval started, as specified in the step 2
   of Section 4.2 of [RFC6206].
6.1.4.  Received TLV Processing Additions

   If a TLV is received over unicast from the peer, the Last contact
   timestamp for the peer MUST be updated.

   On receipt of a Network State TLV (Section 7.2.2) which is
   consistent with the locally calculated network state hash, the Last
   contact timestamp for the peer MUST be updated in order to maintain
   it as a peer.
6.1.5.  Peer Removal

   For every peer on every endpoint, the endpoint-specific keep-alive
   interval must be calculated by looking for Keep-Alive Interval TLVs
   (Section 7.3.2) published by the node, and if none exist, using the
   default value of DNCP_KEEPALIVE_INTERVAL.  If the peer's Last
   contact timestamp has not been updated for at least locally chosen
   potentially endpoint-specific keep-alive multiplier (defaults to
   DNCP_KEEPALIVE_MULTIPLIER) times the peer's endpoint-specific keep-
   alive interval, the Peer TLV for that peer and the local DNCP peer
   state MUST be removed.
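
   The sketch below illustrates the removal rule above.  The default
   interval and multiplier values, and the attribute names used for
   peers and their published Keep-Alive Interval TLVs, are assumptions
   for illustration; actual defaults come from the DNCP profile.

      DNCP_KEEPALIVE_INTERVAL = 30.0    # seconds, assumed profile default
      DNCP_KEEPALIVE_MULTIPLIER = 2.1   # assumed profile default

      def keepalive_interval(peer_node, remote_endpoint_id):
          # Prefer the Keep-Alive Interval TLV for the peer's endpoint,
          # then the endpoint-identifier-0 TLV, then the default.
          by_endpoint = {t.endpoint_identifier: t.interval_ms / 1000.0
                         for t in peer_node.keepalive_interval_tlvs}
          return by_endpoint.get(remote_endpoint_id,
                                 by_endpoint.get(0, DNCP_KEEPALIVE_INTERVAL))

      def remove_expired_peers(endpoint, now,
                               multiplier=DNCP_KEEPALIVE_MULTIPLIER):
          for peer in list(endpoint.peers):
              limit = multiplier * keepalive_interval(peer.node,
                                                      peer.remote_endpoint_id)
              if now - peer.last_contact > limit:
                  # removes the Peer TLV and the local DNCP peer state
                  endpoint.remove_peer(peer)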
6.2.  Support For Dense Multicast-Enabled Links

   This optimization is needed to avoid a state space explosion.  Given
   a large set of DNCP nodes publishing data on an endpoint that uses
skipping to change at page 20, line 48
   identifier detected on the link, therefore transitioning to
   Multicast-listen+Unicast transport mode.  See Section 4.2 for
   implications on the specific endpoint behavior.  The nodes in
   Multicast-listen+Unicast transport mode MUST keep listening to
   multicast traffic both to receive messages from the node(s) still in
   Multicast+Unicast mode and to react to nodes with a greater node
   identifier appearing.  If the highest node identifier present on the
   link changes, the remote unicast address of the endpoints in
   Multicast-Listen+Unicast transport mode MUST be changed.  If the
   node identifier of the local node is the highest one, the node MUST
   switch back to, or stay in Multicast+Unicast mode, and form peer
   relationships with all peers as specified in Section 4.5.
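
   A rough sketch of the resulting mode selection on such a dense link,
   assuming node identifiers are comparable byte strings and that the
   endpoint object tracks the highest identifier heard on the link;
   this illustrates only the rule above, not all of Section 6.2.

      def select_dense_link_mode(endpoint, local_node_id, highest_heard_id):
          if local_node_id >= highest_heard_id:
              # local node has the highest identifier: peer with everyone
              endpoint.mode = "Multicast+Unicast"
              endpoint.unicast_peer_id = None
          else:
              # otherwise peer (over unicast) only with the highest node
              endpoint.mode = "MulticastListen+Unicast"
              endpoint.unicast_peer_id = highest_heard_id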
7.  Type-Length-Value Objects

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |             Type              |            Length             |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |              Value (if any) (+padding (if any))               |
   ..
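
   A minimal sketch of encoding and decoding this framing is shown
   below.  It assumes, as in the full field descriptions elided from
   this excerpt, that Length counts only the Value octets and that each
   TLV is zero-padded to a 32-bit boundary:

      import struct

      def encode_tlv(tlv_type, value):
          padding = b"\x00" * ((-len(value)) % 4)
          return struct.pack("!HH", tlv_type, len(value)) + value + padding

      def decode_tlvs(buffer):
          tlvs, offset = [], 0
          while offset + 4 <= len(buffer):
              tlv_type, length = struct.unpack_from("!HH", buffer, offset)
              if offset + 4 + length > len(buffer):
                  break                        # truncated TLV, stop parsing
              value = buffer[offset + 4:offset + 4 + length]
              tlvs.append((tlv_type, value))
              offset += 4 + length + ((-length) % 4)   # skip padding
          return tlvs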
skipping to change at page 25, line 25
   | Type: KEEP-ALIVE-INTERVAL (9) |         Length: >= 8          |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      Endpoint Identifier                      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                           Interval                            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   This TLV indicates a non-default interval being used to send keep-
   alives specified in Section 6.1.

   Endpoint identifier is used to identify the particular (local)
   endpoint for which the interval applies on the sending node.  If 0,
   it applies for ALL endpoints for which no specific TLV exists.

   Interval specifies the interval in milliseconds at which the node
   sends keep-alives.  A value of zero means no keep-alives are sent at
   all; in that case, some lower layer mechanism that ensures presence
   of nodes MUST be available and used.
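
   Building on the encode_tlv() sketch at the beginning of Section 7, a
   Keep-Alive Interval TLV could be constructed as follows; the 32-bit
   field widths follow the diagram above, and the example values are
   illustrative only:

      import struct

      KEEP_ALIVE_INTERVAL_TLV = 9

      def encode_keepalive_interval_tlv(endpoint_identifier, interval_ms):
          value = struct.pack("!II", endpoint_identifier, interval_ms)
          return encode_tlv(KEEP_ALIVE_INTERVAL_TLV, value)

      # e.g., advertising a 10-second interval applying to all endpoints:
      # encode_keepalive_interval_tlv(0, 10000)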
8.  Security and Trust Management

   If specified in the DNCP profile, either DTLS [RFC6347] or TLS
   [RFC5246] may be used to authenticate and encrypt either some (if
skipping to change at page 33, line 47
   [RFC6347]  Rescorla, E. and N. Modadugu, "Datagram Transport Layer
              Security Version 1.2", RFC 6347, January 2012.

   [RFC5246]  Dierks, T. and E. Rescorla, "The Transport Layer Security
              (TLS) Protocol Version 1.2", RFC 5246, August 2008.

   [RFC7435]  Dukhovni, V., "Opportunistic Security: Some Protection
              Most of the Time", RFC 7435, DOI 10.17487/RFC7435,
              December 2014, <http://www.rfc-editor.org/info/rfc7435>.
[I-D.ietf-homenet-prefix-assignment]
Pfister, P., Paterson, B., and J. Arkko, "Distributed
Prefix Assignment Algorithm", draft-ietf-homenet-prefix-
assignment-08 (work in progress), August 2015.
Appendix A.  Alternative Modes of Operation

   Beyond what is described in the main text, the protocol allows for
   other uses.  These are provided as examples.

A.1.  Read-only Operation
   If a node uses just a single endpoint and does not need to publish
   any TLVs, full DNCP node functionality is not required.  Such a
   limited node can acquire and maintain a view of the TLV space by
   implementing