RIFT Working Group                                    A. Przygienda, Ed.
Internet-Draft                                                   Juniper
Intended status: Standards Track                               A. Sharma
Expires: May 7, 2020                                             Comcast
                                                              P. Thubert
                                                                   Cisco
                                                              B. Rijsman
                                                              Individual
                                                            D. Afanasiev
                                                                  Yandex
                                                        November 4, 2019

                       RIFT: Routing in Fat Trees
                        draft-ietf-rift-rift-09

Abstract

   This document defines a specialized, dynamic routing protocol for
   Clos and fat-tree network topologies optimized towards minimization
   of configuration and operational complexity.  The protocol (1)

      deals with no configuration, fully automated construction of
      fat-tree topologies based on detection of links, (2)

      minimizes the amount of routing state held at each level, (3)

      automatically prunes and load balances topology flooding exchanges
      over a sufficient subset of links, (4)

      supports automatic disaggregation of prefixes on link and node
      failures to prevent black-holing and suboptimal routing, (5)

      allows traffic steering and re-routing policies, (6)

      allows loop-free non-ECMP forwarding, (7)

      automatically re-balances traffic towards the spines based on
      bandwidth available and finally (8)

      provides mechanisms to synchronize a limited key-value data-store
      that can be used after protocol convergence to e.g.  bootstrap
      higher levels of functionality on nodes.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 7, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Authors
   2.  Introduction
     2.1.  Requirements Language
   3.  Reference Frame
     3.1.  Terminology
     3.2.  Topology
   4.  RIFT: Routing in Fat Trees
     4.1.  Overview
       4.1.1.  Properties
       4.1.2.  Generalized Topology View
         4.1.2.1.  Terminology
         4.1.2.2.  Clos as Crossed Crossbars
       4.1.3.  Fallen Leaf Problem
       4.1.4.  Discovering Fallen Leaves
       4.1.5.  Addressing the Fallen Leaves Problem
     4.2.  Specification
       4.2.1.  Transport
       4.2.2.  Link (Neighbor) Discovery (LIE Exchange)
         4.2.2.1.  LIE FSM
       4.2.3.  Topology Exchange (TIE Exchange)
         4.2.3.1.  Topology Information Elements
         4.2.3.2.  South- and Northbound Representation
         4.2.3.3.  Flooding
         4.2.3.4.  TIE Flooding Scopes
         4.2.3.5.  'Flood Only Node TIEs' Bit
         4.2.3.6.  Initial and Periodic Database Synchronization
         4.2.3.7.  Purging and Roll-Overs
         4.2.3.8.  Southbound Default Route Origination
         4.2.3.9.  Northbound TIE Flooding Reduction
         4.2.3.10. Special Considerations
       4.2.4.  Reachability Computation
         4.2.4.1.  Northbound SPF
         4.2.4.2.  Southbound SPF
         4.2.4.3.  East-West Forwarding Within a non-ToF Level
         4.2.4.4.  East-West Links Within ToF Level
       4.2.5.  Automatic Disaggregation on Link & Node Failures
         4.2.5.1.  Positive, Non-transitive Disaggregation
         4.2.5.2.  Negative, Transitive Disaggregation for Fallen
                   Leafs
       4.2.6.  Attaching Prefixes
       4.2.7.  Optional Zero Touch Provisioning (ZTP)
         4.2.7.1.  Terminology
         4.2.7.2.  Automatic SystemID Selection
         4.2.7.3.  Generic Fabric Example
         4.2.7.4.  Level Determination Procedure
         4.2.7.5.  ZTP FSM
         4.2.7.6.  Resulting Topologies
       4.2.8.  Stability Considerations
     4.3.  Further Mechanisms
       4.3.1.  Overload Bit
       4.3.2.  Optimized Route Computation on Leafs
       4.3.3.  Mobility
         4.3.3.1.  Clock Comparison
         4.3.3.2.  Interaction between Time Stamps and Sequence
                   Counters
         4.3.3.3.  Anycast vs. Unicast
         4.3.3.4.  Overlays and Signaling
       4.3.4.  Key/Value Store
         4.3.4.1.  Southbound
         4.3.4.2.  Northbound
       4.3.5.  Interactions with BFD
       4.3.6.  Fabric Bandwidth Balancing
         4.3.6.1.  Northbound Direction
         4.3.6.2.  Southbound Direction
       4.3.7.  Label Binding
       4.3.8.  Leaf to Leaf Procedures
       4.3.9.  Address Family and Multi Topology Considerations
       4.3.10. Reachability of Internal Nodes in the Fabric
       4.3.11. One-Hop Healing of Levels with East-West Links
     4.4.  Security
       4.4.1.  Security Model
       4.4.2.  Security Mechanisms
       4.4.3.  Security Envelope
       4.4.4.  Weak Nonces
       4.4.5.  Lifetime
       4.4.6.  Key Management
       4.4.7.  Security Association Changes
   5.  Examples
     5.1.  Normal Operation
     5.2.  Leaf Link Failure
     5.3.  Partitioned Fabric
     5.4.  Northbound Partitioned Router and Optional East-West
           Links
   6.  Implementation and Operation: Further Details
     6.1.  Considerations for Leaf-Only Implementation
     6.2.  Considerations for Spine Implementation
     6.3.  Adaptations to Other Proposed Data Center Topologies
     6.4.  Originating Non-Default Route Southbound
   7.  Security Considerations
     7.1.  General
     7.2.  ZTP
     7.3.  Lifetime
     7.4.  Packet Number
     7.5.  Outer Fingerprint Attacks
     7.6.  TIE Origin Fingerprint DoS Attacks
     7.7.  Host Implementations
   8.  IANA Considerations
     8.1.  Requested Multicast and Port Numbers
     8.2.  Requested Registries with Suggested Values
       8.2.1.  Registry RIFT/common/AddressFamilyType
         8.2.1.1.  Requested Entries
       8.2.2.  Registry RIFT/common/HierarchyIndications
         8.2.2.1.  Requested Entries
       8.2.3.  Registry RIFT/common/IEEE802_1ASTimeStampType
         8.2.3.1.  Requested Entries
       8.2.4.  Registry RIFT/common/IPAddressType
         8.2.4.1.  Requested Entries
       8.2.5.  Registry RIFT/common/IPPrefixType
         8.2.5.1.  Requested Entries
       8.2.6.  Registry RIFT/common/IPv4PrefixType
         8.2.6.1.  Requested Entries
       8.2.7.  Registry RIFT/common/IPv6PrefixType
         8.2.7.1.  Requested Entries
       8.2.8.  Registry RIFT/common/PrefixSequenceType
         8.2.8.1.  Requested Entries
       8.2.9.  Registry RIFT/common/RouteType
         8.2.9.1.  Requested Entries
       8.2.10. Registry RIFT/common/TIETypeType
         8.2.10.1.  Requested Entries
       8.2.11. Registry RIFT/common/TieDirectionType
         8.2.11.1.  Requested Entries
       8.2.12. Registry RIFT/encoding/Community
         8.2.12.1.  Requested Entries
       8.2.13. Registry RIFT/encoding/KeyValueTIEElement
         8.2.13.1.  Requested Entries
       8.2.14. Registry RIFT/encoding/LIEPacket
         8.2.14.1.  Requested Entries
       8.2.15. Registry RIFT/encoding/LinkCapabilities
         8.2.15.1.  Requested Entries
       8.2.16. Registry RIFT/encoding/LinkIDPair
         8.2.16.1.  Requested Entries
       8.2.17. Registry RIFT/encoding/Neighbor
         8.2.17.1.  Requested Entries
       8.2.18. Registry RIFT/encoding/NodeCapabilities
         8.2.18.1.  Requested Entries
       8.2.19. Registry RIFT/encoding/NodeFlags
         8.2.19.1.  Requested Entries
       8.2.20. Registry RIFT/encoding/NodeNeighborsTIEElement
         8.2.20.1.  Requested Entries
       8.2.21. Registry RIFT/encoding/NodeTIEElement
         8.2.21.1.  Requested Entries
       8.2.22. Registry RIFT/encoding/PacketContent
         8.2.22.1.  Requested Entries
       8.2.23. Registry RIFT/encoding/PacketHeader
         8.2.23.1.  Requested Entries
       8.2.24. Registry RIFT/encoding/PrefixAttributes
         8.2.24.1.  Requested Entries
       8.2.25. Registry RIFT/encoding/PrefixTIEElement
         8.2.25.1.  Requested Entries
       8.2.26. Registry RIFT/encoding/ProtocolPacket
         8.2.26.1.  Requested Entries
       8.2.27. Registry RIFT/encoding/TIDEPacket
         8.2.27.1.  Requested Entries
       8.2.28. Registry RIFT/encoding/TIEElement
         8.2.28.1.  Requested Entries
       8.2.29. Registry RIFT/encoding/TIEHeader
         8.2.29.1.  Requested Entries
       8.2.30. Registry RIFT/encoding/TIEHeaderWithLifeTime
         8.2.30.1.  Requested Entries
       8.2.31. Registry RIFT/encoding/TIEID
         8.2.31.1.  Requested Entries
       8.2.32. Registry RIFT/encoding/TIEPacket
         8.2.32.1.  Requested Entries
       8.2.33. Registry RIFT/encoding/TIREPacket
         8.2.33.1.  Requested Entries
   9.  Acknowledgments
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Appendix A.  Sequence Number Binary Arithmetic
   Appendix B.  Information Elements Schema
     B.1.  common.thrift
     B.2.  encoding.thrift
   Appendix C.  Constants
     C.1.  Configurable Protocol Constants
   Authors' Addresses

1.  Authors

   This work is a product of a list of individuals who are all to be
   considered major contributors, independent of whether their name made
   it to the limited boilerplate author's list or not.

         Tony Przygienda, Ed. | Alankar Sharma | Pascal Thubert
         Juniper Networks     | Comcast        | Cisco

         Bruno Rijsman        | Ilya Vershkov  | Dmitry Afanasiev
         Individual           | Mellanox       | Yandex

         Don Fedyk            | Alia Atlas     | John Drake
         Individual           | Individual     | Juniper

                           Table 1: RIFT Authors

2.  Introduction

   Clos [CLOS] and Fat-Tree [FATTREE] topologies have gained prominence
   in today's networking, primarily as a result of the paradigm shift
   towards a centralized data-center based architecture that is poised
   to deliver a majority of computation and storage services in the
   future.  Today's routing protocols were originally geared towards
   networks with an irregular topology and a low degree of connectivity
   but, given they were the only available options, several attempts to
   apply those protocols to Clos have been made.  Most successfully, BGP
   [RFC4271] [RFC7938] has been extended to this purpose, not as much
   due to its inherent suitability but rather because of the perceived
   capability to easily modify BGP and the immanent difficulties of
   link-state [DIJKSTRA] based protocols to optimize topology exchange
   and converge quickly in large scale densely meshed topologies.  The
   incumbent protocols normally precondition extensive configuration or
   provisioning during bring up and re-dimensioning.  This tends to be
   viable only for a set of organizations with according networking
   operation skills and budgets.  For many IP fabric builders a
   desirable protocol would be one that auto-configures itself and deals
   with failures and mis-configurations with a minimum of human
   intervention.  Such a solution would allow local IP fabric bandwidth
   to be consumed in a 'standard component' fashion, i.e. provision it
   much faster and operate it at much lower costs than today, much like
   compute or storage is consumed already.

   RIFT addresses challenges in IP fabric routing not through an
   incremental modification of either a link-state (distributed
   computation) or distance-vector (diffused computation) approach but
   rather a mixture of both, colloquially best described as "link-state
   towards the spine" and "distance vector towards the leafs".  In other
   words, "bottom" levels are flooding their link-state information in
   the "northern" direction while each node generates under normal
   conditions a "default route" and floods it in the "southern"
   direction.  This type of protocol allows naturally for highly
   desirable aggregation.  Alas, such aggregation could blackhole
   traffic in cases of misconfiguration or while failures are being
   resolved or even cause partial network partitioning and this has to
   be addressed by some adequate mechanism.  The approach RIFT takes is
   described in Section 4.2.5 and is based on automatic, sufficient
   disaggregation of prefixes in case of link and node failures.
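
   The interplay of the southbound default route with disaggregated
   prefixes can be sketched with a purely illustrative forwarding model
   (the RIB structure and next-hop names such as "spine-C" are invented
   for illustration and are not part of this specification): a node
   forwards on its longest matching prefix, so a more specific
   disaggregated prefix steers only the affected traffic away from a
   failure while everything else keeps following the default.

```python
import ipaddress

# Hedged sketch, not protocol machinery: RIB keys and next-hop names
# ("spine-C" etc.) are invented for illustration.

def best_route(rib, destination):
    """Return next-hops of the longest matching prefix for destination."""
    dest = ipaddress.ip_address(destination)
    matches = [(prefix, nhs) for prefix, nhs in rib.items()
               if dest in ipaddress.ip_network(prefix)]
    # longest-prefix match: the most specific matching prefix wins
    _, next_hops = max(
        matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)
    return next_hops

# Normal condition: a leaf holds only a default, balanced over C and D.
rib = {"0.0.0.0/0": ["spine-C", "spine-D"]}
assert best_route(rib, "10.1.1.1") == ["spine-C", "spine-D"]

# After D loses its link towards 10.1.1.0/24, C disaggregates the
# prefix; only traffic to that prefix avoids D, the rest stays balanced.
rib["10.1.1.0/24"] = ["spine-C"]
assert best_route(rib, "10.1.1.1") == ["spine-C"]
assert best_route(rib, "10.2.2.2") == ["spine-C", "spine-D"]
```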

   For the visually oriented reader, Figure 1 presents a first level
   simplified view of the resulting information and routes on a RIFT
   fabric.  The top of the fabric holds in its link-state database the
   nodes below it and the routes to them.  In the second row of the
   database table we indicate that partial information of other nodes in
   the same level is available as well.  The details of how this is
   achieved will be postponed for the moment.  When we look at the
   "bottom" of the fabric, the leafs, we see that the topology is
   basically empty and, under normal conditions, they only hold a load
   balanced default route to the next level.

   The balance of this document details the requirements of a dedicated
   IP fabric routing protocol, fills in the specification details and
   ultimately includes resulting security considerations.

              .                                  [A,B,C,D]
              .                                  [E]
              .             +-----+      +-----+
              .             |  E  |      |  F  | A/32 @ [C,D]
              .             +-+-+-+      +-+-+-+ B/32 @ [C,D]
              .               | |          | |   C/32 @ C
              .               | |    +-----+ |   D/32 @ D
              .               | |    |       |
              .               | +------+     |
              .               |      | |     |
              .       [A,B] +-+---+  | | +---+-+ [A,B]
              .       [D]   |  C  +--+ +-+  D  | [C]
              .             +-+-+-+      +-+-+-+
              .  0/0  @ [E,F] | |          | |   0/0  @ [E,F]
              .  A/32 @ A     | |    +-----+ |   A/32 @ A
              .  B/32 @ B     | |    |       |   B/32 @ B
              .               | +------+     |
              .               |      | |     |
              .             +-+---+  | | +---+-+
              .             |  A  +--+ +-+  B  |
              . 0/0 @ [C,D] +-----+      +-----+ 0/0 @ [C,D]

                  Figure 1: RIFT information distribution
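
   The databases of Figure 1 can be approximated with a deliberately
   simplified sketch (node and prefix names follow the figure; the
   level-by-level flooding loop is an illustration only, not the TIE
   machinery specified later): prefixes accumulate northbound while only
   a default route is originated southbound.

```python
# Illustrative only: Figure 1's node/prefix names, with a naive
# "flood north" loop instead of real TIE exchange.
fabric = {  # node -> (level, northbound neighbors)
    "A": (0, ["C", "D"]), "B": (0, ["C", "D"]),
    "C": (1, ["E", "F"]), "D": (1, ["E", "F"]),
    "E": (2, []), "F": (2, []),
}
own_prefix = {node: node + "/32" for node in fabric}

# Northbound: each node floods its own prefix plus everything it
# learned from the south, so higher levels accumulate the topology.
northbound_db = {node: set() for node in fabric}
for node, (level, norths) in sorted(fabric.items(),
                                    key=lambda kv: kv[1][0]):
    flood = {own_prefix[node]} | northbound_db[node]
    for north in norths:
        northbound_db[north] |= flood

# Southbound: any node with a level above it holds a default route;
# the ToF (E, F) holds none since it knows the whole topology.
default_route = {node: bool(fabric[node][1]) for node in fabric}

assert northbound_db["E"] == {"A/32", "B/32", "C/32", "D/32"}
assert northbound_db["C"] == {"A/32", "B/32"}
assert default_route["A"] and not default_route["E"]
```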

2.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 8174 [RFC8174].

3.  Reference Frame

3.1.  Terminology

   This section presents the terminology used in this document.  It is
   assumed that the reader is thoroughly familiar with the terms and
   concepts used in OSPF [RFC2328] and IS-IS [ISO10589] as well as the
   according graph theoretical concepts of shortest path first (SPF)
   [DIJKSTRA] computation and DAGs.

   Crossbar:  Physical arrangement of ports in a switching matrix
      without implying any further scheduling or buffering disciplines.

   Clos/Fat Tree:  This document uses the terms Clos and Fat Tree
      interchangeably whereas it always refers to a folded spine-and-
      leaf topology with possibly multiple Points of Delivery (PoDs) and
      one or multiple Top of Fabric (ToF) planes.  Several modifications
      such as leaf-2-leaf shortcuts and multiple level shortcuts are
      possible and described further in the document.

   Directed Acyclic Graph (DAG):  A finite directed graph with no
      directed cycles (loops).  If links in Clos are considered as
      either being all directed towards the top or vice versa, each of
      such two graphs is a DAG.

   Folded Spine-and-Leaf:  In case Clos fabric input and output stages
      are analogous, the fabric can be "folded" to build a "superspine"
      or top which we will call Top of Fabric (ToF) in this document.

   Level:  Clos and Fat Tree networks are topologically partially
      ordered graphs and 'level' denotes the set of nodes at the same
      height in such a network, where the bottom level (leaf) is the
      level with lowest value.  A node has links to nodes one level down
      and/or one level up.  Under some circumstances, a node may have
      links to nodes at the same level.  As a footnote: Clos terminology
      often uses the concept of "stage" but, due to the folded nature of
      the Fat Tree, we do not use it to prevent misunderstandings.

   Superspine/Aggregation or Spine/Edge Levels:  Traditional names in
      5-stages folded Clos for Level 2, 1 and 0 respectively.  Level 0
      is often called leaf as well.  We normalize this language to talk
      about leafs, spines and top-of-fabric (ToF).

   Zero Touch Provisioning (ZTP):  Optional RIFT mechanism which allows
      nodes to derive their levels automatically based on minimum
      configuration (only the ToF property has to be provisioned on
      according nodes).

   Point of Delivery (PoD):  A self-contained vertical slice or subset
      of a Clos or Fat Tree network containing normally only level 0 and
      level 1 nodes.  A node in a PoD communicates with nodes in other
      PoDs via the Top-of-Fabric.  We number PoDs to distinguish them
      and use PoD #0 to denote "undefined" PoD.

   Top of PoD (ToP):  The set of nodes that provide intra-PoD
      communication and have northbound adjacencies outside of the PoD,
      i.e. are at the "top" of the PoD.

   Top of Fabric (ToF):  The set of nodes that provide inter-PoD
      communication and have no northbound adjacencies, i.e. are at the
      "very top" of the fabric.  ToF nodes do not belong to any PoD and
      are assigned "undefined" PoD value to indicate the equivalent of
      "any" PoD.

   Spine:  Any nodes north of leafs and south of top-of-fabric nodes.
      Multiple layers of spines in a PoD are possible.

   Leaf:  A node without southbound adjacencies.  Its level is 0 (except
      cases where it is deriving its level via ZTP and is running
      without LEAF_ONLY which will be explained in Section 4.2.7).

   Top-of-fabric Plane or Partition:  In large fabrics top-of-fabric
      switches may not have enough ports to aggregate all switches south
      of them and with that, the ToF is 'split' into multiple
      independent planes.  The Introduction and Section 4.1.2 explain
      the concept in more detail.  A plane is a subset of ToF nodes that
      see each other through south reflection or E-W links.

   Radix:  A radix of a switch is basically the number of switching
      ports it provides.  It is sometimes called fanout as well.

   North Radix:  Ports cabled northbound to higher level nodes.

   South Radix:  Ports cabled southbound to lower level nodes.

   South/Southbound and North/Northbound (Direction):  When describing
      protocol elements and procedures, we will be using in different
      situations the directionality of the compass.  I.e., 'south' or
      'southbound' mean moving towards the bottom of the Clos or Fat
      Tree network and 'north' and 'northbound' mean moving towards the
      top of the Clos or Fat Tree network.

   Northbound Link:  A link to a node one level up or in other words,
      one level further north.

   Southbound Link:  A link to a node one level down or in other words,
      one level further south.

   East-West Link:  A link between two nodes at the same level.  East-
      West links are normally not part of Clos or "fat-tree" topologies.

   Leaf shortcuts (L2L):  East-West links at leaf level will need to be
      differentiated from East-West links at other levels.

   Routing on the host (RotH):  Modern data center architecture variant
      where servers/leafs are multi-homed and consequently participate
      in routing.

   Northbound representation:  Subset of topology information flooded
      towards higher levels of the fabric.

   Southbound representation:  Subset of topology information sent
      towards a lower level.

   South Reflection:  Often abbreviated just as "reflection", it defines
      a mechanism where South Node TIEs are "reflected" from the level
      south back up north to allow nodes in the same level without E-W
      links to "see" each other's node TIEs.

   TIE:  This is an acronym for a "Topology Information Element".  TIEs
      are exchanged between RIFT nodes to describe parts of a network
      such as links and address prefixes, in a fashion similar to ISIS
      LSPs or OSPF LSAs.  A TIE always has a direction and a type.  We
      will talk about North TIEs (sometimes abbreviated as N-TIEs) when
      talking about TIEs in the northbound representation and South
      TIEs (sometimes abbreviated as S-TIEs) for the southbound
      equivalent.  TIEs have different types such as node and prefix
      TIEs.

   Node TIE:  This is an acronym for a "Node Topology Information
      Element", which contains all adjacencies the node discovered and
      information about the node itself.  A Node TIE should NOT be
      confused with a North TIE since "node" defines the type of TIE
      rather than its direction.

   Prefix TIE:  This is an acronym for a "Prefix Topology Information
      Element" and it contains all prefixes directly attached to this
      node in case of a North TIE and in case of a South TIE the
      necessary default routes the node advertises southbound.

   Key Value TIE:  A South TIE that is carrying a set of key value
      pairs [DYNAMO].  It can be used to distribute information in the
      southbound direction within the protocol.

   TIDE:  Topology Information Description Element, equivalent to CSNP
      in ISIS.

   TIRE:  Topology Information Request Element, equivalent to PSNP in
      ISIS.  It can both confirm received and request missing TIEs.

   De-aggregation/Disaggregation:  Process in which a node decides to
      advertise certain prefixes it received in North TIEs to prevent
      black-holing and suboptimal routing upon link failures.

   LIE:  This is an acronym for a "Link Information Element", largely
      equivalent to HELLOs in IGPs and exchanged over all the links
      between systems running RIFT to form 3-way adjacencies.

   Flood Repeater (FR):  A node can designate one or more northbound
      neighbor nodes to be flood repeaters.  The flood repeaters are
      responsible for flooding northbound TIEs further north.  They are
      similar to MPRs in OLSR.  The document sometimes calls them flood
      leaders as well.

   Bandwidth Adjusted Distance (BAD):  This is an acronym for Bandwidth
      Adjusted Distance.  Each RIFT node calculates the amount of
      northbound bandwidth available towards a node compared to other
      nodes at the same level and modifies the default route distance
      accordingly to allow for the lower level to adjust their load
      balancing towards spines.
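
      The effect of BAD can be illustrated with a short sketch.  This is
      purely illustrative Python, NOT the normative BAD procedure
      defined later in this document; the function name and the simple
      ratio-based penalty are assumptions made for the example only.

```python
# Illustrative sketch only: a node compares its total northbound
# bandwidth with the best-connected node at its level and worsens the
# default route distance it advertises southbound when it has less
# capacity, so lower levels shift load towards better-connected spines.

def bandwidth_adjusted_distance(own_northbound_bw, best_northbound_bw,
                                base_distance=1):
    """Return a distance >= base_distance; grows as this node's
    northbound bandwidth shrinks relative to the best peer."""
    if own_northbound_bw <= 0:
        return float("inf")  # no northbound capacity left at all
    ratio = best_northbound_bw / own_northbound_bw
    # Whole-number increments are an assumption made for readability.
    return base_distance + int(round(ratio)) - 1

# A spine with half the uplink bandwidth advertises a worse default:
assert bandwidth_adjusted_distance(100, 100) == 1
assert bandwidth_adjusted_distance(50, 100) == 2
```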

   Overloaded:  Applies to a node advertising `overload` attribute as
      set.  The semantics closely follow the meaning of the same
      attribute in [ISO10589-Second-Edition].

   Interface:  A layer 3 entity over which RIFT control packets are
      exchanged.

   3-Way Adjacency:  RIFT tries to form a unique adjacency over an
      interface and exchange local configuration and necessary ZTP
      information.  An adjacency is only advertised in node TIEs and
      used for computations after it achieved 3-way state, i.e. both
      routers reflected each other in LIEs including relevant security
      information.  LIEs before 3-way state is reached may carry ZTP
      related information already.

   Bi-directional Adjacency:  Bidirectional adjacency is an adjacency
      where nodes of both sides of the adjacency advertised it in the
      node TIEs with the correct levels and system IDs.  Bi-
      directionality is used to check in different algorithms whether
      the link should be included.

   Neighbor:  Once a 3-way adjacency has been formed, a neighborship
      relationship contains the neighbor's properties.  Multiple
      adjacencies can be formed to a remote node via parallel interfaces
      but such adjacencies are NOT sharing a neighbor structure.  Saying
      "neighbor" is thus equivalent to saying "a 3-way adjacency".

   Cost:  The term signifies the weighted distance between two
      neighbors.

   Distance:  Sum of costs (bound by infinite distance) between two
      nodes.

   Metric:  Without going deeper into the proper differentiation, a
      metric is equivalent to distance.

   Shortest-Path First (SPF):  A well-known graph algorithm attributed
      to Dijkstra that establishes a tree of shortest paths from a
      source to destinations on the graph.  We use the SPF acronym due
      to its familiarity as a general term for the node reachability
      calculations RIFT can employ to ultimately calculate routes, of
      which the Dijkstra algorithm is one.

   North SPF (N-SPF):  A reachability calculation that is progressing
      northbound, for example an SPF that is using South Node TIEs
      only.

   South SPF (S-SPF):  A reachability calculation that is progressing
      southbound, for example an SPF that is using North Node TIEs
      only.

   Security Envelope:  RIFT packets are flooded within an authenticated
      security envelope that allows a node to protect the integrity of
      the information it accepts.
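
   The TIE-related definitions above can be summarized in a small
   sketch of how an implementation might key its TIE database.  This is
   purely illustrative Python, not the normative encoding; the field
   and type names are assumptions, and the schema defined later in this
   document is authoritative.

```python
# Illustrative sketch of a TIE database key: every TIE has a direction
# (North/South), an originating system, and a type (Node, Prefix, ...).
from enum import Enum
from typing import NamedTuple

class Direction(Enum):
    NORTH = "North"   # flooded northbound (N-TIE)
    SOUTH = "South"   # flooded southbound (S-TIE)

class TieType(Enum):
    NODE = "Node"
    PREFIX = "Prefix"
    KEY_VALUE = "KeyValue"

class TieId(NamedTuple):
    direction: Direction  # every TIE has a direction ...
    originator: int       # ... an originating system ID ...
    tie_type: TieType     # ... and a type

# A Node South TIE and a Node North TIE of one node are distinct:
db = {
    TieId(Direction.SOUTH, 42, TieType.NODE): "adjacencies of node 42",
    TieId(Direction.NORTH, 42, TieType.NODE): "adjacencies of node 42",
}
assert len(db) == 2
```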

3.2.  Topology

    .                +--------+          +--------+          ^ N
    .                |ToF   21|          |ToF   22|          |
    .Level 2         ++-+--+-++          ++-+--+-++        <-*-> E/W
    .                 | |  | |            | |  | |           |
    .             P111/2|  |P121          | |  | |         S v
    .                 ^ ^  ^ ^            | |  | |
    .                 | |  | |            | |  | |
    .  +--------------+ |  +-----------+  | |  | +---------------+
    .  |                |    |         |  | |  |                 |
    . South +-----------------------------+ |  |                 ^
    .  |    |           |    |         |    |  |              All TIEs
    .  0/0  0/0        0/0   +-----------------------------+     |
    .  v    v           v              |    |  |           |     |
    .  |    |           +-+    +<-0/0----------+           |     |
    .  |    |             |    |       |    |              |     |
    .+-+----++ optional +-+----++     ++----+-+           ++-----++
    .|       | E/W link |       |     |       |           |       |
    .|Spin111+----------+Spin112|     |Spin121|           |Spin122|
    .+-+---+-+          ++----+-+     +-+---+-+           ++---+--+
    .  |   |             |   South      |   |              |   |
    .  |   +---0/0--->-----+ 0/0        |   +----------------+ |
    . 0/0                | |  |         |                  | | |
    .  |   +---<-0/0-----+ |  v         |   +--------------+ | |
    .  v   |               |  |         |   |                | |
    .+-+---+-+          +--+--+-+     +-+---+-+          +---+-+-+
    .|       |  (L2L)   |       |     |       |  Level 0 |       |
    .|Leaf111~~~~~~~~~~~~Leaf112|     |Leaf121|          |Leaf122|
    .+-+-----+          +-+---+-+     +--+--+-+          +-+-----+
    .  +                  +    \        /   +              +
    .  Prefix111   Prefix112    \      /   Prefix121    Prefix122
    .                          multi-homed
    .                            Prefix
    .+---------- Pod 1 ---------+     +---------- Pod 2 ---------+

              Figure 2: A three level spine-and-leaf topology

                    .+--------+  +--------+  +--------+  +--------+
                    .|ToF   A1|  |ToF   B1|  |ToF   B2|  |ToF   A2|
                    .++-+-----+  ++-+-----+  ++-+-----+  ++-+-----+
                    . | |         | |         | |         | |
                    . | |         | |         | +---------------+
                    . | |         | |         |           | |   |
                    . | |         | +-------------------------+ |
                    . | |         |           |           | | | |
                    . | +-----------------------+         | | | |
                    . |           |           | |         | | | |
                    . |           | +---------+ | +---------+ | |
                    . |           | |           | |       |   | |
                    . | +---------------------------------+   | |
                    . | |         | |           | |           | |
                    .++-+-----+  ++-+-----+  +--+-+---+  +----+-+-+
                    .|Spine111|  |Spine112|  |Spine121|  |Spine122|
                    .+-+---+--+  ++----+--+  +-+---+--+  ++---+---+
                    .  |   |      |    |       |   |      |   |
                    .  |   +--------+  |       |   +--------+ |
                    .  |          | |  |       |          | | |
                    .  |   -------+ |  |       |   +------+ | |
                    .  |   |        |  |       |   |        | |
                    .+-+---+-+   +--+--+-+   +-+---+-+  +---+-+-+
                    .|Leaf111|   |Leaf112|   |Leaf121|  |Leaf122|
                    .+-------+   +-------+   +-------+  +-------+

                  Figure 3: Topology with multiple planes

   We will use the topology in Figure 2 (commonly called a fat tree/
   network in modern IP fabric considerations [VAHDAT08], as a homonym
   to the original definition of the term [FATTREE]) in all further
   considerations.  This figure depicts a generic "single plane fat-
   tree" and the concepts explained using three levels apply by
   induction to further levels and higher degrees of connectivity.
   Further, this document will deal also with designs that provide only
   sparser connectivity and "partitioned spines" as shown in Figure 3
   and explained further in Section 4.1.2.

4.  RIFT: Routing in Fat Trees

   We present here a detailed outline of a protocol optimized for
   Routing in Fat Trees (RIFT) that in most abstract terms has many
   properties of a modified link-state protocol
   [RFC2328][ISO10589-Second-Edition] when "pointing north" and a
   distance vector [RFC4271] protocol when "pointing south".  While
   this is an unusual combination, it does quite naturally exhibit the
   desirable properties we seek.

4.1.  Overview

4.1.1.  Properties

   The most singular property of RIFT is that it floods flat link-state
   information northbound only so that each level obtains the full
   topology of the levels south of it.  That information is never
   flooded East-West or back South again, with some exceptions like
   south reflection, which will be explained in detail in
   Section 4.2.5.1, and east-west flooding at the ToF level in multi-
   plane fabrics outlined in Section 4.1.2.  In the southbound
   direction the protocol operates like a "fully summarizing,
   unidirectional" path vector protocol or rather a distance vector
   with implicit split horizon, whereby the information propagates one
   hop south and is 're-advertised' by nodes at the next lower level,
   normally just as the default route.  However, RIFT uses flooding in
   the southern direction as well to avoid the necessity to build an
   update per adjacency.  We omit describing the East-West direction
   for the moment.

   Those information flow constraints create not only an anisotropic
   protocol (i.e. the information is not distributed "evenly" or
   "clumped" but summarized along the N-S gradient) but also a "smooth"
   information propagation where nodes do not receive the same
   information from multiple directions at the same time.  Normally,
   accepting the same reachability on any link without understanding
   its topological significance forces tie-breaking on some kind of
   distance metric, and in hop-by-hop forwarding substrates this
   ultimately leads to utilization of variants of shortest paths only.
   RIFT under normal conditions does not need to reconcile the same
   reachability information from multiple directions, and its
   computation principles (the south forwarding direction is always
   preferred) lead to valley-free forwarding behavior.  And since
   valley-free routing is loop-free, it can use all feasible paths,
   another highly desirable property if available bandwidth should be
   utilized to the maximum extent possible.

   To account for the "northern" and the "southern" information split,
   the link state database is accordingly partitioned into "north
   representation" and "south representation" TIEs.  In simplest terms,
   the North TIEs contain a link state topology description of the
   lower levels and the South TIEs carry simply the default routes of
   the level above.  This oversimplified view will be refined gradually
   in the following sections while introducing the protocol procedures
   and state machines at the same time.
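
   The valley-free property can be checked mechanically.  The following
   is a purely illustrative sketch (hypothetical Python, not part of
   the protocol specification): once a path has taken a southbound or
   east-west hop, any further northbound hop makes it invalid.

```python
# Illustrative check of the valley-free rule: a path may climb north
# any number of hops, optionally cross east-west at the top, and then
# only descend south.  The 'N'/'S'/'EW' hop labels are assumptions for
# illustration, not a protocol encoding.

def is_valley_free(path):
    """path: sequence of 'N' (northbound), 'S' (southbound) and
    'EW' (east-west) hops, in forwarding order."""
    seen_south_or_ew = False
    for hop in path:
        if hop == "N" and seen_south_or_ew:
            return False  # climbing again after turning south: a valley
        if hop in ("S", "EW"):
            seen_south_or_ew = True
    return True

assert is_valley_free(["N", "N", "S", "S"])       # up, then down
assert is_valley_free(["N", "EW", "S"])           # across the top once
assert not is_valley_free(["N", "S", "N", "S"])   # a "valley"
```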

4.1.2.  Generalized Topology View

   This section will shed some light on the topologies addressed by
   RIFT, including multi-plane fabrics, and their related implications.
   Readers that are only interested in single plane designs, i.e. all
   top-of-fabric nodes being topologically equal and initially
   connected to all the switches at the level below them, can skip the
   rest of Section 4.1.2 and the resulting Section 4.2.5.2 as well.

   It is quite difficult to visualize multi-plane designs, which are
   effectively multi-dimensional switching matrices.  To cope with
   that, we will introduce a methodology allowing us to depict the
   connectivity in a two-dimensional plane.  Further, we will leverage
   the fact that we are dealing basically with stacked crossbar fabrics
   where ports align "on top of each other" in a regular fashion.

   As a word of caution to the reader at this point, it should be
   observed that the language used to describe Clos variations,
   especially in multi-plane designs, varies widely between sources.
   This description follows the terminology introduced in Section 3.1
   and it is paramount to have it present to follow the rest of this
   section correctly.

4.1.2.1.  Terminology

   P:  We use P to denote the number of PoDs in a topology.

   S:  We use S to denote the number of ToF nodes in a topology.

   K:  We use K to denote the number of ports in the radix of a switch
      pointing north or south.  We further use K_LEAF to denote the
      number of ports pointing south, i.e. towards the leafs, and K_TOP
      for the number of ports pointing north towards a higher spine
      level.  To simplify the visual aids, notation and further
      considerations, we mostly use K as Radix/2.

   ToF Plane:  A set of ToF nodes that are aware of each other by means
      of south reflection.

   N:  We use N to denote the number of independent ToF planes in a
      topology.

   R:  We use R to denote a redundancy factor, i.e. the number of
      connections a spine has towards a ToF plane.  In single plane
      designs K_TOP is equal to R.

   Fallen Leaf:  A fallen leaf in a plane Z is a switch that lost all
      connectivity northbound to Z.

4.1.2.2.  Clos as Crossed Crossbars

   The typical topology for which RIFT is defined is built of a number
   P of PoDs, connected together by a number S of ToF nodes.  A PoD
   node has a number of ports called Radix, with half of them
   (K=Radix/2) used to connect host devices from the south, and half to
   connect to interleaved PoD Top-Level switches to the north.  Ratio K
   can be chosen differently without loss of generality when port
   speeds differ or the fabric is oversubscribed, but K=Radix/2 allows
   for a more readable representation whereby there are as many ports
   facing north as south on any intermediate node.  We hence represent
   a node in a schematic fashion with ports "sticking out" to its north
   and south rather than by the usual real-world front faceplate
   designs of the day.

   Figure 4 provides a view of a leaf node as seen from the north, i.e.
   showing ports that connect northbound.  For lack of a better symbol,
   we have chosen to use the "oo" or a single "o" symbol as ASCII
   visualisation of a single RJ45 jack.  In that example, K_LEAF is
   chosen to be 6 ports.  Observe that the number of PoDs is not
   related to Radix unless the ToF Nodes are constrained to be the same
   as the PoD nodes in a particular deployment.

       Top view
        +----+
        |    |
        | oo |     e.g., Radix = 12, K_LEAF = 6
        |    |
        | oo |
        |    |      -------------------------
        | oo ------- Physical Port (Ethernet) ----+
        |    |      -------------------------     |
        | oo |                                    |
        |    |                                    |
        | oo |                                    |
        |    |                                    |
        | oo |                                    |
        |    |                                    |
        +----+                                    |

          ||              ||      ||      ||      ||      ||      ||
        +----+        +------------------------------------------------+
        |    |        |                                                |
        +----+        +------------------------------------------------+
          ||              ||      ||      ||      ||      ||      ||
              Side views

                      Figure 4: A Leaf Node, K_LEAF=6

   The Radix of a node on top of a PoD may be different than that of the
   leaf node, though more often than not a same type of node is used for
   both, effectively forming a square (K*K).  In the general case, we
   could have switches with K_TOP southern ports on nodes at the top of
   the PoD that is not necessarily the same as K_LEAF; for instance, in
   the representations below, we pick a K_LEAF of 6 and a K_TOP of 8.
   In order to form a crossbar, we need K_TOP Leaf Nodes as illustrated
   in Figure 5.

         +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+
         |    |  |    |  |    |  |    |  |    |  |    |  |    |  |    |
         | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |
         |    |  |    |  |    |  |    |  |    |  |    |  |    |  |    |
         | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |
         |    |  |    |  |    |  |    |  |    |  |    |  |    |  |    |
         | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |
         |    |  |    |  |    |  |    |  |    |  |    |  |    |  |    |
         | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |
         |    |  |    |  |    |  |    |  |    |  |    |  |    |  |    |
         | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |
         |    |  |    |  |    |  |    |  |    |  |    |  |    |  |    |
         | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |
         |    |  |    |  |    |  |    |  |    |  |    |  |    |  |    |
         +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+

                 Figure 5: Southern View of a PoD, K_TOP=8
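
   The port arithmetic behind Figure 5 can be summarized in a short
   sketch.  This is illustrative Python only; the variable names mirror
   the terminology of Section 4.1.2.1 and the concrete values are the
   ones picked for the figures.

```python
# Illustrative PoD sizing: K_TOP leaf nodes, each with K_LEAF
# northbound ports, fully meshed with K_LEAF PoD-top nodes that each
# expose K_TOP southbound ports.

K_LEAF = 6   # northbound ports per leaf (as in Figure 4)
K_TOP = 8    # southbound ports per PoD-top node (as in Figure 5)

leaves_per_pod = K_TOP             # needed to form the crossbar
pod_top_nodes = K_LEAF
links_inside_pod = K_TOP * K_LEAF  # full bipartite mesh

assert leaves_per_pod == 8
assert pod_top_nodes == 6
assert links_inside_pod == 48
```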

   The K_TOP Leaf Nodes are fully interconnected with the K_LEAF PoD-top
   nodes, providing a connectivity that can be represented as a crossbar
   as seen from the north and illustrated in Figure 6.  The result is
   that, in the absence of a breakage, a packet entering the PoD from
   North on any port can be routed to any port on the south of the PoD
   and vice versa.

                                 E<-*->W

     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+
     |    |  |    |  |    |  |    |  |    |  |    |  |    |  |    |
   +----------------------------------------------------------------+
   |   oo      oo      oo      oo      oo      oo      oo      oo   |
   +----------------------------------------------------------------+
   +----------------------------------------------------------------+
   |   oo      oo      oo      oo      oo      oo      oo      oo   |
   +----------------------------------------------------------------+
   +----------------------------------------------------------------+
   |   oo      oo      oo      oo      oo      oo      oo      oo   |
   +----------------------------------------------------------------+
   +----------------------------------------------------------------+
   |   oo      oo      oo      oo      oo      oo      oo      oo   |
   +----------------------------------------------------------------+
   +----------------------------------------------------------------+
   |   oo      oo      oo      oo      oo      oo      oo      oo   |<-+
   +----------------------------------------------------------------+  |
   +----------------------------------------------------------------+  |
   |   oo      oo      oo      oo      oo      oo      oo      oo   |  |
   +----------------------------------------------------------------+  |
     |    |  |    |  |    |  |    |  |    |  |    |  |    |  |    |    |
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+    |
                ^                                                      |
                |                                                      |
                |     ----------                ---------------------  |
                +----- Leaf Node                PoD top Node (Spine) --+
                      ----------                ---------------------

            Figure 6: Northern View of a PoD's Spines, K_TOP=8

   Side views of this PoD are illustrated in Figure 7 and Figure 8.

                      Connecting to Spine

      ||      ||      ||      ||      ||      ||      ||      ||
  +----------------------------------------------------------------+   N
  |                    PoD top Node seen sideways                  |   ^
  +----------------------------------------------------------------+   |
      ||      ||      ||      ||      ||      ||      ||      ||       *
    +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+     |
    |    |  |    |  |    |  |    |  |    |  |    |  |    |  |    |     v
    +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+     S
      ||      ||      ||      ||      ||      ||      ||      ||

                           Connecting to Client nodes

              Figure 7: Side View of a PoD, K_TOP=8, K_LEAF=6

                      Connecting to Spine

             ||      ||      ||      ||      ||      ||
           +----+  +----+  +----+  +----+  +----+  +----+              N
           |    |  |    |  |    |  |    |  |    |  |   PoD top Nodes   ^
           +----+  +----+  +----+  +----+  +----+  +----+              |
             ||      ||      ||      ||      ||      ||                *
         +------------------------------------------------+            |
         |              Leaf seen sideways                |            v
         +------------------------------------------------+            S
             ||      ||      ||      ||      ||      ||

                      Connecting to Client nodes

    Figure 8: Other side View of a PoD, K_TOP=8, K_LEAF=6, 90o turn in
                                 E-W Plane

   Note that a resulting PoD can be abstracted as a bigger node with a
   radix K_POD = K_TOP * K_LEAF, and the design can recurse.

   It is critical at this juncture that the concept and the picture of
   those "crossed crossbars" is clear before progressing further;
   otherwise, the following considerations will be difficult to
   comprehend.

   Further, the PoDs are interconnected with one another through a Top-
   of-Fabric at the very top or the north edge of the fabric.  The
   resulting ToF is NOT partitioned if and only if (IFF) every PoD top
   level node (spine) is connected to every ToF Node.  This is also
   referred to as a single plane configuration.  In order to reach a 1:1
   connectivity ratio between the ToF and the Leaves, it follows that
   there are K_TOP ToF nodes, because each port of a ToP node connects
   to a different ToF node, and K_LEAF ToP nodes for the same reason.
   Consequently, it takes (P * K_LEAF) ports on a ToF node to connect to
   each of the K_LEAF ToP nodes of the P PoDs, as illustrated in
   Figure 9.

        [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] <-----+
         |   |   |   |   |   |   |   |        |
      [=================================]     |     -----------
         |   |   |   |   |   |   |   |        +----- Top-of-Fabric
        [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ]       +----- Node      -------+
                                              |     -----------       |
                                              |                       v
        +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ <-----+                      +-+
        | | | | | | | | | | | | | | | |                              | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]                            | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ] -------------------------  | |
      [ |o| |o| |o| |o| |o| |o| |o| |o<--- Physical Port (Ethernet)  | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ] -------------------------  | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]                            | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]                            | |
        | | | | | | | | | | | | | | | |                              | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]                            | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]      --------------        | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ] <---  PoD top level        | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]       node (Spine)  ---+   | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]      --------------    |   | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]                        |   | |
        | | | | | | | | | | | | | | | |  -+           +-   +-+   v   | |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ] |           |  --| |--[ ]--| |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ] |   -----   |  --| |--[ ]--| |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ] +--- PoD ---+  --| |--[ ]--| |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ] |   -----   |  --| |--[ ]--| |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ] |           |  --| |--[ ]--| |
      [ |o| |o| |o| |o| |o| |o| |o| |o| ] |           |  --| |--[ ]--| |
        | | | | | | | | | | | | | | | |  -+           +-   +-+       | |
        +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+                              +-+

      Figure 9: Fabric Spines and TOFs in Single Plane Design, 3 PoDs

   The top view can be collapsed into a third dimension where the hidden
   depth index is representing the PoD number.  So we can show one PoD
   as a class of PoDs and hence save one dimension in our
   representation.  The Spine Node expands in the depth and the vertical
   dimensions whereas the PoD top level Nodes are constrained in
   horizontal dimension.  A port in the 2-D representation represents
   effectively the class of all the ports at the same position in all
   the PoDs that are projected in its position along the depth axis.
   This is shown in Figure 10.

            / / / / / / / / / / / / / / / /
           / / / / / / / / / / / / / / / /
          / / / / / / / / / / / / / / / /
         / / / / / / / / / / / / / / / /   ]
        +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+   ]]
        | | | | | | | | | | | | | | | |  ]   ---------------------------
      [ |o| |o| |o| |o| |o| |o| |o| |o| ] <-- PoD top level node (Spine)
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]    ---------------------------
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]]]]
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]]]     ^^
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]]     //  PoDs
      [ |o| |o| |o| |o| |o| |o| |o| |o| ]     // (in depth)
        | |/| |/| |/| |/| |/| |/| |/| |/     //
        +-+ +-+ +-+/+-+/+-+ +-+ +-+ +-+     //
                 ^
                 |     ----------------
                 +----- Top-of-Fabric Node
                       ----------------

   Figure 10: Collapsed Northern View of a Fabric for Any Number of PoDs

   This type of deployment introduces a "single plane limit" where the
   bound is the available radix of the ToF nodes that has to be at least
   P * K_LEAF.  Nevertheless, we will see that a distinct advantage of a
   connected or non-partitioned Top-of-Fabric is that all failures can
   be resolved by simple, non-transitive, positive disaggregation (i.e.
   nodes advertising more specific prefixes with the default to the
   level below them that is however not propagated further down the
   fabric) described in Section 4.2.5.1.  In other words non-partitioned
   ToF nodes can always reach nodes below or withdraw the routes from
   PoDs they cannot reach unambiguously.  And with this, positive
   disaggregation can heal all failures which still allow all the ToF
   nodes to see each other via south reflection as, again, explained in
   further detail in Section 4.2.5.

   In order to scale beyond the "single plane limit", the Top-of-Fabric
   can be partitioned by a number N of identically wired planes, N being
   an integer divisor of K_LEAF.  The 1:1 ratio and the desired symmetry
   are still served, this time with (K_TOP * N) ToF nodes, each of (P *
   K_LEAF / N) ports.  N=1 represents a non-partitioned Spine and
   N=K_LEAF is a maximally partitioned Spine.  Further, if R is any
   divisor of K_LEAF, then (N=K_LEAF/R) is a feasible number of planes
   and R a redundancy factor.  It proves convenient for deployments to
   use a radix for the leaf nodes that is a power of 2 so they can pick
   a number of planes that is a lower power of 2.  The example in
   Figure 11 splits the Spine in 2 planes with a redundancy factor R=3,
   meaning that there are 3 non-intersecting paths between any leaf node
   and any ToF node.  A ToF node must have in this case at least 3*P
   ports, and be directly connected to 3 of the 6 PoD-ToP nodes (spines)
   in each PoD.
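
   The dimensioning rules above lend themselves to a quick sanity
   check.  The sketch below is purely illustrative; the function and
   parameter names are ours, not part of the protocol.

```python
# Illustrative sketch (not part of the specification): dimension the
# Top-of-Fabric for a fabric with P PoDs, K_LEAF ToP nodes and K_TOP
# leaf nodes per PoD, the ToF partitioned into N planes.
def tof_dimensions(p, k_leaf, k_top, n):
    assert k_leaf % n == 0, "N must be an integer divisor of K_LEAF"
    r = k_leaf // n               # redundancy factor R = K_LEAF / N
    tof_nodes = k_top * n         # ToF nodes preserving the 1:1 ratio
    tof_ports = p * k_leaf // n   # southern ports needed per ToF node
    return tof_nodes, tof_ports, r

# Figure 11's example with 3 PoDs: K_LEAF=6, K_TOP=8, N=2 gives R=3,
# i.e. 3 non-intersecting paths between any leaf and any ToF node,
# and each ToF node needs at least 3*P = 9 southern ports.
assert tof_dimensions(p=3, k_leaf=6, k_top=8, n=2) == (16, 9, 3)
# N=1 is the single plane: K_TOP ToF nodes of P*K_LEAF ports each.
assert tof_dimensions(p=3, k_leaf=6, k_top=8, n=1) == (8, 18, 6)
```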

        +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
       | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | |
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
       | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | |
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
       | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | |
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
        +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+

      Plane 1
     ----------- . ------------ . ------------ . ------------ . --------
      Plane 2

        +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
       | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | |
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
       | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | |
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
       | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | |
      +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+
        +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+
                   ^
                   |
                   |     ----------------
                   +----- Top-of-Fabric node
                          "across" depth
                         ----------------

    Figure 11: Northern View of a Multi-Plane ToF Level, K_LEAF=6, N=2

   At the extreme end of the spectrum, it is even possible to fully
   partition the spine with N = K_LEAF and R=1, while maintaining
   connectivity between each leaf node and each Top-of-Fabric node.  In
   that case the ToF node connects to a single Port per PoD, so it
   appears as a single port in the projected view represented in
   Figure 12 and the number of ports required on the Spine Node is
   greater than or equal to P, the number of PoDs.

   Plane 1
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+  -+
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |
    | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | | |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+   |
  ----------- . ------------ . ------------ . ------------ . -------- |
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+   |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |
    | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | | |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+   |
  ----------- . ------------ . ------------ . ------------ . -------- |
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+   |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |
    | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | | |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+   |
  ----------- . ------------ . ------------ . ------------ . -------- +<-+
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+   |  |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |  |
    | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | | |  |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |  |
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+   |  |
  ----------- . ------------ . ------------ . ------------ . -------- |  |
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+   |  |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |  |
    | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | | |  |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |  |
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+   |  |
  ----------- . ------------ . ------------ . ------------ . -------- |  |
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+   |  |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |  |
    | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | | |  |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |--|    |-+ |  |
     +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+  -+  |
   Plane 6      ^                                                        |
                |                                                        |
                |     ----------------                 --------------    |
                +-----  ToF       Node                 Class of PoDs  ---+
                      ----------------                 -------------

    Figure 12: Northern View of a Maximally Partitioned ToF Level, R=1

4.1.3.  Fallen Leaf Problem

   As mentioned earlier, RIFT exhibits an anisotropic behavior tailored
   for fabrics with a North / South orientation and a high level of
   interleaving paths.  A non-partitioned fabric makes a total loss of
   connectivity between a Top-of-Fabric node at the north and a leaf
   node at the south a very rare but possible occurrence that is fully
   healed by positive disaggregation as described in Section 4.2.5.1.
   In large fabrics, or fabrics built from switches with low radix, the
   ToF often ends up being partitioned into planes, which makes it more
   likely that a given leaf is reachable from only a subset of the ToF
   nodes.  This makes some further considerations necessary.

   We define a "Fallen Leaf" as a leaf that can be reached by only a
   subset of Top-of-Fabric nodes but cannot be reached by all due to
   missing connectivity.  If R is the redundancy factor, then it takes
   at least R breakages to reach a "Fallen Leaf" situation.

   In a general manner, the mechanism of non-transitive positive
   disaggregation is sufficient when the disaggregating ToF nodes
   collectively connect to all the ToP nodes in the broken plane.  This
   happens in the following case:

      If the breakage is the last northern link from a ToP node to a ToF
      node going down, then the fallen leaf problem affects only the ToF
      node, and the connectivity to all the nodes in the PoD is lost
      from that ToF node.  This can be observed by other ToF nodes
      within the plane where the ToP node is located and positively
      disaggregated within that plane.

   On the other hand, there is a need to disaggregate the routes to
   Fallen Leaves in a transitive fashion all the way to the other leaves
   in the following cases:

      If the breakage is the last northern link from a Leaf node within
      a plane - there is only one such link in a maximally partitioned
      fabric - that goes down, then connectivity to all unicast prefixes
      attached to the Leaf node is lost within the plane where the link
      is located.  Southern Reflection by a Leaf Node - e.g., between
      ToP nodes if the PoD has only 2 levels - happens in between
      planes, allowing the ToP nodes to detect the problem within the
      PoD where it occurs and positively disaggregate.  The breakage can
      be observed by the ToF nodes in the same plane through the
      flooding of North TIEs from the ToP nodes, but the ToF nodes need
      to be aware of all the affected prefixes for the negative,
      possibly transitive disaggregation to be fully effective (i.e. a
      node advertising in control plane that it cannot reach a certain
      more specific prefix than default whereas such disaggregation must
      in extreme condition propagate further down southbound).  The
      problem can also be observed by the ToF nodes in the other planes
      through the flooding of North TIEs from the affected Leaf nodes,
      together with non-node North TIEs which indicate the affected
      prefixes.  To be effective in that case, the positive
      disaggregation must reach down to the nodes that make the plane
      selection, which are typically the ingress Leaf nodes, and the
      information is not useful for routing in the intermediate levels.

      If the breakage is a ToP node in a maximally partitioned fabric -
      in which case it is the only ToP node serving that plane in that
      PoD - that goes down, then the connectivity to all the nodes in
      the PoD is lost within the plane where the ToP node is located -
      all leaves fall.  Since the Southern Reflection between the ToF
      nodes happens only within a plane, ToF nodes in other planes
      cannot discover the case of fallen leaves in a different plane,
      and cannot determine beyond their local plane whether a Leaf node
      that was initially reachable has become unreachable.  As above,
      the breakage can be observed by the ToF nodes in the plane where
      the breakage happened, and then again, the ToF nodes in the plane
      need to be aware of all the affected prefixes for the negative
      disaggregation to be fully effective.  The problem can also be
      observed by the ToF nodes in the other planes through the flooding
      of North TIEs from the affected Leaf nodes, if there are only 3
      levels and the ToP nodes are directly connected to the Leaf nodes,
      and then again it can only be effective if it is propagated
      transitively to the Leaf, and is useless above that level.

   For the sake of easy comprehension let us roll the abstractions back
   to a simple example and observe that in Figure 3 the loss of link
   Spine 122 to Leaf 122 will make Leaf 122 a fallen leaf for Top-of-
   Fabric plane B.  Worse, if the cabling was never present in the first
   place, plane B will not even be able to know that such a fallen leaf
   exists.  Hence partitioning without further treatment results in two
   grave problems:

   o  Leaf 111 trying to route to Leaf 122 MUST choose Spine 111 in
      plane A as its next hop since plane B will inevitably blackhole
      the packet when forwarding using default routes or do excessive
      bow tie'ing, i.e. this information must be in its routing table.

   o  any kind of "flooding" or distance vector trying to deal with the
      problem by distributing host routes will be able to converge only
      using paths through leafs, i.e. the flooding of information on
      Leaf 122 will go up to Top-of-Fabric A and then "loopback" over
      other leafs to ToF B leading in extreme cases to traffic for Leaf
      122 when presented to plane B taking an "inverted fabric" path
      where leafs start to serve as TOFs.

4.1.4.  Discovering Fallen Leaves

   As we illustrate later and without further proof here, to deal with
   fallen leafs in multi-plane designs when aggregation is used RIFT
   requires all the ToF nodes to share the same topology database.  This
   happens naturally in single plane design by the means of south
   reflection but needs additional considerations in multi-plane
   fabrics.  To satisfy this RIFT in multi-plane designs relies at the
   ToF Level on ring interconnection of switches in multiple planes.
   Other solutions are possible but they either need more cabling or end
   up having much longer flooding paths and/or single points of failure.

   In more detail, by reserving two ports on each Top-of-Fabric node it
   is possible to connect them together by interplane bi-directional
   rings as illustrated in Figure 13 (where we show a bi-directional
   ring connecting switches across planes).  The rings will exchange
   full topology information between planes and with that, by the means
   of transitive, negative disaggregation described in Section 4.2.5.2,
   efficiently fix any possible fallen leaf scenario.  Somewhat as a
   side-effect, the exchange of information fulfills the requirement to
   present a full view of the fabric topology at the Top-of-Fabric level
   without the need to collate it from multiple points by additional
   complexity of technologies like [RFC7752].

       +----+  +----+  +----+  +----+  +----+  +----+  +--------+
       |    |  |    |  |    |  |    |  |    |  |    |  |        |
       |       |       |       |       |       |       |        |
     +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+     |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |-+   |
    | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | |   | Plane A
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |-+   |
     +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+     |
       |       |       |       |       |       |       |        |
     +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+     |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |-+   |
    | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | |   | Plane B
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |-+   |
     +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+     |
       |       |       |       |       |       |       |        |
                                  ...                           |
       |       |       |       |       |       |       |        |
     +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+     |
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |-+   |
    | | oo |  | oo |  | oo |  | oo |  | oo |  | oo |  | oo | |   | Plane X
   +-|    |--|    |--|    |--|    |--|    |--|    |--|    |-+   |
     +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+  +-o--+     |
       |       |       |       |       |       |       |        |
       |    |  |    |  |    |  |    |  |    |  |    |  |        |
       +----+  +----+  +----+  +----+  +----+  +----+  +--------+

     Figure 13: Connecting Top-of-Fabric Nodes Across Planes by Two Rings

4.1.5.  Addressing the Fallen Leaves Problem

   One consequence of the Fallen Leaf problem is that some prefixes
   attached to the fallen leaf become unreachable from some of the ToF
   nodes.  RIFT proposes two methods to address this issue, the positive
   and the negative disaggregation.  Both methods flood South TIEs to
   advertise the impacted prefix(es).

   When used for the operation of disaggregation, a positive South TIE,
   as usual, indicates reachability to a prefix of given length and all
   addresses subsumed by it.  In contrast, a negative route
   advertisement indicates that the origin cannot route to the
   advertised prefix.

   The positive disaggregation is originated by a router that can still
   reach the advertised prefix, and the operation is not transitive,
   meaning that the receiver does not generate its own flooding south as
   a consequence of receiving positive disaggregation advertisements
   from a higher level node.  The effect of a positive disaggregation is
   that the traffic to the impacted prefix will follow the longest
   prefix match and will be limited to the northbound routers that
   advertised the more specific route.

   In contrast, the negative disaggregation is transitive, and is
   propagated south when all the possible routes northwards are barred.
   A negative route advertisement is only actionable when the negative
   prefix is aggregated by a positive route advertisement for a shorter
   prefix.  In that case, the negative advertisement carves an exception
   to the positive route in the routing table (one could think of
   "punching a hole"), making the positive prefix reachable through the
   originator with the special consideration of the negative prefix
   removing certain next hop neighbors.
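
   The "hole punching" described above can be illustrated with a short
   sketch.  The data structures and function below are hypothetical
   simplifications, not the normative RIFT route computation.

```python
import ipaddress

# Illustrative sketch of how a negative disaggregation "punches a hole"
# into a positive aggregate; this is NOT the normative RIFT RIB logic.
def next_hops(dest, positives, negatives):
    """positives: {prefix: next hops}; negatives: {prefix: next hops
    whose originators advertised the negative disaggregation}."""
    addr = ipaddress.ip_address(dest)
    # the longest positive match wins, as usual
    best = max((p for p in positives if addr in p),
               key=lambda p: p.prefixlen, default=None)
    if best is None:
        return set()
    hops = set(positives[best])
    # a longer negative prefix covering dest removes its originators
    for n, barred in negatives.items():
        if addr in n and n.prefixlen > best.prefixlen:
            hops -= barred
    return hops

positives = {ipaddress.ip_network("0.0.0.0/0"): {"tof1", "tof2"}}
negatives = {ipaddress.ip_network("10.1.22.0/24"): {"tof2"}}
# traffic to the fallen leaf avoids tof2; everything else keeps ECMP
assert next_hops("10.1.22.5", positives, negatives) == {"tof1"}
assert next_hops("10.2.0.1", positives, negatives) == {"tof1", "tof2"}
```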

   When the ToF is not partitioned, the collective southern flooding of
   the positive disaggregation by the ToF nodes that can still reach the
   impacted prefix is in general enough to cover all the switches at the
   next level south, typically the ToP nodes.  If all those switches are
   aware of the disaggregation, they collectively create a ceiling that
   intercepts all the traffic north and forwards it to the ToF nodes
   that advertised the more specific route.  In that case, the positive
   disaggregation alone is sufficient to solve the fallen leaf problem.

   On the other hand, when the fabric is partitioned in planes, the
   positive disaggregation from ToF nodes in different planes does not
   reach the ToP switches in the affected plane and cannot solve the
   fallen leaves problem.  In other words, a breakage in a plane can
   only be solved in that plane.  Also, the selection of the plane for a
   packet typically occurs at the leaf level and the disaggregation must
   be transitive and reach all the leaves.  In that case, the negative
   disaggregation is necessary.  The details of the RIFT approach to
   deal with fallen leafs in an optimal way are specified in
   Section 4.2.5.2.

4.2.  Specification

   This section specifies the protocol in a normative fashion by either
   prescriptive procedures or behavior defined by Finite State Machines
   (FSM).

   Some FSM figures are provided as [DOT] description due to limitations
   of ASCII art.

   "On Entry" actions on FSM state are performed every time and right
   before the according state is entered, i.e. after any transitions
   from previous state.

   "On Exit" actions are performed every time and immediately when a
   state is exited, i.e. before any transitions towards target state are
   performed.

   Any attempt to transition from a state towards another on reception
   of an event where no action is specified MUST be considered an
   unrecoverable error.

   The FSMs and procedures are normative in the sense that an
   implementation MUST implement them either literally or an
   implementation MUST exhibit externally observable behavior that is
   identical to the execution of the specified FSMs.

   Where a FSM representation is inconvenient, i.e. the amount of
   procedures and kept state exceeds the amount of transitions, we defer
   to a more procedural description of the involved data structures.
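
   The "On Entry"/"On Exit" conventions and the unrecoverable-error rule
   above can be sketched as follows; the states, events, and actions are
   placeholders, not one of the normative RIFT FSMs.

```python
# Illustrative skeleton of the FSM conventions described above; the
# states/events below are placeholders, not the normative RIFT FSMs.
class Fsm:
    def __init__(self, transitions, on_entry, on_exit, start):
        self.transitions = transitions  # {(state, event): (action, next)}
        self.on_entry, self.on_exit = on_entry, on_exit
        self.state = start
        self.log = []
        self._fire(on_entry.get(start))

    def _fire(self, action):
        if action:
            self.log.append(action)

    def event(self, ev):
        key = (self.state, ev)
        if key not in self.transitions:
            # transition with no specified action: unrecoverable error
            raise RuntimeError(f"unrecoverable: {ev} in {self.state}")
        action, nxt = self.transitions[key]
        self._fire(self.on_exit.get(self.state))  # on exit, before move
        self._fire(action)
        self.state = nxt
        self._fire(self.on_entry.get(nxt))        # on entry, after move

fsm = Fsm({("OneWay", "NewNeighbor"): ("send LIE", "TwoWay")},
          on_entry={"TwoWay": "start timer"},
          on_exit={"OneWay": "cleanup"},
          start="OneWay")
fsm.event("NewNeighbor")
assert fsm.log == ["cleanup", "send LIE", "start timer"]
assert fsm.state == "TwoWay"
```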

4.2.1.  Transport

   All packet formats are defined in Thrift [thrift] models in
   Appendix B.

   The serialized model is carried in an envelope within a UDP frame
   that provides security and allows validation/modification of several
   important fields without de-serialization for performance and
   security reasons.

4.2.2.  Link (Neighbor) Discovery (LIE Exchange)

   LIE exchange happens over well-known administratively locally scoped
   and configured or otherwise well-known IPv4 multicast address
   [RFC2365] and/or link-local multicast scope [RFC4291] for IPv6
   [RFC8200] using a configured or otherwise a well-known destination
   UDP port defined in Appendix C.1.  LIEs SHOULD be sent with a TTL of
   1 to prevent RIFT information reaching beyond a single L3 next-hop in
   the topology.  LIEs SHOULD be sent with network control precedence.
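   As a non-normative illustration, a LIE sender socket honoring the TTL
   and precedence recommendations above might be configured as follows.
   The group and port values here are placeholders only; the normative
   well-known values are defined in Appendix C.1.

```python
import socket

# Placeholder values for illustration only; the normative well-known
# destination UDP port and multicast group are defined in Appendix C.1.
LIE_GROUP = "224.0.0.120"
LIE_PORT = 914

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# TTL of 1 keeps LIEs from reaching beyond a single L3 next-hop.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
# Network control precedence (DSCP CS6, i.e. a TOS byte of 0xC0).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xC0)
# lie_payload would be the serialized LIE envelope:
# sock.sendto(lie_payload, (LIE_GROUP, LIE_PORT))
```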

   Originating port of the LIE has no further significance other than
   identifying the origination point.  LIEs are exchanged over all links
   running RIFT.

   An implementation MAY listen and send LIEs on IPv4 and/or IPv6
   multicast addresses.  A node MUST NOT originate LIEs on an address
   family if it does not process received LIEs on that family.  LIEs on
   the same link are considered part of the same negotiation independent
   of the address family they arrive on.  Observe further that the LIE
   source address may not identify the peer uniquely in unnumbered or
   link-local address cases so the response transmission MUST occur over
   the same interface the LIEs have been received on.  A node MAY use
   any of the adjacency's source addresses it saw in LIEs on the
   specific interface during adjacency formation to send TIEs.  That
   implies that an implementation MUST be ready to accept TIEs on all
   addresses it used as source of LIE frames.

   A 3-way adjacency over any address family implies support for IPv4
   forwarding if the `v4_forwarding_capable` flag is set to true and a
   node can use [RFC5549] type of forwarding in such a situation.  It is
   expected that the whole fabric supports the same type of forwarding
   of address families on all the links.  Operation of a fabric where
   only some of the links are supporting forwarding on an address family
   and others do not is outside the scope of this specification.

   Observe further that the protocol does NOT support selective
   disabling of address families, disabling v4 forwarding capability or
   any local address changes in 3-way state, i.e. if a link has entered
   3-way IPv4 and/or IPv6 with a neighbor on an adjacency and it wants
   to stop supporting one of the families or change any of its local
   addresses or stop v4 forwarding, it has to tear down and rebuild the
   adjacency.  It also has to remove any information it stored about the
   adjacency such as LIE source addresses seen.

   Unless Section 4.2.7 is used, each node is provisioned with the level
   at which it is operating and its PoD (or otherwise a default level
   and "undefined" PoD are assumed; meaning that leafs do not need to be
   configured at all if initial configuration values are all left at 0).
   Nodes in the spine are configured with "any" PoD which has the same
   value "undefined" PoD hence we will talk about "undefined/any" PoD.
   This information is propagated in the LIEs exchanged.

   Further definitions of leaf flags are found in Section 4.2.7 given
   they have implications in terms of level and adjacency forming here.

   A node tries to form a 3-way adjacency if and only if

   1.  the node is in the same PoD or either the node or the neighbor
       advertises "undefined/any" PoD membership (PoD# = 0) AND

   2.  the neighboring node is running the same MAJOR schema version AND

   3.  the neighbor is not member of some PoD while the node has a
       northbound adjacency already joining another PoD AND

   4.  the neighboring node uses a valid System ID AND

   5.  the neighboring node uses a different System ID than the node
       itself AND

   6.  the advertised MTUs match on both sides AND

   7.  both nodes advertise defined level values AND

   8.  [

          i) the node is at level 0 and has no 3-way adjacencies already
          to Highest Adjacency Three-Way (HAT) nodes (defined in
          Section 4.2.7.1) with level different than the adjacent node
          OR

          ii) the node is not at level 0 and the neighboring node is at
          level 0 OR

          iii) both nodes are at level 0 AND both indicate support for
          Section 4.3.8 OR

          iv) neither node is at level 0 and the neighboring node is at
          most one level away

       ].
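   The adjacency acceptance rules above can be sketched as a single
   predicate.  This is a non-normative illustration: the type, the field
   names and the simplification of rule 3 to a precomputed
   `joined_other_pod_north` flag (and of rule i to a set of HAT
   adjacency levels) are assumptions of this sketch, not the normative
   schema.

```python
from dataclasses import dataclass

UNDEFINED_POD = 0  # the "undefined/any" PoD value

@dataclass
class NodeView:
    system_id: int
    level: int                 # None models an undefined level
    pod: int = UNDEFINED_POD
    mtu: int = 1400
    major_schema_version: int = 2
    supports_leaf2leaf: bool = False       # rule iii support flag
    joined_other_pod_north: bool = False   # simplification of rule 3
    hat_levels: frozenset = frozenset()    # levels of existing HAT adjacencies

def accepts_adjacency(node: NodeView, nbr: NodeView) -> bool:
    """True if `node` may try to form a 3-way adjacency with `nbr`."""
    pod_ok = (node.pod == nbr.pod or UNDEFINED_POD in (node.pod, nbr.pod))
    level_ok = (
        node.level is not None and nbr.level is not None and (
            (node.level == 0 and not (node.hat_levels - {nbr.level})) or  # i)
            (node.level != 0 and nbr.level == 0) or                       # ii)
            (node.level == 0 and nbr.level == 0 and
             node.supports_leaf2leaf and nbr.supports_leaf2leaf) or       # iii)
            (node.level != 0 and nbr.level != 0 and
             abs(node.level - nbr.level) <= 1)                            # iv)
        ))
    return (pod_ok and
            not (nbr.pod == UNDEFINED_POD and node.joined_other_pod_north) and
            node.major_schema_version == nbr.major_schema_version and
            nbr.system_id not in (None, node.system_id) and
            node.mtu == nbr.mtu and
            level_ok)
```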

   The rules checking PoD numbering MAY be optionally disregarded by a
   node if PoD detection is undesirable or has to be ignored.  This will
   not affect the correctness of the protocol except preventing
   detection of certain miscabling cases.

   A node configured with "undefined" PoD membership MUST, after
   building first northbound 3-way adjacencies to a node being in a
   defined PoD, advertise that PoD as part of its LIEs.  In case that
   adjacency is lost, from all available northbound 3-way adjacencies
   the node with the highest System ID and defined PoD is chosen.  That
   way the northmost defined PoD value (normally the top spines in a
   PoD) can diffuse southbound towards the leafs "forcing" the PoD value
   on any node with "undefined" PoD.
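   The selection rule above amounts to picking the PoD of the highest
   System ID among defined-PoD northbound adjacencies; a minimal
   non-normative sketch, with the tuple representation being an
   assumption of this illustration:

```python
# A node with "undefined" PoD adopts the PoD advertised by the
# northbound 3-way adjacency with the highest System ID among those
# adjacencies that carry a defined PoD.
UNDEFINED_POD = 0

def adopted_pod(northbound_adjacencies):
    """northbound_adjacencies: iterable of (system_id, pod) tuples
    describing the node's current northbound 3-way adjacencies."""
    defined = [(sid, pod) for sid, pod in northbound_adjacencies
               if pod != UNDEFINED_POD]
    # max() on tuples compares by system_id first.
    return max(defined)[1] if defined else UNDEFINED_POD
```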

   LIEs arriving with a TTL larger than 1 MUST be ignored.

   A node SHOULD NOT send out LIEs without defined level in the header
   but in certain scenarios it may be beneficial for trouble-shooting
   purposes.

   LIE exchange uses a 3-way handshake mechanism which is a cleaned up
   version of [RFC5303].  Observe that for easier comprehension the
   terminology of one/two and three-way states does NOT align with OSPF
   or ISIS FSMs albeit they use roughly the same mechanisms.

4.2.2.1.  LIE FSM

   This section specifies the precise, normative LIE FSM and can be
   omitted unless the reader is pursuing an implementation of the
   protocol.

   Initial state is `OneWay`.

   Event `MultipleNeighbors` occurs normally when more than two nodes
   see each other on the same link or a remote node is quickly
   reconfigured or rebooted without regressing to `OneWay` first.  Each
   occurrence of the event SHOULD generate a clear, according
   notification to help operational deployments.

   The machine sends LIEs on several transitions to accelerate adjacency
   bring-up without waiting for the timer tic.

digraph Ga556dde74c30450aae125eaebc33bd57 {
    Nd16ab5092c6b421c88da482eb4ae36b6[label="ThreeWay"][shape="oval"];
    N54edd2b9de7641688608f44fca346303[label="OneWay"][shape="oval"];
    Nfeef2e6859ae4567bd7613a32cc28c0e[label="TwoWay"][shape="oval"];
    N7f2bb2e04270458cb5c9bb56c4b96e23[label="Enter"][style="invis"][shape="plain"];
    N292744a4097f492f8605c926b924616b[label="Enter"][style="dashed"][shape="plain"];
    Nc48847ba98e348efb45f5b78f4a5c987[label="Exit"][style="invis"][shape="plain"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> N54edd2b9de7641688608f44fca346303
    [label="|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|\n|MultipleNeighbors|"]
    [color="black"][arrowhead="normal" dir="both" arrowtail="none"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> Nd16ab5092c6b421c88da482eb4ae36b6
    [label="|TimerTick|\n|LieRcvd|\n|SendLie|"][color="black"]
    [arrowhead="normal" dir="both" arrowtail="none"];
    Nfeef2e6859ae4567bd7613a32cc28c0e -> Nfeef2e6859ae4567bd7613a32cc28c0e
    [label="|TimerTick|\n|LieRcvd|\n|SendLie|"][color="black"]
    [arrowhead="normal" dir="both" arrowtail="none"];
    N54edd2b9de7641688608f44fca346303 -> Nd16ab5092c6b421c88da482eb4ae36b6
    [label="|ValidReflection|"][color="red"][arrowhead="normal" dir="both" arrowtail="none"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> Nd16ab5092c6b421c88da482eb4ae36b6
    [label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"][color="blue"]
    [arrowhead="normal" dir="both" arrowtail="none"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> Nd16ab5092c6b421c88da482eb4ae36b6
    [label="|ValidReflection|"][color="red"][arrowhead="normal" dir="both" arrowtail="none"];
    Nfeef2e6859ae4567bd7613a32cc28c0e -> N54edd2b9de7641688608f44fca346303
    [label="|LevelChanged|"][color="blue"][arrowhead="normal" dir="both" arrowtail="none"];
    Nfeef2e6859ae4567bd7613a32cc28c0e -> N54edd2b9de7641688608f44fca346303
    [label="|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|\n|MultipleNeighbors|"]
    [color="black"][arrowhead="normal" dir="both" arrowtail="none"];
    Nfeef2e6859ae4567bd7613a32cc28c0e -> Nd16ab5092c6b421c88da482eb4ae36b6
    [label="|ValidReflection|"][color="red"][arrowhead="normal" dir="both" arrowtail="none"];
    N54edd2b9de7641688608f44fca346303 -> N54edd2b9de7641688608f44fca346303

    [label="|TimerTick|\n|LieRcvd|\n|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|\n|SendLie|"]
    [color="black"][arrowhead="normal" dir="both" arrowtail="none"];
    N292744a4097f492f8605c926b924616b -> N54edd2b9de7641688608f44fca346303
    [label=""][color="black"][arrowhead="normal" dir="both" arrowtail="none"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> N54edd2b9de7641688608f44fca346303
    [label="|LevelChanged|"][color="blue"][arrowhead="normal" dir="both" arrowtail="none"];
    N54edd2b9de7641688608f44fca346303 -> Nfeef2e6859ae4567bd7613a32cc28c0e
    [label="|NewNeighbor|"][color="black"][arrowhead="normal" dir="both" arrowtail="none"];
    N54edd2b9de7641688608f44fca346303 -> N54edd2b9de7641688608f44fca346303
    [label="|LevelChanged|\n|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
    [color="blue"][arrowhead="normal" dir="both" arrowtail="none"];
    Nfeef2e6859ae4567bd7613a32cc28c0e -> Nfeef2e6859ae4567bd7613a32cc28c0e
    [label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
    [color="blue"][arrowhead="normal" dir="both" arrowtail="none"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> Nfeef2e6859ae4567bd7613a32cc28c0e
    [label="|NeighborDroppedReflection|"]
    [color="red"][arrowhead="normal" dir="both" arrowtail="none"];
    N54edd2b9de7641688608f44fca346303 -> N54edd2b9de7641688608f44fca346303
    [label="|NeighborDroppedReflection|"][color="red"]
    [arrowhead="normal" dir="both" arrowtail="none"];
}

                                LIE FSM DOT

   Events

   o  TimerTick: one second timer tic

   o  LevelChanged: node's level has been changed by ZTP or
      configuration

   o  HALChanged: best HAL computed by ZTP has changed

   o  HATChanged: HAT computed by ZTP has changed

   o  HALSChanged: set of HAL offering systems computed by ZTP has
      changed

   o  LieRcvd: received LIE

   o  NewNeighbor: new neighbor parsed

   o  ValidReflection: received own reflection from neighbor

   o  NeighborDroppedReflection: lost previous own reflection from
      neighbor

   o  NeighborChangedLevel: neighbor changed advertised level
   o  NeighborChangedAddress: neighbor changed IP address

   o  UnacceptableHeader: unacceptable header seen

   o  MTUMismatch: MTU mismatched

   o  PODMismatch: Unacceptable PoD seen

   o  HoldtimeExpired: adjacency hold down expired

   o  MultipleNeighbors: more than one neighbor seen on interface

   o  SendLie: send a LIE out

   o  UpdateZTPOffer: update this node's ZTP offer

   Actions

      on TimerTick in TwoWay finishes in TwoWay: PUSH SendLie event, if
      holdtime expired PUSH HoldtimeExpired event

      on HALChanged in TwoWay finishes in TwoWay: store new HAL

      on MTUMismatch in ThreeWay finishes in OneWay: no action

      on HALChanged in ThreeWay finishes in ThreeWay: store new HAL

      on ValidReflection in TwoWay finishes in ThreeWay: no action

      on ValidReflection in OneWay finishes in ThreeWay: no action

      on NeighborDroppedReflection in ThreeWay finishes in TwoWay: no
      action

      on LieRcvd in ThreeWay finishes in ThreeWay: PROCESS_LIE

      on MultipleNeighbors in TwoWay finishes in OneWay: no action

      on UnacceptableHeader in ThreeWay finishes in OneWay: no action

      on MTUMismatch in TwoWay finishes in OneWay: no action

      on LevelChanged in OneWay finishes in OneWay: update level with
      event value, PUSH SendLie event

      on UnacceptableHeader in TwoWay finishes in OneWay: no action

      on HALSChanged in TwoWay finishes in TwoWay: store HALS

      on UpdateZTPOffer in TwoWay finishes in TwoWay: send offer to ZTP
      FSM

      on NeighborChangedLevel in TwoWay finishes in OneWay: no action

      on NewNeighbor in OneWay finishes in TwoWay: PUSH SendLie event

      on NeighborChangedAddress in ThreeWay finishes in OneWay: no
      action

      on HALChanged in OneWay finishes in OneWay: store new HAL

      on NeighborChangedLevel in OneWay finishes in OneWay: no action

      on HoldtimeExpired in TwoWay finishes in OneWay: no action

      on SendLie in TwoWay finishes in TwoWay: SEND_LIE

      on LevelChanged in TwoWay finishes in OneWay: update level with
      event value

      on NeighborChangedAddress in OneWay finishes in OneWay: no action

      on HATChanged in TwoWay finishes in TwoWay: store HAT

      on LieRcvd in TwoWay finishes in TwoWay: PROCESS_LIE

      on MultipleNeighbors in ThreeWay finishes in OneWay: no action

      on MTUMismatch in OneWay finishes in OneWay: no action

      on SendLie in OneWay finishes in OneWay: SEND_LIE

      on LieRcvd in OneWay finishes in OneWay: PROCESS_LIE

      on TimerTick in ThreeWay finishes in ThreeWay: PUSH SendLie event,
      if holdtime expired PUSH HoldtimeExpired event

      on TimerTick in OneWay finishes in OneWay: PUSH SendLie event

      on PODMismatch in ThreeWay finishes in OneWay: no action

      on LevelChanged in ThreeWay finishes in OneWay: update level with
      event value

      on NeighborChangedLevel in ThreeWay finishes in OneWay: no action

      on UpdateZTPOffer in OneWay finishes in OneWay: send offer to ZTP
      FSM

      on UpdateZTPOffer in ThreeWay finishes in ThreeWay: send offer to
      ZTP FSM

      on HATChanged in OneWay finishes in OneWay: store HAT

      on HATChanged in ThreeWay finishes in ThreeWay: store HAT

      on HoldtimeExpired in OneWay finishes in OneWay: no action

      on UnacceptableHeader in OneWay finishes in OneWay: no action

      on PODMismatch in OneWay finishes in OneWay: no action

      on SendLie in ThreeWay finishes in ThreeWay: SEND_LIE

      on NeighborChangedAddress in TwoWay finishes in OneWay: no action

      on ValidReflection in ThreeWay finishes in ThreeWay: no action

      on HALSChanged in OneWay finishes in OneWay: store HALS

      on HoldtimeExpired in ThreeWay finishes in OneWay: no action

      on HALSChanged in ThreeWay finishes in ThreeWay: store HALS

      on NeighborDroppedReflection in OneWay finishes in OneWay: no
      action

      on PODMismatch in TwoWay finishes in OneWay: no action

      on Entry into OneWay: CLEANUP

   Following words are used for well known procedures:

   1.  PUSH Event: pushes an event to be executed by the FSM upon exit
       of this action

   2.  CLEANUP: neighbor MUST be reset to unknown

   3.  SEND_LIE: create a new LIE packet

       1.  reflecting the neighbor if known and valid and

       2.  setting the necessary `not_a_ztp_offer` variable if level was
           derived from last known neighbor on this interface and

       3.  setting `you_are_not_flood_repeater` to computed value

   4.  PROCESS_LIE:

       1.  if lie has wrong major version OR our own system ID or
           invalid system ID then CLEANUP else

       2.  if lie has non matching MTUs then CLEANUP, PUSH
           UpdateZTPOffer, PUSH MTUMismatch else

       3.  if PoD rules do not allow adjacency forming then CLEANUP,
           PUSH PODMismatch, PUSH MTUMismatch else

       4.  if lie has undefined level OR my level is undefined OR this
           node is leaf and remote level lower than HAT OR (lie's level
           is not leaf AND its difference is more than one from my
           level) then CLEANUP, PUSH UpdateZTPOffer, PUSH
           UnacceptableHeader else

       5.  PUSH UpdateZTPOffer, construct temporary new neighbor
           structure with values from lie, if no current neighbor exists
           then set neighbor to new neighbor, PUSH NewNeighbor event,
           CHECK_THREE_WAY else

           1.  if current neighbor system ID differs from lie's system
               ID then PUSH MultipleNeighbors else

           2.  if current neighbor stored level differs from lie's level
               then PUSH NeighborChangedLevel else

           3.  if current neighbor stored IPv4/v6 address differs from
               lie's address then PUSH NeighborChangedAddress else

           4.  if any of neighbor's flood address port, name, local
               linkid changed then PUSH NeighborChangedMinorFields and

           5.  CHECK_THREE_WAY

   5.  CHECK_THREE_WAY: if current state is one-way do nothing else

       1.  if lie packet does not contain neighbor then if current state
           is three-way then PUSH NeighborDroppedReflection else

       2.  if packet reflects this system's ID and local port and state
           is three-way then PUSH event ValidReflection else PUSH event
           MultipleNeighbors
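   The CHECK_THREE_WAY comparison can be sketched as below.  This is a
   non-normative illustration: the field names (`neighbor`,
   `originator`, `remote_id`) are assumptions of this sketch rather than
   the normative schema names, and `push` stands for the PUSH Event
   procedure.

```python
def check_three_way(state, lie, my_system_id, my_link_id, push):
    """Sketch of CHECK_THREE_WAY: `state` is the current FSM state,
    `lie` the received LIE, `push` queues an event for the FSM."""
    if state == "OneWay":
        return  # in one-way, do nothing
    if lie.neighbor is None:
        # The peer stopped reflecting us.
        if state == "ThreeWay":
            push("NeighborDroppedReflection")
    elif (lie.neighbor.originator == my_system_id and
          lie.neighbor.remote_id == my_link_id):
        push("ValidReflection")     # packet reflects this system
    else:
        push("MultipleNeighbors")   # reflection of some other system
```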

4.2.3.  Topology Exchange (TIE Exchange)

4.2.3.1.  Topology Information Elements

   Topology and reachability information in RIFT is conveyed by the
   means of TIEs which have good amount of commonalities with LSAs in
   OSPF.

   The TIE exchange mechanism uses the port indicated by each node   |               | rules,          |
   |           | S-TIEs              |               | otherwise South |
   |           |                     |               | scope rules     |
   +-----------+---------------------+---------------+-----------------+
   | TIRE in
   the LIE exchange and the interface on which the adjacency has been
   formed as   | Ack all received    | Ack all       | Ack all         |
   | Ack       | TIEs                | received destination.  It SHOULD use TTL of 1 as well and set inter-
   network control precedence on according packets.

   TIEs | received contain sequence numbers, lifetimes and a type.  Each type has
   ample identifying number space and information is spread across
   possibly many TIEs   |
   +-----------+---------------------+---------------+-----------------+

                         Table 3: Flooding Scopes

   If of a certain type by the TIDE includes additional means of a hash function
   that a node or deployment can individually determine.  One extreme
   design choice is a prefix per TIE headers beside the ones
   specified, the receiving neighbor must apply according filter which leads to more BGP-like
   behavior where small increments are only advertised on route changes
   vs. deploying with dense prefix packing into few TIEs leading to more
   traditional IGP trade-off with fewer TIEs.  An implementation may
   even rehash prefix to TIE mapping at any time at the
   received TIDE strictly and MUST NOT request cost of
   significant amount of re-advertisements of TIEs.

   More information about the extra TIE headers
   that were not allowed by the flooding scope rules structure can be found in its direction.

   As an example to illustrate these rules, consider using the topology schema
   in Figure 2, with the optional link between spine 111 and spine 112, Appendix B.

4.2.3.2.  South- and Northbound Representation

   A central concept of RIFT is that each node represents itself
   differently depending on the direction in which it is advertising
   information.  More precisely, a node represents two different
   databases over its adjacencies depending on whether it advertises
   TIEs to the north or to the south/sideways.  We call those differing
   TIE databases either south- or northbound (South TIEs and North
   TIEs) depending on the direction of distribution.

   The North TIEs hold all of the node's adjacencies and local prefixes
   while the South TIEs hold only the node's adjacencies, the default
   prefix with necessary disaggregated prefixes and local prefixes.  We
   will explain this in detail further in Section 4.2.5.

   The TIE types are mostly symmetric in both directions and Table 2
   provides a quick reference to main TIE types including direction and
   their function.

   +--------------------+----------------------------------------------+
   | TIE-Type           | Content                                      |
   +--------------------+----------------------------------------------+
   | Node North TIE     | node properties and adjacencies              |
   +--------------------+----------------------------------------------+
   | Node South TIE     | same content as node North TIE               |
   +--------------------+----------------------------------------------+
   | Prefix North TIE   | contains nodes' directly reachable prefixes  |
   +--------------------+----------------------------------------------+
   | Prefix South TIE   | contains originated defaults and directly    |
   |                    | reachable prefixes                           |
   +--------------------+----------------------------------------------+
   | Positive           | contains disaggregated prefixes              |
   | Disaggregation     |                                              |
   | South TIE          |                                              |
   +--------------------+----------------------------------------------+
   | Negative           | contains special, negatively disaggregated   |
   | Disaggregation     | prefixes to support multi-plane designs      |
   | South TIE          |                                              |
   +--------------------+----------------------------------------------+
   | External Prefix    | contains external prefixes                   |
   | North TIE          |                                              |
   +--------------------+----------------------------------------------+
   | Key-Value North    | contains nodes' northbound KVs               |
   | TIE                |                                              |
   +--------------------+----------------------------------------------+
   | Key-Value South    | contains nodes' southbound KVs               |
   | TIE                |                                              |
   +--------------------+----------------------------------------------+

                            Table 2: TIE Types

   As an example illustrating databases holding both representations,
   consider the topology in Figure 2 with the optional link between
   spine 111 and spine 112 (so that the flooding on an East-West link
   can be shown).  This example assumes unnumbered interfaces.  First,
   here are the TIEs generated by some nodes.  For simplicity, the key
   value elements which may be included in their South TIEs or North
   TIEs are not shown.

        ToF 21 South TIEs:
        Node South TIE:
          NodeElement(level=2, neighbors((Spine 111, level 1, cost 1),
          (Spine 112, level 1, cost 1), (Spine 121, level 1, cost 1),
          (Spine 122, level 1, cost 1)))
        Prefix South TIE:
          SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

        Spine 111 South TIEs:
        Node South TIE:
          NodeElement(level=1, neighbors((ToF 21, level 2, cost 1, links(...)),
          (ToF 22, level 2, cost 1, links(...)),
          (Spine 112, level 1, cost 1, links(...)),
          (Leaf111, level 0, cost 1, links(...)),
          (Leaf112, level 0, cost 1, links(...))))
        Prefix South TIE:
          SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

        Spine 111 North TIEs:
        Node North TIE:
          NodeElement(level=1,
          neighbors((ToF 21, level 2, cost 1, links(...)),
          (ToF 22, level 2, cost 1, links(...)),
          (Spine 112, level 1, cost 1, links(...)),
          (Leaf111, level 0, cost 1, links(...)),
          (Leaf112, level 0, cost 1, links(...))))
        Prefix North TIE:
          NorthPrefixesElement(prefixes(Spine 111.loopback)

        Spine 121 South TIEs:
        Node South TIE:
          NodeElement(level=1, neighbors((ToF 21,level 2,cost 1),
          (ToF 22, level 2, cost 1), (Leaf121, level 0, cost 1),
          (Leaf122, level 0, cost 1)))
        Prefix South TIE:
          SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

        Spine 121 North TIEs:
        Node North TIE:
          NodeElement(level=1,
          neighbors((ToF 21, level 2, cost 1, links(...)),
          (ToF 22, level 2, cost 1, links(...)),
          (Leaf121, level 0, cost 1, links(...)),
          (Leaf122, level 0, cost 1, links(...))))
        Prefix North TIE:
          NorthPrefixesElement(prefixes(Spine 121.loopback)

        Leaf112 North TIEs:
        Node North TIE:
          NodeElement(level=0,
          neighbors((Spine 111, level 1, cost 1, links(...)),
          (Spine 112, level 1, cost 1, links(...))))
        Prefix North TIE:
          NorthPrefixesElement(prefixes(Leaf112.loopback, Prefix112,
          Prefix_MH))

        Figure 14: Example TIEs generated in a 2 level spine-and-leaf
                                  topology

   It may not be immediately obvious here why the node South TIEs
   contain all the adjacencies of the according node.  This is
   necessary for algorithms given in Section 4.2.3.9 and
   Section 4.3.6.

4.2.3.3.  Flooding

   The mechanism used to distribute TIEs is the well-known (albeit
   modified in several respects to take advantage of fat tree topology)
   flooding mechanism used by today's link-state protocols.  Although
   flooding is initially more demanding to implement, it avoids many
   problems with update style used in diffused computation such as
   distance vector protocols.  Since flooding tends to present an
   unscalable burden in large, densely meshed topologies (fat trees
   being unfortunately such a topology), we provide a close to optimal
   global flood reduction and load balancing optimization in
   Section 4.2.3.9.

   As described before, TIEs themselves are transported over UDP with
   the ports indicated in the LIE exchanges and using the destination
   address on which the LIE adjacency has been formed.  For unnumbered
   IPv4 interfaces the same considerations apply as in the equivalent
   OSPF case.

   On reception of a TIE with an undefined level value in the packet
   header the node SHOULD issue a warning and indiscriminately discard
   the packet.

4.2.3.3.1.  Normative Flooding Procedures

   This section specifies the precise, normative flooding mechanism and
   can be omitted unless the reader is pursuing an implementation of
   the protocol.

   Flooding procedures are described in terms of the flooding state of
   an adjacency and resulting operations on it driven by packet
   arrivals.  The FSM itself has basically just a single state and is
   not well suited to represent the behavior.  An implementation MUST
   behave on the wire in the same way as the normative procedures of
   this paragraph.

   RIFT does not specify any kind of flood rate limiting since such
   specifications always assume particular points in available
   technology speeds and feeds and those points are shifting at faster
   and faster rate (speed of light holding for the moment).  The
   encoded packets provide hints to react accordingly to losses or
   overruns.

   Flooding of all according topology exchange elements SHOULD be
   performed at the highest feasible rate whereas the rate of
   transmission MUST be throttled by reacting to adequate features of
   the system such as e.g. queue lengths or congestion indications in
   the protocol packets.

4.2.3.3.1.1.  FloodState Structure per Adjacency

   The structure contains conceptually the following elements.  The
   word collection or queue indicates a set of elements that can be
   iterated:

   TIES_TX:  Collection containing all the TIEs to transmit on the
      adjacency.

   TIES_ACK:  Collection containing all the TIEs that have to be
      acknowledged on the adjacency.

   TIES_REQ:  Collection containing all the TIE headers that have to be
      requested on the adjacency.

   TIES_RTX:  Collection containing all TIEs that need retransmission
      with the according time to retransmit.

   Following words are used for well known procedures operating on this
   structure:

   TIE  Describes either a full RIFT TIE or accordingly just the
      `TIEHeader` or `TIEID`.  The according meaning is unambiguously
      contained in the context of the algorithm.

   is_flood_reduced(TIE):  returns whether a TIE can be flood reduced
      or not.

   is_tide_entry_filtered(TIE):  returns whether a header should be
      propagated in TIDE according to flooding scopes.

   is_request_filtered(TIE):  returns whether a TIE request should be
      propagated to neighbor or not according to flooding scopes.

   is_flood_filtered(TIE):  returns whether a TIE is to be flooded to
      neighbor or not according to flooding scopes.

   try_to_transmit_tie(TIE):

      A.  if not is_flood_filtered(TIE) then

          1.  remove TIE from TIES_RTX if present

          2.  if TIE" with same key on TIES_ACK then

              a.  if TIE" same or newer than TIE do nothing else

              b.  remove TIE" from TIES_ACK and add TIE to TIES_TX

          3.  else insert TIE into TIES_TX

   ack_tie(TIE):  remove TIE from all collections and then insert TIE
      into TIES_ACK.

   tie_been_acked(TIE):  remove TIE from all collections.

   remove_from_all_queues(TIE):  same as `tie_been_acked`.

   request_tie(TIE):  if not is_request_filtered(TIE) then
      remove_from_all_queues(TIE) and add to TIES_REQ.

   move_to_rtx_list(TIE):  remove TIE from TIES_TX and then add to
      TIES_RTX using TIE retransmission interval.

   clear_requests(TIEs):  remove all TIEs from TIES_REQ.

   bump_own_tie(TIE):  for self-originated TIE originate an empty or
      re-generate with version number higher than the one in TIE.

   The collections SHOULD be served with following priorities if the
   system cannot process all the collections in real time:

      Elements on TIES_ACK should be processed with highest priority

      TIES_TX

      TIES_REQ and TIES_RTX
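
   The FloodState queues and the operations above can be sketched as
   follows; a minimal, non-normative model where a TIE is a
   `(tieid, seq_nr)` pair, collections are dictionaries keyed by TIEID,
   and the flooding scope predicates are stubbed out:

```python
# Minimal sketch of the per-adjacency FloodState. A TIE is modeled as
# (tieid, seq_nr); filter predicates are stubs for the scope rules.
class FloodState:
    def __init__(self):
        self.ties_tx = {}    # TIEs to transmit, keyed by TIEID
        self.ties_ack = {}   # TIEs awaiting acknowledgement
        self.ties_req = {}   # TIE headers to request
        self.ties_rtx = {}   # TIEs scheduled for retransmission

    def is_flood_filtered(self, tie):
        return False         # stub: flooding scope rules go here

    def is_request_filtered(self, tie):
        return False         # stub: flooding scope rules go here

    def try_to_transmit_tie(self, tie):
        tieid, seq_nr = tie
        if self.is_flood_filtered(tie):
            return
        self.ties_rtx.pop(tieid, None)
        acked = self.ties_ack.get(tieid)
        if acked is not None:
            if acked[1] >= seq_nr:
                return       # same or newer already acked: do nothing
            del self.ties_ack[tieid]
        self.ties_tx[tieid] = tie

    def remove_from_all_queues(self, tie):
        for queue in (self.ties_tx, self.ties_ack,
                      self.ties_req, self.ties_rtx):
            queue.pop(tie[0], None)

    def ack_tie(self, tie):
        self.remove_from_all_queues(tie)
        self.ties_ack[tie[0]] = tie

    def request_tie(self, tie):
        if not self.is_request_filtered(tie):
            self.remove_from_all_queues(tie)
            self.ties_req[tie[0]] = tie

fs = FloodState()
fs.ack_tie(("id1", 5))
fs.try_to_transmit_tie(("id1", 4))   # older than the acked copy: dropped
assert "id1" not in fs.ties_tx
fs.try_to_transmit_tie(("id1", 6))   # newer: moves from ACK to TX
assert "id1" in fs.ties_tx and "id1" not in fs.ties_ack
```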

4.2.3.3.1.2.  TIDEs

   `TIEID` and `TIEHeader` space forms a strict total order (modulo
   incomparable sequence numbers in the very unlikely event that can
   occur if a TIE is "stuck" in a part of a network while the
   originator reboots and reissues TIEs many times to the point its
   sequence# rolls over and forms an incomparable distance to the
   "stuck" copy) which implies that a comparison relation is possible
   between two elements.  With that it is implicitly possible to
   compare TIEs, TIEHeaders and TIEIDs to each other whereas the
   shortest viable key is always implied.

   When generating and sending TIDEs an implementation SHOULD ensure
   that enough bandwidth is left to send elements of the Floodstate
   structure.
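
   The ordering can be sketched as follows; a non-normative model with
   hypothetical TIEID field names chosen for illustration (the
   normative layout is given by the schema in Appendix B):

```python
from typing import NamedTuple

class TIEID(NamedTuple):
    # Hypothetical field order for illustration only; the normative
    # layout is given by the schema in Appendix B.
    direction: int   # e.g. South = 1, North = 2
    originator: int  # system ID of the originating node
    tietype: int     # Node, Prefix, Key-Value, ...
    tie_nr: int

class TIEHeader(NamedTuple):
    tieid: TIEID
    seq_nr: int
    lifetime: int

def newer(a: TIEHeader, b: TIEHeader) -> bool:
    """True if 'a' is a newer version of the same TIE than 'b'.

    The shortest viable key is implied: TIEIDs order
    lexicographically and equal TIEIDs are disambiguated by sequence
    number; lifetime is deliberately excluded from the comparison."""
    assert a.tieid == b.tieid
    return a.seq_nr > b.seq_nr

tid = TIEID(direction=1, originator=42, tietype=3, tie_nr=1)
assert newer(TIEHeader(tid, 9, 600), TIEHeader(tid, 7, 600))
assert TIEID(1, 99, 9, 9) < TIEID(2, 0, 0, 0)  # direction dominates
```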

4.2.3.3.1.2.1.  TIDE Generation

   As given by timer constant, periodically generate TIDEs by:

      NEXT_TIDE_ID: ID of next TIE to be sent in TIDE.

      TIDE_START: Begin of TIDE packet range.

   a.  NEXT_TIDE_ID = MIN_TIEID

   b.  while NEXT_TIDE_ID not equal to MAX_TIEID do

       1.  TIDE_START = NEXT_TIDE_ID

       2.  HEADERS = At most TIRDEs_PER_PKT headers in TIEDB starting
           at NEXT_TIDE_ID or higher that SHOULD be filtered by
           is_tide_entry_filtered and MUST either have a lifetime left
           > 0 or have no content

       3.  if HEADERS is empty then START = MIN_TIEID else START =
           first element in HEADERS

       4.  if HEADERS' size less than TIRDEs_PER_PKT then END =
           MAX_TIEID else END = last element in HEADERS

       5.  send sorted HEADERS as TIDE setting START and END as its
           range

       6.  NEXT_TIDE_ID = END

   The constant `TIRDEs_PER_PKT` SHOULD be generated and used by the
   implementation to limit the amount of TIE headers per TIDE so the
   sent TIDE PDU does not exceed interface MTU.

   TIDE PDUs SHOULD be spaced on sending to prevent packet drops.

4.2.3.3.1.2.2.  TIDE Processing

   On reception of TIDEs the following processing is performed:

      TXKEYS: Collection of TIE Headers to be sent after processing of
      the packet

      REQKEYS: Collection of TIEIDs to be requested after processing
      of the packet

      CLEARKEYS: Collection of TIEIDs to be removed from flood state
      queues

      LASTPROCESSED: Last processed TIEID in TIDE

      DBTIE: TIE in the LSDB if found

   a.  LASTPROCESSED = TIDE.start_range

   b.  for every HEADER in TIDE do

       1.  DBTIE = find HEADER in current LSDB

       2.  if HEADER < LASTPROCESSED then report error and reset
           adjacency and return

       3.  put all TIEs in LSDB where (TIE.HEADER > LASTPROCESSED and
           TIE.HEADER < HEADER) into TXKEYS

       4.  LASTPROCESSED = HEADER

       5.  if DBTIE not found then

           I)     if originator is this node then bump_own_tie

           II)    else put HEADER into REQKEYS

       6.  if DBTIE.HEADER < HEADER then

           I)    if originator is this node then bump_own_tie else

                 i.     if this is a North TIE header from a
                        northbound neighbor then override DBTIE in
                        LSDB with HEADER

                 ii.    else put HEADER into REQKEYS

       7.  if DBTIE.HEADER > HEADER then put DBTIE.HEADER into TXKEYS

       8.  if DBTIE.HEADER = HEADER then

           I)     if DBTIE has content already then put DBTIE.HEADER
                  into CLEARKEYS

           II)    else put HEADER into REQKEYS

   c.  put all TIEs in LSDB where (TIE.HEADER > LASTPROCESSED and
       TIE.HEADER <= TIDE.end_range) into TXKEYS

   d.  for all TIEs in TXKEYS try_to_transmit_tie(TIE)

   e.  for all TIEs in REQKEYS request_tie(TIE)

   f.  for all TIEs in CLEARKEYS remove_from_all_queues(TIE)
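
   The steps above can be sketched as follows; a simplified,
   non-normative model with integer TIEIDs and `(tieid, seq_nr)`
   headers, where the `bump_own_tie` branch is approximated by simply
   skipping self-originated TIEIDs and the North TIE override of step
   6 is omitted:

```python
# Sketch of TIDE processing: walk received headers in order against a
# LSDB and sort each TIEID into transmit/request/clear buckets.
def process_tide(lsdb, own, start, end, headers):
    """lsdb: dict tieid -> seq_nr; headers: sorted (tieid, seq_nr)."""
    txkeys, reqkeys, clearkeys = [], [], []
    last = start
    for tieid, seq_nr in headers:
        assert tieid >= last, "TIDE headers must be sorted"
        # anything we hold in (last, tieid) the peer did not list
        txkeys += [t for t in lsdb if last < t < tieid]
        last = tieid
        db_seq = lsdb.get(tieid)
        if db_seq is None:
            if tieid not in own:
                reqkeys.append(tieid)   # peer has something we lack
        elif db_seq < seq_nr:
            reqkeys.append(tieid)       # peer's copy is newer
        elif db_seq > seq_nr:
            txkeys.append(tieid)        # our copy is newer
        else:
            clearkeys.append(tieid)     # in sync: stop retransmitting
    txkeys += [t for t in lsdb if last < t <= end]
    return txkeys, reqkeys, clearkeys

lsdb = {10: 2, 20: 5, 30: 1}
tx, req, clear = process_tide(lsdb, own=set(), start=0, end=100,
                              headers=[(20, 7), (40, 1)])
assert tx == [10, 30]    # 10 and 30 not covered by the peer's TIDE
assert req == [20, 40]   # 20 newer at the peer, 40 unknown locally
assert clear == []
```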

4.2.3.3.1.3.  TIREs

4.2.3.3.1.3.1.  TIRE Generation

   Elements from both TIES_REQ and TIES_ACK MUST be collected and sent
   out as fast as feasible as TIREs.  When sending TIREs with elements
   from TIES_REQ the `lifetime` field MUST be set to 0 to force
   reflooding from the neighbor even if the TIEs seem to be the same.

4.2.3.3.1.3.2.  TIRE Processing

   On reception of TIREs the following processing is performed:

      TXKEYS: Collection of TIE Headers to be sent after processing of
      the packet

      REQKEYS: Collection of TIEIDs to be requested after processing
      of the packet

      ACKKEYS: Collection of TIEIDs that have been acked

      DBTIE: TIE in the LSDB if found

   a.  for every HEADER in TIRE do

       1.  DBTIE = find HEADER in current LSDB

       2.  if DBTIE not found then do nothing

       3.  if DBTIE.HEADER < HEADER then put HEADER into REQKEYS

       4.  if DBTIE.HEADER > HEADER then put DBTIE.HEADER into TXKEYS

       5.  if DBTIE.HEADER = HEADER then put DBTIE.HEADER into ACKKEYS

   b.  for all TIEs in TXKEYS try_to_transmit_tie(TIE)

   c.  for all TIEs in REQKEYS request_tie(TIE)

   d.  for all TIEs in ACKKEYS tie_been_acked(TIE)
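
   TIRE processing reduces to a three-way header comparison per listed
   element; a non-normative sketch with `(tieid, seq_nr)` headers:

```python
# Sketch of TIRE processing: each listed header is compared against
# the LSDB and lands in exactly one of the three buckets.
def process_tire(lsdb, headers):
    """lsdb: dict tieid -> seq_nr; headers: list of (tieid, seq_nr)."""
    txkeys, reqkeys, ackkeys = [], [], []
    for tieid, seq_nr in headers:
        db_seq = lsdb.get(tieid)
        if db_seq is None:
            continue                 # unknown TIE: do nothing
        if db_seq < seq_nr:
            reqkeys.append(tieid)    # peer holds a newer copy
        elif db_seq > seq_nr:
            txkeys.append(tieid)     # our copy is newer: transmit it
        else:
            ackkeys.append(tieid)    # matching version: treat as ack
    return txkeys, reqkeys, ackkeys

lsdb = {1: 4, 2: 4}
tx, req, ack = process_tire(lsdb, [(1, 9), (2, 4), (3, 1)])
assert (tx, req, ack) == ([], [1], [2])
```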

4.2.3.3.1.4.  TIEs Processing on Flood State Adjacency

   On reception of TIEs the following processing is performed:

      ACKTIE: TIE to acknowledge

      TXTIE: TIE to transmit

      DBTIE: TIE in the LSDB if found

   a.  DBTIE = find TIE in current LSDB

   b.  if DBTIE not found then

       1.  if originator is this node then bump_own_tie with a short
           remaining lifetime

       2.  else insert TIE into LSDB and ACKTIE = TIE

       else

       1.  if DBTIE.HEADER = TIE.HEADER then

           i.     if DBTIE has content already then ACKTIE = TIE

           ii.    else process like the "DBTIE.HEADER < TIE.HEADER"
                  case

       2.  if DBTIE.HEADER < TIE.HEADER then

           i.     if originator is this node then bump_own_tie

           ii.    else insert TIE into LSDB and ACKTIE = TIE

       3.  if DBTIE.HEADER > TIE.HEADER then

           i.     if DBTIE has content already then TXTIE = DBTIE

           ii.    else ACKTIE = DBTIE

   c.  if TXTIE is set then try_to_transmit_tie(TXTIE)

   d.  if ACKTIE is set then ack_tie(TIE)
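
   The decision tree above can be sketched as follows; a simplified,
   non-normative model where `bump_own_tie` is approximated by
   re-originating with a higher sequence number and the
   content/content-less distinction is omitted:

```python
# Sketch of TIE reception on an adjacency: decide whether to store the
# TIE, acknowledge it, or transmit our newer copy back. TIEs are
# (tieid, seq_nr); self-originated TIEIDs are tracked in `own`.
def process_tie(lsdb, own, tie):
    tieid, seq_nr = tie
    acktie = txtie = None
    db_seq = lsdb.get(tieid)
    if db_seq is None or db_seq < seq_nr:
        if tieid in own:
            lsdb[tieid] = seq_nr + 1    # loose bump_own_tie model:
            txtie = (tieid, seq_nr + 1) # reassert ownership
        else:
            lsdb[tieid] = seq_nr        # new information: store, ack
            acktie = tie
    elif db_seq > seq_nr:
        txtie = (tieid, db_seq)         # our copy is newer: send it
    else:
        acktie = tie                    # same version: just ack
    return lsdb, txtie, acktie

lsdb, tx, ack = process_tie({"a": 3}, own=set(), tie=("a", 5))
assert lsdb["a"] == 5 and ack == ("a", 5) and tx is None
lsdb, tx, ack = process_tie({"a": 7}, own=set(), tie=("a", 5))
assert tx == ("a", 7) and ack is None
```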

4.2.3.3.1.5.  TIEs Processing When LSDB Received Newer Version on Other
              Adjacencies

   The Link State Database can be considered to be a switchboard that
   does not need any flooding procedures but can be given new versions
   of TIEs by a peer.  Consecutively, a peer receives from the LSDB
   newer versions of TIEs received by other peers and processes them
   (without any filtering) just like receiving TIEs from its remote
   peer.  This publisher model can be implemented in many ways.

4.2.3.3.1.6.  Sending TIEs

   On a periodic basis all TIEs with lifetime left > 0 MUST be sent
   out on the adjacency, removed from TIES_TX list and requeued onto
   TIES_RTX list.
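
   The periodic send pass can be sketched as follows; a non-normative
   model assuming TIES_TX and TIES_RTX are dictionaries keyed by TIEID
   and an assumed retransmission interval parameter:

```python
# Sketch of the periodic send pass: every TIE with remaining lifetime
# is put on the wire, dequeued from TIES_TX and requeued on TIES_RTX
# with its retransmission deadline.
def periodic_send(ties_tx, ties_rtx, now, rtx_interval=1.0):
    sent = []
    for tieid, (seq_nr, lifetime) in list(ties_tx.items()):
        if lifetime > 0:
            sent.append(tieid)                    # transmit on adjacency
            del ties_tx[tieid]
            ties_rtx[tieid] = (seq_nr, now + rtx_interval)
    return sent

tx = {"a": (1, 600), "b": (2, 0)}
rtx = {}
assert periodic_send(tx, rtx, now=100.0) == ["a"]  # "b" has no lifetime
assert "a" in rtx and "a" not in tx
```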

4.2.3.4.  TIE Flooding Scopes

   In a somewhat analogous fashion to link-local, area and domain
   flooding scopes, RIFT defines several complex "flooding scopes"
   depending on the direction and type of TIE propagated.

   Every North TIE is flooded northbound, providing a node at a given
   level with the complete topology of the Clos or Fat Tree network
   underneath it, including all specific prefixes.  This means that a
   packet received from a node at the same or lower level whose
   destination is covered by one of those specific prefixes may be
   routed directly towards the node advertising that prefix rather
   than sending the packet to a node at a higher level.

   A node's Node South TIEs, consisting of all node's adjacencies and
   prefix South TIEs limited to those related to default IP prefix and
   disaggregated prefixes, are flooded southbound in order to allow
   the nodes one level down to see connectivity of the higher level as
   well as reachability to the rest of the fabric.  In order to allow
   an E-W disconnected node in a given level to receive the South TIEs
   of other nodes at its level, every *NODE* South TIE is "reflected"
   northbound to the level from which it was received.  It should be
   noted that East-West links are included in South TIE flooding
   (except at ToF level); those TIEs need to be flooded to satisfy
   algorithms in Section 4.2.4.  In that way nodes at the same level
   can learn about each other without a lower level, e.g. in case of
   leaf level.  The precise, normative flooding scopes are given in
   Table 3.  Those rules govern as well what SHOULD be included in
   TIDEs on the adjacency.  Again, East-West flooding scopes are
   identical to South flooding scopes except in case of ToF East-West
   links (rings) which are basically performing northbound flooding.

   Node South TIE "south reflection" allows to
   of the fabric temporarily see themselves lower than they belong.
   Since flooding can begin before ZTP is "finished" and support positive
   disaggregation on failures describes in fact must do
   so given there is no global termination criteria, information may end
   up Section 4.2.5 and flooding
   reduction in wrong layers.  A special clause when changing Section 4.2.3.9.

  +-----------+----------------------+---------------+-----------------+
  | Type /    | South                | North         | East-West       |
  | Direction |                      |               |                 |
  +-----------+----------------------+---------------+-----------------+
  | node      | flood if level of    | flood if      | flood only if   |
  | South TIE | originator is equal  | level of      | this node is    |
  |           | to this node         | originator is | not ToF         |
  |           |                      | higher than   |                 |
  |           |                      | this node     |                 |
  +-----------+----------------------+---------------+-----------------+
  | non-node  | flood self-          | flood only if | flood only if   |
  | South TIE | originated only      | neighbor is   | self-originated |
  |           |                      | originator of | and this node   |
  |           |                      | TIE           | is not ToF      |
  +-----------+----------------------+---------------+-----------------+
  | all North | never flood          | flood always  | flood only if   |
  | TIEs      |                      |               | this node is    |
  |           |                      |               | ToF             |
  +-----------+----------------------+---------------+-----------------+
  | TIDE      | include at least all | include at    | if this node is |
  |           | non-self originated  | least all     | ToF then        |
  |           | North TIE headers    | node South    | include all     |
  |           | and self-originated  | TIEs and all  | North TIEs,     |
  |           | South TIE headers    | South TIEs    | otherwise only  |
  |           | and node South TIEs  | originated by | self-originated |
  |           | of nodes at same     | peer and all  | TIEs            |
  |           | level                | North TIEs    |                 |
  +-----------+----------------------+---------------+-----------------+
  | TIRE as   | request all North    | request all   | if this node is |
  | Request   | TIEs and all peer's  | South TIEs    | ToF then apply  |
  |           | self-originated TIEs |               | North scope     |
  |           | and all node South   |               | rules,          |
  |           | TIEs                 |               | otherwise South |
  |           |                      |               | scope rules     |
  +-----------+----------------------+---------------+-----------------+
  | TIRE as   | Ack all received     | Ack all       | Ack all         |
  | Ack       | TIEs                 | received TIEs | received TIEs   |
  +-----------+----------------------+---------------+-----------------+

                    Table 3: Normative Flooding Scopes
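   The first three rows of Table 3 can be restated as a small
   predicate.  The following is a non-normative sketch in Python; the
   `Node` structure and the string-valued directions are illustrative
   and not part of the protocol schema:

```python
from dataclasses import dataclass

@dataclass
class Node:
    level: int
    is_tof: bool      # True if this node is Top-of-Fabric
    system_id: int

def flood_node_south_tie(me: Node, originator_level: int,
                         direction: str) -> bool:
    """Table 3, row 'node South TIE' (non-normative sketch)."""
    if direction == "south":
        # flood if level of originator is equal to this node
        return originator_level == me.level
    if direction == "north":
        # flood if level of originator is higher than this node
        return originator_level > me.level
    if direction == "east-west":
        # flood only if this node is not ToF
        return not me.is_tof
    return False

def flood_north_tie(me: Node, direction: str) -> bool:
    """Table 3, row 'all North TIEs' (non-normative sketch)."""
    if direction == "south":
        return False          # never flood North TIEs southbound
    if direction == "north":
        return True           # flood always
    if direction == "east-west":
        return me.is_tof      # only over ToF East-West rings
    return False
```

   Note how the East-West column mirrors the South rules for South TIEs
   but the North rules for North TIEs at ToF, matching the "rings
   perform basically northbound flooding" observation above.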

   If the TIDE includes additional TIE headers beside the ones
   specified, the receiving neighbor must apply the according filter to
   the received TIDE strictly and MUST NOT request the extra TIE
   headers that were not allowed by the flooding scope rules in its
   direction.

   As an example to illustrate these rules, consider using the topology
   in Figure 2, with the optional link between spine 111 and spine 112,
   and the associated TIEs given in Figure 14.  The flooding from
   particular nodes of the TIEs is given in Table 4.

   +-----------+----------+--------------------------------------------+
   | Router    | Neighbor | TIEs                                       |
   | floods to |          |                                            |
   +-----------+----------+--------------------------------------------+
   | Leaf111   | Spine    | Leaf111 North TIEs, Spine 111 node South   |
   |           | 112      | TIE                                        |
   | Leaf111   | Spine    | Leaf111 North TIEs, Spine 112 node South   |
   |           | 111      | TIE                                        |
   |           |          |                                            |
   | Spine 111 | Leaf111  | Spine 111 South TIEs                       |
   | Spine 111 | Leaf112  | Spine 111 South TIEs                       |
   | Spine 111 | Spine    | Spine 111 South TIEs                       |
   |           | 112      |                                            |
   | Spine 111 | ToF 21   | Spine 111 North TIEs, Leaf111 North TIEs,  |
   |           |          | Leaf112 North TIEs, ToF 22 node South TIE  |
   | Spine 111 | ToF 22   | Spine 111 North TIEs, Leaf111 North TIEs,  |
   |           |          | Leaf112 North TIEs, ToF 21 node South TIE  |
   |           |          |                                            |
   | ...       | ...      | ...                                        |
   | ToF 21    | Spine    | ToF 21 South TIEs                          |
   |           | 111      |                                            |
   | ToF 21    | Spine    | ToF 21 South TIEs                          |
   |           | 112      |                                            |
   | ToF 21    | Spine    | ToF 21 South TIEs                          |
   |           | 121      |                                            |
   | ToF 21    | Spine    | ToF 21 South TIEs                          |
   |           | 122      |                                            |
   | ...       | ...      | ...                                        |
   +-----------+----------+--------------------------------------------+

             Table 4: Flooding some TIEs from example topology

4.2.3.5.  'Flood Only Node TIEs' Bit

   RIFT includes an optional ECN mechanism to prevent "flooding inrush"
   on restart or bring-up with many southbound neighbors.  A node MAY
   set on its LIEs the according bit to indicate to the neighbor that
   it should temporarily flood node TIEs only to it.  It SHOULD only
   set it in the southbound direction.  The receiving node SHOULD
   accommodate the request to lessen the flooding load on the affected
   node if south of the sender and SHOULD ignore the bit if northbound.

   Obviously this mechanism is most useful in the southbound direction.
   The distribution of node TIEs guarantees correct behavior of
   algorithms like disaggregation or default route origination.
   Furthermore though, the use of this bit presents an inherent trade-
   off between processing load and convergence speed since suppressing
   flooding of northbound prefixes from neighbors will lead to
   blackholes.
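   The receiver-side behavior of the bit can be sketched as a filter
   over the outgoing TIE set.  The names below are illustrative,
   assuming each outgoing TIE is tagged with a type string:

```python
def ties_to_flood(ties, neighbor_wants_node_ties_only: bool,
                  neighbor_is_south: bool):
    """Filter outgoing TIEs for one neighbor, honoring the ECN bit.

    Non-normative sketch: `ties` is an iterable of (tie_type, tie)
    pairs where tie_type "node" marks node TIEs.  The request is
    honored only for southbound neighbors; it is ignored northbound.
    """
    honor = neighbor_wants_node_ties_only and neighbor_is_south
    for tie_type, tie in ties:
        if honor and tie_type != "node":
            # temporarily withhold non-node TIEs to lessen the inrush
            continue
        yield tie
```

   A restarting leaf would set the bit towards its spines; the spines,
   being north of it, would ignore it, while the leaf honors the bit if
   a spine set it towards the leaf.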

4.2.3.6.  Initial and Periodic Database Synchronization

   The initial exchange of RIFT is modeled after ISIS with TIDE being
   equivalent to CSNP and TIRE playing the role of PSNP.  The content
   of TIDEs and TIREs is governed by Table 3.

4.2.3.7.  Purging and Roll-Overs

   RIFT does not purge information that has been distributed by the
   protocol.  Purging mechanisms in other routing protocols have proven
   to be complex and fragile over many years of experience.  Abundant
   amounts of memory are available today even on low-end platforms.
   The information will age out and all computations will deliver
   correct results if a node leaves the network due to the new
   information distributed by its adjacent nodes.

   Once a RIFT node issues a TIE with an ID, it MUST preserve the ID as
   long as feasible (also when the protocol restarts), even if the TIE
   loses all content.  The re-advertisement of an empty TIE fulfills
   the purpose of purging any information advertised in previous
   versions.  The originator is free to not re-originate the according
   empty TIE again or originate an empty TIE with relatively short
   lifetime to prevent a large number of long-lived empty stubs from
   polluting the network.  Each node MUST timeout and clean up the
   according empty TIEs independently.

   Upon restart a node MUST, as any link-state implementation, be
   prepared to receive TIEs with its own system ID and supersede them
   with equivalent, newly generated, empty TIEs with a higher sequence
   number.  As above, the lifetime can be relatively short since it
   only needs to exceed the necessary propagation and processing delay
   by all the nodes that are within the TIE's flooding scope.

   TIE sequence numbers are rolled over using the method described in
   Appendix A.  The first sequence number of any spontaneously
   originated TIE (i.e. not originated to override a detected older
   copy in the network) MUST be a reasonably unpredictable random
   number in the interval [0, 2^10-1], which will prevent otherwise
   identical TIE headers from remaining "stuck" in the network with
   content different from TIE originated after reboot.
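   A minimal sketch of drawing such an initial sequence number, using
   Python's `secrets` module for unpredictability (the constant name is
   illustrative, not from the schema):

```python
import secrets

# Initial value is drawn from [0, 2^10 - 1] per the text above.
SEQ_NR_RANDOM_BITS = 10

def initial_tie_seq_nr() -> int:
    """First sequence number of a spontaneously originated TIE.

    Non-normative sketch: an unpredictable starting point prevents a
    rebooted node's fresh TIE headers from looking identical to stale
    pre-reboot copies still circulating in the network.
    """
    return secrets.randbelow(1 << SEQ_NR_RANDOM_BITS)
```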

4.2.3.8.  Southbound Default Route Origination

   Under certain conditions nodes issue a default route in their South
   Prefix TIEs with costs as computed in Section 4.3.6.1.

   A node X that

   1.  is NOT overloaded AND

   2.  has southbound or East-West adjacencies

   originates in its South Prefix TIE such a default route IIF

   1.  all other nodes at X's level are overloaded OR

   2.  all other nodes at X's level have NO northbound adjacencies OR

   3.  X has computed reachability to a default route during N-SPF.

   The term "all other nodes at X's level" describes obviously just the
   nodes at the same level in the PoD with a viable lower level
   (otherwise the node South TIEs cannot be reflected and the nodes in
   e.g.  PoD 1 and PoD 2 are "invisible" to each other).

   A node originating a southbound default route MUST install a default
   discard route if it did not compute a default route during N-SPF.
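   The origination conditions above can be restated as a predicate.
   The following is a non-normative sketch with flattened, hypothetical
   inputs standing in for state learned via reflected node South TIEs:

```python
def originates_default(not_overloaded: bool,
                       has_south_or_ew_adjacency: bool,
                       peer_overloaded: list,
                       peer_north_adjacency_counts: list,
                       reaches_default_via_nspf: bool) -> bool:
    """Southbound default route origination conditions (sketch).

    `peer_overloaded` and `peer_north_adjacency_counts` describe all
    other reachable nodes at X's level; parameter names are
    illustrative, not from the protocol schema.
    """
    # X itself must not be overloaded and must have somewhere to
    # flood the default route to.
    if not (not_overloaded and has_south_or_ew_adjacency):
        return False
    all_peers_overloaded = (len(peer_overloaded) > 0
                            and all(peer_overloaded))
    no_peer_has_north = all(c == 0
                            for c in peer_north_adjacency_counts)
    return (all_peers_overloaded
            or no_peer_has_north
            or reaches_default_via_nspf)
```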

4.2.3.9.  Northbound TIE Flooding Reduction

   Section 1.4 of the Optimized Link State Routing Protocol [RFC3626]
   (OLSR) introduces the concept of a "multipoint relay" (MPR) that
   minimizes the overhead of flooding messages in the network by
   reducing redundant retransmissions in the same region.

   A similar technique is applied to RIFT to control northbound
   flooding.  Important observations first:

   1.  a node MUST flood self-originated North TIEs to all the
       reachable nodes at the level above which we call the node's
       "parents";

   2.  it is typically not necessary that all parents reflood the North
       TIEs to achieve a complete flooding of all the reachable nodes
       two levels above which we choose to call the node's
       "grandparents";

   3.  to control the volume of its flooding two hops North and yet
       keep it robust enough, it is advantageous for a node to select a
       subset of its parents as "Flood Repeaters" (FRs), which combined
       together deliver two or more copies of its flooding to all of
       its parents, i.e. the originating node's grandparents;

   4.  nodes at the same level do NOT have to agree on a specific
       algorithm to select the FRs, but overall load balancing should
       be achieved so that different nodes at the same level should
       tend to select different parents as FRs;

   5.  there are usually many solutions to the problem of finding a set
       of FRs for a given node; the problem of finding the minimal set
       is (similar to) a NP-Complete problem and a globally optimal set
       may not be the minimal one, but load-balancing with other nodes
       is an important consideration;

   6.  it is expected that there will often be sets of equivalent nodes
       at a level L, defined as having a common set of parents at L+1.
       Applying this observation at both L and L+1, an algorithm may
       attempt to split the larger problem in a sum of smaller separate
       problems;

   7.  it is another expectation that there will be from time to time a
       broken link between a parent and a grandparent, and in that case
       the parent is probably a poor FR due to its lower reliability.
       An algorithm may attempt to eliminate parents with broken
       northbound adjacencies first in order to reduce the number of
       FRs.  Albeit it could be argued that relying on higher fanout
       FRs will slow flooding down due to higher replication load, the
       reliability of an FR's links seems to be a more pressing
       concern.

   In a fully connected Clos Network, this means that a node selects
   one arbitrary parent as FR and then a second one for redundancy.
   The computation can be kept relatively simple and completely
   distributed without any need for synchronization amongst nodes.  In
   a "PoD" structure, where the Level L+2 is partitioned in silos of
   equivalent grandparents that are only reachable from respective
   parents, this means treating each silo as a fully connected Clos
   Network and solving the problem within the silo.

   In terms of signaling, a node has enough information to select its
   set of FRs; this information is derived from the node's parents'
   Node South TIEs, which indicate the parent's reachable northbound
   adjacencies to its own parents, i.e. the node's grandparents.  A
   node may send a LIE to a northbound neighbor with the optional
   boolean field `you_are_flood_repeater` set to false, to indicate
   that the northbound neighbor is not a flood repeater for the node
   that sent the LIE.  In that case the northbound neighbor SHOULD NOT
   reflood northbound TIEs received from the node that sent the LIE.
   If `you_are_flood_repeater` is absent or set to true, then the
   northbound neighbor is a flood repeater for the node that sent the
   LIE and MUST reflood northbound TIEs received from that node.

   This specification proposes a simple default algorithm that SHOULD
   be implemented and used by default on every RIFT node.

   o  let |NA(Node) be the means set of dual-rings to distribute all the
   N-TIEs within both planes.  For people familiar with traditional
   link-state routing protocols ToF level can Northbound adjacencies of node Node
      and CN(Node) be considered equivalent
   to area 0 in OSPF or level-2 in ISIS which need to the cardinality of |NA(Node);

   o  let |SA(Node) be "connected" as
   well for the protocol to operate correctly.

             .     ++==========++          ++==========++
             .     II          II          II          II
             .+----++--+  +----++--+  +----++--+  +----++--+
             .|ToF   A1|  |ToF   B1|  |ToF   B2|  |ToF   A2|
             .++-+-++--+  ++-+-++--+  ++-+-++--+  ++-+-++--+
             . | | II      | | II      | | II      | | II
      set of Southbound adjacencies of node Node and CS(Node) be the
      cardinality of |SA(Node);

   o  let |P(Node) be the set of node Node's parents;

   o  let |G(Node) be the set of node Node's grandparents.  Observe
      that |G(Node) = |P(|P(Node));

   o  let N be a node at level L computing a set of FR;

   o  let P be a parent node of N, i.e. bi-directionally reachable over
      adjacency ADJ(N, P);

   o  let G be a grandparent node of N, reachable transitively via a
      parent P over adjacencies ADJ(N, P) and ADJ(P, G).  Observe that
      N does not have enough information to check bidirectional
      reachability of ADJ(P, G);

   o  let R be a redundancy constant integer; a value of 2 or higher
      for R is RECOMMENDED;

   o  let S be a similarity constant integer; a value in range 0 .. 2
      for S is RECOMMENDED, the value of 1 SHOULD be used.  Two
      cardinalities are considered as equivalent if their absolute
      difference is less than or equal to S, i.e.  |a-b| <= S.

   o  let RND be a 64-bit random number generated by the system once on
      startup.

   The algorithm consists of the following steps:

   1.  Derive a 64-bits number by XOR'ing 'N's system ID with RND.

   2.  Derive a 16-bits pseudo-random unsigned integer PR(N) from the
       resulting 64-bits number by splitting it in 16-bits-long words
       W1, W2, W3, W4 (where W1 are the least significant 16 bits of
       the 64-bits number, and W4 are the most significant 16 bits) and
       then XOR'ing the circularly shifted resulting words together:

       A.  (W1<<1) xor (W2<<2) xor (W3<<3) xor (W4<<4);

           where << is the circular shift operator.

   3.  Sort the parents by decreasing number of northbound adjacencies
       (using decreasing system id of the parent as tie-breaker): sort
       |P(N) by decreasing CN(P), for all P in |P(N), as ordered array
       |A(N)

   4.  Partition |A(N) in subarrays |A_k(N) of parents with equivalent
       cardinality of northbound adjacencies (in other words with
       equivalent number of grandparents they can reach):

       A.  set k=0; // k is the ID of the subarray

       B.  set i=0;

       C.  while i < CN(N) do

           i)     set j=i;

           ii)    while i < CN(N) and CN(|A(N)[j]) - CN(|A(N)[i]) <= S

                  a.  place |A(N)[i] in |A_k(N) // abstract action,
                      maybe noop

                  b.  set i=i+1;

           iii)   /* At this point j is the index in |A(N) of the
                  first member of |A_k(N) and (i-j) is C_k(N) defined
                  as the cardinality of |A_k(N) */

                  set k=k+1;

       /* At this point k is the total number of subarrays, initialized
       for the shuffling operation below */

   5.  shuffle individually each subarray |A_k(N) of cardinality
       C_k(N) within |A(N) using the Durstenfeld variation of the
       Fisher-Yates algorithm that depends on N's System ID:

       A.  while k > 0 do

           i)    for i from C_k(N)-1 to 1 decrementing by 1 do

                 a.  set j to PR(N) modulo i;

                 b.  exchange |A_k[j] and |A_k[i];

           ii)   set k=k-1;

   6.  For each grandparent G, initialize a counter c(G) with the
       number of its south-bound adjacencies to elected flood repeaters
       (which is initially zero):

       A.  for each G in |G(N) set c(G) = 0;

   7.  Finally keep as FRs only parents that are needed to maintain the
       number of adjacencies between the FRs and any grandparent G
       equal or above the redundancy constant R:

       A.  for each P in reshuffled |A(N);

           i)   if there exists an adjacency ADJ(P, G) in |NA(P) such
                that c(G) < R then

                a.  place P in FR set;

                b.  for all adjacencies ADJ(P, G') in |NA(P) increment
                    c(G')

       B.  If any c(G) is still < R, it was not possible to elect a
           set of FRs that covers all grandparents with redundancy R

   Additional rules for flooding reduction:

   1.  The algorithm MUST be re-evaluated by a node on every change of
       local adjacencies or reception of a parent South TIE with
       changed adjacencies.  A node MAY apply a hysteresis to prevent
       excessive amount of computation during periods of network
       instability just like in case of reachability computation.

   2.  A node SHOULD send out LIEs that grant flood repeater status
       before LIEs that revoke it on flood repeater set changes to
       prevent transient behavior where the full coverage of
       grandparents is not guaranteed.  Albeit the condition will
       correct in positively stable manner due to LIE retransmission
       and periodic TIDEs, it can slow down flooding convergence on
       flood repeater status changes.

   3.  A node MUST always flood its self-originated TIEs.

   4.  A node receiving a TIE originated by a node for which it is not
       a flood repeater does NOT re-flood such TIEs to its neighbors
       except for rules in Paragraph 6.

   5.  The indication of flood reduction capability MUST be carried in
       the node TIEs and CAN be used to optimize the algorithm to
       account for nodes that will flood regardless.

   6.  A node generates TIDEs as usual but when receiving TIREs or
       TIDEs resulting in requests for a TIE of which the newest
       received copy came on an adjacency where the node was not flood
       repeater it SHOULD ignore such requests on the first and only
       the first request.  Normally, the nodes that received the TIEs
       as flooding repeaters should satisfy the requesting node and
       with that no further TIREs for such TIEs will be generated.
       Otherwise, the next set of TIDEs and TIREs MUST lead to
       flooding independent of the flood repeater status.  This solves
       a very difficult incast problem on nodes restarting with a very
       wide fanout, especially northbound.  To retrieve the full
       database they often end up processing many in-rushing copies
       whereas this approach should load-balance the incoming database
       between adjacent nodes and flood repeaters should guarantee
       that two copies are sent by different nodes to ensure against
       any losses.
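   The request-handling behavior of rule 6 can be sketched as a small
   decision function.  This is a non-normative illustration; the
   function name and parameters are inventions of this sketch:

```python
# Rule 6 sketch: a request for a TIE whose newest copy arrived on an
# adjacency where this node is NOT flood repeater is ignored once, on
# the first request only; any repeated request is answered.

def should_flood_on_request(tie_id, newest_copy_from_fr_adjacency,
                            ignored_once):
    """Return (flood_now, updated set of TIEs already ignored once)."""
    if newest_copy_from_fr_adjacency or tie_id in ignored_once:
        return True, ignored_once
    return False, ignored_once | {tie_id}
```

   The per-TIE "ignored once" state lets the flood repeaters answer
   first, while the second request still forces flooding so no TIE can
   be withheld indefinitely.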

4.2.3.10.  Special Considerations

   First, due to the distributed, asynchronous nature of ZTP, it can
   create temporary convergence anomalies where nodes at higher levels
   of the fabric temporarily see themselves lower than they belong.
   Since flooding can begin before ZTP is "finished" and in fact must
   do so given there is no global termination criteria, information
   may end up in the wrong layers.  A special clause when changing
   level takes care of that.

   More difficult is a condition where a node floods a TIE north
   towards a super-spine, then its spine reboots, in fact partitioning
   the super-spine from it directly, and then the node itself reboots.
   That leaves, in a sense, the super-spine holding the "primary copy"
   of the node's TIE.  Normally this condition is resolved easily by
   the node re-originating its TIE with a higher sequence number than
   it sees in northbound TIEs; here however, when the spine comes back
   it won't be able to obtain a North TIE from its super-spine easily
   and with that the node below may issue the same version of the TIE
   with a lower sequence number.  Flooding procedures are extended to
   deal with the problem by the means of special clauses that override
   the database of a lower level with headers of newer TIEs seen in
   TIDEs coming from the north.

4.2.4.  Reachability Computation

   A node has three possible sources of relevant information for
   reachability computation.  A node knows the full topology south of
   it from the received North Node TIEs or alternately north of it
   from the South Node TIEs.  A node has the set of prefixes with
   their associated distances and bandwidths from corresponding prefix
   TIEs.

   To compute prefix reachability, a node runs conceptually a
   northbound and a southbound SPF.  We call that N-SPF and S-SPF
   denoting the direction in which the computation front is
   progressing.

   Since neither computation can "loop", it is possible to compute
   non-equal-cost or even k-shortest paths [EPPSTEIN] and "saturate"
   the fabric to the extent desired but we use simple, familiar SPF
   algorithms and concepts here as example due to their prevalence in
   today's routing.

4.2.4.1.  Northbound SPF

   N-SPF MUST use ONLY northbound and East-West adjacencies in the
   computing node's node North TIEs (since if the node is a leaf it
   may not have generated a node South TIE) when starting SPF.
   Observe that N-SPF is really just a one hop variety since Node
   South TIEs are not re-flooded southbound beyond a single level (or
   East-West) and with that the computation cannot progress beyond
   adjacent nodes.

   Once progressing, we are using the next higher level's node South
   TIEs to find according adjacencies to verify backlink connectivity.
   Just as in the case of IS-IS or OSPF, two unidirectional links MUST
   be associated together to confirm bidirectional connectivity.
   Particular care MUST be paid that the Node TIEs do not only contain
   the correct system IDs but matching levels as well.

   Default route found when crossing an E-W link SHOULD be used IIF

   1.  the node itself does NOT have any northbound adjacencies AND

   2.  the adjacent node has one or more northbound adjacencies

   This rule forms a "one-hop default route split-horizon" and
   prevents looping over default routes while allowing for "one-hop
   protection" of nodes that lost all northbound adjacencies except at
   Top-of-Fabric where the links are used exclusively to flood
   topology information in multi-plane designs.

   Other south prefixes found when crossing an E-W link MAY be used
   IIF

   1.  no north neighbors are advertising same or supersuming non-
       default prefix AND

   2.  the node does not originate a non-default supersuming prefix
       itself.

   i.e. the E-W link can be used as a gateway of last resort for a
   specific prefix only.  Using south prefixes across an E-W link can
   be beneficial e.g. on automatic de-aggregation in pathological
   fabric partitioning scenarios.

   A detailed example can be found in Section 5.4.
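   The two eligibility rules above can be expressed as simple
   predicates.  This is a non-normative Python sketch; the function
   names and the use of the standard ipaddress module for prefix
   containment are inventions of this illustration:

```python
import ipaddress

def supersumes(covering, covered):
    """True if 'covering' equals or is a supernet of 'covered'."""
    a = ipaddress.ip_network(covering)
    b = ipaddress.ip_network(covered)
    return a.version == b.version and b.subnet_of(a)

DEFAULTS = {'::/0', '0.0.0.0/0'}

def use_ew_default(own_north_adjacencies, neighbor_north_adjacencies):
    """Default route over an E-W link: usable only if this node lost
    ALL northbound adjacencies while the E-W neighbor kept at least
    one (the one-hop default route split-horizon)."""
    return not own_north_adjacencies and bool(neighbor_north_adjacencies)

def use_ew_south_prefix(prefix, north_advertised, self_originated):
    """Non-default south prefix over an E-W link: usable only if no
    north neighbor advertises and the node itself does not originate
    the same or a supersuming non-default prefix."""
    candidates = [p for p in list(north_advertised) + list(self_originated)
                  if p not in DEFAULTS]
    return not any(supersumes(p, prefix) for p in candidates)
```

   Note that default prefixes are excluded from the supersuming check,
   matching the "non-default" qualification in both conditions.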

4.2.4.2.  Southbound SPF

   S-SPF MUST use ONLY the southbound adjacencies in the node South
   TIEs, i.e. it progresses towards nodes at lower levels.  Observe
   that E-W adjacencies are NEVER used in the computation.  This
   enforces the requirement that a packet traversing in a southbound
   direction must never change its direction.

   S-SPF MUST use northbound adjacencies in node North TIEs to verify
   backlink connectivity by checking for presence of the link beside
   the correct SystemID and level.
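   The backlink check performed by both SPF directions can be sketched
   as follows.  This is a non-normative illustration; the LinkEnd type
   and function name are inventions of this sketch:

```python
from dataclasses import dataclass

# An adjacency reported in one node's TIE counts only if the
# neighbor's TIE reports the reverse link with matching system IDs
# AND matching levels.

@dataclass(frozen=True)
class LinkEnd:
    system_id: int
    level: int
    neighbor_system_id: int
    neighbor_level: int

def backlink_verified(link, neighbor_links):
    """True if some link in the neighbor's Node TIE mirrors 'link'."""
    return any(rev.system_id == link.neighbor_system_id
               and rev.level == link.neighbor_level
               and rev.neighbor_system_id == link.system_id
               and rev.neighbor_level == link.level
               for rev in neighbor_links)
```

   Requiring the level to match, not just the system ID, is what the
   "particular care" clause in Section 4.2.4.1 refers to.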

4.2.4.3.  East-West Forwarding Within a non-ToF Level

   Using south prefixes over horizontal links MAY occur if the N-SPF
   includes East-West adjacencies in the computation.  It can protect
   against pathological fabric partitioning cases that leave only
   paths to destinations that would necessitate multiple changes of
   forwarding direction between north and south.

4.2.4.4.  East-West Links Within ToF Level

   E-W ToF links behave in terms of flooding scopes defined in
   Section 4.2.3.4 like northbound links and MUST be used for control
   plane information flooding ONLY.  Even though a ToF node could be
   tempted to use those links during southbound SPF and carry traffic
   over them, this MUST NOT be attempted since it may lead, e.g. in
   anycast cases, to routing loops.  An implementation MAY try to
   resolve the looping problem by following on the ring strictly tie-
   broken shortest-paths only, but the details are outside this
   specification.  And even then, the problem of proper capacity
   provisioning of such links when they become traffic-bearing in case
   of failures is vexing.

4.2.5.  Automatic Disaggregation on Link & Node Failures

4.2.5.1.  Positive, Non-transitive Disaggregation

   Under normal circumstances, node's South TIEs contain just the
   adjacencies and a default route.  However, if a node detects that time, S4 would also disaggregate its
   default IP prefix
   2001:db8:1::/48.  This would mean covers one or more prefixes that are reachable
   through it but not through one or more other nodes at the FIB entry same level,
   then it MUST explicitly advertise those prefixes in an South TIE.
   Otherwise, some percentage of the northbound traffic for
   2001:db8:1::/48 becomes a discard route, and that those
   prefixes would be sent to nodes without according reachability,
   causing it to be black-holed.  Even when not black-holing, the signal
   for T1
   resulting forwarding could 'backhaul' packets through the higher
   level spines, clearly an undesirable condition affecting the blocking
   probabilities of the fabric.

   We refer to disaggregate prefix 2001:db8:1::/48 negatively in a
   transitive fashion with its own children.

   Finally, let us look at the case where S3 becomes available again process of advertising additional prefixes southbound
   as
   a default gateway, and a negative advertisement 'positive de-aggregation' or 'positive dis-aggregation'.  Such
   dis-aggregation is received from S4
   about prefix 2001:db8:2::/48 as opposed non-transitive, i.e. its' effects are always
   contained to 2001:db8:1::/48.  Again, a
   negative route is stored in the RIB, and single level of the more specific route fabric only.  Naturally, multiple
   node or link failures can lead to several independent instances of
   positive dis-aggregation necessary to prevent looping or bow-tying
   the complementing ToF nodes are installed fabric.

   A node determines the set of prefixes needing de-aggregation using
   the following steps:

   1.  A DAG computation in FIB.  Since
   2001:db8:2::/48 inherits from 2001:db8::/32, the positive FIB routes southern direction is performed first,
       i.e. the North TIEs are chosen by removing S4 from S2, S3, S4.  The abstract FIB in T1
   now shows as illustrated in Figure 25:

                                                +-----------------+
                                                | 2001:db8:2::/48 |
                                                +-----------------+
                                                        |
  +---------+       +---------------+    +-----------------+
  | Default |       | 2001:db8::/32 |    | 2001:db8:1::/48 |
  +---------+       +---------------+    +-----------------+
       |                    |                    |      |
       |     +--------+     |                    |      |     +--------+
       +---> | Via S1 |     |                    |      +---> | Via S2 |
       |     +--------+     |                    |      |     +--------+
       |                    |                    |      |
       |     +--------+     |     +--------+     |      |     +--------+
       +---> | Via S2 |     +---> | Via S2 |     |      +---> | Via S3 |
       |     +--------+     |     +--------+     |            +--------+
       |                    |                    |
       |     +--------+     |     +--------+     |     +--------+
       +---> | Via S3 |     +---> | Via S3 |     +---> | Via S3 |
       |     +--------+     |     +--------+     |     +--------+
       |                    |                    |
       |     +--------+     |     +--------+     |     +--------+
       +---> | Via S4 |     +---> | Via S4 |     +---> | Via S4 |
             +--------+            +--------+          +--------+

      Figure 25: Abstract FIB after negative 2001:db8:2::/48 from S4

    1.  The node computes the set of prefixes it can reach and the set
        of next-hops in the lower level for each of them.  Such a
        computation can be easily performed on a fat tree by e.g.
        setting all link costs in the southern direction to 1 and all
        northern directions to infinity.  We term the set of those
        prefixes |R, and for each prefix, r, in |R, we define its set
        of next-hops to be |H(r).

    2.  The node uses reflected South TIEs to find all nodes at the
        same level in the same PoD and the set of southbound
        adjacencies for each.  The set of nodes at the same level is
        termed |N and for each node, n, in |N, we define its set of
        southbound adjacencies to be |A(n).

    3.  For a given r, if the intersection of |H(r) and |A(n), for any
        n, is null then that prefix r must be explicitly advertised by
        the node in a South TIE.

    4.  An identical set of de-aggregated prefixes is flooded on each
        of the node's southbound adjacencies.  In accordance with the
        normal flooding rules for a South TIE, a node at the lower
        level that receives this South TIE SHOULD NOT propagate it
        south-bound or reflect the disaggregated prefixes back over
        its adjacencies to nodes at the level from which it was
        received.

   To summarize the above in simplest terms: if a node detects that
   its default route encompasses prefixes for which one of the nodes
   in its level has no possible next-hops in the level below, it has
   to disaggregate them to prevent black-holing or suboptimal routing
   through such nodes.  Hence a node X needs to determine if it can
   reach a different set of south neighbors than nodes at the same
   level, which are connected to it via at least one common south
   neighbor.  If it can, then prefix disaggregation may be required.
   If it can't, then no prefix disaggregation is needed.  An example
   of disaggregation is provided in Section 5.3.

   A possible algorithm is described last:

    1.  Create partial_neighbors = (empty), a set of neighbors with
        partial connectivity to the node X's level from X's
        perspective.  Each entry in the set is a south neighbor of X
        and a list of nodes of X.level that can't reach that neighbor.

    2.  A node X determines its set of southbound neighbors
        X.south_neighbors.

    3.  For each South TIE originated from a node Y that X holds and
        that is at X.level, if Y.south_neighbors is not the same as
        X.south_neighbors but the nodes share at least one southern
        neighbor, for each neighbor N in X.south_neighbors but not in
        Y.south_neighbors, add (N, (Y)) to partial_neighbors if N
        isn't there or add Y to the list for N.

    4.  If partial_neighbors is empty, then node X does not
        disaggregate any prefixes.  If node X is advertising
        disaggregated prefixes in its South TIE, X SHOULD remove them
        and re-advertise its according South TIEs.

   A node X computes reachability to all nodes below it based upon the
   received North TIEs first.  This results in a set of routes, each
   categorized by (prefix, path_distance, next-hop-set).  Alternately,
   for clarity in the following procedure, these can be organized by
   next-hop-set as ( (next-hops), {(prefix, path_distance)}).  If
   partial_neighbors isn't empty, then the following procedure
   describes how to identify prefixes to disaggregate.

            disaggregated_prefixes = { empty }
            nodes_same_level = { empty }
            for each South TIE
              if (South TIE.level == X.level and
                  South TIE.originator shares at least one
                  S-neighbor with X)
                add South TIE.originator to nodes_same_level
                end if
              end for

            for each next-hop-set NHS
              isolated_nodes = nodes_same_level
              for each NH in NHS
                if NH in partial_neighbors
                  isolated_nodes = intersection(isolated_nodes,
                                                partial_neighbors[NH].nodes)
                  end if
                end for

              if isolated_nodes is not empty
                for each prefix using NHS
                  add (prefix, distance) to disaggregated_prefixes
                  end for
                end if
              end for

            copy disaggregated_prefixes to X's South TIE
            if X's South TIE is different
              schedule South TIE for flooding
              end if

             Figure 15: Computation of Disaggregated Prefixes
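
   The pseudocode in Figure 15 can be rendered as a runnable sketch.
   The data shapes (`routes` keyed by next-hop-set, `partial_neighbors`
   as in the algorithm above) are illustrative assumptions; note that,
   for correctness, a next-hop absent from partial_neighbors is treated
   here as reachable by every node, i.e. it empties the isolated set.

   ```python
   # Sketch of Figure 15: a prefix is disaggregated when some node at
   # the same level ("isolated") can reach none of its next-hops.
   def disaggregated_prefixes(nodes_same_level, partial_neighbors, routes):
       """routes: {frozenset(next_hops): {(prefix, path_distance), ...}}
       partial_neighbors: {next_hop: set of nodes that cannot reach it}."""
       result = set()
       for nhs, prefixes in routes.items():
           isolated = set(nodes_same_level)
           for nh in nhs:
               # fully reachable next-hops empty the isolated set
               isolated &= partial_neighbors.get(nh, set())
           if isolated:
               result |= prefixes  # (prefix, path_distance) to advertise
       return result

   prefixes = disaggregated_prefixes(
       nodes_same_level={"Y"},
       partial_neighbors={"L2": {"Y"}},
       routes={frozenset({"L2"}): {("2001:db8::/48", 2)},
               frozenset({"L1", "L2"}): {("::/0", 1)}})
   # only the prefix reachable solely via L2 is disaggregated
   ```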

   Each disaggregated prefix is sent with the according path_distance.
   This allows a node to send the same South TIE to each south
   neighbor.  The south neighbor which is connected to that prefix
   will thus have a shorter path.

   Finally, to summarize the less obvious points partially omitted in
   the algorithms to keep them more tractable:

   1.  all neighbor relationships MUST perform backlink checks.

    2.  overload bits as introduced in Section 4.3.1 have to be
        respected during the computation.

    3.  all the lower level nodes are flooded the same disaggregated
        prefixes since we don't want to build a South TIE per node and
        complicate things unnecessarily.  The lower level node that
        can compute a southbound route to the prefix will prefer it to
        the disaggregated route anyway based on route preference
        rules.

    4.  positively disaggregated prefixes do NOT have to propagate to
        lower levels.  With that the disturbance in terms of new
        flooding is contained to a single level experiencing failures.

    5.  disaggregated prefix South TIEs are not "reflected" by the
        lower level, i.e. nodes within the same level do NOT need to
        be aware which node computed the need for disaggregation.

    6.  The fabric is still supporting maximum load balancing
        properties while not trying to send traffic northbound unless
        necessary.

   In case positive disaggregation is triggered then, due to the very
   stable but un-synchronized nature of the algorithm, the nodes may
   issue the necessary disaggregated prefixes at different points in
   time.  This can lead for a short time to an "incast" behavior where
   the first advertising router, based on the nature of longest prefix
   match, will attract all the traffic.  An implementation MAY hence
   choose different strategies to address this behavior if needed.

   To close this section it is worth observing that in a single plane
   ToF this disaggregation prevents blackholing up to (K_LEAF * P)
   link failures in terms of Section 4.1.2 or, in other terms, it
   takes at minimum that many link failures to partition the ToF into
   multiple planes.

4.2.5.2.  Negative, Transitive Disaggregation for Fallen Leafs

   As explained in Section 4.1.3, failures in multi-plane
   Top-of-Fabric or more than (K_LEAF * P) links failing in single
   plane design can generate fallen leafs.  Such a scenario cannot be
   addressed by positive disaggregation only and needs a further
   mechanism.

4.2.5.2.1.  Cabling of Multiple Top-of-Fabric Planes

   Let us return in this section to designs with multiple planes as
   shown in Figure 3.  Figure 16 highlights how the ToF is cabled in
   case of two planes by the means of dual-rings to distribute all the
   North TIEs within both planes.  For people familiar with
   traditional link-state routing protocols the ToF level can be
   considered equivalent to area 0 in OSPF or level-2 in ISIS which
   need to be "connected" as well for the protocol to operate
   correctly.

             .   ++==========++          ++==========++
             .   II          II          II          II
             .+----++--+  +----++--+  +----++--+  +----++--+
             .|ToF   A1|  |ToF   B1|  |ToF   B2|  |ToF   A2|
             .++-+-++--+  ++-+-++--+  ++-+-++--+  ++-+-++--+
             .  | |  II      | |  II      | |  II      | |  II
             .  | |  ++==========++      | |  ++==========++
             .  | |         | |          | |         | |

             ~~~ Highlighted ToF of the previous multi-plane figure ~~~

                  Figure 16: Topologically connected planes

   As described in Section 4.1.3, failures in multi-plane fabrics can
   lead to blackholes which normal positive disaggregation cannot fix.
   The mechanism of negative, transitive disaggregation incorporated
   in RIFT provides the according solution.

4.2.5.2.2.  Transitive Advertisement of Negative Disaggregates

   A ToF node that discovers that it cannot reach a fallen leaf
   disaggregates all the prefixes of such leafs.  It uses for that
   purpose negative prefix South TIEs that are, as usual, flooded
   southwards with the scope defined in Section 4.2.3.4.

   Transitively, a node explicitly loses connectivity to a prefix when
   none of its children advertises it and when the prefix is
   negatively disaggregated by all of its parents.  When that happens,
   the node originates the negative prefix further down south.  Since
   the mechanism applies recursively south, the negative prefix may
   propagate transitively all the way down to the leaf.  This is
   necessary since leafs connected to multiple planes by means of
   disjoint paths may have to choose the correct plane already at the
   very bottom of the fabric to make sure that they don't send traffic
   towards another leaf using a plane where it is "fallen", at which
   point a blackhole is unavoidable.

   When the connectivity is restored, a node that disaggregated a
   prefix withdraws the negative disaggregation by the usual mechanism
   of re-advertising TIEs omitting the negative prefix.

4.2.5.2.3.  Computation of Negative Disaggregates

   The document omitted so far the description of the computation
   necessary to generate the correct set of negative prefixes.
   Negative prefixes can in fact be advertised due to two different
   triggers.  We describe them consecutively.

   The first origination reason is a computation that uses all the
   North TIEs to build the set of all reachable nodes by reachability
   computation over the complete graph, including ToF links.  The
   computation uses the node itself as root.  This is compared with
   the result of the normal southbound SPF as described in
   Section 4.2.4.2.  The difference are the fallen leafs; their
   attached prefixes are advertised as negative prefixes southbound if
   the node does not see the prefix being reachable within the
   southbound SPF.

   The second mechanism hinges on the understanding how the negative
   prefixes are used within the computation as described in Figure 17.
   When attaching the negative prefixes, at a certain point in time
   the negative prefix may find itself with all the viable nodes from
   the shorter match nexthop being pruned.  In other words, all its
   northbound neighbors provided a negative prefix advertisement.
   This is the trigger to advertise this negative prefix transitively
   south and is normally caused by the node being in a plane where the
   prefix belongs to a fabric leaf that has "fallen" in this plane.
   Obviously, when one of the northbound switches withdraws its
   negative advertisement, the node has to withdraw its transitively
   provided negative prefix as well.
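
   The first trigger above amounts to a set difference between two
   reachability computations.  A toy sketch follows; the graph, the
   node names and the adjacency-dict representation are hypothetical
   illustrations, not part of the protocol encoding.

   ```python
   # Sketch of the first negative-disaggregation trigger: nodes
   # reachable over the complete graph (incl. ToF-to-ToF ring links)
   # but absent from the southbound SPF are fallen; their attached
   # prefixes are advertised negatively southbound.
   from collections import deque

   def reachable(graph, root):
       """Simple BFS reachability over an adjacency dict."""
       seen, todo = {root}, deque([root])
       while todo:
           for nbr in graph.get(todo.popleft(), ()):
               if nbr not in seen:
                   seen.add(nbr)
                   todo.append(nbr)
       return seen

   # complete graph known via North TIEs plus ToF ring links
   full = {"ToF1": ["ToF2", "ToP1"], "ToF2": ["ToF1", "ToP2"],
           "ToP1": ["Leaf1"], "ToP2": ["Leaf2"]}
   # southbound-only view of ToF1 (its link towards ToP2 is down)
   south = {"ToF1": ["ToP1"], "ToP1": ["Leaf1"]}

   fallen = reachable(full, "ToF1") - reachable(south, "ToF1")
   # fallen == {"ToF2", "ToP2", "Leaf2"}
   ```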

4.2.6.  Attaching Prefixes

   After SPF is used run, it is necessary to attach the resulting
   reachability information in form of prefixes.  For S-SPF, prefixes
   from an North TIE are attached to the originating node with that
   node's next-hop set and a distance equal to the prefix's cost plus
   the node's minimized path distance.  The RIFT route database, a set
   of (prefix, prefix-type, attributes, path_distance, next-hop set),
   accumulates these results.

   In case of N-SPF, prefixes from each South TIE need to be added to
   the RIFT route database.  The N-SPF is really just a stub so the
   computing node needs simply to determine, for each prefix in a
   South TIE that originated from an adjacent node, what next-hops to
   use to reach that node.  Since there may be parallel links, the
   next-hops to use can be a set; presence of the computing node in
   the associated Node South TIE is sufficient to verify that at least
   one link has bidirectional connectivity.  The set of minimum cost
   next-hops from the computing node X to the originating adjacent
   node is determined.

   Each prefix has its cost adjusted before being added into the RIFT
   route database.  The cost of the prefix is set to the cost received
   plus the cost of the minimum distance next-hop to that neighbor
   while taking into account its attributes such as mobility per
   Section 4.3.3.  Then each prefix can be added into the RIFT route
   database with the next_hop_set; ties are broken based upon type
   first, then distance and further on `PrefixAttributes`, and only
   the best combination is used for forwarding.  RIFT route
   preferences are normalized by the according Thrift [thrift] model
   type.

   An example implementation for node X follows:

  for each South TIE
     if South TIE.level > X.level
         next_hop_set = set of minimum cost links to the South TIE.originator
        next_hop_cost = minimum cost link to South TIE.originator
        end if
      for each prefix P in the South TIE
        P.cost = P.cost + next_hop_cost
         if P not in route_database:
          add (P, P.cost, P.type, P.attributes, next_hop_set) to route_database
          end if
        if (P in route_database):
          if route_database[P].cost > P.cost or route_database[P].type > P.type:
            update route_database[P] with (P, P.type, P.cost, P.attributes, next_hop_set)
          else if route_database[P].cost == P.cost and route_database[P].type == P.type:
            update route_database[P] with (P, P.type, P.cost, P.attributes,
               merge(next_hop_set, route_database[P].next_hop_set))
          else
            // Not preferred route so ignore
            end if
          end if
        end for
     end for

       Figure 17: Adding Routes from South TIE Positive and Negative
                                 Prefixes

   After the positive prefixes are attached and tie-broken, negative
   prefixes are attached and used in case of northbound computation,
   ideally from the shortest length to the longest.  The nexthop
   adjacencies for a negative prefix are inherited from the longest
   positive prefix that aggregates it, and subsequently adjacencies to
   nodes that advertised negative for this prefix are removed.

   The rule of inheritance MUST be maintained when the nexthop list
   for a prefix is modified, as the modification may affect the
   entries for matching negative prefixes of immediately longer prefix
   length.  For instance, if a nexthop is added, then by inheritance
   it must be added to all the negative routes of immediately longer
   prefix length unless it is pruned due to a negative advertisement
   for the same next hop.  Similarly, if a nexthop is deleted for a
   given prefix, then it is deleted for all the immediately aggregated
   negative routes.  This will recurse in the case of nested negative
   prefix aggregations.

   The rule of inheritance must also be maintained when a new prefix of
   intermediate length is inserted, or when the immediately aggregating
   prefix is deleted from the routing table, making an even shorter
   aggregating prefix the one from which the negative routes now inherit
   their adjacencies.  As the aggregating prefix changes, all the
   negative routes must be recomputed, and then again the process may
   recurse in case of nested negative prefix aggregations.

   Although these operations can be computationally expensive, the
   overall load on devices in the network is low because these
   computations are not run very often, as positive route advertisements
   are always preferred over negative ones.  This prevents recursion in
   most cases because positive reachability information never inherits
   next hops.
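
   The inheritance and pruning rules above reduce, for a single level
   of aggregation, to a complement computation.  The following sketch
   uses hypothetical prefixes and next-hop names and hardcodes ::/0 as
   the aggregating positive prefix; a real implementation would walk
   prefixes from shortest to longest as described above.

   ```python
   # Sketch: next-hops for a negative prefix = next-hops inherited
   # from the aggregating positive prefix, minus the nodes that
   # advertised the prefix negatively.
   def negative_next_hops(positive_routes, negative_advertisers, prefix):
       """positive_routes: {prefix: set of next-hops}; here ::/0 is
       assumed to be the longest aggregating positive prefix.
       negative_advertisers: {prefix: nodes that sent a negative}."""
       inherited = positive_routes["::/0"]  # inherited adjacencies
       return inherited - negative_advertisers.get(prefix, set())

   fib_entry = negative_next_hops(
       positive_routes={"::/0": {"S1", "S2", "S3", "S4"}},
       negative_advertisers={"2001:db8::/32": {"S1"}},
       prefix="2001:db8::/32")
   # the default's next-hops with a "hole punched" for S1
   ```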

   To make the negative disaggregation less abstract and provide an
   example let us consider a ToP node T1 with 4 ToF parents S1..S4 as
   represented in Figure 18:

                    +----+    +----+    +----+    +----+          N
                    | S1 |    | S2 |    | S3 |    | S4 |          ^
                    +----+    +----+    +----+    +----+       W< + >E
                     |         |         |         |              v
                     |+--------+         |         |              S
                     ||+-----------------+         |
                     |||+--------------------------+
                     ||||
                    +----+
                    | T1 |
                    +----+

                   Figure 18: A ToP node with 4 parents

   If all ToF nodes can reach all the prefixes in the network, with
   RIFT they will normally advertise a default route south.  An
   abstract Routing Information Base (RIB), more commonly known as a
   routing table, stores all types of maintained routes including the
   negative ones and "tie-breaks" for the best one, whereas an abstract
   Forwarding table (FIB) retains only the ultimately computed
   "positive" routing instructions.  In T1, those tables would look as
   illustrated in Figure 19:

                                  +---------+
                                  | Default |
                                  +---------+
                                       |
                                       |     +--------+
                                       +---> | Via S1 |
                                       |     +--------+
                                       |
                                       |     +--------+
                                       +---> | Via S2 |
                                       |     +--------+
                                       |
                                       |     +--------+
                                       +---> | Via S3 |
                                       |     +--------+
                                       |
                                       |     +--------+
                                       +---> | Via S4 |
                                             +--------+

                           Figure 19: Abstract RIB

   In case T1 receives a negative advertisement for prefix 2001:db8::/32
   from S1 a negative route is stored in the RIB (indicated by a ~
   sign), while the more specific routes to the complementing ToF nodes
   are installed in FIB.  RIB and FIB in T1 now look as illustrated in
   Figure 20 and Figure 21, respectively:

            +---------+                 +-----------------+
            | Default | <-------------- | ~2001:db8::/32  |
            +---------+                 +-----------------+
                 |                               |
                 |     +--------+                |     +--------+
                 +---> | Via S1 |                +---> | Via S1 |
                 |     +--------+                      +--------+
                 |
                 |     +--------+
                 +---> | Via S2 |
                 |     +--------+
                 |
                 |     +--------+
                 +---> | Via S3 |
                 |     +--------+
                 |
                 |     +--------+
                 +---> | Via S4 |
                       +--------+

       Figure 20: Abstract RIB after negative 2001:db8::/32 from S1

   The negative 2001:db8::/32 prefix entry inherits from ::/0, so the
   positive more specific routes are the complements to S1 in the set
   of next-hops for the default route.  That entry is composed of S2,
   S3, and S4, or, in other words, it uses all entries of the default
   route with a "hole punched" for S1 into them.  These are the next
   hops that are still available to reach 2001:db8::/32, now that S1
   advertised that it will not forward 2001:db8::/32 anymore.
   Ultimately, those resulting next-hops are installed in FIB for the
   more specific route to 2001:db8::/32 as illustrated below:

           +---------+                  +---------------+
           | Default |                  | 2001:db8::/32 |
           +---------+                  +---------------+
                |                               |
                |     +--------+                |
                +---> | Via S1 |                |
                |     +--------+                |
                |                               |
                |     +--------+                |     +--------+
                +---> | Via S2 |                +---> | Via S2 |
                |     +--------+                |     +--------+
                |                               |
                |     +--------+                |     +--------+
                +---> | Via S3 |                +---> | Via S3 |
                |     +--------+                |     +--------+
                |                               |
                |     +--------+                |     +--------+
                +---> | Via S4 |                +---> | Via S4 |
                      +--------+                      +--------+

       Figure 21: Abstract FIB after negative 2001:db8::/32 from S1
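
   The "hole punching" that produced the FIB above can be sketched with
   the following non-normative Python illustration; the dictionary-based
   RIB model is an assumption of this sketch, not part of the protocol:

```python
# Non-normative sketch: a prefix inherits the computed next-hop set of
# its covering prefix, adds its own positive next-hops, and removes the
# next-hops of its negative advertisements ("hole punching").

def fib_next_hops(prefix, rib):
    """rib maps prefix -> {"parent": covering prefix or None,
    "positive": set of next-hops, "negative": set of next-hops}."""
    entry = rib[prefix]
    inherited = (set() if entry["parent"] is None
                 else fib_next_hops(entry["parent"], rib))
    return (inherited | entry["positive"]) - entry["negative"]

rib = {
    "::/0":          {"parent": None,
                      "positive": {"S1", "S2", "S3", "S4"},
                      "negative": set()},
    "2001:db8::/32": {"parent": "::/0",
                      "positive": set(),
                      "negative": {"S1"}},
}

# fib_next_hops("2001:db8::/32", rib) yields {"S2", "S3", "S4"},
# i.e. the default route with a hole punched for S1.
```

   Adding a further negative advertisement for 2001:db8:1::/48 from S2
   to this model reproduces the {S3, S4} next-hop set shown later in
   Figure 23.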

5.3.3.  Mobility

   It is a requirement for RIFT to maintain at the control plane a real
   time status of which prefix is attached to which port of which leaf,
   even in a context of mobility where the point of attachment may
   change several times in a subsecond period of time.

   There are two classical approaches to maintain such knowledge in an
   unambiguous fashion:

   time stamp:  With this method, the infrastructure records the precise
      time at which the movement is observed.  One key advantage of this
      technique is that it has no dependency on the mobile device.

   To illustrate matters further let us consider T1 receiving a negative
   advertisement for prefix 2001:db8:1::/48 from S2, which is stored in
   RIB again.  After the update, the RIB in T1 is illustrated in
   Figure 22:

 +---------+        +----------------+         +------------------+
 | Default | <----- | ~2001:db8::/32 | <------ | ~2001:db8:1::/48 |
 +---------+        +----------------+         +------------------+
      |                     |                           |
      |     +--------+      |     +--------+            |
      +---> | Via S1 |      +---> | Via S1 |            |
      |     +--------+            +--------+            |
      |                                                 |
      |     +--------+                                  |     +--------+
      +---> | Via S2 |                                  +---> | Via S2 |
      |     +--------+                                        +--------+
      |
      |     +--------+
      +---> | Via S3 |
       |     +--------+
      |
      |     +--------+
      +---> | Via S4 |
            +--------+

      Figure 22: Abstract RIB after negative 2001:db8:1::/48 from S2

      One drawback is that the infrastructure must be precisely
      synchronized to be able to compare time stamps as observed by the
      various points of attachment, e.g., using the variation of the
      Precision Time Protocol (PTP) IEEE Std. 1588 [IEEEstd1588]
      designed for bridged LANs IEEE Std. 802.1AS [IEEEstd8021AS].  Both
      the precision of the synchronisation protocol and the resolution
      of the time stamp must beat the highest possible roaming time on
      the fabric.  Another drawback is that the presence of the mobile
      device may be observed only asynchronously, e.g., after it starts
      using an IP protocol such as ARP [RFC0826], IPv6 Neighbor
      Discovery [RFC4861][RFC4862], or DHCP [RFC2131][RFC8415].

   sequence counter:  With this method, a mobile node notifies its point
      of attachment on arrival with a sequence counter that is
      incremented upon each movement.

   Negative 2001:db8:1::/48 inherits from 2001:db8::/32 now, so the
   positive more specific routes are the complements to S2 in the set of
   next hops for 2001:db8::/32, which are S3 and S4, or, in other words,
   all entries of the parent with the negative holes "punched in" again.
   After the update, the FIB in T1 shows as illustrated in Figure 23:

 +---------+         +---------------+         +-----------------+
 | Default |         | 2001:db8::/32 |         | 2001:db8:1::/48 |
 +---------+         +---------------+         +-----------------+
      |                     |                           |
      |     +--------+      |                           |
      +---> | Via S1 |      |                           |
      |     +--------+      |                           |
      |                     |                           |
      |     +--------+      |     +--------+            |
      +---> | Via S2 |      +---> | Via S2 |            |
      |     +--------+      |     +--------+            |
      |                     |                           |
      |     +--------+      |     +--------+            |     +--------+
      +---> | Via S3 |      +---> | Via S3 |            +---> | Via S3 |
      |     +--------+      |     +--------+            |     +--------+
      |                     |                           |
      |     +--------+      |     +--------+            |     +--------+
      +---> | Via S4 |      +---> | Via S4 |            +---> | Via S4 |
            +--------+            +--------+                  +--------+

      Figure 23: Abstract FIB after negative 2001:db8:1::/48 from S2

      On the positive side, this method does not have a dependency on a
      precise sense of time, since the sequence of movements is kept in
      order by the device.  The disadvantage of this approach is the
      lack of support for protocols that may be used by the mobile node
      to register its presence to the leaf node with the capability to
      provide a sequence counter.  Well-known issues with wrapping
      sequence counters must be addressed properly, and many forms of
      sequence counters that vary in both wrapping rules and comparison
      rules exist.  A particular knowledge of the source of the sequence
      counter is required to operate it, and the comparison between
      sequence counters from heterogeneous sources can be hard to
      impossible.

   RIFT supports a hybrid approach contained in an optional
   `PrefixSequenceType` prefix attribute that we call a `monotonic
   clock` consisting of a timestamp and optional sequence number.  In
   case of presence of the attribute:

   o  The leaf node MAY advertise a time stamp of the latest sighting of
      a prefix, e.g., by snooping IP protocols or the node using the
      time at which it advertised the prefix.  RIFT transports the time
      stamp within the desired prefix N-TIEs as 802.1AS timestamp.

   Further, let us say that S3 stops advertising its service as default
   gateway.  The entry is removed from RIB as usual.  In order to update
   the FIB, it is necessary to eliminate the FIB entry for the default
   route, as well as all the FIB entries that were created for negative
   routes pointing to the RIB entry being removed (::/0).  This is done
   recursively for 2001:db8::/32 and then for 2001:db8:1::/48.  The
   related FIB entries via S3 are removed, as illustrated in Figure 24.

 +---------+         +---------------+         +-----------------+
 | Default |         | 2001:db8::/32 |         | 2001:db8:1::/48 |
 +---------+         +---------------+         +-----------------+
      |                     |                           |
      |     +--------+      |                           |
      +---> | Via S1 |      |                           |
      |     +--------+      |                           |
      |                     |                           |
      |     +--------+      |     +--------+            |
      +---> | Via S2 |      +---> | Via S2 |            |
      |     +--------+      |     +--------+            |
      |                     |                           |
      |                     |                           |
      |                     |                           |
      |                     |                           |
      |                     |                           |
      |     +--------+      |     +--------+            |     +--------+
      +---> | Via S4 |      +---> | Via S4 |            +---> | Via S4 |
            +--------+            +--------+                  +--------+

                  Figure 24: Abstract FIB after loss of S3

   Say that at that time, S4 would also disaggregate prefix
   2001:db8:1::/48.  This would mean that the FIB entry for
   2001:db8:1::/48 becomes a discard route, and that would be the signal
   for T1 to disaggregate prefix 2001:db8:1::/48 negatively in a
   transitive fashion with its own children.

   o  RIFT may interoperate with the "update to 6LoWPAN Neighbor
      Discovery" [RFC8505], which provides a method for registering a
      prefix with a sequence counter called a Transaction ID (TID).
      RIFT transports in such case the TID in its native form.

   o  RIFT also defines an abstract negative clock (ASNC) that compares
      as less than any other clock.  By default, the lack of a
      `PrefixSequenceType` in a Prefix N-TIE is interpreted as ASNC.  We
      call this also an `undefined` clock.

   o  Any prefix present on the fabric in multiple nodes that has the
      `same` clock is considered as anycast.  ASNC is always considered
      smaller than any defined clock.

   o  RIFT implementation assumes by default that all nodes are being
      synchronized to 200 milliseconds precision which is easily
      achievable even in very large fabrics using [RFC5905].  An
      implementation MAY provide a way to reconfigure a domain to a
      different value.  We call this variable MAXIMUM_CLOCK_DELTA.

5.3.3.1.  Clock Comparison

   All monotonic clock values are comparable to each other using the
   following rules:

   1.  ASNC is older than any other value except ASNC AND

   2.  Clocks with timestamps differing by more than MAXIMUM_CLOCK_DELTA
       are comparable by using the timestamps only AND

   3.  Clocks with timestamps differing by less than MAXIMUM_CLOCK_DELTA
       are comparable by using their TIDs only AND

   4.  An undefined TID is always older than any other TID AND

   5.  TIDs are compared using rules of [RFC8505].

5.3.3.2.  Interaction between Time Stamps and Sequence Counters

   For slow movements that occur less frequently than e.g. once per
   second, the time stamp that the RIFT infrastructure captures is
   enough to determine the freshest discovery.

   Finally, let us look at the case where S3 becomes available again as
   a default gateway, and a negative advertisement is received from S4
   about prefix 2001:db8:2::/48 as opposed to 2001:db8:1::/48.  Again, a
   negative route is stored in the RIB, and the more specific route to
   the complementing ToF nodes is installed in FIB.  Since
   2001:db8:2::/48 inherits from 2001:db8::/32, the positive FIB routes
   are chosen by removing S4 from S2, S3, S4.  The abstract FIB in T1
   now shows as illustrated in Figure 25:

                                                +-----------------+
                                                | 2001:db8:2::/48 |
                                                +-----------------+
                                                        |
  +---------+       +---------------+    +-----------------+
  | Default |       | 2001:db8::/32 |    | 2001:db8:1::/48 |
  +---------+       +---------------+    +-----------------+
       |                    |                    |      |
       |     +--------+     |                    |      |     +--------+
       +---> | Via S1 |     |                    |      +---> | Via S2 |
       |     +--------+     |                    |      |     +--------+
       |                    |                    |      |
       |     +--------+     |     +--------+     |      |     +--------+
       +---> | Via S2 |     +---> | Via S2 |     |      +---> | Via S3 |
       |     +--------+     |     +--------+     |            +--------+
       |                    |                    |
       |     +--------+     |     +--------+     |     +--------+
       +---> | Via S3 |     +---> | Via S3 |     +---> | Via S3 |
       |     +--------+     |     +--------+     |     +--------+
       |                    |                    |
       |     +--------+     |     +--------+     |     +--------+
       +---> | Via S4 |     +---> | Via S4 |     +---> | Via S4 |
             +--------+           +--------+           +--------+

      Figure 25: Abstract FIB after negative 2001:db8:2::/48 from S4
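
   The clock comparison rules of Section 5.3.3.1 can be sketched with
   the following non-normative Python illustration; the representation
   of the monotonic clock as a (timestamp, TID) pair with `None`
   modelling ASNC and the undefined TID is an assumption of this sketch:

```python
from dataclasses import dataclass
from typing import Optional

MAXIMUM_CLOCK_DELTA = 0.2  # seconds, the default 200 milliseconds

@dataclass
class MonotonicClock:
    timestamp: Optional[float]  # None models ASNC, the `undefined` clock
    tid: Optional[int] = None   # optional sequence counter (TID)

def is_older(a: MonotonicClock, b: MonotonicClock) -> bool:
    """True if clock a is older (less fresh) than clock b."""
    # Rule 1: ASNC is older than any other value except ASNC.
    if a.timestamp is None:
        return b.timestamp is not None
    if b.timestamp is None:
        return False
    # Rule 2: timestamps further apart than MAXIMUM_CLOCK_DELTA are
    # compared by timestamp only.
    if abs(a.timestamp - b.timestamp) > MAXIMUM_CLOCK_DELTA:
        return a.timestamp < b.timestamp
    # Rule 4: an undefined TID is older than any defined TID.
    if a.tid is None:
        return b.tid is not None
    if b.tid is None:
        return False
    # Rule 3: within MAXIMUM_CLOCK_DELTA, compare TIDs (the wrapping
    # comparison rules of [RFC8505] are omitted in this sketch).
    return a.tid < b.tid
```

   The sketch deliberately ignores the sequence counter wrap-around of
   [RFC8505]; a real implementation has to apply its lollipop-style
   comparison instead of plain integer comparison of TIDs.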

   If the point of attachment changes faster than the maximum drift of
   the time stamping mechanism (i.e.  MAXIMUM_CLOCK_DELTA), then a
   sequence counter is required to add resolution to the freshness
   evaluation, and it must be sized so that the counters stay comparable
   within the resolution of the time stamping mechanism.

   The sequence counter in [RFC8505] is encoded as one octet and wraps
   around using Appendix A.

   Within the resolution of MAXIMUM_CLOCK_DELTA the sequence counters
   captured during 2 sequential values of the time stamp SHOULD be
   comparable.  This means with default values that a node may move up
   to 127 times during a 200 milliseconds period and the clocks remain
   still comparable thus allowing the infrastructure to assert the
   freshest advertisement with no ambiguity.

4.2.7.  Optional Zero Touch Provisioning (ZTP)

   Each RIFT node can operate in zero touch provisioning (ZTP) mode,
   i.e. it has no configuration (unless it is a Top-of-Fabric at the top
   of the topology or it must operate in the topology as leaf and/or
   support leaf-2-leaf procedures) and it will fully configure itself
   after being attached to the topology.  Configured nodes and nodes
   operating in ZTP can be mixed and will form a valid topology if
   achievable.

   The derivation of the level of each node happens based on offers
   received from its neighbors whereas each node (with possibly
   exceptions of configured leafs) tries to attach at the highest
   possible point in the fabric.  This guarantees that even if the
   diffusion front reaches a node from "below" faster than from "above",
   it will greedily abandon already negotiated level derived from nodes
   topologically below it and properly peers with nodes above.

   The fabric is very consciously numbered from the top to allow for
   PoDs of different heights and minimize number of provisioning
   necessary, in this case just a TOP_OF_FABRIC flag on every node at
   the top of the fabric.

   This section describes the necessary concepts and procedures for ZTP
   operation.

5.3.3.3.  Anycast vs. Unicast

   A unicast prefix can be attached to at most one leaf, whereas an
   anycast prefix may be reachable via more than one leaf.

   If a monotonic clock attribute is provided on the prefix, then the
   prefix with the `newest` clock value is strictly preferred.  An
   anycast prefix does not carry a clock or all clock attributes MUST be
   the same under the rules of Section 5.3.3.1.

   Observe that it is important that in mobility events the leaf is re-
   flooding as quickly as possible the absence of the prefix that moved
   away.

   Observe further that without support for [RFC8505] movements on the
   fabric within intervals smaller than 100msec will be seen as anycast.

4.2.7.1.  Terminology

   The interdependencies between the different flags and the configured
   level can be somewhat vexing at first and it may take multiple reads
   of the glossary to comprehend them.

   Automatic Level Derivation:  Procedures which allow nodes without
      level configured to derive it automatically.  Only applied if
      CONFIGURED_LEVEL is undefined.

   UNDEFINED_LEVEL:  A "null" value that indicates that the level has
      not been determined and has not been configured.  Schemas normally
      indicate that by a missing optional value without an available
      defined default.

   LEAF_ONLY:  An optional configuration flag that can be configured on
      a node to make sure it never leaves the "bottom of the hierarchy".
      TOP_OF_FABRIC flag and CONFIGURED_LEVEL cannot be defined at the
      same time as this flag.  It implies CONFIGURED_LEVEL value of 0.

   TOP_OF_FABRIC flag:  Configuration flag that MUST be provided to all
      Top-of-Fabric nodes.  LEAF_FLAG and CONFIGURED_LEVEL cannot be
      defined at the same time as this flag.  It implies a
      CONFIGURED_LEVEL value.  In fact, it is basically a shortcut for
      configuring same level at all Top-of-Fabric nodes which is
      unavoidable since an initial 'seed' is needed for other ZTP nodes
      to derive their level in the topology.  The flag plays an
      important role in fabrics with multiple planes to enable
      successful negative disaggregation (Section 4.2.5.2).

   CONFIGURED_LEVEL:  A level value provided manually.  When this is
      defined (i.e. it is not an UNDEFINED_LEVEL) the node is not
      participating in ZTP.  TOP_OF_FABRIC flag is ignored when this
      value is defined.  LEAF_ONLY can be set only if this value is
      undefined or set to 0.

   DERIVED_LEVEL:  Level value computed via automatic level derivation
      when CONFIGURED_LEVEL is equal to UNDEFINED_LEVEL.

   LEAF_2_LEAF:  An optional flag that can be configured on a node to
      make sure it supports procedures defined in Section 4.3.8.  In a
      strict sense it is a capability that implies LEAF_ONLY and the
      according restrictions.  TOP_OF_FABRIC flag is ignored when set at
      the same time as this flag.

   LEVEL_VALUE:  In ZTP case the original definition of "level" in
      Section 3.1 is both extended and relaxed.  First, level is defined
      now as LEVEL_VALUE and is the first defined value of
      CONFIGURED_LEVEL followed by DERIVED_LEVEL.  Second, it is
      possible for nodes more than one level apart to form adjacencies
      if any of the nodes is at least LEAF_ONLY.

5.3.3.4.  Overlays and Signaling

   RIFT is agnostic whether any overlay technology like [MIP, LISP,
   VxLAN, NVO3] and the associated signaling is deployed over it.  But
   it is expected that leaf nodes, and possibly Top-of-Fabric nodes, can
   perform the correct encapsulation.

   In the context of mobility, overlays provide a classical solution to
   avoid injecting mobile prefixes in the fabric and improve the
   scalability of the solution.  It makes sense on a data center that
   already uses overlays to consider their applicability to the mobility
   solution; as an example, a mobility protocol such as LISP may inform
   the ingress leaf of the location of the egress leaf in real time.

   Another possibility is to consider mobility as an underlay service
   and support it in RIFT to an extent.  The load on the fabric augments
   with the amount of mobility obviously since a move forces flooding
   and computation on all nodes in the scope of the move so tunneling
   from leaf to the Top-of-Fabric may be desired.  Future versions of
   this document may describe support for such tunneling in RIFT.

5.3.4.  Key/Value Store

5.3.4.1.  Southbound

   The protocol supports a southbound distribution of key-value pairs
   that can be used to e.g. distribute configuration information during
   topology bring-up.  The KV S-TIEs can arrive from multiple nodes and
   hence need tie-breaking per key.  We use the following rules:

   1.  Only KV TIEs originated by nodes to which the receiver has a bi-
       directional adjacency are considered.

   2.  Within all such valid KV S-TIEs containing the key, the value of
       the KV S-TIE for which the according node S-TIE is present, has
       the highest level and within the same level has highest
       originating system ID is preferred.  If keys in the most
       preferred TIEs are overlapping, the behavior is undefined.
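
   The southbound KV tie-breaking above can be sketched with the
   following non-normative Python illustration; the tuple representation
   of a candidate KV S-TIE as (originator level, originator system ID,
   key/value payload) is an assumption of this sketch:

```python
# Non-normative sketch of per-key KV S-TIE tie-breaking: the candidate
# from the highest level wins, and within the same level the highest
# originating system ID wins.  Only candidates from bi-directionally
# reachable originators are assumed to be passed in.

def preferred_kv(candidates):
    # max() over the (level, system_id) tuple implements both rules.
    _, _, kv = max(candidates, key=lambda c: (c[0], c[1]))
    return kv

# Example: the level-3 originator with the highest system ID wins.
winner = preferred_kv([(2, 5, {"k": "a"}),
                       (3, 1, {"k": "b"}),
                       (3, 9, {"k": "c"})])
# winner == {"k": "c"}
```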

   Valid Offered Level (VOL):  A neighbor's level received on a valid
      LIE (i.e. passing all checks for adjacency formation while
      disregarding all clauses involving level values) persisting for
      the duration of the holdtime interval on the LIE.  Observe that
      offers from nodes offering level value of 0 do not constitute VOLs
      (since no valid DERIVED_LEVEL can be obtained from those and
      consequently `not_a_ztp_offer` MUST be ignored).  Offers from LIEs
      with `not_a_ztp_offer` being true are not VOLs either.  If a node
      maintains parallel adjacencies to the neighbor, VOL on each
      adjacency is considered as equivalent, i.e. the newest VOL from
      any such adjacency updates the VOL received from the same node.

   Highest Available Level (HAL):  Highest defined level value seen from
      all VOLs received.

   Highest Available Level Systems (HALS):  Set of nodes offering HAL
      VOLs.

   Highest Adjacency 3-way (HAT):  Highest neighbor level of all the
      formed 3-way adjacencies for the node.
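
   The value derivation implied by these definitions (HAL as the highest
   VOL, DERIVED_LEVEL as MAX(HAL-1,0) per Section 4.2.7.4) can be
   sketched with the following non-normative Python illustration; the
   (level, not_a_ztp_offer) tuple representation of an offer is an
   assumption of this sketch, and holddown handling is omitted:

```python
# Non-normative sketch of HAL and DERIVED_LEVEL computation.

def valid_offered_levels(offers):
    # Offers with level 0 or flagged `not_a_ztp_offer` are not VOLs;
    # level None models an omitted (UNDEFINED_LEVEL) value.
    return [level for (level, not_a_ztp) in offers
            if level is not None and level > 0 and not not_a_ztp]

def derived_level(offers):
    vols = valid_offered_levels(offers)
    if not vols:
        return None              # node remains at UNDEFINED_LEVEL
    hal = max(vols)              # Highest Available Level
    return max(hal - 1, 0)       # DERIVED_LEVEL = MAX(HAL-1,0)

# A node hearing offers 24 and 23, a level-0 offer, and an offer
# flagged not_a_ztp_offer derives level 23.
```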

4.2.7.2.  Automatic SystemID Selection

   RIFT nodes require a 64 bit SystemID which SHOULD be derived as
   EUI-64 MA-L derive according to [EUI64].  The organizationally
   governed portion of this ID (24 bits) can be used to generate
   multiple IDs if required, e.g., to indicate more than one RIFT
   instance.

   As a matter of operational concern, the router MUST ensure that such
   identifier is not changing very frequently (or at least not without
   sending all its TIEs with fairly short lifetimes) since otherwise the
   network may be left with large amounts of stale TIEs in other nodes
   (though this is not necessarily a serious problem if the procedures
   described in Section 7 are implemented).

   Observe that if a node goes down, the node south of it loses
   adjacencies to it and with that the KVs will be disregarded and on
   tie-break changes new KV re-advertised to prevent stale information
   being used by nodes further south.  KV information in southbound
   direction is not result of independent computation of every node over
   same set of TIEs but a diffused computation.
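
   The SystemID derivation of Section 4.2.7.2 can be sketched with the
   following non-normative Python illustration; the insertion of the
   0xFFFE bytes between the 24 bit organizationally governed portion
   (OUI) and the extension identifier follows the common MAC-48 to
   EUI-64 mapping, and the `instance` parameter is a hypothetical way to
   mint multiple IDs for multiple RIFT instances:

```python
# Non-normative sketch: derive a 64 bit SystemID from a 48 bit MAC
# address in the spirit of [EUI64].

def system_id(mac: str, instance: int = 0) -> int:
    octets = bytes(int(part, 16) for part in mac.split(":"))
    # Insert FF-FE between OUI and extension identifier.
    eui64 = octets[:3] + bytes([0xFF, 0xFE]) + octets[3:]
    # Hypothetical instance numbering in the organizationally
    # governed portion of the identifier.
    return int.from_bytes(eui64, "big") + (instance << 56)

# system_id("00:11:22:33:44:55") == 0x001122FFFE334455
```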

5.3.4.2.  Northbound

   Certain use cases seem to necessitate distribution of essentially KV
   information that is generated in the leafs in the northbound
   direction.  Such information is flooded in KV N-TIEs.  Since the
   originator of northbound KV is preserved during northbound flooding,
   overlapping keys could be used.  However, to omit further protocol
   complexity, only the value of the key in the TIE tie-broken in the
   same fashion as southbound KV TIEs is used.

4.2.7.3.  Generic Fabric Example

   ZTP forces us to think about miscabled or unusually cabled fabric and
   how such a topology can be forced into a "lattice" structure which a
   fabric represents (with further restrictions).  Let us consider a
   necessary and sufficient physical cabling in Figure 26.  We assume
   all nodes being in the same PoD.

5.3.5.  Interactions with BFD

   RIFT MAY incorporate BFD [RFC5881] to react quickly to link failures.
   In such case following procedures are introduced:

      After RIFT three way hello adjacency convergence a BFD session MAY
      be formed automatically between the RIFT endpoints without further
      configuration using the exchanged discriminators.  The capability
      of the remote side to support BFD is carried on the LIEs.

      In case established BFD session goes Down after it was Up, RIFT
      adjacency should be re-initialized and started again from Init.

      In case of parallel links between nodes each link may run its own
      independent BFD session or they may share a session.

      In case RIFT changes link identifiers or BFD capability indication
      both the LIE as well as the BFD sessions SHOULD be brought down
      and back up again.

      Multiple RIFT instances MAY choose to share a single BFD session
      (in such case it is undefined what discriminators are used albeit
      RIFT CAN advertise the same link ID for the same interface in
      multiple instances and with that "share" the discriminators).

      BFD TTL follows [RFC5082].

5.3.6.  Fabric Bandwidth Balancing

   A well understood problem in fabrics is that in case of link losses
   it would be ideal to rebalance how much traffic is offered to
   switches in the next level based on the ingress and egress bandwidth
   they have.  Current attempts rely mostly on specialized traffic
   engineering via controller or leafs being aware of complete topology
   with according cost and complexity.

   RIFT can support a very light weight mechanism that can deal with the
   problem in an approximate way based on the fact that RIFT is loop-
   free.

5.3.6.1.  Northbound Direction

   Every RIFT node SHOULD compute the amount of northbound bandwidth
   available through neighbors at higher level and modify distance
   received on default route from this neighbor.  Those different
   distances SHOULD be used to support weighted ECMP forwarding towards
   higher level when using default route.  We call such a distance
   Bandwidth Adjusted Distance or BAD.  This is best illustrated by a
   simple example.

           .   100  x             100 100 MBits
           .    |x  |              ||   ||
           .  +-+---+-+          +-+---+-+
           .  |Spin111|          |Spin112|
           .  +-+---+++          ++----+++
           .    |x  ||            ||   ||
           .    ||  |+---------------+ ||
           .    ||  +---------------+| ||
           .    ||               || || ||
           .   -----All Links 10 MBit-------
           .    ||               || || ||
           .    ||  +------------+| || ||
           .    ||  |+------------+ || ||
           .    |x  ||              || ||
           .  +-+---+++          +--++-+++
           .  |Leaf111|          |Leaf112|
           .  +-------+          +-------+

                      Figure 29: Balancing Bandwidth

   All links from Leafs in Figure 29 are assumed to be 10 MBit/s bandwidth
   while the uplinks one level further up are assumed to be 100 MBit/s.
   Further, in Figure 29 we assume that Leaf111 lost one of the parallel
   links to Spine 111 and with that wants to possibly push more traffic
   onto Spine 112.  Leaf 112 has equal bandwidth to Spine 111 and Spine
   112 but Spine 111 lost one of its uplinks.

   The local modification of the received default route distance from
   upper level is achieved by running a relatively simple algorithm
   where the bandwidth is weighted exponentially while the distance on
   the default route represents a multiplier for the bandwidth weight
   for easy operational adjustments.

   On a node L use Node TIEs to compute for each non-overloaded
   northbound neighbor N three values:

      L_N_u: as sum of the bandwidth available to N

      N_u: as sum of the uplink bandwidth available on N

      T_N_u: as sum of L_N_u * OVERSUBSCRIPTION_CONSTANT + N_u

   For all T_N_u determine the according M_N_u as
   log_2(next_power_2(T_N_u)) and determine MAX_M_N_u as maximum value
   of all M_N_u.

   For each advertised default route from a node N modify the advertised
   distance D to BAD = D * (1 + MAX_M_N_u - M_N_u) and use BAD instead
   of distance D to weight balance default forwarding towards N.

   For the example above a simple table of values will help the
   understanding.  We assume the default route distance is advertised
   with D=1 everywhere and OVERSUBSCRIPTION_CONSTANT = 1.

                +---------+-----------+-------+-------+-----+
                | Node    | N         | T_N_u | M_N_u | BAD |
                +---------+-----------+-------+-------+-----+
                | Leaf111 | Spine 111 | 110   | 7     | 2   |
                +---------+-----------+-------+-------+-----+
                | Leaf111 | Spine 112 | 220   | 8     | 1   |
                +---------+-----------+-------+-------+-----+
                | Leaf112 | Spine 111 | 120   | 7     | 2   |
                +---------+-----------+-------+-------+-----+
                | Leaf112 | Spine 112 | 220   | 8     | 1   |
                +---------+-----------+-------+-------+-----+

                          Table 5: BAD Computation
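
   The computation above can be reproduced with the following
   non-normative Python sketch; the bandwidth figures are taken from the
   example of Figure 29, and the function and variable names are
   assumptions of this illustration:

```python
import math

# Non-normative sketch of the BAD computation of Section 5.3.6.1.

OVERSUBSCRIPTION_CONSTANT = 1

def next_power_2(x):
    return 1 << math.ceil(math.log2(x))

def bad(d, links):
    """links: neighbor -> (L_N_u, N_u), i.e. the local bandwidth
    towards N and the uplink bandwidth available on N; d is the
    advertised default route distance."""
    m = {n: int(math.log2(next_power_2(
             l * OVERSUBSCRIPTION_CONSTANT + u)))
         for n, (l, u) in links.items()}
    max_m = max(m.values())                  # MAX_M_N_u
    return {n: d * (1 + max_m - m_n) for n, m_n in m.items()}

# Leaf111: 10 MBit left towards Spine 111 (100 MBit of uplink
# remaining there), 20 MBit towards Spine 112 (200 MBit of uplinks).
leaf111 = bad(1, {"Spine 111": (10, 100), "Spine 112": (20, 200)})
# leaf111 == {"Spine 111": 2, "Spine 112": 1}, matching Table 5.
```

   Running the same function with Leaf112's figures (20 MBit towards
   each spine) yields the remaining two rows of Table 5.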

   If a calculation produces a result exceeding the range of the type,
   e.g. bandwidth, the result is set to the highest possible value for
   that type.

   BAD is only computed for default routes.  A node MAY compute and use
   BAD for any disaggregated prefixes or other RIFT routes.  A node MAY
   use another algorithm than BAD to weight northbound traffic based on
   bandwidth given that the algorithm is distributed and un-synchronized
   and ultimately, its correct behavior does not depend on uniformity of
   balancing algorithms used in the fabric.  E.g. it is conceivable that
   leafs could use real time link loads gathered by analytics to change
   the amount of traffic assigned to each default route next hop.

   Observe further that a change in available bandwidth will only affect
   at maximum two levels down the fabric, i.e. blast radius of bandwidth
   changes is contained no matter its height.

           .          +---+
           .          | A |                      s   = TOP_OF_FABRIC
           .          | s |                      l   = LEAF_ONLY
           .          ++-++                      l2l = LEAF_2_LEAF
           .           | |
           .        +--+ +--+
           .        |       |
           .     +--++     ++--+
           .     | E |     | F |
           .     |   +-+   |   +-----------+
           .     ++--+ |   ++-++           |
           .      |    |    | |            |
           .      | +-------+ |            |
           .      | |  |      |            |
           .      | |  +----+ |            |
           .      | |       | |            |
           .     ++-++     ++-++           |
           .     | I +-----+ J |           |
           .     |   |     |   +-+         |
           .     ++-++     +--++ |         |
           .      | |         |  |         |
           .      | |         |  +---------+
           .      | |         |  |
           .     ++-++     ++-++ |
           .     | X +-----+ Y +-+
           .     |l2l|     | l |
           .     +---+     +---+

                Figure 26: Generic ZTP Cabling Considerations

   First, we must anchor the "top" of the cabling and that's what the
   TOP_OF_FABRIC flag at node A is for.  Then things look smooth until
   we have to decide whether node Y is at the same level as I, J (and as
   consequence, X is south of it) or at the same level as X.  This is
   unresolvable here until we "nail down the bottom" of the topology.
   To achieve that we choose to use in this example the leaf flags in X
   and Y.  In case where Y would not have a leaf flag it will try to
   elect highest level offered and end up being in same level as I and
   J.

5.3.6.2.  Southbound Direction

   Due to its loop free properties a node CAN take during S-SPF into
   account the available bandwidth on the nodes in lower levels and
   modify the amount of traffic offered to next level's "southbound"
   nodes based on what it sees is the total achievable maximum flow
   through those nodes.

   It is worth observing that such computations may work better if
   standardized but do not have to be necessarily.  As long the packet
   keeps on heading south it will take one of the available paths and
   arrive at the intended destination.

4.2.7.4.  Level Determination Procedure

   A node starting up with UNDEFINED_VALUE (i.e. without a
   CONFIGURED_LEVEL or any leaf or TOP_OF_FABRIC flag) MUST follow those
   additional procedures:

   1.  It advertises its LEVEL_VALUE on all LIEs (observe that this can
       be UNDEFINED_LEVEL which in terms of the schema is simply an
       omitted optional value).

   2.  It computes HAL as numerically highest available level in all
       VOLs.

   3.  It chooses MAX(HAL-1,0) as its DERIVED_LEVEL.  The node then
       starts to advertise this derived level.

   4.  A node that lost all adjacencies with HAL value MUST hold down
       computation of new DERIVED_LEVEL for a short period of time
       unless it has no VOLs from southbound adjacencies.  After the
       holddown expired, it MUST discard all received offers, recompute
       DERIVED_LEVEL and announce it to all neighbors.

   5.  A node MUST reset any adjacency that has changed the level it is
       offering and is in 3-way state.

   6.  A node that changed its defined level value MUST readvertise its
       own TIEs (since the new `PacketHeader` will contain a different
       level than before).

5.3.7.  Label Binding

   A node MAY advertise on its TIEs a locally significant, downstream
   assigned label for the according interface.  One use of such label is
   a hop-by-hop encapsulation allowing to easily distinguish forwarding
   planes served by a multiplicity of RIFT instances.

5.3.8.  Segment Routing Support with RIFT

   Recently, alternative architecture to reuse labels as segment
   identifiers [RFC8402] has gained traction and may present use cases
   in IP fabric that would justify its deployment.  Such use cases will
   either precondition an assignment of a label per node (or other
   entities where the mechanisms are equivalent) or a global assignment
   and a knowledge of topology everywhere to compute segment stacks of
   interest.  We deal with the two issues separately.

5.3.8.1.  Global Segment Identifiers Assignment

   Global segment identifiers are normally assumed to be provided by
   some kind different
       level than before).  Sequence number of a centralized "controller" instance and distributed to
   other entities.  This can each TIE MUST be performed in RIFT by attaching
       increased.

   7.  After a
   controller to the Top-of-Fabric nodes at the top of the fabric where level has been derived the whole topology is always visible, assign such identifiers and
   then distribute those via node MUST set the KV mechanism
       `not_a_ztp_offer` on LIEs towards all nodes so they
   can perform things like probing the fabric for failures using systems offering a stack
   of segments.

5.3.8.2.  Distribution of Topology Information

   Some segment routing use cases seem to precondition full knowledge VOL for
       HAL.

   8.  A node that changed its level SHOULD flush from its link state
       database TIEs of
   fabric topology in all other nodes, otherwise stale information may
       persist on "direction reversal", i.e.  nodes which can be performed albeit at that seemed south
       are now north or east-west.  This will not prevent the
   loss of one correct
       operation of highly desirable properties of RIFT, namely minimal
   blast radius.  Basically, RIFT can function as the protocol but could be slightly confusing
       operationally.

   A node starting with LEVEL_VALUE being 0 (i.e. it assumes a flat IGP leaf
   function by
   switching off its flooding scopes.  All nodes will end up being configured with full
   topology view and albeit the N-SPF and S-SPF are still performed
   based on RIFT rules, any computation with segment identifiers that
   needs full topology can use it.

   Beside blast radius problem, excessive flooding may present
   significant load on implementations.

5.3.9.  Leaf to Leaf Procedures

   RIFT can optionally allow special leaf East-West adjacencies under
   additional set appropriate flags or has a
   CONFIGURED_LEVEL of rules.  The leaf supporting 0) MUST follow those additional procedures:

   1.  It computes HAT per procedures MUST:

      advertise the LEAF_2_LEAF flag in node capabilities AND

      set the overload bit on all leaf's node TIEs AND

      flood only node's own north and south TIEs over E-W leaf
      adjacencies AND

      always above but does NOT use E-W leaf adjacency in both north as well as south
      computation AND

      install a discard route for any advertised aggregate in leaf's
      TIEs AND

      never form southbound adjacencies.

   This will allow the E-W leaf nodes it to exchange traffic strictly for
   the prefixes advertised in each other's north prefix TIEs (since the
   southbound computation will find the reverse direction in the other
   node's TIE and install its north prefixes).

5.3.10.  Address Family and Multi Topology Considerations

   Multi-Topology (MT)[RFC5120] and Multi-Instance (MI)[RFC8202]
       compute DERIVED_LEVEL.  HAT is used
   today in link-state routing protocols to support several domains on
   the same physical topology.  RIFT supports this capability by
   carrying transport ports in the LIE protocol exchanges.  Multiplexing
   of LIEs can be achieved by either choosing varying multicast
   addresses or ports on the same address.

   BFD interactions in limit adjacency formation
       per Section 5.3.5 are implementation dependent when
   multiple RIFT instances run on 4.2.2.

   It MAY also follow modified procedures:

   1.  It may pick a different strategy to choose VOL, e.g.  use the same link.

5.3.11.  Reachability VOL
       value with highest number of Internal Nodes in the Fabric

   RIFT does not precondition that its nodes have reachable addresses
   albeit for operational purposes this is clearly desirable.  Under
   normal operating conditions this can be easily achieved by e.g.
   injecting VOLs.  Such strategies are only
       possible since the node's loopback address into North and South Prefix
   TIEs or other implementation specific mechanisms.

   Things get more interesting in case a node looses all its northbound
   adjacencies but is not at always remains "at the top bottom of the fabric.  That is outside
       fabric" while another layer could "invert" the
   scope of this document and may be covered fabric by picking
       its prefered VOL in a separate document
   about policy guided prefixes [PGP reference].

5.3.12.  One-Hop Healing of Levels with East-West Links

   Based on different fashion than always trying to
       achieve the rules defined in Section 5.2.4, Section 5.2.3.8 highest viable level.

4.2.7.5.  ZTP FSM

   This section specifies the precise, normative ZTP FSM and
   given presence of E-W links, RIFT can provide a one-hop protection of
   nodes that lost all their northbound links or in other complex link
   set failure scenarios except at Top-of-Fabric where be
   omitted unless the links are
   used exclusively to flood topology information in multi-plane
   designs.  Section 6.4 explains reader is pursuing an implemenentation of the resulting behavior based on one
   such example.

5.4.  Security

5.4.1.  Security Model

   An inherent property of any security and ZTP architecture
   protocol.

   Initial state is the
   resulting trade-off in regard to integrity verification of the
   information distributed through the fabric vs. necessary provisioning
   and auto-configuration.  At a minimum, in all approaches, the
   security ComputeBestOffer.
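   The level derivation rules of Section 4.2.7.4 can be sketched in a
   few lines.  This is an illustrative, non-normative sketch; the
   function and variable names are invented here, only the HAL and
   MAX(HAL-1,0) computations come from the text above.

   ```python
   def derived_level(vols):
       """Derive a node's level from its Valid Offered Levels (VOLs).

       Returns None to stand in for UNDEFINED_LEVEL when no valid
       offers are available (hypothetical convention, not normative).
       """
       if not vols:
           return None                 # level stays UNDEFINED_LEVEL
       hal = max(vols)                 # HAL: numerically highest available level
       return max(hal - 1, 0)          # DERIVED_LEVEL = MAX(HAL-1, 0)

   # A node seeing offers at levels 24 and 23 derives level 23:
   assert derived_level({24, 23}) == 23
   # An offer at level 1 derives level 0 (never negative):
   assert derived_level({1}) == 0
   ```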

digraph Gd436cc3ced8c471eb30bd4f3ac946261 {
    N06108ba9ac894d988b3e4e8ea5ace007
[label="Enter"]
[style="invis"]
[shape="plain"];
    Na47ff5eac9aa4b2eaf12839af68aab1f
[label="MultipleNeighborsWait"]
[shape="oval"];
    N57a829be68e2489d8dc6b84e10597d0b
[label="OneWay"]
[shape="oval"];
    Na641d400819a468d987e31182cdb013e
[label="ThreeWay"]
[shape="oval"];
    Necfbfc2d8e5b482682ee66e604450c7b
[label="Enter"]
[style="dashed"]
[shape="plain"];
    N16db54bf2c5d48f093ad6c18e70081ee
[label="TwoWay"]
[shape="oval"];
    N1b89016876b44cc1b9c1e4a735769560
[label="Exit"]
[style="invis"]
[shape="plain"];
    N16db54bf2c5d48f093ad6c18e70081ee -> N57a829be68e2489d8dc6b84e10597d0b

[label="|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> N57a829be68e2489d8dc6b84e10597d0b
[label="|NeighborDroppedReflection|"]
[color="red"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> Na47ff5eac9aa4b2eaf12839af68aab1f
[label="|MultipleNeighbors|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Necfbfc2d8e5b482682ee66e604450c7b -> N57a829be68e2489d8dc6b84e10597d0b
[label=""]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> N16db54bf2c5d48f093ad6c18e70081ee
[label="|NewNeighbor|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> Na47ff5eac9aa4b2eaf12839af68aab1f
[label="|MultipleNeighbors|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N16db54bf2c5d48f093ad6c18e70081ee -> N16db54bf2c5d48f093ad6c18e70081ee
[label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> N16db54bf2c5d48f093ad6c18e70081ee
[label="|NeighborDroppedReflection|"]
[color="red"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na47ff5eac9aa4b2eaf12839af68aab1f -> Na47ff5eac9aa4b2eaf12839af68aab1f
[label="|TimerTick|\n|MultipleNeighbors|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> N57a829be68e2489d8dc6b84e10597d0b
[label="|LevelChanged|\n|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> Na641d400819a468d987e31182cdb013e
[label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> N57a829be68e2489d8dc6b84e10597d0b
[label="|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na47ff5eac9aa4b2eaf12839af68aab1f -> Na47ff5eac9aa4b2eaf12839af68aab1f

[label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N16db54bf2c5d48f093ad6c18e70081ee -> N57a829be68e2489d8dc6b84e10597d0b
[label="|LevelChanged|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> N57a829be68e2489d8dc6b84e10597d0b
[label="|LevelChanged|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N16db54bf2c5d48f093ad6c18e70081ee -> Na47ff5eac9aa4b2eaf12839af68aab1f
[label="|MultipleNeighbors|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na47ff5eac9aa4b2eaf12839af68aab1f -> N57a829be68e2489d8dc6b84e10597d0b
[label="|MultipleNeighborsDone|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N16db54bf2c5d48f093ad6c18e70081ee -> Na641d400819a468d987e31182cdb013e
[label="|ValidReflection|"]
[color="red"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na47ff5eac9aa4b2eaf12839af68aab1f -> N57a829be68e2489d8dc6b84e10597d0b
[label="|LevelChanged|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> Na641d400819a468d987e31182cdb013e
[label="|TimerTick|\n|LieRcvd|\n|SendLie|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> N57a829be68e2489d8dc6b84e10597d0b
[label="|TimerTick|\n|LieRcvd|\n|NeighborChangedLevel|\n|NeighborChangedAddress|\n|NeighborAddressAdded|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|\n|SendLie|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> Na641d400819a468d987e31182cdb013e
[label="|ValidReflection|"]
[color="red"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N16db54bf2c5d48f093ad6c18e70081ee -> N16db54bf2c5d48f093ad6c18e70081ee
[label="|TimerTick|\n|LieRcvd|\n|SendLie|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> Na641d400819a468d987e31182cdb013e
[label="|ValidReflection|"]
[color="red"]
[arrowhead="normal" dir="both" arrowtail="none"];
}
                                ZTP FSM DOT
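   The transitions encoded in the DOT description above can be read as
   a dispatch table keyed on (state, event).  The following sketch
   reproduces a small subset of those edges; state and event names come
   from the figure, while the table shape and `step` helper are purely
   illustrative.

   ```python
   # Subset of the transitions from the DOT figure (illustrative only).
   TRANSITIONS = {
       ("OneWay", "NewNeighbor"): "TwoWay",
       ("OneWay", "ValidReflection"): "ThreeWay",
       ("TwoWay", "ValidReflection"): "ThreeWay",
       ("TwoWay", "HoldtimeExpired"): "OneWay",
       ("ThreeWay", "NeighborDroppedReflection"): "TwoWay",
       ("ThreeWay", "MultipleNeighbors"): "MultipleNeighborsWait",
       ("MultipleNeighborsWait", "MultipleNeighborsDone"): "OneWay",
   }

   def step(state, event):
       # Events without a listed edge leave the state unchanged
       # (self-loops such as TimerTick in the figure).
       return TRANSITIONS.get((state, event), state)

   assert step("OneWay", "NewNeighbor") == "TwoWay"
   assert step("ThreeWay", "MultipleNeighbors") == "MultipleNeighborsWait"
   ```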

   Events

   o  TimerTick: one second timer tic

   o  LevelChanged: node's level has been changed by ZTP or
      configuration

   o  HALChanged: best HAL computed by ZTP has changed

   o  HATChanged: HAT computed by ZTP has changed

   o  HALSChanged: set of an established HAL offering systems computed by ZTP has
      changed

   o  LieRcvd: received LIE

   o  NewNeighbor: new neighbor parsed

   o  ValidReflection: received own reflection from neighbor

   o  NeighborDroppedReflection: lost previous own reflection from
      neighbor

   o  NeighborChangedLevel: neighbor changed advertised level

   o  NeighborChangedAddress: neighbor changed IP address

   o  UnacceptableHeader: unacceptable header seen

   o  MTUMismatch: MTU mismatched

   o  PODMismatch: Unacceptable PoD seen

   o  HoldtimeExpired: adjacency can be ensured.  The stricter
   the security model the hold down expired

   o  MultipleNeighbors: more provisioning must take over the role of
   ZTP.

   The most security conscious operators will want to have full control
   over which port than one neighbor seen on which router/switch is connected to the respective
   port interface

   o  MultipleNeighborsDone: cooldown for multiple neighbors expired

   o  SendLie: send a LIE out

   o  UpdateZTPOffer: update this node's ZTP offer

   Actions

      on the "other side", which we will call the "port-association
   model" (PAM) achievable e.g. by configuring MTUMismatch in OneWay finishes in OneWay: no action
      on each port pair a
   designated shared key or pair of private/public keys.  In secure data
   center locations, operators may want to control which router/switch
   is connected to which other router/switch only or choose a "node-
   association model" (NAM) which allows, for example, simplified port
   sparing.  In an even more relaxed environment, an operator may only
   be concerned that the router/switches share credentials ensuring that
   they belong to this particular data center network hence allowing the
   flexible sparing of whole routers/switches.  We will define that case
   as the "fabric-association model" (FAM), equivalent to using a shared
   secret for the whole fabric.  Such flexibility may make sense for
   leaf nodes such as servers where the addition and swapping of servers
   is more frequent than the rest of the data center network.
   Generally, leafs of the fabric tend to be less trusted than switches.
   The different models could be mixed throughout the fabric if the
   benefits outweigh the cost of increased complexity HoldtimeExpired in provisioning.

   In each of the above cases, some configuration mechanism is needed to
   allow the operator to specify which connections are allowed, and some
   mechanism is needed to:

   a.  specify the according level OneWay finishes in the fabric,

   b.  discover and report missing connections,

   c.  discover and report unexpected connections, and prevent such
       adjacencies from forming.

   On the more relaxed configuration side of the spectrum, operators
   might only configure the level of each switch, but don't explicitly
   configure which connections are allowed.  In this case, RIFT will
   only allow adjacencies to come up between nodes are that OneWay: no action

      on LevelChanged in adjacent
   levels.  The operators ThreeWay finishes in OneWay: update level with lowest security requirements may not use
   any configuration to specify which connections are allowed.  Such
   fabrics could rely fully
      event value

      on ZTP for each router/switch to discover
   its level and would only allow adjacencies between adjacent levels to
   come up.  Figure 30 illustrates the tradeoffs inherent MultipleNeighbors in the
   different security models.

   Ultimately, some level of verification of the link quality may be
   required before an adjacency is allowed to be used for forwarding.
   For example, an implementation may require that a BFD session comes
   up before advertising the adjacency.

   For the above outlined cases, RIFT has two approaches to enforce that
   a local port is connected to the correct port MultipleNeighborsWait finishes in
      MultipleNeighborsWait: start multiple neighbors timer as 4 *
      DEFAULT_LIE_HOLDTIME

      on the correct remote
   router/switch.  One approach is to piggy-back HALChanged in MultipleNeighborsWait finishes in
      MultipleNeighborsWait: store new HAL

      on RIFT's
   authentication mechanism.  Assuming the provisioning model (e.g. the
   YANG model) is flexible enough, operators can choose to provision a
   unique authentication key for:

   a.  each pair of ports NeighborChangedAddress in "port-association model" or

   b.  each pair of switches ThreeWay finishes in "node-association model" or

   c.  each pair of levels or

   d.  the entire fabric OneWay: no
      action

      on ValidReflection in "fabric-association model".

   The other approach is to rely OneWay finishes in ThreeWay: no action

      on the system-id, port-id and level
   fields MTUMismatch in the LIE message to validate an adjacency against the
   configured expected cabling topology, and optionally introduce some
   new rules TwoWay finishes in the FSM to allow the adjacency to come up OneWay: no action

      on TimerTick in MultipleNeighborsWait finishes in
      MultipleNeighborsWait: decrement MultipleNeighbors timer, if the
   expectations are met.

                   ^                 /\                  |
                  /|\               /  \                 |
                   |               /    \                |
                   |              / PAM  \               |
               Increasing        /        \          Increasing
               Integrity        +----------+         Flexibility
                   &           /    NAM     \            &
              Increasing      +--------------+         Less
              Provisioning   /      FAM       \     Configuration
                   |        +------------------+         |
                   |       / Level Provisioning \        |
                   |      +----------------------+      \|/
                   |     /    Zero Configuration  \      v
                        +--------------------------+

                         Figure 30: Security Model

5.4.2.  Security Mechanisms

   RIFT Security goals are to ensure authentication, message integrity
   and prevention of replay attacks.  Low processing overhead and
   efficient messaging are also a goal.  Message confidentiality is a
   non-goal.

   The model
      expired PUSH MultipleNeighborsDone

      on MultipleNeighborsDone in the previous section allows a range of security key
   types that are analogous to the various security association models.
   PAM and NAM allow security associations at the port or node level
   using symmetric or asymmetric keys that are pre-installed.  FAM
   argues for security associations to be applied only at a group level
   or to be refined once the topology has been established.  RIFT does
   not specify how security keys are installed or updated it specifies
   how the key can be used to achieve goals.

   The protocol has provisions for "weak" nonces to prevent replay
   attacks and includes authentication mechanisms comparable to
   [RFC5709] and [RFC7987].

5.4.3.  Security Envelope

   RIFT MUST be carried MultipleNeighborsWait finishes in a mandatory secure envelope illustrated
      OneWay: decrement MultipleNeighbors timer, if expired PUSH
      MultipleNeighborsDone

      on HATChanged in
   Figure 31.  Any value ThreeWay finishes in the packet following a security fingerprint
   MUST be used only after the according fingerprint has been validated.

   Local configuration MAY allow to skip the checking of the envelope's
   integrity.

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1

      UDP Header:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |           Source Port         |       RIFT destination port   |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |           UDP Length          |        UDP Checksum           |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

      Outer Security Envelope Header:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |           RIFT MAGIC          |         Packet Number         |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |    Reserved   |  RIFT Major   | Outer Key ID  | Fingerprint   |
      |               |    Version    |               |    Length     |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      ~       Security Fingerprint covers all following content       ~
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | Weak Nonce Local              | Weak Nonce Remote             |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |            Remaining TIE Lifetime (all 1s ThreeWay: store HAT

      on UpdateZTPOffer in case of LIE)     |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

      TIE Origin Security Envelope Header:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |              TIE Origin Key ID                |  Fingerprint  |
      |                                               |    Length     |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      ~       Security Fingerprint covers all following content       ~
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

      Serialized RIFT Model Object
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      ~                Serialized RIFT Model Object                   ~
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                       Figure 31: Security Envelope

   RIFT MAGIC:  16 bits.  Constant value of 0xA1F7 that allows TwoWay finishes in TwoWay: send offer to
      classify RIFT packets independent of used UDP port.

   Packet Number:  16 bits.  An optional, per packet type monotonically
      growing number rolling over using sequence number arithmetic
      defined inAppendix A.  A node SHOULD correctly set the number ZTP
      FSM

      on
      subsequent packets or otherwise MUST set the value to
      `undefined_packet_number` as provided HALSChanged in the schema.  This number
      can be used to detect losses and misordering TwoWay finishes in flooding for
      either operational purposes or TwoWay: store HALS

      on PODMismatch in implementation to adjust
      flooding behavior to current link or buffer quality.  This number
      MUST NOT be used to discard or validate the correctness of
      packets.

   RIFT Major Version:  8 bits.  It allows to check whether protocol
      versions are compatible, i.e. the serialized object can be decoded
      at all.  An implementation MUST drop packets with unexpected value
      and MAY report a problem.  Must be same as TwoWay finishes in encoded model
      object, otherwise packet is dropped.

   Outer Key ID:  8 bits to allow key rollovers.  This implies key type
      and used algorithm.  Value 0 means that OneWay: no valid fingerprint was
      computed.  This key ID scope is local to the nodes action

      on both ends of
      the adjacency.

   TIE Origin Key ID:  24 bits.  This implies key type and used
      algorithm.  Value 0 means that LieRcvd in TwoWay finishes in TwoWay: PROCESS_LIE

      on PODMismatch in ThreeWay finishes in OneWay: no valid fingerprint was computed.
      This key ID scope is global to the RIFT instance since it implies
      the originator of the TIE so the contained object does not have to
      be de-serialized to obtain it.

   Length of Fingerprint:  8 bits.  Length action

      on TimerTick in 32-bit multiples of the
      following fingerprint not including lifetime or weak nonces.  It
      allows to navigate the structure when an unknown key type is
      present.  To clarify a common cornercase when this value is set to
      0 it signifies an empty (0 bytes long) security fingerprint.

   Security Fingerprint:  32 bits * Length of Fingerprint.  This is a
      signature that is computed over all data following after it.  If
      the signficant bits of fingerprint are fewer than the 32 bits
      padded length than the signficant bits MUST be left aligned and
      remaining bits TwoWay finishes in TwoWay: PUSH SendLie event, if
      holdtime expired PUSH HoldtimeExpired event

      on the right padded with 0s.  When using PKI the
      Security fingerprint originating node uses its private key to
      create the signature.  The original packet can then be verified
      provided the public key is shared and current.

   Remaining TIE Lifetime:  32 bits.  In case of anything but TIEs this
      field MUST be set to all ones and Origin Security Envelope Header
      MUST NOT be present SendLie in the packet.  For TIEs this field represents
      the remaining lifetime of the TIE and Origin Security Envelope
      Header MUST be present TwoWay finishes in the packet.  The value TwoWay: SEND_LIE

      on SendLie in the serialized
      model object MUST be ignored.

   Weak Nonce Local:   16 bits.  Local Weak Nonce of the adjacency as
      advertised OneWay finishes in LIEs.

   Weak Nonce Remote:   16 bits.  Remote Weak Nonce of the adjacency as
      received OneWay: SEND_LIE

      on TimerTick in LIEs.

   TIE Origin Security Envelope Header:  It MUST be present if and only OneWay finishes in OneWay: PUSH SendLie event
      on HALChanged in OneWay finishes in OneWay: store new HAL

      on HALSChanged in ThreeWay finishes in ThreeWay: store HALS

      on NeighborChangedLevel in TwoWay finishes in OneWay: no action

      on PODMismatch in OneWay finishes in OneWay: no action

      on HoldtimeExpired in TwoWay finishes in OneWay: no action

      on TimerTick in ThreeWay finishes in ThreeWay: PUSH SendLie event,
      if the Remaining TIE Lifetime field is NOT all ones.  It carries
      through the originators key ID and according fingerprint of the
      object to protect TIE from modification during flooding.  This
      ensures origin validation and integrity (but does not provide
      validation of a chain of trust).

   Observe that due to the schema migration rules per Appendix B the
   contained model can be always decoded if the major version matches
   and the envelope integrity has been validated.  Consequently,
   description of the TIE is available to flood it properly including
   unknown TIE types.

5.4.4.  Weak Nonces

   The protocol uses two 16 bit nonces to salt generated signatures.  We
   use the term "nonce" a bit loosely since RIFT nonces are not being
   changed holdtime expired PUSH HoldtimeExpired event

      on every packet MultipleNeighbors in TwoWay finishes in MultipleNeighborsWait:
      start multiple neighbors timer as common 4 * DEFAULT_LIE_HOLDTIME

      on UpdateZTPOffer in cryptography.  For efficiency
   purposes they are changed at a frequency high enough to dwarf replay
   attacks attempts for all practical purposes.  Therefore, we call them
   "weak" nonces.

   Any implementation including RIFT security MUST generate and wrap
   around local nonces properly.  When a nonce increment leads MultipleNeighborsWait finishes in
      MultipleNeighborsWait: send offer to
   `undefined_nonce` value the value SHOULD be incremented again
   immediately.  All implementation MUST reflect the neighbor's nonces.
   An implementation SHOULD increment a chosen nonce on every LIE ZTP FSM
   transition that ends up

      on LieRcvd in a different state from the previous and
   MUST increment its nonce at least every 5 minutes (such
   considerations allow for efficient implementations without opening a
   significant security risk).  When flooding TIEs, the implementation
   MUST use recent (i.e. within allowed difference) nonces reflected OneWay finishes in
   the LIE exchange.  The schema specifies maximum allowable nonce value
   difference OneWay: PROCESS_LIE

      on a packet compared to reflected nonces LevelChanged in the LIEs.  Any
   packet received with nonces deviating more than the allowed delta
   MUST be discarded without further computation of signatures to
   prevent computation load attacks.

   In case where a secure implementation does not receive signatures or
   receives undefined nonces from neighbor indicating that it does not
   support or verify signatures, it is a matter of local policy how such
   packets are treated.  Any secure implementation may choose to either
   refuse forming an adjacency with an implementation not advertising
   signatures or valid nonces or simply keep on signing local packets
   while accepting neighbor's packets without further security
   verification.

   As a necessary exception, an implementation MUST advertise
   `undefined_nonce` for remote nonce value when the FSM is not MultipleNeighborsWait finishes in 2-way
   or 3-way state and accept an `undefined_nonce` for its local nonce OneWay:
      update level with event value

      on packets UpdateZTPOffer in any other state than 3-way.

   As optional optimization, an implemenation MAY ThreeWay finishes in ThreeWay: send one LIE with
   previously negotiated neighbor's nonce to try to speed up a
   neighbor's transition from 3-way to 1-way and MUST revert offer to sending
   `undefined_nonce` after that.

5.4.5.  Lifetime

   Protecting lifetime
      ZTP FSM

      on flooding may lead to excessive number of
   security fingerprint computation and hence an application generating
   such fingerprints HALChanged in TwoWay finishes in TwoWay: store new HAL

      on TIEs MAY round the value down to the next
   `rounddown_lifetime_interval` defined UnacceptableHeader in the schema when sending TIEs
   albeit such optimization OneWay finishes in presence of security hashes over
   advancing weak nonces may not be feasible.

5.4.6.  Key Management

   As outlined in the Security Model, a private shared key or a public/
   private key pair is used to authenticate the adjacency.  The actual
   method of key distribution and key synchronization is assumed to be
   out of band from RIFT's perspective.  Both nodes in the adjacency
   must share the same keys and configuration of key type and algorithm
   for a key ID.  Mismatched keys will obviously not inter-operate due
   to an unverifiable security envelope.

   Key roll-over while the adjacency is active is allowed and the
   technique is well known and described in e.g. [RFC6518].  Key
   distribution procedures are out of scope for RIFT.

5.4.7.  Security Association Changes

   There is no mechanism to convert a security envelope for the same
   key ID from one algorithm to another once the envelope is
   operational.  The recommended procedure to change to a new algorithm
   is to take the adjacency down, make the changes, and then bring the
   adjacency up.  Obviously, an implementation may choose to stop
   verifying the security envelope for the duration of the key change
   to keep the adjacency up but, since this introduces a security
   vulnerability window, such a roll-over is not recommended.

      on SendLie in ThreeWay finishes in ThreeWay: SEND_LIE

      on MTUMismatch in ThreeWay finishes in OneWay: no action

      on HATChanged in MultipleNeighborsWait finishes in
      MultipleNeighborsWait: store HAT

      on NeighborChangedAddress in OneWay finishes in OneWay: no action

      on ValidReflection in TwoWay finishes in ThreeWay: no action

      on MultipleNeighbors in OneWay finishes in MultipleNeighborsWait:
      start multiple neighbors timer as 4 * DEFAULT_LIE_HOLDTIME

      on NeighborChangedLevel in OneWay finishes in OneWay: no action

      on HATChanged in OneWay finishes in OneWay: store HAT

      on NeighborDroppedReflection in OneWay finishes in OneWay: no
      action

      on HALChanged in ThreeWay finishes in ThreeWay: store new HAL

      on NeighborAddressAdded in OneWay finishes in OneWay: no action

      on NeighborChangedAddress in TwoWay finishes in OneWay: no action

      on LieRcvd in ThreeWay finishes in ThreeWay: PROCESS_LIE

      on UnacceptableHeader in TwoWay finishes in OneWay: no action

      on LevelChanged in TwoWay finishes in OneWay: update level with
      event value

      on HATChanged in TwoWay finishes in TwoWay: store HAT

      on UpdateZTPOffer in OneWay finishes in OneWay: send offer to ZTP
      FSM

      on ValidReflection in ThreeWay finishes in ThreeWay: no action

      on UnacceptableHeader in ThreeWay finishes in OneWay: no action

      on HoldtimeExpired in ThreeWay finishes in OneWay: no action

      on NeighborChangedLevel in ThreeWay finishes in OneWay: no action

      on LevelChanged in OneWay finishes in OneWay: update level with
      event value, PUSH SendLie event

      on NewNeighbor in OneWay finishes in TwoWay: PUSH SendLie event

      on NeighborDroppedReflection in ThreeWay finishes in TwoWay: no
      action

      on MultipleNeighbors in ThreeWay finishes in
      MultipleNeighborsWait: start multiple neighbors timer as 4 *
      DEFAULT_LIE_HOLDTIME

      on UnacceptableHeader in OneWay finishes in OneWay: no action

      on HALChanged in TwoWay finishes in TwoWay: store new HAL

      on HALSChanged in OneWay finishes in OneWay: store HALS

      on HALSChanged in MultipleNeighborsWait finishes in
      MultipleNeighborsWait: store HALS

      on LevelChanged in MultipleNeighborsWait finishes in OneWay:
      update level with event value

      on UpdateZTPOffer in ThreeWay finishes in ThreeWay: send offer to
      ZTP FSM

      on Entry into OneWay: CLEANUP

   Following words are used for well known procedures:

   1.  PUSH Event: pushes an event to be executed by the FSM upon exit
       of this action

   2.  CLEANUP: neighbor MUST be reset to unknown

   3.  SEND_LIE: create a new LIE packet

       1.  reflecting the neighbor if known and valid and

       2.  setting the necessary `not_a_ztp_offer` variable if level
           was derived from last known neighbor on this interface and

       3.  setting `you_are_not_flood_repeater` to computed value

6.  Examples

6.1.  Normal Operation

   This section describes RIFT deployment in the example topology
   without any node or link failures.  We disregard flooding reduction
   for simplicity's sake.

   As first step, the following bi-directional adjacencies will be
   created (and any other links that do not fulfill LIE rules in
   Section 5.2.2 disregarded):

   1.  Spine 21 (PoD 0) to Spine 111, Spine 112, Spine 121, and Spine
       122

   2.  Spine 22 (PoD 0) to Spine 111, Spine 112, Spine 121, and Spine
       122

   3.  Spine 111 to Leaf 111, Leaf 112

   4.  Spine 112 to Leaf 111, Leaf 112

   5.  Spine 121 to Leaf 121, Leaf 122

   6.  Spine 122 to Leaf 121, Leaf 122

   Consequently, N-TIEs would be originated by Spine 111 and Spine 112
   and each set would be sent to both Spine 21 and Spine 22.  N-TIEs
   also would be originated by Leaf 111 (w/ Prefix 111) and Leaf 112
   (w/ Prefix 112 and the multi-homed prefix) and each set would be
   sent to Spine 111 and Spine 112.  Spine 111 and Spine 112 would then
   flood these N-TIEs to Spine 21 and Spine 22.

   Similarly, N-TIEs would be originated by Spine 121 and Spine 122 and
   each set would be sent to both Spine 21 and Spine 22.  N-TIEs also
   would be originated by Leaf 121 (w/ Prefix 121 and the multi-homed
   prefix) and Leaf 122 (w/ Prefix 122) and each set would be sent to
   Spine 121 and Spine 122.  Spine 121 and Spine 122 would then flood
   these N-TIEs to Spine 21 and Spine 22.

   At this point both Spine 21 and Spine 22, as well as any controller
   to which they are connected, would have the complete network
   topology.  At the same time, Spine 111/112/121/122 hold only the
   N-TIEs of level 0 of their respective PoD.  Leafs hold only their
   own N-TIEs.

   S-TIEs with adjacencies and a default IP prefix would then be
   originated by Spine 21 and Spine 22 and each would be flooded to
   Spine 111, Spine 112, Spine 121, and Spine 122.  Spine 111, Spine
   112, Spine 121, and Spine 122 would each send the S-TIE from Spine
   21 to Spine 22 and the S-TIE from Spine 22 to Spine 21.  (S-TIEs are
   reflected up to the level from which they are received but they are
   NOT propagated southbound.)

   An S-TIE with a default IP prefix would be originated by Spine 111
   and Spine 112 and each would be sent to Leaf 111 and Leaf 112.
   Similarly, an S-TIE with a default IP prefix would be originated by
   Spine 121 and Spine 122 and each would be sent to Leaf 121 and Leaf
   122.  At this point IP connectivity with maximum possible ECMP has
   been established between the leafs while constraining the amount of
   information held by each node to the minimum necessary for normal
   operation and dealing with failures.

   4.  PROCESS_LIE:

       1.  if lie has wrong major version OR our own system ID or
           invalid system ID then CLEANUP else

       2.  if lie has non matching MTUs then CLEANUP, PUSH
           UpdateZTPOffer, PUSH MTUMismatch else

       3.  if PoD rules do not allow adjacency forming then CLEANUP,
           PUSH PODMismatch, PUSH MTUMismatch else

       4.  if lie has undefined level OR my level is undefined OR this
           node is a leaf and remote level is lower than HAT OR (lie's
           level is not leaf AND its difference is more than one from
           my level) then CLEANUP, PUSH UpdateZTPOffer, PUSH
           UnacceptableHeader else

       5.  PUSH UpdateZTPOffer, construct temporary new neighbor
           structure with values from lie, if no current neighbor
           exists then set neighbor to new neighbor, PUSH NewNeighbor
           event, CHECK_THREE_WAY else

           1.  if current neighbor system ID differs from lie's system
               ID then PUSH MultipleNeighbors else

           2.  if current neighbor stored level differs from lie's
               level then PUSH NeighborChangedLevel else

           3.  if current neighbor stored IPv4/v6 address differs from
               lie's address then PUSH NeighborChangedAddress else

           4.  if any of neighbor's flood address port, name, local
               linkid changed then PUSH NeighborChangedMinorFields and

           5.  CHECK_THREE_WAY

   5.  CHECK_THREE_WAY: if current state is one-way do nothing else

       1.  if lie packet does not contain neighbor then if current
           state is three-way then PUSH NeighborDroppedReflection else

       2.  if packet reflects this system's ID and local port and state
           is three-way then PUSH event ValidReflection else PUSH event
           MultipleNeighbors
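The level acceptance condition checked in PROCESS_LIE can be sketched in a few lines (a simplified, non-normative illustration; the function and variable names are assumptions and `None` stands in for an undefined level):

```python
# Sketch of the PROCESS_LIE level check: a received LIE is rejected
# with UnacceptableHeader when either level is undefined, a leaf sees a
# remote level below its HAT, or a non-leaf neighbor's level differs by
# more than one from our own.  Names and encodings are illustrative.

LEAF_LEVEL = 0

def lie_level_unacceptable(my_level, lie_level, i_am_leaf, hat):
    """Return True when the received LIE's level must be rejected."""
    if lie_level is None or my_level is None:
        return True
    if i_am_leaf and lie_level < hat:
        return True
    if lie_level != LEAF_LEVEL and abs(lie_level - my_level) > 1:
        return True
    return False
```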

4.2.7.6.  Resulting Topologies

   The procedures defined in Section 4.2.7.4 will lead to the RIFT
   topology and levels depicted in Figure 27.

6.2.  Leaf Link Failure

                    .  |   |              |   |
                    .+-+---+-+          +-+---+-+
                    .|       |          |       |
                    .|Spin111|          |Spin112|
                    .+-+---+-+          ++----+-+
                    .  |   |             |    |
                    .  |   +---------------+  X
                    .  |                 | |  X Failure
                    .  |   +-------------+ |  X
                    .  |   |               | |
                    .+-+---+-+          +--+--+-+
                    .|       |          |       |
                    .|Leaf111|          |Leaf112|
                    .+-------+          +-------+
                    .      +                  +
                    .     Prefix111     Prefix112

                    Figure 32: Single Leaf link failure

   In case of a failing leaf link between spine 112 and leaf 112, the
   link-state information will cause re-computation of the necessary
   SPF and the higher levels will stop forwarding towards prefix 112
   through spine 112.  Only spines 111 and 112, as well as both Top-of-
   Fabric nodes, will see control traffic.  Leaf 111 will receive a new
   S-TIE from spine 112 and reflect it back to spine 111.  Spine 111
   will de-aggregate prefix 111 and prefix 112 but we will not describe
   it further here since de-aggregation is emphasized in the next
   example.  It is worth observing however in this example that if leaf
   111 kept forwarding traffic towards prefix 112 using the advertised
   south-bound default of spine 112, the traffic would end up on Top-
   of-Fabric 21 and ToF 22 and cross back into PoD 1 using spine 111.
   This is arguably not as bad as the black-holing present in the next
   example but clearly undesirable.  Fortunately, de-aggregation
   prevents this type of behavior except for a transitory period of
   time.

6.3.  Partitioned Fabric

   [Figure 33: ToF 21 and ToF 22 connected southbound to Spin111,
   Spin112, Spin121 and Spin122; both links from ToF 21 towards
   Spin121 and Spin122 have failed (marked X); ToF 21 and ToF 22
   advertise the default 0/0 southbound; prefix 1.1/16 sits behind
   Spin121 and Spin122; the S-TIE of Spine21 is received by ToF 22
   through south reflection of spines 112 and 111.]

                     Figure 33: Fabric partition

   [Figure 27: node As at level 24 over E and F at level 23, over I
   and J at level 22, over X at level 0, with Y attached as a
   LEAF_ONLY node at level 0.]

               Figure 27: Generic ZTP Topology Autoconfigured

   In case we imagine the LEAF_ONLY restriction on Y is removed, the
   outcome would be very different and result in Figure 28.  This
   demonstrates that autoconfiguration makes miscabling detection hard
   and with that can lead to undesirable effects in cases where leafs
   are not "nailed" by the accordingly configured flags and are
   arbitrarily cabled.

   A node MAY analyze the outstanding level offers on its interfaces
   and generate warnings when its internal ruleset flags a possible
   miscabling.  As an example, when a node sees ZTP level offers that
   differ by more than one level from its chosen level (with proper
   accounting for leafs being at level 0) this can indicate miscabling.
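Such a ruleset could, for instance, flag offers as follows (a local heuristic sketch only, not a normative procedure; names are illustrative):

```python
# Heuristic miscabling warning sketch: flag any ZTP level offer that
# differs by more than one from the node's chosen level, while treating
# level-0 (leaf) offers as always acceptable.  Illustrative only.

LEAF_LEVEL = 0

def possibly_miscabled(chosen_level: int, offers: list) -> list:
    """Return the subset of level offers that look like miscabling."""
    return [offer for offer in offers
            if offer != LEAF_LEVEL and abs(offer - chosen_level) > 1]
```

With a chosen level of 2, an offer of 24 would be flagged while offers of 1, 3 or 0 would not.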

                        .        +---+
                        .        | As|
                        .        | 24|
                        .        ++-++
                        .         | |
                        .      +--+ +--+
                        .      |       |
                        .   +--++     ++--+
                        .   | E |     | F |
                        .   | 23+-+   | 23+-------+
                        .   ++--+ |   ++-++       |
                        .    |    |    | |        |
                        .    | +-------+ |        |
                        .    | |  +----+ |        |
                        .    | |       | |        |
                        .   ++-++     ++-++     +-+-+
                        .   | I +-----+ J +-----+ Y |
                        .   | 22|     | 22|     | 22|
                        .   ++-++     +--++     ++-++
                        .    | |         |       | |
                        .    | +-----------------+ |
                        .    |           |         |
                        .    +---------+ |         |
                        .              | |         |
                        .             ++-++        |
                        .             | X +--------+
                        .             | 0 |
                        .             +---+

               Figure 28: Generic ZTP Topology Autoconfigured

   Figure 33 shows the arguably more catastrophic but also more
   interesting case.  Spine 21 is completely severed from access to
   Prefix 121 (we use 1.1/16 in the figure as example) by double link
   failure.  However unlikely, if left unresolved, forwarding from leaf
   111 and leaf 112 to prefix 121 would suffer 50% black-holing based
   on pure default route advertisements by Top-of-Fabric 21 and ToF 22.

4.2.8.  Stability Considerations

   The autoconfiguration mechanism computes a global maximum of levels
   by diffusion.  The achieved equilibrium can be disturbed massively
   by all nodes with the highest level either leaving or entering the
   domain (with some finer distinctions not explained further).  It is
   therefore recommended that each node is multi-homed towards nodes
   with respective HAL offerings.  Fortunately, this is the natural
   state of things for the topology variants considered in RIFT.

   The mechanism used to resolve this scenario is hinging on the
   distribution of southbound representation by Top-of-Fabric 21 that
   is reflected by spine 111 and spine 112 to ToF 22.  Spine 22, having
   computed reachability to all prefixes in the network, advertises
   with the default route the ones that are reachable only via lower
   level neighbors that ToF 21 does not show an adjacency to.  That
   results in spine 111 and spine 112 obtaining a longest-prefix match
   to prefix 121 which leads through ToF 22 and prevents black-holing
   through ToF 21 still advertising the 0/0 aggregate only.

   The prefix 121 advertised by Top-of-Fabric 22 does not have to be
   propagated further towards leafs since they do not benefit from this
   information.  Hence the amount of flooding is restricted to ToF 21
   reissuing its S-TIEs and south reflection of those by spine 111 and
   spine 112.  The resulting SPF in ToF 22 issues new prefix S-TIEs
   containing 1.1/16.  None of the leafs become aware of the changes
   and the failure is constrained strictly to the level that became
   partitioned.

   To finish with an example of the resulting sets computed using the
   notation introduced in Section 5.2.5, Top-of-Fabric 22 constructs
   the following sets:

      |R = Prefix 111, Prefix 112, Prefix 121, Prefix 122

      |H (for r=Prefix 111) = Spine 111, Spine 112

      |H (for r=Prefix 112) = Spine 111, Spine 112

      |H (for r=Prefix 121) = Spine 121, Spine 122

      |H (for r=Prefix 122) = Spine 121, Spine 122

      |A (for Spine 21) = Spine 111, Spine 112

   With that and |H (for r=prefix 121) and |H (for r=prefix 122) being
   disjoint from |A (for Top-of-Fabric 21), ToF 22 will originate an
   S-TIE with prefix 121 and prefix 122, that is flooded to spines 111,
   112, 121 and 122.
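The set computation above can be sketched directly in the |R, |H, |A notation (a non-normative illustration; the helper name and data layout are assumptions):

```python
# Sketch of the positive disaggregation decision above: ToF 22
# originates a more specific S-TIE for every prefix whose advertising
# southbound neighbor set |H(r) is disjoint from |A(ToF 21), the
# adjacency set of the partitioned ToF.  Names are illustrative.

def prefixes_to_disaggregate(reachable_prefixes, h_sets, a_set):
    """Return prefixes whose |H set shares no node with |A."""
    return sorted(r for r in reachable_prefixes
                  if h_sets[r].isdisjoint(a_set))

# The example sets from the text:
H = {
    "Prefix111": {"Spine111", "Spine112"},
    "Prefix112": {"Spine111", "Spine112"},
    "Prefix121": {"Spine121", "Spine122"},
    "Prefix122": {"Spine121", "Spine122"},
}
A_TOF21 = {"Spine111", "Spine112"}
```

With these sets the helper returns Prefix 121 and Prefix 122, matching the S-TIE that ToF 22 originates.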

6.4.  Northbound Partitioned Router and Optional East-West Links

         .   +                  +                  +
         .   X N1               | N2               | N3
         .   X                  |                  |
         .+--+----+          +--+----+          +--+-----+
         .|       |0/0>  <0/0|       |0/0>  <0/0|        |
         .|  A01  +----------+  A02  +----------+  A03   | Level 1
         .++-+-+--+          ++--+--++          +---+-+-++
         . | | |              |  |  |               | | |
         . | | +----------------------------------+ | | |
         . | |                |  |  |             | | | |
         . | +-------------+  |  |  |  +--------------+ |
         . |               |  |  |  |  |          | |   |
         . | +----------------+  |  +-----------------+ |
         . | |             |     |     |          | | | |
         . | | +------------------------------------+ | |
         . | | |           |     |     |          |   | |
         .++-+-+--+        | +---+---+ |        +-+---+-++
         .|       |        +-+       +-+        |        |
         .|  L01  |          |  L02  |          |  L03   | Level 0
         .+-------+          +-------+          +--------+

                    Figure 34: North Partitioned Router

   Figure 34 shows a part of a fabric where level 1 is horizontally
   connected and A01 lost its only northbound adjacency.  Based on the
   N-SPF rules in Section 5.2.4.1, A01 will compute northbound
   reachability by using the link A01 to A02 (whereas A02 will NOT use
   this link during N-SPF).  Hence A01 will still advertise the default
   towards level 0 and route unidirectionally using the horizontal
   link.

   As further consideration, the moment A02 loses link N2 the situation
   evolves again.  A01 will have no more northbound reachability while
   still seeing A03 advertising northbound adjacencies in its south
   node TIE.  With that it will stop advertising a default route due to
   Section 5.2.3.8.

4.3.  Further Mechanisms

4.3.1.  Overload Bit

   The overload bit MUST be respected in all according reachability
   computations.  A node with the overload bit set SHOULD NOT advertise
   any reachability prefixes southbound except locally hosted ones.  A
   node in overload SHOULD advertise all its locally hosted prefixes
   north and southbound.

   The leaf node SHOULD set the 'overload' bit on its node TIEs, since
   if the spine nodes were to forward traffic not meant for the local
   node, the leaf node does not have the topology information to
   prevent a routing/forwarding loop.

7.  Implementation and Operation: Further Details

7.1.  Considerations for Leaf-Only Implementation

   RIFT can and is intended to be stretched to the lowest level in the
   IP fabric to integrate ToRs or even servers.

4.3.2.  Optimized Route Computation on Leafs

   Since those entities
   would run as the leafs only, it is worth to observe that a leaf do see only
   version is significantly simpler "one hop away" they do not need to implement and requires much less
   resources:

   1.  Under normal conditions, the leaf needs to support run a multipath
       default route only.  In most catastrophic partitioning case it
       has to be capable of accommodating all
   "proper" SPF.  Instead, they can gather the available prefix
   candidates from their neighbors and build the routing table
   accordingly.

   A leaf routes in will have no North TIEs except its own
       PoD to prevent black-holing.

   2.  Leaf nodes hold only their own N-TIEs and S-TIEs of Level 1 nodes
       they are connected to; so overall few in numbers.

   3.  Leaf node does not optionally from its
   East-West neighbors.  A leaf will have to support any type South TIEs from its neighbors.

   Instead of de-aggregation
       computation or propagation.

   4.  Leaf nodes do not have to support overload bit normally.

   5.  Unless optional leaf-2-leaf procedures are desired default route
       origination creating a network graph from its North TIEs and S-TIE origination is unnecessary.

7.2.  Considerations for Spine Implementation

   In case of spines, i.e. nodes that will never act as Top of Fabric
   neighbor's South TIEs and then running an SPF, a
   full implementation is not required, specifically the leaf node does not
   need can simply
   compute the minimum cost and next_hop_set to perform any computation of negative disaggregation except
   respecting northbound disaggregation advertised each leaf neighbor by
   examining its local adjacencies, determining bi-directionality from
   the north.

7.3.  Adaptations associated North TIE, and specifying the neighbor's next_hop_set
   set and cost from the minimum cost local adjacency to Other Proposed Data Center Topologies

                         .  +-----+        +-----+
                         .  |     |        |     |
                         .+-+ S0  |        | S1  |
                         .| ++---++        ++---++
                         .|  |   |          |   |
                         .|  | +------------+   |
                         .|  | | +------------+ |
                         .|  | |              | |
                         .| ++-+--+        +--+-++
                         .| |     |        |     |
                         .| | A0  |        | A1  |
                         .| +-+--++        ++---++
                         .|   |  |          |   |
                         .|   |  +------------+ |
                         .|   | +-----------+ | |
                         .|   | |             | |
                         .| +-+-+-+        +--+-++
                         .+-+     |        |     |
                         .  | L0  |        | L1  |
                         .  +-----+        +-----+

                         Figure 35: Level Shortcut

   Strictly speaking, RIFT is not limited to Clos variations only.  The
   protocol preconditions only a sense of 'compass rose direction'
   achieved by configuration (or derivation) of levels and other
   topologies are possible within this framework.  So, conceptually,
   one could include leaf-to-leaf links and even shortcuts between
   levels but certain requirements in Section 4 will not be met
   anymore.  As an example, shortcutting levels as illustrated in
   Figure 35 will lead either to suboptimal routing when L0 sends
   traffic to L1 (since using S0's default route will lead to the
   traffic being sent back to A0 or A1) or the leafs need each other's
   routes installed to understand that only A0 and A1 should be used to
   talk to each other.

   Whether such modifications of topology constraints make sense is
   dependent on many technology variables and the exhausting treatment
   of the topic is definitely outside the scope of this document.

4.3.3.  Mobility

   It is a requirement for RIFT to maintain at the control plane a real
   time status of which prefix is attached to which port of which leaf,
   even in a context of mobility where the point of attachment may
   change several times in a subsecond period of time.

   There are two classical approaches to maintain such knowledge in an
   unambiguous fashion:

   time stamp:  With this method, the infrastructure records the
      precise time at which the movement is observed.  One key
      advantage of this technique is that it has no dependency on the
      mobile device.  One drawback is that the infrastructure must be
      precisely synchronized to be able to compare time stamps as
      observed by the various points of attachment, e.g., using the
      variation of the Precision Time Protocol (PTP) IEEE Std. 1588
      [IEEEstd1588] designed for bridged LANs, IEEE Std. 802.1AS
      [IEEEstd8021AS].  Both the precision of the synchronisation
      protocol and the resolution of the time stamp must beat the
      highest possible roaming time on the fabric.  Another drawback is
      that the presence of the mobile device may be observed only
      asynchronously, e.g., after it starts using an IP protocol such
      as ARP [RFC0826], IPv6 Neighbor Discovery [RFC4861][RFC4862], or
      DHCP [RFC2131][RFC8415].

   sequence counter:  With this method, a mobile node notifies its
      point of attachment on arrival with a sequence counter that is
      incremented upon each movement.  On the positive side, this
      method does not have a dependency on a precise sense of time,
      since the sequence of movements is kept in order by the device.
      The disadvantage of this approach is the lack of support for
      protocols that may be used by the mobile node to register its
      presence to the point of attachment with the capability to
      provide a sequence counter.  Well-known issues with wrapping
      sequence counters must be addressed properly, and many forms of
      sequence counters exist that vary in both wrapping rules and
      comparison rules.  A particular knowledge of the source of the
      sequence counter is required to operate it, and the comparison
      between sequence counters from heterogeneous sources can be hard
      to impossible.

7.4.  Originating Non-Default Route Southbound

   Obviously, an implementation may choose to originate southbound,
   instead of a strict default route (as described in Section 5.2.3.8),
   a shorter prefix P' but in such a scenario all addresses carried
   within the RIFT domain must be contained within P'.

8.  Security Considerations

8.1.  General

   One can consider attack vectors where a router may reboot many times
   while changing its system ID and pollute the network with many stale
   TIEs or where TIEs are sent with very long lifetimes and not cleaned
   up when the routes vanish.  Those attack vectors are not unique to
   RIFT.  Given large memory footprints available today those attacks
   should be relatively benign.  Otherwise a node SHOULD implement a
   strategy of discarding contents of all TIEs that were not present in
   the SPF tree over a certain, configurable period of time.  Since the
   protocol, like all modern link-state protocols, is self-stabilizing
   and will advertise the presence of such TIEs to its neighbors, they
   can be re-requested again if a computation finds that it sees an
   adjacency formed towards the system ID of the discarded TIEs.

8.2.  ZTP

   Section 5.2.7 presents many attack vectors in untrusted
   environments, starting with nodes that oscillate their level offers
   to the possibility of a leaf node offering a three-way adjacency
   with the highest possible level value and a very long holdtime,
   trying to put itself "on top of the lattice" and with that gaining
   access to the whole southbound topology.  Session authentication
   mechanisms are necessary in environments where this is possible and
   RIFT provides the according security envelope to ensure this if
   desired.

8.3.  Lifetime

   Traditional IGP protocols are vulnerable to lifetime modification
   and replay attacks that can be somewhat mitigated by using
   techniques like [RFC7987].

   RIFT removes this attack vector by protecting the
   lifetime behind supports a signature computed over it and additional nonce
   combination which makes even the replay attack window very small and
   for practical purposes irrelevant since lifetime cannot be
   artificially shortened by the attacker.

8.4.  Packet Number

   Optional packet number is carried hybrid approach contained in the security envelope without
   any encryption protection and is hence vulnerable to replay and
   modification attacks.  Contrary to nonces this number must change on
   every packet and would present an optional
   `PrefixSequenceType` prefix attribute that we call a very high cryptographic load if
   signed.  The attack vector packet number present is relatively
   benign.  Changing the packet number by `monotonic
   clock` consisting of a man-in-the-middle attack
   will only affect operational validation tools timestamp and possibly some
   performance optimizations on flooding.  It is expected that an
   implementation detecting too many "fake losses" or "misorderings" due
   to the attack on optional sequence number.  In
   case of presence of the packet number would simply suppress its further
   processing.

8.5.  Outer Fingerprint Attacks

   A attribute:

   o  The leaf node can try to inject LIE packets observing MAY advertise a conversation on time stamp of the
   wire latest sighting of
      a prefix, e.g., by snooping IP protocols or the node using the outer key ID albeit it cannot generate valid hashes
   in case
   time at which it advertised the prefix.  RIFT transports the time
      stamp within the desired prefix North TIEs as 802.1AS timestamp.

   o  RIFT may interoperate with "update to 6LoWPAN Neighbor Discovery"
      [RFC8505], which provides a method for registering a prefix with a
      sequence counter called a Transaction ID (TID).  RIFT transports
      in such case the TID in its native form.

   o  RIFT also defines an abstract negative clock (ASNC) that compares
      as less than any other clock.  By default, the lack of a
      `PrefixSequenceType` in a Prefix North TIE is interpreted as ASNC.
      We call this also an `undefined` clock.

   o  Any prefix present on the fabric in multiple nodes that has the
      `same` clock is considered as anycast.  ASNC is always considered
      smaller than any defined clock.

   o  RIFT implementation assumes by default that all nodes are being
      synchronized to 200 milliseconds precision which is easily
      achievable even in very large fabrics using [RFC5905].  An
      implementation MAY provide a way to reconfigure a domain to a
      different value.  We call this variable MAXIMUM_CLOCK_DELTA.

   changes the integrity of the message so the only possible attack is
   DoS due to excessive LIE validation.

   A node can try to replay previous LIEs with changed state that it
   recorded but the attack is hard to replicate since the nonce
   combination must match the ongoing exchange and is then limited to a
   single flap only since both nodes will advance their nonces in case
   the adjacency state changed.  Even in the most unlikely case the
   attack length is limited due to both sides periodically increasing
   their nonces.

8.6.  TIE Origin Fingerprint DoS Attacks

   A compromised node can attempt to generate "fake TIEs" using other
   nodes' TIE origin key identifiers.  Albeit the ultimate validation of
   the origin fingerprint will fail in such scenarios and not progress
   further than immediately peering nodes, the resulting denial of
   service attack seems unavoidable since the TIE origin key id is only
   protected by the, here assumed to be compromised, node.

8.7.  Host Implementations

   It can be reasonably expected that, with the proliferation of RotH
   servers, hosts rather than dedicated networking devices will
   constitute a significant amount of RIFT devices.  Given their
   normally far wider software envelope and access granted to them, such
   servers are also far more likely to be compromised and present an
   attack vector on the protocol.  Hijacking of prefixes to attract
   traffic is a trust problem and cannot be addressed within the
   protocol if the trust model is breached, i.e. the server presents
   valid credentials to form an adjacency and issue TIEs.  However, in a
   more devious way, the servers can present DoS (or even DDoS) vectors
   of issuing too many LIE packets, flooding large amounts of N-TIEs and
   similar anomalies.  A prudent implementation hosting leafs should
   implement thresholds and raise warnings when a leaf is advertising a
   number of TIEs in excess of those.

4.3.3.1.  Clock Comparison

   All monotonic clock values are comparable to each other using the
   following rules:

   1.  ASNC is

9.  IANA Considerations

   This specification requests multicast address assignments and
   standard port numbers.  Additionally registries for older than any other value except ASNC AND

   2.  Clock with timestamp differing by more than MAXIMUM_CLOCK_DELTA
       are comparable by using the schema timestamps only AND

   3.  Clocks with timestamps differing by less than MAXIMUM_CLOCK_DELTA
       are
   requested comparable by using their TIDs only AND

   4.  An undefined TID is always older than any other TID AND

   5.  TIDs are compared using rules of [RFC8505].

4.3.3.2.  Interaction between Time Stamps and suggested values provided Sequence Counters

   For slow movements that reflect occur less frequently than e.g. once per
   second, the numbers
   allocated in time stamp that the given schema.

9.1.  Requested Multicast and Port Numbers

   This document requests allocation in RIFT infrastruture captures is enough
   to determine the 'IPv4 Multicast Address
   Space' registry freshest discovery.  If the suggested value point of 224.0.0.120 as
   'ALL_V4_RIFT_ROUTERS' attachement
   changes faster than the maximum drift of the time stamping mechanism
   (i.e.  MAXIMUM_CLOCK_DELTA), then a sequence counter is required to
   add resolution to the freshness evaluation, and in it must be sized so
   that the 'IPv6 Multicast Address Space'
   registry counters stay comparable within the suggested value resolution of FF02::A1F7 as 'ALL_V6_RIFT_ROUTERS'.

   This document requests allocation in the 'Service Name time
   stampling mechanism.

   The sequence counter in [RFC8505] is encoded as one octet and Transport
   Protocol Port Number Registry' wraps
   around using Appendix A.

   Within the allocation of a suggested value resolution of
   914 on udp for 'RIFT_LIES_PORT' and suggested value MAXIMUM_CLOCK_DELTA the sequence counters
   captured during 2 sequential values of 915 for
   'RIFT_TIES_PORT'.

9.2.  Requested Registries with Suggested Values the time stamp SHOULD be
   comparable.  This section requests registries means with default values that help govern a node may move up
   to 127 times during a 200 milliseconds period and the schema via
   usual IANA registry procedures.  Allocation of new values is always
   performed clocks remain
   still comparable thus allowing the infrastructure to assert the
   freshest advertisement with no ambiguity.

4.3.3.3.  Anycast vs. Unicast

   A unicast prefix can be attached to at most one leaf, whereas an
   anycast prefix may be reachable via `Expert Review` action.  IANA more than one leaf.

   If a monotonic clock attribute is requested to store provided on the
   schema version introducing prefix, then the allocated
   prefix with the `newest` clock value as well as,
   optionally, its description when present.  All values is strictly prefered.  An
   anycast prefix does not suggested
   as to carry a clock or all clock attributes MUST be considered `Unassigned`. The range
   the same under the rules of every registry Section 4.3.3.1.

   Observe that it is a
   16-bit integer.
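   The clock comparison rules of Section 4.3.3.1 can be sketched as
   follows.  This is a non-normative illustration in Python, not part of
   the specification: a clock is modeled as either ASNC (None) or a
   (timestamp, TID) pair, and `serial_lt` is a simplified stand-in for
   the [RFC8505] TID comparison rules.

```python
# Non-normative sketch of Section 4.3.3.1.  A clock is either ASNC
# (modeled here as None, the `undefined` clock) or a (timestamp, tid)
# pair; timestamp in seconds, tid an integer in 0..255 or None.
MAXIMUM_CLOCK_DELTA = 0.200  # 200 milliseconds default

def serial_lt(a, b, space=256):
    # Simplified serial-number comparison standing in for the
    # [RFC8505] TID rules: `a` precedes `b` within half the space.
    return a != b and (b - a) % space < space // 2

def clock_lt(a, b):
    """True if clock `a` compares as older than clock `b`."""
    if a is None:                       # rule 1: ASNC is older than
        return b is not None            # anything except ASNC
    if b is None:
        return False
    (ts_a, tid_a), (ts_b, tid_b) = a, b
    if abs(ts_a - ts_b) > MAXIMUM_CLOCK_DELTA:
        return ts_a < ts_b              # rule 2: timestamps decide
    if tid_a is None:                   # rule 4: undefined TID is older
        return tid_b is not None
    if tid_b is None:
        return False
    return serial_lt(tid_a, tid_b)      # rules 3 and 5: TIDs decide
```

   With clocks closer together than MAXIMUM_CLOCK_DELTA only the TIDs
   decide, which is what permits up to 127 moves per 200 milliseconds to
   remain comparable as described in Section 4.3.3.2.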

9.2.1.  RIFT/common/AddressFamilyType

   address family

9.2.1.1.  Requested Entries

          Name                  Value Schema Version Description
          Illegal                   0            2.0
          AddressFamilyMinValue     1            2.0
          IPv4                      2            2.0
          IPv6                      3            2.0
          AddressFamilyMaxValue     4            2.0

9.2.2.  RIFT/common/HierarchyIndications

   flags indicating nodes behavior in case of ZTP

9.2.2.1.  Requested Entries

   Name                                 Value Schema Version Description
   leaf_only                                0            2.0
   leaf_only_and_leaf_2_leaf_procedures     1            2.0
   top_of_fabric                            2            2.0

9.2.3.  RIFT/common/IEEE802_1ASTimeStampType

   timestamp per IEEE 802.1AS, values MUST be interpreted in
   implementation as unsigned

9.2.3.1.  Requested Entries

                 Name    Value Schema Version Description
                 AS_sec      1            2.0
                 AS_nsec     2            2.0

9.2.4.  RIFT/common/IPAddressType

   IP address type

9.2.4.1.  Requested Entries

               Name        Value Schema Version Description
               ipv4address     1            2.0
               ipv6address     2            2.0

9.2.5.  RIFT/common/IPPrefixType

   prefix representing reachability.

   @note: for interface addresses the protocol can propagate the address
   part beyond the subnet mask and on reachability computation that has
   to be normalized.  The non-significant bits can be used for
   operational purposes.

9.2.5.1.  Requested Entries

                Name       Value Schema Version Description
                ipv4prefix     1            2.0
                ipv6prefix     2            2.0

   important that in mobility events the leaf is re-flooding as quickly
   as possible the absence of the prefix that moved away.

   Observe further that without support for [RFC8505] movements on the
   fabric within intervals smaller than 100msec will be seen as anycast.

4.3.3.4.  Overlays and Signaling

   RIFT is agnostic whether any overlay technology like [MIP, LISP,
   VxLAN, NVO3] and the associated signaling is deployed over it.  But
   it is expected that leaf nodes, and possibly Top-of-Fabric nodes can

   perform the correct encapsulation.

9.2.6.  RIFT/common/IPv4PrefixType

   IP v4 prefix type

9.2.6.1.  Requested Entries

                Name      Value Schema Version Description
                address       1            2.0
                prefixlen     2            2.0

9.2.7.  RIFT/common/IPv6PrefixType

   IP v6 prefix type

9.2.7.1.  Requested Entries

                Name      Value Schema Version Description
                address       1            2.0
                prefixlen     2            2.0

9.2.8.  RIFT/common/PrefixSequenceType

   sequence of a prefix when it moves

9.2.8.1.  Requested Entries

   Name          Value       Schema Description
                            Version
   timestamp         1          2.0
   transactionid     2          2.0 transaction ID set by client, e.g.
                                    in 6LoWPAN

   In the context of mobility, overlays provide a

   classical solution to avoid injecting mobile prefixes in the fabric
   and improve the scalability of the solution.  It makes sense on a
   data center that already uses overlays to consider their
   applicability to the mobility solution;

9.2.9.  RIFT/common/RouteType

   RIFT route types.

   @note: route types which MUST be ordered on their preference PGP
   prefixes are most preferred attracting traffic north (towards spine)
   and then south normal prefixes are attracting traffic south (towards
   leafs), i.e. prefix in NORTH PREFIX TIE is preferred over SOUTH
   PREFIX TIE.

   @note: The only purpose of those values is to introduce an ordering
   whereas an implementation can choose internally any other values as
   long as the ordering is preserved

9.2.9.1.  Requested Entries

           Name                Value Schema Version Description
           Illegal                 0            2.0
           RouteTypeMinValue       1            2.0
           Discard                 2            2.0
           LocalPrefix             3            2.0
           SouthPGPPrefix          4            2.0
           NorthPGPPrefix          5            2.0
           NorthPrefix             6            2.0
           NorthExternalPrefix     7            2.0
           SouthPrefix             8            2.0
           SouthExternalPrefix     9            2.0
           NegativeSouthPrefix    10            2.0
           RouteTypeMaxValue      11            2.0

   as an example, a mobility protocol such as LISP may inform

   the ingress leaf of the location of the egress leaf in real time.

   Another possibility is to consider mobility as an underlay service
   and support it in RIFT to an extent.  The load on the fabric augments
   with the amount of mobility obviously since a move forces flooding
   and computation on all nodes in the scope of the move so tunneling
   from leaf to the Top-of-Fabric may be desired.

9.2.10.  RIFT/common/TIETypeType

   type of TIE.

   This enum indicates what TIE type the TIE is carrying.  In case the
   value is not known to the receiver, the TIE is re-flooded the same
   way as prefix TIEs.  This allows for future extensions of the
   protocol within the same schema major version with types opaque to
   some nodes unless the flooding scope is not the same as prefix TIE,
   then a major version revision MUST be performed.

9.2.10.1.  Requested Entries

   Name                                        Value  Schema Description
                                                     Version
   Illegal                                         0     2.0
   TIETypeMinValue                                 1     2.0
   NodeTIEType                                     2     2.0
   PrefixTIEType                                   3     2.0
   PositiveDisaggregationPrefixTIEType             4     2.0
   NegativeDisaggregationPrefixTIEType             5     2.0
   PGPrefixTIEType                                 6     2.0
   KeyValueTIEType                                 7     2.0
   ExternalPrefixTIEType                           8     2.0
   PositiveExternalDisaggregationPrefixTIEType     9     2.0
   TIETypeMaxValue                                10     2.0

9.2.11.  RIFT/common/TieDirectionType

   direction of TIEs

9.2.11.1.  Requested Entries

            Name              Value Schema Version Description
            Illegal               0            2.0
            South                 1            2.0
            North                 2            2.0
            DirectionMaxValue     3            2.0

4.3.4.  Key/Value Store

4.3.4.1.  Southbound

   The protocol supports a southbound distribution of

   key-value pairs that can be used to e.g. distribute configuration
   information during topology bring-up.

9.2.12.  RIFT/encoding/Community

   community

9.2.12.1.  Requested Entries

                  Name   Value Schema Version Description
                  top        1            2.0
                  bottom     2            2.0

9.2.13.  RIFT/encoding/KeyValueTIEElement

   Generic key value pairs

9.2.13.1.  Requested Entries

   Name      Value  Schema Description
                   Version
   keyvalues     1     2.0 if the same key repeats in multiple TIEs of
                           same node or with different values, behavior
                           is unspecified

   The KV South TIEs

   can arrive from multiple nodes and hence need tie-breaking per key.
   We use the following rules:

   1.  Only KV TIEs originated by nodes to which the receiver has a bi-
       directional adjacency are considered.

   2.  Within all such valid KV South TIEs containing the key, the value
       of the KV South TIE for which the according node South TIE is
       present, has the highest level and within the same level has
       highest originating system ID is preferred.  If keys in the most
       preferred TIEs are overlapping, the behavior is undefined.

   Observe that if a node goes down, the node south of it looses
   adjacencies to it and with that the KVs will be disregarded and on
   tie-break changes new KV re-advertised to prevent stale information
   being used by nodes further south.  KV information in southbound
   direction is not result of independent computation of every node over
   same set of TIEs but a diffused computation.

4.3.4.2.  Northbound

   Certain use cases seem to necessitate distribution of essentially KV
   information that is generated in the leafs in the northbound
   direction.  Such information is flooded in KV North TIEs.  Since the
   originator of northbound KV is preserved during northbound flooding,
   overlapping keys could be used.  However, to omit further protocol
   complexity, only the value of the key in TIE tie-broken in

9.2.14.  RIFT/encoding/LIEPacket

   RIFT LIE packet

   @note this node's level is already included on the packet header

9.2.14.1.  Requested Entries

   Name                        Value  Schema Description
                                     Version
   name                            1     2.0 node or adjacency name
   local_id                        2     2.0 local link ID
   flood_port                      3     2.0 UDP port to which we can
                                             receive flooded TIEs
   link_mtu_size                   4     2.0 layer 3 MTU, used to
                                             discover mismatch
   link_bandwidth                  5     2.0 local link bandwidth on the
                                             interface
   neighbor                        6     2.0 reflects the neighbor once
                                             received to provide 3-way
                                             connectivity
   pod                             7     2.0 node's PoD
   node_capabilities              10     2.0 node capabilities shown in
                                             the LIE. The capabilities
                                             MUST match the capabilities
                                             shown in the Node TIEs,
                                             otherwise the behavior is
                                             unspecified. A node
                                             detecting the mismatch
                                             SHOULD generate according
                                             error
   link_capabilities              11     2.0 capabilities of this link
   holdtime                       12     2.0 required holdtime of the
                                             adjacency, i.e. how much
                                             time MUST expire without
                                             LIE for the adjacency to
                                             drop
   label                          13     2.0 unsolicited, downstream
                                             assigned locally
                                             significant label value for
                                             the adjacency
   not_a_ztp_offer                21     2.0 indicates that the level on
                                             the LIE MUST NOT be used to
                                             derive a ZTP level by the
                                             receiving node
   you_are_flood_repeater         22     2.0 indicates to northbound
                                             neighbor that it should be
                                             reflooding this node's
                                             N-TIEs to achieve flood
                                             reduction and balancing for
                                             northbound flooding. To be
                                             ignored if received from a
                                             northbound adjacency
   you_are_sending_too_quickly    23     2.0 can be optionally set to
                                             indicate to neighbor that
                                             packet losses are seen on
                                             reception based on packet
                                             numbers or the rate is too
                                             high. The receiver SHOULD
                                             temporarily slow down
                                             flooding rates
   instance_name                  24     2.0 instance name in case
                                             multiple RIFT instances
                                             running on same interface
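   The southbound KV tie-breaking rules of Section 4.3.4.1 can be
   sketched as follows.  This is a non-normative illustration in Python;
   the candidate tuples and the helper name are hypothetical, and the
   candidates are assumed to be already filtered per rule 1 to
   originators with a bi-directional adjacency whose node South TIE is
   present.

```python
# Non-normative sketch of Section 4.3.4.1 rule 2: among all valid KV
# South TIEs containing a key, the value originated at the highest
# level wins; within the same level the highest originating system ID
# wins.
def select_kv_value(candidates):
    """candidates: iterable of (level, system_id, value) tuples,
    assumed already filtered to bi-directional adjacencies."""
    candidates = list(candidates)
    if not candidates:
        return None
    _, _, value = max(candidates, key=lambda c: (c[0], c[1]))
    return value
```

   For instance, with candidates at levels 2 and 3 present, a value
   originated at level 3 is preferred regardless of system IDs, and
   among several level-3 originators the numerically highest system ID
   prevails.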

   same fashion as southbound KV TIEs is used.

4.3.5.  Interactions with BFD

   RIFT MAY incorporate BFD [RFC5881] to react quickly to link failures.
   In such case following procedures are introduced:

      After RIFT 3-way hello adjacency convergence a BFD session MAY be
      formed automatically between the RIFT endpoints without further
      configuration using the exchanged discriminators.  The capability
      of the remote side to support BFD is carried on the LIEs.

      In case established BFD session goes Down after it was Up, RIFT
      adjacency SHOULD be re-initialized and subsequently started from
      Init after it sees a consecutive BFD Up.

      In case of parallel links between two nodes each link MAY run its
      own independent BFD session or they may share a session.

      In case RIFT changes link identifiers or BFD capability indication
      both the LIE as well as the BFD sessions SHOULD be brought down
      and back up again.

      Multiple RIFT instances MAY choose to share a single BFD session
      (in such case it is undefined what discriminators are used albeit
      RIFT CAN advertise the same link ID for the same interface in
      multiple instances and with that "share" the discriminators).

      BFD TTL follows [RFC5082].

9.2.15.  RIFT/encoding/LinkCapabilities

   link capabilities

9.2.15.1.  Requested Entries

   Name                  Value  Schema Description
                               Version
   bfd                       1     2.0 indicates that the link is
                                       supporting BFD
   v4_forwarding_capable     2     2.0 indicates whether the interface
                                       will support v4 forwarding. This
                                       MUST be set to true when LIEs
                                       from a v4 address are sent and
                                       MAY be set to true in LIEs on v6
                                       address. If v4 and v6 LIEs
                                       indicate contradicting
                                       information the behavior is
                                       unspecified.

9.2.16.  RIFT/encoding/LinkIDPair

   LinkID pair describes one of parallel links between two nodes

9.2.16.1.  Requested Entries

   Name                       Value  Schema Description
                                    Version
   local_id                       1     2.0 node-wide unique value for
                                            the local link
   remote_id                      2     2.0 received remote link ID for
                                            this link
   platform_interface_index      10     2.0 describes the local
                                            interface index of the link
   platform_interface_name       11     2.0 describes the local
                                            interface name
   trusted_outer_security_key    12     2.0 indication whether the link
                                            is secured, i.e. protected
                                            by outer key, absence of
                                            this element means no
                                            indication, undefined outer
                                            key means not secured
   bfd_up                        13     2.0 indication whether the link
                                            is protected by established
                                            BFD session

9.2.17.  RIFT/encoding/Neighbor

   neighbor structure

9.2.17.1.  Requested Entries

       Name       Value Schema Version Description
       originator     1            2.0 system ID of the originator
       remote_id      2            2.0 ID of remote side of the link

9.2.18.  RIFT/encoding/NodeCapabilities

   capabilities the node supports.  The schema may add to this field
   future capabilities to indicate whether it will support
   interpretation of future schema extensions on the same major
   revision.  Such fields MUST be optional and have an implicit or
   explicit false default value.  If a future capability changes route
   selection or generates blackholes if some nodes are not supporting it
   then a major version increment is unavoidable.

9.2.18.1.  Requested Entries

   Name                   Value  Schema Description
                                Version
   protocol_minor_version     1     2.0 must advertise supported minor
                                        version dialect that way
   flood_reduction            2     2.0 can this node participate in
                                        flood reduction
   hierarchy_indications      3     2.0 does this node restrict itself
                                        to be top-of-fabric or leaf only
                                        (in ZTP) and does it support
                                        leaf-2-leaf procedures

9.2.19.  RIFT/encoding/NodeFlags

   Flags the node sets

9.2.19.1.  Requested Entries

    Name     Value    Schema Description
                     Version
    overload     1       2.0 indicates that node is in overload, do not
                             transit traffic through it

4.3.6.  Fabric Bandwidth Balancing

   A well understood problem in fabrics is that in case of link losses
   it would be ideal to rebalance how much traffic is offered to
   switches in the next level based on the ingress and egress bandwidth
   they have.  Current attempts rely mostly on specialized traffic
   engineering via controller or leafs being aware of complete topology
   with according cost and complexity.

   RIFT can support a very light weight mechanism that can deal with the
   problem in an approximate way based on the fact that RIFT is

   loop-free.

9.2.20.  RIFT/encoding/NodeNeighborsTIEElement

   neighbor of a node

9.2.20.1.  Requested Entries

    Name      Value  Schema Description
                    Version
    level         1     2.0 level of neighbor
    cost          3     2.0
    link_ids      4     2.0 can carry description of multiple parallel
                            links in a TIE
    bandwidth     5     2.0 total bandwidth to neighbor, this will be
                            normally sum of the bandwidths of all the
                            parallel links

4.3.6.1.  Northbound Direction

   Every RIFT node SHOULD compute the amount of northbound bandwidth
   available through neighbors at higher level and modify distance
   received on default route from this neighbor.  Those different
   distances SHOULD be

   used to support weighted ECMP forwarding towards higher level when
   using default route.  We call such a distance Bandwidth Adjusted
   Distance or BAD.  This is best illustrated by a simple example.

                             .   100  x             100 100 MBits
                             .    |   x              |   |
                             .  +-+---+-+          +-+---+-+
                             .  |       |          |       |
                             .  |Spin111|          |Spin112|
                             .  +-+---+++          ++----+++
                             .    |x  ||           ||    ||
                             .    ||  |+---------------+ ||
                             .    ||  +---------------+| ||
                             .    ||               || || ||
                             .    ||               || || ||
                             .   -----All Links 10 MBit-------
                             .    ||               || || ||
                             .    ||               || || ||
                             .    ||  +------------+| || ||
                             .    ||  |+------------+ || ||
                             .    |x  ||              || ||
                             .  +-+---+++          +--++-+++
                             .  |       |          |       |
                             .  |Leaf111|          |Leaf112|
                             .  +-------+          +-------+

                      Figure 29: Balancing Bandwidth

   All links from Leafs in Figure 29 are assumed to be 10 MBit/s
   bandwidth while the uplinks one level further up are assumed to be
   100 MBit/s.  Further, in Figure 29 we assume that Leaf111 lost one of
   the parallel links to Spine 111 and with that wants to possibly push
   more traffic onto Spine 112.  Leaf 112 has equal bandwidth to Spine
   111 and Spine 112 but Spine 111 lost one of its uplinks.

   The local modification of the received default route distance from
   upper level is achieved by running a relatively simple algorithm
   where the bandwidth is weighted exponentially while the distance on
   the default route represents a multiplier for the bandwidth weight
   for easy operational adjustments.

9.2.21.  RIFT/encoding/NodeTIEElement

   Description of a node.

   It may occur multiple times in different TIEs but if either

   *  capabilities values do not match or

   *  flags values do not match or

   *  neighbors repeat with different values

   the behavior is undefined and a warning SHOULD be generated.
   Neighbors can be distributed across multiple TIEs however if the sets
   are disjoint.  Miscablings SHOULD be repeated in every node TIE,
   otherwise the behavior is undefined.

   @note: observe that absence of fields implies defined defaults

9.2.21.1.  Requested Entries

   Name            Value  Schema Description
                         Version
   level               1     2.0 level of the node
   neighbors           2     2.0 node's neighbors. If neighbor systemID
                                 repeats in other node TIEs of same node
                                 the behavior is undefined
   capabilities        3     2.0 capabilities of the node
   flags               4     2.0 flags of the node
   name                5     2.0 optional node name for easier
                                 operations
   pod                 6     2.0 PoD to which the node belongs
   miscabled_links    10     2.0 if any local links are miscabled, the
                                 indication is flooded

   On a node L use Node TIEs to compute for each non-overloaded
   northbound neighbor N three values:

      L_N_u: as sum of the

9.2.22.  RIFT/encoding/PacketContent

   content bandwidth available to N

      N_u: as sum of a RIFT packet

9.2.22.1.  Requested Entries

                   Name Value Schema Version Description
                   lie      1            2.0
                   tide     2            2.0
                   tire     3            2.0
                   tie      4            2.0

9.2.23.  RIFT/encoding/PacketHeader

   common RIFT packet header

9.2.23.1.  Requested Entries
   Name          Value  Schema Description
                       Version
   major_version     1     2.0 major version type the uplink bandwidth available on N

      T_N_u: as sum of protocol
   minor_version     2     2.0 minor version type L_N_u * OVERSUBSCRIPTION_CONSTANT + N_u

   For all T_N_u determine the according M_N_u as
   log_2(next_power_2(T_N_u)) and determine MAX_M_N_u as maximum value
   of protocol
   sender            3     2.0 all M_N_u.

   For each advertised default route from a node sending N modify the packet, in case advertised
   distance D to BAD = D * (1 + MAX_M_N_u - M_N_u) and use BAD instead
   of
                               LIE/TIRE/TIDE also distance D to weight balance default forwarding towards N.

   For the originator of it
   level             4     2.0 level example above a simple table of values will help the node sending
   understanding.  We assume the packet,
                               required on everything except LIEs. Lack
                               of presence on LIEs indicates
                               UNDEFINED_LEVEL and default route distance is used in ZTP
                               procedures.
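   The Bandwidth Adjusted Distance computation of Section 4.3.6.1 can be
   sketched as follows.  This is a non-normative illustration in Python
   with hypothetical helper names; `neighbors` maps a northbound
   neighbor to its (L_N_u, N_u) pair, `d` is the advertised default
   route distance and `k` the OVERSUBSCRIPTION_CONSTANT.

```python
# Non-normative sketch of the BAD computation in Section 4.3.6.1.
def next_power_2(x):
    # Smallest power of two greater than or equal to x.
    p = 1
    while p < x:
        p *= 2
    return p

def bandwidth_adjusted_distance(neighbors, d=1, k=1):
    # M_N_u = log_2(next_power_2(T_N_u)) with T_N_u = L_N_u * k + N_u.
    m = {n: next_power_2(l * k + u).bit_length() - 1
         for n, (l, u) in neighbors.items()}
    max_m = max(m.values())
    # BAD = D * (1 + MAX_M_N_u - M_N_u) per northbound neighbor.
    return {n: d * (1 + max_m - m_n) for n, m_n in m.items()}
```

   Applied to Leaf111 of Figure 29 (10 MBit/s towards Spine 111 after
   the link loss, 20 MBit/s towards Spine 112, uplink bandwidths 100 and
   200 MBit/s) this yields a BAD of 2 towards Spine 111 and 1 towards
   Spine 112, matching the values of Table 5.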

   advertised with D=1 everywhere and OVERSUBSCRIPTION_CONSTANT = 1.

               +---------+-----------+-------+-------+-----+
               | Node    | N         | T_N_u | M_N_u | BAD |
               +---------+-----------+-------+-------+-----+
               | Leaf111 | Spine 111 | 110   | 7     | 2   |
               +---------+-----------+-------+-------+-----+
               | Leaf111 | Spine 112 | 220   | 8     | 1   |
               +---------+-----------+-------+-------+-----+
               | Leaf112 | Spine 111 | 120   | 7     | 2   |
               +---------+-----------+-------+-------+-----+
               | Leaf112 | Spine 112 | 220   | 8     | 1   |
               +---------+-----------+-------+-------+-----+

                         Table 5: BAD Computation

   If a calculation produces a result exceeding the range of the type,
   e.g. bandwidth, the result is set to the highest possible value for
   that type.

   BAD is only computed for default routes.  A node MAY compute and use
   BAD for any disaggregated prefixes or other RIFT routes.  A node MAY
   use another algorithm than BAD to weight northbound traffic based on
   bandwidth given that the algorithm is distributed and un-synchronized
   and ultimately, its correct behavior does not depend on uniformity of
   balancing algorithms used in the fabric.  E.g. it is conceivable that
   leafs could use real time link loads gathered by analytics to change
   the amount of traffic assigned to each default route next hop.

   Observe further that a change in available bandwidth will only affect
   at maximum two levels down in the fabric, i.e. blast radius of
   bandwidth changes is contained no matter its height.

9.2.24.  RIFT/encoding/PrefixAttributes

9.2.24.1.  Requested Entries

   Name              Value  Schema Description
                           Version
   metric                2     2.0 distance of the prefix
   tags                  3     2.0 generic unordered set of route tags,
                                   can be redistributed to other
                                   protocols or used within the context
                                   of real time analytics
   monotonic_clock       4     2.0 monotonic clock for mobile addresses
   loopback              6     2.0 indicates if the interface is a node
                                   loopback
   directly_attached     7     2.0 indicates that the prefix is directly
                                   attached, i.e. should be routed to
                                   even if the node is in overload
   from_link            10     2.0 in case of locally originated
                                   prefixes, i.e. interface addresses
                                   this can describe which link the
                                   address belongs to

4.3.6.2.  Southbound Direction

   Due to its loop free properties a node CAN take during S-SPF into
   account the available bandwidth on the nodes in lower levels and
   modify the

   amount of traffic offered to next level's "southbound" nodes based on
   what it sees as the total achievable maximum flow through those
   nodes.  It is worth observing that such computations may work better
   if standardized but do not have to be.  As long as the packet keeps
   on heading south it will take one of the available paths and arrive
   at the intended destination.

4.3.7.  Label Binding

   A node MAY advertise on its LIEs a locally significant, downstream
   assigned, interface specific label.  One use of such label is a hop-
   by-hop encapsulation allowing to easily distinguish forwarding planes
   served by a multiplicity of RIFT instances.

9.2.25.  RIFT/encoding/PrefixTIEElement

   TIE carrying prefixes

9.2.25.1.  Requested Entries

    Name     Value  Schema Description
                   Version
    prefixes     1     2.0 prefixes with the associated attributes. If
                           the same prefix repeats in multiple TIEs of
                           same node behavior is unspecified

9.2.26.  RIFT/encoding/ProtocolPacket

   RIFT packet structure

9.2.26.1.  Requested Entries

                 Name    Value Schema Version Description
                 header      1            2.0
                 content     2            2.0

9.2.27.  RIFT/encoding/TIDEPacket

   TIDE with sorted TIE headers, if headers are unsorted, behavior is
   undefined

9.2.27.1.  Requested Entries

   Name        Value Schema Version Description
   start_range     1            2.0 first TIE header in the tide packet
   end_range       2            2.0 last TIE header in the tide packet
   headers         3            2.0 _sorted_ list of headers

4.3.8.  Leaf to Leaf Procedures

   RIFT can optionally allow special leaf East-West adjacencies under
   additional set of rules.  The leaf supporting those procedures MUST:

      advertise the LEAF_2_LEAF flag in node capabilities AND

      set the overload bit on all leaf's node TIEs AND

      flood only node's own north and south TIEs over E-W leaf
      adjacencies AND

      always use E-W leaf adjacency in both north as well as south
      computation AND

      install a discard route for any advertised aggregate in leaf's
      TIEs AND

      never form southbound adjacencies.

   This will allow the E-W leaf nodes to exchange traffic strictly for
   the prefixes advertised in each other's north prefix TIEs (since the
   southbound computation will find the reverse direction in the other
   node's TIE and

9.2.28.  RIFT/encoding/TIEElement

   single element in a TIE. enum `common.TIETypeType` in TIEID indicates
   which elements MUST be present in the TIEElement.  In case of
   mismatch the unexpected elements MUST be ignored.  In case of lack of
   expected element an error MUST be reported and the TIE MUST
   be ignored.

   This type can be extended with new optional elements for new
   `common.TIETypeType` values without breaking the major but if it install its north prefixes).
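   The element-presence rule above can be sketched as follows.  This is
   a non-normative illustration; the dictionary-based TIE
   representation and the `TIE_TYPE_TO_ELEMENT` mapping names are
   assumptions of this sketch, not the schema's actual representation.

```python
# Illustrative mapping from TIE type to the element expected in the
# TIEElement; the names are assumptions for this sketch only.
TIE_TYPE_TO_ELEMENT = {
    "NodeTIEType": "node",
    "PrefixTIEType": "prefixes",
    "PositiveDisaggregationPrefixTIEType":
        "positive_disaggregation_prefixes",
    "NegativeDisaggregationPrefixTIEType":
        "negative_disaggregation_prefixes",
    "ExternalPrefixTIEType": "external_prefixes",
    "KeyValueTIEType": "keyvalues",
}

def validate_tie_element(tietype, element):
    """Return the expected element of a TIE or None if the TIE must
    be ignored.  `element` maps optional element names to content."""
    expected = TIE_TYPE_TO_ELEMENT[tietype]
    if element.get(expected) is None:
        # lack of the expected element: an error is reported and the
        # TIE is ignored
        return None
    # unexpected elements present alongside are simply ignored
    return element[expected]
```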

   This type can be extended with new optional elements for new
   `common.TIETypeType` values without breaking the major but if it is
   necessary to understand whether all nodes support the new type a
   node capability must be added as well.

9.2.28.1.  Requested Entries

   Name                              Value  Schema Description
                                            Version
   node                                  1     2.0 used in case of
                                                   enum common.
                                                   TIETypeType.
                                                   NodeTIEType
   prefixes                              2     2.0 used in case of
                                                   enum common.
                                                   TIETypeType.
                                                   PrefixTIEType
   positive_disaggregation_prefixes      3     2.0 positive prefixes
                                                   (always southbound).
                                                   It MUST NOT be
                                                   advertised within a
                                                   North TIE and
                                                   ignored otherwise
   negative_disaggregation_prefixes      5     2.0 transitive, negative
                                                   prefixes (always
                                                   southbound) which
                                                   MUST be aggregated
                                                   and propagated
                                                   according to the
                                                   specification
                                                   southwards towards
                                                   lower levels to heal
                                                   pathological upper
                                                   level partitioning,
                                                   otherwise blackholes
                                                   may occur in
                                                   multiplane fabrics.
                                                   It MUST NOT be
                                                   advertised within a
                                                   North TIE.
   external_prefixes                     6     2.0 externally
                                                   reimported prefixes
   positive_external_disaggregation_     7     2.0 positive external
   prefixes                                        disaggregated
                                                   prefixes (always
                                                   southbound).  It
                                                   MUST NOT be
                                                   advertised within a
                                                   North TIE and
                                                   ignored otherwise
   keyvalues                             9     2.0 Key-Value store
                                                   elements

4.3.9.  Address Family and Multi Topology Considerations

   Multi-Topology (MT)[RFC5120] and Multi-Instance (MI)[RFC8202] are
   used today in link-state routing protocols to support several
   domains on the same physical topology.  RIFT supports this
   capability by carrying transport ports in the LIE protocol
   exchanges.  Multiplexing of LIEs can be achieved by either choosing
   varying multicast addresses or ports on the same address.
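   The multiplexing described above can be sketched as below.  This is
   a non-normative illustration only; the base multicast address and
   port values are invented for the sketch and are not the protocol's
   assigned defaults.

```python
# Illustrative only: derive a distinct LIE rendezvous (multicast
# address, port) per RIFT instance, either by varying the port on one
# address or by varying the address itself.
BASE_V4_LIE_ADDRESS = "224.0.0.120"   # assumed value, illustrative
BASE_LIE_PORT = 914                   # assumed value, illustrative

def lie_rendezvous(instance_id, vary_port=True):
    if vary_port:
        # same multicast address, different port per instance
        return (BASE_V4_LIE_ADDRESS, BASE_LIE_PORT + instance_id)
    # different multicast address, same port per instance
    octets = BASE_V4_LIE_ADDRESS.split(".")
    octets[-1] = str(int(octets[-1]) + instance_id)
    return (".".join(octets), BASE_LIE_PORT)
```

   Either variant keeps the LIE exchanges of different instances
   separated on the shared link; which variant is chosen is a local
   deployment decision.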

   BFD interactions in Section 4.3.5 are implementation dependent when
   multiple RIFT instances run on the same link.

4.3.10.  Reachability of Internal Nodes in the Fabric

   RIFT does not precondition that its nodes have reachable addresses
   albeit for operational purposes this is clearly desirable.  Under
   normal operating conditions this can be easily achieved by e.g.
   injecting the node's loopback address into North and South Prefix
   TIEs or other implementation specific mechanisms.

   Things get more interesting in case a node loses all its northbound
   adjacencies but is not at the top of the fabric.  That is outside
   the scope of this document and may be covered in a separate
   document.

4.3.11.  One-Hop Healing of Levels with East-West Links

   Based on the rules defined in Section 4.2.4, Section 4.2.3.8 and
   given presence of E-W links, RIFT can provide a one-hop protection
   of nodes that lost all their northbound links or in other complex
   link set failure scenarios except at Top-of-Fabric where the links
   are used exclusively to flood topology information in multi-plane
   designs.  Section 5.4 explains the resulting behavior based on one
   such example.

9.2.29.  RIFT/encoding/TIEHeader

   Header of a TIE.

   @note: TIEID space is a total order achieved by comparing the
   elements in sequence defined and comparing each value as an
   unsigned integer of according length.

   @note: After sequence number the lifetime received on the envelope
   must be used for comparison before further fields.

   @note: `origination_time` and `origination_lifetime` are
   disregarded for comparison purposes and carried purely for
   debugging/security purposes if present.

9.2.29.1.  Requested Entries

   Name                 Value  Schema Description
                               Version
   tieid                    2     2.0 ID of the tie
   seq_nr                   3     2.0 sequence number of the tie
   origination_time        10     2.0 absolute timestamp when the TIE
                                      was generated.  This can be used
                                      on fabrics with synchronized
                                      clock to prevent lifetime
                                      modification attacks.
   origination_lifetime    12     2.0 original lifetime when the TIE
                                      was generated.  This can be used
                                      on fabrics with synchronized
                                      clock to prevent lifetime
                                      modification attacks.
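   The comparison notes above can be sketched as follows.  This is a
   non-normative illustration; the flattened argument list and the
   `LIFETIME_DIFF2IGNORE` value are assumptions of the sketch, and a
   full implementation would compare `seq_nr` using the sequence
   number arithmetic of Appendix A rather than plain integers.

```python
# Sketch of TIE header comparison: sequence number first, then the
# remaining lifetime received on the security envelope;
# origination_time/origination_lifetime are disregarded.
LIFETIME_DIFF2IGNORE = 300  # illustrative value only

def compare_tie_headers(seq_a, life_a, seq_b, life_b):
    """Return 1 if header A is newer, -1 if B is newer, 0 if equal."""
    if seq_a != seq_b:
        # plain comparison here; Appendix A arithmetic in practice
        return 1 if seq_a > seq_b else -1
    # lifetimes differing by less than `lifetime_diff2ignore` are
    # considered EQUAL
    if abs(life_a - life_b) < LIFETIME_DIFF2IGNORE:
        return 0
    return 1 if life_a > life_b else -1
```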

9.2.30.  RIFT/encoding/TIEHeaderWithLifeTime

   Header of a TIE as described in TIRE/TIDE.

9.2.30.1.  Requested Entries

   Name               Value  Schema Description
                             Version
   header                 1     2.0
   remaining_lifetime     2     2.0 remaining lifetime that expires
                                    down to 0 just like in ISIS.  TIEs
                                    with lifetimes differing by less
                                    than `lifetime_diff2ignore` MUST
                                    be considered EQUAL.

4.4.  Security

4.4.1.  Security Model

   An inherent property of any security and ZTP architecture is the
   resulting trade-off in regard to
   integrity verification of the information distributed through the
   fabric vs. necessary provisioning and auto-configuration.  At a
   minimum, in all approaches, the security of an established
   adjacency can be ensured.  The stricter the security model the more
   provisioning must take over the role of ZTP.

9.2.31.  RIFT/encoding/TIEID

   ID of a TIE.

   @note: TIEID space is a total order achieved by comparing the
   elements in sequence defined and comparing each value as an
   unsigned integer of according length.

9.2.31.1.  Requested Entries

      Name       Value Schema Version Description
      direction      1            2.0 direction of TIE
      originator     2            2.0 indicates originator of the TIE
      tietype        3            2.0 type of the tie
      tie_nr         4            2.0 number of the tie

9.2.32.  RIFT/encoding/TIEPacket

   TIE packet

9.2.32.1.  Requested Entries

                 Name    Value Schema Version Description
                 header      1            2.0
                 element     2            2.0

9.2.33.  RIFT/encoding/TIREPacket

   TIRE packet

9.2.33.1.  Requested Entries

                 Name    Value Schema Version Description
                 headers     1            2.0

10.  Acknowledgments

   A new routing protocol in its complexity is not a product of a
   parent but of a village as the author list shows already.  However,
   many more people provided input, fine-combed the specification
   based on their experience in design or implementation.

   This section will make an inadequate attempt in recording their
   contribution.

   Many thanks to Naiming Shen for some of the early discussions
   around the topic of using IGPs for routing in topologies related to
   Clos.  Russ White to be especially acknowledged for the key
   conversation on epistemology that allowed to tie current
   asynchronous distributed systems theory results to a modern
   protocol design presented here.  Adrian Farrel, Joel Halpern,
   Jeffrey Zhang, Krzysztof Szarkowicz, Nagendra Kumar provided
   thoughtful comments that improved the readability of the document
   and found good amount of corners where the light failed to shine.
   Kris Price was first to mention single router, single arm default
   considerations.  Jeff Tantsura helped out with some initial
   thoughts on BFD interactions while Jeff Haas corrected several
   misconceptions about BFD's finer points.  Artur Makutunowicz
   pointed out many possible improvements and acted as sounding board
   in regard to modern protocol implementation techniques RIFT is
   exploring.  Barak Gafni formalized first time clearly the problem
   of partitioned spine and fallen leafs on a (clean) napkin in
   Singapore that led to the very important part of the specification
   centered around multiple Top-of-Fabric planes and negative
   disaggregation.  Igor Gashinsky and others shared many thoughts on
   problems encountered in design and operation of large-scale data
   center fabrics.  Xu Benchong found a delicate error in the flooding
   procedures while implementing.

   The most security conscious operators will want to have full
   control over which port on which router/switch is connected to the
   respective port on the "other side", which we will call the "port-
   association model" (PAM) achievable e.g. by configuring on each
   port pair a designated shared key or pair of related private/public
   keys.  In secure data center locations, operators may want to
   control which router/switch is connected to which other router/
   switch only or choose a "node-association model" (NAM) which
   allows, for example, simplified port sparing.  In an even more
   relaxed environment, an operator may only be concerned that the
   router/switches share credentials ensuring that they belong to this
   particular data center network hence allowing the flexible sparing
   of whole routers/switches.  We will define that case as the
   "fabric-association model" (FAM), equivalent to using a shared
   secret for the whole fabric.  Such flexibility may make sense for
   leaf nodes such as servers where the addition and swapping of
   servers is more frequent than the rest of the data center network.
   Generally, leafs of the fabric tend to be less trusted than
   switches.  The different models could be mixed throughout the
   fabric if the benefits outweigh the cost of increased complexity in
   provisioning.

   In each of the above cases, some configuration mechanism is needed
   to allow the operator to specify which connections are allowed, and
   some mechanism is needed to:

   a.  specify the according level in the fabric,

   b.  discover and report missing connections,

   c.  discover and report unexpected connections, and prevent such
       adjacencies from forming.
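   A minimal non-normative sketch of points (b) and (c), under the
   assumption that the expected cabling is provisioned as a set of
   allowed (local port, remote system-id, remote port) tuples; the
   tuple shape and names are illustrative only:

```python
def audit_connections(allowed, observed):
    """Report missing and unexpected connections.  `allowed` and
    `observed` are sets of (local_port, remote_system_id, remote_port)
    tuples; an unexpected connection would additionally be prevented
    from forming an adjacency."""
    missing = allowed - observed      # provisioned but not seen
    unexpected = observed - allowed   # seen but not provisioned
    return missing, unexpected
```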

   On the more relaxed configuration side of the spectrum, operators
   might only configure the level of each switch, but don't explicitly
   configure which connections are allowed.  In this case, RIFT will
   only allow adjacencies to come up between nodes that are in
   adjacent levels.  The operators with the lowest security
   requirements may not use any configuration to specify which
   connections are allowed.  Such fabrics could rely fully on ZTP for
   each router/switch to discover its level and would only allow
   adjacencies between adjacent levels to come up.  Figure 30
   illustrates the tradeoffs inherent in the different security
   models.

   Ultimately, some level of verification of the link quality may be
   required before an adjacency is allowed to be used for forwarding.
   For example, an implementation may require that a BFD session comes
   up before advertising the adjacency.

   For the above outlined cases, RIFT has two approaches to enforce
   that a local port is connected to the correct port on the correct
   remote router/switch.  One approach is to piggy-back on RIFT's
   authentication mechanism.  Assuming the provisioning model (e.g.
   the YANG model) is flexible enough, operators can choose to
   provision a unique authentication key for:

   a.  each pair of ports in "port-association model" or

   b.  each pair of switches in "node-association model" or

   c.  each pair of levels or

   d.  the entire fabric in "fabric-association model".

   The other approach is to rely on the system-id, port-id and level
   fields in the LIE message to validate an adjacency against the
   configured expected cabling topology, and optionally introduce some
   new rules in the FSM to allow the adjacency to come up if the
   expectations are met.

                    ^                 /\                  |
                   /|\               /  \                 |
                    |               /    \                |
                    |              / PAM  \               |
                Increasing        /        \          Increasing
                Integrity        +----------+         Flexibility
                    &           /    NAM     \            &
               Increasing      +--------------+         Less
               Provisioning   /      FAM       \     Configuration
                    |        +------------------+         |
                    |       / Level Provisioning \        |
                    |      +----------------------+      \|/
                    |     /    Zero Configuration  \      v
                         +--------------------------+

                   Figure 30: Security Mechanism Model

11.  References

11.1.  Normative References

   [ISO10589] ISO "International Organization for Standardization",
              "Intermediate system to Intermediate system intra-domain
              routeing information exchange protocol for use in
              conjunction with the protocol for providing the
              connectionless-mode Network Service (ISO 8473), ISO/IEC
              10589:2002, Second Edition.", Nov 2002.

   [RFC1982]  Elz, R. and R. Bush, "Serial Number Arithmetic",
              RFC 1982, DOI 10.17487/RFC1982, August 1996,
              <https://www.rfc-editor.org/info/rfc1982>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2328]  Moy, J., "OSPF Version 2", STD 54, RFC 2328,
              DOI 10.17487/RFC2328, April 1998,
              <https://www.rfc-editor.org/info/rfc2328>.

   [RFC2365]  Meyer, D., "Administratively Scoped IP Multicast",
              BCP 23, RFC 2365, DOI 10.17487/RFC2365, July 1998,
              <https://www.rfc-editor.org/info/rfc2365>.

   [RFC4271]  Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A
              Border Gateway Protocol 4 (BGP-4)", RFC 4271,
              DOI 10.17487/RFC4271, January 2006,
              <https://www.rfc-editor.org/info/rfc4271>.

   [RFC4291]  Hinden, R. and S. Deering, "IP Version 6 Addressing
              Architecture", RFC 4291, DOI 10.17487/RFC4291, February
              2006, <https://www.rfc-editor.org/info/rfc4291>.

   [RFC5082]  Gill, V., Heasley, J., Meyer, D., Savola, P., Ed., and C.
              Pignataro, "The Generalized TTL Security Mechanism
              (GTSM)", RFC 5082, DOI 10.17487/RFC5082, October 2007,
              <https://www.rfc-editor.org/info/rfc5082>.

   [RFC5120]  Przygienda, T., Shen, N., and N. Sheth, "M-ISIS: Multi
              Topology (MT) Routing in Intermediate System to
              Intermediate Systems (IS-ISs)", RFC 5120,
              DOI 10.17487/RFC5120, February 2008,
              <https://www.rfc-editor.org/info/rfc5120>.

   [RFC5303]  Katz, D., Saluja, R., and D. Eastlake 3rd, "Three-Way
              Handshake for IS-IS Point-to-Point Adjacencies",
              RFC 5303, DOI 10.17487/RFC5303, October 2008,
              <https://www.rfc-editor.org/info/rfc5303>.

   [RFC5549]  Le Faucheur, F. and E. Rosen, "Advertising IPv4 Network
              Layer Reachability Information with an IPv6 Next Hop",
              RFC 5549, DOI 10.17487/RFC5549, May 2009,
              <https://www.rfc-editor.org/info/rfc5549>.

   [RFC5709]  Bhatia, M., Manral, V., Fanto, M., White, R., Barnes, M.,
              Li, T., and R. Atkinson, "OSPFv2 HMAC-SHA Cryptographic
              Authentication", RFC 5709, DOI 10.17487/RFC5709, October
              2009, <https://www.rfc-editor.org/info/rfc5709>.

   [RFC5881]  Katz, D. and D. Ward, "Bidirectional Forwarding Detection
              (BFD) for IPv4 and IPv6 (Single Hop)", RFC 5881,
              DOI 10.17487/RFC5881, June 2010,
              <https://www.rfc-editor.org/info/rfc5881>.

   [RFC5905]  Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch,
              "Network Time Protocol Version 4: Protocol and Algorithms
              Specification", RFC 5905, DOI 10.17487/RFC5905, June
              2010, <https://www.rfc-editor.org/info/rfc5905>.

   [RFC7752]  Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A.,
              and S. Ray, "North-Bound Distribution of Link-State and
              Traffic Engineering (TE) Information Using BGP",
              RFC 7752, DOI 10.17487/RFC7752, March 2016,
              <https://www.rfc-editor.org/info/rfc7752>.

   [RFC7987]  Ginsberg, L., Wells, P., Decraene, B., Przygienda, T.,
              and H. Gredler, "IS-IS Minimum Remaining Lifetime",
              RFC 7987, DOI 10.17487/RFC7987, October 2016,
              <https://www.rfc-editor.org/info/rfc7987>.

   [RFC8200]  Deering, S. and R. Hinden, "Internet Protocol, Version 6
              (IPv6) Specification", STD 86, RFC 8200,
              DOI 10.17487/RFC8200, July 2017,
              <https://www.rfc-editor.org/info/rfc8200>.

   [RFC8202]  Ginsberg, L., Previdi, S., and W. Henderickx, "IS-IS
              Multi-Instance", RFC 8202, DOI 10.17487/RFC8202, June
              2017, <https://www.rfc-editor.org/info/rfc8202>.

   [RFC8402]  Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L.,
              Decraene, B., Litkowski, S., and R. Shakir, "Segment
              Routing Architecture", RFC 8402, DOI 10.17487/RFC8402,
              July 2018, <https://www.rfc-editor.org/info/rfc8402>.

   [RFC8505]  Thubert, P., Ed., Nordmark, E., Chakrabarti, S., and C.
              Perkins, "Registration Extensions for IPv6 over Low-
              Power Wireless Personal Area Network (6LoWPAN) Neighbor
              Discovery", RFC 8505, DOI 10.17487/RFC8505, November
              2018, <https://www.rfc-editor.org/info/rfc8505>.

   [thrift]   Apache Software Foundation, "Thrift Interface Description
              Language", <https://thrift.apache.org/docs/idl>.

11.2.  Informative References

   [CLOS]     Yuan, X., "On Nonblocking Folded-Clos Networks in
              Computer Communication Environments", IEEE International
              Parallel & Distributed Processing Symposium, 2011.

   [DIJKSTRA] Dijkstra, E., "A Note on Two Problems in Connexion with
              Graphs", Journal Numer. Math. , 1959.

   [DOT]      Ellson, J. and L. Koutsofios, "Graphviz: open source
              graph drawing tools", Springer-Verlag , 2001.

   [DYNAMO]   De Candia et al., G., "Dynamo: amazon's highly available
              key-value store", ACM SIGOPS symposium on Operating
              systems principles (SOSP '07), 2007.

   [EPPSTEIN] Eppstein, D., "Finding the k-Shortest Paths", 1997.

   [EUI64]    IEEE, "Guidelines for Use of Extended Unique Identifier
              (EUI), Organizationally Unique Identifier (OUI), and
              Company ID (CID)", IEEE EUI,
              <http://standards.ieee.org/develop/regauth/tut/eui.pdf>.

   [FATTREE]  Leiserson, C., "Fat-Trees: Universal Networks for
              Hardware-Efficient Supercomputing", 1985.

   [IEEEstd1588]
              IEEE, "IEEE Standard for a Precision Clock
              Synchronization Protocol for Networked Measurement and
              Control Systems", IEEE Standard 1588,
              <https://ieeexplore.ieee.org/document/4579760/>.

   [IEEEstd8021AS]
              IEEE, "IEEE Standard for Local and Metropolitan Area
              Networks - Timing and Synchronization for Time-Sensitive
              Applications in Bridged Local Area Networks",
              IEEE Standard 802.1AS,
              <https://ieeexplore.ieee.org/document/5741898/>.

   [ISO10589-Second-Edition]
              International Organization for Standardization,
              "Intermediate system to Intermediate system intra-domain
              routeing information exchange protocol for use in
              conjunction with the protocol for providing the
              connectionless-mode Network Service (ISO 8473)",
              Nov 2002.

   [MAKSIC2013]
              Maksic et al., N., "Improving Utilization of Data Center
              Networks", IEEE Communications Magazine, Nov 2013.

   [RFC0826]  Plummer, D., "An Ethernet Address Resolution Protocol:
              Or Converting Network Protocol Addresses to 48.bit
              Ethernet Address for Transmission on Ethernet Hardware",
              STD 37, RFC 826, DOI 10.17487/RFC0826, November 1982,
              <https://www.rfc-editor.org/info/rfc826>.

   [RFC2131]  Droms, R., "Dynamic Host Configuration Protocol",
              RFC 2131, DOI 10.17487/RFC2131, March 1997,
              <https://www.rfc-editor.org/info/rfc2131>.

   [RFC3626]  Clausen, T., Ed. and P. Jacquet, Ed., "Optimized Link
              State Routing Protocol (OLSR)", RFC 3626,
              DOI 10.17487/RFC3626, October 2003,
              <https://www.rfc-editor.org/info/rfc3626>.

   [RFC4655]  Farrel, A., Vasseur, J., and J. Ash, "A Path Computation
              Element (PCE)-Based Architecture", RFC 4655,
              DOI 10.17487/RFC4655, August 2006,
              <https://www.rfc-editor.org/info/rfc4655>.

   [RFC4861]  Narten, T., Nordmark, E., Simpson, W., and H. Soliman,
              "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861,
              DOI 10.17487/RFC4861, September 2007,
              <https://www.rfc-editor.org/info/rfc4861>.

   [RFC4862]  Thomson, S., Narten, T., and T. Jinmei, "IPv6 Stateless
              Address Autoconfiguration", RFC 4862,
              DOI 10.17487/RFC4862, September 2007,
              <https://www.rfc-editor.org/info/rfc4862>.

   [RFC6518]  Lebovitz, G. and M. Bhatia, "Keying and Authentication
              for Routing Protocols (KARP) Design Guidelines",
              RFC 6518, DOI 10.17487/RFC6518, February 2012,
              <https://www.rfc-editor.org/info/rfc6518>.

   [RFC7855]  Previdi, S., Ed., Filsfils, C., Ed., Decraene, B.,
              Litkowski, S., Horneffer, M., and R. Shakir, "Source
              Packet Routing in Networking (SPRING) Problem Statement
              and Requirements", RFC 7855, DOI 10.17487/RFC7855, May
              2016, <https://www.rfc-editor.org/info/rfc7855>.

   [RFC7938]  Lapukhov, P., Premji, A., and J. Mitchell, Ed., "Use of
              BGP for Routing in Large-Scale Data Centers", RFC 7938,
              DOI 10.17487/RFC7938, August 2016,
              <https://www.rfc-editor.org/info/rfc7938>.

   [RFC8415]  Mrugalski, T., Siodelski, M., Volz, B., Yourtchenko, A.,
              Richardson, M., Jiang, S., Lemon, T., and T. Winters,
              "Dynamic Host Configuration Protocol for IPv6 (DHCPv6)",
              RFC 8415, DOI 10.17487/RFC8415, November 2018,
              <https://www.rfc-editor.org/info/rfc8415>.

   [VAHDAT08] Al-Fares, M., Loukissas, A., and A. Vahdat, "A Scalable,
              Commodity Data Center Network Architecture", SIGCOMM ,
              2008.

   [Wikipedia]
              Wikipedia,
              "https://en.wikipedia.org/wiki/Serial_number_arithmetic",
              2016.

4.4.2.  Security Mechanisms

   RIFT Security goals are to ensure authentication, message integrity
   and prevention of replay attacks.  Low processing overhead and
   efficient messaging are also a goal.  Message confidentiality is a
   non-goal.

   The model in the previous section allows a range of security key
   types that are analogous to the various security association
   models.  PAM and NAM allow security associations at the port or
   node level using symmetric or asymmetric keys that are pre-
   installed.  FAM argues for security associations to be applied only
   at a group level or to be refined once the topology has been
   established.  RIFT does not specify how security keys are installed
   or updated; it specifies how the key can be used to achieve goals.

   The protocol has provisions for "weak" nonces to prevent replay
   attacks and includes authentication mechanisms comparable to
   [RFC5709] and [RFC7987].

4.4.3.  Security Envelope

   RIFT MUST be carried in a mandatory secure envelope illustrated in
   Figure 31.  Any value in the packet following a security
   fingerprint MUST be used only after the according fingerprint has
   been validated.
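   The validate-before-use rule can be sketched as below.  This is a
   non-normative illustration: the envelope layout is heavily
   simplified to a single 32-byte HMAC-SHA256 fingerprint followed by
   the content it covers, and the key handling is an assumption of the
   sketch.

```python
import hashlib
import hmac

def parse_guarded(packet, key):
    """Any value following a security fingerprint is used only after
    that fingerprint has been validated.  Simplified layout: 32-byte
    SHA-256 HMAC fingerprint, then the content it covers."""
    fingerprint, content = packet[:32], packet[32:]
    computed = hmac.new(key, content, hashlib.sha256).digest()
    if not hmac.compare_digest(fingerprint, computed):
        return None        # invalid fingerprint: content is never used
    return content         # only now safe to deserialize further
```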

Appendix A.  Sequence Number Binary Arithmetic

   The only reasonably reference to a cleaner than [RFC1982] sequence
   number solution is given in [Wikipedia].  It basically converts the
   problem into two complement's arithmetic.  Assuming a straight two
   complement's substractions on the bit-width of the sequence number
   the according >: and =: relations are defined as:

      U_1, U_2 are 12-bits aligned unsigned version number

      D_f is  ( U_1 - U_2 ) interpreted as two complement signed 12-bits
      D_b is  ( U_2 - U_1 ) interpreted as two complement signed 12-bits

      U_1 >: U_2 IIF D_f > 0 AND D_b < 0
      U_1 =: U_2 IIF D_f = 0

   The >: relationsship is symmetric but not transitive.  Observe that
   this leaves skip the case checking of the numbers having maximum two complement
   distance, e.g. ( envelope's
   integrity.

       0 and 0x800 ) undefined in our 12-bits case since
   D_f and D_b are both -0x7ff.

   A simple example of the relationship in case of 3-bit arithmetic
   follows as table indicating D_f/D_b values and then the relationship
   of U_1 to U_2:

           U2 / U1                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0        +/+  +/-  +/-  +/-  -/-  -/+  -/+  -/+ 1        -/+  +/+  +/-  +/-  +/-  -/-  -/+  -/+ 2        -/+  -/+  +/+  +/-  +/-  +/-  -/-  -/+ 3        -/+  -/+  -/+  +/+  +/-  +/-  +/-  -/- 4        -/-  -/+  -/+  -/+  +/+  +/-  +/-  +/- 5        +/-  -/-  -/+  -/+  -/+  +/+  +/-  +/- 6        +/-  +/-  -/-  -/+  -/+  -/+  +/+  +/- 7        +/-  +/-  +/-  -/-  -/+  -/+  -/+  +/+

          U2 / U1 8 9 0 1 2 3 4 5 6 7 8 9 0         =    >    >    >    ?    <    <    < 1         <    =    >    >    >    ?    <    <
          2         <    <    =    >    >    >    ?    <
          3         <    <    <    =    >    >    >    ?
          4         ?    <    <    <    =    >    >    >
          5         >    ?    <    <    <    =    >    >
          6         >    >    ?    <    <    <    =    >
          7         >    >    >    ?    <    <    <    =
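   The arithmetic above translates directly into code; a minimal non-
   normative sketch in Python, assuming the 12-bit width used in the
   definitions:

```python
BITS = 12
MOD = 1 << BITS            # 4096
HALF = 1 << (BITS - 1)     # 2048

def to_signed(v):
    """Interpret an unsigned BITS-wide value as two's complement."""
    return v - MOD if v >= HALF else v

def seq_gt(u1, u2):
    """U_1 >: U_2  IIF  D_f > 0 AND D_b < 0."""
    d_f = to_signed((u1 - u2) % MOD)
    d_b = to_signed((u2 - u1) % MOD)
    return d_f > 0 and d_b < 0

def seq_eq(u1, u2):
    """U_1 =: U_2  IIF  D_f = 0."""
    return (u1 - u2) % MOD == 0
```

   Note the undefined maximum-distance case: for `seq_gt(0, 0x800)`
   both differences interpret as -0x800, so neither number compares
   greater than the other.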

Appendix B.  Information Elements Schema

   This section introduces the schema for information elements.  The IDL
   is Thrift [thrift].

   On schema changes that

   1.   change field numbers or
   2.   add new *required* fields or

   3.   remove any fields or

   4.   change lists into sets, unions into structures or

   5.   change multiplicity of fields or

   6.   changes name of any field or type or

   7.   change datatypes of any field or

   8.   adds, changes or removes a default value

      UDP Header:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |           Source Port         |       RIFT destination port   |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |           UDP Length          |        UDP Checksum           |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

      Outer Security Envelope Header:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |           RIFT MAGIC          |         Packet Number         |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |    Reserved   |  RIFT Major   | Outer Key ID  | Fingerprint   |
      |               |    Version    |               |    Length     |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      ~       Security Fingerprint covers all following content       ~
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | Weak Nonce Local              | Weak Nonce Remote             |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |            Remaining TIE Lifetime (all 1s in case of any *existing* field
        or

   9.   removes or changes any defined constant or constant LIE)     |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

      TIE Origin Security Envelope Header:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |              TIE Origin Key ID                |  Fingerprint  |
      |                                               |    Length     |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      ~       Security Fingerprint covers all following content       ~
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

      Serialized RIFT Model Object
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                                                               |
      ~                Serialized RIFT Model Object                   ~
      |                                                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                       Figure 31: Security Envelope

   RIFT MAGIC:  16 bits.  Constant value 0xA1F7 that allows RIFT
      packets to be classified independently of the UDP port used.

   Packet Number:  16 bits.  An optional, per-packet-type,
      monotonically increasing number rolling over using the sequence
      number arithmetic defined in Appendix A.  A node SHOULD set the
      number correctly on subsequent packets or otherwise MUST set the
      value to `undefined_packet_number` as provided in the schema.
      This number can be used to detect losses and misordering in
      flooding, either for operational purposes or so that an
      implementation can adjust its flooding behavior to the current
      link or buffer quality.  This number MUST NOT be used to discard
      packets or to validate their correctness.

   If a change removes or changes any *existing* field, removes or
   changes any defined constant or constant value, or changes any
   enumeration type except extending `common.TIEType` (use of
   enumeration types is generally discouraged), the major version of
   the schema MUST increase.  All other changes MUST increase the
   minor version within the same major version.  Observe however that
   introducing an optional field does not cause a major version
   increase even if the fields inside the structure are optional with
   defaults.

   All signed integers (as forced by Thrift [thrift] support) must be
   cast for internal purposes to equivalent unsigned values without
   discarding the signedness bit.  An implementation SHOULD try to
   avoid using the signedness bit when generating values.

   The schema is normative.

B.1.  common.thrift

/** @note MUST be interpreted in implementation as unsigned 64 bits.
 *        The implementation SHOULD NOT use the MSB.
 */
typedef i64      SystemIDType
typedef i32      IPv4Address
/** this has to be long enough to accommodate prefix */
typedef binary   IPv6Address
/** @note MUST be interpreted in implementation as unsigned */
typedef i16      UDPPortType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      TIENrType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      MTUSizeType
/** @note MUST be interpreted in implementation as unsigned rolling
          over number */
typedef i16      SeqNrType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      LifeTimeInSecType
/** @note MUST be interpreted in implementation as unsigned */
typedef i8       LevelType
/** optional, recommended monotonically increasing number _per packet
    type per adjacency_ that can be used to detect
    losses/misordering/restarts.  This will be moved into the envelope
    in the future.
    @note MUST be interpreted in implementation as unsigned rolling
          over number */
typedef i16      PacketNumberType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      PodType
/** @note MUST be interpreted in implementation as unsigned.  This is
          carried in the security envelope and MUST fit into 8 bits. */
typedef i8       VersionType
/** @note MUST be interpreted in implementation as unsigned */
typedef i16      MinorVersionType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      MetricType
/** @note MUST be interpreted in implementation as unsigned and
          unstructured */
typedef i64      RouteTagType
/** @note MUST be interpreted in implementation as unstructured label
          value */
typedef i32      LabelType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      BandwithInMegaBitsType
/** @note Key Value ID type */
typedef string   KeyIDType
/** node local, unique identification for a link (interface/tunnel
  * etc. Basically anything RIFT runs on).  This is kept at 32 bits
  * so it aligns with BFD [RFC5880] discriminator size.
  */
typedef i32    LinkIDType
typedef string KeyNameType
typedef i8     PrefixLenType
/** timestamp in seconds since the epoch */
typedef i64    TimestampInSecsType

   RIFT Major Version:  8 bits.  It allows checking whether protocol
      versions are compatible, i.e. whether the serialized object can
      be decoded at all.  An implementation MUST drop packets with an
      unexpected value and MAY report a problem.  The value MUST be
      the same as in the encoded model object, otherwise the packet is
      dropped.

   Outer Key ID:  8 bits to allow key rollovers.  This implies the key
      type and algorithm used.  Value 0 means that no valid
      fingerprint was computed.  This key ID's scope is local to the
      nodes on both ends of the adjacency.

   TIE Origin Key ID:  24 bits.  This implies the key type and
      algorithm used.  Value 0 means that no valid fingerprint was
      computed.  This key ID's scope is global to the RIFT instance
      since it implies the originator of the TIE, so the contained
      object does not have to be de-serialized to obtain it.

   Length of Fingerprint:  8 bits.  Length in 32-bit multiples of the
      following fingerprint, not including the lifetime or the weak
      nonces.  It allows navigating the structure when an unknown key
      type is present.  To clarify a common corner case: when this
      value is set to 0 it signifies an empty (0 bytes long) security
      fingerprint.

   Security Fingerprint:  32 bits * Length of Fingerprint.  This is a
      signature computed over all data following it.  If the
      significant bits of the fingerprint are fewer than the 32-bit
      padded length, then the significant bits MUST be left aligned
      and the remaining bits on the right padded with 0s.  When using
      PKI, the originating node uses its private key to create the
      signature.  The original packet can then be verified provided
      the public key is shared and current.

   Remaining TIE Lifetime:  32 bits.  In case of anything but TIEs,
      this field MUST be set to all ones and the TIE Origin Security
      Envelope Header MUST NOT be present in the packet.  For TIEs,
      this field represents the remaining lifetime of the TIE and the
      TIE Origin Security Envelope Header MUST be present in the
      packet.  The value in the serialized model object MUST be
      ignored.

   Weak Nonce Local:  16 bits.  Local Weak Nonce of the adjacency as
      advertised in LIEs.

   Weak Nonce Remote:  16 bits.  Remote Weak Nonce of the adjacency as
      received in LIEs.

   TIE Origin Security Envelope Header:  It MUST be present if and
      only if the Remaining TIE Lifetime field is NOT all ones.  It
      carries the originator's key ID and the according fingerprint of
      the object to protect the TIE from modification during flooding.
      This ensures origin validation and integrity (but does not
      provide validation of a chain of trust).

   Observe that due to the schema migration rules per Appendix B the
   contained model can always be decoded if the major version matches
   and the envelope integrity has been validated.  Consequently, the
   description of the TIE is kept available to flood it properly,
   including unknown TIE types.
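   The fixed 8-byte prefix of the outer security envelope can be
   decoded as in the following non-normative sketch.  The function and
   variable names are illustrative assumptions; only the field widths
   follow Figure 31, and the fingerprint length is in 32-bit words.

```python
# Non-normative sketch: unpacking the fixed 8-byte prefix of the outer
# security envelope (magic, packet number, reserved, major version,
# outer key ID, fingerprint length) per Figure 31.
import struct

RIFT_MAGIC = 0xA1F7

def unpack_outer_header(data: bytes):
    magic, packet_nr, reserved, major, key_id, fp_words = \
        struct.unpack("!HHBBBB", data[:8])
    if magic != RIFT_MAGIC:
        raise ValueError("not a RIFT packet")
    fingerprint = data[8:8 + 4 * fp_words]
    # the fingerprint covers everything following it in the packet
    covered = data[8 + 4 * fp_words:]
    return packet_nr, major, key_id, fingerprint, covered

hdr = struct.pack("!HHBBBB", RIFT_MAGIC, 7, 0, 2, 1, 0)
assert unpack_outer_header(hdr + b"rest")[4] == b"rest"
```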

4.4.4.  Weak Nonces

   The protocol uses two 16-bit nonces to salt generated signatures.
   We use the term "nonce" somewhat loosely since RIFT nonces are not
   changed on every packet, as is common in cryptography.  For
   efficiency purposes they are changed at a frequency high enough to
   dwarf replay attack attempts for all practical purposes.
   Therefore, we call them "weak" nonces.

   Any implementation including RIFT security MUST generate and wrap
   around local nonces properly.  When a nonce increment leads to the
   `undefined_nonce` value, the value MUST be incremented again
   immediately.  All implementations MUST reflect the neighbor's
   nonces.  An implementation SHOULD increment a chosen nonce on every
   LIE FSM transition that ends up in a different state from the
   previous one and MUST increment its nonce at least every 5 minutes
   (such considerations allow for efficient implementations without
   opening a significant security risk).  When flooding TIEs, the
   implementation MUST use recent (i.e. within allowed difference)
   nonces reflected in the LIE exchange.  The schema specifies the
   maximum allowable nonce value difference on a packet compared to
   the nonces reflected in the LIEs.  Any packet received with nonces
   deviating more than the allowed delta MUST be discarded without
   further computation of signatures to prevent computation load
   attacks.

   In case a secure implementation does not receive signatures or
   receives undefined nonces from a neighbor, indicating that it does
   not support or verify signatures, it is a matter of local policy
   how such packets are treated.  A secure implementation may choose
   to either refuse forming an adjacency with an implementation not
   advertising signatures or valid nonces, or simply keep on signing
   local packets while accepting the neighbor's packets without
   further security verification.

   As a necessary exception, an implementation MUST advertise
   `undefined_nonce` for the remote nonce value when the FSM is not in
   2-way or 3-way state and accept an `undefined_nonce` for its local
   nonce value on packets in any other state than 3-way.

   As an optional optimization, an implementation MAY send one LIE
   with the previously negotiated neighbor's nonce to try to speed up
   a neighbor's transition from 3-way to 1-way and MUST revert to
   sending `undefined_nonce` after that.

/** security nonce.
 *  @note MUST be interpreted in implementation as rolling over
 *        unsigned value */
typedef i16    NonceType
/** LIE FSM holdtime type */
typedef i16    TimeIntervalInSecType
/** Transaction ID type for prefix mobility as specified by RFC6550,
    value MUST be interpreted in implementation as unsigned */
typedef i8     PrefixTransactionIDType
/** timestamp per IEEE 802.1AS, values MUST be interpreted in
    implementation as unsigned */
struct IEEE802_1ASTimeStampType {
    1: required     i64     AS_sec;
    2: optional     i32     AS_nsec;
}
/** generic counter type */
typedef i64 CounterType
/** Platform Interface Index type, i.e. index of interface on
    hardware, can be used e.g. with RFC5837 */
typedef i32 PlatformInterfaceIndex

/** flags indicating node's behavior in case of ZTP
 */
enum HierarchyIndications {
    /** forces level to `leaf_level` and enables according procedures */
    leaf_only                            = 0,
    /** forces level to `leaf_level` and enables according procedures */
    leaf_only_and_leaf_2_leaf_procedures = 1,
    /** forces level to `top_of_fabric` and enables according
        procedures */
    top_of_fabric                        = 2,
}

const PacketNumberType  undefined_packet_number    = 0
/** This MUST be used when the node is configured as top of fabric in
    ZTP.  This is kept reasonably low to allow for fast ZTP
    convergence on failures. */
const LevelType   top_of_fabric_level              = 24
/** default bandwidth on a link */
const BandwithInMegaBitsType  default_bandwidth    = 100
/** fixed leaf level when ZTP is not used */
const LevelType   leaf_level                  = 0
const LevelType   default_level               = leaf_level
const PodType     default_pod                 = 0
const LinkIDType  undefined_linkid            = 0

/** default distance used */
const MetricType  default_distance         = 1
/** any distance larger than this will be considered infinity */
const MetricType  infinite_distance       = 0x7FFFFFFF
/** represents invalid distance */
const MetricType  invalid_distance        = 0
const bool overload_default               = false
const bool flood_reduction_default        = true
/** default LIE FSM holddown time */
const TimeIntervalInSecType   default_lie_holdtime  = 3
/** default ZTP FSM holddown time */
const TimeIntervalInSecType   default_ztp_holdtime  = 1
/** by default LIE levels are ZTP offers */
const bool default_not_a_ztp_offer        = false
/** by default everyone is repeating flooding */
const bool default_you_are_flood_repeater = true
/** 0 is illegal for SystemID */
const SystemIDType IllegalSystemID        = 0
/** empty set of nodes */
const set<SystemIDType> empty_set_of_nodeids = {}
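   The nonce acceptance rule above (discard before computing
   signatures when the deviation exceeds the allowed delta) can be
   sketched non-normatively as follows; the helper name and the
   local-policy shortcut for undefined nonces are illustrative
   assumptions.

```python
# Non-normative sketch: checking that a nonce on a flooded packet lies
# within `maximum_valid_nonce_delta` of the nonce reflected in LIEs,
# with 16-bit wrap-around.  Rejecting early avoids computing
# signatures on replayed packets.

MAXIMUM_VALID_NONCE_DELTA = 5   # schema constant
UNDEFINED_NONCE = 0

def nonce_acceptable(reflected: int, received: int) -> bool:
    if received == UNDEFINED_NONCE or reflected == UNDEFINED_NONCE:
        return False   # local policy decides how unsigned packets are treated
    diff = (received - reflected) & 0xFFFF
    return min(diff, 0x10000 - diff) <= MAXIMUM_VALID_NONCE_DELTA

assert nonce_acceptable(100, 103)
assert nonce_acceptable(3, 0xFFFF)      # wrap-around within delta
assert not nonce_acceptable(100, 200)
```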

4.4.5.  Lifetime

   Protecting the lifetime on flooding may lead to an excessive number
   of security fingerprint computations and hence an application
   generating such fingerprints on TIEs MAY round the value down to
   the next `rounddown_lifetime_interval` defined in the schema when
   sending TIEs, albeit such optimization in the presence of advancing
   weak nonces may not be feasible.

/** default lifetime of TIE is one week */
const LifeTimeInSecType default_lifetime      = 604800
/** default lifetime when TIEs are purged is 5 minutes */
const LifeTimeInSecType purge_lifetime        = 300
/** round down interval when TIEs are sent with security hashes
    to prevent excessive computation. **/
const LifeTimeInSecType rounddown_lifetime_interval = 60
/** any `TieHeader` that has a smaller lifetime difference
    than this constant is equal (if other fields equal).  This
    constant MUST be larger than `purge_lifetime` to avoid
    retransmissions */
const LifeTimeInSecType lifetime_diff2ignore  = 400
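   The rounding optimization and the lifetime-comparison rule can be
   illustrated with a non-normative sketch; the function names are
   illustrative assumptions, the constants are the schema constants.

```python
# Non-normative sketch: round the remaining lifetime down to the
# `rounddown_lifetime_interval` before signing, and treat two TIE
# headers whose lifetimes differ by less than `lifetime_diff2ignore`
# as equal for comparison purposes.

ROUNDDOWN_LIFETIME_INTERVAL = 60
LIFETIME_DIFF2IGNORE = 400

def rounddown(lifetime: int) -> int:
    return (lifetime // ROUNDDOWN_LIFETIME_INTERVAL) * ROUNDDOWN_LIFETIME_INTERVAL

def lifetimes_equal(a: int, b: int) -> bool:
    return abs(a - b) < LIFETIME_DIFF2IGNORE

assert rounddown(604799) == 604740
assert lifetimes_equal(604800, 604500)
assert not lifetimes_equal(604800, 300)
```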

4.4.6.  Key Management

   As outlined in the Security Model, a private shared key or a
   public/private key pair is used to authenticate the adjacency.  The
   actual method of key distribution and key synchronization is
   assumed to be out of band from RIFT's perspective.  Both nodes in
   the adjacency must share the same keys and configuration of key
   type and algorithm for a key ID.  Mismatched keys will obviously
   not inter-operate due to an unverifiable security envelope.

   Key roll-over while the adjacency is active is allowed and the
   technique is well known and described in e.g. [RFC6518].  Key
   distribution procedures are out of scope for RIFT.

/** default UDP port used to run LIEs on */
const UDPPortType     default_lie_udp_port       =  914
/** default UDP port to receive TIEs on, that can be peer specific */
const UDPPortType     default_tie_udp_flood_port =  915

/** default MTU link size to use */
const MTUSizeType     default_mtu_size           = 1400
/** default link being BFD capable */
const bool            bfd_default                = true

/** undefined nonce, equivalent to missing nonce */
const NonceType       undefined_nonce            = 0;
/** outer security key id, MUST be interpreted in implementation as
    unsigned */
typedef i8            OuterSecurityKeyID
/** security key id, MUST be interpreted in implementation as
    unsigned */
typedef i32           TIESecurityKeyID
/** undefined key */
const TIESecurityKeyID undefined_securitykey_id   = 0;
/** Maximum delta (negative or positive) that a mirrored nonce can
    deviate from the local value to be considered valid.  If nonces
    are changed every minute on both sides this opens statistically
    a `maximum_valid_nonce_delta` minutes window of identical LIE,
    TIE, TI(x)E replays.
    The interval cannot be too small since the LIE FSM may change
    states fairly quickly during ZTP without sending LIEs */
const i16             maximum_valid_nonce_delta  = 5;

/** direction of tie */
enum TieDirectionType {
    Illegal           = 0,
    South             = 1,
    North             = 2,
    DirectionMaxValue = 3,
}

/** address family */
enum AddressFamilyType {
   Illegal                = 0,
   AddressFamilyMinValue  = 1,
   IPv4     = 2,
   IPv6     = 3,
   AddressFamilyMaxValue  = 4,
}

/** IP v4 prefix type */
struct IPv4PrefixType {
    1: required IPv4Address    address;
    2: required PrefixLenType  prefixlen;
}

/** IP v6 prefix type */
struct IPv6PrefixType {
    1: required IPv6Address    address;
    2: required PrefixLenType  prefixlen;
}

/** IP address key type */
union IPAddressType {
    1: optional IPv4Address   ipv4address;
    2: optional IPv6Address   ipv6address;
}

/** prefix representing reachability.

    @note: for interface addresses the protocol can propagate the
        address part beyond the subnet mask and on reachability
        computation that has to be normalized.  The non-significant
        bits can be used for operational purposes.
*/
union IPPrefixType {
    1: optional IPv4PrefixType   ipv4prefix;
    2: optional IPv6PrefixType   ipv6prefix;
}

/** sequence of a prefix when it moves
 */
struct PrefixSequenceType {
    1: required IEEE802_1ASTimeStampType  timestamp;
    /** transaction ID set by client in e.g. 6LoWPAN */
    2: optional PrefixTransactionIDType   transactionid;
}
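   The fingerprint computation over the data following it can be
   sketched non-normatively.  HMAC-SHA256 and the in-memory key table
   below are illustrative assumptions; the actual algorithm is implied
   by the key ID agreed upon for the adjacency, and key distribution
   is out of band.

```python
# Non-normative sketch: computing an outer security fingerprint over
# all data following it (weak nonces, remaining lifetime, payload).
# The algorithm choice (HMAC-SHA256) and key table are assumptions.
import hmac, hashlib, struct

keys = {1: b"shared-secret"}   # outer key ID -> key, distributed out of band

def outer_fingerprint(key_id: int, local_nonce: int, remote_nonce: int,
                      lifetime: int, payload: bytes) -> bytes:
    covered = struct.pack("!HHI", local_nonce, remote_nonce, lifetime) + payload
    return hmac.new(keys[key_id], covered, hashlib.sha256).digest()

fp = outer_fingerprint(1, 10, 11, 0xFFFFFFFF, b"LIE")
assert len(fp) == 32 and len(fp) % 4 == 0   # a multiple of 32 bits
```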

4.4.7.  Security Association Changes

   There is no mechanism to convert a security envelope for the same
   key ID from one algorithm to another once the envelope is
   operational.  The recommended procedure to change to a new
   algorithm is to take the adjacency down, make the necessary
   changes, and then bring the adjacency up.

   Obviously, an implementation may choose to stop verifying the
   security envelope for the duration of the key change to keep the
   adjacency up, but since this introduces a security vulnerability
   window, such a roll-over is not recommended.

/** type of TIE.

    This enum indicates what TIE type the TIE is carrying.
    In case the value is not known to the receiver, the TIE MUST
    be re-flooded the same way as prefix TIEs.  This allows for
    future extensions of the protocol within the same schema major
    version with types opaque to some nodes UNLESS the flooding
    scope is not the same as for prefix TIEs, in which case a major
    version revision MUST be performed.
*/
enum TIETypeType {
    Illegal                                     = 0,
    TIETypeMinValue                             = 1,
    /** first legal value */
    NodeTIEType                                 = 2,
    PrefixTIEType                               = 3,
    PositiveDisaggregationPrefixTIEType         = 4,
    NegativeDisaggregationPrefixTIEType         = 5,
    PGPrefixTIEType                             = 6,
    KeyValueTIEType                             = 7,
    ExternalPrefixTIEType                       = 8,
    PositiveExternalDisaggregationPrefixTIEType = 9,
    TIETypeMaxValue                             = 10,
}
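   The re-flooding rule for unknown TIE types can be sketched
   non-normatively; the scope table and function name below are
   illustrative assumptions, not schema content.

```python
# Non-normative sketch: a receiver floods TIEs whose type value it
# does not recognize the same way as prefix TIEs, so the protocol can
# be extended within a major schema version.

KNOWN_FLOODING_SCOPES = {2: "node", 3: "prefix"}   # tietype -> scope

def flooding_scope(tietype: int) -> str:
    # unknown (future, opaque) types inherit the prefix TIE scope
    return KNOWN_FLOODING_SCOPES.get(tietype, "prefix")

assert flooding_scope(3) == "prefix"
assert flooding_scope(99) == "prefix"   # future, opaque type
```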

5.  Examples

5.1.  Normal Operation

   This section describes RIFT deployment in the example topology
   without any node or link failures.  We disregard flooding reduction
   for simplicity's sake.

/** RIFT route types.

    @note: route types which MUST be ordered on their preference,
           i.e. PGP prefixes are most preferred, attracting
           traffic north (towards spine) and then south,
           normal prefixes are attracting traffic south (towards
           leafs), i.e. a prefix advertised in a NORTH PREFIX TIE
           is preferred over one advertised in a SOUTH PREFIX TIE.

    @note: The only purpose of those values is to introduce an
           ordering whereas an implementation can choose internally
           any other values as long as the ordering is preserved
 */
enum RouteType {
    Illegal               =  0,
    RouteTypeMinValue     =  1,
    /** first legal value. */
    /** discard routes are most preferred */
    Discard               =  2,

    /** local prefixes are directly attached prefixes on the
     *  system such as e.g. interface routes.
     */
    LocalPrefix           =  3,
    /** advertised in S-TIEs */
    SouthPGPPrefix        =  4,
    /** advertised in N-TIEs */
    NorthPGPPrefix        =  5,
    /** advertised in N-TIEs */
    NorthPrefix           =  6,
    /** externally imported north */
    NorthExternalPrefix   =  7,
    /** advertised in S-TIEs, either as normal prefix or as positive
        disaggregation */
    SouthPrefix           =  8,
    /** externally imported south */
    SouthExternalPrefix   =  9,
    /** negative, transitive prefixes are least preferred */
    NegativeSouthPrefix   = 10,
    RouteTypeMaxValue     = 11,
}

B.2.  encoding.thrift

/**
    Thrift file for packet encodings for RIFT
*/

include "common.thrift"

/** Represents protocol encoding schema major version */
const common.VersionType protocol_major_version = 2
/** Represents protocol encoding schema minor version */
const common.MinorVersionType protocol_minor_version =  0

/** common RIFT packet header */
struct PacketHeader {
    /** major version type of protocol */
    1: required common.VersionType major_version = protocol_major_version;
    /** minor version type of protocol */
    2: required common.VersionType minor_version = protocol_minor_version;
    /** node sending the packet, in case of LIE/TIRE/TIDE
      * also the originator of it */
    3: required common.SystemIDType  sender;
    /** level of the node sending the packet, required on everything
      * except LIEs.  Lack of presence on LIEs indicates
      * UNDEFINED_LEVEL and is used in ZTP procedures.
      */
    4: optional common.LevelType            level;
}

/** community */
struct Community {
    1: required i32          top;
    2: required i32          bottom;
}

/** neighbor structure  */
struct Neighbor {
    /** system ID of the originator */
    1: required common.SystemIDType        originator;
    /** ID of remote side of the link */
    2: required common.LinkIDType          remote_id;
}

/** capabilities the node supports.

    @note: The schema may add to this field future capabilities to
    indicate whether it will support interpretation of future schema
    extensions on the same major revision.  Such fields MUST be
    optional and have an implicit or explicit false default value.
    If a future capability changes route selection or generates
    blackholes if some nodes are not supporting it then a major
    version increment is unavoidable.
*/
struct NodeCapabilities {
    /** must advertise supported minor version dialect that way */
    1: required common.MinorVersionType        protocol_minor_version =
            protocol_minor_version;
    /** can this node participate in flood reduction */
    2: optional bool                           flood_reduction =
            common.flood_reduction_default;
    /** does this node restrict itself to be top-of-fabric or
        leaf only (in ZTP) and does it support leaf-2-leaf
        procedures */
    3: optional common.HierarchyIndications    hierarchy_indications;
}

/** link capabilities */
struct LinkCapabilities {
    /** indicates that the link is supporting BFD */
    1: optional bool                           bfd =
            common.bfd_default;
    /** indicates whether the interface will support v4 forwarding.
      * This MUST be set to true when LIEs from a v4 address are sent
      * and MAY be set to true in LIEs on a v6 address.  If v4 and v6
      * LIEs indicate contradicting information the behavior is
      * unspecified. */
    2: optional bool                           v4_forwarding_capable =
            true;
}

/** RIFT LIE packet

    @note this node's level is already included on the packet header */
struct LIEPacket {
    /** node or adjacency name */
    1: optional string                        name;
    /** local link ID */
    2: required common.LinkIDType             local_id;
    /** UDP port to which we can receive flooded TIEs */
    3: required common.UDPPortType            flood_port =
            common.default_tie_udp_flood_port;
    /** layer 3 MTU, used to discover mismatch. */
    4: optional common.MTUSizeType            link_mtu_size =
            common.default_mtu_size;
    /** local link bandwidth on the interface */
    5: optional common.BandwithInMegaBitsType link_bandwidth =
            common.default_bandwidth;
    /** reflects the neighbor once received to provide
        3-way connectivity */
    6: optional Neighbor                      neighbor;
    /** node's PoD */
    7: optional common.PodType                pod =
            common.default_pod;
    /** node capabilities shown in the LIE.  The capabilities
        MUST match the capabilities shown in the Node TIEs, otherwise
        the behavior is unspecified.  A node detecting the mismatch
        SHOULD generate according error */
   10: required NodeCapabilities              node_capabilities;
   /** capabilities of this link */
   11: optional LinkCapabilities              link_capabilities;
   /** required holdtime of the adjacency, i.e. how much time
       MUST expire without LIE for the adjacency to drop */
   12: required common.TimeIntervalInSecType  holdtime =
            common.default_lie_holdtime;
   /** unsolicited, downstream assigned locally significant label
       value for the adjacency */
   13: optional common.LabelType              label;
    /** indicates that the level on the LIE MUST NOT be used
        to derive a ZTP level by the receiving node */
   21: optional bool                          not_a_ztp_offer =
            common.default_not_a_ztp_offer;
   /** indicates to northbound neighbor that it should
       be reflooding this node's N-TIEs to achieve flood reduction and
       balancing for northbound flooding.  To be ignored if received
       from a northbound adjacency */
   22: optional bool                          you_are_flood_repeater =
             common.default_you_are_flood_repeater;
   /** can be optionally set to indicate to the neighbor that packet
       losses are seen on reception based on packet numbers or the
       rate is too high.  The receiver SHOULD temporarily slow down
       flooding rates
    */
   23: optional bool                          you_are_sending_too_quickly =
             false;
   /** instance name in case multiple RIFT instances running on same
       interface */
   24: optional string                        instance_name;
}

/** LinkID pair describes one of parallel links between two nodes */
struct LinkIDPair {
    /** node-wide unique value for the local link */
    1: required common.LinkIDType      local_id;
    /** received remote link ID for this link */
    2: required common.LinkIDType      remote_id;

    /** describes the local interface index of the link */
   10: optional common.PlatformInterfaceIndex       platform_interface_index;
   /** describes the local interface name */
   11: optional string                              platform_interface_name;
   /** indication whether the link is secured, i.e. protected by
       outer key, absence of this element means no indication,
       undefined outer key means not secured */
   12: optional common.OuterSecurityKeyID           trusted_outer_security_key;
   /** indication whether the link is protected by established BFD
       session */
   13: optional bool                                bfd_up;
}

/** ID of a TIE

    @note: TIEID space is a total order achieved by comparing the
           elements in sequence defined and comparing each value as
           an unsigned integer of according length.
*/
struct TIEID {
    /** direction of TIE */
    1: required common.TieDirectionType    direction;
    /** indicates originator of the TIE */
    2: required common.SystemIDType        originator;
    /** type of the tie */
    3: required common.TIETypeType         tietype;
    /** number of the tie */
    4: required common.TIENrType           tie_nr;
}

/** Header of a TIE.

   @note: TIEID space is a total order achieved by comparing the
          elements in sequence defined and comparing each value as an
          unsigned integer of according length.

   @note: After the sequence number the lifetime received on the
          envelope must be used for comparison before further fields.

   @note: `origination_time` and `origination_lifetime` are
          disregarded for comparison purposes and carried purely for
          debugging/security purposes if present.
*/
struct TIEHeader {
    /** ID of the tie */
    2: required TIEID                             tieid;
    /** sequence number of the tie */
    3: required common.SeqNrType                  seq_nr;

    /** absolute timestamp when the TIE was generated.  This can be
        used on fabrics with synchronized clock to prevent lifetime
        modification attacks. */
   10: optional common.IEEE802_1ASTimeStampType   origination_time;
   /** original lifetime when the TIE was generated.  This can be
       used on fabrics with synchronized clock to prevent lifetime
       modification attacks. */
   12: optional common.LifeTimeInSecType          origination_lifetime;
}

   As a first step, the following bi-directional adjacencies will be
   created (and any other links that do not fulfill LIE rules in
   Section 4.2.2 disregarded):

   1.  ToF 21 (PoD 0) to Spine 111, Spine 112, Spine 121, and Spine
       122

   2.  ToF 22 (PoD 0) to Spine 111, Spine 112, Spine 121, and Spine
       122

   3.  Spine 111 to Leaf 111, Leaf 112

   4.  Spine 112 to Leaf 111, Leaf 112

   5.  Spine 121 to Leaf 121, Leaf 122

   6.  Spine 122 to Leaf 121, Leaf 122

   Consequently, North TIEs would be originated by Spine 111 and Spine
   112 and each set would be sent to both ToF 21 and ToF 22.  North
   TIEs also would be originated by Leaf 111 (w/ Prefix 111) and Leaf
   112 (w/ Prefix 112 and the multi-homed prefix) and each set would
   be sent to Spine 111 and Spine 112.  Spine 111 and Spine 112 would
   then flood these North TIEs to ToF 21 and ToF 22.

   Similarly, North TIEs would be originated by Spine 121 and Spine
   122 and each set would be sent to both ToF 21 and ToF 22.  North
   TIEs also would be originated by Leaf 121 (w/ Prefix 121 and the
   multi-homed prefix) and Leaf 122 (w/ Prefix 122) and each set would
   be sent to Spine 121 and Spine 122.  Spine 121 and Spine 122 would
   then flood these North TIEs to ToF 21 and ToF 22.

   At this point both ToF 21 and ToF 22, as well as any controller to
   which they are connected, would have the complete network topology.
   At the same time, Spine 111/112/121/122 hold only the North TIEs of
   level 0 of their respective PoD.  Leafs hold only their own North
   TIEs.

   South TIEs with adjacencies and a default IP prefix would then be
   originated by ToF 21 and ToF 22 and each would be flooded to Spine
   111, Spine 112, Spine 121, and Spine 122.  Spine 111, Spine 112,
   Spine 121, and Spine 122 would each send the South TIE from ToF 21
   to ToF 22 and the South TIE from ToF 22 to ToF 21.  (South TIEs are
   reflected up to the level from which they are seen but they are NOT
   propagated southbound.)

   A South TIE with a default IP prefix would be originated by Spine
   111 and Spine 112 and would be sent to Leaf 111 and Leaf 112.

   Similarly, a South TIE with a default IP prefix would be originated
   by Spine 121 and Spine 122 and would be sent to Leaf 121 and Leaf
   122.  At this point IP connectivity with maximum possible ECMP has
   been established between the leafs while constraining the amount of
   information held by each node to the minimum necessary for normal
   operation and dealing with failures.

5.2.  Leaf Link Failure

                    .  |   |              |   |
                    .+-+---+-+          +-+---+-+
                    .|       |          |       |
                    .|Spin111|          |Spin112|
                    .+-+---+-+          ++----+-+
                    .  |   |             |    |
                    .  |   +---------------+  X
                    .  |                 | |  X Failure
                    .  |   +-------------+ |  X
                    .  |   |               |  |
                    .+-+---+-+          +--+--+-+
                    .|       |          |       |
                    .|Leaf111|          |Leaf112|
                    .+-------+          +-------+
                    .      +                  +
                    .     Prefix111     Prefix112

                    Figure 32: Single Leaf link failure

   In case of a failing leaf link between Spine 112 and Leaf 112 the
   link-state information will cause re-computation of the necessary
   SPF and the higher levels will stop forwarding towards Prefix 112
   through Spine 112.  Only Spines 111 and 112, as well as both ToFs,
   will see control traffic.  Leaf 111 will receive a new South TIE
   from Spine 112 and reflect it back to Spine 111.  Spine 111 will
   de-aggregate Prefix 111 and Prefix 112 but we will not describe it
   further here since de-aggregation is emphasized in the next
   example.

/** Header of a TIE as described in TIRE/TIDE.
*/
struct TIEHeaderWithLifeTime {
    1: required     TIEHeader                         header;
    /** remaining lifetime that expires down to 0 just like in ISIS.
        TIEs with lifetimes differing by less than
        `lifetime_diff2ignore` MUST be considered EQUAL. */
    2: required     common.LifeTimeInSecType          remaining_lifetime;
}

/** TIDE with sorted TIE headers, if headers are unsorted, behavior
    is undefined */
struct TIDEPacket {
    /** first TIE header in the tide packet */
    1: required TIEID                       start_range;
    /** last TIE header in the tide packet */
    2: required TIEID                       end_range;
    /** _sorted_ list of headers */
    3: required list<TIEHeaderWithLifeTime> headers;
}

/** TIRE packet */
struct TIREPacket {
    1: required set<TIEHeaderWithLifeTime>  headers;
}

/** neighbor of a node */
struct NodeNeighborsTIEElement {
    /** level of neighbor */
    1: required common.LevelType                level;
    /**  Cost to neighbor.

         @note: All parallel links to same node
         incur same cost, in case the neighbor has multiple
         parallel links at different cost, the largest distance
         (highest numerical value) MUST be advertised
         @note: any neighbor with cost <= 0 MUST be ignored in
         computations */
    3: optional common.MetricType               cost = common.default_distance;
    /** can carry description of multiple parallel links in a TIE */
    4: optional set<LinkIDPair>                 link_ids;

    /** total bandwidth to neighbor, this will be normally sum of the
        bandwidths of all the parallel links. */
    5: optional common.BandwithInMegaBitsType   bandwidth =
            common.default_bandwidth;
}

/** Flags the node sets */
struct NodeFlags {
    /** indicates that node is in overload, do not transit traffic
        through it */
    1: optional bool         overload = common.overload_default;
}

/** Description of a node.

    It may occur multiple times in different TIEs but if either
        * capabilities values do not match or
        * flags values do not match or
        * neighbors repeat with different values

    the behavior is undefined and a warning SHOULD be generated.
    Neighbors can be distributed across multiple TIEs however if
    the sets are disjoint. Miscablings SHOULD be repeated in every
    node TIE, otherwise the behavior is undefined.

    @note: observe this example that absence of fields implies defined defaults

*/
struct NodeTIEElement {
    /** level of the node */
    1: required common.LevelType            level;
    /** node's neighbors. If neighbor systemID repeats in other node TIEs of
        same node the behavior is undefined */
    2: required map<common.SystemIDType,
                NodeNeighborsTIEElement>    neighbors;
    /** capabilities of the node */
    3: required NodeCapabilities            capabilities;
    /** flags of the node */
    4: optional NodeFlags                   flags;
    /** optional node name for easier operations */
    5: optional string                      name;
    /** PoD to which the node belongs */
    6: optional common.PodType              pod;

    /** if any local links are miscabled, the indication is flooded */
   10: optional set<common.LinkIDType>      miscabled_links;

}

struct PrefixAttributes {
    /** distance of the leaf 111 would keep on
   forwarding traffic towards prefix */
    2: required common.MetricType            metric = common.default_distance;
    /** generic unordered set of route tags, can be redistributed to other protocols or use
        within 112 using the context advertised south-
   bound default of real time analytics */
    3: optional set<common.RouteTagType>     tags;
    /** monotonic clock for mobile addresses */
    4: optional common.PrefixSequenceType    monotonic_clock;
    /** indicates if the interface is a node loopback */
    6: optional bool                         loopback = false;
    /** indicates that the prefix is directly attached, i.e. should be routed to even if spine 112 the node traffic would end up on Top-of-Fabric
   21 and ToF 22 and cross back into pod 1 using spine 111.  This is
   arguably not as bad as black-holing present in overload. **/
    7: optional bool                         directly_attached = true;

    /** in case of locally originated prefixes, i.e. interface addresses this can
        describe which link the address belongs to. */
   10: optional common.LinkIDType            from_link;
}

/** TIE carrying prefixes */
struct PrefixTIEElement {
    /** prefixes with the associated attributes.
        if the same prefix repeats in multiple TIEs of same node
        behavior is unspecified */
    1: required map<common.IPPrefixType, PrefixAttributes> prefixes;
}
/** Generic key value pairs */
struct KeyValueTIEElement {
    /** if the same key repeats in multiple TIEs next example but
   clearly undesirable.  Fortunately, de-aggregation prevents this type
   of same node
        or with different values, behavior is unspecified */
    1: required map<common.KeyIDType,string>    keyvalues;
}

/** single element in except for a TIE. enum `common.TIETypeType`
    in TIEID indicates which elements MUST be present
    in the TIEElement. In case transitory period of mismatch the unexpected
    elements MUST be ignored. In case time.

5.3.  Partitioned Fabric

   .                +--------+          +--------+   South TIE of lack ToF 21
   .                |        |          |        |   received by
   .                |ToF   21|          |ToF   22|   south reflection of expected
    element the TIE an error MUST be reported
   .                ++-+--+-++          ++-+--+-++   spines 112 and the TIE
    MUST be ignored.

    This type can be extended with new optional elements
    for new `common.TIETypeType` values without breaking
    the major but if it is necessary to understand whether
    all nodes support the new type 111
   .                 | |  | |            | |  | |
   .                 | |  | |            | |  | 0/0
   .                 | |  | |            | |  | |
   .                 | |  | |            | |  | |
   .  +--------------+ |  +--- XXXXXX +  | |  | +---------------+
   .  |                |    |         |  | |  |                 |
   .  |    +-----------------------------+ |  |                 |
   .  0/0  |           |    |         |    |  |                 |
   .  |    0/0       0/0    +- XXXXXXXXXXXXXXXXXXXXXXXXX -+     |
   .  |  1.1/16        |              |    |  |           |     |
   .  |    |           +-+    +-0/0-----------+           |     |
   .  |    |             |   1.1./16  |    |              |     |
   .+-+----++          +-+-----+     ++-----0/0          ++----0/0
   .|       |          |       |     |    1.1/16         |   1.1/16
   .|Spin111|          |Spin112|     |Spin121|           |Spin122|
   .+-+---+-+          ++----+-+     +-+---+-+           ++---+--+
   .  |   |             |    |         |   |              |   |
   .  |   +---------------+  |         |   +----------------+ |
   .  |                 | |  |         |                  | | |
   .  |   +-------------+ |  |         |   +--------------+ | |
   .  |   |               |  |         |   |                | |
   .+-+---+-+          +--+--+-+     +-+---+-+          +---+-+-+
   .|       |          |       |     |       |          |       |
   .|Leaf111|          |Leaf112|     |Leaf121|          |Leaf122|
   .+-+-----+          ++------+     +-----+-+          +-+-----+
   .  +                 +                  +              +
   .  Prefix111    Prefix112             Prefix121     Prefix122
   .                                       1.1/16

                        Figure 33: Fabric partition

   Figure 33 shows the arguably a node capability must
    be added as well.
 */
union TIEElement {
    /** used more catastrophic but also a more
   interesting case.  ToF 21 is completely severed from access to Prefix
   121 (we use in case of enum common.TIETypeType.NodeTIEType */
    1: optional NodeTIEElement            node;
    /** the figure 1.1/16 as example) by double link failure.
   However unlikely, if left unresolved, forwarding from leaf 111 and
   leaf 112 to prefix 121 would suffer 50% black-holing based on pure
   default route advertisements by ToF 21 and ToF 22.

   The mechanism used in case to resolve this scenario is hinging on the
   distribution of enum common.TIETypeType.PrefixTIEType */
    2: optional PrefixTIEElement          prefixes;
    /** positive prefixes (always southbound)
        It MUST NOT be advertised within a North TIE and ignored otherwise
    */
    3: optional PrefixTIEElement          positive_disaggregation_prefixes;
    /** transitive, negative prefixes (always southbound) which
        MUST be aggregated southbound representation by Top-of-Fabric 21 that is
   reflected by spine 111 and propagated
        according spine 112 to ToF 22.  ToF 22, having
   computed reachability to all prefixes in the specification
        southwards towards network, advertises with
   the default route the ones that are reachable only via lower levels to heal
        pathological upper level partitioning, otherwise
        blackholes may occur
   neighbors that ToF 21 does not show an adjacency to.  That results in multiplane fabrics.
        It MUST NOT be advertised within
   spine 111 and spine 112 obtaining a North TIE.
    */
    5: optional PrefixTIEElement          negative_disaggregation_prefixes;
    /** externally reimported prefixes */
    6: optional PrefixTIEElement          external_prefixes;
    /** positive external disaggregated prefixes (always southbound).
        It MUST NOT be longest-prefix match to prefix
   121 which leads through ToF 22 and prevents black-holing through ToF
   21 still advertising the 0/0 aggregate only.

   The prefix 121 advertised within a North TIE by Top-of-Fabric 22 does not have to be
   propagated further towards leafs since they do no benefit from this
   information.  Hence the amount of flooding is restricted to ToF 21
   reissuing its South TIEs and ignored otherwise
    */
    7: optional PrefixTIEElement          positive_external_disaggregation_prefixes;
    /** Key-Value store elements */
    9: optional KeyValueTIEElement        keyvalues;
}
/** TIE packet */
struct TIEPacket {
    1: required TIEHeader  header;
    2: required TIEElement element;
}

/** content south reflection of a RIFT packet */
union PacketContent {
    1: optional LIEPacket     lie;
    2: optional TIDEPacket    tide;
    3: optional TIREPacket    tire;
    4: optional TIEPacket     tie;
}

/** RIFT packet structure */
struct ProtocolPacket {
    1: required PacketHeader  header;
    2: required PacketContent content;
}

   The resulting SPF in ToF 22 issues a new prefix South TIE containing
   1.1/16.  None of the leafs become aware of the changes and the
   failure is constrained strictly to the level that became partitioned.

   To finish with an example of the resulting sets computed using
   notation introduced in Section 4.2.5, Top-of-Fabric 22 constructs the
   following sets:

      |R = Prefix 111, Prefix 112, Prefix 121, Prefix 122

      |H (for r=Prefix 111) = Spine 111, Spine 112

      |H (for r=Prefix 112) = Spine 111, Spine 112

      |H (for r=Prefix 121) = Spine 121, Spine 122

      |H (for r=Prefix 122) = Spine 121, Spine 122

      |A (for ToF 21) = Spine 111, Spine 112

   With that and |H (for r=prefix 121) and |H (for r=prefix 122) being
   disjoint from |A (for Top-of-Fabric 21), ToF 22 will originate a
   South TIE with prefix 121 and prefix 122, that is flooded to spines
   111, 112, 121 and 122.
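   The set computation above reduces to a simple disjointness test.  The
   following sketch (Python; all identifiers are invented for the
   example and non-normative) returns the prefixes whose |H set is
   disjoint from the partitioned peer's adjacency set |A:

```python
# Illustrative (non-normative) sketch of the Section 4.2.5-style set
# computation; names are invented for the example.

def negative_disagg_candidates(reachable, healing, adjacencies):
    """Prefixes reachable by this ToF whose healing set |H is disjoint
    from the partitioned peer ToF's adjacency set |A."""
    return {
        prefix
        for prefix, spines in healing.items()
        if prefix in reachable and spines.isdisjoint(adjacencies)
    }

R = {"Prefix111", "Prefix112", "Prefix121", "Prefix122"}
H = {
    "Prefix111": {"Spine111", "Spine112"},
    "Prefix112": {"Spine111", "Spine112"},
    "Prefix121": {"Spine121", "Spine122"},
    "Prefix122": {"Spine121", "Spine122"},
}
A_tof21 = {"Spine111", "Spine112"}  # |A (for ToF 21)

# ToF 22 advertises exactly the prefixes ToF 21 cannot reach:
print(sorted(negative_disagg_candidates(R, H, A_tof21)))
# ['Prefix121', 'Prefix122']
```

   With the sets of the example, only Prefix 121 and Prefix 122 survive
   the disjointness test, matching the South TIE origination described
   above.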

5.4.  Northbound Partitioned Router and Optional East-West Links

         .   +                  +                  +
         .   X N1               | N2               | N3
         .   X                  |                  |
         .+--+----+          +--+----+          +--+-----+
         .|       |0/0>  <0/0|       |0/0>  <0/0|        |
         .|  A01  +----------+  A02  +----------+  A03   | Level 1
         .++-+-+--+          ++--+--++          +---+-+-++
         . | | |              |  |  |               | | |
         . | | +----------------------------------+ | | |
         . | |                |  |  |             | | | |
         . | +-------------+  |  |  |  +--------------+ |
         . |               |  |  |  |  |          | |   |
         . | +----------------+  |  +-----------------+ |
         . | |             |     |     |          | | | |
         . | | +------------------------------------+ | |
         . | | |           |     |     |          |   | |
         .++-+-+--+        | +---+---+ |        +-+---+-++
         .|       |        +-+       +-+        |        |
         .|  L01  |          |  L02  |          |  L03   | Level 0
         .+-------+          +-------+          +--------+

                    Figure 34: North Partitioned Router

   Figure 34 shows a part of the fabric where level 1 is horizontally
   connected and A01 lost its only northbound adjacency.  Based on N-SPF
   rules in Section 4.2.4.1 A01 will compute northbound reachability by
   using the link A01 to A02 (whereas A02 will NOT use this link during
   N-SPF).  Hence A01 will still advertise the default towards level 0
   and route unidirectionally using the horizontal link.

   As further consideration, the moment A02 loses link N2 the situation
   evolves again.  A01 will have no more northbound reachability while
   still seeing A03 advertising northbound adjacencies in its south node
   tie.  With that it will stop advertising a default route due to
   Section 4.2.3.8.

Appendix C.  Finite State Machines and Precise Operational
             Specifications

   Some FSM figures are provided as [DOT] description due to limitations
   of ASCII art.

   On Entry action is performed every time and right before the
   according state is entered, i.e. after any transitions from previous
   state.

   On Exit action is performed every time and immediately when a state
   is exited, i.e. before any transitions towards target state are
   performed.

   Any attempt to transition from a state towards another on reception
   of an event where no action is specified MUST be considered an
   unrecoverable error.

   The FSMs and procedures are NOT normative in the sense that an
   implementation MUST implement them literally (which would be
   overspecification) but an implementation MUST exhibit externally
   observable behavior that is identical to the execution of the
   specified FSMs.

   Where a FSM representation is inconvenient, i.e. the amount of
   procedures and kept state exceeds the amount of transitions, we defer
   to a more procedural description on data structures.

C.1.  LIE FSM

   Initial state is `OneWay`.

   Event `MultipleNeighbors` occurs normally when more than two nodes
   see each other on the same link or a remote node is quickly
   reconfigured or rebooted without regressing to `OneWay` first.  Each
   occurrence of the event SHOULD generate a clear, according
   notification to help operational deployments.

   The machine sends LIEs on several transitions to accelerate adjacency
   bring-up without waiting for the timer tic.

digraph Ga556dde74c30450aae125eaebc33bd57 {
    Nd16ab5092c6b421c88da482eb4ae36b6[label="ThreeWay"][shape="oval"];
    N54edd2b9de7641688608f44fca346303[label="OneWay"][shape="oval"];
    Nfeef2e6859ae4567bd7613a32cc28c0e[label="TwoWay"][shape="oval"];
    N7f2bb2e04270458cb5c9bb56c4b96e23[label="Enter"][style="invis"][shape="plain"];
    N292744a4097f492f8605c926b924616b[label="Enter"][style="dashed"][shape="plain"];
    Nc48847ba98e348efb45f5b78f4a5c987[label="Exit"][style="invis"][shape="plain"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> N54edd2b9de7641688608f44fca346303
    [label="|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|\n|MultipleNeighbors|"]
    [color="black"][arrowhead="normal" dir="both" arrowtail="none"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> Nd16ab5092c6b421c88da482eb4ae36b6
    [label="|TimerTick|\n|LieRcvd|\n|SendLie|"][color="black"]
    [arrowhead="normal" dir="both" arrowtail="none"];
    Nfeef2e6859ae4567bd7613a32cc28c0e -> Nfeef2e6859ae4567bd7613a32cc28c0e
    [label="|TimerTick|\n|LieRcvd|\n|SendLie|"][color="black"]
    [arrowhead="normal" dir="both" arrowtail="none"];
    N54edd2b9de7641688608f44fca346303 -> Nd16ab5092c6b421c88da482eb4ae36b6
    [label="|ValidReflection|"][color="red"][arrowhead="normal" dir="both" arrowtail="none"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> Nd16ab5092c6b421c88da482eb4ae36b6
    [label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"][color="blue"]
    [arrowhead="normal" dir="both" arrowtail="none"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> Nd16ab5092c6b421c88da482eb4ae36b6
    [label="|ValidReflection|"][color="red"][arrowhead="normal" dir="both" arrowtail="none"];
    Nfeef2e6859ae4567bd7613a32cc28c0e -> N54edd2b9de7641688608f44fca346303
    [label="|LevelChanged|"][color="blue"][arrowhead="normal" dir="both" arrowtail="none"];
    Nfeef2e6859ae4567bd7613a32cc28c0e -> N54edd2b9de7641688608f44fca346303
    [label="|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|\n|MultipleNeighbors|"]
    [color="black"][arrowhead="normal" dir="both" arrowtail="none"];
    Nfeef2e6859ae4567bd7613a32cc28c0e -> Nd16ab5092c6b421c88da482eb4ae36b6
    [label="|ValidReflection|"][color="red"][arrowhead="normal" dir="both" arrowtail="none"];
    N54edd2b9de7641688608f44fca346303 -> N54edd2b9de7641688608f44fca346303
    [label="|TimerTick|\n|LieRcvd|\n|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|\n|SendLie|"]
    [color="black"][arrowhead="normal" dir="both" arrowtail="none"];
    N292744a4097f492f8605c926b924616b -> N54edd2b9de7641688608f44fca346303
    [label=""][color="black"][arrowhead="normal" dir="both" arrowtail="none"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> N54edd2b9de7641688608f44fca346303
    [label="|LevelChanged|"][color="blue"][arrowhead="normal" dir="both" arrowtail="none"];
    N54edd2b9de7641688608f44fca346303 -> Nfeef2e6859ae4567bd7613a32cc28c0e
    [label="|NewNeighbor|"][color="black"][arrowhead="normal" dir="both" arrowtail="none"];
    N54edd2b9de7641688608f44fca346303 -> N54edd2b9de7641688608f44fca346303
    [label="|LevelChanged|\n|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
    [color="blue"][arrowhead="normal" dir="both" arrowtail="none"];
    Nfeef2e6859ae4567bd7613a32cc28c0e -> Nfeef2e6859ae4567bd7613a32cc28c0e
    [label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
    [color="blue"][arrowhead="normal" dir="both" arrowtail="none"];
    Nd16ab5092c6b421c88da482eb4ae36b6 -> Nfeef2e6859ae4567bd7613a32cc28c0e
    [label="|NeighborDroppedReflection|"]
    [color="red"][arrowhead="normal" dir="both" arrowtail="none"];
    N54edd2b9de7641688608f44fca346303 -> N54edd2b9de7641688608f44fca346303
    [label="|NeighborDroppedReflection|"][color="red"]
    [arrowhead="normal" dir="both" arrowtail="none"];
}

                                LIE FSM DOT
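   As an illustration only, a few of the transitions in the DOT
   description above can be captured in a table-driven sketch (Python;
   the state and event names follow the figure, everything else is an
   assumption of the example, not an implementation of the
   specification):

```python
# Non-normative sketch of a handful of LIE FSM transitions taken from
# the DOT description above.

TRANSITIONS = {
    ("OneWay", "NewNeighbor"): "TwoWay",
    ("OneWay", "ValidReflection"): "ThreeWay",
    ("TwoWay", "ValidReflection"): "ThreeWay",
    ("TwoWay", "HoldtimeExpired"): "OneWay",
    ("ThreeWay", "NeighborDroppedReflection"): "TwoWay",
    ("ThreeWay", "HoldtimeExpired"): "OneWay",
    ("ThreeWay", "LieRcvd"): "ThreeWay",
}

def step(state, event):
    # This sketch leaves the state unchanged for unlisted pairs; the
    # specification treats transitions without a specified action as an
    # unrecoverable error.
    return TRANSITIONS.get((state, event), state)

state = "OneWay"
for event in ("NewNeighbor", "ValidReflection", "NeighborDroppedReflection"):
    state = step(state, event)
print(state)  # TwoWay
```

   A real implementation would also attach the per-transition actions
   (PUSH events, SEND_LIE, PROCESS_LIE) listed below to each table
   entry.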

   .. to be updated ..

                              LIE FSM Figure

   Events

   o  TimerTick: one second timer tic

   o  LevelChanged: node's level has been changed by ZTP or
      configuration

   o  HALChanged: best HAL computed by ZTP has changed

   o  HATChanged: HAT computed by ZTP has changed

   o  HALSChanged: set of HAL offering systems computed by ZTP has
      changed

   o  LieRcvd: received LIE

   o  NewNeighbor: new neighbor parsed

   o  ValidReflection: received own reflection from neighbor

   o  NeighborDroppedReflection: lost previous own reflection from
      neighbor

   o  NeighborChangedLevel: neighbor changed advertised level

   o  NeighborChangedAddress: neighbor changed IP address

   o  UnacceptableHeader: unacceptable header seen

   o  MTUMismatch: MTU mismatched

   o  PODMismatch: Unacceptable PoD seen

   o  HoldtimeExpired: adjacency hold down expired

   o  MultipleNeighbors: more than one neighbor seen on interface

   o  SendLie: send a LIE out

   o  UpdateZTPOffer: update this node's ZTP offer

   Actions

      on TimerTick in TwoWay finishes in TwoWay: PUSH SendLie event, if
      holdtime expired PUSH HoldtimeExpired event

      on HALChanged in TwoWay finishes in TwoWay: store new HAL

      on MTUMismatch in ThreeWay finishes in OneWay: no action

      on HALChanged in ThreeWay finishes in ThreeWay: store new HAL

      on ValidReflection in TwoWay finishes in ThreeWay: no action

      on ValidReflection in OneWay finishes in ThreeWay: no action

      on NeighborDroppedReflection in ThreeWay finishes in TwoWay: no
      action

      on LieRcvd in ThreeWay finishes in ThreeWay: PROCESS_LIE

      on MultipleNeighbors in TwoWay finishes in OneWay: no action

      on UnacceptableHeader in ThreeWay finishes in OneWay: no action

      on MTUMismatch in TwoWay finishes in OneWay: no action

      on LevelChanged in OneWay finishes in OneWay: update level with
      event value, PUSH SendLie event

      on UnacceptableHeader in TwoWay finishes in OneWay: no action

      on HALSChanged in TwoWay finishes in TwoWay: store HALS

      on UpdateZTPOffer in TwoWay finishes in TwoWay: send offer to ZTP
      FSM

      on NeighborChangedLevel in TwoWay finishes in OneWay: no action

      on NewNeighbor in OneWay finishes in TwoWay: PUSH SendLie event

      on NeighborChangedAddress in ThreeWay finishes in OneWay: no
      action

      on HALChanged in OneWay finishes in OneWay: store new HAL

      on NeighborChangedLevel in OneWay finishes in OneWay: no action

      on HoldtimeExpired in TwoWay finishes in OneWay: no action

      on SendLie in TwoWay finishes in TwoWay: SEND_LIE

      on LevelChanged in TwoWay finishes in OneWay: update level with
      event value

      on NeighborChangedAddress in OneWay finishes in OneWay: no action

      on HATChanged in TwoWay finishes in TwoWay: store HAT

      on LieRcvd in TwoWay finishes in TwoWay: PROCESS_LIE

      on MultipleNeighbors in ThreeWay finishes in OneWay: no action

      on MTUMismatch in OneWay finishes in OneWay: no action

      on SendLie in OneWay finishes in OneWay: SEND_LIE

      on LieRcvd in OneWay finishes in OneWay: PROCESS_LIE

      on TimerTick in ThreeWay finishes in ThreeWay: PUSH SendLie event,
      if holdtime expired PUSH HoldtimeExpired event

6.  Implementation and Operation: Further Details

6.1.  Considerations for Leaf-Only Implementation

   RIFT can and is intended to be stretched to the lowest level in the
   IP fabric to integrate ToRs or even servers.  Since those entities
   would run as leafs only, it is worth to observe that a leaf only
   version is significantly simpler to implement and requires much less
   resources:

   1.  Under normal conditions, the leaf needs to support a multipath
       default route only.  In most catastrophic partitioning case it
       has to be capable of accommodating all the leaf routes in its own
       PoD to prevent black-holing.

   2.  Leaf nodes hold only their own North TIEs and South TIEs of Level
       1 nodes they are connected to; so overall few in numbers.

   3.  Leaf node does not have to support any type of de-aggregation
       computation or propagation.

   4.  Leaf nodes do not have to support overload bit normally.

   5.  Unless optional leaf-2-leaf procedures are desired default route
       origination and South TIE origination is unnecessary.
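   As a non-normative illustration of item 1 above, the only forwarding
   state such a leaf needs under normal conditions is a single multipath
   default route; a stable per-flow hash then spreads traffic over the
   northbound spines (the next-hop names below are hypothetical):

```python
# Illustrative sketch of a leaf's single piece of forwarding state:
# one ECMP default route over all northbound spines.
import hashlib

DEFAULT_NEXTHOPS = ["spine-111", "spine-112"]  # hypothetical names

def pick_nexthop(flow, nexthops=DEFAULT_NEXTHOPS):
    # Hash the 5-tuple so a given flow always takes the same path
    # while distinct flows spread over all parallel paths.
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return nexthops[digest[0] % len(nexthops)]

flow = ("10.1.1.1", "10.2.2.2", 6, 12345, 443)
print(pick_nexthop(flow))
```

   Real implementations typically delegate this to hardware ECMP; the
   sketch only shows why no per-prefix state is required at the leaf in
   the normal case.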

6.2.  Considerations for Spine Implementation

   In case of spines, i.e. nodes that will never act as Top of Fabric, a
   full implementation is not required, specifically the node does not
   need to perform any computation of negative disaggregation except
   respecting northbound disaggregation advertised from the north.

6.3.  Adaptations to Other Proposed Data Center Topologies

                         .  +-----+        +-----+
                         .  |     |        |     |
                         .+-+ S0  |        | S1  |
                         .| ++---++        ++---++
                         .|  |   |          |   |
                         .|  | +------------+   |
                         .|  | | +------------+ |
                         .|  | |              | |
                         .| ++-+--+        +--+-++
                         .| |     |        |     |
                         .| | A0  |        | A1  |
                         .| +-+--++        ++---++
                         .|   |  |          |   |
                         .|   |  +------------+ |
                         .|   | +-----------+ | |
                         .|   | |             | |
                         .| +-+-+-+        +--+-++
                         .+-+     |        |     |
                         .  | L0  |        | L1  |
                         .  +-----+        +-----+

                         Figure 35: Level Shortcut

   Strictly speaking, RIFT is not limited to Clos variations only.  The
   protocol preconditions only a sense of 'compass rose direction'
   achieved by configuration (or derivation) of levels and other
   topologies are possible within this framework.  So, conceptually, one
   could include leaf to leaf links and even shortcut between levels.
   As an example, shortcutting levels illustrated in Figure 35 will lead
   either to suboptimal routing when L0 sends traffic to L1 (since using
   S0's default route will lead to the traffic being sent back to A0 or
   A1) or the leafs need each other's routes installed to understand
   that only A0 and A1 should be used to talk to each other.

   Whether such modifications of topology constraints make sense is
   dependent on many technology variables and the exhaustive treatment
   of the topic is definitely outside the scope of this document.

6.4.  Originating Non-Default Route Southbound

   Obviously, an implementation may choose to originate southbound
   instead of a strict default route (as described in Section 4.2.3.8)
   a shorter prefix P' but in such a scenario all addresses carried
   within the RIFT domain must be contained within P'.
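   A minimal sketch of that containment check, assuming Python's
   standard `ipaddress` module (the prefix values are purely
   illustrative):

```python
# Hedged sketch: verify that every prefix advertised inside the RIFT
# domain is contained in the southbound originated prefix P'.
import ipaddress

def contained_in(p_prime, domain_prefixes):
    net = ipaddress.ip_network(p_prime)
    return all(ipaddress.ip_network(p).subnet_of(net) for p in domain_prefixes)

print(contained_in("10.0.0.0/8", ["10.1.0.0/16", "10.2.2.0/24"]))  # True
print(contained_in("10.0.0.0/8", ["192.0.2.0/24"]))                # False
```

   An implementation originating P' could run such a check at
   configuration time and refuse prefixes that would escape P'.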

7.  Security Considerations

7.1.  General

   One can consider attack vectors where a router may reboot many times
   while changing its system ID and pollute the network with many stale
   TIEs or TIEs are sent with very long lifetimes and not cleaned up
   when the routes vanish.  Those attack vectors are not unique to
   RIFT.  Given large memory footprints available today those attacks
   should be relatively benign.  Otherwise a node SHOULD implement a
   strategy of discarding contents of all TIEs that were not present in
   the SPF tree over a certain, configurable period of time.  Since the
   protocol, like all modern link-state protocols, is self-stabilizing
   and will advertise the presence of such TIEs to its neighbors, they
   can be re-requested again if a computation finds that it sees an
   adjacency formed towards the system ID of the discarded TIEs.

7.2.  ZTP

   Section 4.2.7 presents many attack vectors in untrusted
   environments, starting with nodes that oscillate their level offers
   to the possibility of a node offering a 3-way adjacency with the
   highest possible level value with a very long holdtime trying to put
   itself "on top of the lattice" and with that gaining access to the
   whole southbound topology.  Session authentication mechanisms are
   necessary in environments where this is possible and RIFT provides
   the according security envelope to ensure this if desired.

7.3.  Lifetime

   Traditional IGP protocols are vulnerable to lifetime modification and
   replay attacks that can be somewhat mitigated by using techniques
   like [RFC7987].  RIFT removes this attack vector by protecting the
   lifetime behind a signature computed over it and additional nonce
   combination which makes even the replay attack window very small and
   for practical purposes irrelevant since lifetime cannot be
   artificially shortened by the attacker.

7.4.  Packet Number

   Optional packet number is carried in the security envelope without
   any encryption protection and is hence vulnerable to replay and
   modification attacks.  Contrary to nonces this number must change on
   every packet and would present a very high cryptographic load if
   signed.  The attack vector packet number present is relatively
   benign.  Changing the packet number by a man-in-the-middle attack
   will only affect operational validation tools and possibly some
   performance optimizations on flooding.  It is expected that an
   implementation detecting too many "fake losses" or "misorderings"
   due to the attack on the packet number would simply suppress its
   further processing.

7.5.  Outer Fingerprint Attacks

   A node can try to inject LIE packets observing a conversation on the
   wire by using the outer key ID albeit it cannot generate valid
   hashes in case it changes the integrity of the message so the only
   possible attack is DoS due to excessive LIE validation.

   A node can try to replay previous LIEs with changed state that it
   recorded but the attack is hard to replicate since the nonce
   combination must match the ongoing exchange and is then limited to a
   single flap only since both nodes will advance their nonces in case
   the adjacency state changed.  Even in the most unlikely case the
   attack length is limited due to both sides periodically increasing
   their nonces.

7.6.  TIE Origin Fingerprint DoS Attacks

   A compromised node can attempt to generate "fake TIEs" using other
   nodes' TIE origin key identifiers.  Albeit the ultimate validation
   of the origin fingerprint will fail in such scenarios and not
   progress further than immediately peering nodes, the resulting
   denial of service attack seems unavoidable since the TIE origin key
   id is only protected by the, here assumed to be compromised, node.

7.7.  Host Implementations

   It can be reasonably expected that, with the proliferation of RotH,
   servers rather than dedicated networking devices will constitute a
   significant amount of RIFT devices.  Given their normally far wider
   software envelope and access granted to them, such servers are also
   far more likely to be compromised and present an attack vector on
   the protocol.  Hijacking of prefixes to attract traffic is a trust
   problem and cannot be addressed within the protocol if the trust
   model is breached, i.e. the server presents valid credentials to
   form an adjacency and issue TIEs.  However, in a more devious way,
   the servers can present DoS (or even DDoS) vectors of issuing too
   many LIE packets, flooding large amounts of North TIEs and similar
   anomalies.  A prudent implementation hosting leafs should implement
   thresholds and raise warnings when a leaf is advertising a number of
   TIEs in excess of those.
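   A hedged sketch of such a threshold guard follows; the limit and the
   names are invented for illustration since the specification does not
   define concrete values:

```python
# Illustrative threshold guard: warn when a leaf neighbor advertises
# more TIEs than a locally configured ceiling.
MAX_LEAF_TIES = 64  # illustrative limit, not taken from the specification

def check_leaf_tie_count(neighbor, tie_count, limit=MAX_LEAF_TIES):
    if tie_count > limit:
        return f"warning: leaf {neighbor} advertises {tie_count} TIEs (limit {limit})"
    return None

print(check_leaf_tie_count("leaf-112", 9))    # None
print(check_leaf_tie_count("leaf-112", 200))
# warning: leaf leaf-112 advertises 200 TIEs (limit 64)
```

   Analogous rate limits can be applied to LIE reception from the same
   hosts.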

8.  IANA Considerations

   This specification requests multicast address assignments and
   standard port numbers.  Additionally registries for the schema are
   requested and suggested values provided that reflect the numbers
   allocated in the given schema.

8.1.  Requested Multicast and Port Numbers

   This document requests allocation in the 'IPv4 Multicast Address
   Space' registry the suggested value of 224.0.0.120 as
   'ALL_V4_RIFT_ROUTERS' and in the 'IPv6 Multicast Address Space'
   registry the suggested value of FF02::A1F7 as 'ALL_V6_RIFT_ROUTERS'.

   This document requests in the 'Service Name and Transport Protocol
   Port Number Registry' the allocation of a suggested value of 914 on
   udp for 'RIFT_LIES_PORT' and a suggested value of 915 for
   'RIFT_TIES_PORT'.

8.2.  Requested Registries with Suggested Values

   This section requests registries that help govern the schema via
   usual IANA registry procedures.  A top level 'RIFT' registry should
   hold the according registries requested in following sections with
   their pre-defined values.  IANA is requested to store the schema
   version introducing the allocated value as well as, optionally, its
   description when present.  This will allow to assign different
   values to an entry depending on schema version.  Alternately, IANA
   is requested to consider a root RIFT/2 registry to store RIFT schema
   major version 2 values and may be requested in the future to create
   a RIFT/3 registry under that.  In any case, IANA is requested to
   store the schema version in the entries since that will allow to
   distinguish between minor versions in the same major schema version.
   All values not suggested are to be considered `Unassigned`.  The
   range of every registry is a 16-bit integer.  Allocation of new
   values is always performed via `Expert Review` action.

8.2.1.  Registry RIFT/common/AddressFamilyType

   Address family type.

8.2.1.1.  Requested Entries

          Name                  Value Schema Version Description
          Illegal                   0            2.0
          AddressFamilyMinValue     1            2.0
          IPv4                      2            2.0
          IPv6                      3            2.0
          AddressFamilyMaxValue     4            2.0

8.2.2.  Registry RIFT/common/HierarchyIndications

   Flags indicating node configuration in case of ZTP.

8.2.2.1.  Requested Entries

   Name                                 Value Schema Version Description
   leaf_only                                0            2.0
   leaf_only_and_leaf_2_leaf_procedures     1            2.0
   top_of_fabric                            2            2.0

8.2.3.  Registry RIFT/common/IEEE802_1ASTimeStampType

   Timestamp per IEEE 802.1AS, all values MUST be interpreted in
   implementation as unsigned.

8.2.3.1.  Requested Entries

                 Name    Value Schema Version Description
                 AS_sec      1            2.0
                 AS_nsec     2            2.0

8.2.4.  Registry RIFT/common/IPAddressType

   IP address type.

8.2.4.1.  Requested Entries

             Name        Value Schema Version Description
             ipv4address     1            2.0 Content is IPv4
             ipv6address     2            2.0 Content is IPv6

8.2.5.  Registry RIFT/common/IPPrefixType

   Prefix advertisement.

   @note: for interface addresses the protocol can propagate the
   address part beyond the subnet mask and on reachability computation
   that has to be normalized.  The non-significant bits can be used for
   operational purposes.

8.2.5.1.  Requested Entries

                Name       Value Schema Version Description
                ipv4prefix     1            2.0
                ipv6prefix     2            2.0
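   The normalization the note above calls for amounts to masking the
   bits beyond the prefix length before the address is used in
   reachability computation; a minimal sketch (helper name is ours):

```python
import ipaddress

def normalize_prefix(address: str, prefixlen: int) -> str:
    """Zero the bits beyond the subnet mask before reachability computation."""
    # strict=False accepts an address with host bits set and masks them.
    net = ipaddress.ip_network((address, prefixlen), strict=False)
    return f"{net.network_address}/{net.prefixlen}"
```

   For example, normalize_prefix("10.1.2.3", 24) yields "10.1.2.0/24";
   the masked-off host bits remain available for operational purposes.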

8.2.6.  Registry RIFT/common/IPv4PrefixType

   IPv4 prefix type.

8.2.6.1.  Requested Entries

                Name      Value Schema Version Description
                address       1            2.0
                prefixlen     2            2.0

8.2.7.  Registry RIFT/common/IPv6PrefixType

   IPv6 prefix type.

8.2.7.1.  Requested Entries

                Name      Value Schema Version Description
                address       1            2.0
                prefixlen     2            2.0

8.2.8.  Registry RIFT/common/PrefixSequenceType

   Sequence of a prefix in case of move.

8.2.8.1.  Requested Entries

   Name          Value      Schema Description
                           Version
   timestamp         1         2.0
   transactionid     2         2.0 Transaction ID set by client, e.g.
                                   in 6LoWPAN.

8.2.9.  Registry RIFT/common/RouteType

   RIFT route types.

   @note: route types MUST be ordered on their preference.  PGP
   prefixes are most preferred, attracting traffic north (towards the
   spine), then south normal prefixes attracting traffic south (towards
   the leafs), i.e. a prefix in a NORTH PREFIX TIE is preferred over
   one in a SOUTH PREFIX TIE.

   @note: The only purpose of those values is to introduce an ordering
   whereas an implementation can choose internally any other values as
   long as the ordering is preserved.

8.2.9.1.  Requested Entries

           Name                Value Schema Version Description
           Illegal                 0            2.0
           RouteTypeMinValue       1            2.0
           Discard                 2            2.0
           LocalPrefix             3            2.0
           SouthPGPPrefix          4            2.0
           NorthPGPPrefix          5            2.0
           NorthPrefix             6            2.0
           NorthExternalPrefix     7            2.0
           SouthPrefix             8            2.0
           SouthExternalPrefix     9            2.0
           NegativeSouthPrefix    10            2.0
           RouteTypeMaxValue      11            2.0
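   Given the ordering notes above, route preference can be implemented
   directly on the registry values (or any internal values preserving
   the same order); a minimal sketch, with names adapted from the
   requested entries:

```python
from enum import IntEnum

class RouteType(IntEnum):
    # Values follow the requested registry entries; a lower value
    # is more preferred, per the ordering note.
    DISCARD = 2
    LOCAL_PREFIX = 3
    SOUTH_PGP_PREFIX = 4
    NORTH_PGP_PREFIX = 5
    NORTH_PREFIX = 6
    NORTH_EXTERNAL_PREFIX = 7
    SOUTH_PREFIX = 8
    SOUTH_EXTERNAL_PREFIX = 9
    NEGATIVE_SOUTH_PREFIX = 10

def preferred(a: RouteType, b: RouteType) -> RouteType:
    """Return the more preferred of two route types (smaller value wins)."""
    return a if a <= b else b
```

   Note how a NORTH_PREFIX wins over a SOUTH_PREFIX, matching the
   NORTH PREFIX TIE over SOUTH PREFIX TIE preference above.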

8.2.10.  Registry RIFT/common/TIETypeType

   Type of TIE.

   This enum indicates what TIE type the TIE is carrying.  In case the
   value is not known to the receiver, the TIE MUST be re-flooded.
   This allows for future extensions of the protocol within the same
   major schema with types opaque to some nodes UNLESS the flooding
   scope is not the same as prefix TIE, then a major version revision
   MUST be performed.

8.2.10.1.  Requested Entries

   Name                                        Value  Schema Description
                                                     Version
   Illegal                                         0     2.0
   TIETypeMinValue                                 1     2.0
   NodeTIEType                                     2     2.0
   PrefixTIEType                                   3     2.0
   PositiveDisaggregationPrefixTIEType             4     2.0
   NegativeDisaggregationPrefixTIEType             5     2.0
   PGPrefixTIEType                                 6     2.0
   KeyValueTIEType                                 7     2.0
   ExternalPrefixTIEType                           8     2.0
   PositiveExternalDisaggregationPrefixTIEType     9     2.0
   TIETypeMaxValue                                10     2.0

8.2.11.  Registry RIFT/common/TieDirectionType

   Direction of TIEs.

8.2.11.1.  Requested Entries

            Name              Value Schema Version Description
            Illegal               0            2.0
            South                 1            2.0
            North                 2            2.0
            DirectionMaxValue     3            2.0

8.2.12.  Registry RIFT/encoding/Community

   Prefix community.

8.2.12.1.  Requested Entries

               Name   Value Schema Version Description
               top        1            2.0 Higher order bits
               bottom     2            2.0 Lower order bits

8.2.13.  Registry RIFT/encoding/KeyValueTIEElement

   Generic key value pairs.

8.2.13.1.  Requested Entries

                Name      Value Schema Version Description
                keyvalues     1            2.0

8.2.14.  Registry RIFT/encoding/LIEPacket

   RIFT LIE Packet.

   @note: this node's level is already included in the packet header

8.2.14.1.  Requested Entries

   Name                        Value  Schema Description
                                      Version
   name                            1     2.0 Node or adjacency name.
   local_id                        2     2.0 Local link ID.
   flood_port                      3     2.0 UDP port to which we can
                                              receive flooded TIEs.
   link_mtu_size                   4     2.0 Layer 3 MTU, used to
                                              discover MTU mismatch.
   link_bandwidth                  5     2.0 Local link bandwidth on
                                              the interface.
   neighbor                        6     2.0 Reflects the neighbor once
                                              received to provide 3-way
                                              connectivity.
   pod                             7     2.0 Node's PoD.
   node_capabilities              10     2.0 Node capabilities shown in
                                              the LIE. The capabilities
                                              MUST match the
                                              capabilities shown in the
                                              Node TIEs, otherwise the
                                              behavior is unspecified.
                                              A node detecting the
                                              mismatch SHOULD generate
                                              an according error.
   link_capabilities              11     2.0 Capabilities of this link.
   holdtime                       12     2.0 Required holdtime of the
                                              adjacency, i.e. how much
                                              time MUST expire without
                                              LIE for the adjacency to
                                              drop.
   label                          13     2.0 Unsolicited, downstream
                                              assigned locally
                                              significant label value
                                              for the adjacency.
   not_a_ztp_offer                21     2.0 Indicates that the level
                                              on the LIE MUST NOT be
                                              used to derive a ZTP
                                              level by the receiving
                                              node.
   you_are_flood_repeater         22     2.0 Indicates to northbound
                                              neighbor that it should
                                              be reflooding this node's
                                              N-TIEs to achieve flood
                                              reduction and balancing
                                              for northbound flooding.
                                              To be ignored if received
                                              from a northbound
                                              adjacency.
   you_are_sending_too_quickly    23     2.0 Can be optionally set to
                                              indicate to neighbor that
                                              packet losses are seen on
                                              reception based on packet
                                              numbers or the rate is
                                              too high. The receiver
                                              SHOULD temporarily slow
                                              down flooding rates.
   instance_name                  24     2.0 Instance name in case
                                              multiple RIFT instances
                                              running on same interface.

8.2.15.  Registry RIFT/encoding/LinkCapabilities

   Link capabilities.

8.2.15.1.  Requested Entries

    Name                  Value      Schema Description
                                    Version
    bfd                       1         2.0 Indicates that the link is
                                            supporting BFD.
    v4_forwarding_capable     2         2.0

8.2.16.  Registry RIFT/encoding/LinkIDPair

   LinkID pair describes one of parallel links between two nodes.

8.2.16.1.  Requested Entries
   Name                       Value  Schema Description
                                     Version
   local_id                       1     2.0 Node-wide unique value for
                                             the local link.
   remote_id                      2     2.0 Received remote link ID for
                                             this link.
   platform_interface_index      10     2.0 Describes the local
                                             interface index of the
                                             link.
   platform_interface_name       11     2.0 Describes the local
                                             interface name.
   trusted_outer_security_key    12     2.0 Indication whether the link
                                             is secured, i.e. protected
                                             by outer key, absence of
                                             this element means no
                                             indication, undefined
                                             outer key means not
                                             secured.
   bfd_up                        13     2.0 Indication whether the link
                                             is protected by
                                             established BFD session.

8.2.17.  Registry RIFT/encoding/Neighbor

   Neighbor structure.

8.2.17.1.  Requested Entries

      Name       Value Schema Version Description
      originator     1            2.0 System ID of the originator.
      remote_id      2            2.0 ID of the remote side of the
                                      link.

8.2.18.  Registry RIFT/encoding/NodeCapabilities

   Capabilities the node supports.

   @note: The schema may add to this field future capabilities to
   indicate whether it will support interpretation of future schema
   extensions on the same major revision.  Such fields MUST be optional
   and have an implicit or explicit false default value.  If a future
   capability changes route selection or generates blackholes if some
   nodes are not supporting it then a major version increment is
   unavoidable.

8.2.18.1.  Requested Entries

   Name                   Value  Schema Description
                                 Version
   protocol_minor_version     1     2.0 Must advertise supported minor
                                        version dialect that way.
   flood_reduction            2     2.0 Can this node participate in
                                        flood reduction.
   hierarchy_indications      3     2.0 Does this node restrict itself
                                        to be top-of-fabric or leaf
                                        only (in ZTP) and does it
                                        support leaf-2-leaf procedures.
8.2.19.  Registry RIFT/encoding/NodeFlags

   Indication flags of the node.

8.2.19.1.  Requested Entries

    Name     Value    Schema Description
                      Version
    overload     1       2.0 Indicates that node is in overload, do
                             not transit traffic through it.

C.2.  LIE FSM

   Initial state is OneWay.

digraph Gd436cc3ced8c471eb30bd4f3ac946261 {
    N06108ba9ac894d988b3e4e8ea5ace007
[label="Enter"]
[style="invis"]
[shape="plain"];
    Na47ff5eac9aa4b2eaf12839af68aab1f
[label="MultipleNeighborsWait"]
[shape="oval"];
    N57a829be68e2489d8dc6b84e10597d0b
[label="OneWay"]
[shape="oval"];
    Na641d400819a468d987e31182cdb013e
[label="ThreeWay"]
[shape="oval"];
    Necfbfc2d8e5b482682ee66e604450c7b
[label="Enter"]
[style="dashed"]
[shape="plain"];
    N16db54bf2c5d48f093ad6c18e70081ee
[label="TwoWay"]
[shape="oval"];
    N1b89016876b44cc1b9c1e4a735769560
[label="Exit"]
[style="invis"]
[shape="plain"];
    N16db54bf2c5d48f093ad6c18e70081ee -> N57a829be68e2489d8dc6b84e10597d0b
[label="|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> N57a829be68e2489d8dc6b84e10597d0b
[label="|NeighborDroppedReflection|"]
[color="red"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> Na47ff5eac9aa4b2eaf12839af68aab1f
[label="|MultipleNeighbors|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Necfbfc2d8e5b482682ee66e604450c7b -> N57a829be68e2489d8dc6b84e10597d0b
[label=""]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> N16db54bf2c5d48f093ad6c18e70081ee
[label="|NewNeighbor|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> Na47ff5eac9aa4b2eaf12839af68aab1f
[label="|MultipleNeighbors|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N16db54bf2c5d48f093ad6c18e70081ee -> N16db54bf2c5d48f093ad6c18e70081ee
[label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> N16db54bf2c5d48f093ad6c18e70081ee
[label="|NeighborDroppedReflection|"]
[color="red"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na47ff5eac9aa4b2eaf12839af68aab1f -> Na47ff5eac9aa4b2eaf12839af68aab1f
[label="|TimerTick|\n|MultipleNeighbors|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> N57a829be68e2489d8dc6b84e10597d0b
[label="|LevelChanged|\n|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> Na641d400819a468d987e31182cdb013e
[label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> N57a829be68e2489d8dc6b84e10597d0b
[label="|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na47ff5eac9aa4b2eaf12839af68aab1f -> Na47ff5eac9aa4b2eaf12839af68aab1f
[label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N16db54bf2c5d48f093ad6c18e70081ee -> N57a829be68e2489d8dc6b84e10597d0b
[label="|LevelChanged|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> N57a829be68e2489d8dc6b84e10597d0b
[label="|LevelChanged|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N16db54bf2c5d48f093ad6c18e70081ee -> Na47ff5eac9aa4b2eaf12839af68aab1f
[label="|MultipleNeighbors|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na47ff5eac9aa4b2eaf12839af68aab1f -> N57a829be68e2489d8dc6b84e10597d0b
[label="|MultipleNeighborsDone|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N16db54bf2c5d48f093ad6c18e70081ee -> Na641d400819a468d987e31182cdb013e
[label="|ValidReflection|"]
[color="red"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na47ff5eac9aa4b2eaf12839af68aab1f -> N57a829be68e2489d8dc6b84e10597d0b
[label="|LevelChanged|"]
[color="blue"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> Na641d400819a468d987e31182cdb013e
[label="|TimerTick|\n|LieRcvd|\n|SendLie|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> N57a829be68e2489d8dc6b84e10597d0b
[label="|TimerTick|\n|LieRcvd|\n|NeighborChangedLevel|\n|NeighborChangedAddress|\n|NeighborAddressAdded|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|\n|SendLie|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N57a829be68e2489d8dc6b84e10597d0b -> Na641d400819a468d987e31182cdb013e
[label="|ValidReflection|"]
[color="red"]
[arrowhead="normal" dir="both" arrowtail="none"];
    N16db54bf2c5d48f093ad6c18e70081ee -> N16db54bf2c5d48f093ad6c18e70081ee
[label="|TimerTick|\n|LieRcvd|\n|SendLie|"]
[color="black"]
[arrowhead="normal" dir="both" arrowtail="none"];
    Na641d400819a468d987e31182cdb013e -> Na641d400819a468d987e31182cdb013e
[label="|ValidReflection|"]
[color="red"]
[arrowhead="normal" dir="both" arrowtail="none"];
}

                                LIE FSM DOT
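   The transitions in the DOT description above can be driven by a
   simple dispatch table keyed on (state, event); a minimal sketch with
   only a few of the diagram's transitions filled in (action handling
   elided, type names are ours):

```python
from enum import Enum, auto

class State(Enum):
    ONE_WAY = auto()
    TWO_WAY = auto()
    THREE_WAY = auto()
    MULTIPLE_NEIGHBORS_WAIT = auto()

# (state, event) -> next state; a subset of edges from the diagram.
TRANSITIONS = {
    (State.ONE_WAY, "NewNeighbor"): State.TWO_WAY,
    (State.ONE_WAY, "ValidReflection"): State.THREE_WAY,
    (State.TWO_WAY, "ValidReflection"): State.THREE_WAY,
    (State.THREE_WAY, "NeighborDroppedReflection"): State.TWO_WAY,
    (State.TWO_WAY, "HoldtimeExpired"): State.ONE_WAY,
    (State.THREE_WAY, "MultipleNeighbors"): State.MULTIPLE_NEIGHBORS_WAIT,
    (State.MULTIPLE_NEIGHBORS_WAIT, "MultipleNeighborsDone"): State.ONE_WAY,
}

def step(state: State, event: str) -> State:
    """Apply one event; (state, event) pairs not listed keep the state."""
    return TRANSITIONS.get((state, event), state)
```

   Self-loop events such as TimerTick fall through to the default and
   keep the current state, matching the diagram's looping edges.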

   Events

   o  TimerTick: one second timer tick

   o  LevelChanged: node's level has been changed by ZTP or
      configuration

   o  HALChanged: best HAL computed by ZTP has changed

   o  HATChanged: HAT computed by ZTP has changed
   o  HALSChanged: set of HAL offering systems computed by ZTP has
      changed

   o  LieRcvd: received LIE

   o  NewNeighbor: new neighbor parsed

   o  ValidReflection: received own reflection from neighbor

   o  NeighborDroppedReflection: lost previous own reflection from
      neighbor

   o  NeighborChangedLevel: neighbor changed advertised level

   o  NeighborChangedAddress: neighbor changed IP address

   o  UnacceptableHeader: unacceptable header seen

   o  MTUMismatch: MTU mismatched

   o  PODMismatch: Unacceptable PoD seen

   o  HoldtimeExpired: adjacency hold down expired

   o  MultipleNeighbors: more than one neighbor seen on interface

   o  MultipleNeighborsDone: cooldown for multiple neighbors expired

   o  SendLie: send a LIE out

   o  UpdateZTPOffer: update this node's ZTP offer

   Actions

      on MTUMismatch in OneWay finishes in OneWay: no action

      on HoldtimeExpired in OneWay finishes in OneWay: no action

      on LevelChanged in ThreeWay finishes in OneWay: update level
      with event value

8.2.20.  Registry RIFT/encoding/NodeNeighborsTIEElement

   Neighbor of a node.

8.2.20.1.  Requested Entries

    Name      Value  Schema Description
                     Version
    level         1     2.0 Level of neighbor.
    cost          3     2.0
    link_ids      4     2.0 can carry description of multiple
                            parallel links in a TIE
    bandwidth     5     2.0 total bandwidth to neighbor, this will
                            be normally sum of the bandwidths of all
                            the parallel links.

8.2.21.  Registry RIFT/encoding/NodeTIEElement

   Description of a node.

   It may occur multiple times in different TIEs but if either

      capabilities values do not match or

      flags values do not match or

      neighbors repeat with different values

   the behavior is undefined and a warning SHOULD be generated.
   Neighbors can be distributed across multiple TIEs however if the
   sets are disjoint.  Miscablings SHOULD be repeated in every node
   TIE, otherwise the behavior is undefined.

   @note: Observe that absence of fields implies defined defaults.

8.2.21.1.  Requested Entries

   Name            Value  Schema Description
                          Version
   level               1     2.0 Level of the node.
   neighbors           2     2.0 Node's neighbors. If neighbor systemID
                                  repeats in other node TIEs of same
                                  node the behavior is undefined.
   capabilities        3     2.0 Capabilities of the node.
   flags               4     2.0 Flags of the node.
   name                5     2.0 Optional node name for easier
                                  operations.
   pod                 6     2.0 PoD to which the node belongs.
   miscabled_links    10     2.0 If any local links are miscabled, the
                                  indication is flooded.

8.2.22.  Registry RIFT/encoding/PacketContent

   Content of a RIFT packet.

8.2.22.1.  Requested Entries

                   Name Value Schema Version Description
                   lie      1            2.0
                   tide     2            2.0
                   tire     3            2.0
                   tie      4            2.0

8.2.23.  Registry RIFT/encoding/PacketHeader

   Common RIFT packet header.

8.2.23.1.  Requested Entries
   Name          Value  Schema Description
                       Version
   major_version     1     2.0 Major version of protocol.
   minor_version     2     2.0 Minor version of protocol.
   sender            3     2.0 Node sending the packet, in case of
                               LIE/TIRE/TIDE also the originator of
                               it.
   level             4     2.0 Level of the node sending the packet,
                               required on everything except LIEs.
                               Lack of presence on LIEs indicates
                               UNDEFINED_LEVEL and is used in ZTP
                               procedures.

8.2.24.  Registry RIFT/encoding/PrefixAttributes

   Attributes of a prefix.

8.2.24.1.  Requested Entries

   Name              Value  Schema Description
                           Version
   metric                2     2.0 Distance of the prefix.
   tags                  3     2.0 Generic unordered set of route tags,
                                   can be redistributed to other
                                   protocols or use within the context
                                   of real time analytics.
   monotonic_clock       4     2.0 Monotonic clock for mobile addresses.
   loopback              6     2.0 Indicates if the interface is a node
                                    loopback.
   directly_attached     7     2.0 Indicates that the prefix is
                                    directly attached, i.e. should be
                                    routed to even if the node is in
                                    overload.
   from_link            10     2.0 In case of locally originated
                                    prefixes, i.e. interface addresses
                                    this can describe which link the
                                    address belongs to.

8.2.25.  Registry RIFT/encoding/PrefixTIEElement

   TIE carrying prefixes

8.2.25.1.  Requested Entries

    Name     Value  Schema Description
                    Version
    prefixes     1     2.0 Prefixes with the associated attributes. If
                           the same prefix repeats in multiple TIEs of
                           same node behavior is unspecified.

8.2.26.  Registry RIFT/encoding/ProtocolPacket

   RIFT packet structure.

8.2.26.1.  Requested Entries

                 Name    Value Schema Version Description
                 header      1            2.0
                 content     2            2.0

8.2.27.  Registry RIFT/encoding/TIDEPacket

   TIDE with sorted TIE headers, if headers are unsorted, behavior is
   undefined.

8.2.27.1.  Requested Entries

   Name        Value Schema Version Description
   start_range     1            2.0 First TIE header in the tide
                                    packet.
   end_range       2            2.0 Last TIE header in the tide packet.
   headers         3            2.0 _Sorted_ list of headers.
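   A receiver can cheaply verify the sortedness and range containment
   the entries above imply; a minimal sketch where headers are
   represented as comparable tuples (the helper name and key
   representation are ours, for illustration):

```python
def tide_valid(start_range: tuple, end_range: tuple,
               headers: list[tuple]) -> bool:
    """Check that TIDE headers are sorted ascending and fall within
    [start_range, end_range]."""
    in_order = all(a <= b for a, b in zip(headers, headers[1:]))
    in_range = all(start_range <= h <= end_range for h in headers)
    return in_order and in_range
```

   An empty header list trivially passes, since both checks hold
   vacuously.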

8.2.28.  Registry RIFT/encoding/TIEElement

   Single element in a TIE.

   Schema enum `common.TIETypeType` in TIEID indicates which elements
   MUST be present in the TIEElement.  In case of mismatch the
   unexpected elements MUST be ignored.  In case of lack of expected
   element in the TIE an error MUST be reported and the TIE MUST be
   ignored.

   This type can be extended with new optional elements for new
   `common.TIETypeType` values without breaking the major version but
   if it is necessary to understand whether all nodes support the new
   type a node capability must be added as well.

8.2.28.1.  Requested Entries
   Name                            Valu Schem Description
                                      e a Ver
                                         sion
   node                               1   2.0 Used in TwoWay finishes case of enum commo
                                              n.TIETypeType.NodeTIEType.
   prefixes                           2   2.0 Used in TwoWay: store HALS

      on PODMismatch in TwoWay finishes in OneWay: no action

      on LieRcvd in TwoWay finishes in TwoWay: PROCESS_LIE

      on PODMismatch case of enum commo
                                              n.TIETypeType.PrefixTIETyp
                                              e.
   positive_disaggregation_prefixe    3   2.0 Positive prefixes (always
   s                                          southbound).  It MUST NOT
                                              be advertised within a
                                              North TIE and ignored
                                               otherwise.
   negative_disaggregation_prefixe    5   2.0 Transitive, negative
   s                                          prefixes (always
                                              southbound) which MUST be
                                              aggregated and propagated
                                              according to the
                                              specification southwards
                                              towards lower levels to
                                              heal pathological upper
                                              level partitioning,
                                              otherwise blackholes may
                                               occur in multiplane
                                              fabrics.  It MUST NOT be
                                              advertised within a North
                                              TIE.
   external_prefixes                  6   2.0 Externally reimported
                                              prefixes.
   positive_external_disaggregatio    7   2.0 Positive external
   n_prefixes                                 disaggregated prefixes
                                              (always southbound).  It
                                              MUST NOT be advertised
                                              within a North TIE and
                                              ignored otherwise.
   keyvalues                          9   2.0 Key-Value store elements.

8.2.29.  Registry RIFT/encoding/TIEHeader

   Header of a TIE.

   @note: TIEID space is a total order achieved by comparing the
   elements in sequence defined and comparing each value as an unsigned
   integer of according length.

   @note: After sequence number the lifetime received on the envelope
   must be used for comparison before further fields.

   @note: `origination_time` and `origination_lifetime` are disregarded
   for comparison purposes and carried purely for debugging/security
   purposes if present.

8.2.29.1.  Requested Entries

   Name                 Value  Schema Description
                              Version
   tieid                    2     2.0 ID of the tie.
   seq_nr                   3     2.0 Sequence number of the tie.
   origination_time        10     2.0 Absolute timestamp when the TIE
                                       was generated. This can be used on
                                      fabrics with synchronized clock to
                                      prevent lifetime modification
                                      attacks.
   origination_lifetime    12     2.0 Original lifetime when the TIE was
                                       generated. This can be used on
                                      fabrics with synchronized clock to
                                      prevent lifetime modification
                                      attacks.

8.2.30.  Registry RIFT/encoding/TIEHeaderWithLifeTime

   Header of a TIE as described in TIRE/TIDE.

8.2.30.1.  Requested Entries

   Name               Value  Schema Description
                            Version
   header                 1     2.0
   remaining_lifetime     2     2.0 Remaining lifetime that expires down
                                     to 0 just like in ISIS.  TIEs with
                                    lifetimes differing by less than
                                    `lifetime_diff2ignore` MUST be
                                    considered EQUAL.
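
   The remaining lifetime comparison rule can be sketched as follows;
   this is non-normative, reusing the `lifetime_diff2ignore` constant
   value from the common.thrift schema.

```python
# Non-normative sketch: TIEs whose remaining lifetimes differ by less
# than `lifetime_diff2ignore` MUST be considered EQUAL (given other
# header fields are equal). Constant value as in common.thrift.

LIFETIME_DIFF2IGNORE = 400  # seconds

def lifetimes_equal(remaining_a: int, remaining_b: int) -> bool:
    """True if the two remaining lifetimes are considered EQUAL."""
    return abs(remaining_a - remaining_b) < LIFETIME_DIFF2IGNORE
```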

8.2.31.  Registry RIFT/encoding/TIEID

   ID of a TIE.

   @note: TIEID space is a total order achieved by comparing the
   elements in sequence defined and comparing each value as an unsigned
   integer of according length.

8.2.31.1.  Requested Entries

      Name       Value Schema Version Description
      direction      1            2.0 direction of TIE
      originator     2            2.0 indicates originator of the TIE
      tietype        3            2.0 type of the tie
      tie_nr         4            2.0 number of the tie
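
   The total order over TIEID described in the note above amounts to a
   lexicographic comparison over the fields in the defined sequence; a
   non-normative sketch follows, with hypothetical example values.

```python
# Non-normative sketch: TIEID total order is lexicographic over
# (direction, originator, tietype, tie_nr), each value compared as an
# unsigned integer. NamedTuple comparison matches this order exactly.

from typing import NamedTuple

class TIEID(NamedTuple):
    direction: int   # unsigned
    originator: int  # unsigned
    tietype: int     # unsigned
    tie_nr: int      # unsigned

# hypothetical example values: tietype decides before tie_nr is reached
a = TIEID(direction=1, originator=42, tietype=2, tie_nr=9)
b = TIEID(direction=1, originator=42, tietype=3, tie_nr=0)
assert a < b
```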

8.2.32.  Registry RIFT/encoding/TIEPacket

   TIE packet

8.2.32.1.  Requested Entries

                 Name    Value Schema Version Description
                 header      1            2.0
                 element     2            2.0

8.2.33.  Registry RIFT/encoding/TIREPacket

   TIRE packet

8.2.33.1.  Requested Entries

                 Name    Value Schema Version Description
                 headers     1            2.0

9.  Acknowledgments

   A new routing protocol in its complexity is not a product of a parent
   but of a village as the author list shows already.  However, many
   more people provided input, fine-combed the specification based on
   their experience in design or implementation.  This section will make
   an inadequate attempt at recording their contribution.

   Many thanks to Naiming Shen for some of the early discussions around
   the topic of using IGPs for routing in topologies related to Clos.
   Russ White is to be especially acknowledged for the key conversation
   on epistemology that allowed tying current asynchronous distributed
   systems theory results to a modern protocol design presented here.
   Adrian Farrel, Joel Halpern, Jeffrey Zhang, Krzysztof Szarkowicz,
   Nagendra Kumar provided thoughtful comments that improved the
   readability of the document and found a good amount of corners where
   the light failed to shine.  Kris Price was first to mention single
   router, single arm default considerations.  Jeff Tantsura helped out
   with some initial thoughts on BFD interactions while Jeff Haas
   corrected several misconceptions about BFD's finer points.  Artur
   Makutunowicz pointed out many possible improvements and acted as
   sounding board in regard to modern protocol implementation techniques
   RIFT is exploring.  Barak Gafni formalized first time clearly the
   problem of partitioned spine and fallen leafs on a (clean) napkin in
   Singapore that led to the very important part of the specification
   centered around multiple Top-of-Fabric planes and negative
   disaggregation.  Igor Gashinsky and others shared many thoughts on
   problems encountered in design and operation of large-scale data
   center fabrics.  Xu Benchong found a delicate error in the flooding
   procedures while implementing.

10.  References

10.1.  Normative References

   [EUI64]    IEEE, "Guidelines for Use of Extended Unique Identifier
              (EUI), Organizationally Unique Identifier (OUI), and
              Company ID (CID)", IEEE EUI,
              <http://standards.ieee.org/develop/regauth/tut/eui.pdf>.

   [ISO10589]
              ISO "International Organization for Standardization",
              "Intermediate system to Intermediate system intra-domain
              routeing information exchange protocol for use in
              conjunction with the protocol for providing the
              connectionless-mode Network Service (ISO 8473), ISO/IEC
              10589:2002, Second Edition.", Nov 2002.

   [RFC1982]  Elz, R. and R. Bush, "Serial Number Arithmetic", RFC 1982,
              DOI 10.17487/RFC1982, August 1996,
              <https://www.rfc-editor.org/info/rfc1982>.

   [RFC2328]  Moy, J., "OSPF Version 2", STD 54, RFC 2328,
              DOI 10.17487/RFC2328, April 1998,
              <https://www.rfc-editor.org/info/rfc2328>.

   [RFC2365]  Meyer, D., "Administratively Scoped IP Multicast", BCP 23,
              RFC 2365, DOI 10.17487/RFC2365, July 1998,
              <https://www.rfc-editor.org/info/rfc2365>.

   [RFC4271]  Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A
              Border Gateway Protocol 4 (BGP-4)", RFC 4271,
              DOI 10.17487/RFC4271, January 2006,
              <https://www.rfc-editor.org/info/rfc4271>.

   [RFC4291]  Hinden, R. and S. Deering, "IP Version 6 Addressing
              Architecture", RFC 4291, DOI 10.17487/RFC4291, February
              2006, <https://www.rfc-editor.org/info/rfc4291>.

   [RFC5082]  Gill, V., Heasley, J., Meyer, D., Savola, P., Ed., and C.
              Pignataro, "The Generalized TTL Security Mechanism
              (GTSM)", RFC 5082, DOI 10.17487/RFC5082, October 2007,
              <https://www.rfc-editor.org/info/rfc5082>.

   [RFC5120]  Przygienda, T., Shen, N., and N. Sheth, "M-ISIS: Multi
              Topology (MT) Routing in Intermediate System to
              Intermediate Systems (IS-ISs)", RFC 5120,
              DOI 10.17487/RFC5120, February 2008,
              <https://www.rfc-editor.org/info/rfc5120>.

   [RFC5303]  Katz, D., Saluja, R., and D. Eastlake 3rd, "Three-Way
              Handshake for IS-IS Point-to-Point Adjacencies", RFC 5303,
              DOI 10.17487/RFC5303, October 2008,
              <https://www.rfc-editor.org/info/rfc5303>.

   [RFC5549]  Le Faucheur, F. and E. Rosen, "Advertising IPv4 Network
              Layer Reachability Information with an IPv6 Next Hop",
              RFC 5549, DOI 10.17487/RFC5549, May 2009,
              <https://www.rfc-editor.org/info/rfc5549>.

   [RFC5709]  Bhatia, M., Manral, V., Fanto, M., White, R., Barnes, M.,
              Li, T., and R. Atkinson, "OSPFv2 HMAC-SHA Cryptographic
              Authentication", RFC 5709, DOI 10.17487/RFC5709, October
              2009, <https://www.rfc-editor.org/info/rfc5709>.

   [RFC5881]  Katz, D. and D. Ward, "Bidirectional Forwarding Detection
              (BFD) for IPv4 and IPv6 (Single Hop)", RFC 5881,
              DOI 10.17487/RFC5881, June 2010,
              <https://www.rfc-editor.org/info/rfc5881>.

   [RFC5905]  Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch,
              "Network Time Protocol Version 4: Protocol and Algorithms
              Specification", RFC 5905, DOI 10.17487/RFC5905, June 2010,
              <https://www.rfc-editor.org/info/rfc5905>.

   [RFC7752]  Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A., and
              S. Ray, "North-Bound Distribution of Link-State and
              Traffic Engineering (TE) Information Using BGP", RFC 7752,
              DOI 10.17487/RFC7752, March 2016,
              <https://www.rfc-editor.org/info/rfc7752>.

   [RFC7987]  Ginsberg, L., Wells, P., Decraene, B., Przygienda, T., and
              H. Gredler, "IS-IS Minimum Remaining Lifetime", RFC 7987,
              DOI 10.17487/RFC7987, October 2016,
              <https://www.rfc-editor.org/info/rfc7987>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

   [RFC8200]  Deering, S. and R. Hinden, "Internet Protocol, Version 6
              (IPv6) Specification", STD 86, RFC 8200,
              DOI 10.17487/RFC8200, July 2017,
              <https://www.rfc-editor.org/info/rfc8200>.

   [RFC8202]  Ginsberg, L., Previdi, S., and W. Henderickx, "IS-IS
              Multi-Instance", RFC 8202, DOI 10.17487/RFC8202, June
              2017, <https://www.rfc-editor.org/info/rfc8202>.

   [RFC8505]  Thubert, P., Ed., Nordmark, E., Chakrabarti, S., and C.
              Perkins, "Registration Extensions for IPv6 over Low-Power
              Wireless Personal Area Network (6LoWPAN) Neighbor
              Discovery", RFC 8505, DOI 10.17487/RFC8505, November 2018,
              <https://www.rfc-editor.org/info/rfc8505>.

   [thrift]   Apache Software Foundation, "Thrift Interface Description
              Language", <https://thrift.apache.org/docs/idl>.

10.2.  Informative References

   [CLOS]     Yuan, X., "On Nonblocking Folded-Clos Networks in Computer
              Communication Environments", IEEE International Parallel &
              Distributed Processing Symposium, 2011.

   [DIJKSTRA]
              Dijkstra, E., "A Note on Two Problems in Connexion with
              Graphs", Journal Numer. Math. , 1959.

   [DOT]      Ellson, J. and L. Koutsofios, "Graphviz: open source graph
              drawing tools", Springer-Verlag , 2001.

   [DYNAMO]   De Candia et al., G., "Dynamo: amazon's highly available
              key-value store", ACM SIGOPS symposium on Operating
              systems principles (SOSP '07), 2007.

   [EPPSTEIN]
              Eppstein, D., "Finding the k-Shortest Paths", 1997.

   [FATTREE]  Leiserson, C., "Fat-Trees: Universal Networks for
              Hardware-Efficient Supercomputing", 1985.

   [IEEEstd1588]
              IEEE, "IEEE Standard for a Precision Clock Synchronization
              Protocol for Networked Measurement and Control Systems",
              IEEE Standard 1588,
              <https://ieeexplore.ieee.org/document/4579760/>.

   [IEEEstd8021AS]
              IEEE, "IEEE Standard for Local and Metropolitan Area
              Networks - Timing and Synchronization for Time-Sensitive
              Applications in Bridged Local Area Networks",
              IEEE Standard 802.1AS,
              <https://ieeexplore.ieee.org/document/5741898/>.

   [ISO10589-Second-Edition]
              International Organization for Standardization,
              "Intermediate system to Intermediate system intra-domain
              routeing information exchange protocol for use in
              conjunction with the protocol for providing the
              connectionless-mode Network Service (ISO 8473)", Nov 2002.

   [RFC0826]  Plummer, D., "An Ethernet Address Resolution Protocol: Or
              Converting Network Protocol Addresses to 48.bit Ethernet
              Address for Transmission on Ethernet Hardware", STD 37,
              RFC 826, DOI 10.17487/RFC0826, November 1982,
              <https://www.rfc-editor.org/info/rfc826>.

   [RFC2131]  Droms, R., "Dynamic Host Configuration Protocol",
              RFC 2131, DOI 10.17487/RFC2131, March 1997,
              <https://www.rfc-editor.org/info/rfc2131>.

   [RFC3626]  Clausen, T., Ed. and P. Jacquet, Ed., "Optimized Link
              State Routing Protocol (OLSR)", RFC 3626,
              DOI 10.17487/RFC3626, October 2003,
              <https://www.rfc-editor.org/info/rfc3626>.

   [RFC4861]  Narten, T., Nordmark, E., Simpson, W., and H. Soliman,
              "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861,
              DOI 10.17487/RFC4861, September 2007,
              <https://www.rfc-editor.org/info/rfc4861>.

   [RFC4862]  Thomson, S., Narten, T., and T. Jinmei, "IPv6 Stateless
              Address Autoconfiguration", RFC 4862,
              DOI 10.17487/RFC4862, September 2007,
              <https://www.rfc-editor.org/info/rfc4862>.

   [RFC6518]  Lebovitz, G. and M. Bhatia, "Keying and Authentication for
              Routing Protocols (KARP) Design Guidelines", RFC 6518,
              DOI 10.17487/RFC6518, February 2012,
              <https://www.rfc-editor.org/info/rfc6518>.

   [RFC7938]  Lapukhov, P., Premji, A., and J. Mitchell, Ed., "Use of
              BGP for Routing in Large-Scale Data Centers", RFC 7938,
              DOI 10.17487/RFC7938, August 2016,
              <https://www.rfc-editor.org/info/rfc7938>.

   [RFC8415]  Mrugalski, T., Siodelski, M., Volz, B., Yourtchenko, A.,
              Richardson, M., Jiang, S., Lemon, T., and T. Winters,
              "Dynamic Host Configuration Protocol for IPv6 (DHCPv6)",
              RFC 8415, DOI 10.17487/RFC8415, November 2018,
              <https://www.rfc-editor.org/info/rfc8415>.

   [VAHDAT08]
              Al-Fares, M., Loukissas, A., and A. Vahdat, "A Scalable,
              Commodity Data Center Network Architecture", SIGCOMM ,
              2008.

   [Wikipedia]
              Wikipedia,
              "https://en.wikipedia.org/wiki/Serial_number_arithmetic",
              2016.

Appendix A.  Sequence Number Binary Arithmetic

   The only reasonable reference to a sequence number solution cleaner
   than [RFC1982] is given in [Wikipedia].  It basically converts the
   problem into two's complement arithmetic.  Assuming straight two's
   complement subtractions on the bit-width of the sequence number, the
   according >: and =: relations are defined as:

      U_1, U_2 are 12-bits aligned unsigned version numbers

      D_f is  ( U_1 - U_2 ) interpreted as two's complement signed 12-bits
      D_b is  ( U_2 - U_1 ) interpreted as two's complement signed 12-bits

      U_1 >: U_2 IIF D_f > 0 AND D_b < 0
      U_1 =: U_2 IIF D_f = 0

   The >: relationship is anti-symmetric but not transitive.  Observe
   that this leaves >: of the numbers having maximum two's complement
   distance, e.g. ( 0 and 0x800 ), undefined in our 12-bits case since
   D_f and D_b are both -0x7ff.

   A simple example of the relationship in case of 3-bit arithmetic
   follows as table indicating D_f/D_b values and then the relationship
   of U_1 to U_2:

           U2 / U1   0    1    2    3    4    5    6    7
           0        +/+  +/-  +/-  +/-  -/-  -/+  -/+  -/+
           1        -/+  +/+  +/-  +/-  +/-  -/-  -/+  -/+
           2        -/+  -/+  +/+  +/-  +/-  +/-  -/-  -/+
           3        -/+  -/+  -/+  +/+  +/-  +/-  +/-  -/-
           4        -/-  -/+  -/+  -/+  +/+  +/-  +/-  +/-
           5        +/-  -/-  -/+  -/+  -/+  +/+  +/-  +/-
           6        +/-  +/-  -/-  -/+  -/+  -/+  +/+  +/-
           7        +/-  +/-  +/-  -/-  -/+  -/+  -/+  +/+

          U2 / U1   0    1    2    3    4    5    6    7
          0         =    >    >    >    ?    <    <    <
          1         <    =    >    >    >    ?    <    <
          2         <    <    =    >    >    >    ?    <
          3         <    <    <    =    >    >    >    ?
          4         ?    <    <    <    =    >    >    >
          5         >    ?    <    <    <    =    >    >
          6         >    >    ?    <    <    <    =    >
          7         >    >    >    ?    <    <    <    =
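
   The relations and the 3-bit table above can be reproduced with a
   short non-normative sketch; the function names are illustrative.

```python
# Non-normative sketch of the >: and =: relations defined above, using
# two's complement interpretation of the subtraction on `width` bits.

def to_signed(value: int, width: int) -> int:
    """Interpret `value` as a two's complement signed number of `width` bits."""
    value &= (1 << width) - 1
    return value - (1 << width) if value & (1 << (width - 1)) else value

def serial_gt(u1: int, u2: int, width: int) -> bool:
    """U_1 >: U_2  IIF  D_f > 0 AND D_b < 0."""
    d_f = to_signed(u1 - u2, width)
    d_b = to_signed(u2 - u1, width)
    return d_f > 0 and d_b < 0

def serial_eq(u1: int, u2: int, width: int) -> bool:
    """U_1 =: U_2  IIF  D_f = 0."""
    return to_signed(u1 - u2, width) == 0

# In the 3-bit table: 1 >: 0 holds, while 0 and 4 (maximum two's
# complement distance) compare as undefined (neither direction holds).
```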

Appendix B.  Information Elements Schema

   This section introduces the schema for information elements.  The IDL
   is Thrift [thrift].

   On schema changes that

   1.   change field numbers or

   2.   add new *required* fields or

   3.   remove any fields or

   4.   change lists into sets, unions into structures or

   5.   change multiplicity of fields or

   6.   changes name of any field or type or

   7.   change datatypes of any field or

   8.   adds, changes or removes a default value of any *existing* field
        or

   9.   removes or changes any defined constant or constant value or

   10.  changes any enumeration type except extending `common.TIEType`
        (use of enumeration types is generally discouraged)

   major version of the schema MUST increase.  All other changes MUST
   increase minor version within the same major.

   Observe however that introducing an optional field does not cause a
   major version increase even if the fields inside the structure are
   optional with defaults.

   All signed integers, as forced by Thrift [thrift] support, must be
   cast for internal purposes to equivalent unsigned values without
   discarding the signedness bit.  An implementation SHOULD try to avoid
   using the signedness bit when generating values.
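
   A minimal sketch of this cast, using Python purely for illustration;
   the helper name is hypothetical.

```python
# Non-normative sketch: reinterpreting a signed Thrift integer of
# `width` bits as the equivalent unsigned value, keeping the MSB intact.

def thrift_to_unsigned(value: int, width: int) -> int:
    """E.g. an i16 nonce decoded as -1 is the unsigned rolling value 0xFFFF."""
    return value & ((1 << width) - 1)
```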

   The schema is normative.

B.1.  common.thrift

/** @note MUST be interpreted in implementation as unsigned 64 bits.
 *        The implementation SHOULD NOT use the MSB.
 */
typedef i64      SystemIDType
typedef i32      IPv4Address
/** this has to be of length long enough to accommodate prefix */
typedef binary   IPv6Address
/** @note MUST be interpreted in implementation as unsigned */
typedef i16      UDPPortType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      TIENrType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      MTUSizeType
/** @note MUST be interpreted in implementation as unsigned rolling over number */
typedef i16      SeqNrType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      LifeTimeInSecType
/** @note MUST be interpreted in implementation as unsigned */
typedef i8       LevelType
/** optional, recommended monotonically increasing number _per packet type per adjacency_
    that can be used to detect losses/misordering/restarts.
    This will be moved into envelope in the future.
    @note MUST be interpreted in implementation as unsigned rolling over number */
typedef i16      PacketNumberType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      PodType
/** @note MUST be interpreted in implementation as unsigned. This is
        carried in the security envelope and MUST fit into 8 bits. */
typedef i8       VersionType
/** @note MUST be interpreted in implementation as unsigned */
typedef i16      MinorVersionType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      MetricType
/** @note MUST be interpreted in implementation as unsigned and unstructured */
typedef i64      RouteTagType
/** @note MUST be interpreted in implementation as unstructured label value */
typedef i32      LabelType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32      BandwithInMegaBitsType
/** @note Key Value key ID type */
typedef string   KeyIDType
/** node local, unique identification for a link (interface/tunnel
  * etc. Basically anything RIFT runs on). This is kept
  * at 32 bits so it aligns with BFD [RFC5880] discriminator size.
  */
typedef i32    LinkIDType
typedef string KeyNameType
typedef i8     PrefixLenType
/** timestamp in seconds since the epoch */
typedef i64    TimestampInSecsType
/** security nonce.
 *  @note MUST be interpreted in implementation as rolling over unsigned value */
typedef i16    NonceType
/** LIE FSM holdtime type */
typedef i16    TimeIntervalInSecType
/** Transaction ID type for prefix mobility as specified by RFC6550,  value
    MUST be interpreted in implementation as unsigned  */
typedef i8     PrefixTransactionIDType
/** timestamp per IEEE 802.1AS, values MUST be interpreted in implementation as unsigned  */
struct IEEE802_1ASTimeStampType {
    1: required     i64     AS_sec;
    2: optional     i32     AS_nsec;
}
/** generic counter type */
typedef i64 CounterType
/** Platform Interface Index type, i.e. index of interface on hardware, can be used e.g. with
    RFC5837 */
typedef i32 PlatformInterfaceIndex

/** flags indicating nodes behavior in case of ZTP
 */
enum HierarchyIndications {
    /** forces level to `leaf_level` and enables according procedures */
    leaf_only                            = 0,
    /** forces level to `leaf_level` and enables according procedures */
    leaf_only_and_leaf_2_leaf_procedures = 1,
    /** forces level to `top_of_fabric` and enables according procedures */
    top_of_fabric                        = 2,
}

const PacketNumberType  undefined_packet_number    = 0
/** This MUST be used when node is configured as top of fabric in ZTP.
    This is kept reasonably low to allow for fast ZTP convergence on
    failures. */
const LevelType   top_of_fabric_level              = 24
/** default bandwidth on a link */
const BandwithInMegaBitsType  default_bandwidth    = 100
/** fixed leaf level when ZTP is not used */
const LevelType   leaf_level                  = 0
const LevelType   default_level               = leaf_level
const PodType     default_pod                 = 0
const LinkIDType  undefined_linkid            = 0

/** default distance used */
const MetricType  default_distance         = 1
/** any distance larger than this will be considered infinity */
const MetricType  infinite_distance       = 0x7FFFFFFF
/** represents invalid distance */
const MetricType  invalid_distance        = 0
const bool overload_default               = false
const bool flood_reduction_default        = true
/** default LIE FSM holddown time */
const TimeIntervalInSecType   default_lie_holdtime  = 3
/** default ZTP FSM holddown time */
const TimeIntervalInSecType   default_ztp_holdtime  = 1
/** by default LIE levels are ZTP offers */
const bool default_not_a_ztp_offer        = false
/** by default e'one is repeating flooding */
const bool default_you_are_flood_repeater = true
/** 0 is illegal for SystemID */
const SystemIDType IllegalSystemID        = 0
/** empty set of nodes */
const set<SystemIDType> empty_set_of_nodeids = {}
/** default lifetime of TIE is one week */
const LifeTimeInSecType default_lifetime      = 604800
/** default lifetime when TIEs are purged is 5 minutes */
const LifeTimeInSecType purge_lifetime        = 300
/** round down interval when TIEs are sent with security hashes
    to prevent excessive computation. **/
const LifeTimeInSecType rounddown_lifetime_interval = 60
/** any `TieHeader` that has a smaller lifetime difference
    than this constant is equal (if other fields equal). This
    constant MUST be larger than `purge_lifetime` to avoid
    retransmissions */
const LifeTimeInSecType lifetime_diff2ignore  = 400

/** default UDP port to run LIEs on */
const UDPPortType     default_lie_udp_port       =  914
/** default UDP port to receive TIEs on, that can be peer specific */
const UDPPortType     default_tie_udp_flood_port =  915

/** default MTU link size to use */
const MTUSizeType     default_mtu_size           = 1400
/** default link being BFD capable */
const bool            bfd_default                = true

/** undefined nonce, equivalent to missing nonce */
const NonceType       undefined_nonce            = 0;
/** outer security key id, MUST be interpreted in implementation as unsigned */
typedef i8            OuterSecurityKeyID
/** security key id, MUST be interpreted in implementation as unsigned */
typedef i32           TIESecurityKeyID
/** undefined key */
const TIESecurityKeyID undefined_securitykey_id   = 0;
/** Maximum delta (negative or positive) that a mirrored nonce can
    deviate from local value to be considered valid. If nonces are
    changed every minute on both sides this opens statistically
    a `maximum_valid_nonce_delta` minutes window of identical LIEs,
    TIE, TI(x)E replays.
    The interval cannot be too small since LIE FSM may change
    states fairly quickly during ZTP without sending LIEs */
const i16             maximum_valid_nonce_delta  = 5;

/** direction of tie */
enum TieDirectionType {
    Illegal           = 0,
    South             = 1,
    North             = 2,
    DirectionMaxValue = 3,
}

/** address family */
enum AddressFamilyType {
   Illegal                = 0,
   AddressFamilyMinValue  = 1,
   IPv4     = 2,
   IPv6     = 3,
   AddressFamilyMaxValue  = 4,
}
/** IP v4 prefix type */
struct IPv4PrefixType {
    1: required IPv4Address    address;
    2: required PrefixLenType  prefixlen;
}

/** IP v6 prefix type */
struct IPv6PrefixType {
    1: required IPv6Address    address;
    2: required PrefixLenType  prefixlen;
}

/** IP address type */
union IPAddressType {
    1: optional IPv4Address   ipv4address;
    2: optional IPv6Address   ipv6address;
}

/** prefix representing reachability.

    @note: for interface
        addresses the protocol can propagate the address part beyond
        the subnet mask and on reachability computation that has to
        be normalized. The non-significant bits can be used for operational
        purposes.
*/
union IPPrefixType {
    1: optional IPv4PrefixType   ipv4prefix;
    2: optional IPv6PrefixType   ipv6prefix;
}

/** sequence of a prefix when it moves
 */
struct PrefixSequenceType {
    1: required IEEE802_1ASTimeStampType  timestamp;
    /** transaction ID set by client e.g. in 6LoWPAN */
    2: optional PrefixTransactionIDType   transactionid;
}

/** type of TIE.

    This enum indicates what TIE type the TIE is carrying.
    In case the value is not known to the receiver, it is
    re-flooded the same way as prefix TIEs. This allows for
    future extensions of the protocol within the same schema major
    with types opaque to some nodes unless the flooding scope is not
    the same as prefix TIE, then a major version revision MUST
    be performed.

*/
enum TIETypeType {
    Illegal                                     = 0,
    TIETypeMinValue                             = 1,
    /** first legal value */
    NodeTIEType                                 = 2,
    PrefixTIEType                               = 3,
    PositiveDisaggregationPrefixTIEType         = 4,
    NegativeDisaggregationPrefixTIEType         = 5,
    PGPrefixTIEType                             = 6,
    KeyValueTIEType                             = 7,
    ExternalPrefixTIEType                       = 8,
    PositiveExternalDisaggregationPrefixTIEType = 9,
    TIETypeMaxValue                             = 10,
}

/** RIFT route types.

    @note: route types which MUST be ordered on their preference
            PGP prefixes are most preferred attracting
            traffic north (towards spine) and then south
            normal prefixes are attracting traffic south (towards leafs),
            i.e. prefix in TwoWay: PUSH SendLie event NORTH PREFIX TIE is preferred over SOUTH PREFIX TIE.

    @note: The only purpose of those values is to introduce an
           ordering whereas an implementation can choose internally
           any other values as long the ordering is preserved
 */
enum RouteType {
    Illegal               =  0,
    RouteTypeMinValue     =  1,
    /** first legal value. */
    /** discard routes are most preferred */
    Discard               =  2,

    /** local prefixes are directly attached prefixes on the
     *  system such as e.g. interface routes.
     */
    LocalPrefix           =  3,
    /** advertised in S-TIEs */
    SouthPGPPrefix        =  4,
    /** advertised in N-TIEs */
    NorthPGPPrefix        =  5,
    /** advertised in N-TIEs */
    NorthPrefix           =  6,
    /** externally imported north */
    NorthExternalPrefix   =  7,
    /** advertised in S-TIEs, either normal prefix or positive disaggregation */
    SouthPrefix           =  8,
    /** externally imported south */
    SouthExternalPrefix   =  9,
    /** negative, transitive prefixes are least preferred */
    NegativeSouthPrefix   = 10,
    RouteTypeMaxValue     = 11,
}
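Since only the ordering of `RouteType` values is significant, route preference reduces to a plain minimum over the enum values. A minimal Python sketch (names shortened for illustration; the concrete values merely mirror the schema's ordering):

```python
from enum import IntEnum

class RouteType(IntEnum):
    """Mirrors the schema ordering: lower value = more preferred."""
    DISCARD = 2
    LOCAL_PREFIX = 3
    SOUTH_PGP_PREFIX = 4
    NORTH_PGP_PREFIX = 5
    NORTH_PREFIX = 6
    NORTH_EXTERNAL_PREFIX = 7
    SOUTH_PREFIX = 8
    SOUTH_EXTERNAL_PREFIX = 9
    NEGATIVE_SOUTH_PREFIX = 10

def best_route(candidates):
    # preference is just the numeric order, so min() picks the winner
    return min(candidates)

# a north prefix wins over a south prefix for the same destination
print(best_route([RouteType.SOUTH_PREFIX, RouteType.NORTH_PREFIX]).name)  # NORTH_PREFIX
```

An implementation is free to use different internal values, as the note above says, as long as this ordering is preserved.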

B.2.  encoding.thrift

/**
    Thrift file for packet encodings for RIFT
*/

include "common.thrift"

/** Represents protocol encoding schema major version */
const common.VersionType protocol_major_version = 2
/** Represents protocol encoding schema minor version */
const common.MinorVersionType protocol_minor_version =  0

/** common RIFT packet header */
struct PacketHeader {
    /** major version type of protocol */
    1: required common.VersionType major_version = protocol_major_version;
    /** minor version type of protocol */
    2: required common.VersionType minor_version = protocol_minor_version;
    /** node sending the packet, in case of LIE/TIRE/TIDE
      * also the originator of it */
    3: required common.SystemIDType  sender;
    /** level of the node sending the packet, required on everything except
      * LIEs. Lack of presence on LIEs indicates UNDEFINED_LEVEL and is used
      * in ZTP procedures.
     */
    4: optional common.LevelType            level;
}

/** community */
struct Community {
    1: required i32          top;
    2: required i32          bottom;
}

/** neighbor structure */
struct Neighbor {
    /** system ID of the originator */
    1: required common.SystemIDType        originator;
    /** ID of remote side of the link */
    2: required common.LinkIDType          remote_id;
}

/** capabilities the node supports. The schema may add to this
    field future capabilities to indicate whether it will support
    interpretation of future schema extensions on the same major
    revision. Such fields MUST be optional and have an implicit or
    explicit false default value. If a future capability changes route
    selection or generates blackholes if some nodes are not supporting
    it then a major version increment is unavoidable.
*/
struct NodeCapabilities {
    /** must advertise supported minor version dialect that way */
    1: required common.MinorVersionType        protocol_minor_version =
            protocol_minor_version;
    /** can this node participate in flood reduction */
    2: optional bool                           flood_reduction =
            common.flood_reduction_default;
    /** does this node restrict itself to be top-of-fabric or
        leaf only (in ZTP) and does it support leaf-2-leaf procedures */
    3: optional common.HierarchyIndications    hierarchy_indications;
}

/** link capabilities */
struct LinkCapabilities {
    /** indicates that the link is supporting BFD */
    1: optional bool                           bfd =
            common.bfd_default;
    /** indicates whether the interface will support v4 forwarding. This MUST
      * be set to true when LIEs from a v4 address are sent and MAY be set
      * to true in LIEs on v6 address. If v4 and v6 LIEs indicate contradicting
      * information the behavior is unspecified. */
    2: optional bool                           v4_forwarding_capable =
            true;
}

/** RIFT LIE packet

    @note this node's level is already included on the packet header */
struct LIEPacket {
    /** node or adjacency name */
    1: optional string                        name;
    /** local link ID */
    2: required common.LinkIDType             local_id;
    /** UDP port to which we can receive flooded TIEs */
    3: required common.UDPPortType            flood_port =
            common.default_tie_udp_flood_port;
    /** layer 3 MTU, used to discover mismatch. */
    4: optional common.MTUSizeType            link_mtu_size =
            common.default_mtu_size;
    /** local link bandwidth on the interface */
    5: optional common.BandwithInMegaBitsType link_bandwidth =
            common.default_bandwidth;
    /** reflects the neighbor once received to provide
        3-way connectivity */
    6: optional Neighbor                      neighbor;
    /** node's PoD */
    7: optional common.PodType                pod =
            common.default_pod;
    /** node capabilities shown in the LIE. The capabilities
        MUST match the capabilities shown in the Node TIEs, otherwise
        the behavior is unspecified. A node detecting the mismatch
        SHOULD generate according error */
   10: required NodeCapabilities              node_capabilities;
   /** capabilities of this link */
   11: optional LinkCapabilities              link_capabilities;
   /** required holdtime of the adjacency, i.e. how much time
       MUST expire without LIE for the adjacency to drop */
   12: required common.TimeIntervalInSecType  holdtime =
            common.default_lie_holdtime;
   /** unsolicited, downstream assigned locally significant label
       value for the adjacency */
   13: optional common.LabelType              label;
    /** indicates that the level on the LIE MUST NOT be used
        to derive a ZTP level by the receiving node */
   21: optional bool                          not_a_ztp_offer =
            common.default_not_a_ztp_offer;
   /** indicates to northbound neighbor that it should
       be reflooding this node's N-TIEs to achieve flood reduction and
       balancing for northbound flooding. To be ignored if received from a
       northbound adjacency */
   22: optional bool                          you_are_flood_repeater =
             common.default_you_are_flood_repeater;
   /** optionally set to indicate to neighbor that packet losses are seen on
       reception based on packet numbers or the rate is too high. The receiver
       SHOULD temporarily slow down flooding rates
    */
   23: optional bool                          you_are_sending_too_quickly =
             false;
   /** instance name in case multiple RIFT instances running on same
       interface */
   24: optional string                        instance_name;
}
/** LinkID pair describes one of parallel links between two nodes */
struct LinkIDPair {
    /** node-wide unique value for the local link */
    1: required common.LinkIDType      local_id;
    /** received remote link ID for this link */
    2: required common.LinkIDType      remote_id;

    /** describes the local interface index of the link */
   10: optional common.PlatformInterfaceIndex       platform_interface_index;
   /** describes the local interface name */
   11: optional string                              platform_interface_name;
   /** indication whether the link is secured, i.e. protected by outer key, absence
       of this element means no indication, undefined outer key means not secured */
   12: optional common.OuterSecurityKeyID           trusted_outer_security_key;
   /** indication whether the link is protected by established BFD session */
   13: optional bool                                bfd_up;
}

/** ID of a TIE

    @note: TIEID space is a total order achieved by comparing the elements
           in sequence defined and comparing each value as an
           unsigned integer of according length.
*/
struct TIEID {
    /** direction of TIE */
    1: required common.TieDirectionType    direction;
    /** indicates originator of the TIE */
    2: required common.SystemIDType        originator;
    /** type of the tie */
    3: required common.TIETypeType         tietype;
    /** number of the tie */
    4: required common.TIENrType           tie_nr;
}
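The total order over `TIEID` described above falls out naturally from lexicographic tuple comparison over the fields in schema order. A minimal Python sketch (the concrete direction values, South=1 and North=2, are assumed from the direction enum in common.thrift):

```python
from typing import NamedTuple

class TieId(NamedTuple):
    """Fields in schema order; NamedTuple comparison proceeds element by
    element, each value compared as an unsigned integer."""
    direction: int   # assumed encoding: South = 1, North = 2
    originator: int  # system ID
    tietype: int
    tie_nr: int

# minimal key, usable as the start marker of a TIDE range
MIN_TIEID = TieId(direction=1, originator=0, tietype=1, tie_nr=0)

a = TieId(direction=1, originator=42, tietype=2, tie_nr=1)
b = TieId(direction=1, originator=42, tietype=2, tie_nr=2)
assert MIN_TIEID < a < b  # strict total order over TIEIDs
```

This is exactly the property the TIDE generation range markers (MIN_TIEID/MAX_TIEID in the constants table) rely on.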

/** Header of a TIE.

   @note: TIEID space is a total order achieved by comparing the elements
          in sequence defined and comparing each value as an
          unsigned integer of according length.

   @note: After sequence number the lifetime received on the envelope
          must be used for comparison before further fields.

   @note: `origination_time` and `origination_lifetime` are disregarded
          for comparison purposes and carried purely for debugging/security
          purposes if present.
*/
struct TIEHeader {
    /** ID of the tie */
    2: required TIEID                             tieid;
    /** sequence number of the tie */
    3: required common.SeqNrType                  seq_nr;

    /** absolute timestamp when the TIE
        was generated. This can be used on fabrics with
        synchronized clock to prevent lifetime modification attacks. */
   10: optional common.IEEE802_1ASTimeStampType   origination_time;
   /** original lifetime when the TIE
       was generated. This can be used on fabrics with
       synchronized clock to prevent lifetime modification attacks. */
   12: optional common.LifeTimeInSecType          origination_lifetime;
}

/** Header of a TIE as described in TIRE/TIDE.
*/
struct TIEHeaderWithLifeTime {
    1: required     TIEHeader                         header;
    /** remaining lifetime that expires down to 0 just like in ISIS.
        TIEs with lifetimes differing by less than `lifetime_diff2ignore` MUST
        be considered EQUAL. */
    2: required     common.LifeTimeInSecType          remaining_lifetime;
}
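The `lifetime_diff2ignore` equality rule above can be sketched as follows. This is an illustrative helper; the constant's value here is assumed, the authoritative value lives in common.thrift:

```python
LIFETIME_DIFF2IGNORE = 300  # seconds; illustrative, see common.thrift

def same_header(seq_a: int, life_a: int, seq_b: int, life_b: int) -> bool:
    """Compare two headers for the same TIEID: sequence number decides
    first; remaining lifetimes closer than lifetime_diff2ignore count
    as equal, so minor flooding jitter does not trigger re-flooding."""
    if seq_a != seq_b:
        return False
    return abs(life_a - life_b) < LIFETIME_DIFF2IGNORE

assert same_header(7, 600, 7, 500)      # lifetimes differ by < 300 -> equal
assert not same_header(7, 600, 8, 600)  # different sequence numbers
```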

/** TIDE with sorted TIE headers, if headers are unsorted, behavior is undefined */
struct TIDEPacket {
    /** first TIE header in the tide packet */
    1: required TIEID                       start_range;
    /** last TIE header in the tide packet */
    2: required TIEID                       end_range;
    /** _sorted_ list of headers */
    3: required list<TIEHeaderWithLifeTime> headers;
}

/** TIRE packet */
struct TIREPacket {
    1: required set<TIEHeaderWithLifeTime>  headers;
}

/** neighbor of a node */
struct NodeNeighborsTIEElement {
    /** level of neighbor */
    1: required common.LevelType                level;
    /**  Cost to neighbor.

         @note: All parallel links to same node
         incur same cost, in case the neighbor has multiple
         parallel links at different cost, the largest distance
         (highest numerical value) MUST be advertised
         @note: any neighbor with cost <= 0 MUST be ignored in computations */
    3: optional common.MetricType               cost = common.default_distance;
    /** can carry description of multiple parallel links in a TIE */
    4: optional set<LinkIDPair>                 link_ids;

    /** total bandwidth to neighbor, this will be normally sum of the
        bandwidths of all the parallel links. */
    5: optional common.BandwithInMegaBitsType   bandwidth =
            common.default_bandwidth;
}
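The parallel-link cost rule noted on `NodeNeighborsTIEElement` can be sketched as a small helper (illustrative only; the filtering of non-positive costs is applied per link here for brevity, while the note states the neighbor as a whole is ignored):

```python
def advertised_cost(parallel_link_costs):
    """With multiple parallel links at different cost, the largest
    distance (highest numerical value) MUST be advertised; a neighbor
    whose cost would be <= 0 is ignored in computations (None here)."""
    valid = [c for c in parallel_link_costs if c > 0]
    return max(valid) if valid else None

assert advertised_cost([10, 20, 15]) == 20  # largest cost wins
assert advertised_cost([0, -1]) is None     # neighbor ignored
```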

/** Flags the node sets */
struct NodeFlags {
    /** indicates that node is in overload, do not transit traffic through it */
    1: optional bool         overload = common.overload_default;
}

/** Description of a node.

    It may occur multiple times in different TIEs but if either
        * capabilities values do not match or
        * flags values do not match or
        * neighbors repeat with different values

    the behavior is undefined and a warning SHOULD be generated.
    Neighbors can be distributed across multiple TIEs however if
    the sets are disjoint. Miscablings SHOULD be repeated in every
    node TIE, otherwise the behavior is undefined.

    @note: observe that absence of fields implies defined defaults
*/
struct NodeTIEElement {
    /** level of the node */
    1: required common.LevelType            level;
    /** node's neighbors. If neighbor systemID repeats in other
        node TIEs of same node the behavior is undefined */
    2: required map<common.SystemIDType,
                NodeNeighborsTIEElement>    neighbors;
    /** capabilities of the node */
    3: required NodeCapabilities            capabilities;
    /** flags of the node */
    4: optional NodeFlags                   flags;
    /** optional node name for easier operations */
    5: optional string                      name;
    /** PoD to which the node belongs */
    6: optional common.PodType              pod;

    /** if any local links are miscabled, the indication is flooded */
   10: optional set<common.LinkIDType>      miscabled_links;

}

struct PrefixAttributes {
    /** distance of the prefix */
    2: required common.MetricType            metric = common.default_distance;
    /** generic unordered set of route tags, can be redistributed to other
        protocols or use within the context of real time analytics */
    3: optional set<common.RouteTagType>     tags;
    /** monotonic clock for mobile addresses */
    4: optional common.PrefixSequenceType    monotonic_clock;
    /** indicates if the interface is a node loopback */
    6: optional bool                         loopback = false;
    /** indicates that the prefix is directly attached, i.e. should be
        routed to even if the node is in overload. */
    7: optional bool                         directly_attached = true;

    /** in case of locally originated prefixes, i.e. interface addresses
        this can describe which link the address belongs to. */
   10: optional common.LinkIDType            from_link;
}

/** TIE carrying prefixes */
struct PrefixTIEElement {
    /** prefixes with the associated attributes.
        If the same prefix repeats in multiple TIEs of same node
        behavior is unspecified */
    1: required map<common.IPPrefixType, PrefixAttributes> prefixes;
}

/** Generic key value pairs */
struct KeyValueTIEElement {
    /** if the same key repeats in multiple TIEs of same node
        or with different values, behavior is unspecified */
    1: required map<common.KeyIDType,string>    keyvalues;
}

/** single element in a TIE. enum `common.TIETypeType`
    in TIEID indicates which elements MUST be present
    in the TIEElement. In case of mismatch the unexpected
    elements MUST be ignored. In case of lack of expected
    element in the TIE an error MUST be reported and the TIE
    MUST be ignored.

    This type can be extended with new optional elements
    for new `common.TIETypeType` values without breaking
    the major but if it is necessary to understand whether
    all nodes support the new type a node capability must
    be added as well.
 */
union TIEElement {
    /** used in case of enum common.TIETypeType.NodeTIEType */
    1: optional NodeTIEElement            node;
    /** used in case of enum common.TIETypeType.PrefixTIEType */
    2: optional PrefixTIEElement          prefixes;
    /** positive prefixes (always southbound).
        It MUST NOT be advertised within a North TIE and ignored otherwise
    */
    3: optional PrefixTIEElement          positive_disaggregation_prefixes;
    /** transitive, negative prefixes (always southbound) which
        MUST be aggregated and propagated
        according to the specification
        southwards towards lower levels to heal
        pathological upper level partitioning, otherwise
        blackholes may occur in multiplane fabrics.
        It MUST NOT be advertised within a North TIE.
    */
    5: optional PrefixTIEElement          negative_disaggregation_prefixes;
    /** externally reimported prefixes */
    6: optional PrefixTIEElement          external_prefixes;
    /** positive external disaggregated prefixes (always southbound).
        It MUST NOT be advertised within a North TIE and ignored otherwise
    */
    7: optional PrefixTIEElement          positive_external_disaggregation_prefixes;
    /** Key-Value store elements */
    9: optional KeyValueTIEElement        keyvalues;
}
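The rule in the `TIEElement` comment — the `TIETypeType` in the TIEID mandates which union member must be present — can be sketched as a receiver-side check. The mapping below is a hypothetical helper built from the enum values defined earlier in this schema (PGPrefixTIEType omitted for brevity):

```python
# maps TIETypeType values (from the enum above) to the union member
# that MUST be present; unexpected extra members are simply ignored
TIETYPE_TO_MEMBER = {
    2: "node",                                       # NodeTIEType
    3: "prefixes",                                   # PrefixTIEType
    4: "positive_disaggregation_prefixes",           # PositiveDisaggregationPrefixTIEType
    5: "negative_disaggregation_prefixes",           # NegativeDisaggregationPrefixTIEType
    7: "keyvalues",                                  # KeyValueTIEType
    8: "external_prefixes",                          # ExternalPrefixTIEType
    9: "positive_external_disaggregation_prefixes",  # PositiveExternalDisaggregationPrefixTIEType
}

def expected_element(tietype: int, element: dict):
    """Return the mandated member, or None when the expected member is
    absent, in which case the TIE is reported as an error and ignored."""
    member = TIETYPE_TO_MEMBER.get(tietype)
    if member is None:
        return None
    return element.get(member)

assert expected_element(3, {"prefixes": {"10.0.0.0/8": {}}}) is not None
assert expected_element(2, {"prefixes": {}}) is None  # missing node element
```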

/** TIE packet */
struct TIEPacket {
    1: required TIEHeader  header;
    2: required TIEElement element;
}

/** content of a RIFT packet */
union PacketContent {
    1: optional LIEPacket     lie;
    2: optional TIDEPacket    tide;
    3: optional TIREPacket    tire;
    4: optional TIEPacket     tie;
}
/** RIFT packet structure */
struct ProtocolPacket {
    1: required PacketHeader  header;
    2: required PacketContent content;
}

Appendix C.  Constants

C.1.  Configurable Protocol Constants

   This section gathers constants that are provided in the schema files
   and in the document.

   +----------------+--------------+-----------------------------------+
   |                | Type         | Value                             |
   +----------------+--------------+-----------------------------------+
   | LIE IPv4       | Default      | 224.0.0.120 or all-rift-routers   |
   | Multicast      | Value,       | to be assigned in IPv4 Multicast  |
   | Address        | Configurable | Address Space Registry in Local   |
   |                |              | Network Control Block             |
   +----------------+--------------+-----------------------------------+
   | LIE IPv6       | Default      | FF02::A1F7 or all-rift-routers to |
   | Multicast      | Value,       | be assigned in IPv6 Multicast     |
   | Address        | Configurable | Address Assignments               |
   +----------------+--------------+-----------------------------------+
   | LIE            | Default      | 914                               |
   | Destination    | Value,       |                                   |
   | Port           | Configurable |                                   |
   +----------------+--------------+-----------------------------------+
   | Level value    | Constant     | 24                                |
   | for            |              |                                   |
   | TOP_OF_FABRIC  |              |                                   |
   | flag           |              |                                   |
   +----------------+--------------+-----------------------------------+
   | Default LIE    | Default      | 3 seconds                         |
   | Holdtime       | Value,       |                                   |
   |                | Configurable |                                   |
   +----------------+--------------+-----------------------------------+
   | TIE            | Default      | 1 second                          |
   | Retransmission | Value        |                                   |
   | Interval       |              |                                   |
   +----------------+--------------+-----------------------------------+
   | TIDE           | Default      | 5 seconds                         |
   | Generation     | Value,       |                                   |
   | Interval       | Configurable |                                   |
   +----------------+--------------+-----------------------------------+
   | MIN_TIEID      | Constant     | TIE Key with minimal values:      |
   | signifies      |              | TIEID(originator=0,               |
   | start of TIDEs |              | tietype=TIETypeMinValue,          |
   |                |              | tie_nr=0, direction=South)        |
   +----------------+--------------+-----------------------------------+
   | MAX_TIEID      | Constant     | TIE Key with maximal values:      |
   | signifies end  |              | TIEID(originator=MAX_UINT64,      |
   | of TIDEs       |              | tietype=TIETypeMaxValue,          |
   |                |              | tie_nr=MAX_UINT64,                |
   |                |              | direction=North)                  |
   +----------------+--------------+-----------------------------------+

                          Table 6: all_constants

Authors' Addresses

   Tony Przygienda (editor)
   Juniper
   1137 Innovation Way
   Sunnyvale, CA
   USA

   Email: prz@juniper.net

   Alankar Sharma
   Comcast
   1800 Bishops Gate Blvd
   Mount Laurel, NJ  08054
   US

   Email: Alankar_Sharma@comcast.com

   Pascal Thubert
   Cisco Systems, Inc
   Building D
   45 Allee des Ormes - BP1200
   MOUGINS - Sophia Antipolis  06254
   FRANCE

   Phone: +33 497 23 26 34
   Email: pthubert@cisco.com

   Bruno Rijsman
   Individual

   Email: brunorijsman@gmail.com

   Dmitry Afanasiev
   Yandex

   Email: fl0w@yandex-team.ru