
Inter-Domain Multicast Routing (IDMR)                       A. Ballardie
INTERNET-DRAFT                                              Consultant

                                                            March 1997

             Core Based Tree (CBT) Multicast Border Router
             Specification for Connecting a CBT Stub Region
                          to a DVMRP Backbone


Status of this Memo

   This document is an Internet Draft.  Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its Areas,
   and its Working Groups. Note that other groups may also distribute
   working documents as Internet Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months. Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time.  It is not appropriate to use Internet
   Drafts as reference material or to cite them other than as a "working
   draft" or "work in progress."

   Please check the I-D abstract listing contained in each Internet
   Draft directory to learn the current status of this or any other In-
   ternet Draft.


   This document specifies the behaviour of a CBT multicast border
   router interconnecting a CBT multicast stub domain (region) to a
   DVMRP [1] multicast backbone.

   The document provides guidelines that are intended to be generally
   aligned with the mechanisms described in the "Interoperability Rules
   for Multicast Routing Protocols" [2]. Certain aspects of those rules
   may be interpreted and implemented differently, and therefore some
   discretion is left to the implementor.

   This document assumes the reader is familiar with the CBT protocol
   [3].

Expires September 1997                                          [Page 1]

INTERNET-DRAFT        CBT - DVMRP Interoperability            March 1997

1.  Interoperability Model & Assumptions

   The interoperability model and mechanisms we present assume:

   o    a CBT border router (BR) interconnects a multicast stub region
        (a collection of networks defined by one or more CBT-capable
        border routers) to a dense mode multicast region, such as a
        DVMRP backbone.  The CBT stub region may or may not be
        multi-homed.

   o    logically, a BR has at least two "components", each component
        being associated with a particular multicast routing protocol.
        Each component may have more than one associated interface which
        is running the particular multicast protocol associated with the
        component. One of these components is a CBT component.

   o    all components share a common multicast forwarding cache of
        (S,G) entries for the purpose of inter-domain multicast routing.
        To ensure that all components have a consistent view of the
        shared cache, a BR's components must be able to communicate with
        each other; how they do so is implementation dependent.
        Additionally, each component may have a separate multicast
        forwarding cache specific to that component's multicast routing
        protocol.

   o    the presence of a domain-wide reporting (DWR) mechanism [4],
        enabling the dynamic and timely reporting of group joining and
        leaving inside a CBT domain to the domain's border routers. DWRs
        are only necessary for inter-domain scoped groups; they may not
        be sent for intra-domain scoped (administratively scoped [5])
        groups.  DWRs are not assumed present in DVMRP domains.

   o    a <core, group> mapping table per CBT component exists which
        maintains mappings between core routers and active groups
        present inside a CBT domain.

   o    mixed multicast protocol LANs are not permitted.

   o    the terms "region" and "domain" are used interchangeably
        throughout this document.

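   The shared-cache and mapping-table assumptions above can be sketched
   as follows.  This is a hypothetical illustration only; all class and
   field names are invented, not taken from any implementation:

```python
from dataclasses import dataclass, field

WILDCARD = "*"   # S is "*" for entries created by the CBT component

@dataclass
class CacheEntry:
    source: str                  # S, or "*" for a CBT-owned (*, G) entry
    group: str                   # G
    owner: str                   # creating component; only it may remove
    incoming_iface: str
    outgoing_ifaces: set = field(default_factory=set)

class SharedForwardingCache:
    """Common (S, G) cache shared by all of a BR's components."""
    def __init__(self):
        self.entries = {}        # (S, G) -> CacheEntry

    def add(self, entry):
        self.entries[(entry.source, entry.group)] = entry

    def remove(self, source, group, component):
        # Only the entry's owner may remove it from the shared cache.
        entry = self.entries.get((source, group))
        if entry is None or entry.owner != component:
            return False
        del self.entries[(source, group)]
        return True

# Per-CBT-component <core, group> mapping table: core -> active groups.
core_group_table = {"10.0.0.1": {"224.1.1.1", "224.2.2.2"}}
```

   The per-component private caches mentioned above would be separate
   structures; only the shared cache is consulted for inter-domain
   forwarding.
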
2.  Overview

   There are essentially two aspects to this specification: the election
   of a "designated border router", and the intrinsics of a border

   router's shared multicast forwarding cache.

   Like other routers in a CBT domain, a border router (BR) listens to
   bootstrap messages [8].  A BR joins each core advertised in a valid
   bootstrap message by sending a JOIN_REQUEST, with the BR_JOIN option
   set [3], towards each core; in doing so the BR joins all groups that
   map to this core router. The state created by this join represents
   (*, core) state, unlike the typical (G) state maintained by other on-
   tree routers.  This state exists between the sending BR router and
   the core router; it transcends any distribution trees that are
   "attached" to this core router. Hence, some (probably small number
   of) routers within a CBT domain - those on the path between a BR and
   a core router - may have to store both (G) state and (*, core) state.

   Only the domain's elected designated border router may forward data
   packets across the CBT domain boundary; the same applies to routing
   traffic. Any other border routers may participate in the interactions
   necessary to maintain the BR shared forwarding cache, but they may
   not forward data packets or routing traffic across the boundary.

3.  Designated Border Router Election

   One of a CBT stub region's border routers (BRs) is elected, dynami-
   cally, as the region's designated BR.

   The designated BR is elected using CBT's HELLO protocol [3], which
   operates over a tree spanning all the region's BRs. The core router
   for this tree is elected using the intra-domain core router discovery
   ("bootstrap") mechanism described in [3, 6]. The multicast address
   for this administratively scoped group - the "all-border-routers"
   (ABR) group - is (IANA assignment pending).  The designated BR
   election criteria are the same as those for LAN DR election.

   When a BR starts up, like other routers in the domain, it awaits
   bootstrap messages from the domain's elected bootstrap router. Once a
   BR receives a valid candidate core set (CC-set) advertisement, it
   performs a hash function on the ABR group address yielding a core
   from the CC-set. The BR then sends a JOIN_REQUEST towards the core so
   as to join the ABR group tree.
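
   The hash step above can be sketched as follows.  The actual hash
   function is defined in [3, 6]; the one used here is a PIM-SM-style
   stand-in, and all addresses are illustrative:

```python
# Deterministic hash over the candidate core set (CC-set): every BR
# computes the same value for the same inputs, so all BRs in the
# domain independently select the same core for the ABR group.
def hash_value(group, core):
    # Stand-in modeled on the PIM-SM bootstrap hash; not the real one.
    return (1103515245 * ((1103515245 * group + 12345) ^ core)
            + 12345) % (2 ** 31)

def select_core(abr_group, cc_set):
    # Highest hash value wins; ties resolve to the lowest core address
    # because the CC-set is scanned in sorted order.
    return max(sorted(cc_set), key=lambda c: hash_value(abr_group, c))
```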

   On receipt of the JOIN_ACK, the BR begins engaging in the HELLO pro-
   tocol for border routers.

   The HELLO protocol for designated BR election is virtually identical
   to the LAN DR election, except for the following:

   o    HELLO messages sent by a BR are sent with HELLO option type 0
        (zero); this is the BR_HELLO option type. Its option length is 0
        (zero), and its option value is 0 (zero).

   o    HELLO messages sent by a BR are addressed to the ABR group, and
        sent with IP TTL MAX_TTL.

   +o    a "designated BR flag" is kept per BR (not per interface as with
        the DR flag). By default, this flag is set (in steady state,
        only one of a domain's BRs will have its "designated BR flag"

   Consequently, a BR participating in LAN DR election as well as
   designated BR election must maintain LAN DR and designated BR HELLO
   information separately, i.e. random response, HELLO preference, and
   HELLO convergence time (for definitions, see [3]).  Hence, different
   random response intervals, and different HELLO preferences, can be
   configured for LAN DR election and designated BR election on border
   routers.
   Besides these differences, HELLO protocol operation is identical to
   that specified for LAN DR election in [3].
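
   The requirement to hold the two elections' HELLO state separately
   might be modeled as below.  This is a sketch only; the field names
   and default intervals are invented:

```python
from dataclasses import dataclass

@dataclass
class HelloState:
    preference: int             # configurable HELLO preference
    random_response_max: float  # configurable random response bound
    elected: bool = True        # "designated BR flag" defaults to set

class BorderRouter:
    def __init__(self, lan_preference, br_preference):
        # LAN DR and designated-BR HELLO information live in independent
        # records, so each election can be tuned separately.
        self.lan_dr = HelloState(lan_preference, random_response_max=3.0)
        self.designated_br = HelloState(br_preference,
                                        random_response_max=6.0)
```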

             |     |           |             |         |   |    X = component
             |     | comp A    |             | comp B  |   |         interface
             |     -------------             -----------   |
             |                                             |    comp = component
             |         -----------------------------       |
             |         |     Shared Multicast      |       |
             |         |   Forwarding Cache (S,G)  |       |
             |         -----------------------------       |
             |                                             |
             |     ------------              ----------    |
             |     |  comp C   |             | comp D  |   |
             |     |           |             |         |   |

            Figure 1: Example Representation of a Border Router

4.  Multicast Forwarding Cache Manipulation

   Note that this section describes some, but not necessarily all, of a
   DVMRP component's interactions with regard to manipulating the
   shared forwarding cache; the remainder are covered elsewhere [2].

   A border router's shared multicast forwarding cache consists of a set
   of (S, G) entries, where S represents a source address if the entry
   is created by the BR's DVMRP component, or the wildcard address (*)
   if the entry is created by the BR's CBT component. The creating
   component of an entry is the entry's "owner".  Only the owner of an
   entry can remove it from the shared forwarding cache.  Each entry
   consists of an incoming interface and an outgoing interface list.

   If multicast data arrives over one of a border router's DVMRP compo-
   nent interfaces, the BR first attempts to match the packet's source
   and group addresses (S, G). If no match is found and the data arrives
   over the correct interface for source S, an (S, G) forwarding cache
   entry is created. The incoming interface is copied to the entry's
   incoming interface, and any interfaces listed in any other (S, G) or
   (*, G) entries, and not yet listed under this (S, G), are placed in
   this (S, G) entry's outgoing interface list.
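
   The entry-creation rule above can be sketched over a plain dict
   cache.  All names are illustrative, and the RPF check on the arrival
   interface is assumed to have already passed:

```python
def create_sg_entry(cache, source, group, incoming_iface):
    # The union of interfaces already listed under other (S, G) or
    # (*, G) entries for this group becomes the new outgoing list.
    oifs = set()
    for (s, g), entry in cache.items():
        if g == group:
            oifs |= entry["oifs"]
    oifs.discard(incoming_iface)   # don't forward back toward the source
    cache[(source, group)] = {"iif": incoming_iface,
                              "oifs": oifs, "owner": "DVMRP"}
    return cache[(source, group)]
```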

   Whenever the outgoing interface list of an (S, G) entry becomes NULL,
   the creator of the (S, G) entry sends a DVMRP prune message upstream.
   If the outgoing interface list of a (*, G) entry becomes NULL, the
   CBT component deletes the (*, G) forwarding cache entry.  However,
   the CBT component must not send a QUIT_NOTIFICATION upstream
   (internally) towards the core for the group until all group
   associations are removed from that core in the <core, group> mapping
   table.  When a QUIT_NOTIFICATION is sent, the interface over which it
   is sent is removed from the outgoing interface list of any (*, G) and
   (S, G) entries.

   If the outgoing interface list of an (S, G) entry goes from NULL to
   non-NULL, the owning DVMRP component must send a DVMRP graft message
   upstream for source S.
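
   The two (S, G) list transitions (prune on becoming NULL, graft on
   becoming non-NULL) can be sketched as below; message emission is
   stubbed out via callbacks, which are invented names:

```python
def on_oif_list_change(entry, was_empty, send_prune, send_graft):
    # entry: {"source": S, "owner": ..., "oifs": set-of-interfaces}
    now_empty = not entry["oifs"]
    if now_empty and not was_empty:
        send_prune(entry["source"])    # outgoing list became NULL
    elif was_empty and not now_empty:
        send_graft(entry["source"])    # outgoing list became non-NULL
```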

   If a domain-wide report (DWR) is received over one of a BR's CBT
   component interfaces, the group is added to the component's group
   table under the corresponding core entry.  The CBT component searches
   for any (*, G) and (S, G) entries, and adds the interface to the
   outgoing interface list of each such entry. If no entry is found, the
   CBT component creates a (*, G) entry, adds the corresponding
   interface as the entry's incoming interface, and also copies to this
   entry's outgoing interface list any interfaces listed under any
   (*, G) and (S, G) entries and not yet listed in this entry.

   Once a CBT component establishes that a particular group is no longer
   present inside its domain, the group is removed from the
   corresponding <core, group> table entry.  If the component owns a
   (*, G) entry which has a non-NULL outgoing interface list, no further
   action is taken. If, however, the entry's outgoing interface list is
   NULL, or becomes NULL, the CBT component deletes the (*, G) entry
   from the forwarding cache.  The CBT component can only send a CBT
   QUIT_NOTIFICATION when all groups have been removed from a <core,
   group> table entry.
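
   The quit rule above gates the QUIT_NOTIFICATION on the <core, group>
   table.  A minimal sketch, where send_quit is a stand-in for the real
   message path:

```python
def maybe_send_quit(core_group_table, core, send_quit):
    # A QUIT_NOTIFICATION may only go out once every group association
    # has been removed from this core's <core, group> table entry.
    if core_group_table.get(core):
        return False                   # groups still mapped: hold off
    send_quit(core)
    core_group_table.pop(core, None)
    return True
```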

   If an IGMP [7] host membership report or leave is received over one
   of a BR's CBT component interfaces, it is processed according to
   normal IGMP behaviour. Additionally, this event causes the CBT
   component to itself generate a DWR.

5.  Data Flow

   When data arrives at a BR, a longest match, i.e. on (S, G), is first
   attempted.  For any (S, G) match, the data must arrive over the
   entry's incoming interface, else it is discarded. If the data arrives
   over the correct interface for S, a copy of the data packet is
   forwarded over each interface listed in the entry's outgoing
   interface list.
   If only a (*, G) entry can be matched, a copy of the data packet is
   forwarded over each interface listed in the entry's outgoing
   interface list, except the incoming interface.

   If no entry is found in the multicast forwarding cache, the data is
   discarded.

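   The forwarding decision in this section can be sketched over a dict
   cache (illustrative names only):

```python
def forward(cache, source, group, arrival_iface):
    # Longest match first: an exact (S, G) entry, with its iif check.
    entry = cache.get((source, group))
    if entry is not None:
        if arrival_iface != entry["iif"]:
            return set()               # wrong incoming interface: discard
        return set(entry["oifs"])
    # Fall back to (*, G); forward everywhere except where it arrived.
    entry = cache.get(("*", group))
    if entry is not None:
        return set(entry["oifs"]) - {arrival_iface}
    return set()                       # no matching entry: discard
```
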
6.  Routing Information Flow

   A CBT stub region need never import routes from a dense mode region
   since the nature of CBT forwarding does not require this information.
   The reverse is not true; the dense mode region requires routes from

   the CBT region, so that, for traffic sourced inside the CBT region,
   DVMRP routers outside the CBT stub region can correctly perform their
   RPF check, upon which their data forwarding decision is based.

   The designated BR is responsible for injecting internal routing
   information into the DVMRP region.

   The designated BR can either inject only "active" routes (those net-
   work prefixes for active sources inside the CBT region), or a CIDR
   aggregate, into the DVMRP region.

7.  Summary

   This memo describes the rules and mechanisms of operation of a CBT
   component of a Border Router attaching a stub CBT region to a DVMRP
   backbone. The mechanisms described are generally aligned with the
   rules and procedures given in "Interoperability Rules for Multicast
   Routing Protocols" [2].


Acknowledgements

   Special thanks go to Paul Francis, NTT Japan, for the original
   brainstorming sessions that led to the development of CBT.

   Dave Thaler provided many comments with regards to aligning this
   specification with [2].


References

  [1] Distance Vector Multicast Routing Protocol (DVMRP); T. Pusateri;
  Working draft, 1997.

  [2] Interoperability Rules for Multicast Routing Protocols; D. Thaler;
  November 1996.

  [3] Core Based Trees (CBT) Multicast Routing: Protocol Specification;
  A.  Ballardie; ftp://ds.internic.net/internet-drafts/draft-ietf-idmr-
  cbt-spec-**.txt.  Working draft, March 1997.

  [4]  IETF IDMR Working Group minutes, July 1995.

  [5] Administrative Scoping...

  [6] Protocol Independent Multicast (PIM) Sparse Mode Specification; D.
  Estrin et al; ftp://netweb.usc.edu/pim   Working drafts, 1996.

  [7] Internet Group Management Protocol, version 2 (IGMPv2); W. Fenner;
  Working draft, 1996.

  [8] A Dynamic Bootstrap Mechanism for Rendezvous-based Multicast Rout-
  ing; D. Estrin et al.; Technical Report, available from:


Author Information:

   Tony Ballardie,
   Research Consultant.
   e-mail: ABallardie@acm.org
