Internet Engineering Task Force                         D. Joachimpillai
Internet-Draft                                                   Verizon
Intended status: Standards Track                           J. Hadi Salim
Expires: May 5, 2016                                   Mojatatu Networks
                                                        November 2, 2015

                          ForCES Inter-FE LFB
                    draft-ietf-forces-interfelfb-02

Abstract

   This document describes how to extend the ForCES LFB topology across
   FEs (i.e., inter-FE connectivity) by defining the Inter-FE LFB
   Class.  The Inter-FE LFB Class provides the ability to pass data and
   metadata across FEs without needing any changes to the ForCES
   specification.  The document focuses on Ethernet transport.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 5, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Terminology and Conventions
     1.1.  Requirements Language
     1.2.  Definitions
   2.  Introduction
   3.  Problem Scope And Use Cases
     3.1.  Assumptions
     3.2.  Sample Use Cases
       3.2.1.  Basic IPv4 Router
         3.2.1.1.  Distributing The Basic IPv4 Router
       3.2.2.  Arbitrary Network Function
         3.2.2.1.  Distributing The Arbitrary Network Function
   4.  Inter-FE LFB Overview
     4.1.  Inserting The Inter-FE LFB
   5.  Inter-FE Ethernet Connectivity
     5.1.  Inter-FE Ethernet Connectivity Issues
       5.1.1.  MTU Consideration
       5.1.2.  Quality Of Service Considerations
       5.1.3.  Congestion Considerations
       5.1.4.  Deployment Considerations
     5.2.  Inter-FE Ethernet Encapsulation
   6.  Detailed Description of the Ethernet inter-FE LFB
     6.1.  Data Handling
       6.1.1.  Egress Processing
       6.1.2.  Ingress Processing
     6.2.  Components
     6.3.  Inter-FE LFB XML Model
   7.  Acknowledgements
   8.  IANA Considerations
   9.  IEEE Assignment Considerations
   10. Security Considerations
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Terminology and Conventions

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

1.2.  Definitions

   This document reiterates the terminology defined in several ForCES
   documents [RFC3746], [RFC5810], [RFC5811], [RFC5812], [RFC7391], and
   [RFC7408] for the sake of contextual clarity.

      Control Engine (CE)

      Forwarding Engine (FE)

      FE Model

      LFB (Logical Functional Block) Class (or type)

      LFB Instance

      LFB Model

      LFB Metadata

      ForCES Component

      LFB Component

      ForCES Protocol Layer (ForCES PL)

      ForCES Protocol Transport Mapping Layer (ForCES TML)

2.  Introduction

   In the ForCES architecture, a packet service can be modelled by
   composing a graph of one or more LFB instances.  The reader is
   referred to the details in the ForCES Model [RFC5812].

   The ForCES model describes the processing within a single Forwarding
   Element (FE) in terms of Logical Functional Blocks (LFBs), including
   provision for the Control Element (CE) to establish and modify that
   processing sequence, and the parameters of the individual LFBs.

   Under some circumstances, it would be beneficial to be able to
   extend this view and the resulting processing across more than one
   FE.  This may be in order to achieve scale by splitting the
   processing across elements, or to utilize specialized hardware
   available on specific FEs.

   Given that the ForCES inter-LFB architecture calls out for the
   ability to pass metadata between LFBs, it is therefore imperative to
   define mechanisms to extend that existing feature and allow passing
   metadata between LFBs across FEs.

   This document describes how to extend the LFB topology across FEs,
   i.e. inter-FE connectivity, without needing any changes to the
   ForCES definitions.  It focuses on using Ethernet as the
   interconnection between FEs.

3.  Problem Scope And Use Cases

   The scope of this document is to solve the challenge of passing
   ForCES defined metadata alongside packet data across FEs (be they
   physical or virtual) for the purpose of distributing the LFB
   processing.

3.1.  Assumptions

   o  The FEs involved in the Inter-FE LFB belong to the same Network
      Element (NE) and are within a single administrative private
      network which is in close proximity.

   o  The FEs are already interconnected using Ethernet.  We focus on
      Ethernet because it is a very common setup as an FE interconnect.
      While other higher transports (such as UDP over IP) or lower
      transports could be defined to carry the data and metadata, it is
      simpler to use Ethernet (for the functional scope of a single
      distributed device already interconnected with Ethernet).

3.2.  Sample Use Cases

   To illustrate the problem scope we present two use cases where we
   start with a single FE running all of the LFB functionality and then
   split it into multiple FEs achieving the same end goals.

3.2.1.  Basic IPv4 Router

   A sample LFB topology depicted in Figure 1 demonstrates a service
   graph for delivering basic IPV4 forwarding service within one FE.
   For the purpose of illustration, the diagram shows LFB classes as
   graph nodes instead of multiple LFB class instances.

   Since the illustration in Figure 1 is meant only as an exercise to
   showcase how data and metadata are sent down or upstream on a graph
   of LFB instances, it abstracts out any ports in both directions and
   talks about a generic ingress and egress LFB.
   illustration purposes, the diagram does not show exception or error
   paths.  Also left out are details on Reverse Path Filtering, ECMP,
   multicast handling etc.  In other words, this is not meant to be a
   complete description of an IPV4 forwarding application; for a more
   complete example, please refer to the LFBlib document [RFC6956].

   The output of the ingress LFB(s) coming into the IPv4 Validator LFB
   will have both the IPV4 packets and, depending on the implementation,
   a variety of ingress metadata such as offsets into the different
   headers, any classification metadata, physical and virtual ports
   encountered, tunnelling information etc.  These metadata are lumped
   together as "ingress metadata".

   Once the IPV4 validator vets the packet (for example, ensuring that
   the TTL has not expired), it feeds the packet and inherited metadata
   into the IPV4 unicast LPM LFB.

                      +----+
                      |    |
           IPV4 pkt   |    | IPV4 pkt     +-----+             +---+
       +------------->|    +------------->|     |             |   |
       |  + ingress   |    | + ingress    |IPv4 |   IPV4 pkt  |   |
       |   metadata   |    | metadata     |Ucast+------------>|   +--+
       |              +----+              |LPM  |  + ingress  |   |  |
     +-+-+             IPv4               +-----+  + NHinfo   +---+  |
     |   |             Validator                   metadata   IPv4   |
     |   |             LFB                                    NextHop|
     |   |                                                     LFB   |
     |   |                                                           |
     |   |                                                  IPV4 pkt |
     |   |                                               + {ingress  |
     +---+                                                  + NHdetails}
     Ingress                                                metadata |
      LFB                                +--------+                  |
                                         | Egress |                  |
                                      <--+        |<-----------------+
                                         |  LFB   |
                                         +--------+

             Figure 1: Basic IPV4 packet service LFB topology

   The IPV4 unicast LPM LFB does a longest prefix match lookup on the
   IPV4 FIB using the destination IP address as a search key.  The
   result is typically a next hop selector which is passed downstream as
   metadata.

   The Nexthop LFB receives the IPv4 packet with an associated next hop
   info metadata.  The NextHop LFB consumes the NH info metadata and
   derives from it a table index to look up the next hop table in order
   to find the appropriate egress information.  The lookup result is
   used to build the next hop details to be used downstream on the
   egress.  This information may include any source and destination
   information (for our purposes, MAC addresses to use) as well as
   egress ports.  [Note: It is also at this LFB where typically the
   TTL decrement and IP checksum recalculation occur.]

   The details of the egress LFB are considered out of scope for this
   discussion.  Suffice it to say that somewhere within or beyond the
   Egress LFB the IPV4 packet will be sent out a port (Ethernet,
   virtual or physical, etc.).

3.2.1.1.  Distributing The Basic IPv4 Router

   Figure 2 demonstrates one way the router LFB topology in Figure 1 may
   be split across two FEs (e.g. two ASICs).  Figure 2 shows the LFB
   topology split across FEs after the IPV4 unicast LPM LFB.

     FE1
   +-------------------------------------------------------------+
   |                            +----+                           |
   | +----------+               |    |                           |
   | | Ingress  |    IPV4 pkt   |    | IPV4 pkt     +-----+      |
   | |  LFB     +-------------->|    +------------->|     |      |
   | |          |  + ingress    |    | + ingress    |IPv4 |      |
   | +----------+    metadata   |    |   metadata   |Ucast|      |
   |      ^                     +----+              |LPM  |      |
   |      |                      IPv4               +--+--+      |
   |      |                     Validator              |         |
   |                             LFB                   |         |
   +---------------------------------------------------|---------+
                                                       |
                                                  IPv4 packet +
                                                {ingress + NHinfo}
                                                    metadata
     FE2                                               |
   +---------------------------------------------------|---------+
   |                                                   V         |
   |             +--------+                       +--------+     |
   |             | Egress |     IPV4 packet       | IPV4   |     |
   |       <-----+  LFB   |<----------------------+NextHop |     |
   |             |        |{ingress + NHdetails}  | LFB    |     |
   |             +--------+      metadata         +--------+     |
   +-------------------------------------------------------------+

             Figure 2: Split IPV4 packet service LFB topology

   Some proprietary inter-connects (for example Broadcom HiGig over
   XAUI [brcm-higig]) are known to exist to carry both the IPV4 packet
   and the related metadata between the IPV4 Unicast LFB and IPV4
   NextHop LFB across the two FEs.

   This document defines the inter-FE LFB, a standard mechanism for
   encapsulating, generating, receiving and decapsulating packets and
   associated metadata across FEs over Ethernet.

3.2.2.  Arbitrary Network Function

   In this section we show an example of an arbitrary Network Function
   which is more coarse grained in terms of functionality.  Each
   Network Function may constitute more than one LFB.

     FE1
   +-------------------------------------------------------------+
   |                            +----+                           |
   | +----------+               |    |                           |
   | | Network  |   pkt         |NF2 |    pkt       +-----+      |
   | | Function +-------------->|    +------------->|     |      |
   | |    1     |  + NF1        |    | + NF1/2      |NF3  |      |
   | +----------+    metadata   |    |   metadata   |     |      |
   |      ^                     +----+              |     |      |
   |      |                                         +--+--+      |
   |      |                                            |         |
   |                                                   |         |
   +---------------------------------------------------|---------+
                                                       V

         Figure 3: A Network Function Service Chain within one FE

   The setup in Figure 3 is typical of most packet processing boxes
   where we have functions like DPI, NAT, Routing, etc. connected in
   such a topology to deliver a packet processing service to flows.

3.2.2.1.  Distributing The Arbitrary Network Function

   The setup in Figure 3 can be split out across 3 FEs instead, as
   demonstrated in Figure 4.  This could be motivated by scale out
   reasons or because different vendors provide the different
   plugged-in functions.  The end result is to have the same packet
   service delivered to the different flows passing through.

      FE1                        FE2
      +----------+               +----+               FE3
      | Network  |   pkt         |NF2 |    pkt       +-----+
      | Function +-------------->|    +------------->|     |
      |    1     |  + NF1        |    | + NF1/2      |NF3  |
      +----------+    metadata   |    |   metadata   |     |
           ^                     +----+              |     |
           |                                         +--+--+
                                                        |
                                                        V

       Figure 4: A Network Function Service Chain Distributed Across
                               Multiple FEs

4.  Inter-FE LFB Overview

   We address the inter-FE connectivity requirements by defining the
   inter-FE LFB class.  Using a standard LFB class definition implies
   no change to the basic ForCES architecture in the form of the core
   LFBs (FE Protocol or Object LFBs).  This design choice was made
   after considering an alternative approach that would have required
   changes to both the FE Object capabilities (SupportedLFBs) and the
   LFBTopology component to describe the inter-FE connectivity
   capabilities as well as the runtime topology of the LFB instances.

4.1.  Inserting The Inter-FE LFB

   The distributed LFB topology described in Figure 2 is re-illustrated
   in Figure 5 to show the topology location where the inter-FE LFB
   would fit in.


     FE1
   +-------------------------------------------------------------+
   | +----------+               +----+                           |
   | | Ingress  |    IPV4 pkt   |    | IPV4 pkt     +-----+      |
   | |  LFB     +-------------->|    +------------->|     |      |
   | |          |  + ingress    |    | + ingress    |IPv4 |      |
   | +----------+    metadata   |    |   metadata   |Ucast|      |
   |      ^                     +----+              |LPM  |      |
   |      |                      IPv4               +--+--+      |
   |      |                     Validator              |         |
   |      |                      LFB                   |         |
   |      |                                  IPv4 pkt + metadata |
   |      |                                  {ingress + NHinfo}  |
   |      |                                            |         |
   |      |                                       +..--+..+      |
   |      |                                       |..| |  |      |
   |                                            +-V--V-V--V-+    |
   |                                            |   Egress  |    |
   |                                            |  InterFE  |    |
   |                                            |   LFB     |    |
   |                                            +------+----+    |
   +---------------------------------------------------|---------+
                                                       |
                   Ethernet Frame with:
                   IPv4 packet data and metadata
                   {ingress + NHinfo + Inter FE info}
    FE2                                                |
   +---------------------------------------------------|---------+
   |                                                +..+.+..+    |
   |                                                |..|.|..|    |
   |                                              +-V--V-V--V-+  |
   |                                              | Ingress   |  |
   |                                              | InterFE   |  |
   |                                              |   LFB     |  |
   |                                              +----+------+  |
   |                                                   |         |
   |                                         IPv4 pkt + metadata |
   |                                          {ingress + NHinfo} |
   |                                                   |         |
   |             +--------+                       +----V---+     |
   |             | Egress |     IPV4 packet       | IPV4   |     |
   |       <-----+  LFB   |<----------------------+NextHop |     |
   |             |        |{ingress + NHdetails}  | LFB    |     |
   |             +--------+      metadata         +--------+     |
   +-------------------------------------------------------------+

         Figure 5: Split IPV4 forwarding service with Inter-FE LFB

   As can be observed in Figure 5, the same details passed between the
   IPV4 unicast LPM LFB and the IPV4 NH LFB are passed to the egress
   side of the Inter-FE LFB.  This information is illustrated as a
   multiplicity of inputs into the egress InterFE LFB instance.  Each
   input represents a unique set of selection information.

   The egress of the inter-FE LFB uses the received packet and metadata
   to select details for the encapsulation when sending messages
   towards the selected neighboring FE.  These details include what to
   communicate as the source and destination FEs (abstracted as MAC
   addresses as described in Section 5.2); in addition the original
   metadata may be passed along with the original IPV4 packet.

   On the ingress side of the inter-FE LFB the received packet and its
   associated metadata are used to decide the packet graph
   continuation.  This includes which of the original metadata to
   restore and which next LFB class instance to continue processing on.
   In Figure 5, an IPV4 Nexthop LFB instance is selected and the
   appropriate metadata is passed on to it.

   The ingress side of the inter-FE LFB passes the IPV4 packet
   alongside the ingress and NHinfo metadata to the IPV4 NextHop LFB as
   was done earlier in both Figure 1 and Figure 2.

5.  Inter-FE Ethernet Connectivity

   Section 5.1 describes some of the issues related to using Ethernet
   as the transport and how we mitigate them.

   Section 5.2 defines a payload format that is to be used over
   Ethernet.  An existing implementation of this specification on top
   of Linux Traffic Control [linux-tc] is described in [tc-ife].

5.1.  Inter-FE Ethernet Connectivity Issues

   There are several issues that may occur due to using direct Ethernet
   encapsulation that need consideration.

5.1.1.  MTU Consideration

   Because we are adding data to existing Ethernet frames, MTU issues
   may arise.  We recommend:

   o  To use large MTUs when possible (for example with jumbo frames).

   o  Limit the amount of metadata that could be transmitted; our
      definition allows for filtering of select metadata to be
      encapsulated in the frame as described in Section 6.  We
      recommend sizing the egress port MTU so as to allow space for the
      maximum size of the metadata total to be allowed between FEs (see
      the sketch after this list).  In such a setup, the port is
      configured to "lie" to the upper layers by claiming to have a
      lower MTU than it is capable of.  MTU setting can be achieved by
      ForCES control of the port LFB (or other configuration).  In
      essence, the control plane, when explicitly making a decision for
      the MTU settings of the egress port, is implicitly deciding how
      much metadata will be allowed.

5.1.2.  Quality Of Service Considerations

   A raw packet arriving at the Inter-FE LFB (from upstream LFB class
   instances) may have a COS metadatum indicating how it should be
   treated from a Quality of Service perspective.

   The resulting Ethernet frame will eventually be (preferentially)
   treated by a downstream LFB (typically a port LFB instance), and its
   COS mark will be honored in terms of priority.  In other words, the
   presence of the Inter-FE LFB does not change the COS semantics.

5.1.3.  Congestion Considerations

   The addition of the Inter-FE encapsulation adds overhead to the
   packets and therefore bandwidth consumption on the wire.  In cases
   where Inter-FE encapsulated traffic shares wire resources with other
   traffic, the new dynamics could potentially lead to congestion.  In
   such a case, given that the Inter-FE LFB is deployed within the
   scope of a single administrative domain, the operator may need to
   enforce usage restrictions.  These restrictions may take the form of
   appropriate provisioning; for example by rate limiting all Inter-FE
   LFB traffic at an upstream LFB, prioritizing non Inter-FE LFB
   traffic, or other techniques such as managed circuit breaking
   [circuit-b].

   It is noted that a lot of the traffic passing through an FE that
   utilizes the Inter-FE LFB is expected to be IP based, which is
   generally assumed to be congestion controlled and therefore does not
   need additional congestion control mechanisms [RFC5405].

5.1.4.  Deployment Considerations

   While we expect to use a unique IEEE-issued ethertype for the inter-
   FE traffic, we use lessons learned from VXLAN deployment to be more
   flexible on the settings of the ethertype value used.  We make the
   ethertype an LFB read-write component.  The Linux VXLAN
   implementation uses UDP port 8472 because the deployment happened
   much earlier than the point of RFC publication, where the IANA
   assigned UDP port issued was 4789 [vxlan-udp].  For this reason we
   make it possible to define at control time what ethertype to use and
   default to the IEEE issued ethertype.  We justify this by assuming
   that a given ForCES NE is likely to be owned by a single
   organization and that the organization's CE (or CE cluster) could
   program all participating FEs via the inter-FE LFB (described in
   this document) to recognize a private Ethernet type used for inter-
   LFB traffic (possibly those defined as available for private use by
   the IEEE, namely: IDs 0x88B5 and 0x88B6).
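
   As a non-normative illustration of that flexibility, the sketch
   below shows the ethertype selection logic; the constant names and
   the 0xFEFE placeholder (see Section 5.2) are illustrative only:

   DEFAULT_IFE_ETHERTYPE = 0xFEFE        # placeholder until assignment
   PRIVATE_USE_ETHERTYPES = (0x88B5, 0x88B6)

   def select_ethertype(configured_ifetype=None):
       """Use the CE-configured per-row IFETYPE when present, otherwise
       fall back to the default IFE ethertype."""
       if configured_ifetype is not None:
           return configured_ifetype
       return DEFAULT_IFE_ETHERTYPE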

5.2.  Inter-FE Ethernet Encapsulation

   The Ethernet wire encapsulation is illustrated in Figure 6.  The
   process that leads to this encapsulation is described in Section 6.
   The resulting frame is 32 bit aligned.

        0                   1                   2                   3
        0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |  Outer Destination MAC Address                                |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Outer Destination MAC Address | Outer Source MAC Address      |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |  Outer Source MAC Address                                     |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Optional 802.1Q info (NEID)   | Inter-FE ethertype            |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Metadata length               |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | TLV encoded Metadata ~~~..............~~                      |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | TLV encoded Metadata ~~~..............~~                      |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Original packet data ~~................~~                     |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                    Figure 6: Packet format suggestion

   The Ethernet header (illustrated in Figure 6) has the following
   semantics; a non-normative encoding sketch follows the list:

   o  The Destination MAC Address is used to identify the Destination
      FEID by the CE policy (as described in Section 6).

   o  The Source MAC Address is used to identify the Source FEID by the
      CE policy (as described in Section 6).

   o  When an NEID is needed, an optional 802.1Q header is carried with
      the 12-bit VLANid representing the NEID.

   o  The Ethernet type is used to identify the frame as inter-FE LFB
      type.  Ethertype 0xFEFE is to be used (XXX: Note to editor,
      likely we will not get that value - update when available).

   o  The 16-bit Metadata length is used to describe the total encoded
      metadata length (including the 16 bits used to encode the
      metadata length).

   o  One or more 16-bit TLV encoded Metadatum follows the Metadata
      length field.  The TLV type identifies the Metadata id.  ForCES
      IANA-defined Metadata ids will be used.  All TLVs will be 32 bit
      aligned.  We recognize that using a 16 bit TLV restricts the
      metadata id to 16 bits instead of the ForCES-defined component ID
      space of 32 bits.  However, at the time of publication we believe
      this is sufficient to carry all the information we need, and the
      approach taken saves 4 bytes per Metadatum transferred.

   o  The original packet data is appended at the end of the metadata
      as shown.
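
   The following is a non-normative sketch of this encoding.  It
   assumes, for illustration, that each TLV's length field covers the
   4-byte TLV header plus the value (alignment padding excluded); the
   0xFEFE ethertype is the placeholder noted above and the helper names
   are not part of this specification:

   import struct

   IFE_ETHERTYPE = 0xFEFE   # placeholder until the IEEE assignment

   def encode_metadata(metadata):
       """Encode {metaid: value bytes} as 32-bit aligned TLVs preceded
       by the 16-bit total Metadata length (length field included)."""
       tlvs = b""
       for metaid, value in metadata.items():
           tlv = struct.pack("!HH", metaid, 4 + len(value)) + value
           tlv += b"\x00" * (-len(tlv) % 4)   # pad TLV to 32 bits
           tlvs += tlv
       return struct.pack("!H", 2 + len(tlvs)) + tlvs

   def encode_ife_frame(dstfe, srcfe, metadata, packet, neid=0):
       """DST MAC, SRC MAC, optional 802.1Q (NEID), ethertype, Metadata
       length plus TLVs, then the original packet data."""
       hdr = dstfe + srcfe
       if neid:
           hdr += struct.pack("!HH", 0x8100, neid & 0x0FFF)
       hdr += struct.pack("!H", IFE_ETHERTYPE)
       return hdr + encode_metadata(metadata) + packet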

6.  Detailed Description of the Ethernet inter-FE LFB

   The Ethernet inter-FE LFB has two LFB input port groups and three
   LFB output ports as shown in Figure 7.

   The inter-FE LFB defines two components used in aiding processing
   described in Section 6.2.

                    +-----------------+
     Inter-FE LFB   |                 |
     Encapsulated   |             OUT2+--> decapsulated Packet
     -------------->|IngressInGroup   |       + metadata
     Ethernet Frame |                 |
                    |                 |
     raw Packet +   |             OUT1+--> encapsulated Ethernet
     -------------->|EgressInGroup    |           Frame
     Metadata       |                 |
                    |    EXCEPTIONOUT +--> ExceptionID, packet
                    |                 |           + metadata
                    +-----------------+

                          Figure 7: Inter-FE LFB

6.1.  Data Handling

   The Inter-FE LFB (instance) can be positioned at the egress of a
   source FE.  Figure 5 illustrates an example source FE in the form of
   FE1.  In such a case an Inter-FE LFB instance receives, via a port
   in the LFB port group EgressInGroup, a raw packet and associated
   metadata IDs from the preceding LFB instances.  That information is
   used to produce a selection of how to generate and encapsulate the
   new frame.  The set of all selections is stored in the LFB component
   IFETable described further below.  The encapsulated Ethernet Frame
   will go out on LFB port OUT1 to a downstream LFB instance when
   processing succeeds, or to the EXCEPTIONOUT port in the case of a
   failure.

   The Inter-FE LFB (instance) can be positioned at the ingress of a
   receiving FE.  Figure 5 illustrates an example destination FE in the
   form of FE2.  In such a case an Inter-FE LFB instance receives, via
   an LFB port in the IngressInGroup, an encapsulated Ethernet frame.
   Successful processing of the packet will result in a raw packet with
   associated metadata IDs going downstream to an LFB connected on OUT2.
   On failure the data is sent out EXCEPTIONOUT.

6.1.1.  Egress Processing

   The egress Inter-FE LFB receives packet data and any accompanying
   Metadatum at an LFB port of the LFB instance's input port group
   labelled EgressInGroup.

   The LFB implementation may use the incoming LFB port (within the LFB
   port group EgressInGroup) to map to a table index used to look up
   the IFETable table.

   If the lookup is successful, a matched table row is retrieved with
   the tuple {optional NEID, optional IFETYPE, optional StatId,
   Destination MAC address (DSTFE), Source MAC address (SRCFE),
   optional metafilters}.  The metafilters list defines a whitelist of
   which Metadatum are to be passed to the neighboring FE.  The
   component names used in describing processing are defined in
   Section 6.2.

   The inter-FE LFB will perform the following actions using the
   resulting tuple:

   o  Increment statistics for packet and byte count observed at the
      corresponding IFEStats entry.

   o  When the MetaFilterList is present, walk each received Metadatum
      and apply it against the MetaFilterList.  If no legitimate
      metadata is found that needs to be passed downstream then the
      processing stops and the packet and metadata are sent out the
      EXCEPTIONOUT port with exceptionID of EncapTableLookupFailed
      [RFC6956].

   o  Check that the additional overhead of the outer Ethernet header
      and encapsulated metadata will not exceed the MTU.  If it does,
      increment the error packet count statistics and send the packet
      and metadata out the EXCEPTIONOUT port with exceptionID of
      FragRequired [RFC6956].

   o  Create the outer Ethernet header.  If the NEID field is present
      (and not 0), create a vlan tag matching the NEID field and
      appropriately add it to the outer header.  If the NEID field is
      absent or 0, do nothing.

   o  Set the Destination MAC address of the outer Ethernet header with
      the value found in the DSTFE field.

   o  Set the Source MAC address of the outer Ethernet header with the
      value found in the SRCFE field.

   o  If the optional IFETYPE is present, set the outer Ethernet type
      to the value found in IFETYPE.  If IFETYPE is absent then the
      standard Inter-FE LFB Ethernet type is used (XXX: Note to editor,
      to be updated).

   o  Encapsulate each allowed Metadatum in a TLV.  Use the Metaid as
      the "type" field in the TLV header.  The TLV should be aligned to
      32 bits.  This means you may need to add padding of zeroes to
      ensure alignment.

   o  Update the Metadata length to the sum of each TLV's space plus 2
      bytes (for the 16-bit Metadata length field).

   The resulting packet is sent to the next LFB instance connected to
   the OUT1 LFB port; typically a port LFB.

   In the case of a failed lookup, the packet and associated metadata
   are sent out the EXCEPTIONOUT port with exceptionID of
   EncapTableLookupFailed [RFC6956].  Note that the EXCEPTIONOUT LFB
   port is merely an abstraction and an implementation may in fact drop
   packets as described above.
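
   A non-normative sketch of two of the egress steps above, applying
   the MetaFilterList whitelist and computing the Metadata length; the
   helper names are illustrative:

   def filter_metadata(meta_filter_list, metadata):
       """Keep only Metadatum whose metaid is whitelisted; an absent
       (None) MetaFilterList passes everything through."""
       if meta_filter_list is None:
           return dict(metadata)
       return {mid: v for mid, v in metadata.items()
               if mid in meta_filter_list}

   def metadata_length(metadata):
       """Sum of each 32-bit aligned TLV's space plus 2 bytes for the
       16-bit Metadata length field itself."""
       total = 2
       for value in metadata.values():
           tlv = 4 + len(value)       # 16-bit type + 16-bit length + value
           total += tlv + (-tlv % 4)  # pad each TLV to 32-bit alignment
       return total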

6.1.2.  Ingress Processing

   An ingressing inter-FE LFB packet is recognized by inspecting the
   ethertype, and optionally the destination and source MAC addresses.
   A matching packet is mapped to an LFB instance port in the
   IngressInGroup.  The IFETable table row entry matching the LFB
   instance port may have optionally programmed metadata filters.  In
   such a case the ingress processing should use the metadata filters
   as a whitelist of which Metadatum are to be allowed.

   o  Increment statistics for packet and byte count observed.

   o  Look at the metadata length field and walk the packet data,
      extracting the metadata values from the TLVs.  For each Metadatum
      extracted, in the presence of metadata filters the metaid is
      compared against the relevant IFETable row metafilter list.  If
      the Metadatum is recognized and is allowed by the filter, the
      corresponding implementation Metadatum field is set.  If an
      unknown Metadatum id is encountered, or if the metaid is not
      found in the allowed filter list, the implementation is expected
      to ignore it, increment the packet error statistic and proceed
      with processing other Metadatum.

   o  Upon completion of processing all the metadata, the inter-FE LFB
      instance resets the data point to the original payload, i.e.
      skips the IFE header information.  At this point the original
      packet
      that was passed to the egress Inter-FE LFB at the source FE is
      reconstructed.  This data is then passed along with the
      reconstructed metadata downstream to the next LFB instance in the
      graph.

   In the case of processing failure of either ingress or egress
   positioning of the LFB, the packet and metadata are sent out the
   EXCEPTIONOUT LFB port with appropriate error id.  Note that the
   EXCEPTIONOUT LFB port is merely an abstraction and implementation may
   in fact drop packets as described above.
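
   A non-normative sketch of the ingress TLV walk, consistent with the
   encoding sketch in Section 5.2 (and with the same illustrative
   assumption that a TLV's length covers its 4-byte header plus value,
   excluding padding):

   import struct

   def decode_ife_payload(buf, allowed=None):
       """Given the bytes following the Ethernet header, return
       ({metaid: value}, original packet data), skipping metaids not
       in the optional 'allowed' whitelist."""
       (meta_len,) = struct.unpack_from("!H", buf, 0)
       metadata, off = {}, 2
       while off < meta_len:
           metaid, length = struct.unpack_from("!HH", buf, off)
           if length < 4:
               break                        # malformed TLV, stop walking
           value = buf[off + 4:off + length]
           if allowed is None or metaid in allowed:
               metadata[metaid] = value
           off += length + (-length % 4)    # TLVs are 32-bit aligned
       return metadata, buf[meta_len:]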

6.2.  Components

   There are two LFB components accessed by the CE.  The reader is
   asked to refer to the definitions in Figure 8.

   The first component, populated by the CE, is an array known as the
   IFETable table.  The array rows are made up of the IFEInfo
   structure.  The IFEInfo structure constitutes: the optional NEID,
   the optional IFETYPE, the optionally present StatId, the Destination
   MAC address (DSTFE), the Source MAC address (SRCFE), and the
   optionally present array of allowed Metaids (MetaFilterList).

   The second component (ID 2), populated by the FE and read by the CE,
   is an indexed array known as the IFEStats table.  Each IFEStats row
   carries statistics information in the basic stats structure bstats.

   A note about the relationship between the IFETable table and the
   IFEStats table: An implementation may choose to map between an
   IFETable row and an IFEStats table row using the StatId entry in the
   matching IFETable row.  In that case the IFETable StatId must be
   present.  An alternative implementation may map an IFETable row to
   an IFEStats table row at provisioning time.  Yet another alternative
   implementation may choose not to use the StatId and instead use the
   IFETable row index as the IFEStats index.  For these reasons the
   StatId component is optional.
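
   A non-normative sketch of these two components as plain data
   structures; the field names mirror the XML model in Figure 8, and
   the mapping helper illustrates the StatId options described above:

   from dataclasses import dataclass
   from typing import List, Optional

   @dataclass
   class IFEInfo:                          # one IFETable row
       DSTFE: bytes                        # destination MAC address
       SRCFE: bytes                        # source MAC address
       NEID: Optional[int] = None          # optional 12-bit VLAN id
       IFETYPE: Optional[int] = None       # optional ethertype override
       StatId: Optional[int] = None        # optional index into IFEStats
       MetaFilterList: Optional[List[int]] = None   # allowed metaids

   @dataclass
   class BStats:                           # bstats
       bytes: int = 0
       packets: int = 0
       errors: int = 0

   def stats_index(ife_row_index, row):
       """Use StatId when present, else reuse the IFETable row index."""
       return row.StatId if row.StatId is not None else ife_row_index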

6.3.  Inter-FE LFB XML Model

   <LFBLibrary xmlns="urn:ietf:params:xml:ns:forces:lfbmodel:1.1"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          provides="IFE">
     <frameDefs>

        <frameDef>
            <name>PacketAny</name>
            <synopsis>Arbitrary Packet</synopsis>
        </frameDef>
        <frameDef>
            <name>InterFEFrame</name>
            <synopsis>
                Ethernet Frame with encapsulated IFE information
            </synopsis>
        </frameDef>

     </frameDefs>

     <dataTypeDefs>

       <dataTypeDef>
          <name>bstats</name>
          <synopsis>Basic stats</synopsis>
       <struct>
       <component componentID="1">
        <name>bytes</name>
        <synopsis>The total number of bytes seen</synopsis>
        <typeRef>uint64</typeRef>
       </component>

       <component componentID="2">
        <name>packets</name>
        <synopsis>The total number of packets seen</synopsis>
        <typeRef>uint32</typeRef>
       </component>

       <component componentID="3">
        <name>errors</name>
        <synopsis>The total number of packets with errors</synopsis>
        <typeRef>uint32</typeRef>
       </component>
       </struct>

      </dataTypeDef>

        <dataTypeDef>
           <name>IFEInfo</name>
       <synopsis>Describing IFE table row Information</synopsis>
           <struct>
              <component componentID="1">
                <name>NEID</name>
                 <synopsis>
                      The 12-bit VLAN Id part of the 802.1Q TCI field.
                 </synopsis>
                <optional/>
                <typeRef>uint16</typeRef>
              </component>
              <component componentID="2">
                <name>IFETYPE</name>
            <synopsis>
            the ethernet type to be used for outgoing IFE frame
            </synopsis>
            <optional/>
                <typeRef>uint16</typeRef>

              </component>
              <component componentID="3">
                <name>StatId</name>
            <synopsis>
            the Index into the stats table
            </synopsis>
            <optional/>
                <typeRef>uint32</typeRef>
              </component>
              <component componentID="4">
                <name>DSTFE</name>
            <synopsis>
                the destination MAC address of destination FE
            </synopsis>
                <optional/>
                <typeRef>byte[6]</typeRef>
              </component>
              <component componentID="5">
                <name>SRCFE</name>
            <synopsis>
                the source MAC address used for the source FE
            </synopsis>
                <optional/>
                <typeRef>byte[6]</typeRef>
              </component>
              <component componentID="6">
                <name>MetaFilterList</name>
            <synopsis>
                the allowed metadata filter table
            </synopsis>
            <optional/>
                <array type="variable-size">
                  <typeRef>uint32</typeRef>
                </array>
               </component>

           </struct>
        </dataTypeDef>

     </dataTypeDefs>

      <metadataDefs>
         <metadataDef>
           <name>InterFEid</name>
           <synopsis>
                   Metadata identifying the index of the IFE table
           </synopsis>
             <metadataID>16</metadataID>
             <typeRef>uint32</typeRef>
          </metadataDef>
      </metadataDefs>

     <LFBClassDefs>
       <LFBClassDef LFBClassID="18">
         <name>IFE</name>
         <synopsis>
            This LFB describes IFE connectivity parameterization
         </synopsis>
         <version>1.0</version>

       <inputPorts>
          <inputPort group="true">
           <name>EgressInGroup</name>
           <synopsis>
               The input port group of the egress side.
               It expects any type of Ethernet frame.
           </synopsis>
           <expectation>
            <frameExpected>
             <ref>PacketAny</ref>
            </frameExpected>
           </expectation>
          </inputPort>

          <inputPort group="true">
           <name>IngressInGroup</name>
           <synopsis>
               The input port group of the ingress side.
               It expects an interFE encapsulated Ethernet frame.
           </synopsis>
           <expectation>
            <frameExpected>
             <ref>InterFEFrame</ref>
            </frameExpected>
           </expectation>
          </inputPort>

          </inputPorts>

          <outputPorts>

            <outputPort>
              <name>OUT1</name>
              <synopsis>
                   The output port of the egress side.
              </synopsis>
              <product>
                 <frameProduced>
                   <ref>InterFEFrame</ref>
                 </frameProduced>
              </product>
           </outputPort>

           <outputPort>
             <name>OUT2</name>
             <synopsis>
                 The output port of the ingress side.
             </synopsis>
             <product>
                <frameProduced>
                   <ref>PacketAny</ref>
                </frameProduced>
             </product>
          </outputPort>

          <outputPort>
            <name>EXCEPTIONOUT</name>
            <synopsis>
               The exception handling path
            </synopsis>
            <product>
               <frameProduced>
                  <ref>PacketAny</ref>
               </frameProduced>
               <metadataProduced>
                 <ref>ExceptionID</ref>
               </metadataProduced>
            </product>
         </outputPort>

      </outputPorts>

      <components>

         <component componentID="1" access="read-write">
            <name>IFETable</name>
            <synopsis>
               the table of all InterFE relations
            </synopsis>
            <array type="variable-size">
               <typeRef>IFEInfo</typeRef>
            </array>
         </component>

        <component componentID="2"> componentID="2" access="read-only">
          <name>IFEStats</name>
          <synopsis>
           the stats corresponding to the IFETable table
          </synopsis>
          <typeRef>bstats</typeRef>
        </component>

     </components>

    </LFBClassDef>

   </LFBClassDefs>

   </LFBLibrary>

                        Figure 8: Inter-FE LFB XML
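
   As a non-normative illustration, an FE implementation might
   represent one row of the IFETable defined in Figure 8 with a C
   structure along the following lines.  This is a sketch only: the
   structure, its field names, and the MAX_IFE_METADATA bound are
   assumptions local to this example and are not part of the LFB class
   definition.

   #include <stdint.h>

   #define MAX_IFE_METADATA 32  /* assumed local bound on the
                                   MetaFilterList; the model allows a
                                   variable-size array */

   /* One IFETable row, mirroring the IFEInfo struct of Figure 8. */
   struct ife_info {
       uint16_t ifetype;        /* IFETYPE: Ethernet type used for
                                   outgoing IFE frames (optional)    */
       uint32_t stat_id;        /* StatId: index into the stats table */
       uint8_t  dstfe[6];       /* DSTFE: destination FE MAC address  */
       uint8_t  srcfe[6];       /* SRCFE: source FE MAC address       */
       uint32_t meta_filter[MAX_IFE_METADATA];
                                /* MetaFilterList: allowed metadata
                                   IDs                                */
       uint32_t meta_count;     /* number of valid MetaFilterList
                                   entries (local bookkeeping only)   */
   };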

7.  Acknowledgements

   The authors would like to thank Joel Halpern and Dave Hood for the
   stimulating discussions.  Evangelos Haleplidis shepherded and
   contributed to improving this document.  Alia Atlas was the AD
   sponsor of this document and did a tremendous job of critiquing it.
   The authors are grateful to Joel Halpern, in his role as the Routing
   Area reviewer, for shaping the content of this document.

8.  IANA Considerations

   This memo includes one IANA request within the registry
   https://www.iana.org/assignments/forces

   The request is to the sub-registry "Logical Functional Block (LFB)
   Class Names and Class Identifiers" for the reservation of the LFB
   class name IFE with LFB classid 18, version 1.0.

   +--------------+---------+---------+-------------------+------------+
   |  LFB Class   |   LFB   |   LFB   |    Description    | Reference  |
   |  Identifier  |  Class  | Version |                   |            |
   |              |   Name  |         |                   |            |
   +--------------+---------+---------+-------------------+------------+
   |      18      |   IFE   |   1.0   |   An IFE LFB to   |    This    |
   |              |         |         |    standardize    |  document  |
   |              |         |         |  inter-FE LFB for |            |
   |              |         |         |   ForCES Network  |            |
   |              |         |         |      Elements     |            |
   +--------------+---------+---------+-------------------+------------+

     Logical Functional Block (LFB) Class Names and Class Identifiers

9.  IEEE Assignment Considerations

   This memo includes a request for a new Ethernet protocol type as
   described in Section 5.2.

10.  Security Considerations

   The FEs involved in the Inter-FE LFB belong to the same Network
   Element (NE) and are within the scope of a single administrative
   private network.  Trust in the control policy and its treatment in
   the datapath already exists.

   This document does not alter either the ForCES Model [RFC5812] or
   the ForCES Protocol [RFC5810].  As such, it has no impact on their
   security considerations.  This document simply defines the
   operational parameters and capabilities of an LFB that performs LFB
   class instance extensions across nodes under a single
   administrative control.  This document does not attempt to analyze
   the presence or possibility of security interactions created by
   allowing LFB graph extension on packets.  Any such issues, if they
   exist, should be resolved by the designers of the particular data
   path; i.e., they are not the responsibility of the general
   mechanism outlined in this document.  One option available to such
   designers for protecting the Ethernet transport is the use of IEEE
   802.1AE Media Access Control Security [ieee8021ae], which provides
   encryption and authentication.

11.  References

11.1.  Normative References

   [RFC5810]  Doria, A., Ed., Hadi Salim, J., Ed., Haas, R., Ed.,
              Khosravi, H., Ed., Wang, W., Ed., Dong, L., Gopal, R., and
              J. Halpern, "Forwarding and Control Element Separation
              (ForCES) Protocol Specification", RFC 5810, DOI 10.17487/
              RFC5810, March 2010,
              <http://www.rfc-editor.org/info/rfc5810>.

   [RFC5811]  Hadi Salim, J. and K. Ogawa, "SCTP-Based Transport Mapping
              Layer (TML) for the Forwarding and Control Element
              Separation (ForCES) Protocol", RFC 5811, DOI 10.17487/
              RFC5811, March 2010,
              <http://www.rfc-editor.org/info/rfc5811>.

   [RFC5812]  Halpern, J. and J. Hadi Salim, "Forwarding and Control
              Element Separation (ForCES) Forwarding Element Model", RFC
              5812, DOI 10.17487/RFC5812, March 2010,
              <http://www.rfc-editor.org/info/rfc5812>.

   [RFC7391]  Hadi Salim, J., "Forwarding and Control Element Separation
              (ForCES) Protocol Extensions", RFC 7391, DOI 10.17487/
              RFC7391, October 2014,
              <http://www.rfc-editor.org/info/rfc7391>.

   [RFC7408]  Haleplidis, E., "Forwarding and Control Element Separation
              (ForCES) Model Extension", RFC 7408, DOI 10.17487/RFC7408,
              November 2014, <http://www.rfc-editor.org/info/rfc7408>.

11.2.  Informative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/
              RFC2119, March 1997,
              <http://www.rfc-editor.org/info/rfc2119>.

   [RFC3746]  Yang, L., Dantu, R., Anderson, T., and R. Gopal,
              "Forwarding and Control Element Separation (ForCES)
              Framework", RFC 3746, DOI 10.17487/RFC3746, April 2004,
              <http://www.rfc-editor.org/info/rfc3746>.

   [RFC5405]  Eggert, L. and G. Fairhurst, "Unicast UDP Usage Guidelines
              for Application Designers", BCP 145, RFC 5405, DOI
              10.17487/RFC5405, November 2008,
              <http://www.rfc-editor.org/info/rfc5405>.

   [RFC6956]  Wang, W., Haleplidis, E., Ogawa, K., Li, C., and J.
              Halpern, "Forwarding and Control Element Separation
              (ForCES) Logical Function Block (LFB) Library", RFC 6956,
              DOI 10.17487/RFC6956, June 2013,
              <http://www.rfc-editor.org/info/rfc6956>.

   [brcm-higig]
              "Higig",
              , "HiGig",
              <http://www.broadcom.com/products/brands/HiGig>.

   [circuit-b]
              Fairhurst, G., "Network Transport Circuit Breakers", Sep
              2015, <https://tools.ietf.org/html/draft-fairhurst-tsvwg-
              circuit-breaker-04>.

   [ieee8021ae]
              , "IEEE Standard for Local and metropolitan area networks
              Media Access Control (MAC) Security", IEEE 802.1AE-2006,
              Aug 2006.

   [linux-tc]
              Hadi Salim, J., "Linux Traffic Control Classifier-Action
              Subsystem Architecture", netdev 01, Feb 2015.

   [tc-ife]   Hadi Salim, J. and D. Joachimpillai, "Distributing Linux
              Traffic Control Classifier-Action Subsystem", netdev 01,
              Feb 2015.

   [vxlan-udp]
              , "iproute2 and kernel code (drivers/net/vxlan.c)",
              <https://www.kernel.org/pub/linux/utils/net/iproute2/>.

Authors' Addresses

   Damascane M. Joachimpillai
   Verizon
   60 Sylvan Rd
   Waltham, Mass.  02451
   USA

   Email: damascene.joachimpillai@verizon.com

   Jamal Hadi Salim
   Mojatatu Networks
   Suite 200, 15 Fitzgerald Rd.
   Ottawa, Ontario  K2H 9G1
   Canada

   Email: hadi@mojatatu.com