Internet Engineering Task Force (IETF)                  D. Joachimpillai
Request for Comments: 8013                                       Verizon
Category: Standards Track                                   J. Hadi Salim
ISSN: 2070-1721                                         Mojatatu Networks
                                                            February 2017


          Forwarding and Control Element Separation (ForCES)
                 Inter-FE Logical Functional Block (LFB)

Abstract

   This document describes how to extend the Forwarding and Control
   Element Separation (ForCES) Logical Functional Block (LFB) topology
   across Forwarding Elements (FEs) by defining the inter-FE LFB class.
   The inter-FE LFB class provides the ability to pass data and
   metadata across FEs without needing any changes to the ForCES
   specification.  The document focuses on Ethernet transport.
Status of This Memo

   This is an Internet Standards Track document.

   This document is a product of the Internet Engineering Task Force
   (IETF).  It represents the consensus of the IETF community.  It has
   received public review and has been approved for publication by the
   Internet Engineering Steering Group (IESG).  Further information on
   Internet Standards is available in Section 2 of RFC 7841.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   http://www.rfc-editor.org/info/rfc8013.
Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1. Introduction
   2. Terminology and Conventions
      2.1. Requirements Language
      2.2. Definitions
   3. Problem Scope and Use Cases
      3.1. Assumptions
      3.2. Sample Use Cases
           3.2.1. Basic IPv4 Router
                  3.2.1.1. Distributing the Basic IPv4 Router
           3.2.2. Arbitrary Network Function
                  3.2.2.1. Distributing the Arbitrary Network Function
   4. Inter-FE LFB Overview
      4.1. Inserting the Inter-FE LFB
   5. Inter-FE Ethernet Connectivity
      5.1. Inter-FE Ethernet Connectivity Issues
           5.1.1. MTU Consideration
           5.1.2. Quality-of-Service Considerations
           5.1.3. Congestion Considerations
      5.2. Inter-FE Ethernet Encapsulation
   6. Detailed Description of the Ethernet Inter-FE LFB
      6.1. Data Handling
           6.1.1. Egress Processing
           6.1.2. Ingress Processing
      6.2. Components
      6.3. Inter-FE LFB XML Model
   7. IANA Considerations
   8. IEEE Assignment Considerations
   9. Security Considerations
   10. References
       10.1. Normative References
       10.2. Informative References
   Acknowledgements
   Authors' Addresses
1.  Introduction

   In the ForCES architecture, a packet service can be modeled by
   composing a graph of one or more LFB instances.  The reader is
   referred to the details in the ForCES model [RFC5812].

   The ForCES model describes the processing within a single Forwarding
   Element (FE) in terms of Logical Functional Blocks (LFBs), including
   provision for the Control Element (CE) to establish and modify that
   processing sequence, and the parameters of the individual LFBs.

   Under some circumstances, it would be beneficial to be able to
   extend this view and the resulting processing across more than one
   FE.  This may be in order to achieve scale by splitting the
   processing across elements or to utilize specialized hardware
   available on specific FEs.

   Given that the ForCES inter-LFB architecture calls for the ability
   to pass metadata between LFBs, it is imperative to define mechanisms
   to extend that existing feature and allow passing the metadata
   between LFBs across FEs.

   This document describes how to extend the LFB topology across FEs,
   i.e., inter-FE connectivity, without needing any changes to the
   ForCES definitions.  It focuses on using Ethernet as the
   interconnection between FEs.
2. Terminology and Conventions
2.1. Requirements Language
   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].
2.2.  Definitions

   This document depends on the terms (below) defined in several ForCES
   documents: [RFC3746], [RFC5810], [RFC5811], [RFC5812], [RFC7391],
   and [RFC7408].

      Control Element (CE)
      Forwarding Element (FE)
      FE Model
      LFB (Logical Functional Block) Class (or type)
      LFB Instance
      LFB Model
      LFB Metadata
      ForCES Component
      LFB Component
      ForCES Protocol Layer (ForCES PL)
      ForCES Protocol Transport Mapping Layer (ForCES TML)

3.  Problem Scope and Use Cases
   The scope of this document is to solve the challenge of passing
   ForCES-defined metadata alongside packet data across FEs (be they
   physical or virtual) for the purpose of distributing the LFB
   processing.
3.1.  Assumptions

   o  The FEs involved in the inter-FE LFB belong to the same Network
      Element (NE) and are within a single administrative private
      network that is in close proximity.

   o  The FEs are already interconnected using Ethernet.  We focus on
      Ethernet because it is commonly used for FE interconnection.
      Other higher transports (such as UDP over IP) or lower transports
      could be defined to carry the data and metadata, but these cases
      are not addressed in this document.
3.2.  Sample Use Cases

   To illustrate the problem scope, we present two use cases where we
   start with a single FE running all of the LFB functionality and then
   split it into multiple FEs achieving the same end goals.
3.2.1.  Basic IPv4 Router

   A sample LFB topology depicted in Figure 1 demonstrates a service
   graph for delivering a basic IPv4-forwarding service within one FE.
   For the purpose of illustration, the diagram shows LFB classes as
   graph nodes instead of multiple LFB class instances.

   Since the purpose of the illustration in Figure 1 is to showcase how
   data and metadata are sent down or upstream on a graph of LFB
   instances, it abstracts out any ports in both directions and talks
   about a generic ingress and egress LFB.  Again, for illustration
   purposes, the diagram does not show exception or error paths.  Also
   left out are details on Reverse Path Filtering, ECMP, multicast
   handling, etc.  In other words, this is not meant to be a complete
   description of an IPv4-forwarding application; for a more complete
   example, please refer to the LFBLibrary document [RFC6956].

   The output of the ingress LFB(s) coming into the IPv4 Validator LFB
   will have both the IPv4 packets and, depending on the
   implementation, a variety of ingress metadata such as offsets into
   the different headers, any classification metadata, physical and
   virtual ports encountered, tunneling information, etc.  These
   metadata are lumped together as "ingress metadata".

   Once the IPv4 validator vets the packet (for example, it ensures
   that there is no expired TTL), it feeds the packet and inherited
   metadata into the IPv4 unicast LPM (Longest-Prefix-Matching) LFB.
+---------+              +-----------+              +-----------+
| Ingress |  IPv4 pkt    |   IPv4    |  IPv4 pkt    |IPv4 Ucast |
|   LFB   +------------->| Validator +------------->| LPM LFB   |
|         |  + ingress   |    LFB    |  + ingress   |           |
+---------+  metadata    +-----------+  metadata    +-----+-----+
                                                          |
                                           IPv4 pkt +     |
                                           {ingress +     |
                                           NHinfo}        |
                                           metadata       |
                                                          V
+---------+                                         +-----------+
| Egress  |  IPv4 pkt + {ingress + NHdetails}       |   IPv4    |
|   LFB   |<----------------------------------------+  NextHop  |
|         |  metadata                               |    LFB    |
+---------+                                         +-----------+

          Figure 1: Basic IPv4 Packet Service LFB Topology
   The IPv4 unicast LPM LFB does an LPM lookup on the IPv4 FIB using
   the destination IP address as a search key.  The result is typically
   a next-hop selector, which is passed downstream as metadata.

   The NextHop LFB receives the IPv4 packet with associated next-hop
   (NH) information metadata.  The NextHop LFB consumes the NH
   information metadata and derives a table index from it to look up
   the next-hop table in order to find the appropriate egress
   information.  The lookup result is used to build the next-hop
   details to be used downstream on the egress.  This information may
   include any source and destination information (for our purposes,
   which Media Access Control (MAC) addresses to use) as well as egress
   ports.  (Note: It is also at this LFB where, typically, the
   forwarding TTL decrement and IP checksum recalculation occur.)
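
   As a non-normative illustration of the LPM step, the following
   minimal Python sketch performs a longest-prefix match over a toy FIB
   and returns the next-hop selector that would be passed downstream as
   NHinfo metadata.  The FIB contents and selector values are invented
   for the example.

      import ipaddress

      # Toy FIB: prefix -> next-hop selector (illustrative values only).
      FIB = {
          ipaddress.ip_network("192.0.2.0/24"): 1,
          ipaddress.ip_network("192.0.2.128/25"): 2,
          ipaddress.ip_network("0.0.0.0/0"): 0,
      }

      def lpm_lookup(dst_ip: str) -> int:
          """Return the selector of the longest matching prefix."""
          addr = ipaddress.ip_address(dst_ip)
          best = max((net for net in FIB if addr in net),
                     key=lambda net: net.prefixlen)
          return FIB[best]

      # 192.0.2.200 matches both the /24 and the /25; the /25 wins.
      assert lpm_lookup("192.0.2.200") == 2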
   The details of the egress LFB are considered out of scope for this
   discussion.  Suffice it to say that somewhere within or beyond the
   Egress LFB, the IPv4 packet will be sent out a port (e.g., Ethernet,
   virtual or physical).
3.2.1.1.  Distributing the Basic IPv4 Router

   Figure 2 demonstrates one way that the router LFB topology in
   Figure 1 may be split across two FEs (e.g., two Application-Specific
   Integrated Circuits (ASICs)).  Figure 2 shows the LFB topology split
   across FEs after the IPv4 unicast LPM LFB.
                                 FE1
+-------------------------------------------------------------------+
| +---------+              +-----------+              +-----------+ |
| | Ingress |  IPv4 pkt    |   IPv4    |  IPv4 pkt    |IPv4 Ucast | |
| |   LFB   +------------->| Validator +------------->| LPM LFB   | |
| |         |  + ingress   |    LFB    |  + ingress   |           | |
| +---------+  metadata    +-----------+  metadata    +-----+-----+ |
|                                                           |       |
+-----------------------------------------------------------|-------+
                                                            |
                                            IPv4 packet +   |
                                         {ingress + NHinfo} |
                                            metadata        |
 FE2                                                        |
+-----------------------------------------------------------|-------+
|                                                           V       |
| +---------+                                         +-----------+ |
| | Egress  |  IPv4 pkt + {ingress + NHdetails}       |   IPv4    | |
| |   LFB   |<----------------------------------------+  NextHop  | |
| |         |  metadata                               |    LFB    | |
| +---------+                                         +-----------+ |
+-------------------------------------------------------------------+

          Figure 2: Split IPv4 Packet Service LFB Topology
   Some proprietary interconnections (for example, Broadcom HiGig over
   XAUI [brcm-higig]) are known to exist to carry both the IPv4 packet
   and the related metadata between the IPv4 Unicast LFB and the
   IPv4NextHop LFB across the two FEs.

   This document defines the inter-FE LFB, a standard mechanism for
   encapsulating, generating, receiving, and decapsulating packets and
   associated metadata between FEs over Ethernet.
3.2.2.  Arbitrary Network Function

   In this section, we show an example of an arbitrary Network Function
   that is more coarsely grained in terms of functionality.  Each
   Network Function may constitute more than one LFB.
                              FE1
+--------------------------------------------------------------+
| +----------+                +------+                +------+ |
| | Network  |  pkt           |      |  pkt           |      | |
| | Function +--------------->| NF2  +--------------->| NF3  | |
| |    1     |  + NF1         |      |  + NF1/2       |      | |
| +----------+  metadata      +------+  metadata      +--+---+ |
|      ^                                                 |     |
|      |                                                 |     |
+--------------------------------------------------------|-----+
                                                         V

      Figure 3: A Network Function Service Chain within One FE
   The setup in Figure 3 is typical of most packet processing boxes
   where we have functions like deep packet inspection (DPI), NAT,
   Routing, etc., connected in such a topology to deliver a packet
   processing service to flows.
3.2.2.1.  Distributing the Arbitrary Network Function

   The setup in Figure 3 can be split across three FEs instead, as
   demonstrated in Figure 4.  This could be motivated by scale-out
   reasons or because different vendors provide different functionality
   that is plugged in to deliver the overall service.  The end result
   is having the same packet service delivered to the different flows
   passing through.
    FE1                           FE2                  FE3
+----------+                +------+                +------+
| Network  |  pkt           |      |  pkt           |      |
| Function +--------------->| NF2  +--------------->| NF3  |
|    1     |  + NF1         |      |  + NF1/2       |      |
+----------+  metadata      +------+  metadata      +--+---+
     ^                                                  |
     |                                                  V

    Figure 4: A Network Function Service Chain Distributed across
                            Multiple FEs
4.  Inter-FE LFB Overview

   We address the inter-FE connectivity requirements by defining the
   inter-FE LFB class.  Using a standard LFB class definition implies
   no change to the basic ForCES architecture in the form of the core
   LFBs (FE Protocol or Object LFBs).  This design choice was made
   after considering an alternative approach that would have required
   changes to both the FE Object capabilities (SupportedLFBs) and the
   LFBTopology component to describe the inter-FE connectivity
   capabilities as well as the runtime topology of the LFB instances.
4.1.  Inserting the Inter-FE LFB

   The distributed LFB topology described in Figure 2 is re-illustrated
   in Figure 5 to show the topology location where the inter-FE LFB
   would fit in.

   As can be observed in Figure 5, the same details passed between the
   IPv4 unicast LPM LFB and the IPv4 NH LFB are passed to the egress
   side of the inter-FE LFB.  This information is illustrated as a
   multiplicity of inputs into the egress inter-FE LFB instance.  Each
   input represents a unique set of selection information.
                                 FE1
+-------------------------------------------------------------------+
| +---------+              +-----------+              +-----------+ |
| | Ingress |  IPv4 pkt    |   IPv4    |  IPv4 pkt    |IPv4 Ucast | |
| |   LFB   +------------->| Validator +------------->| LPM LFB   | |
| |         |  + ingress   |    LFB    |  + ingress   |           | |
| +---------+  metadata    +-----------+  metadata    +-----+-----+ |
|                                                           |       |
|                                IPv4 pkt + metadata        |       |
|                                {ingress + NHinfo}         |       |
|                                                           |       |
|                                                 ..|..|..|..|..    |
|                                                 +-V--V--V--V-+    |
|                                                 |   Egress   |    |
|                                                 |  Inter-FE  |    |
|                                                 |    LFB     |    |
|                                                 +------+-----+    |
+----------------------------------------------------------|--------+
                                                           |
                              Ethernet frame with:         |
                         IPv4 packet data and metadata     |
                         {ingress + NHinfo + Inter-FE info}|
 FE2                                                       |
+----------------------------------------------------------|--------+
|                                                          V        |
|                                                ..|..|..|..|..     |
|                                                +-V--V--V--V-+     |
|                                                |  Ingress   |     |
|                                                |  Inter-FE  |     |
|                                                |    LFB     |     |
|                                                +-----+------+     |
|                                                      |            |
|                         IPv4 pkt + metadata          |            |
|                         {ingress + NHinfo}           |            |
|                                                      |            |
| +---------+                                    +-----V-----+      |
| | Egress  |  IPv4 pkt + {ingress +             |   IPv4    |      |
| |   LFB   |<-----------------------------------+  NextHop  |      |
| |         |  NHdetails} metadata               |    LFB    |      |
| +---------+                                    +-----------+      |
+-------------------------------------------------------------------+

       Figure 5: Split IPv4-Forwarding Service with Inter-FE LFB
   The egress of the inter-FE LFB uses the received packet and metadata
   to select details for encapsulation when sending messages towards
   the selected neighboring FE.  These details include what to
   communicate as the source and destination FEs (abstracted as MAC
   addresses as described in Section 5.2); in addition, the original
   metadata may be passed along with the original IPv4 packet.

   On the ingress side of the inter-FE LFB, the received packet and its
   associated metadata are used to decide the packet graph
   continuation.  This includes which of the original metadata to keep
   and on which next LFB class instance to continue processing.  In
   Figure 5, an IPv4NextHop LFB instance is selected and the
   appropriate metadata is passed to it.

   The ingress side of the inter-FE LFB consumes some of the
   information passed and passes the IPv4 packet, along with the
   ingress and NHinfo metadata, to the IPv4NextHop LFB as was done
   earlier in both Figures 1 and 2.
5.  Inter-FE Ethernet Connectivity

   Section 5.1 describes some of the issues related to using Ethernet
   as the transport and how we mitigate them.

   Section 5.2 defines a payload format that is to be used over
   Ethernet.  An existing implementation of this specification that
   runs on top of Linux Traffic Control [linux-tc] is described in
   [tc-ife].
5.1.  Inter-FE Ethernet Connectivity Issues

   There are several issues that may occur due to using direct Ethernet
   encapsulation that need consideration.
5.1.1.  MTU Consideration

   Because we are adding data to existing Ethernet frames, MTU issues
   may arise.  We recommend:

   o  Using large MTUs when possible (for example, with jumbo frames).

   o  Limiting the amount of metadata that could be transmitted; our
      definition allows for filtering of select metadata to be
      encapsulated in the frame, as described in Section 6.  We
      recommend sizing the egress port MTU so as to leave space for the
      maximum total size of metadata to be allowed between FEs (a
      sizing sketch follows this list).  In such a setup, the port is
      configured to "lie" to the upper layers by claiming to have a
      lower MTU than it is capable of.  Setting the MTU can be achieved
      by ForCES control of the port LFB (or some other configuration).
      In essence, the control plane, when explicitly making a decision
      for the MTU settings of the egress port, is implicitly deciding
      how much metadata will be allowed.  Caution needs to be exercised
      on how low the resulting reported link MTU could be: for IPv4
      packets, the minimum size is 64 octets [RFC791], and for IPv6,
      the minimum size is 1280 octets [RFC2460].
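
   To make the trade-off in the second recommendation concrete, the
   following Python sketch computes the MTU that the egress port could
   report to the upper layers, given its physical MTU and the maximum
   metadata budget the operator wants to allow between FEs.  The
   numbers are assumptions used only for illustration and are not
   requirements of this specification.

      def reported_mtu(port_mtu: int, max_meta_tlv_bytes: int) -> int:
          """MTU to advertise upward so that the largest packet plus
          the IFE metadata region (2-byte metadata length field plus
          the 32-bit-aligned metadata TLVs) still fits in one frame."""
          mtu = port_mtu - (2 + max_meta_tlv_bytes)
          # Guard against reporting less than the IPv6 minimum MTU.
          assert mtu >= 1280, "too little room left for IPv6 traffic"
          return mtu

      # Example: 9000-byte jumbo frames with a 64-byte metadata budget.
      print(reported_mtu(9000, 64))   # -> 8934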
5.1.2.  Quality-of-Service Considerations

   A raw packet arriving at the inter-FE LFB (from upstream LFB class
   instances) may have Class-of-Service (CoS) metadata indicating how
   it should be treated from a Quality-of-Service perspective.

   The resulting Ethernet frame will eventually be (preferentially)
   treated by a downstream LFB (typically a port LFB instance), and the
   CoS marks will be honored in terms of priority.  In other words, the
   presence of the inter-FE LFB does not change the CoS semantics.
5.1.3.  Congestion Considerations

   Most of the traffic passing through FEs that utilize the inter-FE
   LFB is expected to be IP based, which is generally assumed to be
   congestion controlled [UDP-GUIDE].  For example, if congestion
   causes a TCP packet annotated with additional ForCES metadata to be
   dropped between FEs, the sending TCP can be expected to react in the
   same fashion as if that packet had been dropped at a different point
   on its path where ForCES is not involved.  For this reason,
   additional inter-FE congestion-control mechanisms are not specified.

   However, the increased packet size due to the addition of ForCES
   metadata is likely to require additional bandwidth on inter-FE links
   in comparison to what would be required to carry the same traffic
   without ForCES metadata.  Therefore, traffic engineering SHOULD be
   done when deploying inter-FE encapsulation.

   Furthermore, the inter-FE LFB MUST only be deployed within a single
   network (with a single network operator) or networks of an adjacent
   set of cooperating network operators where traffic is managed to
   avoid congestion.  These are Controlled Environments, as defined by
   Section 3.6 of [UDP-GUIDE].  Additional measures SHOULD be imposed
   to restrict the impact of inter-FE-encapsulated traffic on other
   traffic; for example:
   o  rate-limiting all inter-FE LFB traffic at an upstream LFB

   o  managing circuit breaking [circuit-b]

   o  isolating the inter-FE traffic, either via dedicated interfaces
      or VLANs
5.2.  Inter-FE Ethernet Encapsulation

   The Ethernet wire encapsulation is illustrated in Figure 6.  The
   process that leads to this encapsulation is described in Section 6.
   The resulting frame is 32-bit aligned.
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                    Destination MAC Address                   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |    Destination MAC Address    |      Source MAC Address       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      Source MAC Address                      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |      Inter-FE ethertype       |        Metadata length        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           TLV encoded Metadata ~~~..............~~           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           TLV encoded Metadata ~~~..............~~           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           Original packet data ~~................~~          |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                   Figure 6: Packet Format Definition
   The Ethernet header (illustrated in Figure 6) has the following
   semantics:

   o  The Destination MAC Address is used to identify the Destination
      FEID by the CE policy (as described in Section 6).

   o  The Source MAC Address is used to identify the Source FEID by the
      CE policy (as described in Section 6).

   o  The ethertype is used to identify the frame as inter-FE LFB type.
      Ethertype ED3E (base 16) is to be used.

   o  The 16-bit metadata length is used to describe the total encoded
      metadata length (including the 16 bits used to encode the
      metadata length).

   o  One or more 16-bit TLV-encoded metadatum follows the Metadata
      length field.  The TLV type identifies the metadata ID.  ForCES
      metadata IDs that have been registered with IANA will be used.

      All TLVs will be 32-bit-aligned.  We recognize that using a
      16-bit TLV restricts the metadata ID to 16 bits instead of a
      ForCES-defined component ID space of 32 bits if an Index-Length-
      Value (ILV) is used.  However, at the time of publication, we
      believe this is sufficient to carry all the information we need;
      the TLV approach has been selected because it saves us 4 bytes
      per metadatum transferred as compared to the ILV approach.

   o  The original packet data payload is appended at the end of the
      metadata as shown.
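
   The following minimal Python sketch is a non-normative illustration
   of the frame layout in Figure 6.  The metadata IDs and values are
   invented for the example, and the assumption that each TLV carries a
   16-bit length covering the whole TLV (followed by zero padding to a
   32-bit boundary) reflects one straightforward reading of the format;
   a real implementation would use the IANA-registered ForCES metadata
   IDs and the CE-provisioned MAC addresses.

      import struct

      IFE_ETHERTYPE = 0xED3E   # inter-FE LFB ethertype (ED3E base 16)

      def encode_tlv(meta_id: int, value: bytes) -> bytes:
          """16-bit type, 16-bit length, value, zero-padded to 32 bits."""
          tlv = struct.pack("!HH", meta_id, 4 + len(value)) + value
          return tlv + b"\x00" * ((-len(tlv)) % 4)

      def encapsulate(dst_mac: bytes, src_mac: bytes,
                      metadata: dict, packet: bytes) -> bytes:
          tlvs = b"".join(encode_tlv(m, v) for m, v in metadata.items())
          md_len = 2 + len(tlvs)   # includes the length field itself
          return (dst_mac + src_mac
                  + struct.pack("!HH", IFE_ETHERTYPE, md_len)
                  + tlvs + packet)

      # Two invented metadata IDs carrying 32-bit values, followed by a
      # placeholder for the original IPv4 packet bytes.
      frame = encapsulate(bytes(6 * [0x02]), bytes(6 * [0x04]),
                          {0x0001: struct.pack("!I", 42),
                           0x0002: struct.pack("!I", 7)},
                          packet=b"...original IPv4 packet...")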
6.  Detailed Description of the Ethernet Inter-FE LFB

   The Ethernet inter-FE LFB has two LFB input port groups and three
   LFB output ports, as shown in Figure 7.

   The inter-FE LFB defines two components used in aiding processing
   described in Section 6.1.
                     +-----------------+
   Inter-FE LFB      |                 |
   Encapsulated      |             OUT2+--> Decapsulated Packet
   ----------------->|IngressInGroup   |    + metadata
   Ethernet Frame    |                 |
                     |                 |
   raw Packet +      |             OUT1+--> Encapsulated Ethernet
   ----------------->|EgressInGroup    |    Frame
   Metadata          |                 |
                     |    EXCEPTIONOUT +--> ExceptionID, packet
                     |                 |    + metadata
                     +-----------------+

                      Figure 7: Inter-FE LFB
6.1.  Data Handling

   The inter-FE LFB (instance) can be positioned at the egress of a
   source FE.  Figure 5 illustrates an example source FE in the form of
   FE1.  In such a case, an inter-FE LFB instance receives, via port
   group EgressInGroup, a raw packet and associated metadata from the
   preceding LFB instances.  The input information is used to produce a
   selection of how to generate and encapsulate the new frame.  The set
   of all selections is stored in the LFB component IFETable described
   further below.  The processed encapsulated Ethernet frame will go
   out on OUT1 to a downstream LFB instance when processing succeeds or
   to the EXCEPTIONOUT port in the case of failure.

   The inter-FE LFB (instance) can be positioned at the ingress of a
   receiving FE.  Figure 5 illustrates an example destination FE in the
   form of FE2.  In such a case, an inter-FE LFB receives, via an LFB
   port in the IngressInGroup, an encapsulated Ethernet frame.
   Successful processing of the packet will result in a raw packet with
   associated metadata IDs going downstream to an LFB connected on
   OUT2.  On failure, the data is sent out EXCEPTIONOUT.
6.1.1.  Egress Processing

   The egress inter-FE LFB receives packet data and any accompanying
   metadatum at an LFB port of the LFB instance's input port group
   labeled EgressInGroup.

   The LFB implementation may use the incoming LFB port (within the LFB
   port group EgressInGroup) to map to a table index used to look up
   the IFETable table.

   If the lookup is successful, a matched table row that has the
   IFEInfo details is retrieved with the tuple (optional IFETYPE,
   optional StatId, Destination MAC address (DSTFE), Source MAC address
   (SRCFE), and optional metafilters).  The metafilters lists define a
   whitelist of which metadatum are to be passed to the neighboring FE.
   The inter-FE LFB will perform the following actions using the
   resulting tuple:

   o  Increment statistics for packet and byte count observed at the
      corresponding IFEStats entry.

   o  When the MetaFilterList is present, walk each received metadatum
      and apply it against the MetaFilterList.  If no legitimate
      metadata is found that needs to be passed downstream, then the
      processing stops and the packet and metadata are sent out the
      EXCEPTIONOUT port with the exceptionID of EncapTableLookupFailed
      [RFC6956].

   o  Check that the additional overhead of the Ethernet header and
      encapsulated metadata will not exceed MTU.  If it does, increment
      the error-packet-count statistics and send the packet and
      metadata out the EXCEPTIONOUT port with the exceptionID of
      FragRequired [RFC6956].

   o  Create the Ethernet header.

   o  Set the Destination MAC address of the Ethernet header with the
      value found in the DSTFE field.

   o  Set the Source MAC address of the Ethernet header with the value
      found in the SRCFE field.

   o  If the optional IFETYPE is present, set the ethertype to the
      value found in IFETYPE.  If IFETYPE is absent, then the standard
      inter-FE LFB ethertype ED3E (base 16) is used.

   o  Encapsulate each allowed metadatum in a TLV.  Use the metaID as
      the "type" field in the TLV header.  The TLV should be aligned to
      32 bits.  This means you may need to add a padding of zeroes at
      the end of the TLV to ensure alignment.

   o  Update the metadata length to the sum of each TLV's space plus 2
      bytes (a 16-bit space for the Metadata length field).

   The resulting packet is sent to the next LFB instance connected to
   the OUT1 LFB-port, typically a port LFB.

   In the case of a failed lookup, the original packet and associated
   metadata are sent out the EXCEPTIONOUT port with the exceptionID of
   EncapTableLookupFailed [RFC6956].  Note that the EXCEPTIONOUT LFB
   port is merely an abstraction and implementation may in fact drop
   packets as described above.
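
   A minimal, non-normative Python sketch of the egress steps above is
   given below.  It reuses the encode_tlv() helper sketched in
   Section 5.2; the flat parameter list stands in for an IFETable row,
   and the returned port name and exceptionID strings merely mirror the
   text.  This is one possible rendering, not a definition.

      import struct

      def egress_process(packet: bytes, metadata: dict, stats: dict,
                         dstfe: bytes, srcfe: bytes, port_mtu: int,
                         ifetype: int = 0xED3E, meta_filter=None):
          stats["packets"] += 1
          stats["bytes"] += len(packet)

          if meta_filter is not None:           # whitelist filtering
              metadata = {m: v for m, v in metadata.items()
                          if m in meta_filter}
              if not metadata:
                  return ("EXCEPTIONOUT", "EncapTableLookupFailed")

          tlvs = b"".join(encode_tlv(m, v) for m, v in metadata.items())
          if len(packet) + 2 + len(tlvs) > port_mtu:
              stats["errors"] += 1              # frame would exceed MTU
              return ("EXCEPTIONOUT", "FragRequired")

          frame = (dstfe + srcfe
                   + struct.pack("!HH", ifetype, 2 + len(tlvs))
                   + tlvs + packet)
          return ("OUT1", frame)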
6.1.2.  Ingress Processing

   An ingressing inter-FE LFB packet is recognized by inspecting the
   ethertype, and optionally the destination and source MAC addresses.
   A matching packet is mapped to an LFB instance port in the
   IngressInGroup.  The IFETable table row entry matching the LFB
   instance port may have optionally programmed metadata filters.  In
   such a case, the ingress processing should use the metadata filters
   as a whitelist of what metadatum is to be allowed.

   o  Increment statistics for packet and byte count observed.

   o  Look at the metadata length field and walk the packet data,
      extracting the metadata values from the TLVs.  For each metadatum
      extracted, in the presence of metadata filters, the metaID is
      compared against the relevant IFETable row metafilter list.  If
      the metadatum is recognized and allowed by the filter, the
      corresponding implementation Metadatum field is set.  If an
      unknown metadatum ID is encountered or if the metaID is not in
      the allowed filter list, then the implementation is expected to
      ignore it, increment the packet error statistic, and proceed
      processing other metadatum.

   o  Upon completion of processing all the metadata, the inter-FE LFB
      instance resets the data point to the original payload (i.e.,
      skips the IFE header information).  At this point, the original
      packet that was passed to the egress inter-FE LFB at the source
      FE is reconstructed.  This data is then passed along with the
      reconstructed metadata downstream to the next LFB instance in the
      graph.

   In the case of a processing failure of either ingress or egress
   positioning of the LFB, the packet and metadata are sent out the
   EXCEPTIONOUT LFB port with the appropriate error ID.  Note that the
   EXCEPTIONOUT LFB port is merely an abstraction and implementation
   may in fact drop packets as described above.
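
   A matching non-normative sketch of the ingress side is shown below;
   it undoes the encapsulation produced by the egress sketch and uses
   the same assumed TLV length semantics as the Section 5.2 sketch.

      import struct

      def ingress_process(frame: bytes, stats: dict, meta_filter=None):
          """Parse an IFE frame into (metadata dict, original packet)."""
          stats["packets"] += 1
          stats["bytes"] += len(frame)

          # Metadata length sits right after the 14-byte Ethernet header.
          md_len = struct.unpack_from("!H", frame, 14)[0]
          end = 16 + (md_len - 2)               # end of the TLV region
          metadata, off = {}, 16
          while off < end:
              meta_id, tlv_len = struct.unpack_from("!HH", frame, off)
              value = frame[off + 4: off + tlv_len]
              if meta_filter is None or meta_id in meta_filter:
                  metadata[meta_id] = value
              else:
                  stats["errors"] += 1          # unknown/filtered metaID
              off += tlv_len + ((-tlv_len) % 4) # skip 32-bit padding

          return metadata, frame[end:]          # original payload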
6.2.  Components

There are two LFB components accessed by the CE.  The reader is asked
to refer to the definitions in Figure 8.

The first component, populated by the CE, is an array known as the
"IFETable" table.  The array rows are made up of the IFEInfo
structure.  The IFEInfo structure constitutes the optional IFETYPE,
the optionally present StatId, the Destination MAC address (DSTFE),
the Source MAC address (SRCFE), and an optionally present array of
allowed metaIDs (MetaFilterList).

The second component (ID 2), populated by the FE and read by the CE,
is an indexed array known as the "IFEStats" table.  Each IFEStats row
carries statistics information in the structure bstats.

A note about the StatId relationship between the IFETable table and
the IFEStats table: an implementation may choose to map between an
IFETable row and an IFEStats table row using the StatId entry in the
matching IFETable row.  In that case, the IFETable StatId must be
present.  An alternative implementation may map an IFETable row to an
IFEStats table row at provisioning time.  Yet another alternative
implementation may choose not to use the IFETable row StatId and
instead use the IFETable row index as the IFEStats index.  For these
reasons, the StatId component is optional.  (The mapping alternatives
are illustrated in the sketch below.)
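The relationship between the two tables can be illustrated with a
short, non-normative sketch.  The Python class and function names are
hypothetical, the structures simply mirror the bstats and IFEInfo
definitions of Figure 8, and only two of the three mapping
alternatives are shown (the provisioning-time mapping is omitted).

   from dataclasses import dataclass
   from typing import List, Optional

   @dataclass
   class Bstats:                 # mirrors the bstats struct of Figure 8
       bytes: int = 0
       packets: int = 0
       errors: int = 0

   @dataclass
   class IFEInfo:                # mirrors the IFEInfo struct of Figure 8
       DSTFE: bytes                                # destination FE MAC address
       SRCFE: bytes                                # source FE MAC address
       IFETYPE: Optional[int] = None               # optional outgoing Ethertype
       StatId: Optional[int] = None                # optional index into IFEStats
       MetaFilterList: Optional[List[int]] = None  # optional metaID whitelist

   def ifestats_row(index, ifetable, ifestats):
       # Use the row's StatId when present; otherwise fall back to using
       # the IFETable row index itself as the IFEStats index.
       row = ifetable[index]
       stat_index = row.StatId if row.StatId is not None else index
       return ifestats[stat_index]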
6.3.  Inter-FE LFB XML Model

<LFBLibrary xmlns="urn:ietf:params:xml:ns:forces:lfbmodel:1.1"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  provides="IFE">
  <frameDefs>
    <frameDef>
      <name>PacketAny</name>
      <synopsis>Arbitrary Packet</synopsis>
    </frameDef>
    <frameDef>
      <name>InterFEFrame</name>
      <synopsis>
        Ethernet frame with encapsulated IFE information
      </synopsis>
    </frameDef>
  </frameDefs>

  <dataTypeDefs>
    <dataTypeDef>
      <name>bstats</name>
      <synopsis>Basic stats</synopsis>
      <struct>
        <component componentID="1">
          <name>bytes</name>
          <synopsis>The total number of bytes seen</synopsis>
          <typeRef>uint64</typeRef>
        </component>
        <component componentID="2">
          <name>packets</name>
          <synopsis>The total number of packets seen</synopsis>
          <typeRef>uint32</typeRef>
        </component>
        <component componentID="3">
          <name>errors</name>
          <synopsis>The total number of packets with errors</synopsis>
          <typeRef>uint32</typeRef>
        </component>
      </struct>
    </dataTypeDef>

    <dataTypeDef>
      <name>IFEInfo</name>
      <synopsis>Describing IFE table row Information</synopsis>
      <struct>
        <component componentID="1">
          <name>IFETYPE</name>
          <synopsis>
            The ethertype to be used for outgoing IFE frame
          </synopsis>
          <optional/>
          <typeRef>uint16</typeRef>
        </component>
        <component componentID="2">
          <name>StatId</name>
          <synopsis>
            The Index into the stats table
          </synopsis>
          <optional/>
          <typeRef>uint32</typeRef>
        </component>
        <component componentID="3">
          <name>DSTFE</name>
          <synopsis>
            The destination MAC address of the destination FE
          </synopsis>
          <typeRef>byte[6]</typeRef>
        </component>
        <component componentID="4">
          <name>SRCFE</name>
          <synopsis>
            The source MAC address used for the source FE
          </synopsis>
          <typeRef>byte[6]</typeRef>
        </component>
        <component componentID="5">
          <name>MetaFilterList</name>
          <synopsis>
            The allowed metadata filter table
          </synopsis>
          <optional/>
          <array type="variable-size">
            <typeRef>uint32</typeRef>
          </array>
        </component>
      </struct>
    </dataTypeDef>
  </dataTypeDefs>

  <LFBClassDefs>
    <LFBClassDef LFBClassID="18">
      <name>IFE</name>
      <synopsis>
        This LFB describes IFE connectivity parameterization
      </synopsis>
      <version>1.0</version>
      <inputPorts>
        <inputPort group="true">
          <name>EgressInGroup</name>
          <synopsis>
            The input port group of the egress side.
            It expects any type of Ethernet frame.
          </synopsis>
          <expectation>
            <frameExpected>
              <ref>PacketAny</ref>
            </frameExpected>
          </expectation>
        </inputPort>
        <inputPort group="true">
          <name>IngressInGroup</name>
          <synopsis>
            The input port group of the ingress side.
            It expects an interFE-encapsulated Ethernet frame.
          </synopsis>
          <expectation>
            <frameExpected>
              <ref>InterFEFrame</ref>
            </frameExpected>
          </expectation>
        </inputPort>
      </inputPorts>
      <outputPorts>
        <outputPort>
          <name>OUT1</name>
          <synopsis>
            The output port of the egress side
          </synopsis>
          <product>
            <frameProduced>
              <ref>InterFEFrame</ref>
            </frameProduced>
          </product>
        </outputPort>
        <outputPort>
          <name>OUT2</name>
          <synopsis>
            The output port of the Ingress side
          </synopsis>
          <product>
            <frameProduced>
              <ref>PacketAny</ref>
            </frameProduced>
          </product>
        </outputPort>
        <outputPort>
          <name>EXCEPTIONOUT</name>
          <synopsis>
            The exception handling path
          </synopsis>
          <product>
            <frameProduced>
              <ref>PacketAny</ref>
            </frameProduced>
            <metadataProduced>
              <ref>ExceptionID</ref>
            </metadataProduced>
          </product>
        </outputPort>
      </outputPorts>
      <components>
        <component componentID="1" access="read-write">
          <name>IFETable</name>
          <synopsis>
            The table of all inter-FE relations
          </synopsis>
          <array type="variable-size">
            <typeRef>IFEInfo</typeRef>
          </array>
        </component>
        <component componentID="2" access="read-only">
          <name>IFEStats</name>
          <synopsis>
            The stats corresponding to the IFETable table
          </synopsis>
          <typeRef>bstats</typeRef>
        </component>
      </components>
    </LFBClassDef>
  </LFBClassDefs>
</LFBLibrary>

                     Figure 8: Inter-FE LFB XML
7.  IANA Considerations

IANA has registered the following LFB class name in the "Logical
Functional Block (LFB) Class Names and Class Identifiers" subregistry
of the "Forwarding and Control Element Separation (ForCES)" registry
<https://www.iana.org/assignments/forces>.

+------------+--------+---------+-----------------------+-----------+
| LFB Class  | LFB    | LFB     | Description           | Reference |
| Identifier | Class  | Version |                       |           |
|            | Name   |         |                       |           |
+------------+--------+---------+-----------------------+-----------+
| 18         | IFE    | 1.0     | An IFE LFB to         | This      |
|            |        |         | standardize inter-FE  | document  |
|            |        |         | LFB for ForCES        |           |
|            |        |         | Network Elements      |           |
+------------+--------+---------+-----------------------+-----------+

  Logical Functional Block (LFB) Class Names and Class Identifiers
8.  IEEE Assignment Considerations

This memo includes a request for a new Ethernet protocol type as
described in Section 5.2.
9.  Security Considerations

The FEs involved in the inter-FE LFB belong to the same NE and are
within the scope of a single administrative Ethernet LAN private
network.  While trust of policy in the control and its treatment in
the datapath exists already, an inter-FE LFB implementation SHOULD
support security services provided by Media Access Control Security
(MACsec) [ieee8021ae].  MACsec is not currently sufficiently widely
deployed in traditional packet-processing hardware, although it is
present in newer versions of the Linux kernel (which will be widely
deployed) [linux-macsec].  Over time, we expect that most FEs will be
able to support MACsec.

MACsec provides security services such as a message authentication
service and an optional confidentiality service.  The services can be
configured manually or automatically using the MACsec Key Agreement
(MKA) over the IEEE 802.1X [ieee8021x] Extensible Authentication
Protocol (EAP) framework.  It is expected that FE implementations are
going to start with shared keys configured from the control plane but
progress to automated key management.

The following are the MACsec security mechanisms that need to be in
place for the inter-FE LFB:

o  Security mechanisms are NE-wide for all FEs.  Once the security is
   turned on, depending upon the chosen security level (e.g.,
   Authentication, Confidentiality), it will be in effect for the
   inter-FE LFB for the entire duration of the session.

o  An operator SHOULD configure the same security policies for all
   participating FEs in the NE cluster.  This will ensure uniform
   operations and avoid unnecessary complexity in policy
   configuration.  In other words, the Security Association Keys
   (SAKs) should be pre-shared.  When using MKA, FEs must identify
   themselves with a shared Connectivity Association Key (CAK) and
   Connectivity Association Key Name (CKN).  EAP-TLS SHOULD be used
   as the EAP method.

o  An operator SHOULD configure the strict validation mode, i.e., all
   non-protected, invalid, or non-verifiable frames MUST be dropped.

It should be noted that, given the above choices, if an FE is
compromised, an entity running on the FE would be able to fake
inter-FE traffic or modify its content, causing bad outcomes.
10. References
10.1. Normative References
[ieee8021ae]
IEEE, "IEEE Standard for Local and metropolitan area
networks Media Access Control (MAC) Security", IEEE
802.1AE-2006, DOI 10.1109/IEEESTD.2006.245590,
<http://ieeexplore.ieee.org/document/1678345/>.
[ieee8021x]
IEEE, "IEEE Standard for Local and metropolitan area
networks - Port-Based Network Access Control.", IEEE
802.1X-2010, DOI 10.1109/IEEESTD.2010.5409813,
<http://ieeexplore.ieee.org/document/5409813/>.
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119,
DOI 10.17487/RFC2119, March 1997,
<http://www.rfc-editor.org/info/rfc2119>.
[RFC5810] Doria, A., Ed., Hadi Salim, J., Ed., Haas, R., Ed.,
          Khosravi, H., Ed., Wang, W., Ed., Dong, L., Gopal, R., and
          J. Halpern, "Forwarding and Control Element Separation
          (ForCES) Protocol Specification", RFC 5810,
          DOI 10.17487/RFC5810, March 2010,
          <http://www.rfc-editor.org/info/rfc5810>.

[RFC5811] Hadi Salim, J. and K. Ogawa, "SCTP-Based Transport Mapping
          Layer (TML) for the Forwarding and Control Element
          Separation (ForCES) Protocol", RFC 5811,
          DOI 10.17487/RFC5811, March 2010,
          <http://www.rfc-editor.org/info/rfc5811>.

[RFC5812] Halpern, J. and J. Hadi Salim, "Forwarding and Control
          Element Separation (ForCES) Forwarding Element Model",
          RFC 5812, DOI 10.17487/RFC5812, March 2010,
          <http://www.rfc-editor.org/info/rfc5812>.

[RFC7391] Hadi Salim, J., "Forwarding and Control Element Separation
          (ForCES) Protocol Extensions", RFC 7391,
          DOI 10.17487/RFC7391, October 2014,
          <http://www.rfc-editor.org/info/rfc7391>.

[RFC7408] Haleplidis, E., "Forwarding and Control Element Separation
          (ForCES) Model Extension", RFC 7408, DOI 10.17487/RFC7408,
          November 2014, <http://www.rfc-editor.org/info/rfc7408>.
10.2.  Informative References

[brcm-higig]
          Broadcom, "HiGig", <http://www.broadcom.com/products/
          ethernet-communication-and-switching/switching/bcm56720>.

[circuit-b]
          Fairhurst, G., "Network Transport Circuit Breakers", Work
          in Progress, draft-ietf-tsvwg-circuit-breaker-15, April
          2016.

[linux-macsec]
          Dubroca, S., "MACsec: Encryption for the wired LAN",
          Netdev 11, Feb 2016.

[linux-tc] Hadi Salim, J., "Linux Traffic Control Classifier-Action
          Subsystem Architecture", Netdev 01, Feb 2015.

[RFC3746] Yang, L., Dantu, R., Anderson, T., and R. Gopal,
          "Forwarding and Control Element Separation (ForCES)
          Framework", RFC 3746, DOI 10.17487/RFC3746, April 2004,
          <http://www.rfc-editor.org/info/rfc3746>.

[RFC6956] Wang, W., Haleplidis, E., Ogawa, K., Li, C., and J.
          Halpern, "Forwarding and Control Element Separation
          (ForCES) Logical Function Block (LFB) Library", RFC 6956,
          DOI 10.17487/RFC6956, June 2013,
          <http://www.rfc-editor.org/info/rfc6956>.

[tc-ife]  Hadi Salim, J. and D. Joachimpillai, "Distributing Linux
          Traffic Control Classifier-Action Subsystem", Netdev 01,
          Feb 2015.

[UDP-GUIDE]
          Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage
          Guidelines", Work in Progress, draft-ietf-tsvwg-
          rfc5405bis-19, October 2016.
Acknowledgements
The authors would like to thank Joel Halpern and Dave Hood for the
stimulating discussions. Evangelos Haleplidis shepherded and
contributed to improving this document. Alia Atlas was the AD
sponsor of this document and did a tremendous job of critiquing it.
The authors are grateful to Joel Halpern and Sue Hares in their roles
as the Routing Area reviewers for shaping the content of this
document. David Black put in a lot of effort to make sure the
congestion-control considerations are sane. Russ Housley did the
Gen-ART review, Joe Touch did the TSV area review, and Shucheng LIU
(Will) did the OPS review. Suresh Krishnan helped us provide clarity
during the IESG review. The authors are appreciative of the efforts
Stephen Farrell put in to fixing the security section.
Authors' Addresses

Damascane M. Joachimpillai
Verizon
60 Sylvan Rd
Waltham, MA 02451
United States of America

Email: damascene.joachimpillai@verizon.com

Jamal Hadi Salim
Mojatatu Networks
Suite 200, 15 Fitzgerald Rd.
Ottawa, Ontario K2H 9G1
Canada

Email: hadi@mojatatu.com