Internet Engineering Task Force                        D. Joachimpillai
Internet-Draft                                                  Verizon
Intended status: Standards Track                          J. Hadi Salim
Expires: May 5, 2016                                  Mojatatu Networks
                                                        November 2, 2015


                          ForCES Inter-FE LFB
                    draft-ietf-forces-interfelfb-02

Abstract
This document describes how to extend the ForCES LFB topology across
FEs by defining the Inter-FE LFB Class.  The Inter-FE LFB Class
provides the ability to pass data and metadata across FEs without
needing any changes to the ForCES specification.  The document
focuses on Ethernet transport.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on May 5, 2016.
Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

   1.  Terminology and Conventions
     1.1.  Requirements Language
     1.2.  Definitions
   2.  Introduction
   3.  Problem Scope And Use Cases
     3.1.  Assumptions
     3.2.  Sample Use Cases
       3.2.1.  Basic IPv4 Router
         3.2.1.1.  Distributing The Basic IPv4 Router
       3.2.2.  Arbitrary Network Function
         3.2.2.1.  Distributing The Arbitrary Network Function
   4.  Inter-FE LFB Overview
     4.1.  Inserting The Inter-FE LFB
   5.  Inter-FE Ethernet Connectivity
     5.1.  Inter-FE Ethernet Connectivity Issues
       5.1.1.  MTU Consideration
       5.1.2.  Quality Of Service Considerations
       5.1.3.  Congestion Considerations
       5.1.4.  Deployment Considerations
     5.2.  Inter-FE Ethernet Encapsulation
   6.  Detailed Description of the Ethernet inter-FE LFB
     6.1.  Data Handling
       6.1.1.  Egress Processing
       6.1.2.  Ingress Processing
     6.2.  Components
     6.3.  Inter-FE LFB XML Model
   7.  Acknowledgements
   8.  IANA Considerations
   9.  IEEE Assignment Considerations
   10. Security Considerations
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses
1.  Terminology and Conventions

1.1.  Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].
1.2.  Definitions

This document reiterates the terminology defined in several ForCES
documents [RFC3746], [RFC5810], [RFC5811], [RFC5812], [RFC7391], and
[RFC7408] for the sake of contextual clarity.

   Control Engine (CE)

   Forwarding Engine (FE)

   FE Model

   LFB (Logical Functional Block) Class (or type)

   LFB Instance
   ForCES Protocol Layer (ForCES PL)

   ForCES Protocol Transport Mapping Layer (ForCES TML)

2.  Introduction

In the ForCES architecture, a packet service can be modelled by
composing a graph of one or more LFB instances.  The reader is
referred to the details in the ForCES Model [RFC5812].
The current ForCES model describes the processing within a single
Forwarding Element (FE) in terms of Logical Functional Blocks
(LFBs), including provision for the Control Element (CE) to
establish and modify that processing sequence and the parameters of
the individual LFBs.

Under some circumstances, it would be beneficial to be able to
extend this view, and the resulting processing, across more than one
FE.  This may be in order to achieve scale by splitting the
processing across elements, or to utilize specialized hardware
available on specific FEs.

Given that the ForCES inter-LFB architecture calls out for the
ability to pass metadata between LFBs, it is therefore imperative to
define mechanisms to extend that existing feature and allow passing
the metadata between LFBs across FEs.

This document describes how to extend the LFB topology across FEs,
i.e., inter-FE connectivity, without needing any changes to the
ForCES definitions.  It focuses on using Ethernet as the
interconnection between FEs.
3.  Problem Scope And Use Cases

The scope of this document is to solve the challenge of passing
ForCES defined metadata alongside packet data across FEs (be they
physical or virtual) for the purpose of distributing the LFB
processing.
3.1.  Assumptions

o  The FEs involved in the Inter-FE LFB belong to the same Network
   Element (NE) and are within a single administrative private
   network which is in close proximity.

o  The FEs are already interconnected using Ethernet.  We focus on
   Ethernet because it is a very common setup as an FE interconnect.
   While other higher transports (such as UDP over IP) or lower
   transports could be defined to carry the data and metadata, it is
   simpler to use Ethernet (for the functional scope of a single
   distributed device already interconnected with Ethernet).

3.2.  Sample Use Cases

To illustrate the problem scope we present two use cases where we
start with a single FE running all the LFB functionality and then
split it into multiple FEs achieving the same end goals.

3.2.1.  Basic IPv4 Router

A sample LFB topology depicted in Figure 1 demonstrates a service
graph for delivering a basic IPV4 forwarding service within one FE.
For the purpose of illustration, the diagram shows LFB classes as
graph nodes instead of multiple LFB class instances.

Since the illustration in Figure 1 is meant only as an exercise to
showcase how data and metadata are sent down or upstream on a graph
of LFB instances, it abstracts out any ports in both directions and
talks about a generic ingress and egress LFB.  Again, for
illustration purposes, the diagram does not show exception or error
paths.  Also left out are details on Reverse Path Filtering, ECMP,
multicast handling, etc.  In other words, this is not meant to be a
complete description of an IPV4 forwarding application; for a more
complete example, please refer to the LFB library document
[RFC6956].
The output of the ingress LFB(s) coming into the IPv4 Validator LFB
will have both the IPV4 packets and, depending on the
implementation, a variety of ingress metadata such as offsets into
the different headers, any classification metadata, physical and
virtual ports encountered, tunnelling information, etc.  These
metadata are lumped together as "ingress metadata".

Once the IPV4 validator vets the packet (for example, ensuring that
the TTL has not expired), it feeds the packet and inherited metadata
into the IPV4 unicast LPM LFB.  The LPM LFB looks up the IPV4 FIB
using the destination IP address as a search key.  The result is
typically a next hop selector, which is passed downstream as
metadata.
The Nexthop LFB receives the IPv4 packet with an associated next hop
info metadata.  The NextHop LFB consumes the NH info metadata and
derives from it a table index to look up the next hop table in order
to find the appropriate egress information.  The lookup result is
used to build the next hop details to be used downstream on the
egress.  This information may include any source and destination
information (for our purposes, MAC addresses to use) as well as
egress ports.  [Note: It is also at this LFB where, typically, the
forwarding TTL decrement and IP checksum recalculation occur.]

The details of the egress LFB are considered out of scope for this
discussion.  Suffice it to say that somewhere within or beyond the
Egress LFB the IPV4 packet will be sent out a port (Ethernet,
virtual or physical, etc).
3.2.1.1.  Distributing The Basic IPv4 Router

Figure 2 demonstrates one way the router LFB topology in Figure 1
may be split across two FEs (e.g., two ASICs).  Figure 2 shows the
LFB topology split across FEs after the IPV4 unicast LPM LFB.
 FE1
+-------------------------------------------------------------+
|                               +----+                        |
|       +----------+            |    |                        |
|       | Ingress  | IPV4 pkt   |    | IPV4 pkt  +-----+      |
|       |   LFB    +----------->|    +---------->|     |      |
|       |          | + ingress  |    | + ingress |IPv4 |      |
|       +----------+   metadata |    |  metadata |Ucast|      |
|           ^                   +----+           |LPM  |      |
|           |                    IPv4            +--+--+      |
|           |      Validator                        |         |
|                     LFB                           |         |
+---------------------------------------------------|---------+
                                                    |
                                        IPv4 packet +
                                    {ingress + NHinfo}
                                          metadata
 FE2                                                |
+---------------------------------------------------|---------+
|                                                   V         |
|    +--------+                            +--------+         |
|    | Egress |        IPV4 packet        |  IPV4   |         |
|<---+  LFB   |<---------------------------+NextHop |         |
|    |        | {ingress + NHdetails}     |  LFB    |         |
|    +--------+   metadata                 +--------+         |
+-------------------------------------------------------------+

        Figure 2: Split IPV4 packet service LFB topology
Some proprietary inter-connects (for example, Broadcom HiGig over
XAUI [brcm-higig]) are known to exist to carry both the IPV4 packet
and the related metadata between the IPV4 Unicast LFB and the IPV4
NextHop LFB across the two FEs.

This document defines the inter-FE LFB, a standard mechanism for
encapsulating, generating, receiving, and decapsulating packets and
associated metadata between FEs over Ethernet.
3.2.2.  Arbitrary Network Function

In this section we show an example of an arbitrary Network Function
which is more coarse grained in terms of functionality.  Each
Network Function may constitute more than one LFB.
 FE1
+-------------------------------------------------------------+
|                               +----+                        |
|       +----------+            |    |                        |
|       | Network  |  pkt       |NF2 |  pkt      +-----+      |
|       | Function +----------->|    +---------->|     |      |
|       |    1     |  + NF1     |    | + NF1/2   |NF3  |      |
|       +----------+   metadata |    |  metadata |     |      |
|           ^                   +----+           |     |      |
|           |                                    +--+--+      |
|           |                                       |         |
|                                                   |         |
+---------------------------------------------------|---------+
                                                    V

     Figure 3: A Network Function Service Chain within one FE
The setup in Figure 3 is typical of most packet processing boxes,
where we have functions like DPI, NAT, Routing, etc., connected in
such a topology to deliver a packet processing service to flows.

3.2.2.1.  Distributing The Arbitrary Network Function

The setup in Figure 3 can be split across 3 FEs instead, as
demonstrated in Figure 4.  This could be motivated by scale out
reasons or because different vendors provide different functionality
which is plugged in to compose the service.  The end result is to
have the same packet service delivered to the different flows
passing through.
FE1                     FE2
+----------+            +----+             FE3
| Network  |  pkt       |NF2 |  pkt       +-----+
| Function +----------->|    +----------->|     |
|    1     |  + NF1     |    |  + NF1/2   |NF3  |
+----------+   metadata |    |   metadata |     |
     ^                  +----+            |     |
     |                                    +--+--+
                                             |
                                             V

   Figure 4: A Network Function Service Chain Distributed Across
                           Multiple FEs
4.  Inter-FE LFB Overview

We address the inter-FE connectivity requirements by defining the
inter-FE LFB class.  Using a standard LFB class definition implies
no change to the basic ForCES architecture in the form of the core
LFBs (FE Protocol or Object LFBs).  This design choice was made
after considering an alternative approach that would have required
changes to both the FE Object capabilities (SupportedLFBs) as well
as the LFBTopology component to describe the inter-FE connectivity
capabilities as well as the runtime topology of the LFB instances.

4.1.  Inserting The Inter-FE LFB

The distributed LFB topology described in Figure 2 is re-illustrated
in Figure 5 to show the topology location where the inter-FE LFB
would fit in.
As can be observed in Figure 5, the same details passed between the
IPV4 unicast LPM LFB and the IPV4 NH LFB are passed to the egress
side of the Inter-FE LFB.  This information is illustrated as a
multiplicity of inputs into the egress InterFE LFB instance.  Each
input represents a unique set of selection information.
 FE1
+-------------------------------------------------------------+
|       +----------+            +----+                        |
|       | Ingress  | IPV4 pkt   |    | IPV4 pkt  +-----+      |
|       |   LFB    +----------->|    +---------->|     |      |
|       |          | + ingress  |    | + ingress |IPv4 |      |
|       +----------+   metadata |    |  metadata |Ucast|      |
|           ^                   +----+           |LPM  |      |
|           |                    IPv4            +--+--+      |
|           |      Validator                        |         |
|           |         LFB                           |         |
|           |                    IPv4 pkt + metadata          |
|           |                     {ingress + NHinfo}          |
|           |                                       |         |
|           |                                  +..--+..+      |
|           |                                  |..| |  |      |
|                                            +-V--V-V--V-+    |
|                                            |  Egress   |    |
|                                            |  InterFE  |    |
|                                            |    LFB    |    |
|                                            +------+----+    |
+---------------------------------------------------|---------+
                                                    |
                          Ethernet Frame with:      |
                       IPv4 packet data and metadata
                   {ingress + NHinfo + Inter FE info}
 FE2                                                |
+---------------------------------------------------|---------+
|                                              +..+.+..+      |
|                                              |..|.|..|      |
|                                            +-V--V-V--V-+    |
|                                            |  Ingress  |    |
|                                            |  InterFE  |    |
|                                            |    LFB    |    |
|                                            +----+------+    |
|                                                 |           |
|                               IPv4 pkt + metadata           |
|                                {ingress + NHinfo}           |
|                                                 |           |
|    +--------+                              +----V---+       |
|    | Egress |        IPV4 packet           |  IPV4  |       |
|<---+  LFB   |<-----------------------------+NextHop |       |
|    |        | {ingress + NHdetails}        |  LFB   |       |
|    +--------+   metadata                   +--------+       |
+-------------------------------------------------------------+

    Figure 5: Split IPV4 forwarding service with Inter-FE LFB
The egress side of the inter-FE LFB uses the received packet and
metadata to select details for encapsulation when sending messages
towards the selected neighboring FE.  These details include what to
communicate as the source and destination FEs (abstracted as MAC
addresses as described in Section 5.2); in addition, the original
metadata may be passed along with the original IPV4 packet.

On the ingress side of the inter-FE LFB, the received packet and its
associated metadata are used to decide the packet graph
continuation.  This includes which of the original metadata to
restore and which next LFB class instance should continue the
processing.  In the illustrated Figure 5, an IPV4 Nexthop LFB
instance is selected and the appropriate metadata is passed on to
it.

The ingress side of the inter-FE LFB consumes some of the
information passed and passes on the IPV4 packet along with the
ingress and NHinfo metadata to the IPV4 NextHop LFB, as was done
earlier in both Figure 1 and Figure 2.
5.  Inter-FE Ethernet Connectivity

Section 5.1 describes some of the issues related to using Ethernet
as the transport and how we mitigate them.

Section 5.2 defines a payload format that is to be used over
Ethernet.  An existing implementation of this specification on top
of Linux Traffic Control [linux-tc] is described in [tc-ife].
5.1.  Inter-FE Ethernet Connectivity Issues

There are several issues that may occur due to using direct Ethernet
encapsulation that need consideration.

5.1.1.  MTU Consideration

Because we are adding data to existing Ethernet frames, MTU issues
may arise.  We recommend:
o  To use large MTUs when possible (for example, with jumbo frames).

o  To limit the amount of metadata that could be transmitted; our
   definition allows for filtering of select metadata to be
   encapsulated in the frame as described in Section 6.  We
   recommend sizing the egress port MTU so as to allow space for the
   maximum total metadata size to be allowed between FEs.  In such a
   setup, the port is configured to "lie" to the upper layers by
   claiming to have a lower MTU than it is capable of.  The MTU
   setting can be achieved by ForCES control of the port LFB (or
   other configuration).  In essence, the control plane, when
   explicitly making a decision for the MTU settings of the egress
   port, is implicitly deciding how much metadata will be allowed,
   as sketched below.
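
The following non-normative C sketch illustrates the MTU budget
arithmetic described above.  The constant values and names
(PHYS_MTU, MAX_META_SPACE, etc.) are illustrative assumptions chosen
for this example, not part of the LFB definition; a deployment picks
its own budget.

   #include <stdbool.h>
   #include <stdint.h>

   /* Illustrative values only; a deployment chooses its own budget. */
   #define PHYS_MTU        9000u  /* jumbo-frame capable wire MTU     */
   #define MAX_META_SPACE   128u  /* worst-case 32-bit aligned TLV
                                   * bytes allowed between these FEs  */
   #define META_LEN_FIELD     2u  /* the 16-bit "Metadata length"     */

   /* MTU the egress port should advertise ("lie") to upper layers so
    * that the IFE metadata always fits on the wire. */
   static uint32_t ife_advertised_mtu(void)
   {
       return PHYS_MTU - (META_LEN_FIELD + MAX_META_SPACE);
   }

   /* Egress-side check: does the packet plus its encoded metadata
    * still fit within the physical MTU? */
   static bool ife_frame_fits(uint32_t pkt_len, uint32_t meta_tlv_len)
   {
       return pkt_len + META_LEN_FIELD + meta_tlv_len <= PHYS_MTU;
   }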
5.1.2.  Quality Of Service Considerations

A raw packet arriving at the Inter-FE LFB (from upstream LFB class
instances) may have a COS metadatum indicating how it should be
treated from a Quality of Service perspective.

The resulting Ethernet frame will eventually be treated
(preferentially) by a downstream LFB (typically a port LFB
instance), and its COS marks will be honored in terms of priority.
In other words, the presence of the Inter-FE LFB does not change the
COS semantics.
5.1.3.  Congestion Considerations

The addition of the Inter-FE encapsulation adds overhead to the
packets and therefore bandwidth consumption on the wire.  In cases
where Inter-FE encapsulated traffic shares wire resources with other
traffic, the new dynamics could potentially lead to congestion.  In
such a case, given that the Inter-FE LFB is deployed within a single
administrative domain, the operator may need to enforce usage
restrictions.  These restrictions may take the form of appropriate
provisioning, for example by rate limiting all Inter-FE LFB traffic
at an upstream LFB, by prioritizing non Inter-FE LFB traffic, or by
other techniques such as managed circuit breaking [circuit-b].

It is noted that a lot of the traffic passing through an FE that
utilizes the Inter-FE LFB is expected to be IP based, which is
generally assumed to be congestion controlled and therefore does not
need additional congestion control mechanisms [RFC5405].
5.1.4.  Deployment Considerations

While we expect to use a unique IEEE-issued ethertype for the inter-
FE traffic, we use lessons learned from VXLAN deployment to be more
flexible on the setting of the ethertype value used.  We make the
ethertype an LFB read-write component.  The Linux VXLAN
implementation uses UDP port 8472 because the deployment happened
much earlier than the point of RFC publication, where the IANA-
assigned UDP port issued was 4789 [vxlan-udp].  For this reason we
make it possible to define at control time what ethertype to use,
defaulting to the IEEE-issued ethertype.  We justify this by
assuming that a given ForCES NE is likely to be owned by a single
organization and that the organization's CE (or CE cluster) could
program all participating FEs via the inter-FE LFB (described in
this document) to recognize a private Ethernet type used for inter-
LFB traffic (possibly those defined as available for private use by
the IEEE, namely IDs 0x88B5 and 0x88B6).
5.2.  Inter-FE Ethernet Encapsulation

The Ethernet wire encapsulation is illustrated in Figure 6, and the
process that leads to this encapsulation is described in Section 6.
The resulting frame is 32 bit aligned.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Destination MAC Address                    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|   Destination MAC Address    |      Source MAC Address        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Source MAC Address                       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|      Inter-FE ethertype      |        Metadata length         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     TLV encoded Metadata ~~~..............~~                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     TLV encoded Metadata ~~~..............~~                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Original packet data ~~................~~                 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

               Figure 6: Packet format suggestion
The Ethernet header illustrated in Figure 6 has the following
semantics:

o  The Destination MAC Address is used to identify the Destination
   FEID by the CE policy (as described in Section 6).

o  The Source MAC Address is used to identify the Source FEID by the
   CE policy (as described in Section 6).

o  The Ethernet type is used to identify the frame as an inter-FE
   LFB type.  Ethertype 0xFEFE is to be used (XXX: Note to editor,
   likely we won't get that value - update when available).

o  The 16-bit metadata length is used to describe the total encoded
   metadata length (including the 16 bits used to encode the
   metadata length).

o  One or more 16-bit TLV encoded Metadatum follows the metadata
   length field.  The TLV type identifies the Metadata id.  ForCES
   IANA-defined Metadata ids will be used.  All TLVs will be 32 bit
   aligned.  We recognize that using a 16 bit TLV restricts the
   metadata id to 16 bits instead of the ForCES-defined component ID
   space of 32 bits.  However, at the time of publication we believe
   this is sufficient to carry all the info we need, and the
   approach taken would save us 4 bytes per Metadatum transferred.

o  The original packet data payload is appended at the end of the
   metadata as shown (a layout sketch in C follows this list).
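
As an informal illustration of the layout above, the following
non-normative C sketch shows one possible in-memory view of the IFE
header and a metadata TLV.  The struct and field names are ours, not
part of this specification; we also assume here that the TLV length
field covers the value bytes.  A real implementation must still
honor network byte order and the 32-bit TLV alignment.

   #include <stdint.h>

   #define ETH_ALEN   6
   #define ETH_P_IFE  0xFEFE  /* placeholder; pending IEEE assignment */

   /* Wire layout of Figure 6; all multi-byte fields are big-endian
    * on the wire.  A real implementation should ensure the struct
    * is not padded (e.g., with a packed attribute). */
   struct ife_hdr {
       uint8_t  dst_mac[ETH_ALEN];  /* identifies destination FEID  */
       uint8_t  src_mac[ETH_ALEN];  /* identifies source FEID       */
       uint16_t ethertype;          /* inter-FE ethertype           */
       uint16_t metalen;            /* total metadata length, incl.
                                     * this 2-byte field itself     */
       /* ... 32-bit aligned metadata TLVs, then original packet data */
   };

   /* A single 16-bit type / 16-bit length metadata TLV.  The value
    * is zero-padded so the next TLV starts on a 32-bit boundary. */
   struct ife_tlv {
       uint16_t type;     /* ForCES IANA-defined metadata id */
       uint16_t len;      /* length of the value field       */
       uint8_t  value[];  /* metadata value plus padding     */
   };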
6.  Detailed Description of the Ethernet inter-FE LFB

The Ethernet inter-FE LFB has two LFB input port groups and three
LFB output ports as shown in Figure 7.

The inter-FE LFB defines two components, described in Section 6.2,
that are used to aid the processing.

                  +-----------------+
   Inter-FE LFB   |                 |
   Encapsulated   |             OUT2+--> decapsulated Packet
   -------------->|IngressInGroup   |    + metadata
   Ethernet Frame |                 |
                  |                 |
   raw Packet +   |             OUT1+--> Encapsulated Ethernet
   -------------->|EgressInGroup    |    Frame
   Metadata       |                 |
                  |    EXCEPTIONOUT +--> ExceptionID, packet
                  |                 |    + metadata
                  +-----------------+

                     Figure 7: Inter-FE LFB

6.1.  Data Handling

The Inter-FE LFB (instance) can be positioned at the egress of a
source FE.  Figure 5 illustrates an example source FE in the form
of FE1.  In such a case an Inter-FE LFB instance receives, via port
group EgressInGroup, a raw packet and associated metadata from the
preceding LFB instances.  The input information is used to produce
a selection of how to generate and encapsulate the new frame.  The
set of all selections is stored in the LFB component IFETable,
described further below.  The processed encapsulated Ethernet frame
will go out on OUT1 to a downstream LFB instance when processing
succeeds, or to the EXCEPTIONOUT port in the case of a failure.

The Inter-FE LFB (instance) can be positioned at the ingress of a
receiving FE.  Figure 5 illustrates an example destination FE in
the form of FE2.  In such a case an Inter-FE LFB receives, via an
LFB port in the IngressInGroup, an encapsulated Ethernet frame.
Successful processing of the packet will result in a raw packet
with associated metadata IDs going downstream to an LFB connected
on OUT2.  On failure the data is sent out EXCEPTIONOUT.
6.1.1.  Egress Processing

The egress Inter-FE LFB receives packet data and any accompanying
Metadatum at an LFB port of the LFB instance's input port group
labelled EgressInGroup.

The LFB implementation may use the incoming LFB port (within the
LFB port group EgressInGroup) to map to a table index used to look
up the IFETable table.

If the lookup is successful, a matched table row which has the
InterFEinfo details is retrieved with the tuple {optional IFEtype,
optional StatId, Destination MAC address (DSTFE), Source MAC
address (SRCFE), optional metafilters}.  The metafilters list
defines a whitelist of which Metadatum are to be passed to the
neighboring FE.  The inter-FE LFB will perform the following actions
using the resulting tuple:
o  Increment statistics for packet and byte count observed at the
   corresponding IFEStats entry.

o  When the MetaFilterList is present, walk each received Metadatum
   and apply it against the MetaFilterList.  If no legitimate
   metadata is found that needs to be passed downstream, then the
   processing stops and the packet and metadata are sent out the
   EXCEPTIONOUT port with an exceptionID of EncapTableLookupFailed
   [RFC6956].

o  Check that the additional overhead of the Ethernet header and
   encapsulated metadata will not exceed the MTU.  If it does,
   increment the error packet count statistics and send the packet
   and metadata out the EXCEPTIONOUT port with an exceptionID of
   FragRequired [RFC6956].

o  Create the Ethernet header.

o  Set the Destination MAC address of the Ethernet header with the
   value found in the DSTFE field.

o  Set the Source MAC address of the Ethernet header with the value
   found in the SRCFE field.

o  If the optional IFETYPE is present, set the Ethernet type to the
   value found in IFETYPE.  If IFETYPE is absent then the standard
   Inter-FE LFB Ethernet type is used (XXX: Note to editor, to be
   updated).

o  Encapsulate each allowed Metadatum in a TLV.  Use the Metaid as
   the "type" field in the TLV header.  The TLV should be aligned to
   32 bits.  This means you may need to add padding of zeroes to
   ensure alignment.

o  Update the Metadata length to the sum of each TLV's space plus 2
   bytes (for the 16-bit Metadata length field).

The resulting packet is sent to the next LFB instance connected to
the OUT1 LFB-port, typically a port LFB.
In the case of a failed lookup, the original packet and associated
metadata are sent out the EXCEPTIONOUT port with an exceptionID of
EncapTableLookupFailed [RFC6956].  Note that the EXCEPTIONOUT LFB
port is merely an abstraction and an implementation may in fact drop
packets as described above.
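
A compressed, non-normative sketch of the egress walk-through above
follows, reusing the hypothetical ife_hdr definitions from the
sketch in Section 5.2.  The ife_info structure and function names
are ours; the metadata filter walk, MTU check, and EXCEPTIONOUT
handling are assumed to have been done by the caller.

   #include <arpa/inet.h>  /* htons() */
   #include <string.h>

   /* Hypothetical view of a matched IFETable row (Section 6.2). */
   struct ife_info {
       uint16_t ifetype;          /* 0 => use the default ETH_P_IFE */
       uint8_t  dstfe[ETH_ALEN];  /* destination FE MAC (DSTFE)     */
       uint8_t  srcfe[ETH_ALEN];  /* source FE MAC (SRCFE)          */
       /* StatId and MetaFilterList omitted for brevity */
   };

   /* Write the IFE Ethernet header for one egress packet into 'out'.
    * The caller appends the 32-bit aligned metadata TLVs and then
    * the original packet data. */
   static size_t ife_egress_build(uint8_t *out,
                                  const struct ife_info *row,
                                  uint16_t meta_tlv_bytes)
   {
       struct ife_hdr h;

       memcpy(h.dst_mac, row->dstfe, ETH_ALEN);
       memcpy(h.src_mac, row->srcfe, ETH_ALEN);
       h.ethertype = htons(row->ifetype ? row->ifetype : ETH_P_IFE);
       /* Metadata length = sum of TLV space plus its own 2 bytes. */
       h.metalen = htons(meta_tlv_bytes + 2);

       memcpy(out, &h, sizeof(h));
       return sizeof(h);
   }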
6.1.2.  Ingress Processing

An ingressing inter-FE LFB packet is recognized by inspecting the
ethertype, and optionally the destination and source MAC addresses.
A matching packet is mapped to an LFB instance port in the
IngressInGroup.  The IFETable table row entry matching the LFB
instance port may have optionally programmed metadata filters.  In
such a case the ingress processing should use the metadata filters
as a whitelist of which Metadatum are to be allowed.
o  Increment statistics for packet and byte count observed.

o  Look at the metadata length field and walk the packet data,
   extracting the metadata values from the TLVs.  For each Metadatum
   extracted, in the presence of metadata filters, the metaid is
   compared against the relevant IFETable row metafilter list.  If
   the Metadatum is recognized and is allowed by the filter, the
   corresponding implementation Metadatum field is set.  If an
   unknown Metadatum id is encountered, or if the metaid is not in
   the allowed filter list, the implementation is expected to ignore
   it, increment the packet error statistic, and proceed processing
   the other Metadatum.

o  Upon completion of processing all the metadata, the inter-FE LFB
   instance resets the data pointer to the original payload, i.e.,
   skips the IFE header information.  At this point the original
   packet that was passed to the egress Inter-FE LFB at the source
   FE is reconstructed.  This data is then passed along with the
   reconstructed metadata downstream to the next LFB instance in the
   graph.
In the case of a processing failure at either the ingress or egress
positioning of the LFB, the packet and metadata are sent out the
EXCEPTIONOUT LFB port with the appropriate error ID.  Note that the
EXCEPTIONOUT LFB port is merely an abstraction; an implementation
may in fact drop packets as described above.
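
A companion C sketch of the ingress TLV walk follows, assuming the
same hypothetical TLV layout as the egress sketch above.  The helper
names (metaid_allowed, ife_decap_meta) and the set_metadatum() hook
are illustrative only; the whitelist and error-counting behavior
models the ingress rules described in this section.

   #include <stddef.h>
   #include <stdint.h>

   /* Whitelist check against an IFETable row's MetaFilterList.
    * An absent (empty) filter list allows all metadata. */
   static int metaid_allowed(uint16_t metaid, const uint32_t *filter,
                             unsigned int nfilter)
   {
       unsigned int i;

       if (nfilter == 0)
           return 1;
       for (i = 0; i < nfilter; i++)
           if (filter[i] == metaid)
               return 1;
       return 0;
   }

   /*
    * Walk the IFE metadata TLVs; 'buf' points at the 16-bit Metadata
    * length field.  Disallowed or malformed metadata are skipped and
    * counted as errors.  Returns the offset of the original payload,
    * or 0 on a malformed header.
    */
   static size_t ife_decap_meta(const uint8_t *buf, size_t buflen,
                                const uint32_t *filter,
                                unsigned int nfilter, uint32_t *errors)
   {
       uint16_t mdlen;
       size_t off = 2;

       if (buflen < 2)
           return 0;
       mdlen = (buf[0] << 8) | buf[1];
       if (mdlen < 2 || mdlen > buflen)
           return 0;

       while (off + 4 <= mdlen) {
           uint16_t metaid = (buf[off] << 8) | buf[off + 1];
           uint16_t space = (buf[off + 2] << 8) | buf[off + 3];

           if (space < 4 || off + space > mdlen) {
               (*errors)++;        /* malformed TLV: stop walking */
               break;
           }
           if (metaid_allowed(metaid, filter, nfilter)) {
               /* set_metadatum(metaid, buf + off + 4, space - 4);
                * hypothetical hook into the implementation */
           } else {
               (*errors)++;        /* ignore and keep processing */
           }
           off += space;
       }
       return mdlen;               /* payload starts after metadata */
   }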
6.2.  Components
There are two LFB components accessed by the CE.  The reader is
asked to refer to the definitions in Figure 8.
The first component, populated by the CE, is an array known as the
IFETable table.  The array rows are made up of the IFEInfo
structure.  The IFEInfo structure consists of: an optional IFETYPE,
an optionally present StatId, a destination MAC address (DSTFE), a
source MAC address (SRCFE), and an optionally present array of
allowed Metaids (MetaFilterList).
The second component (ID 2), populated by the FE and read by the CE,
is an indexed array known as the IFEStats table.  Each IFEStats row
carries statistics information in the structure bstats.
A note about the StatId relationship between the IFETable table and
the IFEStats table: an implementation may choose to map between an
IFETable row and an IFEStats table row using the StatId entry in the
matching IFETable row, in which case the IFETable StatId must be
present.  An alternative implementation may map an IFETable row to
an IFEStats table row at provisioning time.  Yet another
implementation may choose not to use the IFETable row StatId at all
and instead use the IFETable row index as the IFEStats index.  For
these reasons, the StatId component is optional.
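
The sketch below mirrors the two components as C data structures and
shows one way to resolve the stats row, per the mapping options just
described.  The struct layout and the has_* flags are illustrative
assumptions; only the component and field names come from the XML
model in Figure 8.

   #include <stdint.h>

   /* Mirrors the bstats dataTypeDef in Figure 8. */
   struct bstats {
       uint64_t bytes;        /* total bytes seen */
       uint32_t packets;      /* total packets seen */
       uint32_t errors;       /* total packets with errors */
   };

   /* Mirrors the IFEInfo dataTypeDef in Figure 8; the has_* flags
    * model the optional components. */
   struct ife_info {
       uint16_t ifetype;          /* optional outgoing ethertype */
       int      has_ifetype;
       uint32_t statid;           /* optional index into IFEStats */
       int      has_statid;
       uint8_t  dstfe[6];         /* destination FE MAC address */
       uint8_t  srcfe[6];         /* source FE MAC address */
       uint32_t *metafilterlist;  /* optional allowed metaids */
       unsigned int nmetafilter;
   };

   /* Resolve the IFEStats row for an IFETable row: use StatId when
    * present, otherwise fall back to the row index (one of the
    * mapping choices described above). */
   static uint32_t ife_stats_index(const struct ife_info *row,
                                   uint32_t row_index)
   {
       return row->has_statid ? row->statid : row_index;
   }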
6.3.  Inter-FE LFB XML Model
<LFBLibrary xmlns="urn:ietf:params:xml:ns:forces:lfbmodel:1.1"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            provides="IFE">
  <frameDefs>
    <frameDef>
      <name>PacketAny</name>
      <synopsis>Arbitrary Packet</synopsis>
    </frameDef>
    <frameDef>
      <name>InterFEFrame</name>
      <synopsis>
        Ethernet frame with encapsulated IFE information
      </synopsis>
    </frameDef>
  </frameDefs>

  <dataTypeDefs>
    <dataTypeDef>
      <name>bstats</name>
      <synopsis>Basic stats</synopsis>
      <struct>
        <component componentID="1">
          <name>bytes</name>
          <synopsis>The total number of bytes seen</synopsis>
          <typeRef>uint64</typeRef>
        </component>
        <component componentID="2">
          <name>packets</name>
          <synopsis>The total number of packets seen</synopsis>
          <typeRef>uint32</typeRef>
        </component>
        <component componentID="3">
          <name>errors</name>
          <synopsis>The total number of packets with errors</synopsis>
          <typeRef>uint32</typeRef>
        </component>
      </struct>
    </dataTypeDef>
    <dataTypeDef>
      <name>IFEInfo</name>
      <synopsis>Describing IFE table row information</synopsis>
      <struct>
        <component componentID="1">
          <name>IFETYPE</name>
          <synopsis>
            The ethernet type to be used for the outgoing IFE frame
          </synopsis>
          <optional/>
          <typeRef>uint16</typeRef>
        </component>
        <component componentID="2">
          <name>StatId</name>
          <synopsis>
            The index into the stats table
          </synopsis>
          <optional/>
          <typeRef>uint32</typeRef>
        </component>
        <component componentID="3">
          <name>DSTFE</name>
          <synopsis>
            The destination MAC address of the destination FE
          </synopsis>
          <typeRef>byte[6]</typeRef>
        </component>
        <component componentID="4">
          <name>SRCFE</name>
          <synopsis>
            The source MAC address used for the source FE
          </synopsis>
          <typeRef>byte[6]</typeRef>
        </component>
        <component componentID="5">
          <name>MetaFilterList</name>
          <synopsis>
            The allowed metadata filter table
          </synopsis>
          <optional/>
          <array type="variable-size">
            <typeRef>uint32</typeRef>
          </array>
        </component>
      </struct>
    </dataTypeDef>
  </dataTypeDefs>

  <LFBClassDefs>
    <LFBClassDef LFBClassID="18">
      <name>IFE</name>
      <synopsis>
        This LFB describes IFE connectivity parameterization
      </synopsis>
      <version>1.0</version>
      <inputPorts>
        <inputPort group="true">
          <name>EgressInGroup</name>
          <synopsis>
            The input port group of the egress side.
            It expects any type of Ethernet frame.
          </synopsis>
          <expectation>
            <frameExpected>
              <ref>PacketAny</ref>
            </frameExpected>
          </expectation>
        </inputPort>
        <inputPort group="true">
          <name>IngressInGroup</name>
          <synopsis>
            The input port group of the ingress side.
            It expects an inter-FE encapsulated Ethernet frame.
          </synopsis>
          <expectation>
            <frameExpected>
              <ref>InterFEFrame</ref>
            </frameExpected>
          </expectation>
        </inputPort>
      </inputPorts>
      <outputPorts>
        <outputPort>
          <name>OUT1</name>
          <synopsis>
            The output port of the egress side.
          </synopsis>
          <product>
            <frameProduced>
              <ref>InterFEFrame</ref>
            </frameProduced>
          </product>
        </outputPort>
        <outputPort>
          <name>OUT2</name>
          <synopsis>
            The output port of the ingress side.
          </synopsis>
          <product>
            <frameProduced>
              <ref>PacketAny</ref>
            </frameProduced>
          </product>
        </outputPort>
        <outputPort>
          <name>EXCEPTIONOUT</name>
          <synopsis>
            The exception handling path
          </synopsis>
          <product>
            <frameProduced>
              <ref>PacketAny</ref>
            </frameProduced>
            <metadataProduced>
              <ref>ExceptionID</ref>
            </metadataProduced>
          </product>
        </outputPort>
      </outputPorts>
      <components>
        <component componentID="1" access="read-write">
          <name>IFETable</name>
          <synopsis>
            The table of all inter-FE relations
          </synopsis>
          <array type="variable-size">
            <typeRef>IFEInfo</typeRef>
          </array>
        </component>
        <component componentID="2" access="read-only">
          <name>IFEStats</name>
          <synopsis>
            The stats corresponding to the IFETable table
          </synopsis>
          <array type="variable-size">
            <typeRef>bstats</typeRef>
          </array>
        </component>
      </components>
    </LFBClassDef>
  </LFBClassDefs>
</LFBLibrary>

                     Figure 8: Inter-FE LFB XML
7.  Acknowledgements
The authors would like to thank Joel Halpern and Dave Hood for the
stimulating discussions.  Evangelos Haleplidis shepherded and
contributed to improving this document.  Alia Atlas was the AD
sponsor of this document and did a tremendous job of critiquing it.
The authors are grateful to Joel Halpern, in his role as the Routing
Area reviewer, for shaping the content of this document.
8.  IANA Considerations
This memo includes one IANA request within the registry
https://www.iana.org/assignments/forces

The request, against the sub-registry "Logical Functional Block
(LFB) Class Names and Class Identifiers", is for the reservation of
the LFB class name IFE, with LFB class ID 18 and version 1.0.

+--------------+---------+---------+-------------------+------------+
| LFB Class    | LFB     | LFB     | Description       | Reference  |
| Identifier   | Class   | Version |                   |            |
|              | Name    |         |                   |            |
+--------------+---------+---------+-------------------+------------+
| 18           | IFE     | 1.0     | An IFE LFB to     | This       |
|              |         |         | standardize       | document   |
|              |         |         | inter-FE LFB for  |            |
|              |         |         | ForCES Network    |            |
|              |         |         | Elements          |            |
+--------------+---------+---------+-------------------+------------+

   Logical Functional Block (LFB) Class Names and Class Identifiers
9.  IEEE Assignment Considerations

This memo includes a request for a new Ethernet protocol type as
described in Section 5.2.
10.  Security Considerations

The FEs involved in the Inter-FE LFB belong to the same Network
Element (NE) and are within the scope of a single administrative
Ethernet LAN private network.  Trust in the control policy and its
treatment in the datapath therefore already exists.

This document does not alter the ForCES model [RFC5812] or the
ForCES protocol [RFC5810].  As such, it has no impact on their
security considerations.  This document simply defines the
operational parameters and capabilities of an LFB that performs LFB
class instance extensions across nodes under a single administrative
control.  This document does not attempt to analyze the presence or
possibility of security interactions created by allowing LFB graph
extension on packets.  Any such issues, if they exist, should be
resolved by the designers of the particular data path, i.e., they
are not the responsibility of the general mechanism outlined in this
document.  One option for protecting the Ethernet transport is IEEE
802.1AE Media Access Control Security [ieee8021ae], which provides
encryption and authentication.
11.  References

11.1.  Normative References
[RFC5810]  Doria, A., Ed., Hadi Salim, J., Ed., Haas, R., Ed.,
           Khosravi, H., Ed., Wang, W., Ed., Dong, L., Gopal, R., and
           J. Halpern, "Forwarding and Control Element Separation
           (ForCES) Protocol Specification", RFC 5810,
           DOI 10.17487/RFC5810, March 2010,
           <http://www.rfc-editor.org/info/rfc5810>.

[RFC5811]  Hadi Salim, J. and K. Ogawa, "SCTP-Based Transport Mapping
           Layer (TML) for the Forwarding and Control Element
           Separation (ForCES) Protocol", RFC 5811,
           DOI 10.17487/RFC5811, March 2010,
           <http://www.rfc-editor.org/info/rfc5811>.

[RFC5812]  Halpern, J. and J. Hadi Salim, "Forwarding and Control
           Element Separation (ForCES) Forwarding Element Model",
           RFC 5812, DOI 10.17487/RFC5812, March 2010,
           <http://www.rfc-editor.org/info/rfc5812>.

[RFC7391]  Hadi Salim, J., "Forwarding and Control Element Separation
           (ForCES) Protocol Extensions", RFC 7391,
           DOI 10.17487/RFC7391, October 2014,
           <http://www.rfc-editor.org/info/rfc7391>.

[RFC7408]  Haleplidis, E., "Forwarding and Control Element Separation
           (ForCES) Model Extension", RFC 7408, DOI 10.17487/RFC7408,
           November 2014, <http://www.rfc-editor.org/info/rfc7408>.
11.2.  Informative References
[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <http://www.rfc-editor.org/info/rfc2119>.

[RFC3746]  Yang, L., Dantu, R., Anderson, T., and R. Gopal,
           "Forwarding and Control Element Separation (ForCES)
           Framework", RFC 3746, DOI 10.17487/RFC3746, April 2004,
           <http://www.rfc-editor.org/info/rfc3746>.

[RFC5405]  Eggert, L. and G. Fairhurst, "Unicast UDP Usage Guidelines
           for Application Designers", BCP 145, RFC 5405,
           DOI 10.17487/RFC5405, November 2008,
           <http://www.rfc-editor.org/info/rfc5405>.

[RFC6956]  Wang, W., Haleplidis, E., Ogawa, K., Li, C., and J.
           Halpern, "Forwarding and Control Element Separation
           (ForCES) Logical Function Block (LFB) Library", RFC 6956,
           DOI 10.17487/RFC6956, June 2013,
           <http://www.rfc-editor.org/info/rfc6956>.

[brcm-higig]
           Broadcom, "HiGig",
           <http://www.broadcom.com/products/brands/HiGig>.

[circuit-b]
           Fairhurst, G., "Network Transport Circuit Breakers", Work
           in Progress, draft-fairhurst-tsvwg-circuit-breaker-04,
           September 2015, <https://tools.ietf.org/html/
           draft-fairhurst-tsvwg-circuit-breaker-04>.

[ieee8021ae]
           IEEE, "IEEE Standard for Local and metropolitan area
           networks: Media Access Control (MAC) Security",
           IEEE 802.1AE-2006, August 2006.

[linux-tc] Hadi Salim, J., "Linux Traffic Control Classifier-Action
           Subsystem Architecture", Proceedings of netdev 0.1,
           February 2015.

[tc-ife]   Hadi Salim, J. and D. Joachimpillai, "Distributing Linux
           Traffic Control Classifier-Action Subsystem", Proceedings
           of netdev 0.1, February 2015.

[vxlan-udp]
           "iproute2 and kernel code (drivers/net/vxlan.c)",
           <https://www.kernel.org/pub/linux/utils/net/iproute2/>.
Authors' Addresses

Damascane M. Joachimpillai
Verizon
60 Sylvan Rd
Waltham, Mass.  02451
USA

Email: damascene.joachimpillai@verizon.com

Jamal Hadi Salim
Mojatatu Networks
Suite 200, 15 Fitzgerald Rd.
Ottawa, Ontario  K2H 9G1
Canada

Email: hadi@mojatatu.com