Internet Engineering Task Force                                  W. Wang
Internet-Draft                             Zhejiang Gongshang University
Intended status: Standards Track                           E. Haleplidis
Expires: April 27, 2012                             University of Patras
                                                                K. Ogawa
                                                         NTT Corporation
                                                                   C. Li
                                                  Hangzhou BAUD Networks
                                                              J. Halpern
                                                                Ericsson
                                                        October 25, 2011

              ForCES Logical Function Block (LFB) Library
                      draft-ietf-forces-lfb-lib-06

Abstract

This document defines basic classes of Logical Function Blocks
(LFBs) used in Forwarding and Control Element Separation (ForCES).
The basic LFB classes are defined according to the ForCES FE model
and ForCES protocol specifications, and are scoped to meet
requirements of typical router functions; they are considered the
basic LFB library for ForCES.  The library includes the descriptions
of the LFBs and the XML definitions.

Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time.  It is inappropriate to use Internet-Drafts as
reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 27, 2012.

Copyright Notice

Copyright (c) 2011 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document.  Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.

Table of Contents

   1.  Terminology and Conventions
       1.1.  Requirements Language
   2.  Definitions
   3.  Introduction
       3.1.  Scope of the Library
       3.2.  Overview of LFB Classes in the Library
             3.2.1.  LFB Design Choices
             3.2.2.  LFB Class Groupings
             3.2.3.  Sample LFB Class Application
       3.3.  Document Structure
   4.  Base Types
       4.1.  Data Types
             4.1.1.  Atomic
             4.1.2.  Compound struct
             4.1.3.  Compound array
       4.2.  Frame Types
       4.3.  MetaData Types
       4.4.  XML for Base Type Library
   5.  LFB Class Description
       5.1.  Ethernet Processing LFBs
             5.1.1.  EtherPHYCop
             5.1.2.  EtherMACIn
             5.1.3.  EtherClassifier
             5.1.4.  EtherEncap
             5.1.5.  EtherMACOut
       5.2.  IP Packet Validation LFBs
             5.2.1.  IPv4Validator
             5.2.2.  IPv6Validator
       5.3.  IP Forwarding LFBs
             5.3.1.  IPv4UcastLPM
             5.3.2.  IPv4NextHop
             5.3.3.  IPv6UcastLPM
             5.3.4.  IPv6NextHop
       5.4.  Redirect LFBs
             5.4.1.  RedirectIn
             5.4.2.  RedirectOut
       5.5.  General Purpose LFBs
             5.5.1.  BasicMetadataDispatch
             5.5.2.  GenericScheduler
   6.  XML for LFB Library
   7.  LFB Class Use Cases
       7.1.  IPv4 Forwarding
       7.2.  ARP processing
   8.  Contributors
   9.  Acknowledgements
   10. IANA Considerations
       10.1. LFB Class Names and LFB Class Identifiers
       10.2. Metadata ID
       10.3. Exception ID
       10.4. Validate Error ID
   11. Security Considerations
   12. References
       12.1. Normative References
       12.2. Informative References
   Authors' Addresses

1.  Terminology and Conventions

1.1.  Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].

2.  Definitions

This document follows the terminology defined in the ForCES protocol
specification [RFC5810] and in the ForCES FE model [RFC5812].  The
definitions below are repeated for clarity.

Control Element (CE) - A logical entity that implements the ForCES
protocol and uses it to instruct one or more FEs on how to process
packets.  CEs handle functionality such as the execution of control
and signaling protocols.

Forwarding Element (FE) - A logical entity that implements the
ForCES protocol.  FEs use the underlying hardware to provide per-
packet processing and handling as directed/controlled by one or more
CEs via the ForCES protocol.

[...]

LFB (Logical Function Block) - The basic building block that is
operated on by the ForCES protocol.  The LFB is a well-defined,
logically separable functional block that resides in an FE and is
controlled by the CE via the ForCES protocol.  The LFB may reside at
the FE's datapath and process packets or may be purely an FE control
or configuration entity that is operated on by the CE.  Note that
the LFB is a functionally accurate abstraction of the FE's
processing capabilities, but not a hardware-accurate representation
of the FE implementation.

FE Model - The FE model is designed to model the logical processing
functions of an FE and is defined by the ForCES FE model document
[RFC5812].  The FE model includes three components: the modeling of
individual Logical Functional Blocks (LFB model), the logical
interconnection between LFBs (LFB topology), and the FE-level
attributes, including FE capabilities.  The FE model provides the
basis to define the information elements exchanged between the CE
and the FE in the ForCES protocol [RFC5810].

FE Topology - A representation of how the multiple FEs within a
single NE are interconnected.  Sometimes this is called inter-FE
topology, to be distinguished from intra-FE topology (i.e., LFB
topology).

LFB Class and LFB Instance - LFBs are categorized by LFB Classes.
An LFB Instance represents an LFB Class (or Type) existence.  There
may be multiple instances of the same LFB Class (or Type) in an FE.
An LFB Class is represented by an LFB Class ID, and an LFB Instance
is represented by an LFB Instance ID.  As a result, an
[...]

visible to the CEs are conceptualized in the FE model as the LFB
components.  The LFB components include, for example, flags, single
parameter arguments, complex arguments, and tables that the CE can
read and/or write via the ForCES protocol (see below).

LFB Topology - Representation of how the LFB instances are logically
interconnected and placed along the datapath within one FE.
Sometimes it is also called intra-FE topology, to be distinguished
from inter-FE topology.

Data Path - A conceptual path taken by packets within the forwarding
plane inside an FE.  Note that more than one data path can exist
within an FE.

ForCES Protocol - While there may be multiple protocols used within
the overall ForCES architecture, the terms "ForCES protocol" and
"protocol" refer to the Fp reference points in the ForCES framework
in [RFC3746].  This protocol does not apply to CE-to-CE
communication, FE-to-FE communication, or to communication between
FE and CE managers.  Basically, the ForCES protocol works in a
master-slave mode in which FEs are slaves and CEs are masters.  The
ForCES protocol is defined by the ForCES protocol specification
[RFC5810].

LFB Port - A port refers to an LFB input port or output port.  See
Section 3.2 of [RFC5812] for more detailed definitions.

Physical Port - A port that refers to a physical media input port or
output port of an FE.  A physical port is usually assigned a
physical port ID, abbreviated as PHYPortID.  This document mainly
deals with physical ports with Ethernet media.

Logical Port - A conceptually virtual port at the data link layer
(L2) or network layer (L3).  A logical port is usually assigned a
logical port ID, abbreviated as LogicalPortID.  Logical ports can be
further categorized as L2 logical ports or L3 logical ports.  An L2
logical port can be assigned an L2 logical port ID, abbreviated as
L2PortID.  An L3 logical port can be assigned an L3 logical port ID,
abbreviated as L3PortID.  MAC-layer VLAN ports are logical ports,
specifically L2 logical ports.

LFB Class Library - The LFB class library is a set of LFB classes
that have been identified as the most common functions found in most
FEs and hence should be defined first by the ForCES Working Group.
The LFB class library is defined by this document.

3.  Introduction

[RFC5810] specifies the Forwarding and Control Element Separation
(ForCES) framework.  In the framework, Control Elements (CEs)
configure and manage one or more separate Forwarding Elements (FEs)
within a Network Element (NE) by use of the ForCES protocol, which
is also specified in [RFC5810].  [RFC5812] specifies the Forwarding
Element (FE) model.  In the model, resources in FEs are described by
classes of Logical Function Blocks (LFBs).  The FE model defines the
structure and abstract semantics of LFBs and provides an XML schema
for the definitions of LFBs.

This document conforms to the specifications of the FE model
[RFC5812] and specifies detailed definitions of classes of LFBs,
including detailed XML definitions of LFBs.  These LFBs form a base
LFB library for ForCES.  LFBs in the base library are expected to be
combined to form an LFB topology for a typical router to implement
IP forwarding.  It should be emphasized that an LFB is an
abstraction of functions rather than of their implementation
details.  The purpose of the LFB definitions is to represent
functions so as to provide interoperability between separate CEs and
FEs.

More LFB classes with more functions may be developed in the future
and documented by the IETF.  Vendors may also develop proprietary
LFB classes as described in the FE model [RFC5812].

3.1.  Scope of the Library

The LFB classes described in this document are intended to provide
the functions of a typical router.  [RFC1812] specifies that a
typical router is expected to provide functions to:

(1) Interface to packet networks and implement the functions
    required by that network.  These functions typically include:

    * Encapsulating and decapsulating the IP datagrams with the
      connected network framing (e.g., an Ethernet header and
      checksum),

    * Sending and receiving IP datagrams up to the maximum size
      supported by that network (this size is the network's Maximum
      Transmission Unit or MTU),

    * Translating the IP destination address into an appropriate
      network-level address for the connected network (e.g., an
      Ethernet hardware address), if needed, and

    * Responding to network flow control and error indications, if
      any.

(2) Conform to specific Internet protocols including the Internet
    Protocol (IPv4 and/or IPv6), Internet Control Message Protocol
    (ICMP), and others as necessary.

(3) Receive and forward Internet datagrams.  Important issues in
    this process are buffer management, congestion control, and
    fairness.

    * Recognizes error conditions and generates ICMP error and
      information messages as required.

    * Drops datagrams whose time-to-live fields have reached zero.

    * Fragments datagrams when necessary to fit into the MTU of the
      next network.

(4) Choose a next-hop destination for each IP datagram, based on the
    information in its routing database.

(5) Usually support an interior gateway protocol (IGP) to carry out
    distributed routing and reachability algorithms with the other
    routers in the same autonomous system.  In addition, some
    routers will need to support an exterior gateway protocol (EGP)
    to exchange topological information with other autonomous
    systems.  For all routers, it is essential to provide the
    ability to manage static routing items.

(6) Provide network management and system support facilities,
    including loading, debugging, status reporting, exception
    reporting, and control.

The classical IP router utilizing the ForCES framework constitutes a
CE running some controlling IGP and/or EGP function or static route
setup, and FEs implemented by use of Logical Function Blocks (LFBs)
conforming to the FE model [RFC5812] specification.  The CE, in
conformance with the ForCES protocol [RFC5810] and the FE model
[RFC5812] specifications, instructs the LFBs on the FE how to treat
received/sent packets.

Packets in an IP router are received and transmitted on physical
media typically referred to as "ports".  Different physical port
media will have different ways of encapsulating outgoing frames and
decapsulating incoming frames.  The different physical media will
also have different attributes that influence their behavior and how
frames get encapsulated or decapsulated.  This document deals only
with Ethernet physical media; future documents may deal with other
types of media.  This document will also interchangeably refer to a
port as an abstraction that constitutes a PHY and a MAC, as
described by LFBs such as EtherPHYCop, EtherMACIn, and EtherMACOut.

IP packets emanating from port LFBs are then processed by a
validation LFB before being further forwarded to the next LFB.
After the validation process, the packet is passed to an LFB where
an IP forwarding decision is made.  In the IP forwarding LFBs, a
Longest Prefix Match LFB is used to look up the destination
information in a packet and select a next-hop index for sending the
packet onward.  A next-hop LFB uses the next-hop index metadata to
apply the proper headers to the IP packets and direct them to the
proper egress.  Note that in processing IP packets, this document
adheres to the weak-host model [RFC1122], since that is the most
usable model for a packet-processing Network Element.

3.2.  Overview of LFB Classes in the Library

It is critical to classify functional requirements into various
classes of LFBs and to construct a base LFB library that is typical
but also flexible enough for various kinds of IP forwarding
equipment.

3.2.1.  LFB Design Choices

[...]

o  Unless there is a clear difference in functionality, similar
   packet processing should not be represented as two or more
   different LFBs.  Otherwise, it may add an extra burden on
   implementations to achieve interoperability.

3.2.2.  LFB Class Groupings

This document defines groups of LFBs for typical router function
requirements:

(1) A group of Ethernet processing LFBs are defined to abstract the
    packet processing for Ethernet as the port media type.  As the
    most popular media type with rich processing features, Ethernet
    media processing LFBs were a natural choice.  Definitions for
    processing of other port media types like POS or ATM may be
    incorporated in the library in a future version of this document
    or in a future separate document.  The following LFBs are
    defined for Ethernet processing:

    * EtherPHYCop (Section 5.1.1)

    * EtherMACIn (Section 5.1.2)

    * EtherClassifier (Section 5.1.3)

    * EtherEncap (Section 5.1.4)

    * EtherMACOut (Section 5.1.5)

(2) A group of LFBs are defined for the IP packet validation
    process.  The following LFBs are defined for IP validation
    processing:

    * IPv4Validator (Section 5.2.1)

    * IPv6Validator (Section 5.2.2)

(3) A group of LFBs are defined to abstract the IP forwarding
    process.  The following LFBs are defined for IP forwarding
    processing:

    * IPv4UcastLPM (Section 5.3.1)

    * IPv4NextHop (Section 5.3.2)

    * IPv6UcastLPM (Section 5.3.3)

    * IPv6NextHop (Section 5.3.4)

(4) A group of LFBs are defined to abstract the process for the
    redirect operation, i.e., data packet transmission between the
    CE and FEs.  The following LFBs are defined for redirect
    processing:

    * RedirectIn (Section 5.4.1)

    * RedirectOut (Section 5.4.2)

(5) A group of LFBs are defined to abstract some general purpose
    packet processing.  These processes are usually applicable at
    many processing locations in an FE's LFB topology.  The
    following LFBs are defined for general purpose processing:

    * BasicMetadataDispatch (Section 5.5.1)

    * GenericScheduler (Section 5.5.2)

3.2.3.  Sample LFB Class Application

Although Section 7 will present use cases for the LFBs defined in
this document, this section shows a sample LFB class application in
advance so that readers can get a quick overview of the LFB classes
and their usage.

Figure 1 shows the typical LFB processing path for an IPv4 unicast
forwarding case with Ethernet media interfaces.  To focus on the IP
forwarding function, some inputs or outputs of LFBs in the figure
that are not related to the function are ignored.  Section 7.1 will
describe the figure in detail.

[The ASCII diagram of the LFB processing path for this use case is
elided here.]

            Figure 1: LFB use case for IPv4 forwarding

3.3.  Document Structure

Base type definitions, including data types, packet frame types, and
metadata types, are presented in advance of the definitions of the
various LFB classes.  Section 4 (Base Types) provides a description
of the base types used by this LFB library.  To enable extensive use
of these base types by other LFB class definitions, the base type
definitions are provided as a separate library.

Within every group of LFB classes, a set of LFBs are defined for
individual function purposes.  Section 5 (LFB Class Description)
provides text descriptions of the individual LFBs.  Note that for a
complete definition of an LFB, a text description as well as an XML
definition is required.

LFB classes are finally defined in XML, following the specifications
and schema defined in the ForCES FE model [RFC5812].  Section 6 (XML
for LFB Library) provides the complete XML definitions of the base
LFB class library.

Section 7 provides several use cases on how some typical router
functions can be implemented using the base LFB library defined in
this document.

4.  Base Types

The FE model [RFC5812] has specified predefined (built-in) atomic
data types as follows: char, uchar, int16, uint16, int32, uint32,
int64, uint64, string[N], string, byte[N], boolean, octetstring[N],
float16, float32, and float64.

Based on the atomic data types and with the use of type definition
elements in the FE model XML schema, new data types, packet frame
types, and metadata types can be defined.

To define a base LFB library for typical router functions, a set of
[...]

4.1.1.  Atomic

The following data types are defined as atomic data types and put in
the base type library:

Data Type Name      Brief Description
--------------      -----------------
IPv4Addr            IPv4 address
IPv6Addr            IPv6 address
IEEEMAC             IEEE MAC address
LANSpeedType        Network speed values
DuplexType          Duplex types
PortStatusValues    The possible values of port status, used for
                    both administrative and operative status
VlanIDType          The type of VLAN ID
VlanPriorityType    The type of VLAN priority
SchdDisciplineType  Scheduling discipline type

4.1.2.  Compound struct

The following compound struct types are defined in the base type
library:

Data Type Name           Brief Description
--------------           -----------------
EtherDispatchEntryType   Entry type for Ethernet dispatch table
VlanInputTableEntryType  Entry type for VLAN input table
EncapTableEntryType      Entry type for Ethernet encapsulation table
MACInStatsType           Statistics type for EtherMACIn LFB
MACOutStatsType          Statistics type for EtherMACOut LFB
EtherClassifyStatsType   Entry type for statistics table in
                         EtherClassifier LFB
IPv4PrefixInfoType       Entry type for IPv4 prefix table
IPv6PrefixInfoType       Entry type for IPv6 prefix table
IPv4NextHopInfoType      Entry type for IPv4 next hop table
IPv6NextHopInfoType      Entry type for IPv6 next hop table
IPv4ValidatorStatsType   Statistics type in IPv4Validator LFB
IPv6ValidatorStatsType   Statistics type in IPv6Validator LFB
IPv4UcastLPMStatsType    Statistics type in IPv4UcastLPM LFB
IPv6UcastLPMStatsType    Statistics type in IPv6UcastLPM LFB
QueueStatsType           Entry type for queue depth table
MetadataDispatchType     Entry type for metadata dispatch table

4.1.3.  Compound array

Compound array types are mostly created based on compound struct
types for LFB table components; an informal usage sketch follows the
table below.  The following compound array types are defined in this
base type library:

Data Type Name               Brief Description
--------------               -----------------
EtherClassifyStatsTableType  Type for Ethernet classifier statistics
                             information table
EtherDispatchTableType       Type for Ethernet dispatch table
VlanInputTableType           Type for VLAN input table
EncapTableType               Type for Ethernet encapsulation table
IPv4PrefixTableType          Type for IPv4 prefix table
IPv6PrefixTableType          Type for IPv6 prefix table
IPv4NextHopTableType         Type for IPv4 next hop table
IPv6NextHopTableType         Type for IPv6 next hop table
MetadataDispatchTableType    Type for metadata dispatch table
QueueStatsTableType          Type for queue depth table
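
As an informal illustration only (the component name, component ID,
and access mode below are assumptions made for this sketch, not
normative values; the authoritative LFB class definitions are given
in Section 6), an LFB class would typically declare a table
component that references one of these array types along the
following lines:

  <!-- Illustrative sketch: componentID, access mode, and the
       component name are assumptions, not normative values. -->
  <component componentID="1" access="read-write">
    <name>IPv4PrefixTable</name>
    <synopsis>The IPv4 prefix table used for longest prefix
      matching</synopsis>
    <typeRef>IPv4PrefixTableType</typeRef>
  </component>

Each entry of such a table then takes the corresponding compound
struct type (here, IPv4PrefixInfoType).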

4.2.  Frame Types

According to the FE model [RFC5812], frame types are used in LFB
definitions to define the types of packet frames that an LFB expects
at its input port and that the LFB emits at its output port.  The
<frameDef> element in the FE model is used to define a new frame
type.
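
For illustration, a frame type declaration is a simple named
definition; for example, the IPv4 frame type listed below would be
declared roughly as follows (a sketch only; the normative
declarations appear in the XML of Section 4.4):

  <frameDef>
    <name>IPv4</name>
    <synopsis>An IPv4 packet</synopsis>
  </frameDef>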

The following frame types are defined in the base type library:

Frame Name     Brief Description
----------     -----------------
EthernetII     An Ethernet II frame
ARP            An ARP packet
IPv4           An IPv4 packet
IPv6           An IPv6 packet
IPv4Unicast    An IPv4 unicast packet

[...]

LFB Metadata is used to communicate per-packet state from one LFB to
another.  The <metadataDef> element in the FE model is used to
define a new metadata type; a sketch of the form such a definition
takes is given after the table below.

The following metadata types are currently defined in the base type
library.

Metadata Name        Metadata ID  Brief Description
-------------        -----------  -----------------
PHYPortID            1            The ingress physical port that the
                                  packet arrived on
SrcMAC               2            Source MAC address of the packet
DstMAC               3            Destination MAC address of the
                                  packet
LogicalPortID        4            ID of a logical port for the
                                  packet
EtherType            5            The packet's Ethernet type
VlanID               6            The VLAN ID of the Ethernet packet
VlanPriority         7            The priority of the Ethernet
                                  packet
NexthopIPv4Addr      8            Nexthop IPv4 address the packet is
                                  sent to
NexthopIPv6Addr      9            Nexthop IPv6 address the packet is
                                  sent to
HopSelector          10           A search key the packet can use to
                                  look up a nexthop table for next
                                  hop information for the packet
ExceptionID          11           Indicates the exception type of a
                                  packet that is exceptional for
                                  some processing
ValidateErrorID      12           Indicates the error type of a
                                  packet that failed some validation
                                  process
L3PortID             13           ID of an L3 port
RedirectIndex        14           Metadata that the CE sends to the
                                  RedirectIn LFB for the associated
                                  packet to select an output port in
                                  the LFB group output "PktsOut"
MediaEncapInfoIndex  15           A search key the packet uses to
                                  look up a media encapsulation
                                  table to select its encapsulation
                                  media as well as the encapsulation
                                  LFB that follows
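
As a sketch of the form such a definition takes (the <typeRef> shown
here is an assumption made for illustration; the normative metadata
definitions, with their assigned metadata IDs, are part of the base
type library XML in Section 4.4), the PHYPortID metadata would be
declared roughly as follows:

  <metadataDef>
    <name>PHYPortID</name>
    <synopsis>The ingress physical port that the packet arrived on
    </synopsis>
    <metadataID>1</metadataID>
    <typeRef>uint32</typeRef>
  </metadataDef>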

4.4.  XML for Base Type Library

<?xml version="1.0" encoding="UTF-8"?>
<LFBLibrary xmlns="urn:ietf:params:xml:ns:forces:lfbmodel:1.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  provides="BaseTypeLibrary">
  <frameDefs>
    <frameDef>
      <name>EthernetAll</name>

[...]

      <synopsis>IPv4 address</synopsis>
      <typeRef>byte[4]</typeRef>
    </dataTypeDef>
    <dataTypeDef>
      <name>IPv6Addr</name>
      <synopsis>IPv6 address</synopsis>
      <typeRef>byte[16]</typeRef>
    </dataTypeDef>
    <dataTypeDef>
      <name>IEEEMAC</name>
      <synopsis>IEEE MAC address.</synopsis>
      <typeRef>byte[6]</typeRef>
    </dataTypeDef>
    <dataTypeDef>
      <name>LANSpeedType</name>
      <synopsis>Network speed values</synopsis>
      <atomic>
        <baseType>uint32</baseType>
        <specialValues>
          <specialValue value="0x00000001">
            <name>LAN_SPEED_10M</name>

[...]

          <specialValue value="0x00000002">
            <name>Half-duplex</name>
            <synopsis>port negotiation half duplex</synopsis>
          </specialValue>
          <specialValue value="0x00000003">
            <name>Full-duplex</name>
            <synopsis>port negotiation full duplex</synopsis>
          </specialValue>
        </specialValues>
      </atomic>
      <!-- XXX: This is very IEEE specific -->
    </dataTypeDef>
    <dataTypeDef>
      <name>PortStatusValues</name>
      <synopsis>The possible values of port status, used for both
        administrative and operative status.</synopsis>
      <atomic>
        <baseType>uchar</baseType>
        <specialValues>
          <specialValue value="0">
            <name>Disabled</name>

[...]

          </specialValue>
          <specialValue value="2">
            <name>Down</name>
            <synopsis>The port is down.</synopsis>
          </specialValue>
        </specialValues>
      </atomic>
    </dataTypeDef>
    <dataTypeDef>
      <name>MACInStatsType</name>
      <synopsis>Statistics type in EtherMACIn LFB.</synopsis>
      <struct>
        <component componentID="1">
          <name>NumPacketsReceived</name>
          <synopsis>The number of packets received.</synopsis>
          <typeRef>uint64</typeRef>
        </component>
        <component componentID="2">
          <name>NumPacketsDropped</name>
          <synopsis>The number of packets dropped.</synopsis>
          <typeRef>uint64</typeRef>
        </component>
      </struct>
    </dataTypeDef>
    <dataTypeDef>
      <name>MACOutStatsType</name>
      <synopsis>Statistics type in EtherMACOut LFB.</synopsis>
      <struct>
        <component componentID="1">
          <name>NumPacketsTransmitted</name>
          <synopsis>The number of packets transmitted.</synopsis>
          <typeRef>uint64</typeRef>
        </component>
        <component componentID="2">
          <name>NumPacketsDropped</name>
          <synopsis>The number of packets dropped.</synopsis>
          <typeRef>uint64</typeRef>
        </component>
      </struct>
    </dataTypeDef>
    <dataTypeDef>
      <name>EtherDispatchEntryType</name>
      <synopsis>Entry type for Ethernet dispatch table in
        EtherClassifier LFB.</synopsis>
      <struct>
        <component componentID="1">
          <name>LogicalPortID</name>
          <synopsis>Logical port ID.</synopsis>
          <typeRef>uint32</typeRef>
        </component>
        <component componentID="2">
          <name>EtherType</name>
          <synopsis>The EtherType value in the Ethernet header.
          </synopsis>

[...]

            Note: LFBOutputSelectIndex is the FromPortIndex for
            the port group "ClassifyOut" in the table LFBTopology
            (of FEObject LFB) as defined for the EtherClassifier
            LFB.</synopsis>
          <typeRef>uint32</typeRef>
        </component>
      </struct>
    </dataTypeDef>
    <dataTypeDef>
      <name>EtherDispatchTableType</name>
      <synopsis>Type for Ethernet dispatch table.  This table is
        used in EtherClassifier LFB.  Every Ethernet packet can be
        dispatched to the LFB output group ports according to the
        logical port ID.</synopsis>
      <array type="variable-size">
        <typeRef>EtherDispatchEntryType</typeRef>
      </array>
    </dataTypeDef>
    <dataTypeDef>
      <name>VlanIDType</name>
      <synopsis>The type of VLAN ID</synopsis>
      <atomic>
        <baseType>uint16</baseType>
        <rangeRestriction>
          <allowedRange min="0" max="4095"/>
        </rangeRestriction>
      </atomic>
    </dataTypeDef>
    <dataTypeDef>
      <name>VlanPriorityType</name>
      <synopsis>The type of VLAN priority.</synopsis>
      <atomic>
        <baseType>uchar</baseType>
        <rangeRestriction>
          <allowedRange min="0" max="7"/>
        </rangeRestriction>
      </atomic>
    </dataTypeDef>
    <dataTypeDef>
      <name>VlanInputTableEntryType</name>
      <synopsis>Entry type for VLAN input table in EtherClassifier
        LFB.</synopsis>
      <struct>
        <component componentID="1">
          <name>IncomingPortID</name>
          <synopsis>The incoming port ID.</synopsis>
          <typeRef>uint32</typeRef>
        </component>
        <component componentID="2">
          <name>VlanID</name>
          <synopsis>Vlan ID.</synopsis>
          <typeRef>VlanIDType</typeRef>
        </component>
        <component componentID="3">
          <name>LogicalPortID</name>
          <synopsis>Logical port ID.</synopsis>
          <typeRef>uint32</typeRef>
        </component>
      </struct>
    </dataTypeDef>
    <dataTypeDef>
      <name>VlanInputTableType</name>
      <synopsis>Type for VLAN input table.  This table is used in
        EtherClassifier LFB.  Every Ethernet packet can get a new
        LogicalPortID according to the IncomingPortID and VlanID.
      </synopsis>
      <array type="variable-size">
        <typeRef>VlanInputTableEntryType</typeRef>
      </array>
    </dataTypeDef>
<dataTypeDef>
<name>EtherClassifyStatsType</name>
<synopsis>Entry type for statistics table in EtherClassifier
LFB.</synopsis>
<struct>
<component componentID="1">
skipping to change at page 24, line 28
<component componentID="2">
<name>PacketsNum</name>
<synopsis>Number of packets.</synopsis>
<typeRef>uint64</typeRef>
</component>
</struct>
</dataTypeDef>
<dataTypeDef>
<name>EtherClassifyStatsTableType</name>
<synopsis>Type for Ethernet classifier statistics
information table in EtherClassifier LFB.</synopsis>
<array type="variable-size">
<typeRef>EtherClassifyStatsType</typeRef>
</array>
</dataTypeDef>
<dataTypeDef>
<name>IPv4ValidatorStatsType</name>
<synopsis>Statistics type in IPv4validator LFB.</synopsis>
<struct>
<component componentID="1">
<name>badHeaderPkts</name>
<synopsis>Number of bad header packets.</synopsis>
<typeRef>uint64</typeRef>
</component>
<component componentID="2">
<name>badTotalLengthPkts</name>
<synopsis>Number of bad total length packets.</synopsis>
<typeRef>uint64</typeRef>
skipping to change at page 25, line 13
</component>
<component componentID="4">
<name>badChecksumPkts</name>
<synopsis>Number of bad checksum packets.</synopsis>
<typeRef>uint64</typeRef>
</component>
</struct>
</dataTypeDef>
<dataTypeDef>
<name>IPv6ValidatorStatsType</name>
<synopsis>Statistics type in IPv6validator LFB.</synopsis>
<struct>
<component componentID="1">
<name>badHeaderPkts</name>
<synopsis>Number of bad header packets.</synopsis>
<typeRef>uint64</typeRef>
</component>
<component componentID="2">
<name>badTotalLengthPkts</name>
<synopsis>Number of bad total length packets.</synopsis>
<typeRef>uint64</typeRef>
skipping to change at page 26, line 51
<synopsis>This route is a default route.
</synopsis>
</specialValue>
</specialValues>
</atomic>
</component>
</struct>
</dataTypeDef>
<dataTypeDef>
<name>IPv4PrefixTableType</name>
<synopsis>Type for IPv4 prefix table. This table is currently
used in IPv4UcastLPM LFB. The LFB uses the destination IPv4
address of every input packet as a search key to look up this
table in order to extract a next hop selector.</synopsis>
<array type="variable-size">
<typeRef>IPv4PrefixInfoType</typeRef>
</array>
</dataTypeDef>
<dataTypeDef>
<name>IPv4UcastLPMStatsType</name>
<synopsis>Statistics type in IPv4Unicast LFB.</synopsis>
<struct>
<component componentID="1">
<name>InRcvdPkts</name>
skipping to change at page 29, line 5
<synopsis>This route is a default route.
</synopsis>
</specialValue>
</specialValues>
</atomic>
</component>
</struct>
</dataTypeDef>
<dataTypeDef>
<name>IPv6PrefixTableType</name>
<synopsis>Type for IPv6 prefix table. This table is currently
used in IPv6UcastLPM LFB. The LFB uses the destination IPv6
address of every input packet as a search key to look up this
table in order to extract a next hop selector.</synopsis>
<array type="variable-size">
<typeRef>IPv6PrefixInfoType</typeRef>
</array>
</dataTypeDef>
<dataTypeDef>
<name>IPv6UcastLPMStatsType</name>
<synopsis>Statistics type in IPv6Unicast LFB.</synopsis>
<struct>
<component componentID="1">
<name>InRcvdPkts</name>
skipping to change at page 29, line 43
</component>
</struct>
</dataTypeDef>
<dataTypeDef>
<name>IPv4NextHopInfoType</name>
<synopsis>Entry type for IPv4 next hop table.</synopsis>
<struct>
<component componentID="1">
<name>L3PortID</name>
<synopsis>The ID of the Logical/physical Output Port
that we pass onto the downstream LFB instance. This
ID indicates what the port to the neighbor is, as
defined by L3.</synopsis>
<typeRef>uint32</typeRef>
</component>
<component componentID="2">
<name>MTU</name>
<synopsis>Maximum Transmission Unit for the outgoing
port. It is for deciding whether the packet needs
fragmentation.</synopsis>
<typeRef>uint32</typeRef>
</component>
<component componentID="3">
<name>NextHopIPAddr</name>
<synopsis>Next Hop IPv4 Address.</synopsis>
<typeRef>IPv4Addr</typeRef>
</component>
<component componentID="4">
<name>MediaEncapInfoIndex</name>
<synopsis>The index we pass onto the downstream LFB
instance. This index is used to look up a table
(typically media encapsulation related) further
downstream.</synopsis>
<typeRef>uint32</typeRef>
</component>
<component componentID="5">
<name>LFBOutputSelectIndex</name>
<synopsis>LFB Group output port index to select the
downstream LFB port. Some possibilities of downstream
LFB instances are:
skipping to change at page 30, line 36
b) Other type of media LFB
c) A metadata Dispatcher
d) A redirect LFB
e) etc.
Note: LFBOutputSelectIndex is the FromPortIndex for
the port group "SuccessOut" in the table LFBTopology
(of FEObject LFB) as defined for the IPv4NextHop LFB.
</synopsis>
<typeRef>uint32</typeRef>
</component>
</struct>
</dataTypeDef>
<dataTypeDef>
<name>IPv4NextHopTableType</name>
<synopsis>Type for IPv4 next hop table. This table is used
in IPv4NextHop LFB. The LFB uses the received "HopSelector"
metadata as the array index to get the next hop
information.</synopsis>
<array type="variable-size">
<typeRef>IPv4NextHopInfoType</typeRef>
</array>
</dataTypeDef>
<dataTypeDef>
<name>IPv6NextHopInfoType</name>
<synopsis>Entry type for IPv6 next hop table.</synopsis>
<struct>
<component componentID="1">
<name>L3PortID</name>
<synopsis>The ID of the Logical/physical Output Port
that we pass onto the downstream LFB instance. This
ID indicates what the port to the neighbor is, as
defined by L3.</synopsis>
<typeRef>uint32</typeRef>
</component>
<component componentID="2">
<name>MTU</name>
<synopsis>Maximum Transmission Unit for the outgoing
port. It is for deciding whether the packet needs
fragmentation.</synopsis>
<typeRef>uint32</typeRef>
</component>
<component componentID="3">
<name>NextHopIPAddr</name>
<synopsis>Next Hop IPv6 Address.</synopsis>
<typeRef>IPv6Addr</typeRef>
</component>
<component componentID="4">
<name>MediaEncapInfoIndex</name>
<synopsis>The index we pass onto the downstream LFB
instance. This index is used to look up a table
(typically media encapsulation related) further
downstream.</synopsis>
<typeRef>uint32</typeRef>
</component>
<component componentID="5">
<name>LFBOutputSelectIndex</name>
<synopsis>LFB Group output port index to select the
downstream LFB port. Some possibilities of downstream
LFB instances are:
skipping to change at page 32, line 4
Note: LFBOutputSelectIndex is the FromPortIndex for
the port group "SuccessOut" in the table LFBTopology
(of FEObject LFB) as defined for the IPv6NextHop LFB.
</synopsis>
<typeRef>uint32</typeRef>
</component>
</struct>
</dataTypeDef>
<dataTypeDef>
<name>IPv6NextHopTableType</name>
<synopsis>Type for IPv6 next hop table. This table is used
in IPv6NextHop LFB. The LFB uses the received "HopSelector"
metadata as the array index to get the next hop
information.</synopsis>
<array type="variable-size">
<typeRef>IPv6NextHopInfoType</typeRef>
</array>
</dataTypeDef>
<dataTypeDef>
<name>EncapTableEntryType</name>
<synopsis>Entry type for Ethernet encapsulation table in
EtherEncap LFB.</synopsis>
<struct>
<component componentID="1">
<name>DstMac</name>
<synopsis>Ethernet MAC of the neighbor.</synopsis>
<typeRef>IEEEMAC</typeRef>
</component>
<component componentID="2">
<name>SrcMac</name>
<synopsis>Source MAC used in encapsulation.</synopsis>
<typeRef>IEEEMAC</typeRef>
</component>
<component componentID="3">
<name>VlanID</name>
<synopsis>VLAN ID.</synopsis>
<typeRef>VlanIDType</typeRef>
</component>
<component componentID="4">
<name>L2PortID</name>
<synopsis>Output logical L2 port ID.</synopsis>
<typeRef>uint32</typeRef>
</component>
</struct>
</dataTypeDef>
<dataTypeDef>
<name>EncapTableType</name>
<synopsis>Type for Ethernet encapsulation table. This
table is used in EtherEncap LFB. The LFB uses the received
"MediaEncapInfoIndex" metadata to get the encapsulation
information.</synopsis>
<array type="variable-size">
<typeRef>EncapTableEntryType</typeRef>
</array>
</dataTypeDef>
<dataTypeDef>
<name>MetadataDispatchType</name>
<synopsis>Entry type for Metadata dispatch table in
BasicMetadataDispatch LFB.</synopsis>
<struct>
<component componentID="1">
<name>MetadataValue</name>
<synopsis>Metadata value.</synopsis>
<typeRef>uint32</typeRef>
</component>
<component componentID="2">
<name>OutputIndex</name>
<synopsis>Group output port index.</synopsis>
<typeRef>uint32</typeRef>
</component>
</struct>
</dataTypeDef>
<dataTypeDef>
<name>MetadataDispatchTableType</name>
<synopsis>Type for Metadata dispatch table. This table is used
in BasicMetadataDispatch LFB. The LFB uses MetadataValue to
get the LFB group output port index.</synopsis>
<array type="variable-size">
<typeRef>MetadataDispatchType</typeRef>
<contentKey contentKeyID="1">
<contentKeyField>MetadataValue</contentKeyField>
</contentKey>
</array>
</dataTypeDef>
<dataTypeDef>
<name>SchdDisciplineType</name>
<synopsis>Scheduling discipline type.</synopsis>
<atomic>
<baseType>uint32</baseType>
<specialValues>
<specialValue value="1">
<name>RR</name>
<synopsis>Round Robin scheduler.</synopsis>
</specialValue>
</specialValues>
</atomic>
</dataTypeDef>
<dataTypeDef>
<name>QueueStatsType</name>
<synopsis>Entry type for queue statistics table in
GenericScheduler LFB.</synopsis>
<struct>
<component componentID="1">
<name>QueueID</name>
<synopsis>Queue ID.</synopsis>
<typeRef>uint32</typeRef>
</component>
<component componentID="2">
<name>QueueDepthInPackets</name>
<synopsis>The queue depth when the depth units
are packets.</synopsis>
<typeRef>uint32</typeRef>
</component>
<component componentID="3">
<name>QueueDepthInBytes</name>
<synopsis>The queue depth when the depth units
are bytes.</synopsis>
<typeRef>uint32</typeRef>
</component>
</struct>
</dataTypeDef>
<dataTypeDef>
<name>QueueStatsTableType</name>
<synopsis>Type for Queue statistics table in GenericScheduler
LFB.</synopsis>
<array type="variable-size">
<typeRef>QueueStatsType</typeRef>
</array>
</dataTypeDef>
</dataTypeDefs>
<metadataDefs>
<metadataDef>
<name>PHYPortID</name>
<synopsis>The physical port ID that a packet has entered.
</synopsis>
skipping to change at page 35, line 17
<name>EtherType</name>
<synopsis>Indicating the Ethernet type of the Ethernet packet.
</synopsis>
<metadataID>5</metadataID>
<typeRef>uint32</typeRef>
</metadataDef>
<metadataDef>
<name>VlanID</name>
<synopsis>The VLAN ID of the Ethernet packet.</synopsis>
<metadataID>6</metadataID>
<typeRef>VlanIDType</typeRef>
</metadataDef>
<metadataDef>
<name>VlanPriority</name>
<synopsis>The priority of the Ethernet packet.</synopsis>
<metadataID>7</metadataID>
<typeRef>VlanPriorityType</typeRef>
</metadataDef>
<metadataDef>
<name>NexthopIPv4Addr</name>
<synopsis>Nexthop IPv4 address the packet is sent to.
</synopsis>
<metadataID>8</metadataID>
<typeRef>IPv4Addr</typeRef>
</metadataDef>
<metadataDef>
<name>NexthopIPv6Addr</name>
<synopsis>Nexthop IPv6 address the packet is sent to.
</synopsis>
<metadataID>9</metadataID>
<typeRef>IPv6Addr</typeRef>
</metadataDef>
<metadataDef>
<name>HopSelector</name>
<synopsis>A search key the packet can use to look up a nexthop
table for next hop information of the packet.</synopsis>
<metadataID>10</metadataID>
<typeRef>uint32</typeRef>
</metadataDef>
<metadataDef>
<name>ExceptionID</name>
<synopsis>Indicating the exception type of a packet which is
exceptional for some processing.</synopsis>
<metadataID>11</metadataID>
<atomic>
<baseType>uint32</baseType>
<specialValues>
<specialValue value="0">
<name>AnyUnrecognizedExceptionCase</name>
<synopsis>Any unrecognized exception case.</synopsis>
</specialValue>
<specialValue value="1">
<name>ClassifyNoMatching</name>
<synopsis>There is no matching when classifying the
packet in EtherClassifier LFB.</synopsis>
</specialValue>
<specialValue value="2">
<name>MediaEncapInfoIndexInvalid</name>
<synopsis>The MediaEncapInfoIndex value of the
packet is invalid and cannot be allocated in the
EncapTable.</synopsis>
</specialValue>
<specialValue value="3">
<name>EncapTableLookupFailed</name>
<synopsis>The packet failed lookup of the EncapTable
table even though the MediaEncapInfoIndex is valid.
</synopsis>
</specialValue>
<specialValue value="4">
<name>BadTTL</name>
<synopsis>Packet with expired TTL.</synopsis>
</specialValue>
<specialValue value="5">
<name>IPv4HeaderLengthMismatch</name>
<synopsis>Packet with header length more than 5
words.</synopsis>
</specialValue>
<specialValue value="6">
<name>RouterAlertOptions</name>
<synopsis>Packet IP header includes Router Alert
options.</synopsis>
</specialValue>
<specialValue value="7">
<name>IPv6HopLimitZero</name>
<synopsis>Packet with Hop Limit zero.</synopsis>
</specialValue>
<specialValue value="8">
<name>IPv6NextHeaderHBH</name>
<synopsis>Packet with next header set to Hop-by-Hop.
</synopsis>
</specialValue>
<specialValue value="9">
<name>SrcAddressException</name>
<synopsis>Packet with exceptional source address.
</synopsis>
</specialValue>
<specialValue value="10">
<name>DstAddressException</name>
<synopsis>Packet with exceptional destination
address.</synopsis>
</specialValue>
<specialValue value="11">
<name>LPMLookupFailed</name>
<synopsis>The packet failed the LPM lookup of the
prefix table.</synopsis>
</specialValue>
<specialValue value="12">
<name>HopSelectorInvalid</name>
<synopsis>The HopSelector for the packet is invalid.
</synopsis>
</specialValue>
<specialValue value="13">
<name>NextHopLookupFailed</name>
<synopsis>The packet failed lookup of the NextHop
table even though the HopSelector is valid.
</synopsis>
</specialValue>
<specialValue value="14">
<name>FragRequired</name>
<synopsis>The MTU for the outgoing interface is less
than the packet size.</synopsis>
</specialValue>
<specialValue value="15">
<name>MetadataNoMatching</name>
<synopsis>There is no matching when looking up the
metadata dispatch table.</synopsis>
</specialValue>
</specialValues>
</atomic>
</metadataDef>
<metadataDef>
<name>ValidateErrorID</name>
<synopsis>Indicating the error type of a packet that failed
some validation process.</synopsis>
<metadataID>12</metadataID>
<atomic>
<baseType>uint32</baseType>
<specialValues>
<specialValue value="0">
<name>AnyUnrecognizedValidateErrorCase</name>
<synopsis>Any unrecognized validate error case.
</synopsis>
</specialValue>
<specialValue value="1">
<name>InvalidIPv4PacketSize</name>
<synopsis>Packet size reported is less than 20
bytes.</synopsis>
</specialValue>
<specialValue value="2">
<name>NotIPv4Packet</name>
<synopsis>Packet is not IP version 4.</synopsis>
</specialValue>
<specialValue value="3">
<name>InvalidIPv4HeaderLengthSize</name>
<synopsis>Packet with header length less than
5 words.</synopsis>
</specialValue>
<specialValue value="4">
<name>InvalidIPv4LengthFieldSize</name>
<synopsis>Packet with total length field less than
20 bytes.</synopsis>
</specialValue>
<specialValue value="5">
<name>InvalidIPv4Checksum</name>
<synopsis>Packet with invalid checksum.</synopsis>
</specialValue>
<specialValue value="6">
<name>InvalidIPv4SrcAddr</name>
<synopsis>Packet with invalid source address.
</synopsis>
</specialValue>
<specialValue value="7">
<name>InvalidIPv4DstAddr</name>
<synopsis>Packet with invalid destination address.
</synopsis>
</specialValue>
<specialValue value="8">
<name>InvalidIPv6PacketSize</name>
<synopsis>Packet size reported is less than 40
bytes.</synopsis>
</specialValue>
<specialValue value="9">
<name>NotIPv6Packet</name>
<synopsis>Packet is not IP version 6.</synopsis>
</specialValue>
<specialValue value="10">
<name>InvalidIPv6SrcAddr</name>
<synopsis>Packet with invalid source address.
</synopsis>
</specialValue>
<specialValue value="11">
<name>InvalidIPv6DstAddr</name>
<synopsis>Packet with invalid destination address.
</synopsis>
</specialValue>
</specialValues>
</atomic>
</metadataDef>
<metadataDef>
<name>L3PortID</name>
<synopsis>ID of L3 port. See the definition in
IPv4NextHopInfoType.</synopsis>
<metadataID>13</metadataID>
<typeRef>uint32</typeRef>
</metadataDef>
<metadataDef>
<name>RedirectIndex</name>
<synopsis>Metadata that the CE sends to RedirectIn LFB for the
associated packet to select an output port in the LFB group
output "PktsOut".</synopsis>
<metadataID>14</metadataID>
<typeRef>uint32</typeRef>
</metadataDef>
<metadataDef>
<name>MediaEncapInfoIndex</name>
<synopsis>A search key the packet uses to look up a media
encapsulation table to select its encapsulation media as
well as the encapsulation LFB that follows.</synopsis>
<metadataID>15</metadataID>
<typeRef>uint32</typeRef>
</metadataDef>
</metadataDefs>
</LFBLibrary>

5. LFB Class Description

According to the ForCES specifications, an LFB (Logical Function
Block) is a well-defined, logically separable functional block that
resides in an FE and is a functionally accurate abstraction of the
FE's processing capabilities.  An LFB Class (or type) is a template
that represents a fine-grained, logically separable aspect of FE
processing.  Most LFBs are related to packet processing in the data
path.  LFB classes are the basic building blocks of the FE model.
Note that [RFC5810] has already defined an 'FE Protocol LFB', which
is a logical entity in each FE to control the ForCES protocol.
[RFC5812] has already defined an 'FE Object LFB'.  Information like
the FE Name, FE ID, FE State, and LFB Topology in the FE is
represented in this LFB.

As specified in Section 3.1, this document focuses on the base LFB
library for implementing typical router functions, especially for IP
forwarding functions.  As a result, LFB classes in the library are
all base LFBs to implement router forwarding.

In this section, the terms "upstream LFB" and "downstream LFB" are
used relative to the LFB that is being described.  An "upstream LFB"
is one whose output ports are connected to input ports of the LFB
under consideration, such that output (typically packets with
metadata) can be sent from the "upstream LFB" to the LFB under
consideration.  Similarly, a "downstream LFB" is one whose input
ports are connected to output ports of the LFB under consideration,
such that the LFB under consideration can send information to the
"downstream LFB".  Note that in some rare topologies, an LFB may be
both upstream and downstream relative to another LFB.

Also note that, as a default provision of [RFC5812], in the FE model
all metadata produced by upstream LFBs will pass through all
downstream LFBs by default, without being specified by input port or
output port.  Only the metadata that will be used (consumed) by an
LFB will be explicitly marked in the input of the LFB as expected
metadata.  For instance, in LFBs downstream of a physical layer LFB,
even if no specific metadata is expected, metadata like PHYPortID
produced by the physical layer LFB will always pass through all
downstream LFBs, regardless of whether the metadata has been
expected by those LFBs or not.

5.1. Ethernet Processing LFBs

As the most popular physical and data link layer technology,
Ethernet is widely deployed.  It becomes a basic requirement for a
router to be able to process various Ethernet data packets.

Note that there exist different versions of Ethernet formats, like
Ethernet V2, 802.3 RAW, IEEE 802.3/802.2, and IEEE 802.3/802.2 SNAP.
There also exist varieties of LAN techniques based on Ethernet, like
various VLANs, MACinMAC, etc.  Ethernet processing LFBs defined here
are intended to be able to cope with all these variations of
Ethernet technology.

There are also various types of Ethernet physical interface media.
Among them, copper and fiber media may be the most popular ones.  As
a base LFB definition and a starting point, the document only
defines an Ethernet physical LFB with copper media.  For other media
interfaces, specific LFBs may be defined in future versions of the
library.

5.1.1. EtherPHYCop

EtherPHYCop LFB abstracts an Ethernet interface physical layer with
media limited to copper.

5.1.1.1. Data Handling

This LFB is the interface to the Ethernet physical media.  The LFB
handles Ethernet frames coming in from or going out of the FE.
Ethernet frames sent and received cover all packets encapsulated
with different versions of Ethernet protocols, like Ethernet V2,
802.3 RAW, IEEE 802.3/802.2, and IEEE 802.3/802.2 SNAP, including
packets encapsulated with varieties of LAN techniques based on
Ethernet, like various VLANs, MACinMAC, etc.  Therefore, an
EthernetAll frame type has been introduced in the XML.

Ethernet frames are received from the physical media port and passed
downstream to LFBs such as EtherMACIn via a singleton output known
as "EtherPHYOut".  A 'PHYPortID' metadata, indicating which physical
port the frame came into from the external world, is passed along
with the frame.

Ethernet packets are received by this LFB from upstream LFBs such as
EtherMACOut LFBs via the singleton input known as "EtherPHYIn"
before being sent out to the external world.

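For readability, the fragment below is a simplified, non-normative
sketch of how the two ports described above could be declared using
the LFB class schema of [RFC5812].  The synopsis strings are
illustrative only; the authoritative declarations are those given by
the library XML of this document.

   <inputPorts>
     <inputPort>
       <name>EtherPHYIn</name>
       <synopsis>Ethernet frames to be sent to the media</synopsis>
       <expectation>
         <frameExpected>
           <ref>EthernetAll</ref>
         </frameExpected>
       </expectation>
     </inputPort>
   </inputPorts>
   <outputPorts>
     <outputPort>
       <name>EtherPHYOut</name>
       <synopsis>Received Ethernet frames plus PHYPortID</synopsis>
       <product>
         <frameProduced>
           <ref>EthernetAll</ref>
         </frameProduced>
         <metadataProduced>
           <ref>PHYPortID</ref>
         </metadataProduced>
       </product>
     </outputPort>
   </outputPorts>
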
5.1.1.2. Components

The AdminStatus component is defined for the CE to administratively
manage the status of the LFB.  The CE may administratively start up
or shut down the LFB by changing the value of AdminStatus.  The
default value is set to 'Down'.

An OperStatus component captures the physical port operational
status.  A PHYPortStatusChanged event is defined so the LFB can
report to the CE whenever there is an operational status change of
the physical port.

The PHYPortID component is a unique identification for a physical
port.  It is defined as 'read-only' to the CE.  Its value is
enumerated by the FE.  The component will be used to produce a
'PHYPortID' metadata at the LFB output and to associate it with
every Ethernet packet this LFB receives.  The metadata will be
handed to downstream LFBs so that they can use the PHYPortID.

A group of components are defined for link speed management.  The
AdminLinkSpeed is for the CE to configure the link speed for the
port, and the OperLinkSpeed is for the CE to query the actual link
speed in operation.  The default value for the AdminLinkSpeed is set
to auto-negotiation mode.

A group of components are defined for duplex mode management.  The
AdminDuplexMode is for the CE to configure a proper duplex mode for
the port, and the OperDuplexMode is for the CE to query the actual
duplex mode in operation.  The default value for the AdminDuplexMode
is set to auto-negotiation mode.

A CarrierStatus component captures the status of the carrier and
specifies whether the port link is operationally up.  The default
value for the CarrierStatus is 'false'.

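As a non-normative illustration of the component syntax of
[RFC5812], the CarrierStatus component described above could be
declared roughly as follows; the componentID and access values shown
here are placeholders and carry no normative meaning.

   <component access="read-only" componentID="10">
     <name>CarrierStatus</name>
     <synopsis>The carrier status of the port</synopsis>
     <typeRef>boolean</typeRef>
     <defaultValue>false</defaultValue>
   </component>
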
5.1.1.3. Capabilities

The capability information for this LFB includes the link speeds
that are supported by the FE (SupportedLinkSpeed) as well as the
supported duplex modes (SupportedDuplexMode).

5.1.1.4. Events

Several events are generated.  There is an event for changes in the
status of the physical port (PhyPortStatusChanged).  Such an event
will notify that the physical port status has been changed, and the
report will include the new status of the physical port.

Another event captures changes in the operational link speed
(LinkSpeedChanged).  Such an event will notify the CE that the
operational speed has been changed, and the report will include the
new negotiated operational speed.

A final event captures changes in the duplex mode
(DuplexModeChanged).  Such an event will notify the CE that the
duplex mode has been changed, and the report will include the new
negotiated duplex mode.

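Purely as a non-normative sketch of the event syntax of [RFC5812],
the PhyPortStatusChanged event could be declared along the following
lines; the baseID and eventID values are placeholders.

   <events baseID="60">
     <event eventID="1">
       <name>PhyPortStatusChanged</name>
       <synopsis>The physical port status has changed</synopsis>
       <eventTarget>
         <eventField>OperStatus</eventField>
       </eventTarget>
       <eventChanged/>
       <eventReports>
         <eventReport>
           <eventField>OperStatus</eventField>
         </eventReport>
       </eventReports>
     </event>
   </events>
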
5.1.2. EtherMACIn

EtherMACIn LFB abstracts an Ethernet port at the MAC data link
layer.  This LFB describes Ethernet processing functions like MAC
address locality check, deciding if the Ethernet packets should be
bridged, providing Ethernet layer flow control, etc.

5.1.2.1. Data Handling

The LFB is expected to receive all types of Ethernet packets, via a
singleton input known as "EtherPktsIn", which are usually output
from some Ethernet physical layer LFB, like an EtherPHYCop LFB,
along with a metadata indicating the physical port ID that the
packet arrived on.

The LFB is defined with two separate singleton outputs.  All output
packets are emitted in the original Ethernet format received at the
physical port, unchanged, and cover all Ethernet types.

The first singleton output is known as "NormalPathOut".  It usually
outputs Ethernet packets to some LFB, like an EtherClassifier LFB,
for further L3 forwarding processing, along with a PHYPortID
metadata indicating which physical port the packet came from.

The second singleton output is known as "L2BridgingPathOut".
Although the LFB library this document defines is basically to meet
typical router functions, it will attempt to be forward compatible
with future router functions.  The "L2BridgingPathOut" is defined to

skipping to change at page 43, line 49

bridging LFB instances following the L2BridgingPathOut, FEs are
expected to fulfill L2 bridging functions.  L2BridgingPathOut will
output packets exactly the same as those in the NormalPathOut
output.

This LFB can be set to work in a Promiscuous Mode, allowing all
packets to pass through the LFB without being dropped.  Otherwise, a
locality check will be performed based on the local MAC addresses.
All packets that do not pass the locality check will be dropped.

This LFB participates in Ethernet flow control in cooperation with
the EtherMACOut LFB.  This document does not go into the details of
how this is implemented; the reader may refer to the relevant
references.  This document also does not describe how the buffers
that induce the flow control messages behave; it is assumed that
such artifacts exist and that describing them is out of scope for
this document.

5.1.2.2. Components

The AdminStatus component is defined for the CE to administratively
manage the status of the LFB.  The CE may administratively start up
or shut down the LFB by changing the value of AdminStatus.  The
default value is set to 'Down'.

The LocalMACAddresses component specifies the local MAC addresses
based on which locality checks will be made.  This component is an
array of MAC addresses with 'read-write' access permission.

An L2BridgingPathEnable component captures whether the LFB is set to
work as an L2 bridge.  An FE that does not support bridging will
internally set this flag to false and additionally set the flag
property as read-only.  The default value is 'false'.

The PromiscuousMode component specifies whether the LFB is set to
work in a promiscuous mode.  The default value is 'false'.

The TxFlowControl component defines whether the LFB is performing
flow control on sending packets.  The default value is 'false'.

The RxFlowControl component defines whether the LFB is performing
flow control on receiving packets.  The default value is 'false'.

A struct component, MACInStats, defines a set of statistics for this
LFB, including the number of received packets and the number of
dropped packets.

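For illustration only, the LocalMACAddresses component described
above could be expressed as a variable-size array of IEEEMAC entries
in the [RFC5812] syntax; the componentID below is a placeholder.

   <component access="read-write" componentID="2">
     <name>LocalMACAddresses</name>
     <synopsis>Local MAC addresses used for locality checks
     </synopsis>
     <array type="variable-size">
       <typeRef>IEEEMAC</typeRef>
     </array>
   </component>
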
5.1.2.3. Capabilities

This LFB does not have a list of capabilities.

5.1.2.4. Events

This LFB does not have any events specified.

5.1.3. EtherClassifier

EtherClassifier LFB abstracts the process to decapsulate Ethernet
packets and then classify them.

5.1.3.1. Data Handling

This LFB describes the process of decapsulating Ethernet packets and
classifying them into various network layer data packets according
to information included in the Ethernet packet headers.

The LFB is expected to receive all types of Ethernet packets, via a
singleton input known as "EtherPktsIn", which are usually output
from an upstream LFB like EtherMACIn LFB.  This input is also
capable of multiplexing to allow for multiple upstream LFBs to be
connected.  For instance, when the L2 bridging function is enabled
in EtherMACIn LFB, some L2 bridging LFBs may be applied.  In this
case, some Ethernet packets, after L2 processing, may have to be
input to EtherClassifier LFB for classification, while
simultaneously packets directly output from EtherMACIn may also need
to be input to this LFB.  This input is capable of handling such a
case.  Usually, all expected Ethernet packets will be associated
with a PHYPortID metadata, indicating the physical port the packet
comes from.  In some cases, for instance, like in a MACinMAC case, a
LogicalPortID metadata may be expected to be associated with the
Ethernet packet to further indicate which logical port the Ethernet
packet belongs to.  Note that PHYPortID metadata is always expected,
while LogicalPortID metadata is optionally expected.

Two output LFB ports are defined.

The first output is a group output port known as "ClassifyOut".
Types of network layer protocol packets are output to instances of
the port group.  Because there may be various types of protocol
packets at the output ports, the produced output frame is defined as
arbitrary for the purpose of wide extensibility in the future.
Metadata to be carried along with the packet data is produced at
this LFB for consumption by downstream LFBs.  The metadata passed
downstream includes PHYPortID, as well as information on the
Ethernet type, source MAC address, destination MAC address, and the
logical port ID.  If the original packet is a VLAN packet and
contains a VLAN ID and a VLAN priority value, then the VLAN ID and
the VLAN priority value are also carried downstream as metadata.  As
a result, the VLAN ID and priority metadata are defined with the
availability of "conditional".

The second output is a singleton output port known as
"ExceptionOut", which will output packets for which the data
processing failed, along with an additional ExceptionID metadata to
indicate what caused the exception.  Currently defined exception
types include:

o There is no matching when classifying the packet.

Usually the exception out port may point to nowhere, indicating that
packets with exceptions are dropped, while in some cases the output
may be pointed to the path to the CE for further processing,
depending on individual implementations.

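The following non-normative fragment sketches how the "ClassifyOut"
group output port and its conditional VLAN metadata could look in
the [RFC5812] syntax.  It is included only to illustrate the
"conditional" availability discussed above; the "Arbitrary" frame
reference and the set of metadata shown are indicative rather than
exhaustive.

   <outputPort group="true">
     <name>ClassifyOut</name>
     <synopsis>Classified network layer packets</synopsis>
     <product>
       <frameProduced>
         <ref>Arbitrary</ref>
       </frameProduced>
       <metadataProduced>
         <ref>PHYPortID</ref>
         <ref>EtherType</ref>
         <ref availability="conditional">VlanID</ref>
         <ref availability="conditional">VlanPriority</ref>
         <!-- further metadata (MAC addresses, logical port ID)
              omitted from this sketch -->
       </metadataProduced>
     </product>
   </outputPort>
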
5.1.3.2.  Components
An EtherDispatchTable array component is defined in the LFB to
dispatch every Ethernet packet to the output group according to the
logical port ID assigned by the VlanInputTable to the packet and the
Ethernet type in the Ethernet packet header.  Each row of the array
is a struct containing a Logical Port ID, an EtherType, and an Output
Index.  With the CE configuring the dispatch table, the LFB can be
expected to classify various network layer protocol type packets and
output them at different output ports.  It is expected that the LFB
classify packets according to protocols like IPv4, IPv6, MPLS, ARP,
ND, etc.
A VlanInputTable array component is defined in the LFB to classify
VLAN Ethernet packets.  Each row of the array is a struct containing
an Incoming Port ID, a VLAN ID, and a Logical Port ID.  According to
IEEE VLAN specifications, all Ethernet packets can be recognized as
VLAN types by defining that, if there is no VLAN encapsulation in a
packet, a case with VLAN tag 0 is considered.  Every input packet is
assigned a new LogicalPortID according to the packet incoming port ID
and the VLAN ID.  A packet incoming port ID is defined as a logical
port ID if a logical port ID is associated with the packet, or a
physical port ID if no logical port ID is associated.  The VLAN ID is
exactly the VLAN ID in the packet if it is a VLAN packet, or 0 if it
is not.  Note that the logical port ID of a packet may be rewritten
with a new one by the VlanInputTable processing.
Note that the logical port IDs and physical port IDs mentioned above
are all originally configured by the CE, and are globally effective
within a ForCES NE (Network Element).  To distinguish a physical port
ID from a logical port ID in the incoming port ID field of the
VlanInputTable, physical port IDs and logical port IDs must be
assigned from separate number spaces.
An array component, EtherClassifyStats, defines a set of statistics
for this LFB, measuring the number of packets per EtherType.  Each
row of the array is a struct containing an EtherType and a packet
number.
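The interaction between the VlanInputTable and the EtherDispatchTable
can be illustrated with the following non-normative C sketch.  It is
not part of the LFB XML definitions; all type and function names
(vlan_input_row, ether_dispatch_row, classify_ethernet, and so on)
are hypothetical, and a real implementation would normally use hashed
or indexed lookups rather than the linear scans shown here.

   #include <stdint.h>
   #include <stddef.h>

   /* Hypothetical row of the VlanInputTable: maps (incoming port ID,
    * VLAN ID) to a logical port ID.  Untagged frames use VLAN ID 0. */
   struct vlan_input_row {
       uint32_t incoming_port_id;   /* physical or logical port ID */
       uint16_t vlan_id;
       uint32_t logical_port_id;
   };

   /* Hypothetical row of the EtherDispatchTable: maps (logical port
    * ID, EtherType) to an index within the "ClassifyOut" port group. */
   struct ether_dispatch_row {
       uint32_t logical_port_id;
       uint16_t ether_type;
       uint32_t output_index;
   };

   /* Returns the ClassifyOut group index, or -1 for the "no match"
    * exception delivered on the ExceptionOut port. */
   int classify_ethernet(const struct vlan_input_row *vlan_tbl,
                         size_t vlan_n,
                         const struct ether_dispatch_row *disp_tbl,
                         size_t disp_n,
                         uint32_t incoming_port_id, uint16_t vlan_id,
                         uint16_t ether_type, uint32_t *logical_port_id)
   {
       size_t i;

       /* Step 1: assign (possibly rewrite) the logical port ID. */
       for (i = 0; i < vlan_n; i++) {
           if (vlan_tbl[i].incoming_port_id == incoming_port_id &&
               vlan_tbl[i].vlan_id == vlan_id) {
               *logical_port_id = vlan_tbl[i].logical_port_id;
               break;
           }
       }
       if (i == vlan_n)
           return -1;

       /* Step 2: dispatch on (logical port ID, EtherType). */
       for (i = 0; i < disp_n; i++) {
           if (disp_tbl[i].logical_port_id == *logical_port_id &&
               disp_tbl[i].ether_type == ether_type)
               return (int)disp_tbl[i].output_index;
       }
       return -1;
   }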
5.1.3.3.  Capabilities

This LFB does not have a list of capabilities.
5.1.3.4.  Events

This LFB has no events specified.
5.1.4.  EtherEncap
The EtherEncap LFB abstracts the process of replacing or attaching
appropriate Ethernet headers to the packet.
5.1.4.1.  Data Handling
This LFB abstracts the process of encapsulating Ethernet headers onto
received packets.  The encapsulation is based on passed metadata.

The LFB is expected to receive IPv4 and IPv6 packets, via a singleton
input port known as "EncapIn", which may be connected to an upstream
LFB like an IPv4NextHop, an IPv6NextHop, a BasicMetadataDispatch, or
any LFB that needs to output packets for Ethernet encapsulation.  The
LFB always expects from upstream LFBs the MediaEncapInfoIndex
metadata, which is used as a search key to look up the Encapsulation
Table.  An input packet may also optionally be accompanied by a VLAN
priority metadata, indicating that the packet originally carries a
priority value.  The priority value will be loaded back into the
packet when encapsulating.  The optional VLAN priority metadata is
defined with a default value of 0.
Two singleton output LFB ports are defined.
The first singleton output is known as "SuccessOut".  Upon a
successful table lookup, the destination and source MAC addresses and
the logical media port (L2PortID) are found in the matching table
entry.  The CE may set the VlanID in case VLANs are used.  By
default, the table entry for VlanID 0 is used, as per IEEE rules.
Whatever the value of VlanID, if the input metadata VlanPriority is
non-zero, the packet will have a VLAN tag.  If the VlanPriority and
the VlanID are both zero, no VLAN tag is added to the packet.  After
replacing or attaching the appropriate Ethernet headers to the packet
is complete, the packet is passed out on the "SuccessOut" LFB port to
a downstream LFB instance, along with the L2PortID.
The second singleton output is known as "ExceptionOut"; it will
output packets for which the table lookup fails, along with an
additional ExceptionID metadata.  Currently defined exception types
include the following cases:

o  The MediaEncapInfoIndex value of the packet is invalid and cannot
   be allocated in the EncapTable.

o  The packet failed lookup of the EncapTable even though the
   MediaEncapInfoIndex is valid.
The upstream LFB may be programmed by the CE to pass along a
MediaEncapInfoIndex that does not exist in the EncapTable.  This
allows resolution of the L2 headers, if needed, to be made at the L2
encapsulation level, in this case (Ethernet) via ARP or ND (or other
methods, depending on the link layer technology), when a table miss
occurs.
For neighbor L2 header resolution (table miss exception), the
processing LFB may pass the packet to the CE via the redirect LFB, to
FE software, or to another LFB instance for further resolution.  In
such a case, the metadata NexthopIPv4Addr or NexthopIPv6Addr
generated by the Nexthop LFB is also passed to the exception handler.
Such an IP address could be used for activities such as ARP or ND by
the handler it is passed to.
The result of the L2 resolution is to update the EncapTable as well
as the Nexthop LFB so that subsequent packets do not fail the
EncapTable lookup.  The EtherEncap LFB does not make any assumptions
about how the EncapTable is updated by the CE (or whether ARP/ND is
used dynamically or static maps exist).
Downstream LFB instances could be either an EtherMACOut type or a
BasicMetadataDispatch type.  If the final packet L2 processing may
take place on a per-media-port basis, or resides on a different FE,
or L2 header resolution is needed, then it makes sense in the model
to use a BasicMetadataDispatch LFB to fan out to different LFB
instances.  If there is a direct egress port point, then it makes
sense in the model to have the downstream LFB instance be an
EtherMACOut.
5.1.4.2.  Components
This LFB has only one component, named EncapTable, which is defined
as an array.  Each row of the array is a struct containing the
destination MAC address, the source MAC address, the VLAN ID (with a
default value of zero), and the output logical L2 port ID.
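As a non-normative illustration of the behavior described above, the
following C sketch shows a possible in-memory form of an EncapTable
row, the index-based lookup keyed by MediaEncapInfoIndex, and the
VLAN tagging rule (a tag is present whenever the entry's VLAN ID or
the VlanPriority metadata is non-zero).  All names are hypothetical
and are not taken from the LFB XML definitions.

   #include <stddef.h>
   #include <stdint.h>
   #include <stdbool.h>

   /* Hypothetical row of the EncapTable described above. */
   struct encap_table_row {
       uint8_t  dst_mac[6];
       uint8_t  src_mac[6];
       uint16_t vlan_id;         /* default 0 */
       uint32_t l2_port_id;      /* output logical L2 port ID */
   };

   /* A VLAN tag is added whenever either the entry's VlanID or the
    * VlanPriority metadata received with the packet is non-zero. */
   static bool needs_vlan_tag(const struct encap_table_row *row,
                              uint8_t vlan_priority)
   {
       return row->vlan_id != 0 || vlan_priority != 0;
   }

   /* MediaEncapInfoIndex is used directly as the search key (here an
    * array index).  A miss is reported so the caller can emit the
    * packet on "ExceptionOut", e.g. for ARP/ND-style resolution. */
   const struct encap_table_row *
   encap_lookup(const struct encap_table_row *tbl, size_t tbl_size,
                uint32_t media_encap_info_index)
   {
       if (media_encap_info_index >= tbl_size)
           return NULL;
       return &tbl[media_encap_info_index];
   }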
5.1.4.3.  Capabilities
5.1.5.  EtherMACOut
The EtherMACOut LFB abstracts an Ethernet port at the MAC data link
layer.  This LFB describes the Ethernet packet output process.
Ethernet output functions are closely related to Ethernet input
functions; therefore, many components defined in this LFB are aliases
of EtherMACIn LFB components.
5.1.5.1.  Data Handling
The LFB is expected to receive all types of Ethernet packets, via a
singleton input known as "EtherPktsIn", which are usually output from
an Ethernet encapsulation LFB, along with a metadata indicating the
physical port ID that the packet will go through.
The LFB is defined with a singleton output.  All output packets are
in Ethernet format, possibly with various Ethernet types, along with
a metadata indicating the physical port ID the packet is to go
through.  This output links to a downstream LFB that is usually an
Ethernet physical LFB like the EtherPHYcop LFB.
This LFB participates in Ethernet flow control in cooperation with
the EtherMACIn LFB.  This document does not go into the details of
how this is implemented; the reader may refer to relevant references,
such as the IEEE 802.3 specification of MAC flow control.  This
document also does not describe how the buffers that induce the flow
control messages behave; it is assumed that such artifacts exist, and
describing them is out of scope for this document.
Note that, as a base definition, functions like multiple virtual MAC
layers are not supported in this LFB version.  They may be supported
in the future by defining a subclass or a new version of this LFB.
5.1.5.2.  Components
The AdminStatus component is defined for the CE to administratively
manage the status of the LFB.  The CE may administratively start up
or shut down the LFB by changing the value of AdminStatus.  The
default value is set to 'Down'.  Note that this component is defined
as an alias of the AdminStatus component in the EtherMACIn LFB.  This
implies that an EtherMACOut LFB usually coexists with an EtherMACIn
LFB, both of which share the same administrative status management by
the CE.  Alias properties, as defined in the ForCES FE model
[RFC5812], will be used by the CE to declare the target component
this alias refers to, which includes the target LFB class and
instance IDs as well as the path to the target component.
The MTU component defines the maximum transmission unit.
The TxFlowControl component defines whether the LFB is performing
flow control on sending packets.  The default value is 'false'.  Note
that this component is defined as an alias of the TxFlowControl
component in the EtherMACIn LFB.
The RxFlowControl component defines whether the LFB is performing
flow control on receiving packets.  The default value is 'false'.
Note that this component is defined as an alias of the RxFlowControl
component in the EtherMACIn LFB.
5.2.  IP Packet Validation LFBs
These LFBs are defined to abstract the IP packet validation process.
An IPv4Validator LFB is specifically for IPv4 protocol validation and
an IPv6Validator LFB for IPv6.
5.2.1.  IPv4Validator
The IPv4Validator LFB performs IPv4 packet validation according to
[RFC1812].
5.2.1.1.  Data Handling
This LFB performs IPv4 validation according to [RFC1812].  The IPv4
packet will then be output to the corresponding LFB port, indicating
whether the packet is unicast or multicast, whether an exception has
occurred, or whether the validation failed.
This LFB always expects, as input, packets which have been indicated
as IPv4 packets by an upstream LFB, like an EtherClassifier LFB.
There is no specific metadata expected by the input of the LFB.
Four output LFB ports are defined.
All validated IPv4 unicast packets will be output at the singleton
port known as "IPv4UnicastOut".  All validated IPv4 multicast packets
will be output at the singleton port known as "IPv4MulticastOut".
A singleton port known as "ExceptionOut" is defined to output packets
which have been validated as exception packets.  An exception ID
metadata is produced to indicate what has caused the exception.  An
exception case is one in which a packet needs further processing
before being normally forwarded.  Currently defined exception types
include:

o  Packet with expired TTL

o  Packet with header length more than 5 words

o  Packet whose IP header includes Router Alert options

o  Packet with an exceptional source address

o  Packet with an exceptional destination address

Note that although the TTL is checked in this LFB for validity,
operations like the TTL decrement are made by the downstream
forwarding LFB.
The final singleton port, known as "FailOut", is defined for all
packets which have errors and have failed the validation process.  An
error case is one in which a packet cannot be further processed or
forwarded and can only be dropped.  An error ID is associated with
every failed packet to indicate the failure reason.  Currently
defined failure reasons include the following (a non-normative sketch
of these checks appears at the end of this subsection):

o  Packet with a reported size of less than 20 bytes

o  Packet whose version is not IPv4

o  Packet with a header length of less than 5 words

o  Packet with a total length field of less than 20 bytes

o  Packet with an invalid checksum

o  Packet with an invalid source address

o  Packet with an invalid destination address
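The following non-normative C sketch illustrates one possible
ordering of the failure and exception checks listed above.  The
normative behavior is given by the lists in this subsection and by
[RFC1812]; the names used here are hypothetical, and the address
classification checks are only indicated by comments.

   #include <stdint.h>
   #include <stddef.h>

   enum v4_verdict { V4_OK, V4_FAIL, V4_EXCEPTION };

   /* One's-complement sum over 16-bit words of the IPv4 header; a
    * valid header (checksum field included) sums to 0xffff. */
   static uint16_t ipv4_header_sum(const uint8_t *hdr, size_t words)
   {
       uint32_t sum = 0;
       size_t i;
       for (i = 0; i < words; i++)
           sum += (uint16_t)((hdr[2 * i] << 8) | hdr[2 * i + 1]);
       while (sum > 0xffff)
           sum = (sum & 0xffff) + (sum >> 16);
       return (uint16_t)sum;
   }

   enum v4_verdict ipv4_validate(const uint8_t *hdr, size_t pkt_len)
   {
       if (pkt_len < 20)
           return V4_FAIL;              /* reported size < 20 bytes */
       if ((hdr[0] >> 4) != 4)
           return V4_FAIL;              /* version is not IPv4 */
       uint8_t ihl = hdr[0] & 0x0f;     /* header length in words */
       if (ihl < 5)
           return V4_FAIL;              /* header length < 5 words */
       if (pkt_len < 4u * ihl)
           return V4_FAIL;              /* header does not fit */
       uint16_t total_len = (uint16_t)((hdr[2] << 8) | hdr[3]);
       if (total_len < 20)
           return V4_FAIL;              /* total length field < 20 */
       if (ipv4_header_sum(hdr, 2u * ihl) != 0xffff)
           return V4_FAIL;              /* invalid checksum */
       /* Invalid vs. exceptional source/destination address checks
        * would go here, per the lists above. */
       if (hdr[8] == 0)
           return V4_EXCEPTION;         /* expired TTL */
       if (ihl > 5)
           return V4_EXCEPTION;         /* options, e.g. Router Alert */
       return V4_OK;
   }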
5.2.1.2.  Components
This LFB has only one struct component, the
IPv4ValidatorStatisticsType, which defines a set of statistics for
the validation process, including the number of bad header packets,
the number of bad total length packets, the number of bad TTL
packets, and the number of bad checksum packets.
5.2.1.3.  Capabilities

This LFB does not have a list of capabilities.

5.2.1.4.  Events

This LFB does not have any events specified.

5.2.2.  IPv6Validator
The IPv6Validator LFB performs IPv6 packet validation according to
[RFC2460].
5.2.2.1.  Data Handling
This LFB performs IPv6 validation according to [RFC2460].  The IPv6
packet will then be output to the corresponding port according to the
validation result: whether the packet is unicast or multicast,
whether an exception has occurred, or whether the validation failed.
This LFB always expects, as input, packets which have been indicated
as IPv6 packets by an upstream LFB, like an EtherClassifier LFB.
There is no specific metadata expected by the input of the LFB.
Similar to the IPv4Validator LFB, the IPv6Validator LFB also defines
four output ports to emit packets with various validation results.
All validated IPv6 unicast packets will be output at the singleton
port known as "IPv6UnicastOut".  All validated IPv6 multicast packets
will be output at the singleton port known as "IPv6MulticastOut".
There is no metadata specifically produced at these output ports.
A singleton port known as "ExceptionOut" is defined to output packets
which have been validated as exception packets.  An exception case is
one in which a packet needs further processing before being normally
forwarded.  An exception ID metadata is produced to indicate what
caused the exception.  Currently defined exception types include:

o  Packet with a hop limit of zero

o  Packet with the next header set to Hop-by-Hop

o  Packet with an exceptional source address

o  Packet with an exceptional destination address
The final singleton port, known as "FailOut", is defined for all
packets which have errors and have failed the validation process.  An
error case is one in which a packet cannot be further processed or
forwarded and can only be dropped.  A validate error ID is associated
with every failed packet to indicate the reason.  Currently defined
reasons include the following (a non-normative sketch of these checks
appears at the end of this subsection):

o  Packet with a reported size of less than 40 bytes

o  Packet whose version is not IPv6

o  Packet with an invalid source address

o  Packet with an invalid destination address
Note that in the base type library, the definitions for the exception
ID and validate error ID metadata are applied to both the
IPv4Validator and IPv6Validator LFBs, i.e., the two LFBs share the
same metadata definition, with different ID assignments inside.
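Analogously to the IPv4 case, the following non-normative C sketch
indicates how the IPv6 checks listed above might be ordered.  The
normative behavior is given by the lists in this subsection and by
[RFC2460]; all names are hypothetical, and the address checks are
only indicated by comments.

   #include <stdint.h>
   #include <stddef.h>

   enum v6_verdict { V6_OK, V6_FAIL, V6_EXCEPTION };

   enum v6_verdict ipv6_validate(const uint8_t *hdr, size_t pkt_len)
   {
       if (pkt_len < 40)
           return V6_FAIL;              /* reported size < 40 bytes */
       if ((hdr[0] >> 4) != 6)
           return V6_FAIL;              /* version is not IPv6 */
       /* Invalid vs. exceptional source/destination address checks
        * would go here, per the lists above. */
       uint8_t next_header = hdr[6];
       uint8_t hop_limit   = hdr[7];
       if (hop_limit == 0)
           return V6_EXCEPTION;         /* hop limit is zero */
       if (next_header == 0)
           return V6_EXCEPTION;         /* Hop-by-Hop Options header */
       return V6_OK;
   }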
5.2.2.2.  Components
This LFB has only one struct component, the
IPv6ValidatorStatisticsType, which defines a set of statistics for
the validation process.
5.2.2.3.  Capabilities

This LFB does not have a list of capabilities.

5.2.2.4.  Events

This LFB does not have any events specified.
5.3.  IP Forwarding LFBs
IP Forwarding LFBs are specifically defined to abstract the IP
forwarding processes.  As definitions for a base LFB library, this
document restricts its LFB definition scope only to IP unicast
forwarding.  IP multicast may be defined in future documents.
A typical IP unicast forwarding job is usually realized by looking up
the forwarding information table to find next hop information, and
then, based on the next hop information, forwarding packets to
specific physical output ports.  It usually takes two steps: first,
look up a forwarding information table by means of the Longest Prefix
Matching (LPM) rule to find a next hop index; then, use the index as
a search key to look up a next hop information table to find enough
information to submit packets to output ports.  This document
abstracts the forwarding process mainly based on this two-step model.
However, other models exist, such as one with a single forwarding
information base that conjoins next hop information with forwarding
information.  In that case, if ForCES technology is to be applied,
some translation work will have to be done in the FE to translate
attributes defined by this document into attributes related to the
implementation.
Based on the IP forwarding abstraction, two kinds of typical IP
unicast forwarding LFBs are defined: a unicast LPM lookup LFB and a
next hop application LFB.  They are further distinguished by the IPv4
and IPv6 protocols.
5.3.1.  IPv4UcastLPM
The IPv4UcastLPM LFB abstracts the IPv4 unicast Longest Prefix Match
(LPM) process.
This LFB also provides facilities to support users to implement
equal-cost multi-path routing (ECMP) or reverse path forwarding
(RPF).  However, this LFB itself does not provide ECMP or RPF.  To
fully implement ECMP or RPF, additional specific LFBs, like a
specific ECMP LFB or an RPF LFB, will have to be defined.  This work
may be done in a future version of this document.
5.3.1.1.  Data Handling
This LFB performs the IPv4 unicast LPM table lookup.  It always
expects as input IPv4 unicast packets from one singleton input known
as "PktsIn".  The LFB then uses the destination IPv4 address of every
packet as a search key to look up the IPv4 prefix table and generate
a hop selector as the matching result.  The hop selector is passed as
packet metadata to downstream LFBs and will usually be used there as
a search index to find more next hop information.
Three singleton output LFB ports are defined.
The first singleton output, known as "NormalOut", outputs IPv4
unicast packets that succeeded the LPM lookup and obtained a hop
selector.  The hop selector is associated with the packet as
metadata.  Downstream from the LPM LFB is usually a next hop
application LFB, like an IPv4NextHop LFB.
The second singleton output, known as "ECMPOut", is defined to
provide support for users wishing to implement ECMP.
An ECMP flag is defined in the LPM table to enable the LFB to support
ECMP.  When a table entry is created with the flag set to true, it
indicates that this table entry is for ECMP only.  A packet that has
matched such a prefix entry will always be output from the "ECMPOut"
output port, with the hop selector being its lookup result.  The
output will usually go directly to a downstream ECMP processing LFB,
where the hop selector can be used with ECMP algorithms to further
generate one or more optimized next hop routes.
A default route flag is defined in the LPM table to enable the LFB to
support a default route as well as loose RPF.  When this flag is set
to true, the table entry is identified as a default route, which also
implies that the route is forbidden for RPF.  If a user wants to
implement RPF on an FE, a specific RPF LFB will have to be defined;
in such an RPF LFB, a component can be defined as an alias of the
prefix table component of this LFB, as described below.
The final singleton output is known as "ExceptionOut" and is defined
to output exception packets, along with an ExceptionID metadata to
indicate what caused the exception.  Currently defined exception
types include:

o  The packet failed the LPM lookup of the prefix table.
The upstream LFB of this LFB is usually an IPv4Validator LFB.  If RPF
is to be adopted, the upstream can be an RPF LFB, when defined.

The downstream LFB is usually an IPv4NextHop LFB.  If ECMP is
adopted, the downstream can be an ECMP LFB, when defined.
5.3.1.2.  Components

This LFB has two components.
The IPv4PrefixTable component is defined as an array component of the
LFB.  Each row of the array contains an IPv4 address, a prefix
length, a hop selector, an ECMP flag, and a default route flag.  The
LFB uses the destination IPv4 address of every input packet as a
search key to look up this table in order to extract a next hop
selector.  The ECMP flag is for the LFB to support ECMP.  The default
route flag is for the LFB to support a default route and loose RPF.
The IPv4UcastLPMStats component is a struct component which collects
statistics information, including the total number of input packets
received, the number of IPv4 packets forwarded by this LFB, and the
number of IP datagrams discarded due to no route being found.
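The following non-normative C sketch shows one possible
representation of an IPv4PrefixTable row and the selection of the
output LFB port based on the lookup result and the ECMP flag.  The
linear scan is for clarity only; actual FEs would typically use a
trie or similar LPM structure.  All names are hypothetical.

   #include <stdint.h>
   #include <stddef.h>
   #include <stdbool.h>

   /* Hypothetical row of IPv4PrefixTable.  A default route is
    * typically installed with prefix_len 0 and default_route_flag
    * set, so it matches only when no more specific prefix does. */
   struct ipv4_prefix_row {
       uint32_t prefix;          /* IPv4 address, host byte order */
       uint8_t  prefix_len;      /* 0..32 */
       uint32_t hop_selector;
       bool     ecmp_flag;
       bool     default_route_flag;
   };

   enum lpm_out { LPM_NORMAL_OUT, LPM_ECMP_OUT, LPM_EXCEPTION_OUT };

   /* Returns the output port the packet takes and, on a hit, fills
    * in the hop selector metadata passed downstream. */
   enum lpm_out ipv4_lpm_lookup(const struct ipv4_prefix_row *tbl,
                                size_t n, uint32_t dst_addr,
                                uint32_t *hop_selector_out)
   {
       const struct ipv4_prefix_row *best = NULL;
       for (size_t i = 0; i < n; i++) {
           uint32_t mask = tbl[i].prefix_len == 0 ?
                           0 : 0xffffffffu << (32 - tbl[i].prefix_len);
           if ((dst_addr & mask) != (tbl[i].prefix & mask))
               continue;
           if (best == NULL || tbl[i].prefix_len > best->prefix_len)
               best = &tbl[i];
       }
       if (best == NULL)
           return LPM_EXCEPTION_OUT;       /* no route: exception */
       *hop_selector_out = best->hop_selector;
       return best->ecmp_flag ? LPM_ECMP_OUT : LPM_NORMAL_OUT;
   }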
5.3.1.3.  Capabilities

This LFB does not have a list of capabilities.
5.3.1.4.  Events

This LFB does not have any events specified.
5.3.2.  IPv4NextHop

This LFB abstracts the process of selecting the IPv4 next hop action.

5.3.2.1.  Data Handling
The LFB abstracts the process of applying next hop information to
IPv4 packets.  It receives an IPv4 packet with an associated next hop
identifier (HopSelector) and uses the identifier as a table index to
look up a next hop table to find an appropriate LFB output port.

The LFB is expected to receive unicast IPv4 packets, via a singleton
input known as "PcktsIn", along with a HopSelector metadata which is
used as a table index to look up the NextHop table.  The data
processing involves the forwarding TTL decrement and IP checksum
recalculation.
Two output LFB ports are defined.
The first output is a group output port known as "SuccessOut".  On
successful data processing, the packet is sent out of an LFB port
from within the LFB port group, as selected by the
LFBOutputSelectIndex value of the matched table entry.  The packet is
sent to a downstream LFB along with the L3PortID and
MediaEncapInfoIndex metadata.
The second output is a singleton output port known as "ExceptionOut",
which will output packets for which the data processing failed, along
with an additional ExceptionID metadata to indicate what caused the
exception.  Currently defined exception types include:

o  The HopSelector for the packet is invalid.

o  The packet failed lookup of the NextHop table even though the
   HopSelector is valid.

o  The MTU for the outgoing interface is less than the packet size.
Downstream LFB instances could be either a BasicMetadataDispatch type
(Section 5.5.1), used to fan out to different LFB instances, or a
media encapsulation related type, such as an EtherEncap type or a
RedirectOut type (Section 5.4.2).  For example, if there are Ethernet
and other tunnel encapsulations, then a BasicMetadataDispatch LFB can
use the L3PortID metadata (Section 5.3.2.2) to dispatch packets to
the different encapsulators.
5.3.2.2.  Components
This LFB has only one component, named IPv4NextHopTable, which is
defined as an array.  The received HopSelector is used as the array
index of the IPv4NextHopTable to locate the row of the table that
holds the next hop information result.  Each row of the array is a
struct containing the following fields (a non-normative sketch of
such a row and of the per-packet processing appears after this list):
o  The L3PortID, which is the ID of the logical output port that is
   passed on to the downstream LFB instance.  This ID indicates the
   port toward the neighbor as defined by L3.  Usually, this ID is
   used by the NextHop LFB to distinguish packets that need different
   L2 encapsulation.  For instance, some packets may require general
   Ethernet encapsulation while others may require various types of
   tunnel encapsulation.  In such a case, different L3PortIDs are
   assigned to the packets and are passed as metadata to the
   downstream LFB.  A BasicMetadataDispatch LFB (Section 5.5.1) may
   have to be applied as the downstream LFB so as to dispatch packets
   to different encapsulation LFB instances according to the
   L3PortIDs.
o  MTU, the Maximum Transmission Unit for the outgoing port.

o  NextHopIPAddr, the IPv4 next hop address.
o  MediaEncapInfoIndex, the index we pass on to the downstream
   encapsulation LFB instance, where it is used as a search key to
   look up a table (typically media encapsulation related) for
   further encapsulation information.  Note that an encapsulation LFB
   instance may not directly follow the NextHop LFB, but the index is
   passed along as associated metadata, so that an encapsulation LFB
   instance even further downstream of the NextHop LFB can still use
   the index.  In some cases, depending on implementation, the CE may
   set the MediaEncapInfoIndex passed downstream to a value that will
   fail lookup when it gets to a target encapsulation LFB; such a
   lookup failure at that point is an indication that further
   resolution is needed.  For an example of this approach, refer to
   Section 7.2, which discusses ARP and mentions this approach.
o  LFBOutputSelectIndex, the LFB group output port index used to
   select the downstream LFB port.  It maps 1-to-1 to the
   FromPortIndex component of the FEObject LFB's LFBTopology table
   (see [RFC5812]), for the topology entry whose FromLFBID refers to
   this IPv4NextHop LFB instance.
5.3.2.3.  Capabilities

This LFB does not have a list of capabilities.

5.3.2.4.  Events

This LFB does not have any events specified.

5.3.3.  IPv6UcastLPM
The IPv6UcastLPM LFB abstracts the IPv6 unicast Longest Prefix Match
(LPM) process.
This LFB also provides facilities to support users to implement
equal-cost multi-path routing (ECMP) or reverse path forwarding
(RPF).  However, this LFB itself does not provide ECMP or RPF.  To
fully implement ECMP or RPF, additional specific LFBs, like a
specific ECMP LFB or an RPF LFB, will have to be defined.  This work
may be done in a future version of this document.
5.3.3.1.  Data Handling
This LFB performs the IPv6 unicast LPM table lookup.  It always
expects as input IPv6 unicast packets from one singleton input known
as "PktsIn".  The destination IPv6 address of an incoming packet is
used as a search key to look up the IPv6 prefix table and generate a
hop selector.  This hop selector result is associated with the packet
as metadata and sent to downstream LFBs, where it will usually be
used as a search key to find more next hop information.
Three singleton output LFB ports are defined.
The first singleton output, known as "NormalOut", outputs IPv6
unicast packets that succeeded the LPM lookup and obtained a hop
selector.  The hop selector is associated with the packet as
metadata.  Downstream from the LPM LFB is usually a next hop
application LFB, like an IPv6NextHop LFB.
The second singleton output, known as "ECMPOut", is defined to
provide support for users wishing to implement ECMP.
An ECMP flag is defined in the LPM table to enable the LFB to support
ECMP.  When a table entry is created with the flag set to true, it
indicates that this table entry is for ECMP only.  A packet that has
matched such a prefix entry will always be output from the "ECMPOut"
output port, with the hop selector being its lookup result.  The
output will usually go directly to a downstream ECMP processing LFB,
where the hop selector can be used with ECMP algorithms to further
generate one or more optimized next hop routes.
A default route flag is defined in the LPM table to enable the LFB to
support a default route as well as loose RPF.  When this flag is set
to true, the table entry is identified as a default route, which also
implies that the route is forbidden for RPF.

If a user wants to implement RPF on an FE, a specific RPF LFB will
have to be defined.  In such an RPF LFB, a component can be defined
as an alias of the prefix table component of this LFB, as described
below.
The final singleton output is known as "ExceptionOut" and is defined
to output exception packets, along with an ExceptionID metadata to
indicate what caused the exception.  Currently defined exception
types include:

o  The packet failed the LPM lookup of the prefix table.
The upstream LFB of this LFB is usually an IPv6Validator LFB.  If RPF
is to be adopted, the upstream can be an RPF LFB, when defined.

The downstream LFB is usually an IPv6NextHop LFB.  If ECMP is
adopted, the downstream can be an ECMP LFB, when defined.
5.3.3.2.  Components

This LFB has two components.
The IPv6PrefixTable component is defined as an array component of the
LFB.  Each row of the array contains an IPv6 address, a prefix
length, a hop selector, an ECMP flag, and a default route flag.  The
ECMP flag is so the LFB can support ECMP.  The default route flag is
for the LFB to support a default route and loose RPF, as described
earlier.
The IPv6UcastLPMStats component is a struct component which collects
statistics information, including the total number of input packets
received, the number of IPv6 packets forwarded by this LFB, and the
number of IP datagrams discarded due to no route being found.
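For the IPv6 case, the main difference from the IPv4 sketch in
Section 5.3.1.2 is the 128-bit prefix comparison.  The following non-
normative C fragment shows one way to test whether a destination
address matches a row of the IPv6PrefixTable; the selection among
"NormalOut", "ECMPOut", and "ExceptionOut" then follows the same
pattern as in the IPv4 sketch.  All names are hypothetical.

   #include <stdint.h>
   #include <stdbool.h>

   /* Hypothetical row of IPv6PrefixTable. */
   struct ipv6_prefix_row {
       uint8_t  prefix[16];
       uint8_t  prefix_len;      /* 0..128 */
       uint32_t hop_selector;
       bool     ecmp_flag;
       bool     default_route_flag;
   };

   /* True when the first prefix_len bits of addr match the row. */
   static bool ipv6_prefix_match(const struct ipv6_prefix_row *row,
                                 const uint8_t addr[16])
   {
       unsigned int full_bytes = row->prefix_len / 8;
       unsigned int rest_bits  = row->prefix_len % 8;

       for (unsigned int i = 0; i < full_bytes; i++)
           if (addr[i] != row->prefix[i])
               return false;
       if (rest_bits != 0) {
           uint8_t mask = (uint8_t)(0xff << (8 - rest_bits));
           if ((addr[full_bytes] & mask) !=
               (row->prefix[full_bytes] & mask))
               return false;
       }
       return true;
   }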
5.3.3.3.  Capabilities

This LFB does not have a list of capabilities.
5.3.3.4.  Events

This LFB does not have any events specified.
5.3.4.  IPv6NextHop

This LFB abstracts the process of selecting the IPv6 next hop action.

5.3.4.1.  Data Handling
The LFB abstracts the process of applying next hop information to
IPv6 packets.  It receives an IPv6 packet with an associated next hop
identifier (HopSelector) and uses the identifier to look up a next
hop table to find an appropriate output port from the LFB.

The LFB is expected to receive unicast IPv6 packets, via a singleton
input known as "PcktsIn", along with a HopSelector metadata which is
used as a table index to look up the NextHop table.
Two output LFB ports are defined.
The first output is a group output port known as "SuccessOut".  On
successful data processing, the packet is sent out of an LFB port
from within the LFB port group, as selected by the
LFBOutputSelectIndex value of the matched table entry.  The packet is
sent to a downstream LFB along with the L3PortID and
MediaEncapInfoIndex metadata.
The second output is a singleton output port known as "ExceptionOut",
which will output packets for which the data processing failed, along
with an additional ExceptionID metadata to indicate what caused the
exception.  Currently defined exception types include:

o  The HopSelector for the packet is invalid.

o  The packet failed lookup of the NextHop table even though the
   HopSelector is valid.

o  The MTU for the outgoing interface is less than the packet size.
Downstream LFB instances could be either a BasicMetadataDispatch
type, used to fan out packets to different LFB instances, or a media
encapsulation related type, such as an EtherEncap type or a
RedirectOut type.  For example, when the downstream LFB is
BasicMetadataDispatch, and Ethernet and other tunnel encapsulation
LFBs exist downstream of the BasicMetadataDispatch, then the
BasicMetadataDispatch LFB can use the L3PortID metadata (see the
section below) to dispatch packets to the different encapsulator LFBs.
5.3.4.2. Components

This LFB has only one component, named IPv6NextHopTable, which is
defined as an array.  The HopSelector metadata is used as the array
index of IPv6NextHopTable to locate the row of the table that holds
the next hop information.  Each row of the array is a struct
containing:
o  The L3PortID, which is the ID of the Logical Output Port that is
   passed onto the downstream LFB instance.  This ID identifies the
   L3-level port toward the neighbor.  Usually this ID is used by the
   NextHop LFB to distinguish packets that need different L2
   encapsulation.  For instance, some packets may require general
   Ethernet encapsulation while others may require various types of
   tunnel encapsulation.  In such cases, different L3PortIDs are
   assigned to the packets and are passed as metadata to the
   downstream LFB.  A BasicMetadataDispatch LFB (Section 5.5.1) may
   have to be applied as the downstream LFB so as to dispatch packets
   to different encapsulation LFB instances according to the
   L3PortIDs.
o  MTU, the Maximum Transmission Unit for the outgoing port.

o  NextHopIPAddr, the IPv6 next hop address.
o  MediaEncapInfoIndex, the index we pass onto the downstream
   encapsulation LFB instance, where it is used as a search key to
   look up a table (typically media encapsulation related) for
   further encapsulation information.  Note that an encapsulation LFB
   instance may not directly follow the NextHop LFB; the index is
   passed along as associated metadata, so an encapsulation LFB
   instance even further downstream of the NextHop LFB can still use
   it.  In some cases, depending on implementation, the CE may set
   the MediaEncapInfoIndex passed downstream to a value that will
   fail lookup when it gets to a target encapsulation LFB; such a
   lookup failure at that point is an indication that further
   resolution is needed.  For an example of this approach, refer to
   Section 7.2, which discusses ARP.
o  LFBOutputSelectIndex, the LFB group output port index used to
   select the downstream LFB port.  It corresponds 1-to-1 to the
   FromPortIndex component of the FEObject LFB's LFBTopology table
   (see [RFC5812]) in the entries whose FromLFBID refers to this
   IPv6NextHop LFB instance.
5.3.4.3. Capabilities

This LFB does not have a list of capabilities.

5.3.4.4. Events

This LFB does not have any events specified.
5.4. Redirect LFBs

Redirect LFBs abstract the data packet transport process between the
CE and the FE.  Some packets output from some LFBs may have to be
delivered to the CE for further processing, and some packets generated
by the CE may have to be delivered to the FE, and further to some
specific LFBs, for data path processing.  According to [RFC5810], data
packets and their associated metadata are encapsulated in a ForCES
redirect message for transport between the CE and the FE.  We define
two LFBs to abstract the process: a RedirectIn LFB and a RedirectOut
LFB.  Usually, in an LFB topology of an FE, only one RedirectIn LFB
instance and one RedirectOut LFB instance exist.
5.4.1. RedirectIn

The RedirectIn LFB abstracts the process for the CE to inject data
packets into the FE data path.

5.4.1.1. Data Handling

A RedirectIn LFB abstracts the process for the CE to inject data
packets into the FE LFB topology so as to input data packets into FE
data paths.  From the LFB topology point of view, the RedirectIn LFB
acts as a source point for data packets coming from the CE; therefore,
the RedirectIn LFB is defined with a single output LFB port (and no
input LFB port).
The single output port of the RedirectIn LFB is defined as a group
output type, with the name of "PktsOut".  Packets produced by this
output will have arbitrary frame types decided by the CE which
generated the packets.  Possible frames may include IPv4, IPv6, or ARP
protocol packets.  The CE may associate some metadata to indicate the
frame types and may also associate other metadata to indicate various
information about the packets.  Among them, there MUST exist a
'RedirectIndex' metadata, which is an integer acting as an index.
When the CE transmits the metadata along with the packet to a
RedirectIn LFB, the LFB will read the RedirectIndex metadata and
output the packet to the group output port instance whose port index
is indicated by this metadata.  Any other metadata, in addition to
'RedirectIndex', will be passed untouched along with the packet
delivered by the CE to the downstream LFB.  This means the
'RedirectIndex' metadata from the CE is "consumed" by the RedirectIn
LFB and is not passed to the downstream LFB.  Note that a packet from
the CE without an associated 'RedirectIndex' metadata will be dropped
by the LFB.
5.4.1.2. Components

There are no components defined for the current version of the
RedirectIn LFB.

5.4.1.3. Capabilities

This LFB does not have a list of capabilities.
skipping to change at page 63, line 37
5.4.2. RedirectOut

The RedirectOut LFB abstracts the process for LFBs in the FE to
deliver data packets to the CE.

5.4.2.1. Data Handling

A RedirectOut LFB abstracts the process for LFBs in the FE to deliver
data packets to the CE.  From the LFB topology point of view, the
RedirectOut LFB acts as a sink point for data packets going to the
CE; therefore, the RedirectOut LFB is defined with a single input LFB
port (and no output LFB port).
The RedirectOut LFB has only one singleton input known as "PktsIn",
but it is capable of receiving packets from multiple LFBs by
multiplexing this input.  The input expects any kind of frame;
therefore, the frame type has been specified as arbitrary, and all
types of metadata are expected.  All associated metadata produced (but
not consumed) by previously traversed LFBs should be delivered to the
CE via the ForCES protocol redirect message [RFC5810].  The CE can
decide how to process the redirected packet by referencing the
associated metadata.  As an example, a packet could be redirected by
the FE to the CE because the EtherEncap LFB is not able to resolve L2
information.  The "ExceptionID" metadata created by the EtherEncap LFB
is passed along with the packet and should be sufficient for the CE to
do the necessary processing and resolve the required L2 entry.
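As a non-normative sketch of this behaviour, the LFB simply hands the
packet and all still-attached metadata to the protocol layer; the
'send_redirect_message' callback below is a placeholder for that
machinery, not a ForCES-defined API.

   def redirect_out(packet, metadata, send_redirect_message):
       # Everything produced (but not consumed) upstream, such as an
       # ExceptionID set by EtherEncap, stays attached so the CE can
       # decide how to handle the packet.
       send_redirect_message(packet=packet, metadata=dict(metadata))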
5.4.2.2. Components

There are no components defined for the current version of the
RedirectOut LFB.

5.4.2.3. Capabilities

This LFB does not have a list of capabilities.

5.4.2.4. Events

This LFB does not have any events specified.

5.5. General Purpose LFBs

5.5.1. BasicMetadataDispatch

The BasicMetadataDispatch LFB is defined to abstract the process in
which a packet is dispatched to some output path based on its
associated metadata value.
5.5.1.1. Data Handling

The BasicMetadataDispatch LFB provides the function to dispatch input
packets to a group output according to a metadata and a dispatch
table.

The BasicMetadataDispatch LFB has only one singleton input, known as
"PktsIn".  Every input packet should be associated with a metadata
that will be used by the LFB to do the dispatch.  This LFB contains a
MetadataID component and a dispatch table named MetadataDispatchTable,
all configured by the CE.  The MetadataID specifies which metadata is
to be used for dispatching packets.  The MetadataDispatchTable
contains entries of a metadata value and an OutputIndex, specifying
that a packet with the given metadata value must go out from the LFB
group output port instance with that OutputIndex.

Two output LFB ports are defined.

The first output is a group output port known as "PktsOut".  A packet
whose associated metadata finds an OutputIndex by successfully looking
up the dispatch table will be output to the group port instance with
the corresponding index.

The second output is a singleton output port known as "ExceptionOut",
which will output packets for which the data processing failed, along
with an additional ExceptionID metadata to indicate what caused the
exception.  Currently defined exception types include:

o  There is no matching entry when looking up the metadata dispatch
   table.

As an example, if the CE decides to dispatch packets according to a
physical port ID (PHYPortID), the CE first sets the MetadataID
component in the LFB to the ID of the PHYPortID metadata.  Moreover,
the CE also sets the actual PHYPortID values (the metadata values) and
the OutputIndex assigned to each value in the dispatch table of the
LFB.  When a packet arrives, the PHYPortID metadata associated with
the packet is found, and its value is further used as a key to look up
the dispatch table to find an output port instance for the packet.

Currently, the BasicMetadataDispatch LFB only allows the metadata
value of a dispatch table entry to be a 32-bit integer.  Metadata with
other value types is not supported in this version.  A more complex
metadata dispatch LFB may be defined in a future version of the
library.  In that LFB, multiple tuples of metadata, with more value
types supported, may be used to dispatch packets.
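The dispatch behaviour can be sketched, non-normatively, as follows.
The in-memory structures and the use of metadata names rather than
numeric metadata IDs are simplifications assumed for this example; the
"MetadataNoMatching" exception name is taken from Table 3 in
Section 10.3, and its use here for the no-match case is this sketch's
assumption.

   def basic_metadata_dispatch(packet, metadata, metadata_name,
                               dispatch_table, group_ports,
                               exception_port):
       value = metadata.get(metadata_name)       # e.g. "PHYPortID"
       output_index = dispatch_table.get(value)  # content-keyed lookup
       if output_index is None:
           metadata["ExceptionID"] = "MetadataNoMatching"
           exception_port.send(packet, metadata)     # -> ExceptionOut
           return
       group_ports[output_index].send(packet, metadata)  # -> PktsOut

   # Example configuration following the PHYPortID case in the text:
   # dispatch_table = {1: 0, 2: 1}   # PHYPortID value -> OutputIndex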
5.5.1.2. Components

This LFB has two components.  One component is MetadataID and the
other is MetadataDispatchTable.  Each row entry of the dispatch table
is a struct containing the metadata value and the OutputIndex.  Note
that currently the metadata value is only allowed to be a 32-bit
integer.  The metadata value is also defined as a content key for the
table.  A content key is a search key for tables, as defined in the
ForCES FE model [RFC5812].  See [RFC5812] and also the ForCES protocol
[RFC5810] for more details on the definition and use of a content key.

5.5.1.3. Capabilities

This LFB does not have a list of capabilities.

5.5.1.4. Events

This LFB does not have any events specified.
5.5.2. GenericScheduler

This is a preliminary generic scheduler LFB for abstracting a simple
scheduling process.

5.5.2.1. Data Handling

There exist various kinds of scheduling strategies with various
implementations.  As a base LFB library, this document only defines a
preliminary generic scheduler LFB for abstracting a simple scheduling
process.  Users may use this LFB as a basic scheduler LFB to further
construct more complex scheduler LFBs by means of inheritance, as
described in [RFC5812].

Packets of any arbitrary frame type are received via a group input
known as "PktsIn", with no additional metadata expected.  This group
input is capable of multiple input port instances.  Each port instance
may be connected to a different upstream LFB output.

Multiple queues reside at the input side, with every input LFB port
instance connected to one queue.  Every queue is marked with a queue
ID, and the queue ID is exactly the same as the index of the
corresponding input port instance.  Scheduling disciplines are applied
to all queues and also all packets in the queues.

Scheduled packets are output from a singleton output port of the LFB
known as "PktsOut", with no corresponding metadata.

More complex scheduler LFBs may be defined with more complex
scheduling disciplines by inheriting from this LFB.  For instance, a
priority scheduler LFB may be defined by inheriting this LFB and
defining a component to indicate priorities for all input queues.
5.5.2.2. Components

The QueueCount component is defined to specify the number of queues to
be scheduled.

The SchedulingDiscipline component is for the CE to specify a
scheduling discipline to the LFB.  Currently defined scheduling
disciplines only include the Round Robin (RR) strategy.  The default
scheduling discipline is therefore RR.
The QueueStats component is defined to allow the CE to query the
status of every queue of the scheduler.  It is an array component, and
each row of the array is a struct containing a queue ID.  Currently
defined queue status includes the queue depth in packets and the queue
depth in bytes.  Using the queue ID as the index, the CE can query
every queue for its used length in units of packets or bytes.
5.5.2.3. Capabilities

The following capability is currently defined for the
GenericScheduler:

o  The queue length limit, providing the storage capacity of every
   queue.

5.5.2.4. Events

This LFB does not have any events specified.

6. XML for LFB Library
skipping to change at page 69, line 24
<component componentID="3" access="read-only"> <component componentID="3" access="read-only">
<name>OperStatus</name> <name>OperStatus</name>
<synopsis>Operational status of the LFB.</synopsis> <synopsis>Operational status of the LFB.</synopsis>
<typeRef>PortStatusValues</typeRef> <typeRef>PortStatusValues</typeRef>
</component> </component>
<component componentID="4" access="read-write"> <component componentID="4" access="read-write">
<name>AdminLinkSpeed</name> <name>AdminLinkSpeed</name>
<synopsis>The link speed that the admin has requested. <synopsis>The link speed that the admin has requested.
</synopsis> </synopsis>
<typeRef>LANSpeedType</typeRef> <typeRef>LANSpeedType</typeRef>
<defaultValue>0x00000005</defaultValue> <defaultValue>LAN_SPEED_AUTO</defaultValue>
</component> </component>
<component componentID="5" access="read-only"> <component componentID="5" access="read-only">
<name>OperLinkSpeed</name> <name>OperLinkSpeed</name>
<synopsis>The actual operational link speed.</synopsis> <synopsis>The actual operational link speed.</synopsis>
<typeRef>LANSpeedType</typeRef> <typeRef>LANSpeedType</typeRef>
</component> </component>
<component componentID="6" access="read-write"> <component componentID="6" access="read-write">
<name>AdminDuplexMode</name> <name>AdminDuplexMode</name>
<synopsis>The duplex mode that the admin has requested. <synopsis>The duplex mode that the admin has requested.
</synopsis> </synopsis>
<typeRef>DuplexType</typeRef> <typeRef>DuplexType</typeRef>
<defaultValue>0x00000001</defaultValue> <defaultValue>Auto</defaultValue>
</component> </component>
<component componentID="7" access="read-only"> <component componentID="7" access="read-only">
<name>OperDuplexMode</name> <name>OperDuplexMode</name>
<synopsis>The actual duplex mode.</synopsis> <synopsis>The actual duplex mode.</synopsis>
<typeRef>DuplexType</typeRef> <typeRef>DuplexType</typeRef>
</component> </component>
<component componentID="8" access="read-only"> <component componentID="8" access="read-only">
<name>CarrierStatus</name> <name>CarrierStatus</name>
<synopsis>The status of the Carrier. Whether the port <synopsis>The status of the Carrier. Whether the port
is linked with an operational connector.</synopsis> is linked with an operational connector.</synopsis>
skipping to change at page 71, line 29
    <name>EtherMACIn</name>
    <synopsis>An LFB abstracts an Ethernet port at MAC data link
      layer. It specifically describes Ethernet processing functions
      like MAC address locality check, deciding if the Ethernet
      packets should be bridged, provide Ethernet layer flow control,
      etc. Multiple virtual MACs are not supported in this LFB
      version.</synopsis>
    <version>1.0</version>
    <inputPorts>
      <inputPort group="false">
        <name>EtherPktsIn</name>
        <synopsis>The input port of the EtherMACIn. It
          expects any kind of Ethernet frame.</synopsis>
        <expectation>
          <frameExpected>
            <ref>EthernetAll</ref>
          </frameExpected>
          <metadataExpected>
            <ref>PHYPortID</ref>
          </metadataExpected>
        </expectation>
skipping to change at page 76, line 30
            <ref>EthernetAll</ref>
          </frameExpected>
          <metadataExpected>
            <ref>PHYPortID</ref>
          </metadataExpected>
        </expectation>
      </inputPort>
    </inputPorts>
    <outputPorts>
      <outputPort group="false">
        <name>EtherPktsOut</name>
        <synopsis>The Normal Output Port of the EtherMACOut. It
          can produce any kind of Ethernet frame and along with
          the frame passes the ID of the Physical Port as
          metadata to be used by the next LFBs.</synopsis>
        <product>
          <frameProduced>
            <ref>EthernetAll</ref>
          </frameProduced>
          <metadataProduced>
            <ref>PHYPortID</ref>
skipping to change at page 87, line 40
        <synopsis>Data packet output</synopsis>
        <product>
          <frameProduced>
            <ref>Arbitrary</ref>
          </frameProduced>
        </product>
      </outputPort>
    </outputPorts>
    <components>
      <component access="read-write" componentID="1">
        <name>MetadataID</name>
        <synopsis>the metadata ID for dispatching</synopsis>
        <typeRef>uint32</typeRef>
      </component>
      <component access="read-write" componentID="2">
        <name>MetadataDispatchTable</name>
        <synopsis>Metadata dispatch table.</synopsis>
        <typeRef>MetadataDispatchTableType</typeRef>
      </component>
    </components>
  </LFBClassDef>
<LFBClassDef LFBClassID="17"> <LFBClassDef LFBClassID="17">
<name>GenericScheduler</name> <name>GenericScheduler</name>
<synopsis>This is a preliminary generic scheduler LFB for <synopsis>This is a preliminary generic scheduler LFB for
abstracting a simple scheduling process.Users may use this abstracting a simple scheduling process.Users may use this
LFB as a basic scheduler LFB to further construct more LFB as a basic scheduler LFB to further construct more
complex scheduler LFBs by means of inheritance as described complex scheduler LFBs by means of inheritance as described
in RFC 5812.</synopsis> in RFC5812.</synopsis>
<version>1.0</version> <version>1.0</version>
<inputPorts> <inputPorts>
<inputPort group="true"> <inputPort group="true">
<name>PktsIn</name> <name>PktsIn</name>
<synopsis>Input port for data packet.</synopsis> <synopsis>Input port for data packet.</synopsis>
<expectation> <expectation>
<frameExpected> <frameExpected>
<ref>Arbitrary</ref> <ref>Arbitrary</ref>
</frameExpected> </frameExpected>
</expectation> </expectation>
skipping to change at page 88, line 45
        <synopsis>The number of queues to be scheduled.
        </synopsis>
        <typeRef>uint32</typeRef>
      </component>
      <component access="read-write" componentID="2">
        <name>SchedulingDiscipline</name>
        <synopsis>The scheduler discipline.</synopsis>
        <typeRef>SchdDisciplineType</typeRef>
      </component>
      <component access="read-only" componentID="3">
        <name>QueueStats</name>
        <synopsis>Current statistics for all queues</synopsis>
        <typeRef>QueueStatsTableType</typeRef>
      </component>
    </components>
    <capabilities>
      <capability componentID="30">
        <name>QueueLenLimit</name>
        <synopsis>Maximum length of each queue; the unit is
          byte.</synopsis>
        <typeRef>uint32</typeRef>
      </capability>
    </capabilities>
  </LFBClassDef>
  </LFBClassDefs>
</LFBLibrary>
7. LFB Class Use Cases

This section provides examples of how the LFB classes defined by the
base LFB library in Section 6 can be applied to achieve some typical
router functions.  The functions demonstrated are:

o  IPv4 forwarding

o  ARP processing

It is assumed that the LFB topology on the FE has already been
established by the CE and maps to the use cases illustrated in this
section.

The use cases demonstrated in this section are mere examples and by no
means should be treated as the only way one would construct router
functionality from LFBs; based on the capability of the FE(s), a CE
should be able to express different NE applications.
7.1. IPv4 Forwarding

Figure 1 (Section 3.2.3) shows a typical IPv4 forwarding processing
path by use of the base LFB classes.

A number of EtherPHYCop LFB (Section 5.1.1) instances are used to
describe the physical layer functions of the ports.  PHYPortID
metadata is generated by the EtherPHYCop LFB and is used by all the
subsequent downstream LFBs.  An EtherMACIn LFB (Section 5.1.2), which
describes the MAC layer processing, follows every EtherPHYCop LFB.
The EtherMACIn LFB may do a locality check of MAC addresses if the CE
configures the appropriate EtherMACIn LFB component.

Ethernet packets out of the EtherMACIn LFB are sent to an
EtherClassifier LFB (Section 5.1.3) to be decapsulated and classified
into network layer types like IPv4, IPv6, ARP, etc.  In the example
use case, every physical Ethernet interface is associated with one
Classifier instance; although not illustrated, it is also feasible
that all physical interfaces are associated with only one Ethernet
Classifier instance.

EtherClassifier uses the PHYPortID metadata, the Ethernet type of the
input packet, and the VlanID (if present in the input Ethernet
packets) to decide the packet network layer type and the LFB output
port to the downstream LFB.  The EtherClassifier LFB also assigns a
new logical port ID metadata to the packet for later use.  The
EtherClassifier may also generate some new metadata for every packet,
like EtherType, SrcMAC, DstMAC, LogicPortID, etc., for consumption by
downstream LFBs.
If a packet is classified as an IPv4 packet, it is sent downstream to
an IPv4Validator LFB (Section 5.2.1) to validate the IPv4 packet.  In
the validator LFB, IPv4 packets are validated and are additionally
classified into either IPv4 unicast packets or multicast packets.
IPv4 unicast packets are sent downstream to the IPv4UcastLPM LFB
(Section 5.3.1).

The IPv4UcastLPM LFB is where the longest prefix match decision is
made and a next hop is selected.  The nexthop ID metadata is generated
by the IPv4UcastLPM LFB to be consumed downstream by the IPv4NextHop
LFB (Section 5.3.2).

The IPv4NextHop LFB uses the nexthop ID metadata to derive where the
packet is to go next and the media encapsulation type for the port,
etc.  The IPv4NextHop LFB generates the L3PortID metadata used to
identify a next hop output physical/logical port.  In the example use
case, the next hop output port is an Ethernet type; as a result, the
packet and its L3 port ID metadata are sent downstream to an
EtherEncap LFB (Section 5.1.4).

The EtherEncap LFB encapsulates the incoming packet into an Ethernet
frame.  A BasicMetadataDispatch LFB (Section 5.5.1) follows the
EtherEncap LFB.  The BasicMetadataDispatch LFB is where packets are
finally dispatched to different output physical/logical ports based on
the L3PortID metadata sent to the LFB.
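The chain walked through above can be summarized by the following
non-normative sketch.  The per-LFB processing is represented by a
placeholder 'process' method; the dictionary of LFB instances and the
annotations in the comments are assumptions of this example, not a
ForCES-defined API.

   IPV4_FORWARDING_CHAIN = [
       "EtherPHYCop",            # physical layer, generates PHYPortID
       "EtherMACIn",             # MAC layer, optional locality check
       "EtherClassifier",        # decapsulate and classify the frame
       "IPv4Validator",          # validate, split unicast/multicast
       "IPv4UcastLPM",           # longest prefix match, emits nexthop ID
       "IPv4NextHop",            # next hop selection, emits L3PortID
       "EtherEncap",             # Ethernet encapsulation
       "BasicMetadataDispatch",  # dispatch on L3PortID to an output port
   ]

   def forward(packet, lfb_instances):
       metadata = {}
       for name in IPV4_FORWARDING_CHAIN:
           packet, metadata = lfb_instances[name].process(packet,
                                                          metadata)
       return packet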
7.2. ARP processing

Figure 2 shows the processing path for the ARP protocol in the case
where the CE implements the ARP processing function.  By no means is
this the only way ARP processing could be achieved; as an example, ARP
processing could happen at the FE, but that discussion is out of scope
for this use case.
[The original ASCII-art figure is not reproduced here.  It depicts ARP
packets classified by the EtherClassifier, together with IPv4 packets
from the EtherEncap LFB that lack address resolution information,
being sent to a RedirectOut LFB toward the CE; ARP packets from the CE
enter through a RedirectIn LFB and pass through the EtherEncap and
BasicMetadataDispatch LFBs to the EtherMACOut LFB instances.]

                    Figure 2: LFB use case for ARP
There are two ways ARP processing could be triggered in the CE, as
illustrated in Figure 2:

o  ARP packets arriving from outside of the NE.

o  IPv4 packets failing to resolve within the FE.

ARP packets from network interfaces are filtered out by the
EtherClassifier LFB.  The classified ARP packets and associated
metadata are then sent downstream to the RedirectOut LFB
(Section 5.4.2) to be transported to the CE.

The EtherEncap LFB, as described earlier, receives packets that need
Ethernet L2 encapsulation.  When the EtherEncap LFB fails to find the
necessary L2 Ethernet information with which to encapsulate the
packet, it outputs the packet on its ExceptionOut LFB port.
Downstream of the EtherEncap LFB's ExceptionOut LFB port is the
RedirectOut LFB, which transports the packet to the CE (see
Section 5.1.4 on the EtherEncap LFB for details).

To achieve its goal, the CE needs to generate ARP request and response
packets and send them to the external (to the NE) networks.  ARP
request and response packets from the CE are redirected to an FE via a
RedirectIn LFB (Section 5.4.1).

As was the case with forwarded IPv4 packets, outgoing ARP packets are
also encapsulated to Ethernet format by the EtherEncap LFB and then
dispatched to different interfaces via a BasicMetadataDispatch LFB.
The BasicMetadataDispatch LFB dispatches the packets according to the
L3PortID metadata included in every ARP packet sent from the CE.
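A non-normative sketch of how a CE might react to the two redirect
cases listed above is given below.  The callbacks (arp_handler,
arp_request_builder, redirect_to_fe), the use of the redirected
packet's L3PortID, and the RedirectIndex value of 0 are all
assumptions of this example, not ForCES-defined behaviour.

   ETHERTYPE_ARP = 0x0806

   def ce_handle_redirect(packet, metadata,
                          arp_handler, arp_request_builder,
                          redirect_to_fe):
       if metadata.get("EtherType") == ETHERTYPE_ARP:
           # Case 1: ARP packet arriving from outside the NE; learn the
           # entry and, for a request, build a reply to send back out.
           reply = arp_handler(packet)
           if reply is not None:
               redirect_to_fe(reply,
                              {"RedirectIndex": 0,
                               "L3PortID": metadata.get("L3PortID")})
       elif "ExceptionID" in metadata:
           # Case 2: packet the EtherEncap LFB could not resolve L2
           # information for; generate an ARP request.
           request = arp_request_builder(packet, metadata)
           redirect_to_fe(request,
                          {"RedirectIndex": 0,
                           "L3PortID": metadata.get("L3PortID")})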
8. Contributors

The authors would like to thank Jamal Hadi Salim, Ligang Dong, and
Fenggen Jia, who made major contributions to the development of this
document.

Jamal Hadi Salim
Mojatatu Networks
Ottawa, Ontario
skipping to change at page 98, line 11
+-----------+---------------+------------------------+--------------+

                                Table 1

10.2. Metadata ID

The Metadata ID namespace is 32 bits long.  The following is the
guideline for managing the namespace.

Metadata ID 0x00000000-0x7FFFFFFF

   Metadata with IDs in this range are Specification Required
   [RFC5226].  A metadata ID using this range MUST be documented in an
   RFC or other permanent and readily available reference.

   Values assigned by this specification:
+--------------+-------------------------+--------------------------+
| Value        | Name                    | Definition               |
+--------------+-------------------------+--------------------------+
| 0x00000001   | PHYPortID               | See Section 4.4          |
| 0x00000002   | SrcMAC                  | See Section 4.4          |
| 0x00000003   | DstMAC                  | See Section 4.4          |
| 0x00000004   | LogicalPortID           | See Section 4.4          |
| 0x00000005   | EtherType               | See Section 4.4          |
| 0x00000006   | VlanID                  | See Section 4.4          |
| 0x00000007   | VlanPriority            | See Section 4.4          |
| 0x00000008   | NexthopIPv4Addr         | See Section 4.4          |
| 0x00000009   | NexthopIPv6Addr         | See Section 4.4          |
| 0x0000000A   | HopSelector             | See Section 4.4          |
| 0x0000000B   | ExceptionID             | See Section 4.4          |
| 0x0000000C   | ValidateErrorID         | See Section 4.4          |
| 0x0000000D   | L3PortID                | See Section 4.4          |
| 0x0000000E   | RedirectIndex           | See Section 4.4          |
| 0x0000000F   | MediaEncapInfoIndex     | See Section 4.4          |
+--------------+-------------------------+--------------------------+

                                Table 2

Metadata ID 0x80000000-0xFFFFFFFF

   Metadata IDs in this range are reserved for vendor private
   extensions and are the responsibility of individuals.
10.3. Exception ID

The Exception ID namespace is 32 bits long.  The following is the
guideline for managing the namespace.

Exception ID 0x00000000-0x7FFFFFFF

   Exception IDs in this range are Specification Required [RFC5226].
   An exception ID using this range MUST be documented in an RFC or
   other permanent and readily available reference.

   Values assigned by this specification:
+--------------+---------------------------------+------------------+
| Value        | Name                            | Definition       |
+--------------+---------------------------------+------------------+
| 0x00000000   | AnyUnrecognizedExceptionCase    | See Section 4.4  |
| 0x00000001   | ClassifyNoMatching              | See Section 4.4  |
| 0x00000002   | MediaEncapInfoIndexInvalid      | See Section 4.4  |
| 0x00000003   | EncapTableLookupFailed          | See Section 4.4  |
| 0x00000004   | BadTTL                          | See Section 4.4  |
| 0x00000005   | IPv4HeaderLengthMismatch        | See Section 4.4  |
| 0x00000006   | RouterAlertOptions              | See Section 4.4  |
| 0x00000007   | IPv6HopLimitZero                | See Section 4.4  |
| 0x00000008   | IPv6NextHeaderHBH               | See Section 4.4  |
| 0x00000009   | SrcAddressException             | See Section 4.4  |
| 0x0000000A   | DstAddressException             | See Section 4.4  |
| 0x0000000B   | LPMLookupFailed                 | See Section 4.4  |
| 0x0000000C   | HopSelectorInvalid              | See Section 4.4  |
| 0x0000000D   | NextHopLookupFailed             | See Section 4.4  |
| 0x0000000E   | FragRequired                    | See Section 4.4  |
| 0x0000000F   | MetadataNoMatching              | See Section 4.4  |
+--------------+---------------------------------+------------------+

                                Table 3

Exception ID 0x80000000-0xFFFFFFFF

   Exception IDs in this range are reserved for vendor private
   extensions and are the responsibility of individuals.
10.4. Validate Error ID

The Validate Error ID namespace is 32 bits long.  The following is the
guideline for managing the namespace.

Validate Error ID 0x00000000-0x7FFFFFFF

   Validate Error IDs in this range are Specification Required
   [RFC5226].  A Validate Error ID using this range MUST be documented
   in an RFC or other permanent and readily available reference.

   Values assigned by this specification:
+--------------+---------------------------------+------------------+
| Value        | Name                            | Definition       |
+--------------+---------------------------------+------------------+
| 0x00000000   | AnyUnrecognizedValidateErrorCase| See Section 4.4  |
| 0x00000001   | InvalidIPv4PacketSize           | See Section 4.4  |
| 0x00000002   | NotIPv4Packet                   | See Section 4.4  |
| 0x00000003   | InvalidIPv4HeaderLengthSize     | See Section 4.4  |
| 0x00000004   | InvalidIPv4LengthFieldSize      | See Section 4.4  |
| 0x00000005   | InvalidIPv4Checksum             | See Section 4.4  |
| 0x00000006   | InvalidIPv4SrcAddr              | See Section 4.4  |
| 0x00000007   | InvalidIPv4DstAddr              | See Section 4.4  |
| 0x00000008   | InvalidIPv6PacketSize           | See Section 4.4  |
| 0x00000009   | NotIPv6Packet                   | See Section 4.4  |
| 0x0000000A   | InvalidIPv6SrcAddr              | See Section 4.4  |
| 0x0000000B   | InvalidIPv6DstAddr              | See Section 4.4  |
+--------------+---------------------------------+------------------+

                                Table 4

Validate Error ID 0x80000000-0xFFFFFFFF

   Validate Error IDs in this range are reserved for vendor private
   extensions and are the responsibility of individuals.
11. Security Considerations

The ForCES framework document [RFC3746] provides a comprehensive
security analysis for the overall ForCES architecture.  For example,
the ForCES protocol entities must be authenticated per the ForCES
requirements before they can access the information elements described
in this document via ForCES.  Access to the information
skipping to change at page 102, line 20
              W., Dong, L., Gopal, R., and J. Halpern, "Forwarding and
              Control Element Separation (ForCES) Protocol
              Specification", RFC 5810, March 2010.

   [RFC5812]  Halpern, J. and J. Hadi Salim, "Forwarding and Control
              Element Separation (ForCES) Forwarding Element Model",
              RFC 5812, March 2010.

12.2. Informative References

   [RFC1812]  Baker, F., "Requirements for IP Version 4 Routers",
              RFC 1812, June 1995.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2629]  Rose, M., "Writing I-Ds and RFCs using XML", RFC 2629,
              June 1999.

   [RFC3552]  Rescorla, E. and B. Korver, "Guidelines for Writing RFC
              Text on Security Considerations", BCP 72, RFC 3552,
              July 2003.

   [RFC3654]  Khosravi, H. and T. Anderson, "Requirements for
              Separation of IP Control and Forwarding", RFC 3654,
              November 2003.