Internet Draft                                                       TIM
Intended status: Informational                    Jean-Francois Bouquier
                                                                 Vodafone
                                                               Italo Busi
                                                                   Huawei
                                                              Daniel King
                                                       Old Dog Consulting
                                                       Daniele Ceccarelli
                                                                 Ericsson
Expires: May 2021                                        November 2, 2020

     Applicability of Abstraction and Control of Traffic Engineered
          Networks (ACTN) to Packet Optical Integration (POI)

               draft-ietf-teas-actn-poi-applicability-01
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on May 2, 2021.
Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Abstract

   This document considers the applicability of the Abstraction and
   Control of TE Networks (ACTN) architecture to Packet Optical
   Integration (POI) in the context of IP/MPLS and Optical
   internetworking, identifying the YANG data models being defined by
   the IETF to support this deployment architecture, as well as
   specific scenarios relevant for Service Providers.

   Existing IETF protocols and data models are identified for each
   multi-layer (packet over optical) scenario, with particular focus on
   the MPI (Multi-Domain Service Coordinator to Provisioning Network
   Controllers Interface) of the ACTN architecture.
Table of Contents

   1. Introduction...................................................3
   2. Reference architecture and network scenario....................4
      2.1. L2/L3VPN Service Request in North Bound of MDSC...........8
      2.2. Service and Network Orchestration........................10
         2.2.1. Hard Isolation......................................12
         2.2.2. Shared Tunnel Selection.............................13
      2.3. IP/MPLS Domain Controller and NE Functions...............13
      2.4. Optical Domain Controller and NE Functions...............15
   3. Interface protocols and YANG data models for the MPIs.........15
      3.1. RESTCONF protocol at the MPIs............................15
      3.2. YANG data models at the MPIs.............................16
         3.2.1. Common YANG data models at the MPIs.................16
         3.2.2. YANG models at the Optical MPIs.....................16
         3.2.3. YANG data models at the Packet MPIs.................18
   4. Multi-layer and multi-domain services scenarios...............19
      4.1. Scenario 1: network and service topology discovery.......19
         4.1.1. Inter-domain link discovery.........................20
         4.1.2. IP Link Setup Procedure.............................21
      4.2. L2VPN/L3VPN establishment................................22
   5. Security Considerations.......................................22
   6. Operational Considerations....................................23
   7. IANA Considerations...........................................23
   8. References....................................................23
      8.1. Normative References.....................................23
      8.2. Informative References...................................24
   Appendix A. Multi-layer and multi-domain resiliency..............27
      A.1. Maintenance Window.......................................27
      A.2. Router port failure......................................27
   Acknowledgments..................................................28
   Contributors.....................................................28
   Authors' Addresses...............................................29
1. Introduction
   The full automation of the management and control of Service
   Providers' transport networks (IP/MPLS, Optical, and also Microwave)
   is key to meeting the new challenges coming with 5G, as well as the
   increased demand for business agility and mobility in a digital
   world. By abstracting the network complexity of the Optical and
   IP/MPLS networks towards the MDSC, and then from the MDSC towards
   the OSS/BSS or Orchestration layer, through the use of standard
   interfaces and data models, the ACTN architecture enables a wide
   range of transport connectivity services that can be requested by
   the upper layers, fulfilling almost any kind of service-level
   requirement from a network perspective (e.g., physical diversity,
   latency, bandwidth, topology, etc.).
   Packet Optical Integration (POI) is an advanced use case of traffic
   engineering. In wide-area networks, a packet network based on the
   Internet Protocol (IP), and possibly Multiprotocol Label Switching
   (MPLS), is typically realized on top of an optical transport network
   that uses Dense Wavelength Division Multiplexing (DWDM) (and
   optionally an Optical Transport Network (OTN) layer). In many
   existing network deployments, the packet and the optical networks
   are engineered and operated independently of each other. There are
   technical differences between the technologies (e.g., routers vs.
   optical switches) and the corresponding network engineering and
   planning methods (e.g., inter-domain peering optimization in IP vs.
   dealing with physical impairments in DWDM, or very different time
   scales). In addition, customer needs can be different between a
   packet and an optical network, and it is not uncommon to use
   different vendors in both domains. Last but not least, state-of-the-
   art packet and optical networks use sophisticated but complex
   technologies, and for a network engineer it may not be trivial to be
   a full expert in both areas. As a result, packet and optical
   networks are often operated in technical and organizational silos.
   This separation is inefficient for many reasons. Both capital
   expenditure (CAPEX) and operational expenditure (OPEX) could be
   significantly reduced by better integrating the packet and the
   optical network. Multi-layer online topology insight can speed up
   troubleshooting (e.g., alarm correlation) and network operation
   (e.g., coordination of maintenance events), multi-layer offline
   topology inventory can improve service quality (e.g., detection of
   diversity constraint violations), and multi-layer traffic
   engineering can use the available network capacity more efficiently
   (e.g., coordination of restoration). In addition, provisioning
   workflows can be simplified or automated as needed across layers
   (e.g., to achieve bandwidth on demand, or to perform maintenance
   events).
   The ACTN framework enables this complete multi-layer and
   multi-vendor integration of packet and optical networks through the
   MDSC and the packet and optical PNCs.
   In this document, key scenarios for Packet Optical Integration (POI)
   are described from the packet service layer perspective. The
   objective is to explain the benefits and the impact for both the
   packet and the optical layer, and to identify the coordination
   required between the two layers. Precise definitions of the
   scenarios can help with achieving a common understanding across
   different disciplines. The focus of the scenarios is on IP/MPLS
   networks operated as clients of optical DWDM networks. The scenarios
   are ordered by increasing level of integration and complexity. For
   each multi-layer scenario, the document analyzes how to use the
   interfaces and data models of the ACTN architecture.
   Understanding the level of standardization and the possible gaps
   will help to better assess the feasibility of integration between
   the IP and Optical DWDM domains (and optionally the OTN layer), from
   an end-to-end multi-vendor service provisioning perspective.
2. Reference architecture and network scenario
   This document analyses a number of deployment scenarios for Packet
   and Optical Integration (POI) in which the ACTN hierarchy is
   deployed to control a multi-layer and multi-domain network, with two
   Optical domains and two Packet domains, as shown in Figure 1:
                               +----------+
                               |   MDSC   |
                               +-----+----+
                                     |
                 +-----------+------+-----+-----------+
                 |           |            |           |
            +----+----+ +----+----+ +----+----+ +----+----+
            | P-PNC 1 | | O-PNC 1 | | O-PNC 2 | | P-PNC 2 |
            +----+----+ +----+----+ +----+----+ +----+----+
                 |           |            |           |
                 |           \            /           |
      +-------------------+   \          /   +-------------------+
   CE / PE            BR   \   |        |   /   BR            PE \ CE
   o--/---o            o---\---|--------|--/---o            o---\--o
      \ :              : /     |        |    \ :              : /
       \ : PKT Domain 1: /     |        |     \ : PKT Domain 2: /
        +-:------------:-+     |        |      +-:------------:--+
          :            :       |        |        :            :
          :            :       |        |        :            :
        +-:------------:-------+        +--------:------------:--+
       /  :            :        \      /         :            :   \
      /   o............o         \    /          o............o    \
      \     Optical Domain 1     /    \      Optical Domain 2      /
       \                        /      \                          /
        +----------------------+        +-------------------------+

                     Figure 1 - Reference Scenario
   The ACTN architecture, defined in [RFC8453], is used to control this
   multi-domain network, where each Packet PNC (P-PNC) is responsible
   for controlling its IP domain, which can be either an Autonomous
   System (AS) [RFC1930] or an IGP area within the same operator
   network, and each Optical PNC (O-PNC) is responsible for controlling
   its Optical Domain.

   The routers between IP domains can be either AS Boundary Routers
   (ASBRs) or Area Border Routers (ABRs): in this document, the generic
   term Border Router (BR) is used to represent either an ASBR or an
   ABR.
   The MDSC is responsible for coordinating the whole multi-domain
   multi-layer (Packet and Optical) network. A specific standard
   interface (MPI) permits the MDSC to interact with the different
   Provisioning Network Controllers (O/P-PNCs).

   The MPI presents an abstracted topology to the MDSC, hiding
   technology-specific aspects of the network and hiding topology
   details depending on the policy chosen regarding the level of
   abstraction supported. The level of abstraction can be obtained
   based on P-PNC and O-PNC configuration parameters (e.g., provide the
   potential connectivity between any PE and any BR in an MPLS-TE
   network).
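
   As an illustration, the following minimal sketch shows how a P-PNC
   might report such an abstracted topology at its MPI, using RESTCONF
   [RFC8040] and the JSON encoding of the "ietf-network" and
   "ietf-network-topology" YANG modules [RFC8345]. The example is
   abbreviated (e.g., the network-types container is omitted), and the
   host name and all identifiers are hypothetical:

      GET /restconf/data/ietf-network:networks HTTP/1.1
      Host: p-pnc1.example.com
      Accept: application/yang-data+json

      {
        "ietf-network:networks": {
          "network": [
            {
              "network-id": "pkt-domain-1",
              "node": [
                {
                  "node-id": "PE1",
                  "ietf-network-topology:termination-point": [
                    {"tp-id": "1-0-1"}
                  ]
                },
                {
                  "node-id": "BR1",
                  "ietf-network-topology:termination-point": [
                    {"tp-id": "1-0-5"}
                  ]
                }
              ],
              "ietf-network-topology:link": [
                {
                  "link-id": "PE1,1-0-1,BR1,1-0-5",
                  "source": {"source-node": "PE1", "source-tp": "1-0-1"},
                  "destination": {"dest-node": "BR1", "dest-tp": "1-0-5"}
                }
              ]
            }
          ]
        }
      }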
   In the network scenario of Figure 1, it is assumed that:

   o  The domain boundaries between the IP and Optical domains are
      congruent. In other words, one Optical domain supports
      connectivity between Routers in one and only one Packet Domain;

   o  Inter-domain links exist only between Packet domains (i.e.,
      between BRs) and between Packet and Optical domains (i.e.,
      between routers and Optical NEs). In other words, there are no
      inter-domain links between Optical domains;

   o  The interfaces between the Routers and the Optical NEs are
      "Ethernet" physical interfaces;
   o  The interfaces between the Border Routers (BRs) are "Ethernet"
      physical interfaces.
   This version of the document assumes that the IP Links supported by
   the Optical network are always intra-AS (PE-BR, intra-domain BR-BR,
   PE-P, BR-P, or P-P) and that the BRs are co-located and connected by
   an IP Link supported by an Ethernet physical link.

   The possibility to set up inter-AS/inter-area IP Links (e.g.,
   inter-domain BR-BR or PE-PE), supported by the Optical network, is
   for further study.
   Therefore, if inter-domain links between the Optical domains exist,
   they would be used to support multi-domain Optical services, which
   are outside the scope of this document.

   The Optical NEs within the optical domains can be ROADMs or OTN
   switches, with or without a ROADM.
   The MDSC in Figure 1 is responsible for multi-domain and multi-layer
   coordination across multiple Packet and Optical domains, as well as
   for providing L2/L3VPN services.

   Although new technologies (e.g., QSFP-DD ZR 400G) make it convenient
   to fit DWDM pluggable interfaces on the Routers, the deployment of
   those pluggables is not yet widely adopted by operators. The reason
   is that most operators are not yet ready to manage Packet and
   Transport networks in a unified single domain. As a consequence,
   this draft does not address the unified scenario. This matter will
   be described in a different draft.
   From an implementation perspective, the functions associated with
   the MDSC and described in [RFC8453] may be grouped in different
   ways:

   1. Both the service- and network-related functions are collapsed
      into a single, monolithic implementation, dealing with the end
      customer service requests, received from the CMI (CNC-MDSC
      Interface), and with the adaptation to the relevant network
      models. Such a case is represented in Figure 2 of [RFC8453].

   2. An implementation can choose to split the service-related and
      the network-related functions into different functional
      entities, as described in [RFC8309] and in section 4.2 of
      [RFC8453]. In this case, the MDSC is decomposed into a top-level
      Service Orchestrator, interfacing the customer via the CMI, and
      into a Network Orchestrator, interfacing at the southbound with
      the PNCs. The interface between the Service Orchestrator and the
      Network Orchestrator is not specified in [RFC8453].

   3. Another implementation can choose to split the MDSC functions
      between an H-MDSC, responsible for packet-optical multi-layer
      coordination, interfacing with one Optical L-MDSC, providing
      multi-domain coordination between the O-PNCs, and one Packet
      L-MDSC, providing multi-domain coordination between the P-PNCs
      (see for example Figure 9 of [RFC8453]).

   4. Another implementation can also choose to combine the MDSC and
      the P-PNC functions together.
   Please note that, in current service providers' network deployments,
   at the north bound of the MDSC, instead of a CNC, there is typically
   an OSS/Orchestration layer. In this case, the MDSC would implement
   only the Network Orchestration functions, as in [RFC8309] and as
   described in point 2 above, and it would deal with the network
   service requests received from the OSS/Orchestration layer.

   [Editors' note:] Check for a better term to define the network
   services. It may be worthwhile defining what the customer and
   network services are.
   The OSS/Orchestration layer is a key part of the architecture
   framework for a service provider:

   o  to abstract (through MDSC and PNCs) the underlying transport
      network complexity to the Business Systems Support layer;

   o  to coordinate NFV, Transport (e.g., IP, Optical and Microwave
      networks), Fixed Access, Core and Radio domains, enabling full
      automation of end-to-end services to the end customers;

   o  to enable catalogue-driven service provisioning from external
      applications (e.g., a Customer Portal for Enterprise Business
      services), orchestrating the design and lifecycle management of
      these end-to-end transport connectivity services, consuming IP
      and/or Optical transport connectivity services upon request.

   The functionality of the OSS/Orchestration layer, as well as the
   interface toward the MDSC, is usually operator-specific and outside
   the scope of this draft. This document assumes that the
   OSS/Orchestrator requests the MDSC to set up L2VPN/L3VPN services
   through mechanisms which are outside the scope of this draft.
   There are two main cases when MDSC coordination of the underlying
   PNCs in the POI context is initiated:

   o  Initiated by a request from the OSS/Orchestration layer to set up
      L2VPN/L3VPN services that require multi-layer/multi-domain
      coordination;

   o  Initiated by the MDSC itself to perform multi-layer/multi-domain
      optimizations and/or maintenance work, beyond discovery (e.g.,
      rerouting LSPs with their associated services when putting a
      resource, like a fibre, in maintenance mode during a maintenance
      window). Unlike service fulfillment, these workflows are not
      related to a service provisioning request received from the
      OSS/Orchestration layer.

   Both MDSC workflow cases above are in the scope of this draft or of
   its future versions.
2.1. L2/L3VPN Service Request in North Bound of MDSC

   As explained in section 2, the OSS/Orchestration layer can request
   the MDSC to set up L2/L3VPN services (with or without TE
   requirements).

   Although the interface between the OSS/Orchestration layer and the
   MDSC is usually operator-specific, ideally it would use a
   RESTCONF/YANG interface with a more abstracted version of the MPI
   YANG data models used for network configuration (e.g., L3NM, L2NM).
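
   For illustration only, the following sketch shows how such a request
   might be encoded, assuming an L3NM-based RESTCONF/YANG interface;
   since the L3NM model [L3NM] was still under definition at the time
   of writing, the module and node names below may differ from the
   final model, and the host name and all identifiers are hypothetical:

      POST /restconf/data/ietf-l3vpn-ntw:l3vpn-ntw/vpn-services
           HTTP/1.1
      Host: mdsc.example.com
      Content-Type: application/yang-data+json

      {
        "ietf-l3vpn-ntw:vpn-service": [
          {
            "vpn-id": "acme-l3vpn-001",
            "customer-name": "acme"
          }
        ]
      }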
   Figure 2 shows an example of a possible control flow between the
   OSS/Orchestration layer and the MDSC to instantiate L2/L3VPN
   services, using the YANG models under definition in [VN], [L2NM],
   [L3NM] and [TSM].

           +-------------------------------------------+
           |                                           |
           |          OSS/Orchestration layer          |
           |                                           |
           +-----------------------+-------------------+
                                   |
               1.VN   2. L2/L3NM & |  ^
                |         TSM      |  |
                |          |       |  |
                |          |       |  |
                v          v       |  3. Update VN
                                   |
           +-----------------------+-------------------+
           |                                           |
           |                   MDSC                    |
           |                                           |
           +-------------------------------------------+

                  Figure 2 Service Request Process
   o  The VN YANG model [VN], whose primary focus is the CMI, can also
      be used to provide VN Service configuration from an orchestrated
      connectivity service point of view, when the L2/L3VPN service has
      TE requirements. This model is not used to set up L2/L3VPN
      services with no TE requirements.

      o  It provides the profile of the VN in terms of VN members, each
         of which corresponds to an edge-to-edge link between customer
         end-points (VNAPs). It also provides the mappings between the
         VNAPs and the LTPs, and between the connectivity matrix and
         the VN members, through which the associated traffic matrix
         (e.g., bandwidth, latency, protection level, etc.) of each VN
         member is expressed (i.e., via the TE-topology's connectivity
         matrix).

      o  The model also provides VN-level preference information
         (e.g., VN member diversity) and VN-level admin-status and
         operational-status.

   o  The L2NM YANG model [L2NM], whose primary focus is the MPI, can
      also be used to provide L2VPN service configuration and site
      information, from an orchestrated connectivity service point of
      view.

   o  The L3NM YANG model [L3NM], whose primary focus is the MPI, can
      also be used to provide all L3VPN service configuration and site
      information, from an orchestrated connectivity service point of
      view.

   o  The TE & Service Mapping YANG model [TSM] provides TE-service
      mapping as well as site mapping:

      o  TE-service mapping provides the mapping between an L2/L3VPN
         instance and the corresponding VN instances.

      o  The TE-service mapping also provides the service mapping
         requirement type as to how each L2/L3VPN/VN instance is
         created with respect to the underlay TE tunnels (e.g.,
         whether they require a new and isolated set of TE underlay
         tunnels or not). See Section 2.2 for a detailed discussion of
         the mapping requirement types.

      o  Site mapping provides the site reference information across
         the L2/L3VPN Site ID, the VN Access Point ID, and the LTP of
         the access link.
2.2. Service and Network Orchestration

   From a functional standpoint, the MDSC represented in Figure 2
   interfaces with the OSS/Orchestration layer and decouples the
   L2/L3VPN service configuration functions from the network
   configuration functions. Therefore, in this document the MDSC
   performs the functions of the Network Orchestrator, as defined in
   [RFC8309].

   One of the important MDSC functions is to identify which TE Tunnels
   should carry the L2/L3VPN traffic (e.g., from the TE & Service
   Mapping configuration) and to relay this information to the P-PNCs,
   to ensure that the PEs' forwarding tables (e.g., VRF) are properly
   populated, according to the TE binding requirements for the
   L2/L3VPN.
   The TE binding requirement types [TSM] are:

   1. Hard Isolation with deterministic latency: The L2/L3VPN service
      requires a set of dedicated TE Tunnels providing deterministic
      latency performance, which cannot be shared with other services,
      nor compete for bandwidth with other Tunnels.

   2. Hard Isolation: This is similar to the above case, but without
      the deterministic latency requirements.

   3. Soft Isolation: The L2/L3VPN service requires a set of dedicated
      MPLS-TE tunnels which cannot be shared with other services, but
      which could compete for bandwidth with other Tunnels.

   4. Sharing: The L2/L3VPN service allows sharing the MPLS-TE Tunnels
      supporting it with other services.

   For the first three types, there could be additional TE binding
   requirements with respect to different VN members of the same VN
   (i.e., on how different VN members, belonging to the same VN, can
   or cannot share network resources). For the first two cases, VN
   members can be hard-isolated, soft-isolated, or shared. For the
   third case, VN members can be soft-isolated or shared. A purely
   illustrative encoding of these requirements is sketched below.
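
   The following fragment is purely illustrative: it does not reproduce
   the actual [TSM] module structure (which was still under definition
   at the time of writing), and all names and values are hypothetical.
   It only sketches the kind of information conveyed by a TE-service
   mapping carrying a TE binding requirement:

      {
        "te-service-mapping": {
          "vpn-id": "acme-l3vpn-001",
          "vn-id": "acme-vn-001",
          "mapping-type": "hard-isolation",
          "vn-member-sharing": "soft-isolation"
        }
      }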
   In order to fulfill the L2/L3VPN end-to-end TE requirements,
   including the TE binding requirements, the MDSC needs to perform
   multi-layer/multi-domain path computation to select the BRs, the
   intra-domain MPLS-TE Tunnels, and the intra-domain Optical Tunnels.

   Depending on the knowledge that the MDSC has of the topology and
   configuration of the underlying network domains, three models for
   performing path computation are possible:
   1. Summarization: The MDSC has an abstracted TE topology view of
      all of the underlying domains, both packet and optical. The MDSC
      does not have enough TE topology information to perform
      multi-layer/multi-domain path computation. Therefore, the MDSC
      delegates the P-PNCs and O-PNCs to perform a local path
      computation within their controlled domains, and it uses the
      information returned by the P-PNCs and O-PNCs to compute the
      optimal multi-domain/multi-layer path.

      This model presents an issue for the P-PNC, which does not have
      the capability of performing a single-domain/multi-layer path
      computation (that is, the P-PNC has no means to retrieve the
      topology/configuration information from the Optical controller).
      A possible solution could be to include a CNC function in the
      P-PNC to request multi-domain Optical path computation from the
      MDSC, as shown in Figure 10 of [RFC8453].

   2. Partial summarization: The MDSC has full visibility of the TE
      topology of the packet network domains and an abstracted view of
      the TE topology of the optical network domains.

      The MDSC then has only the capability of performing
      multi-domain/single-layer path computation for the packet layer
      (the path can be computed optimally across the two packet
      domains). Therefore, the MDSC still needs to delegate the O-PNCs
      to perform local path computation within their respective
      domains, and it uses the information received from the O-PNCs,
      together with its TE topology view of the multi-domain packet
      layer, to perform multi-layer/multi-domain path computation.

      The role of the P-PNC is minimized, i.e., it is limited to
      management.

   3. Full knowledge: The MDSC has a complete and sufficiently
      detailed view of the TE topology of all the network domains
      (both optical and packet). In such a case, the MDSC has all the
      information needed to perform multi-domain/multi-layer path
      computation, without relying on the PNCs.

      This model may present scalability issues as a potential
      drawback and, as discussed in section 2.2 of [PATH-COMPUTE],
      performing path computation for optical networks in the MDSC is
      quite challenging, because the optimal paths also depend on
      vendor-specific optical attributes (which may differ between the
      two domains if they are provided by different vendors).
   The current version of this draft assumes that the MDSC supports at
   least model #2 (Partial summarization).

   [Note: check with operators for some references on real deployment]
2.2.1. Hard Isolation

   For example, when the "Hard Isolation with or w/o deterministic
   latency" TE binding requirement is applied to an L2/L3VPN, new
   Optical Tunnels need to be set up to support dedicated IP Links
   between the PEs and the BRs.

   The MDSC needs to identify the set of IP/MPLS domains and their
   BRs. This requires the MDSC to request each O-PNC to compute the
   intra-domain optical paths between each PE/BR pair.
   When requesting optical path computation from the O-PNC, the MDSC
   needs to take into account the inter-layer peering points, such as
   the interconnections between the PE/BR nodes and the edge Optical
   nodes (e.g., using the inter-layer lock or the transitional link
   information, defined in [RFC8795]).
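
   As a rough sketch, an O-PNC could expose such an inter-layer peering
   point by reporting an Inter-Layer Lock identifier on the Ethernet
   access link, with the same identifier reported on the associated
   Optical tunnel termination point. The exact placement of these
   attributes should be checked against [RFC8795]; the identifiers
   below are hypothetical:

      {
        "ietf-network-topology:link": [
          {
            "link-id": "ROADM1-client-to-PE1",
            "ietf-te-topology:te": {
              "te-link-attributes": {
                "inter-layer-lock-id": 100
              }
            }
          }
        ]
      }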
   When the optimal multi-layer/multi-domain path has been computed,
   the MDSC requests each O-PNC to set up the selected Optical Tunnels
   and each P-PNC to set up the intra-domain MPLS-TE Tunnels over the
   selected Optical Tunnels. The MDSC also properly configures the BGP
   speakers and the PE/BR forwarding tables to ensure that the VPN
   traffic is properly forwarded.
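
   For example, the MDSC might request an Optical Tunnel from an O-PNC
   along the lines of the following sketch, which assumes the generic
   TE Tunnel YANG model [TE-TUNNEL] (still under definition at the
   time of writing) with OTN technology-specific values from the
   "ietf-te-types" module; leaf names may differ across model
   revisions, and the host name and all identifiers are hypothetical:

      POST /restconf/data/ietf-te:te/tunnels HTTP/1.1
      Host: o-pnc1.example.com
      Content-Type: application/yang-data+json

      {
        "ietf-te:tunnel": [
          {
            "name": "odu-tunnel-pe1-br1",
            "source": "10.0.0.1",
            "destination": "10.0.0.5",
            "encoding": "ietf-te-types:lsp-encoding-oduk",
            "switching-type": "ietf-te-types:switching-otn"
          }
        ]
      }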
2.2.2. Shared Tunnel Selection
   In the case of shared tunnel selection, the MDSC needs to check
   whether there is a multi-domain path which can support the L2/L3VPN
   end-to-end TE service requirements (e.g., bandwidth, latency, etc.)
   using existing intra-domain MPLS-TE tunnels.

   If such a path is found, the MDSC selects the optimal path from the
   candidate pool and requests each P-PNC to set up the L2/L3VPN
   service using the selected intra-domain MPLS-TE tunnels between the
   PE/BR nodes.
   Otherwise, the MDSC should detect whether the multi-domain path can
   be set up using existing intra-domain MPLS-TE tunnels with
   modifications (e.g., increasing the tunnel bandwidth) or by setting
   up new intra-domain MPLS-TE tunnel(s).
   The modification of an existing MPLS-TE Tunnel, as well as the setup
   of a new MPLS-TE Tunnel, may also require multi-layer coordination,
   e.g., in case the available bandwidth of the underlying Optical
   Tunnels is not sufficient. Based on multi-domain/multi-layer path
   computation, the MDSC can decide, for example, to modify the
   bandwidth of an existing Optical Tunnel (e.g., an ODUflex bandwidth
   increase) or to set up new Optical Tunnels, to be used either as
   additional LAG members of an existing IP Link or as new IP Links
   over which to re-route the MPLS-TE Tunnel.

   In all these cases, the labels used by the end-to-end tunnel are
   distributed to the PE and BR nodes by BGP. The MDSC is responsible
   for configuring, through each P-PNC, the BGP speakers, if needed.
2.3. IP/MPLS Domain Controller and NE Functions

   IP/MPLS networks are assumed to have multiple domains, where each
   domain, corresponding to either an IGP area or an Autonomous System
   (AS) within the same operator network, is controlled by an IP/MPLS
   domain controller (P-PNC).

   Among the functions of the P-PNC are the setup or modification of
   the intra-domain MPLS-TE Tunnels between PEs and BRs, and the
   configuration of the VPN services, such as the VRF in the PE nodes,
   as shown in Figure 3:
      +------------------+           +------------------+
      |                  |           |                  |
      |      P-PNC1      |           |      P-PNC2      |
      |                  |           |                  |
      +--|-----------|---+           +--|-----------|---+
         | 1.Tunnel  | 2.VPN            | 1.Tunnel  | 2.VPN
         |   Config  | Provisioning     |   Config  | Provisioning
         V           V                  V           V
       +---------------------+       +---------------------+
    CE / PE    tunnel 1   BR  \     /  BR    tunnel 2   PE  \ CE
    o--/---o..................o--\-----/--o..................o---\--o
       \                     /         \                     /
        \      Domain 1     /           \      Domain 2     /
         +-----------------+             +-----------------+

                         End-to-end tunnel
       <------------------------------------------------->

           Figure 3 IP/MPLS Domain Controller & NE Functions
implemented as the same interface with the two different
capabilities. The split is just functional but doesn't have to be
also logical.
It is assumed that BGP is running in the inter-domain IP/MPLS
networks for L2/L3VPN and that the P-PNC is also responsible for
configuring the BGP speakers within its control domain, if
necessary.
BGP is responsible for the label distribution of the end-to-end
tunnel on PE and BR nodes. The MDSC is responsible for the selection
of the BRs and of the intra-domain MPLS-TE Tunnels between PE/BR
nodes.
If new MPLS-TE Tunnels are needed, or modifications (e.g., bandwidth
increase) to existing MPLS-TE Tunnels are needed, as outlined in
section 2.2, the MDSC requests their setup or modification to the
P-PNCs (step 1 in Figure 3). Then the MDSC requests the P-PNC to
configure the VPN, including the selection of the intra-domain TE
Tunnel (step 2 in Figure 3).
The P-PNC should configure, using mechanisms outside the scope of
this document, the ingress PE forwarding table, e.g., the VRF, to
forward the VPN traffic, received from the CE, with the following
three labels:
o VPN label: assigned by the egress PE and distributed by BGP;

o end-to-end LSP label: assigned by the egress BR, selected by the
  MDSC, and distributed by BGP;

o MPLS-TE tunnel label: assigned by the next-hop P node of the
  tunnel selected by the MDSC and distributed by mechanisms internal
  to the IP/MPLS domain (e.g., RSVP-TE).
2.4. Optical Domain Controller and NE Functions
The optical network provides the underlay connectivity services to
the IP/MPLS networks. Packet/Optical multi-layer coordination is
performed by the MDSC, as shown in Figure 1.
The O-PNC is responsible to:

o provide to the MDSC an abstract TE topology view of its underlying
  optical network resources;

o perform single-domain local path computation, when requested by
  the MDSC;

o perform Optical Tunnel setup, when requested by the MDSC.
The mechanisms used by the O-PNC to perform intra-domain topology
discovery and path setup are usually vendor-specific and outside the
scope of this document.
Depending on the type of optical network, TE topology abstraction,
path computation, and path setup can be single-layer (either OTN or
WDM) or multi-layer OTN/WDM. In the latter case, the multi-layer
coordination between the OTN and WDM layers is performed by the
O-PNC.
3. Interface protocols and YANG data models for the MPIs
This section describes general assumptions which are applicable to
all the MPI interfaces, between each PNC (Optical or Packet) and the
MDSC, and to all the scenarios discussed in this document.
3.1. RESTCONF protocol at the MPIs
The RESTCONF protocol, as defined in [RFC8040], using the JSON
representation, defined in [RFC7951], is assumed to be used at these
interfaces. Extensions to RESTCONF, as defined in [RFC8527], to be
compliant with the Network Management Datastore Architecture (NMDA)
defined in [RFC8342], are assumed to be used as well at these MPI
interfaces and also at the CMI interfaces.
3.2. YANG data models at the MPIs
The data models used on these interfaces are assumed to use the YANG
1.1 Data Modeling Language, as defined in [RFC7950].
3.2.1. Common YANG data models at the MPIs
As required in [RFC8040], the "ietf-yang-library" YANG module
defined in [RFC8525] is used to allow the MDSC to discover the set
of YANG modules supported by each PNC at its MPI.
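A possible discovery exchange, sketched in Python under the same
assumptions as in the previous example:

   # Sketch: the MDSC reads the YANG library [RFC8525] of a PNC to
   # learn which YANG modules (and revisions) the PNC supports.
   import requests

   r = requests.get("https://p-pnc1.example.com/restconf/data/"
                    "ietf-yang-library:yang-library",
                    headers={"Accept": "application/yang-data+json"},
                    auth=("mdsc", "secret"), timeout=30)
   r.raise_for_status()
   library = r.json()["ietf-yang-library:yang-library"]
   for module_set in library.get("module-set", []):
       for mod in module_set.get("module", []):
           print(mod["name"], mod.get("revision", ""))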
Both Optical and Packet PNCs use the following common topology YANG
models at the MPI to report their abstract topologies:
o The Base Network Model, defined in the "ietf-network" YANG module
  of [RFC8345]

o The Base Network Topology Model, defined in the "ietf-network-
  topology" YANG module of [RFC8345], which augments the Base
  Network Model

o The TE Topology Model, defined in the "ietf-te-topology" YANG
  module of [RFC8795], which augments the Base Network Topology
  Model with TE specific information.
These common YANG models are generic and augmented by technology-
specific YANG modules as described in the following sections.
Both Optical and Packet PNCs must use the following common
notification YANG models at the MPI so that any network changes can
be reported almost in real-time to the MDSC by the PNCs:
o Dynamic Subscription to YANG Events and Datastores over RESTCONF,
  as defined in [RFC8650]

o Subscription to YANG Notifications for Datastore Updates, as
  defined in [RFC8641]
o When "Hard Isolation with or w/o deterministic latency" (i.e., PNCs and MDSCs must be compliant with subscription requirements as
the first and the second type) TE binding requirement is applied stated in [RFC7923].
for a L3VPN, a new optical layer tunnel has to be created (Step 1
in Figure 4). This operation requires the following control level
mechanisms as follows:
3.2.2. YANG models at the Optical MPIs
The Optical PNC also uses at least the following technology-specific
topology YANG models, providing WDM and Ethernet technology-specific
augmentations of the generic TE Topology Model:
o The WSON Topology Model, defined in the "ietf-wson-topology" YANG
  module of [WSON-TOPO], or the Flexi-grid Topology Model, defined
  in the "ietf-flexi-grid-topology" YANG module of [Flexi-TOPO].

o Optionally, when the OTN layer is used, the OTN Topology Model, as
  defined in the "ietf-otn-topology" YANG module of [OTN-TOPO].

o The Ethernet Topology Model, defined in the "ietf-eth-te-topology"
  YANG module of [CLIENT-TOPO].

o Optionally, when the OTN layer is used, the network data model for
  L1 OTN services (e.g., an Ethernet transparent service), as
  defined in the "ietf-trans-client-service" YANG module of
  draft-ietf-ccamp-client-signal-yang [CLIENT-SIGNAL].
The WSON Topology Model or, alternatively, the Flexi-grid Topology
Model is used to report the DWDM network topology (e.g., ROADMs and
links), depending on whether the DWDM optical network is based on
fixed-grid or flexible-grid.
The Ethernet Topology Model is used to report the access links
between the IP routers and the edge ROADMs.
The Optical PNC uses at least the following YANG models:
o The TE Tunnel Model, defined in the "ietf-te" YANG module of
  [TE-TUNNEL]

o The WSON Tunnel Model, defined in the "ietf-wson-tunnel" YANG
  module of [WSON-TUNNEL], or the Flexi-grid Media Channel Model,
  defined in the "ietf-flexi-grid-media-channel" YANG module of
  [Flexi-MC]

o Optionally, when the OTN layer is used, the OTN Tunnel Model,
  defined in the "ietf-otn-tunnel" YANG module of [OTN-TUNNEL].

o The Ethernet Client Signal Model, defined in the "ietf-eth-tran-
  service" YANG module of [CLIENT-SIGNAL].
The TE Tunnel Model is generic and augmented by technology-specific
models such as the WSON Tunnel Model and the Flexi-grid Media
Channel Model.
The WSON Tunnel Model or, alternatively, the Flexi-grid Media
Channel Model is used to set up connectivity within the DWDM
network, depending on whether the DWDM optical network is based on
fixed-grid or flexible-grid.
The Ethernet Client Signal Model is used to configure the steering
of the Ethernet client traffic between Ethernet access links and TE
Tunnels, which in this case could be either WSON Tunnels or
Flexi-grid Media Channels. This model is generic and applies to any
technology-specific TE Tunnel: technology-specific attributes are
provided by the technology-specific models which augment the generic
TE Tunnel Model.
3.2.3. YANG data models at the Packet MPIs
The Packet PNC also uses at least the following technology-specific
topology YANG models, providing IP and Ethernet technology-specific
augmentations of the generic Topology Models described in section
3.2.1:
o The L3 Topology Model, defined in the "ietf-l3-unicast-topology"
  YANG module of [RFC8346], which augments the Base Network Topology
  Model

o The L3-specific data model, including extended TE attributes
  (e.g., performance-derived metrics like latency), defined in the
  "ietf-l3-te-topology" and "ietf-te-topology-packet" YANG modules
  of draft-ietf-teas-l3-te-topo [L3-TE-TOPO]

o The Ethernet Topology Model, defined in the "ietf-eth-te-topology"
  YANG module of [CLIENT-TOPO], which augments the TE Topology Model
The Ethernet Topology Model is used to report the access links
between the IP routers and the edge ROADMs as well as the
inter-domain links between ASBRs, while the L3 Topology Model is
used to report the IP network topology (e.g., IP routers and links).
o The User Network Interface (UNI) Topology Model, being defined in
  the "ietf-uni-topology" module of draft-ogondio-opsawg-uni-
  topology [UNI-TOPO], which augments the "ietf-network" module
  defined in [RFC8345], adding service attachment points to the
  nodes to which L2VPN/L3VPN IP/MPLS services can be attached.
o The L3VPN network data model, defined in the "ietf-l3vpn-ntw"
  module of draft-ietf-opsawg-l3sm-l3nm [L3NM], used on the non-ACTN
  MPI for L3VPN service provisioning
o The L2VPN network data model, defined in the "ietf-l2vpn-ntw"
  module of draft-ietf-barguil-opsawg-l2sm-l2nm [L2NM], used on the
  non-ACTN MPI for L2VPN service provisioning
[Editor's note:] Add YANG models used for tunnel and service
configuration.
4. Multi-layer and multi-domain services scenarios
Multi-layer and multi-domain scenarios, based on the reference
network described in section 2 and very relevant for Service
Providers, are described in the next sections. For each scenario,
existing IETF protocols and data models are identified, with
particular focus on the MPI in the ACTN architecture. Non-ACTN IETF
data models required for L2/L3VPN service provisioning between the
MDSC and the IP PNCs are also identified.
4.1. Scenario 1: network and service topology discovery
In this scenario, the MDSC needs to discover, through the underlying
PNCs, the network topology at both WDM and IP layers, in terms of
nodes (NEs) and links, including inter-AS domain links as well as
cross-layer links, but also in terms of tunnels (MPLS or SR paths in
the IP layer, and OCh and optionally ODUk tunnels in the optical
layer). The MDSC also discovers the IP/MPLS transport services
(L2VPN/L3VPN) deployed, both intra-domain and inter-domain.
Each PNC provides to the MDSC an abstracted or full topology view of
the WDM or the IP topology of the domain it controls. This topology
can be abstracted in the sense that some detailed NE information is
hidden at the MPI, and all or some of the NEs and related physical
links are exposed as abstract nodes and logical (virtual) links,
depending on the level of abstraction the user requires. This
information is key to understanding both the inter-AS domain links
(seen by each controller as UNI interfaces but as I-NNI interfaces
by the MDSC) and the cross-layer mapping between the IP and WDM
layers.
The MDSC also maintains an up-to-date network database of both IP
and WDM layers (and optionally the OTN layer) through the use of
IETF notifications over the MPIs with the PNCs when any topology
change occurs. It should also be possible to correlate information
coming from the IP and WDM layers (e.g., which port, lambda/OTSi,
and direction are used by a specific IP service on the WDM
equipment).
In particular, for the cross-layer links, it is key for the MDSC to
be able to automatically correlate the information from the PNC
network databases about the physical ports of the routers (single
links or bundled links for LAG) with the client ports in the ROADMs.
It should be possible at the MDSC level to easily correlate WDM and
IP layer alarms to speed up troubleshooting.
Alarms and event notifications are required between the MDSC and the
PNCs so that any network changes are reported almost in real-time to
the MDSC (e.g., NE or link failure, MPLS tunnel switched from main
to backup path, etc.). As specified in [RFC7923], the MDSC must be
able to subscribe to specific objects from the PNC YANG datastores
for notifications.
4.1.1. Inter-domain link discovery
In the reference network of Figure 1, there are two types of
inter-domain links:
o Links between two IP domains (ASes)

o Links between an IP router and a ROADM
As discussed in [ACTN-PM], the Orchestrator is responsible to Both types of links are Ethernet physical links.
collect domain LSP-level performance monitoring data from domain
controllers and to derive and report end-to-end tunnel performance
monitoring information to the customer.
5.3.2. Scenario B: Isolated VN/Tunnel Establishment The inter-domain link information is reported to the MDSC by the two
adjacent PNCs, controlling the two ends of the inter-domain link.
The MDSC needs to understa how to merge the these inter-domain
Ethernet links together.
This document considers the following two options for discovering
inter-domain links:
1. Static configuration

2. LLDP [IEEE 802.1AB] automatic discovery
Other options are possible but not described in this document.
The MDSC can understand how to merge these inter-domain links
together using the plug-id attribute defined in the TE Topology
Model [RFC8795], as described in section 4.3 of [RFC8795].
A more detailed description of how the plug-id can be used to
discover inter-domain links is also provided in section 5.1.4 of
[TNBI].
Both types of inter-domain links are discovered using the plug-id
attributes reported in the Ethernet Topologies exposed by the two
adjacent PNCs. The MDSC can also discover an inter-domain IP
link/adjacency between the two IP LTPs, reported in the IP
Topologies exposed by the two adjacent P-PNCs, supported by the two
ETH LTPs of an Ethernet Link discovered between these two P-PNCs.
The static configuration requires an administrative burden to
configure network-wide unique identifiers: it is therefore more
viable for inter-AS links. For the links between the IP routers and
the Optical NEs, the automatic discovery solution based on LLDP
snooping is preferable, when LLDP snooping is supported by the
Optical NEs.
As outlined in [TNBI], the encoding of the plug-id namespace, as
well as of the LLDP information within the plug-id value, is
implementation-specific and needs to be consistent across all the
PNCs.
4.1.2. IP Link Setup Procedure
The MDSC requires the O-PNC to set up a WDM Tunnel (either a WSON
Tunnel or a Flexi-grid Tunnel) within the DWDM network between the
two Optical Transponders (OTs) associated with the two access links.
The Optical Transponders are reported by the O-PNC as Trail
Termination Points (TTPs), defined in [RFC8795], within the WDM
Topology. The association between the Ethernet access link and the
WDM TTP is reported by the Inter-Layer Lock (ILL) identifiers,
defined in [RFC8795], reported by the O-PNC within the Ethernet
Topology and the WDM Topology.
The MDSC also requires the O-PNC to steer the Ethernet client
traffic between the two access Ethernet Links over the WDM Tunnel.
After the WDM Tunnel has been set up and the client traffic steering
configured, the two IP routers can exchange Ethernet packets between
themselves, including LLDP messages.
If LLDP [IEEE 802.1AB] is used between the two routers, the P-PNC
can automatically discover the IP Link being set up by the MDSC. The
IP LTPs terminating this IP Link are supported by the ETH LTPs
terminating the two access links.
Otherwise, the MDSC needs to require the P-PNC to configure an IP
Link between the two routers: the MDSC also configures the two ETH
LTPs which support the two IP LTPs terminating this IP Link.
4.2. L2VPN/L3VPN establishment
To be added.
[Editor's Note] What mechanism would convey, on the interface to the
IP/MPLS domain controllers as well as on the SBI (between the
IP/MPLS domain controllers and the IP/MPLS PE routers), the TE
binding policy dynamically for the L3VPN? Typically, the VRF is the
function of the device that participates in MP-BGP in an MPLS VPN.
With the current MP-BGP implementation in MPLS VPN, the VRF's BGP
next hop is the destination PE, and the mapping to a tunnel (either
an LDP or a BGP tunnel) toward the destination PE is done
automatically without any configuration. The impact on the PE VRF
operation when the tunnel is an optical bypass tunnel, which does
not participate in either LDP or BGP, is to be determined.
The following text addresses the question in the Editor's Note
above:
The MDSC Network-related function will then coordinate with the PNCs
involved in the process to provide the provisioning information
through the ACTN MDSC-to-PNC (MPI) interface. The relevant data
models used at the MPI may be in the form of L3NM, L2NM, or others,
and are exchanged through MPI API calls. Through this process, the
MDSC Network-related functions provide to the PNCs the configuration
information needed to realize a VPN service. For example, this
process will inform the PNCs of which PE routers compose an L3VPN,
the topology requested, the VPN attributes, etc.
At the end of the process, the PNCs will deliver the actual
configuration to the devices (either physical or virtual) through
the ACTN Southbound Interface (SBI). In this case, the configuration
policies may be exchanged using a NETCONF session delivering
configuration commands associated with device-specific data models
(e.g., BGP [], QoS [], etc.).
Having the topology information of the network domains under their
control, the PNCs will deliver all the information necessary to
create, update, optimize, or delete the tunnels connecting the PE
nodes, as requested by the VPN instantiation.
5. Security Considerations
Several security considerations have been identified and will be
discussed in future versions of this document.
6. Operational Considerations
Telemetry data, such as the collection of lower-layer networking
health and consideration of network and service performance from POI
domain controllers, may be required. These requirements and
capabilities will be discussed in future versions of this document.
7. IANA Considerations
This document requires no IANA actions.
8. References
8.1. Normative References
[RFC7950] Bjorklund, M. et al., "The YANG 1.1 Data Modeling
          Language", RFC 7950, August 2016.

[RFC7951] Lhotka, L., "JSON Encoding of Data Modeled with YANG",
          RFC 7951, August 2016.

[RFC8040] Bierman, A. et al., "RESTCONF Protocol", RFC 8040,
          January 2017.
[RFC8345] Clemm, A. et al., "A YANG Data Model for Network
          Topologies", RFC 8345, March 2018.

[RFC8346] Clemm, A. et al., "A YANG Data Model for Layer 3
          Topologies", RFC 8346, March 2018.

[RFC8453] Ceccarelli, D., Lee, Y. et al., "Framework for Abstraction
          and Control of TE Networks (ACTN)", RFC 8453, August 2018.

[RFC8525] Bierman, A. et al., "YANG Library", RFC 8525, March 2019.
[RFC8795] Liu, X. et al., "YANG Data Model for Traffic Engineering
          (TE) Topologies", RFC 8795, August 2020.
[IEEE 802.1AB] IEEE 802.1AB-2016, "IEEE Standard for Local and
          metropolitan area networks - Station and Media Access
          Control Connectivity Discovery", March 2016.
[WSON-TOPO] Lee, Y. et al., "A YANG Data Model for WSON (Wavelength
          Switched Optical Networks)", draft-ietf-ccamp-wson-yang,
          work in progress.

[Flexi-TOPO] Lopez de Vergara, J. E. et al., "YANG data model for
          Flexi-Grid Optical Networks", draft-ietf-ccamp-flexigrid-
          yang, work in progress.
[OTN-TOPO] Zheng, H. et al., "A YANG Data Model for Optical
Transport Network Topology", draft-ietf-ccamp-otn-topo-
yang, work in progress.
[CLIENT-TOPO] Zheng, H. et al., "A YANG Data Model for Client-layer
          Topology", draft-zheng-ccamp-client-topo-yang, work in
          progress.

[L3-TE-TOPO] Liu, X. et al., "YANG Data Model for Layer 3 TE
          Topologies", draft-ietf-teas-yang-l3-te-topo, work in
          progress.

[TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic
          Engineering Tunnels and Interfaces", draft-ietf-teas-yang-
          te, work in progress.

[WSON-TUNNEL] Lee, Y. et al., "A Yang Data Model for WSON Tunnel",
          draft-ietf-ccamp-wson-tunnel-model, work in progress.

[Flexi-MC] Lopez de Vergara, J. E. et al., "YANG data model for
          Flexi-Grid media-channels", draft-ietf-ccamp-flexigrid-
          media-channel-yang, work in progress.
[OTN-TUNNEL] Zheng, H. et al., "OTN Tunnel YANG Model", draft-
ietf-ccamp-otn-tunnel-model, work in progress.
[CLIENT-SIGNAL] Zheng, H. et al., "A YANG Data Model for Transport
          Network Client Signals", draft-ietf-ccamp-client-signal-
          yang, work in progress.
8.2. Informative References
[RFC1930] J. Hawkinson, T. Bates, "Guideline for creation,
selection, and registration of an Autonomous System (AS)",
RFC 1930, March 1996.
[RFC4364] E. Rosen and Y. Rekhter, "BGP/MPLS IP Virtual Private
          Networks (VPNs)", RFC 4364, February 2006.

[RFC4761] K. Kompella, Ed., Y. Rekhter, Ed., "Virtual Private LAN
          Service (VPLS) Using BGP for Auto-Discovery and
          Signaling", RFC 4761, January 2007.

[RFC6074] E. Rosen, B. Davie, V. Radoaca, and W. Luo, "Provisioning,
          Auto-Discovery, and Signaling in Layer 2 Virtual Private
          Networks (L2VPNs)", RFC 6074, January 2011.
[RFC8309] Wu, Q., Liu, W., and A. Farrel, "Service Models
          Explained", RFC 8309, January 2018.
[RFC8466] G. Fioccola, ed., "A YANG Data Model for Layer 2 Virtual
          Private Network (L2VPN) Service Delivery", RFC 8466,
          October 2018.
[TNBI]    Busi, I., King, D. et al., "Transport Northbound Interface
          Applicability Statement", draft-ietf-ccamp-transport-nbi-
          app-statement, work in progress.
[VN]      Y. Lee, et al., "A Yang Data Model for ACTN VN Operation",
          draft-ietf-teas-actn-vn-yang, work in progress.
[L2NM]    S. Barguil, et al., "A Layer 2 VPN Network YANG Model",
          draft-ietf-opsawg-l2nm, work in progress.

[L3NM]    S. Barguil, et al., "A Layer 3 VPN Network YANG Model",
          draft-ietf-opsawg-l3sm-l3nm, work in progress.

[TSM]     Y. Lee, et al., "Traffic Engineering and Service Mapping
          Yang Model", draft-ietf-teas-te-service-mapping-yang, work
          in progress.
[ACTN-PM] Y. Lee, et al., "YANG models for VN & TE Performance
          Monitoring Telemetry and Scaling Intent Autonomics",
          draft-lee-teas-actn-pm-telemetry-autonomics, work in
          progress.
[BGP-L3VPN] D. Jain, et al., "Yang Data Model for BGP/MPLS L3 VPNs",
          draft-ietf-bess-l3vpn-yang, work in progress.
Appendix A. Multi-layer and multi-domain resiliency
A.1. Maintenance Window
Before a planned maintenance operation on the DWDM network takes
place, IP traffic should be moved hitlessly to another link.

The MDSC must reroute IP traffic before the event takes place. It
should be possible to lock IP traffic to the protection route until
the maintenance event is finished, unless a fault occurs on such a
path.
A.2. Router port failure
The focus is on a client-side protection scheme between the IP
router and the reconfigurable ROADM. The scenario is to define only
one port in the routers and in the ROADM muxponder board at both
ends as back-up ports, to recover from any other port failure on the
client side of the ROADM (either on the router port side, on the
muxponder side, or on the link between them). When a client-side
port failure occurs, alarms are raised to the MDSC by the P-PNC and
the O-PNC (port status down, LOS, etc.). The MDSC checks with the
O-PNC(s) that there is no optical failure in the optical layer.
There can be two cases here:
a) A LAG was defined between the two end routers. The MDSC, after
   checking that the optical layer is fine between the two end
   ROADMs, triggers the ROADM configuration so that the router
   back-up port, with its associated muxponder port, can reuse the
   OCh that was previously in use by the failed router port, and
   adds the new link to the LAG on the failure side.

   While the ROADM reconfiguration takes place, IP/MPLS traffic uses
   the reduced bandwidth of the IP link bundle, discarding
   lower-priority traffic if required. Once the back-up port has
   been reconfigured to reuse the existing OCh and the new link has
   been added to the LAG, the original bandwidth is recovered
   between the end routers.

   Note: in this LAG scenario, it is assumed that BFD is running at
   the LAG level, so that nothing is triggered at the MPLS level
   when one of the link members of the LAG fails.
b) If there is no LAG, the scenario is not clear, since a router
   port failure would automatically trigger (through BFD failure)
   first a sub-50ms protection at the MPLS level: FRR (MPLS RSVP-TE
   case) or TI-LFA (MPLS-based SR-TE case) through a protection
   port. At the same time, the MDSC, after checking that the optical
   network connection is still fine, would trigger the
   reconfiguration of the back-up port of the router and of the
   ROADM muxponder to reuse the same OCh as the one used originally
   for the failed router port. Once everything has been correctly
   configured, the MDSC Global PCE could suggest to the operator to
   trigger a possible re-optimisation of the back-up MPLS path, to
   go back to the MPLS primary path through the back-up port of the
   router and the original OCh, if the overall cost, latency, etc.
   is improved. However, in this scenario, there is a need for a
   protection port plus a back-up port in the router, which does not
   lead to clear port savings.
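The decision logic of cases (a) and (b) can be summarized with the
following rough, non-normative sketch; every function name is a
placeholder for behaviour described above.

   # Rough sketch of the MDSC recovery logic for a client-side port
   # failure; all methods are hypothetical placeholders.
   def handle_client_port_failure(mdsc, alarm):
       if not mdsc.optical_layer_ok(alarm.och):
           # Not a pure client-side failure: handle in optical layer
           return mdsc.escalate_optical_fault(alarm)
       # Reconfigure the back-up router/muxponder ports to reuse the
       # existing OCh
       mdsc.reconfigure_backup_ports(alarm.och)
       if alarm.link.in_lag():
           # Case (a): re-add the recovered link to the LAG
           mdsc.add_link_to_lag(alarm.link.lag, alarm.link)
       else:
           # Case (b): FRR/TI-LFA already protected the traffic;
           # suggest re-optimisation back to the primary path
           mdsc.suggest_reoptimisation(alarm.link)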
Acknowledgments

This document was prepared using 2-Word-v2.0.template.dot.

Some of this analysis work was supported in part by the European
Commission funded H2020-ICT-2016-2 METRO-HAUL project (G.A. 761727).
Contributors

Sergio Belotti
Young Lee
Sung Kyun Kwan University
Email: younglee.tx@gmail.com

Jeff Tantsura
Apstra
Email: jefftant.ietf@gmail.com
Paolo Volpato
Huawei
Email: paolo.volpato@huawei.com
Authors' Addresses

Fabio Peruzzini
TIM
Email: fabio.peruzzini@telecomitalia.it

Jean-Francois Bouquier
Vodafone
Email: jeff.bouquier@vodafone.com

Italo Busi
Huawei
Email: Italo.busi@huawei.com

Daniel King
Old Dog Consulting