TEAS Working Group                                      Fabio Peruzzini
Internet Draft                                                      TIM
Intended status: Informational                   Jean-Francois Bouquier
                                                               Vodafone
                                                             Italo Busi
                                                                 Huawei
                                                            Daniel King
                                                     Old Dog Consulting
                                                     Daniele Ceccarelli
                                                               Ericsson

Expires: May 2021                                       November 2, 2020

      Applicability of Abstraction and Control of Traffic Engineered
            Networks (ACTN) to Packet Optical Integration (POI)

                 draft-ietf-teas-actn-poi-applicability-01

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on May 2, 2021.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Abstract

   This document considers the applicability of the IETF Abstraction
   and Control of TE Networks (ACTN) architecture to Packet Optical
   Integration (POI) in the context of IP/MPLS and Optical DWDM domain
   internetworking, identifying the YANG data models being defined by
   the IETF to support this deployment architecture as well as
   specific scenarios relevant for Service Providers.

   Existing IETF protocols and data models are identified for each
   multi-layer (packet over optical) scenario, with particular focus
   on the MPI (Multi-Domain Service Coordinator to Provisioning
   Network Controllers Interface) in the ACTN architecture.

Table of Contents

   1. Introduction...................................................3
   2. Reference architecture and network scenario....................4
      2.1. L2/L3VPN Service Request in North Bound of MDSC...........8
      2.2. Service and Network Orchestration........................10
         2.2.1. Hard Isolation......................................12
         2.2.2. Shared Tunnel Selection.............................13
      2.3. IP/MPLS Domain Controller and NE Functions...............13
      2.4. Optical Domain Controller and NE Functions...............15
   3. Interface protocols and YANG data models for the MPIs.........15
      3.1. RESTCONF protocol at the MPIs............................15
      3.2. YANG data models at the MPIs.............................16
         3.2.1. Common YANG data models used at the MPIs............16
         3.2.2. YANG models used at the Optical MPIs................16
         3.2.3. YANG data models at the Packet MPIs.................18
   4. Multi-layer and multi-domain services scenarios...............19
      4.1. Scenario 1: network and service topology discovery.......19
         4.1.1. Inter-domain link discovery.........................20
         4.1.2. IP Link Setup Procedure.............................21
      4.2. L2VPN/L3VPN establishment................................22
   5. Security Considerations.......................................22
   6. Operational Considerations....................................23
   7. IANA Considerations...........................................23
   8. References....................................................23
      8.1. Normative References.....................................23
      8.2. Informative References...................................24
   Appendix A.    Multi-layer and multi-domain resiliency...........27
      A.1.  Maintenance Window......................................27
      A.2.  Router port failure.....................................27
   Acknowledgments..................................................28
   Contributors.....................................................28
   Authors' Addresses...............................................29

1. Introduction

   The full automation of the management and control of Service
   Providers' transport networks (IP/MPLS, optical, and also
   microwave) is key for achieving the new challenges coming with 5G,
   as well as the increased demand in terms of business agility and
   mobility in a digital world. The ACTN architecture, by abstracting
   the network complexity from the Optical and IP/MPLS networks
   towards the MDSC, and then from the MDSC towards the OSS/BSS or
   Orchestration layer, through the use of standard interfaces and
   data models, allows a wide range of transport connectivity services
   to be requested by the upper layers, fulfilling almost any kind of
   service-level requirements from a network perspective (e.g.,
   physical diversity, latency, bandwidth, topology, etc.).

   Packet Optical Integration (POI) is an advanced use case of traffic
   engineering. In wide-area networks, a packet network based on the
   Internet Protocol (IP), and possibly Multiprotocol Label Switching
   (MPLS), is typically realized on top of an optical transport
   network that uses Dense Wavelength Division Multiplexing (DWDM)
   (and optionally an Optical Transport Network (OTN) layer). In many
   existing network deployments, the packet and the optical networks
   are engineered and operated independently of each other. There are
   technical differences between the technologies (e.g., routers vs.
   optical switches) and the corresponding network engineering and
   planning methods (e.g., inter-domain peering optimization in IP vs.
   dealing with physical impairments in DWDM, or very different time
   scales). In addition, customers and customer needs vary between a
   packet and an optical network, and it is not uncommon to use
   different vendors in both domains. Last but not least,
   state-of-the-art packet and optical networks use sophisticated but
   complex technologies, and for a network engineer, it may not be
   trivial to be a full expert in both areas. As a result, packet and
   optical networks are often operated in technical and organizational
   silos.

   This separation is inefficient for many reasons. Both capital
   expenditure (CAPEX) and operational expenditure (OPEX) could be
   significantly reduced by better integrating the packet and the
   optical network. Multi-layer online topology insight can speed up
   troubleshooting (e.g., alarm correlation) and network operation
   (e.g., coordination of maintenance events), multi-layer offline
   topology inventory can improve service quality (e.g., detection of
   diversity constraint violations) and multi-layer traffic engineering
   can use the available network capacity more efficiently (e.g.,
   coordination of restoration). In addition, provisioning workflows
   can be simplified or automated as needed across layers (e.g., to
   achieve bandwidth on demand, or to perform maintenance events).

   The ACTN framework enables this complete multi-layer and
   multi-vendor integration of packet and optical networks through the
   MDSC and packet and optical PNCs.

   In this document, key scenarios for Packet Optical Integration
   (POI) are described from the packet service layer perspective. The
   objective is to explain the benefit and the impact for both the
   packet and the optical layer, and to identify the required
   coordination between both layers. Precise definitions of scenarios
   can help with achieving a common understanding across different
   disciplines. The focus of the scenarios is on IP/MPLS networks
   operated as a client of optical DWDM networks. The scenarios are
   ordered by increasing level of integration and complexity. For each
   multi-layer scenario, the document analyzes how to use the
   interfaces and data models of the ACTN architecture.

   Understanding the level of standardization and the possible gaps
   will help to assess the feasibility of integration between the IP
   and the Optical DWDM domain (and optionally the OTN layer), from an
   end-to-end multi-vendor service provisioning perspective.

2. Reference architecture and network scenario

   This document analyses a number of deployment scenarios for Packet
   and Optical Integration (POI), in which the ACTN hierarchy is
   deployed to control a multi-layer and multi-domain network, with
   two Optical domains and two Packet domains, as shown in Figure 1:

                              +----------+
                              |   MDSC   |
                              +-----+----+
                                    |
                  +-----------+-----+------+-----------+
                  |           |            |           |
             +----+----+ +----+----+  +----+----+ +----+----+
             | P-PNC 1 | | O-PNC 1 |  | O-PNC 2 | | P-PNC 2 |
             +----+----+ +----+----+  +----+----+ +----+----+
                  |           |            |           |
                  |           \            /           |
        +-------------------+  \          /  +-------------------+
   CE  / PE              BR   \ |        |  / BR              PE  \  CE
   o--/---o               o---\-|-------|--/---o               o---\--o
      \   :               :   / |       |  \   :               :   /
        \  :  PKT Domain 1 :  /  |       |   \  :  PKT Domain 2 :  /
        +-:---------------:-+   |       |    +-:---------------:--+
          :               :     |       |      :               :
          :               :     |       |      :               :
        +-:---------------:------+     +-------:---------------:--+
       /  :               :       \   /        :               :   \
      /   o...............o        \ /         o...............o    \
      \     Optical Domain 1       / \       Optical Domain 2       /
       \                          /   \                            /
        +------------------------+     +--------------------------+

                       Figure 1 - Reference Scenario 1

   The ACTN architecture, defined in [RFC8453], is used to control
   this multi-domain network, where each Packet PNC (P-PNC) is
   responsible for controlling its IP domain, which can be either an
   Autonomous System (AS) [RFC1930] or an IGP area within the same
   operator network, and each Optical PNC (O-PNC) is responsible for
   controlling its Optical domain.

   The routers between IP domains can be either AS Boundary Routers
   (ASBRs) or Area Border Routers (ABRs): in this document, the
   generic term Border Router (BR) is used to represent either an
   ASBR or an ABR.

   The MDSC is responsible for coordinating the whole multi-domain
   multi-layer (Packet and Optical) network. A specific standard
   interface (MPI) permits the MDSC to interact with the different
   Provisioning Network Controllers (O/P-PNCs).

   The MPI presents an abstracted topology to the MDSC, hiding
   technology-specific aspects of the network and hiding topology
   details depending on the policy chosen regarding the level of
   abstraction supported. The level of abstraction can be obtained
   based on P-PNC and O-PNC configuration parameters (e.g., provide
   the potential connectivity between any PE and any BR in an MPLS-TE
   network).
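
   For illustration, the MDSC could retrieve the abstracted topology
   exposed by a PNC at its MPI with a RESTCONF request like the one
   sketched below. This is a minimal sketch: the attributes actually
   returned depend on the topology YANG models supported by the PNC
   (see section 3), and the host and identifier values are purely
   illustrative:

   GET /restconf/data/ietf-network:networks HTTP/1.1
   Host: o-pnc1.example.com
   Accept: application/yang-data+json

   HTTP/1.1 200 OK
   Content-Type: application/yang-data+json

   {
     "ietf-network:networks": {
       "network": [
         {
           "network-id": "optical-domain-1",
           "node": [
             {"node-id": "roadm-1"},
             {"node-id": "roadm-2"}
           ]
         }
       ]
     }
   }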

   In the network scenario of Figure 1, it is assumed that:

   o  The domain boundaries between the IP and Optical domains are
      congruent. In other words, one Optical domain supports
      connectivity between Routers in one and only one Packet Domain;

   o  Inter-domain links exist only between Packet domains (i.e.,
      between BRs) and between Packet and Optical domains (i.e.,
      between Routers and Optical NEs). In other words, there are no
      inter-domain links between Optical domains;

   o  The interfaces between the Routers and the Optical NEs are
      "Ethernet" physical interfaces;

   o  The interfaces between the Border Routers (BRs) are "Ethernet"
      physical interfaces.

   This document assumes that the IP Links supported by the Optical
   network are always intra-AS (PE-BR, intra-domain BR-BR, PE-P, BR-P,
   or P-P) and that the BRs are co-located and also connected by an IP
   Link supported by an Ethernet physical link.

   The possibility to setup inter-AS/inter-area IP Links (e.g.,
   inter-domain BR-BR or PE-PE), supported by the Optical network, is
   for further study.

   Therefore, if inter-domain links between the Optical domains exist,
   they would be used to support multi-domain Optical services, which
   are outside the scope of this document.

   The Optical NEs within the optical domains can be ROADMs or OTN
   switches, with or without a ROADM.

   The MDSC in Figure 1 is responsible for multi-domain and
   multi-layer coordination across multiple Packet and Optical
   domains, as well as to provide L2/L3VPN services.

   Although new technologies (e.g., QSFP-DD ZR 400G) are making it
   convenient to fit DWDM pluggable interfaces on the Routers, the
   deployment of those pluggables is not yet widely adopted by
   operators. The reason is that most operators are not yet ready to
   manage Packet and Transport networks in a unified single domain. As
   a consequence, this draft is not addressing the unified scenario;
   this matter will be described in a different draft.

   From an implementation perspective, the functions associated with
   the MDSC described in [RFC8453] may be grouped in different ways:

   1. Both the service- and network-related functions are collapsed
      into a single, monolithic implementation, dealing with the end
      customer service requests, received from the CMI (Customer MDSC
      Interface), and with the adaptation to the relevant network
      models. Such case is represented in Figure 2 of [RFC8453].

   2. An implementation can choose to split the service-related and
      the network-related functions into different functional
      entities, as described in [RFC8309] and in section 4.2 of
      [RFC8453]. In this case, the MDSC is decomposed into a top-level
      Service Orchestrator, interfacing the customer via the CMI, and
      into a Network Orchestrator, interfacing at the MPI southbound
      with the PNCs. The interface between the Service Orchestrator
      and the Network Orchestrator is not specified in [RFC8453].

   3. Another implementation can choose to split the MDSC functions
      between an H-MDSC, responsible for packet-optical multi-layer
      coordination, interfacing with one Optical L-MDSC, providing
      multi-domain coordination between the O-PNCs, and one Packet
      L-MDSC, providing multi-domain coordination between the P-PNCs
      (see for example Figure 9 of [RFC8453]).

   4. Another implementation can also choose to combine the MDSC and
      the P-PNC functions together.

   Please note that, in current service providers' network
   deployments, at the north bound of the MDSC, instead of a CNC,
   there is typically an OSS/Orchestration layer. In this case, the
   MDSC would implement only the Network Orchestration functions, as
   in [RFC8309] and described in point 2 above, and it would deal with
   the network services requests received from the OSS/Orchestration
   layer.

   [Editors' note:] Check for a better term to define the network
   services. It may be worthwhile defining what are the customer and
   network services.

   The OSS/Orchestration layer is a key part of the architecture
   framework for a service provider:

   o  to abstract (through MDSC and PNCs) the underlying transport
      network complexity to the Business Systems Support layer;

   o  to coordinate NFV, Transport (e.g., IP, Optical and Microwave
      networks), Fixed Access, Core and Radio domains, enabling full
      automation of end-to-end services to the end customers;

   o  to enable catalogue-driven service provisioning from external
      applications (e.g., Customer Portal for Enterprise Business
      services), orchestrating the design and lifecycle management of
      these end-to-end transport connectivity services, consuming IP
      and/or Optical transport connectivity services upon request.

   The functionality of the OSS/Orchestration layer as well as the
   interface toward the MDSC are usually operator-specific and outside
   the scope of this draft. This document assumes that the
   OSS/Orchestrator requests the MDSC to setup L2VPN/L3VPN services
   through mechanisms which are outside the scope of this draft.

   There are two main cases when MDSC coordination of the underlying
   PNCs in the POI context is initiated:

   o  Initiated by a request from the OSS/Orchestration layer to setup
      L2VPN/L3VPN services that require multi-layer/multi-domain
      coordination.

   o  Initiated by the MDSC itself to perform multi-layer/multi-domain
      optimizations and/or maintenance works, beyond discovery (e.g.,
      rerouting LSPs with their associated services when putting a
      resource, like a fibre, in maintenance mode during a maintenance
      window). Differently from service fulfillment, these workflows
      are not related at all to a service provisioning request being
      received from the OSS/Orchestration layer.

   The above two MDSC workflow cases are in the scope of this draft or
   of its future versions.

2.1. L2/L3VPN Service Request in North Bound of MDSC

   As explained in section 2, the OSS/Orchestration layer can request
   the MDSC to setup L2/L3VPN services (with or without TE
   requirements).

   Although the interface between the OSS/Orchestration layer and the
   MDSC is usually operator-specific, ideally it would use a
   RESTCONF/YANG interface with a more abstracted version of the MPI
   YANG data models used for network configuration (e.g., L3NM, L2NM).
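
   A minimal sketch of such a request is shown below, assuming a
   RESTCONF interface and an L3NM-style payload. The module and data
   node names are taken from the evolving [L3NM] model and may still
   change; the host name and the identifier values are purely
   illustrative:

   POST /restconf/data/ietf-l3vpn-ntw:l3vpn-ntw/vpn-services HTTP/1.1
   Host: mdsc.example.com
   Content-Type: application/yang-data+json

   {
     "ietf-l3vpn-ntw:vpn-service": [
       {
         "vpn-id": "l3vpn-acme-01",
         "customer-name": "acme",
         "vpn-service-topology": "any-to-any"
       }
     ]
   }

   The MDSC would then map such an abstract service request into the
   network configuration operations described in the following
   sections.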

   Figure 2 shows an example of a possible control flow between the
   OSS/Orchestration layer and the MDSC to instantiate L2/L3VPN
   services, using the YANG models under definition in [VN], [L2NM],
   [L3NM] and [TSM]:

               +-------------------------------------------+
               |                                           |
               |          OSS/Orchestration layer          |
               |                                           |
               +-----------------------+-------------------+
                                       |
                 1.VN    2. L2/L3NM &  |            ^
                   |          TSM      |            |
                   |           |       |            |
                   |           |       |            |
                   v           v       |      3. Update VN
                                       |
               +-----------------------+-------------------+
               |                                           |
               |                  MDSC                     |
               |                                           |
               +-------------------------------------------+

                     Figure 2 Service Request Process

   o  The VN YANG model [VN], whose primary focus is the CMI, can also
      be used to provide VN Service configuration from an orchestrated
      connectivity service point of view, when the L2/L3VPN service
      has TE requirements. This model is not used to setup L2/L3VPN
      services with no TE requirements.

       o It provides the profile of VN in terms of VN members, each of
          which corresponds to an edge-to-edge link between customer
          end-points (VNAPs). It also provides the mappings between
          the VNAPs and the LTPs, and between the connectivity matrix
          and the VN members, through which the associated traffic
          matrix (e.g., bandwidth, latency, protection level, etc.) of
          each VN member is expressed (i.e., via the TE-topology's
          connectivity matrix).

       o The model also provides VN-level preference information
          (e.g., VN member diversity) and VN-level admin-status and
          operational-status.

   o  The L2NM YANG model [L2NM], whose primary focus is the MPI, can
      also be used to provide L2VPN service configuration and site
      information, from an orchestrated connectivity service point of
      view.

   o  The L3NM YANG model [L3NM], whose primary focus is the MPI, can
      also be used to provide all L3VPN service configuration and site
      information, from an orchestrated connectivity service point of
      view.

   o  The TE & Service Mapping YANG model [TSM] provides TE-service
      mapping as well as site mapping.

       o TE-service mapping provides the mapping between a L2/L3VPN
          instance and the corresponding VN instances.

       o The TE-service mapping also provides the service mapping
          requirement type as to how each L2/L3VPN/VN instance is
          created with respect to the underlay TE tunnels (e.g.,
          whether they require a new and isolated set of TE underlay
          tunnels or not). See Section 2.2 for a detailed discussion
          on the mapping requirement types.

       o Site mapping provides the site reference information across
          L2/L3VPN Site ID, VN Access Point ID, and the LTP of the
          access link.
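
   The following sketch illustrates how a TE & Service Mapping
   instance could tie an L2/L3VPN service to a VN and express its TE
   binding requirement and site mapping. The structure and names are
   purely illustrative, since the [TSM] model is still under
   definition; the identifiers reuse the hypothetical examples used
   earlier in this document:

   {
     "te-service-mapping": [
       {
         "l3vpn-id": "l3vpn-acme-01",
         "vn-id": "vn-acme-01",
         "requirement-type": "hard-isolation",
         "site-mapping": [
           {
             "site-id": "acme-site-a",
             "vn-ap-id": "vnap-1",
             "access-link-ltp": "pe1-eth-0/0/1"
           }
         ]
       }
     ]
   }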

2.2. Service and Network Orchestration

   From a functional standpoint, the MDSC represented in Figure 2
   interfaces with the OSS/Orchestration layer and decouples L2/L3VPN
   service configuration functions from network configuration
   functions. Therefore, in this document, the MDSC performs the
   functions of the Network Orchestrator, as defined in [RFC8309].

   One of the important MDSC functions is to identify which TE Tunnels
   should carry the L2/L3VPN traffic (e.g., from the TE & Service
   Mapping configuration) and to relay this information to the
   P-PNCs, to ensure that the PEs' forwarding tables (e.g., VRF) are
   properly populated, according to the TE binding requirement for the
   L2/L3VPN.

   TE binding requirement types [TSM] are:

   1. Hard Isolation with deterministic latency: The L2/L3VPN service
      requires a set of dedicated TE Tunnels providing deterministic
      latency performances, which cannot be shared with other
      services, nor compete for bandwidth with other Tunnels.

   2. Hard Isolation: This is similar to the above case, but without
      deterministic latency requirements.

   3. Soft Isolation: The L2/L3VPN service requires a set of dedicated
      MPLS-TE tunnels, which cannot be shared with other services, but
      which could compete for bandwidth with other Tunnels.

   4. Sharing: The L2/L3VPN service allows sharing with other services
      the MPLS-TE Tunnels supporting it.

   For the first three types, there could be additional TE binding
   requirements with respect to different VN members of the same VN
   (i.e., on how different VN members, belonging to the same VN, can
   or cannot share network resources). For the first two cases, VN
   members can be hard-isolated, soft-isolated, or shared. For the
   third case, VN members can be soft-isolated or shared.

   In order to fulfill the L2/L3VPN end-to-end TE requirements,
   including the TE binding requirements, the MDSC needs to perform
   multi-layer/multi-domain path computation to select the BRs, the
   intra-domain MPLS-TE Tunnels and the intra-domain Optical Tunnels.

   Depending on the knowledge that the MDSC has of the topology and
   configuration of the underlying network domains, three models for
   performing path computation are possible:

   1. Summarization: the MDSC has an abstracted TE topology view of
      all of the underlying domains, both packet and optical. The MDSC
      does not have enough TE topology information to perform
      multi-layer/multi-domain path computation. Therefore, the MDSC
      delegates the P-PNCs and O-PNCs to perform a local path
      computation within their controlled domains and it uses the
      information returned by the P-PNCs and O-PNCs to compute the
      optimal multi-domain/multi-layer path.
      This model presents an issue to the P-PNC, which does not have
      the capability of performing a single-domain/multi-layer path
      computation (that is, the P-PNC has no possibility to retrieve
      the topology/configuration information from the Optical
      controller). A possible solution could be to include a CNC
      function in the P-PNC to request the MDSC multi-domain Optical
      path computation, as shown in Figure 10 of [RFC8453].

   2. Partial summarization: the MDSC has full visibility of the TE
      topology of the packet network domains and an abstracted view of
      the TE topology of the optical network domains.
      The MDSC then has only the capability of performing
      multi-domain/single-layer path computation for the packet layer
      (the path can be computed optimally for the two packet domains).
      Therefore, the MDSC still needs to delegate the O-PNCs to
      perform local path computation within their respective domains
      and it uses the information received by the O-PNCs, together
      with its TE topology view of the multi-domain packet layer, to
      perform multi-layer/multi-domain path computation.
      The role of the P-PNC is minimized, i.e., it is limited to
      management.

   3. Full knowledge: the MDSC has a complete and sufficiently
      detailed view of the TE topology of all the network domains
      (both optical and packet). In such case, the MDSC has all the
      information needed to perform multi-domain/multi-layer path
      computation, without relying on the PNCs.
      This model may present, as a potential drawback, scalability
      issues and, as discussed in section 2.2 of [PATH-COMPUTE],
      performing path computation for optical networks is quite
      challenging because the optimal paths depend also on
      vendor-specific optical attributes (which may be different in
      the two domains if they are provided by different vendors).

   The current version of this draft assumes that the MDSC supports at
   least model #2 (Partial summarization).

   [Note: check with operators for some references on real deployment]
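
   For illustration, the MDSC could delegate an intra-domain optical
   path computation to an O-PNC using the path computation RPC being
   defined in [PATH-COMPUTE]. The sketch below is hypothetical and
   simplified: the RPC, module and attribute names are still under
   definition and may change, and the addresses are purely
   illustrative:

   POST /restconf/operations/path-computation HTTP/1.1
   Host: o-pnc1.example.com
   Content-Type: application/yang-data+json

   {
     "input": {
       "path-request": [
         {
           "request-id": 1,
           "source": "198.51.100.1",
           "destination": "198.51.100.3"
         }
       ]
     }
   }

   The O-PNC would return the computed path with its characteristics
   (e.g., metrics), which the MDSC can combine with its own view of
   the multi-domain packet topology to select the optimal end-to-end
   path.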

2.2.1. Hard Isolation

   For example, when the "Hard Isolation with or w/o deterministic
   latency" TE binding requirement is applied for a L2/L3VPN, new
   Optical Tunnels need to be setup to support dedicated IP Links
   between the PEs and BRs.

   The MDSC needs to identify the set of IP/MPLS domains and their
   BRs. This requires the MDSC to request each O-PNC to compute the
   intra-domain optical paths between each PE/BR pair.

   When requesting optical path computation to the O-PNC, the MDSC
   needs to take into account the inter-layer peering points, such as
   the interconnections between the PE/BR nodes and the edge Optical
   nodes (e.g., using the inter-layer lock or the transitional link
   information, defined in [RFC8795]).

   When the optimal multi-layer/multi-domain path has been computed,
   the MDSC requests each O-PNC to setup the selected Optical Tunnels
   and the P-PNC to setup the intra-domain MPLS-TE Tunnels, over the
   selected Optical Tunnels. The MDSC also properly configures the BGP
   speakers and the PE/BR forwarding tables to ensure that the VPN
   traffic is properly forwarded.

2.2.2. Shared Tunnel Selection

   In case of shared tunnel selection, the MDSC needs to check if
   there is a multi-domain path which can support the L2/L3VPN
   end-to-end TE service requirements (e.g., bandwidth, latency, etc.)
   using existing intra-domain MPLS-TE tunnels.

   If such a path is found, the MDSC selects the optimal path from the
   candidate pool and requests each P-PNC to setup the L2/L3VPN
   service using the selected intra-domain MPLS-TE tunnels, between
   the PE/BR nodes.

   Otherwise, the MDSC should detect if the multi-domain path can be
   setup using existing intra-domain MPLS-TE tunnels with
   modifications (e.g., increasing the tunnel bandwidth) or setting up
   new intra-domain MPLS-TE tunnel(s).

   The modification of an existing MPLS-TE Tunnel, as well as the
   setup of a new MPLS-TE Tunnel, may also require multi-layer
   coordination, e.g., in case the available bandwidth of the
   underlying Optical Tunnels is not sufficient. Based on
   multi-domain/multi-layer path computation, the MDSC can decide, for
   example, to modify the bandwidth of an existing Optical Tunnel
   (e.g., ODUflex bandwidth increase) or to setup new Optical Tunnels
   to be used as additional LAG members of an existing IP Link or as
   new IP Links to re-route the MPLS-TE Tunnel.

   In all the cases, the labels used by the end-to-end tunnel are
   distributed in the PE and BR nodes by BGP. The MDSC is responsible
   to configure the BGP speakers in each P-PNC domain, if needed.

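   As an illustration of the MPI interactions discussed above, the
   MDSC could request the modification of the bandwidth of an existing
   Optical Tunnel with a RESTCONF request based on the generic TE
   tunnel model [TE-TUNNEL]. This is a minimal sketch: the tunnel name
   and the bandwidth value are illustrative, and in practice the
   bandwidth would be expressed through the technology-specific
   augmentations of the generic model:

   PATCH /restconf/data/ietf-te:te/tunnels/tunnel=opt-pe1-br1 HTTP/1.1
   Host: o-pnc1.example.com
   Content-Type: application/yang-data+json

   {
     "ietf-te:tunnel": [
       {
         "name": "opt-pe1-br1",
         "te-bandwidth": {
           "generic": "0x1p+31"
         }
       }
     ]
   }
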
2.3. IP/MPLS Domain Controller and NE Functions

   IP/MPLS networks are assumed to go back have multiple domains, where each
   domain, corresponding to either an IGP area or an Autonomous System
   (AS) within the  MPLS
      primary path through same operator network, is controlled by an IP/MPLS
   domain controller (P-PNC).

   Among the back-up port functions of the router and the
      original OCh if overall cost, latency etc. is improved. However,
      in this scenario, P-PNC, there is a need for protection port PLUS back-
      up port in are the router which does not lead to clear port savings.

5. Service Coordination for Multi-Layer network

   [Editors' Note] This text has been taken from section 2 setup or
   modification of draft-
   lee-teas-actn-poi-applicability-00 and need to be reconciled with the other sections (the introduction in particular) of this document
   This section provides a number of deployment scenarios for packet
   and optical integration (POI). Specifically, this section provides a
   deployment scenario in which ACTN hierarchy is deployed to control a
   multi-layer intra-domain MPLS-TE Tunnels, between PEs and multi-domain network via two IP/MPLS PNCs
   BRs, and two
   Optical PNCs with coordination with L-MDSC. This scenario is in the
   context of an upper layer service configuration (e.g. L3VPN) across
   two AS domains which are transported by two transport underlay
   domains (e.g. OTN).

   The provisioning of the L3VPN service is outside ACTN scope but it
   is worth showing how the L3VPN service provisioning is integrated
   for VPN services, such as the end-to-end service fulfilment in ACTN context. An example of
   service configuration function VRF in
   the Service/Network Orchestrator
   is discussed PE nodes, as shown in [BGP-L3VPN]. Figure 2 shows an ACTN POI Reference Architecture where it shows
   ACTN components as well as non-ACTN components that are necessary
   for the end-to-end service fulfilment. Both IP/MPLS and Optical
   Networks are multi-domain. Each IP/MPLS domain network is controlled
   by its' domain controller and all the optical domains are controlled
   by a hierarchy of optical domain controllers. The L-MDSC function of
   the optical domain controllers provides an abstract view of the
   whole optical network to the Service/Network Orchestrator. It is
   assumed that all these components of the network belong to one
   single network operator domain under the control of the
   service/network orchestrator.

   Customer
            +-------------------------------+
            |    +-----+    +------------+  |
            |    | CNC |----| Service Op.|  |
            |    +-----+    +------------+  |
            +-------|------------------|----+
                    | ACTN interface 3:

          +------------------+            +------------------+
          | Non-ACTN interface                  | CMI            | (Customer Service model)
     Service/Network|                  +-----------------+
     Orchestrator                  |
          |
              +-----|------------------------------------|-----------+      P-PNC1      |   +----------------------------------+            |      P-PNC2      |
          |   |MDSC TE & Service Mapping Function|                  |            |                  |   +----------------------------------+
          +--|-----------|---+            +--|-----------|---+
             | 1.Tunnel  | 2.VPN             | 1.Tunnel  | 2.VPN
             | Config    | Provisioning      | Config    |   +------------------+ Provisioning
             V           V                   V           V
           +---------------------+ |
              |   | MDSC NP Function |-------|Service Config. Func.| |
              |   +------------------+         +---------------------+ |
              +------|---------------------------|-------------------+
                 MPI |     +---------------------+--+
                     |    / Non-ACTN interface       \
             +-------+---/-------+------------+       \
   IP/MPLS   |
      CE  /        |Optical     |        \    IP/MPLS
   Domain PE     tunnel 1  |     BR\       /         |Domain      |         \   Domain BR     tunnel 2
   Controller|        /          |Controller  |    PE \  CE
      o--/---o..................o--\-----/--o..................o---\--o
         \  Controller
      +------|-------/--+    +---|-----+   +--|-----------\----+
      | +-----+  +-----+|    | +-----+ |   |+------+   +------+|
      | |PNC1 |  |Serv.||    | |PNC  | |   || PNC2 |   | Serv.||
      | +-----+  +----- |    | +-----+ |   |+------+   +------+|
      +-----------------+    +---------+   +-------------------+
          SBI |                  |                     | SBI
              v                  |                     V
       +------------------+      |         +------------------+                         /   IP/MPLS Network     \     |                         /   IP/MPLS Network
          \
     +----------------------+    |  SBI  +----------------------+
                                 v
                  +-------------------------------+        Domain 1       /           Optical Network       \
                +-----------------------------------+

                 Figure       Domain 2 ACTN POI Reference Architecture        /
           +---------------------+         +---------------------+

                                    End-to-end tunnel
             <------------------------------------------------->

             Figure 2 shows ACTN POI Reference Architecture where it depicts:

   o  CMI (CNC-MDSC Interface) interfacing CNC with MDSC function in
      the Service/Network Orchestrator. This is where TE 3 IP/MPLS Domain Controller & Service
      Mapping [TSM] and either ACTN VN [ACTN-VN] or TE-topology [TE-
      TOPO] model NE Functions

   It is exchanged over CMI.

   o  Customer Service Model Interface: Non-ACTN interface assumed that BGP is running in the
      Customer Portal interfacing Service/Network Orchestrator's
      Service Configuration Function. This is the interface where L3SM
      information is exchanged.

   o  MPI (MDSC-PNC Interface) interfacing inter-domain IP/MPLS Domain Controllers
   networks for L2/L3VPN and Optical Domain Controllers.

   o  Service Configuration Interface: Non-ACTN interface in
      Service/Network Orchestrator interfacing with that the IP/MPLS Domain
      Controllers to coordinate L2/L3VPN multi-domain service
      configuration. This P-PNC controller is where service specific information such as
      VPN, VPN binding policy (e.g., new underlay also
   responsible for configuring the BGP speakers within its control
   domain, if necessary.

   The BGP would be responsible for the label distribution of the
   end-to-end tunnel creation on PE and BR nodes. The MDSC is responsible for
      isolation), etc.
   the selection of the BRs and of the intra-domain MPLS-TE Tunnels
   between PE/BR nodes.

   If new MPLS-TE Tunnels are needed, or modifications (e.g., bandwidth
   increase) to existing MPLS-TE Tunnels are needed, as outlined in
   section 2.2, the MDSC would request their setup or modification to
   the P-PNCs (step 1 in Figure 3). Then the MDSC would request the
   P-PNC to configure the VPN, including the selection of the
   intra-domain TE Tunnel (step 2 in Figure 3).

   The P-PNC should configure, using mechanisms outside the scope of
   this document, the ingress PE forwarding table, e.g., the VRF, to
   forward the VPN traffic, received from the CE, with the following
   three labels:

   o  VPN label: assigned by the vertical as well as horizontal egress PE and distributed by BGP;

   o  end-to-end LSP label: assigned by the egress BR, selected by the
      MDSC, and distributed by BGP;

   o  MPLS-TE tunnel label: assigned by the next-hop P node of the
      tunnel selected by the MDSC and distributed by mechanisms
      internal to the IP/MPLS domain (e.g., RSVP-TE).
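
   As an informal illustration of the resulting ingress PE behaviour,
   the short Python sketch below models such a VRF entry and the order
   in which the three labels would be imposed on a packet received
   from the CE. All names and label values are hypothetical and purely
   illustrative; they are not part of any IETF data model.

   from dataclasses import dataclass

   @dataclass
   class VrfEntry:
       """Hypothetical ingress PE forwarding entry for one VPN."""
       vpn_label: int        # assigned by the egress PE, via BGP
       e2e_lsp_label: int    # assigned by the egress BR, via BGP
       te_tunnel_label: int  # assigned by the next-hop P node
                             # (e.g., via RSVP-TE)

   def label_stack(entry: VrfEntry) -> list[int]:
       # Outermost label first: intra-domain MPLS-TE tunnel label,
       # then end-to-end LSP label, then the innermost VPN label.
       return [entry.te_tunnel_label,
               entry.e2e_lsp_label,
               entry.vpn_label]

   entry = VrfEntry(vpn_label=100, e2e_lsp_label=200,
                    te_tunnel_label=300)
   print(label_stack(entry))  # [300, 200, 100], outermost first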

2.4. Optical Domain Controller and NE Functions

   Optical network provides the underlay connectivity services to
   IP/MPLS networks. The coordination of the Packet/Optical
   multi-layer is done by the MDSC, as shown in Figure 1.

   The O-PNC is responsible to:

   o  provide to the MDSC an abstract TE topology view of its
      underlying optical network resources;

   o  perform single-domain local path computation, when requested by
      the MDSC;

   o  perform Optical Tunnel setup, when requested by the MDSC.

   The mechanisms used by the O-PNC to perform intra-domain topology
   discovery and path setup are usually vendor-specific and outside
   the scope of this document.

   Depending on the type of optical network, TE topology abstraction,
   path computation and path setup can be single-layer (either OTN or
   WDM) or multi-layer OTN/WDM. In the latter case, the multi-layer
   coordination between the OTN and WDM layers is performed by the
   O-PNC.

3. Interface protocols and YANG data models for the MPIs

   This section describes general assumptions which are applicable at
   all the MPI interfaces, between each PNC (Optical or Packet) and
   the MDSC, and also to all the scenarios discussed in this document.

3.1. RESTCONF protocol at the MPIs

   The RESTCONF protocol, as defined in [RFC8040], using the JSON
   representation defined in [RFC7951], is assumed to be used at these
   interfaces. Extensions to RESTCONF, as defined in [RFC8527], to be
   compliant with the Network Management Datastore Architecture (NMDA)
   defined in [RFC8342], are assumed to be used as well at these MPI
   interfaces and also at CMI interfaces.
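
   As an informal example, the following Python sketch shows what a
   simple RESTCONF retrieval at the MPI could look like, using the
   common "requests" library. The controller address, credentials and
   certificate handling are hypothetical placeholders; only the
   resource path and media type follow [RFC8040] and [RFC7951].

   import requests

   # Hypothetical PNC RESTCONF endpoint and credentials.
   PNC = "https://o-pnc.example.net:8443/restconf"
   AUTH = ("admin", "admin")

   # RFC 8040: data resources live under {+restconf}/data and are
   # encoded in JSON per RFC 7951.
   resp = requests.get(
       f"{PNC}/data/ietf-network:networks",
       auth=AUTH,
       headers={"Accept": "application/yang-data+json"},
       verify=False,  # lab setting only; use real certificates
   )
   resp.raise_for_status()
   networks = resp.json()["ietf-network:networks"].get("network", [])
   for net in networks:
       print(net.get("network-id"))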

3.2. YANG data models at the MPIs

   The data models used on these interfaces are assumed to use the
   YANG 1.1 Data Modeling Language, as defined in [RFC7950].

3.2.1. Common YANG data models at the MPIs

   As required in [RFC8040], the "ietf-yang-library" YANG module
   defined in [RFC8525] is used to allow the MDSC to discover the set
   of YANG modules supported by each PNC at its MPI.
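
   For example, a module discovery exchange could informally look like
   the sketch below; again the endpoint and credentials are
   hypothetical placeholders, while the resource structure follows
   [RFC8525].

   import requests

   PNC = "https://p-pnc.example.net:8443/restconf"  # placeholder
   AUTH = ("admin", "admin")

   resp = requests.get(
       f"{PNC}/data/ietf-yang-library:yang-library",
       auth=AUTH,
       headers={"Accept": "application/yang-data+json"},
       verify=False,
   )
   resp.raise_for_status()
   library = resp.json()["ietf-yang-library:yang-library"]
   modules = {
       m["name"]: m.get("revision", "")
       for ms in library.get("module-set", [])
       for m in ms.get("module", [])
   }
   # e.g., check whether the PNC supports the TE topology model
   print("ietf-te-topology" in modules)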

   Both Optical and Packet PNCs use the impact on following common topology YANG
   models at the PE VRF
   operation when MPI to report their abstract topologies:

   o  The Base Network Model, defined in the tunnel is an optical bypass tunnel "ietf-network" YANG module
      of [RFC8345]

   o  The Base Network Topology Model, defined in the "ietf-network-
      topology" YANG module of [RFC8345], which augments the Base
      Network Model

   o  The TE Topology Model, defined in the "ietf-te-topology" YANG
      module of [RFC8795], which augments the Base Network Topology
      Model with TE specific information.

   These common YANG models are generic and augmented by technology-
   specific YANG modules as described in the following sections.

   Both Optical and Packet PNCs must use the following common
   notifications YANG models at the MPI so that any network changes
   can be reported almost in real-time to the MDSC by the PNCs:

   o  Dynamic Subscription to YANG Events and Datastores over RESTCONF
      as defined in [RFC8650]

   o  Subscription to YANG Notifications for Datastores updates as
      defined in [RFC8641]

   PNCs and MDSCs must be compliant with subscription requirements as
   stated in [RFC7923].
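
   Informally, the establishment of a dynamic subscription could look
   like the following sketch. The RPC name and input leaves follow the
   RESTCONF mapping of [RFC8650], but the exact stream names, filters
   and augmentations are implementation-dependent, and the endpoint
   and credentials are hypothetical placeholders.

   import json
   import requests

   PNC = "https://p-pnc.example.net:8443/restconf"  # placeholder
   AUTH = ("admin", "admin")

   # Invoke the "establish-subscription" RPC; the resulting events
   # are then read from a Server-Sent Events stream.
   rpc_input = {
       "ietf-subscribed-notifications:input": {
           "stream": "NETCONF",
           "encoding": "encode-json",
       }
   }
   resp = requests.post(
       f"{PNC}/operations/"
       "ietf-subscribed-notifications:establish-subscription",
       auth=AUTH,
       headers={
           "Content-Type": "application/yang-data+json",
           "Accept": "application/yang-data+json",
       },
       data=json.dumps(rpc_input),
       verify=False,
   )
   resp.raise_for_status()
   output = resp.json()["ietf-subscribed-notifications:output"]
   print("subscription id:", output["id"])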

3.2.2. YANG models at the Optical MPIs

   The Optical PNC also uses at least the following technology-
   specific topology YANG models, providing WDM and Ethernet
   technology-specific augmentations of the generic TE Topology Model:

   o  The WSON Topology Model, defined in the "ietf-wson-topology"
      YANG module of [WSON-TOPO], or the Flexi-grid Topology Model,
      defined in the "ietf-flexi-grid-topology" YANG module of
      [Flexi-TOPO].

   o  Optionally, when the OTN layer is used, the OTN Topology Model,
      as defined in the "ietf-otn-topology" YANG module of [OTN-TOPO].

   o  The Ethernet Topology Model, defined in the "ietf-eth-te-
      topology" YANG module of [CLIENT-TOPO].

   o  Optionally, when the OTN layer is used, the network data model
      for L1 OTN services (e.g., an Ethernet transparent service), as
      defined in the "ietf-trans-client-service" YANG module of
      draft-ietf-ccamp-client-signal-yang [CLIENT-SIGNAL].

   The WSON Topology Model or, alternatively, the Flexi-grid Topology
   Model is used to report the DWDM network topology (e.g., ROADMs and
   links), depending on whether the DWDM optical network is based on
   fixed grid or flexible grid.

   The Ethernet Topology Model is used to report the access links
   between the IP routers and the edge ROADMs.

   The Optical PNC uses at least the following YANG models:

   o  The TE Tunnel Model, defined in the "ietf-te" YANG module of
      [TE-TUNNEL]

   o  The WSON Tunnel Model, defined in the "ietf-wson-tunnel" YANG
      module of [WSON-TUNNEL], or the Flexi-grid Media Channel Model,
      defined in the "ietf-flexi-grid-media-channel" YANG module of
      [Flexi-MC]

   o  Optionally, when the OTN layer is used, the OTN Tunnel Model,
      defined in the "ietf-otn-tunnel" YANG module of [OTN-TUNNEL].

   o  The Ethernet Client Signal Model, defined in the "ietf-eth-tran-
      service" YANG module of [CLIENT-SIGNAL].

   The TE Tunnel model is generic and egress PEs augmented by technology-specific
   models such as the WSON Tunnel Model and the Flexi-grid Media
   Channel Model.

   The WSON Tunnel Model or, alternatively, the Flexi-grid Media
   Channel Model is used to setup connectivity within the DWDM
   network, depending on whether the DWDM optical network is based on
   fixed grid or flexible grid.

   The Ethernet Client Signal Model is used to configure the steering
   of the Ethernet client traffic between Ethernet access links and TE
   Tunnels, which in this case could be either WSON Tunnels or
   Flexi-grid Media Channels. This model is generic and applies to any
   technology-specific TE Tunnel: technology-specific attributes are
   provided by the technology-specific models which augment the
   generic TE Tunnel Model.

3.2.3. YANG data models at the Packet MPIs

   The Packet PNC also uses at least the following technology-specific
   topology YANG models, providing IP and Ethernet technology-specific
   augmentations of the generic Topology Models described in section
   3.2.1:

   o  The L3 Topology Model, defined in the "ietf-l3-unicast-topology"
      YANG module of [RFC8346], which augments the Base Network
      Topology Model

   o  The L3 specific data model including extended TE attributes
      (e.g., performance-derived metrics like latency), defined in the
      "ietf-l3-te-topology" and "ietf-te-topology-packet" YANG modules
      of draft-ietf-teas-yang-l3-te-topo [L3-TE-TOPO]

   o  The Ethernet Topology Model, defined in the "ietf-eth-te-
      topology" YANG module of [CLIENT-TOPO], which augments the TE
      Topology Model

   The Ethernet Topology Model is used to report the access links
   between the IP routers and the edge ROADMs as well as the
   inter-domain links between the ASBRs, while the L3 Topology Model
   is used to report the IP network topology (e.g., IP routers and
   links).

   o  The User Network Interface (UNI) Topology Model, being defined
      in the "ietf-uni-topology" module of draft-ogondio-opsawg-uni-
      topology [UNI-TOPO], which augments the "ietf-network" module
      defined in [RFC8345], adding service attachment points to the
      nodes to which L2VPN/L3VPN services can be attached.

   o  The L3VPN network data model, defined in the "ietf-l3vpn-ntw"
      module of draft-ietf-opsawg-l3sm-l3nm [L3NM], used for the
      non-ACTN MPI for L3VPN service provisioning.

   o  The L2VPN network data model, defined in the "ietf-l2vpn-ntw"
      module of draft-ietf-barguil-opsawg-l2sm-l2nm [L2NM], used for
      the non-ACTN MPI for L2VPN service provisioning.

   [Editor's note:] Add YANG models used for service configuration.

4. Multi-layer and multi-domain services scenarios

   Multi-layer and multi-domain scenarios, based on the reference
   network described in section 2, and very relevant for Service
   Providers, are described in the next sections. For each scenario,
   the existing IETF protocols and data models are identified, with
   particular focus on the MPI in the ACTN architecture. Non-ACTN IETF
   data models required for L2/L3VPN service provisioning between the
   MDSC and the IP PNCs are also identified.

4.1. Scenario 1: network and service topology discovery

   In this scenario, the MDSC needs to discover, through the
   underlying PNCs, the network topology at both WDM and IP layers, in
   terms of nodes (NEs) and links, including inter-AS domain links as
   well as cross-layer links, but also in terms of tunnels (MPLS or SR
   paths in the IP layer, and OCh and optionally ODUk tunnels in the
   optical layer). The MDSC also discovers the IP/MPLS transport
   services (L2VPN/L3VPN) deployed, both intra-domain and inter-domain
   wise.

   Each PNC provides to the MDSC an abstracted or full topology view
   of the WDM or the IP topology of the domain it controls. This
   topology can be abstracted in the sense that some detailed NE
   information is hidden at the MPI, and all or some of the NEs and
   related physical links are exposed as abstract nodes and logical
   (virtual) links, depending on the level of abstraction the user
   requires. This information is key to understand both the inter-AS
   domain links (seen by each controller as UNI interfaces but as
   I-NNI interfaces by the MDSC) as well as the cross-layer mapping
   between the IP and WDM layers.

   The MDSC also maintains an up-to-date network database of both IP
   and WDM layers (and optionally the OTN layer) through the use of
   IETF notifications through the MPI with the PNCs when any topology
   change occurs. It should be possible also to correlate information
   coming from the IP and WDM layers (e.g., which port, lambda/OTSi
   and direction are used by a specific IP service on the WDM
   equipment).

   In particular, for the cross-layer links, it is key for the MDSC to
   be able to correlate automatically the information from the PNC
   network databases about the physical ports from the routers (single
   link or bundled links for LAG) to the client ports in the ROADM.

   It should be possible at the MDSC level to easily correlate WDM and
   IP layers alarms to speed-up troubleshooting.
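
   The kind of automated correlation described above can be
   illustrated with a toy sketch: assuming the MDSC has already
   derived a cross-layer map between router ports and ROADM client
   ports, matching alarms at both ends of the same cross-layer link is
   a simple lookup. All node and port names below are invented for
   illustration only.

   # Hypothetical cross-layer map: router port -> ROADM client port.
   cross_layer = {("PE1", "et-0/0/1"): ("ROADM-A", "client-3")}

   ip_alarms = [{"node": "PE1", "port": "et-0/0/1", "alarm": "LOS"}]
   wdm_alarms = [{"node": "ROADM-A", "port": "client-3",
                  "alarm": "LOS"}]

   def correlate(ip_alarms, wdm_alarms, cross_layer):
       wdm_index = {(a["node"], a["port"]): a for a in wdm_alarms}
       for a in ip_alarms:
           peer = cross_layer.get((a["node"], a["port"]))
           if peer and peer in wdm_index:
               # Same failure observed at both layers.
               yield a, wdm_index[peer]

   for ip_a, wdm_a in correlate(ip_alarms, wdm_alarms, cross_layer):
       print("correlated:", ip_a, "<->", wdm_a)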

   Alarms and event notifications are required between the MDSC and
   the PNCs so that any network changes are reported almost in
   real-time to the MDSC (e.g., NE or link failure, MPLS tunnel
   switched from main to backup path, etc.). As specified in
   [RFC7923], the MDSC must be able to subscribe to specific objects
   from the PNC YANG datastores for notifications.

4.1.1. Inter-domain link discovery

   In the reference network of Figure 1, there are two types of
   inter-domain links:

   o  Links between two IP domains (ASes)

   o  Links between an IP router and a ROADM

   Both types of links are Ethernet physical links.

   The inter-domain link information is reported to the MDSC by the
   two adjacent PNCs, controlling the two ends of the inter-domain
   link. The MDSC needs to understand how to merge these inter-domain
   Ethernet links together.

   This document considers two options for discovering inter-domain
   links:

   1. Static configuration

   2. LLDP [IEEE 802.1AB] automatic discovery

   Other options are possible but not described in this document.

   The MDSC can understand how to merge these inter-domain links
   together using the plug-id attribute defined in the TE Topology
   Model [RFC8795], as described in section 4.3 of [RFC8795].

   A more detailed description of how the plug-id can be used to
   discover inter-domain links is also provided in section 5.1.4 of
   [TNBI].

   Both types of inter-domain links are discovered using the plug-id
   attributes reported in the Ethernet Topologies exposed by the two
   adjacent PNCs. The MDSC can also discover an inter-domain IP
   link/adjacency between the two IP LTPs, reported in the IP
   Topologies exposed by the two adjacent P-PNCs, supported by the two
   ETH LTPs of an Ethernet Link discovered between these two P-PNCs.

   The static configuration requires an administrative burden to
   configure network-wide unique identifiers: it is therefore more
   viable for inter-AS links. For the links between the routers and
   the Optical NEs, the automatic discovery solution based on LLDP
   snooping is preferable when LLDP snooping is supported by the
   Optical NEs.

   As outlined in [TNBI], the encoding of the plug-id namespace as
   well as of the LLDP information within the plug-id value is
   implementation specific and needs to be consistent across all the
   PNCs.
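
   The following toy Python sketch illustrates the merging logic: it
   groups the LTPs reported by different PNCs by plug-id value and
   treats any plug-id reported by exactly two sides as one
   inter-domain Ethernet link. The plug-id encoding shown is an
   arbitrary placeholder, consistent with the implementation-specific
   encoding discussed above.

   from collections import defaultdict

   # Toy LTP records as they might be derived from the Ethernet
   # topologies exposed by two PNCs; all values are hypothetical.
   pnc1_ltps = [("P-PNC1", "PE1", "eth-1/0/1", "plug:link-A")]
   pnc2_ltps = [("P-PNC2", "BR2", "eth-2/0/3", "plug:link-A")]

   def merge_inter_domain_links(*ltp_sets):
       """Group LTPs from different PNCs by identical plug-id."""
       by_plug = defaultdict(list)
       for ltps in ltp_sets:
           for pnc, node, tp, plug_id in ltps:
               by_plug[plug_id].append((pnc, node, tp))
       # A plug-id seen from exactly two sides identifies one
       # inter-domain Ethernet link.
       return {p: ends for p, ends in by_plug.items()
               if len(ends) == 2}

   print(merge_inter_domain_links(pnc1_ltps, pnc2_ltps))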

4.1.2. IP Link Setup Procedure

   The MDSC requires the O-PNC to setup a WDM Tunnel (either a WSON
   Tunnel or a Flexi-grid Tunnel) within the DWDM network between the
   two Optical Transponders (OTs) associated with the two access
   links.

   The Optical Transponders are reported by the O-PNC as Trail
   Termination Points (TTPs), defined in [TE-TOPO], within the WDM
   Topology. The association between the Ethernet access link and the
   WDM TTP is reported by the Inter Layer Lock (ILL) identifiers,
   defined in [TE-TOPO], reported by the O-PNC within the Ethernet
   Topology and the WDM Topology.

   The MDSC also requires the O-PNC to steer the Ethernet client
   traffic between the two access Ethernet Links over the WDM Tunnel.

   After the WDM Tunnel has been setup and the client traffic steering
   configured, the two IP routers can exchange Ethernet packets
   between themselves, including LLDP messages.

   If LLDP [IEEE 802.1AB] is used between the two routers, the P-PNC
   can automatically discover the IP Link being set up by the MDSC.
   The IP LTPs terminating this IP Link are supported by the ETH LTPs
   terminating the two access links.

   Otherwise, the MDSC needs to require the P-PNC to configure an IP
   Link between the two routers: the MDSC also configures the two ETH
   LTPs which support the two IP LTPs terminating this IP Link.
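
   The resulting MDSC behaviour can be summarized by the sketch below.
   The p_pnc client object and its methods are hypothetical
   conveniences standing in for MPI operations; they are not defined
   by any IETF model.

   def complete_ip_link_setup(lldp_enabled, p_pnc, ip_link):
       """Sketch of the MDSC decision after WDM tunnel setup.

       `p_pnc` is a hypothetical client object for the P-PNC MPI.
       """
       if lldp_enabled:
           # The P-PNC discovers the new IP link itself via LLDP;
           # the MDSC only waits for it to appear in the reported
           # IP topology.
           return p_pnc.wait_for_ip_link(ip_link)
       # Otherwise the MDSC explicitly configures the IP link and
       # the two ETH LTPs supporting its IP LTPs.
       p_pnc.configure_eth_ltps(ip_link.eth_ltps)
       return p_pnc.configure_ip_link(ip_link)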

4.2. L2VPN/L3VPN establishment

   To be added.

   [Editor's Note] What mechanism would convey on the interface to the
   IP/MPLS domain controllers, as well as on the SBI (between the
   IP/MPLS domain controllers and the IP/MPLS PE routers), the TE
   binding policy dynamically for the L3VPN? Typically, the VRF is the
   function of the device that participates in MP-BGP in MPLS VPN.
   With the current MP-BGP implementation in MPLS VPN, the VRF's BGP
   next hop is the destination PE, and the mapping to a tunnel (either
   an LDP or a BGP tunnel) toward the destination PE is done
   automatically without any configuration. The impact on the PE VRF
   operation when the tunnel is an optical bypass tunnel, which does
   not participate in either LDP or BGP, is to be determined.

   The MDSC Network-related function will then coordinate with the
   PNCs involved in the process to provide the provisioning
   information through the ACTN MDSC to PNC (MPI) interface. The
   relevant data models used at the MPI may be in the form of L3NM,
   L2NM or others and are exchanged through MPI API calls. Through
   this process, the MDSC Network-related functions provide to the
   PNCs the configuration information needed to realize a VPN service.
   For example, this process will inform the PNCs of which PE routers
   compose an L3VPN, the topology requested, the VPN attributes, etc.

   At the end of the process, the PNCs will deliver the actual
   configuration to the devices (either physical or virtual), through
   the ACTN Southbound Interface (SBI). In this case, the
   configuration policies may be exchanged using a NETCONF session
   delivering configuration commands associated with device-specific
   data models (e.g., BGP [], QoS [], etc.).

   Having the topology information of the network domains under their
   control, the PNCs will deliver all the information necessary to
   create, update, optimize or delete the tunnels connecting the nodes
   as requested by the VPN instantiation.
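
   As an informal illustration, the sketch below shows the kind of MPI
   API call the MDSC could issue towards a P-PNC. The payload is an
   abbreviated, hypothetical fragment loosely modeled on the
   "ietf-l3vpn-ntw" structure of [L3NM]; the real module structure is
   defined by that draft and may differ, and the endpoint and
   credentials are placeholders.

   import json
   import requests

   P_PNC = "https://p-pnc1.example.net:8443/restconf"  # placeholder
   AUTH = ("admin", "admin")

   # Abbreviated, hypothetical L3NM-style payload.
   vpn_service = {
       "ietf-l3vpn-ntw:l3vpn-ntw": {
           "vpn-services": {
               "vpn-service": [
                   {
                       "vpn-id": "acme-l3vpn-01",
                       "vpn-service-topology": "any-to-any",
                   }
               ]
           }
       }
   }
   resp = requests.post(
       f"{P_PNC}/data",
       auth=AUTH,
       headers={"Content-Type": "application/yang-data+json"},
       data=json.dumps(vpn_service),
       verify=False,
   )
   print(resp.status_code)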

5. Security Considerations

   Several security considerations have been identified and will be
   discussed in future versions of this document.

6. Operational Considerations

   Telemetry data, such as the collection of lower-layer networking
   health and consideration of network and service performance from POI
   domain controllers, may be required. These requirements and
   capabilities will be discussed in future versions of this document.

7. IANA Considerations

   This document requires no IANA actions.

8. References

8.1. Normative References

   [RFC7923] Voit, E. et al., "Requirements for Subscription to YANG
             Datastores", RFC 7923, June 2016.

   [RFC7950] Bjorklund, M. et al., "The YANG 1.1 Data Modeling
             Language", RFC 7950, August 2016.

   [RFC7951] Lhotka, L., "JSON Encoding of Data Modeled with YANG", RFC
             7951, August 2016.

   [RFC8040] Bierman, A. et al., "RESTCONF Protocol", RFC 8040,
             January 2017.

   [RFC8342] Bjorklund, M. et al., "Network Management Datastore
             Architecture (NMDA)", RFC 8342, March 2018.

   [RFC8345] Clemm, A., Medved, J. et al., "A Yang Data Model for
             Network Topologies", RFC8345, March 2018.

   [RFC8346] Clemm, A. et al., "A YANG Data Model for Layer 3
             Topologies", RFC8346, March 2018.

   [RFC8453] Ceccarelli, D., Lee, Y. et al., "Framework for Abstraction
             and Control of TE Networks (ACTN)", RFC8453, August 2018.

   [RFC8525] Bierman, A. et al., "YANG Library", RFC 8525, March 2019.

   [RFC8527] Bierman, A. et al., "RESTCONF Extensions to Support the
             Network Management Datastore Architecture", RFC 8527,
             March 2019.

   [RFC8641] Clemm, A. and E. Voit, "Subscription to YANG
             Notifications for Datastore Updates", RFC 8641, September
             2019.

   [RFC8650] Voit, E. et al., "Dynamic Subscription to YANG Events and
             Datastores over RESTCONF", RFC 8650, November 2019.

   [RFC8795] Liu, X. et al., "YANG Data Model for Traffic Engineering
             (TE) Topologies", RFC8795, August 2020.

   [IEEE 802.1AB] IEEE 802.1AB-2016, "IEEE Standard for Local and
             metropolitan area networks - Station and Media Access
             Control Connectivity Discovery", March 2016.

   [TE-TOPO] Liu, X. et al., "YANG Data Model for TE Topologies",
             draft-ietf-teas-yang-te-topo, work in progress.

   [WSON-TOPO] Lee, Y. et al., " A YANG Data Model for WSON (Wavelength
             Switched Optical Networks)", draft-ietf-ccamp-wson-yang,
             work in progress.

   [Flexi-TOPO]   Lopez de Vergara, J. E. et al., "YANG data model for
             Flexi-Grid Optical Networks", draft-ietf-ccamp-flexigrid-
             yang, work in progress.

   [OTN-TOPO] Zheng, H. et al., "A YANG Data Model for Optical
             Transport Network Topology", draft-ietf-ccamp-otn-topo-
             yang, work in progress.

   [CLIENT-TOPO]  Zheng, H. et al., "A YANG Data Model for Client-layer
             Topology", draft-zheng-ccamp-client-topo-yang, work in
             progress.

   [L3-TE-TOPO]   Liu, X. et al., "YANG Data Model for Layer 3 TE
             Topologies", draft-ietf-teas-yang-l3-te-topo, work in
             progress.

   [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic
             Engineering Tunnels and Interfaces", draft-ietf-teas-yang-
             te, work in progress.

   [WSON-TUNNEL]  Lee, Y. et al., "A Yang Data Model for WSON Tunnel",
             draft-ietf-ccamp-wson-tunnel-model, work in progress.

   [Flexi-MC]  Lopez de Vergara, J. E. et al., "YANG data model for
             Flexi-Grid media-channels", draft-ietf-ccamp-flexigrid-
             media-channel-yang, work in progress.

   [OTN-TUNNEL]   Zheng, H. et al., "OTN Tunnel YANG Model", draft-
             ietf-ccamp-otn-tunnel-model, work in progress.

   [CLIENT-SIGNAL]   Zheng, H. et al., "A YANG Data Model for Transport
             Network Client Signals", draft-ietf-ccamp-client-signal-
             yang, work in progress.

8.2. Informative References

   [RFC1930] J. Hawkinson, T. Bates, "Guideline for creation,
             selection, and registration of an Autonomous System (AS)",
             RFC 1930, March 1996.

   [RFC4364] E. Rosen and Y. Rekhter, "BGP/MPLS IP Virtual Private
             Networks (VPNs)", RFC 4364, February 2006.

   [RFC4761] K. Kompella, Ed., Y. Rekhter, Ed., "Virtual Private LAN
             Service (VPLS) Using BGP for Auto-Discovery and
             Signaling", RFC 4761, January 2007.

   [RFC6074] E. Rosen, B. Davie, V. Radoaca, and W. Luo, "Provisioning,
             Auto-Discovery, and Signaling in Layer 2 Virtual Private
             Networks (L2VPNs)", RFC 6074, January 2011.

   [RFC6624] K. Kompella, B. Kothari, and R. Cherukuri, "Layer 2
             Virtual Private Networks Using BGP for Auto-Discovery and
             Signaling", RFC 6624, May 2012.

   [RFC7209] A. Sajassi, R. Aggarwal, J. Uttaro, N. Bitar, W.
             Henderickx, and A. Isaac, "Requirements for Ethernet VPN
             (EVPN)", RFC 7209, May 2014.

   [RFC7432] A. Sajassi, Ed., et al., "BGP MPLS-Based Ethernet VPN",
             RFC 7432, February 2015.

   [RFC7436] H. Shah, E. Rosen, F. Le Faucheur, and G. Heron, "IP-Only
             LAN Service (IPLS)", RFC 7436, January 2015.

   [RFC8214] S. Boutros, A. Sajassi, S. Salam, J. Drake, and J.
             Rabadan, "Virtual Private Wire Service Support in Ethernet
             VPN", RFC 8214, August 2017.

   [RFC8299] Q. Wu, S. Litkowski, L. Tomotaki, and K. Ogaki, "YANG Data
             Model for L3VPN Service Delivery", RFC 8299, January 2018.

   [RFC8309] Q. Wu, W. Liu, and A. Farrel, "Service Model Explained",
             RFC 8309, January 2018.

   [RFC8466] G. Fioccola, ed., "A YANG Data Model for Layer 2 Virtual
             Private Network (L2VPN) Service Delivery", RFC8466,
             October 2018.

   [TNBI]    Busi, I., Daniel, K. et al., "Transport Northbound
             Interface Applicability Statement", draft-ietf-ccamp-
             transport-nbi-app-statement, work in progress.

   [UNI-TOPO] Gonzalez de Dios, O. et al., "A YANG Model for User
             Network Interface (UNI) Topologies", draft-ogondio-
             opsawg-uni-topology, work in progress.
   [VN]      Y. Lee, et al., "A Yang Data Model for ACTN VN Operation",
             draft-ietf-teas-actn-vn-yang, work in progress.

   [L2NM]    S. Barguil, et al., "A Layer 2 VPN Network YANG Model",
             draft-ietf-opsawg-l2nm, work in progress.

   [L3NM]    S. Barguil, et al., "A Layer 3 VPN Network YANG Model",
             draft-ietf-opsawg-l3sm-l3nm, work in progress.

   [TSM]     Y. Lee, et al., "Traffic Engineering and Service Mapping
             Yang Model", draft-ietf-teas-te-service-mapping-yang,
             work in progress.

   [ACTN-PM] Y. Lee, et al., "YANG models for VN & TE Performance
             Monitoring Telemetry and Scaling Intent Autonomics",
             draft-lee-teas-actn-pm-telemetry-autonomics, work in
             progress.

   [BGP-L3VPN] D. Jain, et al. "Yang Data Model for BGP/MPLS L3 VPNs",
             draft-ietf-bess-l3vpn-yang, work in progress.

Appendix A. Multi-layer and multi-domain resiliency

A.1.  Maintenance Window

   Before a planned maintenance operation on the DWDM network takes
   place, IP traffic should be moved hitlessly to another link.

   The MDSC must reroute the IP traffic before the event takes place.
   It should be possible to lock the IP traffic to the protection
   route until the maintenance event is finished, unless a fault
   occurs on such path.

A.2.  Router port failure

   The focus is on a client-side protection scheme between the IP
   router and the reconfigurable ROADM. The scenario here is to define
   only one port in the routers and in the ROADM muxponder board at
   both ends as back-up ports to recover any other port failure on the
   client side of the ROADM (either on the router port side, on the
   muxponder side or on the link between them). When a client-side
   port failure occurs, alarms are raised to the MDSC by the IP-PNC
   and the O-PNC (port status down, LOS, etc.). The MDSC checks with
   the O-PNC(s) that there is no optical failure in the optical layer.

   There can be two cases here:

   a) LAG was defined between the two end routers. MDSC, after checking
      that optical layer is fine between the two end ROADMs, triggers
      the ROADM configuration so that the router back-up port with its
      associated muxponder port can reuse the OCh that was already in
      use previously by the failed router port and adds the new link to
      the LAG on the failure side.

      While the ROADM reconfiguration takes place, IP/MPLS traffic
      uses the reduced bandwidth of the IP link bundle, discarding
      lower-priority traffic if required. Once the back-up port has
      been reconfigured to reuse the existing OCh and the new link has
      been added to the LAG, the original bandwidth is recovered
      between the end routers.

      Note: in this LAG scenario, let us assume that BFD is running at
      LAG level so that nothing is triggered at the MPLS level when
      one of the link members of the LAG fails.

   b) If there is no LAG, then the scenario is not clear since a
      router port failure would automatically trigger (through BFD
      failure) first a sub-50ms protection at the MPLS level: FRR
      (MPLS RSVP-TE case) or TI-LFA (MPLS-based SR-TE case) through a
      protection port. At the same time the MDSC, after checking that
      the optical network
      connection is still fine, would trigger the reconfiguration of
      the back-up port of the router and of the ROADM muxponder to
      re-use the same OCh as the one used originally for the failed
      router port. Once everything has been correctly configured, the
      MDSC Global PCE could suggest to the operator to trigger a
      possible re-optimisation of the back-up MPLS path to go back to
      the MPLS primary path through the back-up port of the router and
      the original OCh if the overall cost, latency, etc. is improved.
      However, in this scenario, there is a need for a protection port
      PLUS a back-up port in the router, which does not lead to clear
      port savings.

Acknowledgments

   This document was prepared using 2-Word-v2.0.template.dot.

   Some of this analysis work was supported in part by the European
   Commission funded H2020-ICT-2016-2 METRO-HAUL project (G.A. 761727).

Contributors

   Sergio Belotti
   Nokia

   Email: sergio.belotti@nokia.com

   Gabriele Galimberti
   Cisco

   Email: ggalimbe@cisco.com

   Zheng Yanlei
   China Unicom

   Email: zhengyanlei@chinaunicom.cn

   Anton Snitser
   Sedona
   Email: antons@sedonasys.com

   Washington Costa Pereira Correia
   TIM Brasil

   Email: wcorreia@timbrasil.com.br

   Michael Scharf
   Hochschule Esslingen - University of Applied Sciences

   Email: michael.scharf@hs-esslingen.de

   Young Lee
   Sung Kyun Kwan University

   Email: younglee.tx@gmail.com

   Jeff Tantsura
   Apstra

   Email: jefftant.ietf@gmail.com

   Paolo Volpato
   Huawei

   Email: paolo.volpato@huawei.com

Authors' Addresses

   Fabio Peruzzini
   TIM

   Email: fabio.peruzzini@telecomitalia.it

   Jean-Francois Bouquier
   Vodafone
   Email: jeff.bouquier@vodafone.com

   Italo Busi
   Huawei

   Email: Italo.busi@huawei.com

   Daniel King
   Old Dog Consulting

   Email: daniel@olddog.co.uk

   Daniele Ceccarelli
   Ericsson

   Email: daniele.ceccarelli@ericsson.com