TEAS Working Group                              Daniele Ceccarelli (Ed)
Internet Draft                                                 Ericsson
Intended status: Informational                           Young Lee (Ed)
Expires: April 3, 2018                                           Huawei

                                                        October 4, 2017

  Framework for Abstraction and Control of Traffic Engineered Networks

                   draft-ietf-teas-actn-framework-08

Abstract

   Traffic Engineered networks have a variety of mechanisms to
   facilitate the separation of the data plane and control plane. They
   also have a range of management and provisioning protocols to
   configure and activate network resources.  These mechanisms
   represent key technologies for enabling flexible and dynamic
   networking.

   Abstraction of network resources is a technique that can be applied
   to a single network domain or across multiple domains to create a
   single virtualized network that is under the control of a network
   operator or the customer of the operator that actually owns
   the network resources.

   This document provides a framework for Abstraction and Control of
   Traffic Engineered Networks (ACTN).

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt
   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on April 3, 2018.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction...................................................3
   2. Overview.......................................................4
      2.1. Terminology...............................................5
      2.2. VNS Model of ACTN.........................................7
         2.2.1. Customers............................................9
         2.2.2. Service Providers....................................9
         2.2.3. Network Providers...................................10
   3. ACTN Base Architecture........................................10
      3.1. Customer Network Controller..............................12
      3.2. Multi-Domain Service Coordinator.........................13
      3.3. Physical Network Controller..............................13
      3.4. ACTN Interfaces..........................................14
   4. Advanced ACTN Architectures...................................15
      4.1. MDSC Hierarchy...........................................15
      4.2. Functional Split of MDSC Functions in Orchestrators......16
   5. Topology Abstraction Methods..................................17
      5.1. Abstraction Factors......................................17
      5.2. Abstraction Types........................................18
         5.2.1. Native/White Topology...............................18
         5.2.2. Black Topology......................................18
         5.2.3. Grey Topology.......................................19
      5.3. Methods of Building Grey Topologies......................20
         5.3.1. Automatic Generation of Abstract Topology by
         Configuration..............................................21
         5.3.2. On-demand Generation of Supplementary Topology via
         Path Compute Request/Reply.................................21
      5.4. Hierarchical Topology Abstraction Example................22
   6. Access Points and Virtual Network Access Points...............23
      6.1. Dual-Homing Scenario.....................................25
   7. Advanced ACTN Application: Multi-Destination Service..........26
      7.1. Pre-Planned End Point Migration..........................27
      7.2. On the Fly End-Point Migration...........................28
   8. Manageability Considerations..................................28
      8.1. Policy...................................................29
      8.2. Policy Applied to the Customer Network Controller........30
      8.3. Policy Applied to the Multi Domain Service Coordinator...30
      8.4. Policy Applied to the Physical Network Controller........30
   9. Security Considerations.......................................31
      9.1. CNC-MDSC Interface (CMI).................................32
      9.2. MDSC-PNC Interface (MPI).................................32
   10. References...................................................32
      10.1. Informative References..................................32
   11. Contributors.................................................33
   Authors' Addresses...............................................35
   APPENDIX A - Example of MDSC and PNC Functions Integrated in A
   Service/Network Orchestrator.....................................35
   APPENDIX B - Example of IP + Optical network with L3VPN service..36

1. Introduction

   The term "Traffic Engineered network" refers to a network that uses
   any connection-oriented technology under the control of a
   distributed or centralized control plane to support dynamic
   provisioning of end-to-end connectivity.  Traffic Engineered (TE)
   networks have a variety of mechanisms to facilitate the separation
   of the data plane and control plane including distributed signaling
   for path setup and protection, centralized path computation for
   planning and traffic engineering, and a range of management and
   provisioning protocols to configure and activate network resources.
   These mechanisms represent key technologies for enabling flexible
   and dynamic networking.  Some examples of networks that are in
   scope of this definition are optical networks, MPLS Transport
   Profile (MPLS-TP) networks [RFC5654], and MPLS Traffic Engineering
   (MPLS-TE) networks [RFC2702].

   One of the main drivers for Software Defined Networking (SDN)
   [RFC7149] is a decoupling of the network control plane from the
   data plane.  This separation has already been achieved for TE
   networks with the development of MPLS/GMPLS [RFC3945] and the Path
   Computation Element (PCE) [RFC4655].  One of the advantages of SDN
   is its logically centralized control regime that allows a global
   view of the underlying networks.  Centralized control in SDN helps
   improve network resource utilization compared with distributed
   network control.  For TE-based networks, a PCE may serve as a
   logically centralized path computation function.

   This document describes a set of management and control functions
   used to operate one or more TE networks to construct virtual
   networks that can be represented to customers and that are built
   from abstractions of the underlying TE networks so that, for
   example, a link in the customer's network is constructed from a
   path or collection of paths in the underlying networks.  We call
   this set of functions "Abstraction and Control of Traffic
   Engineered Networks" (ACTN).

2. Overview

   Three key aspects that need to be solved by SDN are:

     . Separation of service requests from service delivery so that
        the configuration and operation of a network is transparent
        from the point of view of the customer, but remains responsive
        to the customer's services and business needs.

     . Network abstraction: As described in [RFC7926], abstraction is
        the process of applying policy to a set of information about a
        TE network to produce selective information that represents
        the potential ability to connect across the network.  The
        process of abstraction presents the connectivity graph in a
        way that is independent of the underlying network
        technologies, capabilities, and topology so that the graph can
        be used to plan and deliver network services in a uniform way.

     . Coordination of resources across multiple independent networks
        and multiple technology layers to provide end-to-end services
        regardless of whether the networks use SDN or not.
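   The abstraction aspect can be illustrated with a small sketch.  The
   following Python fragment is not part of ACTN: the graph encoding,
   function names, and the cost-only policy are assumptions of this
   example.  It applies a policy to a detailed TE topology and exposes
   only the potential connectivity between border nodes:

```python
import heapq

def shortest_cost(graph, src, dst):
    """Dijkstra over {node: {neighbor: te_metric}}; returns cost or None."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None

def abstract_topology(graph, border_nodes, policy=lambda cost: True):
    """Expose only border-to-border connectivity as abstract links."""
    links = {}
    for a in border_nodes:
        for b in border_nodes:
            if a < b:
                cost = shortest_cost(graph, a, b)
                if cost is not None and policy(cost):
                    links[(a, b)] = {"te-metric": cost}
    return links

# Detailed domain: border nodes A and B, interior nodes i1 and i2.
# The interior nodes disappear from the abstract view.
g = {"A": {"i1": 1}, "i1": {"A": 1, "i2": 1, "B": 4},
     "i2": {"i1": 1, "B": 1}, "B": {"i1": 4, "i2": 1}}
print(abstract_topology(g, ["A", "B"]))  # → {('A', 'B'): {'te-metric': 3}}
```

   Note that the `policy` hook is where an operator would hide or
   filter connectivity (e.g., suppress links above a cost threshold)
   without revealing interior topology.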

   As networks evolve, the need to support distinct services,
   separated service orchestration, and resource abstraction has
   emerged as a set of key requirements for operators.  In order to
   support multiple customers each with its own view of and control of
   the server network, a network operator needs to partition (or
   "slice") or manage sharing of the network resources.  Network
   slices can be assigned to each customer for guaranteed usage, which
   is a step further than shared use of common network resources.

   Furthermore, each network represented to a customer can be built
   from virtualization of the underlying networks so that, for
   example, a link in the customer's network is constructed from a
   path or collection of paths in the underlying network.

   ACTN can facilitate virtual network operation via the creation of a
   single virtualized network or a seamless service.  This supports
   operators in viewing and controlling different domains (at any
   dimension: applied technology, administrative zones, or vendor-
   specific technology islands) and presenting virtualized networks to
   their customers.

   The ACTN framework described in this document facilitates:

     . Abstraction of the underlying network resources to higher-layer
        applications and customers [RFC7926].

     . Virtualization of particular underlying resources, whose
        selection criterion is the allocation of those resources to a
        particular customer, application or service [ONF-ARCH].

     . Network slicing of infrastructure to meet specific customers'
        service requirements.

     . Creation of a virtualized environment allowing operators to
        view and control multi-domain networks as a single virtualized
        network.

     . The presentation to customers of networks as a virtual network
        via open and programmable interfaces.

2.1. Terminology

   The following terms are used in this document.  Some of them are
   newly defined, and some reference existing definitions:
     . Domain: A domain [RFC4655] is any collection of network
        elements within a common sphere of address management or path
        computation responsibility.  Specifically within this document
        we mean a part of an operator's network that is under common
        management.  Network elements will often be grouped into
        domains based on technology types, vendor profiles, and
        geographic proximity.

     . Abstraction: This process is defined in [RFC7926].

     . Network Slicing: In the context of ACTN, a network slice is a
        collection of resources that is used to establish a logically
        dedicated virtual network over one or more TE networks.
        Network slicing allows a network provider to provide dedicated
        virtual networks for applications/customers over a common
        network infrastructure.  The logically dedicated resources are
        a part of the larger common network infrastructures that are
        shared among various network slice instances which are the
        end-to-end realization of network slicing, consisting of the
        combination of physically or logically dedicated resources.

     . Node: A node is a vertex on the graph representation of a TE
        topology.  In a physical network topology, a node corresponds
        to a physical network element (NE) such as a router.  In an
        abstract network topology, a node (sometimes called an
        abstract node) is a representation as a single vertex of one
        or more physical NEs and their connecting physical
        connections.  The concept of a node represents the ability to
        connect from any access to the node (a link end) to any other
        access to that node, although "limited cross-connect
        capabilities" may also be defined to restrict this
        functionality.  Just as network slicing and network
        abstraction may be applied recursively, so a node in one
        topology may be created by applying slicing or abstraction to
        the nodes in the underlying topology.

     . Link: A link is an edge on the graph representation of a TE
        topology.  Two nodes connected by a link are said to be
        "adjacent" in the TE topology.  In a physical network
        topology, a link corresponds to a physical connection.  In an
        abstract network topology, a link (sometimes called an
        abstract link) is a representation of the potential to connect
        a pair of points with certain TE parameters (see [RFC7926] for
        details).  Network slicing/virtualization and network
        abstraction may be applied recursively, so a link in one
        topology may be created by applying slicing and/or abstraction
        to the links in the underlying topology.

     . Abstract Link: The term "abstract link" is defined in
        [RFC7926].

     . Abstract Topology: The topology of abstract nodes and abstract
        links presented through the process of abstraction by a lower
        layer network for use by a higher layer network.

     . Virtual Network (VN): A VN is a network provided by a service
        provider to a customer for the customer to use in any way it
        wants as though it was a physical network.  There are two
        views of a VN as follows:

        a) The VN can be seen as a set of edge-to-edge links (a Type 1
           VN).  Each link is referred to as a VN member and is formed
           as an end-to-end tunnel across the underlying networks.
           Such tunnels may be constructed by recursive slicing or
           abstraction of paths in the underlying networks and can
           encompass edge points of the customer's network, access
           links, intra-domain paths, and inter-domain links.

        b) The VN can also be seen as a topology of virtual nodes and
           virtual links (a Type 2 VN).  The provider needs to map the
           VN to actual resource assignment, which is known as virtual
           network embedding.  The nodes in this case include physical
           end points, border nodes, and internal nodes as well as
           abstracted nodes.  Similarly, the links include physical
           access links, inter-domain links, and intra-domain links as
           well as abstract links.

        Clearly, a Type 1 VN is a special case of a Type 2 VN.

     . Access link: A link between a customer node and a provider
        node.

     . Inter-domain link: A link between domains under distinct
        management administration.

     . Access Point (AP): An AP is a logical identifier shared
        between the customer and the provider used to identify an
        access link.  The AP is used by the customer when requesting a
        VNS.

     . VN Access Point (VNAP): A VNAP is the binding between an AP
        and a given VN.
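   The two views of a VN can be pictured with a small data model.
   This is a hypothetical sketch, not an ACTN data model: the class
   and field names (VNMember, Type1VN, Type2VN) are invented here to
   mirror the definitions in the text:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class VNMember:
    """One edge-to-edge link of a VN, realized as an end-to-end tunnel."""
    src_ap: str   # Access Point at one edge of the customer's network
    dst_ap: str   # Access Point at the other edge

@dataclass
class Type1VN:
    """Type 1 view: the VN is just its set of VN members."""
    members: set = field(default_factory=set)

@dataclass
class Type2VN(Type1VN):
    """Type 2 view: adds an explicit virtual topology of nodes and links."""
    nodes: set = field(default_factory=set)   # virtual/abstract nodes
    links: set = field(default_factory=set)   # virtual/abstract links

# A Type 1 VN is the special case of a Type 2 VN with no extra topology,
# which the subclass relationship above reflects.
vn1 = Type1VN(members={VNMember("AP-1", "AP-2")})
vn2 = Type2VN(members={VNMember("AP-1", "AP-2")},
              nodes={"vN1"}, links={("AP-1", "vN1"), ("vN1", "AP-2")})
print(len(vn1.members), len(vn2.links))  # → 1 2
```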

2.2. VNS Model of ACTN

   A Virtual Network Service (VNS) is the service agreement between a
   customer and a provider to provide a VN.  There are three types of
   VNS defined in this document.

        o Type 1 VNS refers to a VNS in which the customer is allowed
           to create and operate a Type 1 VN.

        o Type 2a and 2b VNSs refer to the VNSs in which the customer
           is allowed to create and operate a Type 2 VN.  With a Type
           2a VNS, the VN is statically created at service
           configuration time and the customer is not allowed to
           change the topology (e.g., by adding or deleting abstract
           nodes and links).  A Type 2b VNS is the same as a Type 2a
           VNS except that the customer is allowed to make dynamic
           changes to the initial topology created at service
           configuration time.  See Section 3 for details.

   VN Operations are functions that a customer can exercise on a VN
   depending on the agreement between the customer and the provider.

        o VN Creation allows a customer to request the instantiation
           of a VN.  This could be through off-line pre-configuration
           or through dynamic requests specifying attributes to
           satisfy the customer's Service Level Agreement (SLA)
           objectives.

        o Dynamic Operations allow a customer to modify or delete the
           VN.  The customer can further act upon the virtual network
           to create/modify/delete virtual links and nodes.  These
           changes will result in subsequent tunnel management in the
           operator's networks.
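   As a toy illustration of how the three VNS types differ in the
   operations they permit after service configuration time, the
   following sketch encodes the distinction.  The operation names and
   the table itself are invented for this example and are not defined
   by this document:

```python
# Which operations each VNS type permits after service configuration
# time: Type 2a freezes the topology, Type 2b allows dynamic changes.
ALLOWED = {
    "type1":  {"create-vn", "delete-vn", "modify-member"},
    "type2a": {"create-vn", "delete-vn"},                  # topology frozen
    "type2b": {"create-vn", "delete-vn", "add-node", "delete-node",
               "add-link", "delete-link"},                 # dynamic topology
}

def permitted(vns_type, operation):
    """Check whether a VNS of the given type permits an operation."""
    return operation in ALLOWED[vns_type]

print(permitted("type2a", "add-link"), permitted("type2b", "add-link"))
# → False True
```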

   There are three key entities in the ACTN VNS model:

     - Customers
     - Service Providers
     - Network Providers

   These entities are related in a three tier model as shown in
   Figure 1.

                           +----------------------+
                           |       Customer       |
                           +----------------------+
                                      |
                     VNS       ||     |    /\     VNS
                    Request    ||     |    ||    Reply
                               \/     |    ||
                           +----------------------+
                           |  Service Provider    |
                           +----------------------+
                           /          |            \
                          /           |             \
                          /            |              \
     +------------------+   +------------------+   +------------------+
     |Network Provider 1|   |Network Provider 2|   |Network Provider 3|
     +------------------+   +------------------+   +------------------+

                    Figure 1: The Three Tier Model.

   The commercial roles of these entities are described in the
   following sections.
2.2.1. Customers

   Basic customers include fixed residential users, mobile users, and
   small enterprises.  Each requires a small amount of resources and
   is characterized by steady requests (relatively time invariant).
   Basic customers do not modify their services themselves; if a
   service change is needed, it is performed by the provider as a
   proxy.

   Advanced customers include enterprises, governments, and utility
   companies.  Such customers can ask for both point-to-point and
   multipoint connectivity with high resource demands varying
   significantly in time.  This is one of the reasons why a bundled
   service offering is not enough and it is desirable to provide each
   advanced customer with a customized virtual network service.
   Advanced customers may own dedicated virtual resources, or share
   resources. They may also have the ability to modify their service
   parameters within the scope of their virtualized environments. The
   primary focus of ACTN is Advanced Customers.

   As customers are geographically spread over multiple network
   provider domains, they have to interface to multiple providers and
   may have to support multiple virtual network services with different
   underlying objectives set by the network providers.  To enable these
   customers to support flexible and dynamic applications they need to
   control their allocated virtual network resources in a dynamic
   fashion, and that means that they need a view of the topology that
   spans all of the network providers.  Customers of a given service
   provider can in turn offer a service to other customers in a
   recursive way.

2.2.2. Service Providers

   In the scope of ACTN, service providers deliver VNSs to their
   customers.  Service providers may or may not own physical network
   resources (i.e., may or may not be network providers as described
   in Section 2.2.3).  When a service provider is the same as the
   network provider, this is similar to existing VPN models applied to
   a single provider.  This approach works well when the customer
   maintains a single interface with a single provider.  When a
   customer spans multiple independent network provider domains, it
   becomes hard to facilitate the creation of end-to-end virtual
   network services with this model.

   When network providers only provide infrastructure, while distinct
   service providers interface to the customers, the service providers
   are themselves customers of the network infrastructure providers.
   One service provider may need to keep multiple independent network
   providers because its end-users span geographically across multiple
   network provider domains.

   The ACTN network model is predicated upon this three-tier model and
   is summarized in Figure 2:

                       +----------------------+
                       |       customer       |
                       +----------------------+
                                  |
                  VNS       ||    |   /\     VNS
                 Request    ||    |   ||    Reply
                            \/    |   ||
                       +----------------------+
                       |  Service Provider    |
                       +----------------------+
                       /         |            \
                      /          |             \
                     /           |              \
                    /            |               \
   +------------------+   +------------------+   +------------------+
   |Network Provider 1|   |Network Provider 2|   |Network Provider 3|
   +------------------+   +------------------+   +------------------+

                       Figure 2: Three-tier model

   There can be multiple service providers to which a customer may
   interface.

   There are multiple types of service providers, for example:

     . Data Center providers can be viewed as a service provider type
        as they own and operate data center resources for various WAN
        customers, and they can lease physical network resources from
        network providers.
     . Internet Service Providers (ISP) are service providers of
        internet services to their customers while leasing physical
        network resources from network providers.
     . Mobile Virtual Network Operators (MVNO) provide mobile services
        to their end-users without owning the physical network
        infrastructure.

2.2.3. Network Providers

   Network Providers are the infrastructure providers that own the
   physical network resources and provide network resources to their
   customers. The layered model described in this architecture
   separates the concerns of network providers and customers, with
   service providers acting as aggregators of customer requests.

3. Virtual Network Service

   A Virtual Network Service (VNS) is requested by the customer and
   negotiated with the provider.  There are three types of VNS defined
   in this document.

   Type 1 VNS refers to a VNS in which the customer is allowed to
   create and operate a Type 1 VN.  A Type 1 VN is a VN that comprises
   a set of end-to-end tunnels from a customer point of view, where
   each tunnel is referred to as a VN member.  With Type 1 VNS, the
   network operator does not need to provide an additional abstract VN
   topology associated with the Type 1 VN.

   Type 2a VNS refers to a VNS in which the customer is allowed to
   create and operate a Type 2 VN, but is not allowed to change the
   topology once it is configured at service configuration time.  A
   Type 2 VN is an abstract VN topology that may comprise
   virtual/abstract nodes and links.  The nodes in this case may
   include physical customer end points, border nodes, and internal
   nodes as well as abstracted nodes.  Similarly, the links may
   include physical access links, inter-domain links, and intra-domain
   links as well as abstract links.

   Type 2b VNS refers to a VNS in which the customer is allowed to
   create and operate a Type 2 VN and is allowed to dynamically change
   the abstract VN topology from the abstract VN topology initially
   configured at service configuration time.

   From an implementation standpoint, the differentiation between Type
   2a VNS and Type 2b VNS might be fulfilled via local policy.

   In all types of VNS, the customer can specify a set of service
   related parameters such as connectivity type, VN traffic matrix
   (e.g., bandwidth, latency, diversity, etc.), VN survivability, VN
   service policy, and other characteristics.
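   The VNS types and service-related parameters can be illustrated
   with a small data-structure sketch.  Python is used informally
   here; none of these field names are defined by this document, and
   the actual encoding of VNS parameters is out of scope:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical field names; this document does not define a concrete
# encoding for VNS parameters.
@dataclass
class VNMember:
    src: str              # customer end point
    dst: str              # customer end point
    bandwidth_mbps: int   # entry in the VN traffic matrix
    max_latency_ms: float

@dataclass
class VNSRequest:
    vns_type: str                    # "type1", "type2a", or "type2b"
    connectivity: str                # e.g., "p2p" or "multipoint"
    traffic_matrix: List[VNMember] = field(default_factory=list)
    survivability: str = "none"      # e.g., "protection", "restoration"

# A Type 1 VNS is expressed as a set of end-to-end tunnels (VN members).
req = VNSRequest(vns_type="type1", connectivity="p2p",
                 traffic_matrix=[VNMember("CE-A", "CE-B", 100, 20.0)])
```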

4. ACTN Base Architecture

   This section provides a high-level model of ACTN showing the
   interfaces and the flow of control between components.

   The ACTN architecture is aligned with the ONF SDN architecture
   [ONF-ARCH] and presents a 3-tier reference model.  It allows for
   hierarchy and recursiveness not only of SDN controllers but also of
   traditionally controlled domains that use a control plane.  It
   defines three types of controllers depending on the functionalities
   they implement.  The main functionalities that are identified are:

      . Multi-domain coordination function: This function oversees the
         specific aspects of the different domains and builds a single
         abstracted end-to-end network topology in order to coordinate
         end-to-end path computation and path/service provisioning.
         Domain sequence path calculation/determination is also a part
         of this function.

      . Virtualization/Abstraction function: This function provides an
         abstracted view of the underlying network resources for use by
         the customer - a customer may be the client or a higher level
         controller entity.  This function includes network path
         computation based on customer service connectivity request
         constraints, path computation based on the global network-wide
         abstracted topology, and the creation of an abstracted view of
         network resources allocated to each customer.  These
         operations depend on customer-specific network objective
         functions and customer traffic profiles.

      . Customer mapping/translation function: This function maps
         customer requests/commands into network provisioning requests
         that can be sent to the Physical Network Controller (PNC)
         according to business policies provisioned statically or
         dynamically at the OSS/NMS.  Specifically, it provides mapping
         and translation of a customer's service request into a set of
         parameters that are specific to a network type and technology
         such that the network configuration process is made possible.

      . Virtual service coordination function: This function translates
         customer service-related information into virtual network
         service operations in order to seamlessly operate virtual
         networks while meeting a customer's service requirements.  In
         the context of ACTN, service/virtual service coordination
         includes a number of service orchestration functions such as
         multi-destination load balancing, guarantees of service
         quality, and bandwidth and throughput.  It also includes
         notifications for service fault and performance degradation
         and so forth.

   Figure 3 depicts the base ACTN architecture with three controller
   types and the corresponding interfaces between these controllers.
   The types of controller defined in the ACTN architecture are as
   follows:

      . CNC - Customer Network Controller
      . MDSC - Multi Domain Service Coordinator
      . PNC - Physical Network Controller

   Figure 3 also shows the following interfaces:

      . CMI - CNC-MDSC Interface
      . MPI - MDSC-PNC Interface
      . SBI - South Bound Interface
             +--------------+        +---------------+        +--------------+
             |    CNC-A     |        |     CNC-B     |        |     CNC-C    |
             |(DC provider) |        |     (ISP)     |        |     (MVNO)   |
             +--------------+        +---------------+        +--------------+
                  \                          |                           /
   Business        \                         |                          /
   Boundary  =======\========================|=========================/=======
   Between           \                       | CMI                    /
   Customer &         -----------            |          --------------
   Network Provider              \           |         /
                            +-----------------------+
                            |         MDSC          |
                            +-----------------------+
                            /           |         \
                     ------------            |MPI       ----------------
                    /                        |                          \
               +-------+                 +-------+                   +-------+
               |  PNC  |                 |  PNC  |                   |  PNC  |
               +-------+                 +-------+                   +-------+
                  | GMPLS               /      |                      /   \
                  | trigger            /       |SBI                  /     \
               --------           -----        |                    /       \
              (        )         (     )       |                   /         \
             -         -        ( Phys. )      |                  /       -----
             (  GMPLS   )        ( Net )       |                 /       (     )
            (  Physical  )         ----        |                /       ( Phys. )
             (  Network )                   -----        -----           ( Net )
              -        -                   (     )      (     )           -----
               (       )                  (  Phys. )   (  Phys. )
               --------                    ( Net )      ( Net )
                                            -----        -----

                     Figure 3: ACTN Base Architecture
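   As an informal illustration of the control flow in Figure 3, the
   following Python sketch shows a VNS request crossing the CMI and
   being split by the MDSC into per-domain provisioning requests, one
   MPI per PNC.  All class and method names are hypothetical; this is
   not part of the ACTN specification:

```python
# Hypothetical sketch of the CNC -> MDSC -> PNC flow of Figure 3.
class PNC:
    def __init__(self, domain):
        self.domain = domain

    def provision(self, segment):
        # Configure the physical network for one segment (via the SBI).
        return {"domain": self.domain, "segment": segment, "status": "up"}

class MDSC:
    def __init__(self, pncs):
        self.pncs = pncs  # one MPI per PNC (a 1:N relationship)

    def handle_vns_request(self, vns):
        # Customer mapping/translation plus multi-domain coordination:
        # split the end-to-end request into per-domain segments.
        segments = [(pnc, f"{vns}-seg-{i}") for i, pnc in enumerate(self.pncs)]
        return [pnc.provision(seg) for pnc, seg in segments]

class CNC:
    def __init__(self, mdsc):
        self.mdsc = mdsc

    def request_vns(self, name):
        return self.mdsc.handle_vns_request(name)  # over the CMI

mdsc = MDSC([PNC("domain-1"), PNC("domain-2"), PNC("domain-3")])
replies = CNC(mdsc).request_vns("vns-A")
# One provisioning reply per PNC domain.
```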

4.1. Customer Network Controller

   A Virtual Network Service is instantiated by the Customer Network
   Controller via the CNC-MDSC Interface (CMI). As the Customer Network
   Controller directly interfaces to the applications, it understands
   multiple application requirements and their service needs. It is
   assumed that the Customer Network Controller and the MDSC have a
   common knowledge of the end-point interfaces based on their business
   negotiations prior to service instantiation. End-point interfaces
   refer to customer-network physical interfaces that connect customer
   premise equipment to network provider equipment.

4.2. Multi Domain Service Coordinator

   The Multi Domain Service Coordinator (MDSC) sits between the CNC
   that issues connectivity requests and the Physical Network
   Controllers (PNCs) that manage the physical network resources. The
   MDSC can be collocated with the PNC.

   The internal system architecture and building blocks of the MDSC are
   out of the scope of ACTN. Some examples can be found in the
   Application Based Network Operations (ABNO) architecture [RFC7491]
   and the ONF SDN architecture [ONF-ARCH].

   The MDSC is the only building block of the architecture that can
   implement all four ACTN main functions, i.e., multi domain
   coordination, virtualization/abstraction, customer
   mapping/translation, and virtual service coordination. The first two
   functions of the MDSC, namely, multi domain coordination and
   virtualization/abstraction are referred to as network-related
   functions while the last two functions, namely, customer
   mapping/translation and virtual service coordination are referred to
   as service-related functions.
   The key point of the MDSC (and of the whole ACTN framework) is
   detaching the network and service control from underlying technology
   to help the customer express the network as desired by business
   needs. The MDSC envelopes the instantiation of the right technology
   and network control to meet business criteria. In essence it
   controls and manages the primitives to achieve functionalities as
   desired by the CNC.

   In order to allow for multi-domain coordination a 1:N relationship
   must be allowed between MDSCs and between MDSCs and PNCs (i.e., 1
   parent MDSC and N child MDSCs, or 1 MDSC and N PNCs).

   In addition to that, it could also be possible to have an M:1
   relationship between MDSCs and PNC to allow for network resource
   partitioning/sharing among different customers not necessarily
   connected to the same MDSC (e.g., different service providers).

4.3. Physical Network Controller

   The Physical Network Controller (PNC) oversees configuring the
   network elements, monitoring the topology (physical or virtual) of
   the network, and passing information about the topology (either raw
   or abstracted) to the MDSC.

   The internal architecture of the PNC, its building blocks, and the
   way it controls its domain are out of the scope of ACTN. Some
   examples can be found in the Application Based Network Operations
   (ABNO) architecture [RFC7491] and the ONF SDN architecture
   [ONF-ARCH].

   The PNC, in addition to being in charge of controlling the physical
   network, is able to implement two of the four main ACTN functions:
   multi-domain coordination and virtualization/abstraction.
   Note that from an implementation point of view it is possible to
   integrate one or more MDSC functions and one or more PNC functions
   within the same controller.

4.4. ACTN Interfaces

   The network has to provide open, programmable interfaces, through
   which customer applications can create, replace and modify virtual
   network resources and services in an interactive, flexible and
   dynamic fashion while having no impact on other customers. Direct
   customer control of transport network elements and virtualized
   services is not perceived as a viable proposition for transport
   network providers due to security and policy concerns among other
   reasons. In addition, the network control plane for transport
   networks has been separated from the data plane and as such it is
   not viable for the customer to directly interface with transport
   network elements.

     . CMI Interface: The CNC-MDSC Interface (CMI) is an interface
        between a CNC and an MDSC. As depicted in Figure 3, the CMI is
        a business boundary between customer and network provider. It
        is used to request virtual network services required for the
         applications.  Note that all service-related information,
         such as specific service properties including virtual network
         service type, topology, bandwidth, and constraint
         information, is conveyed over this interface.  Most of the
         information over this interface is technology agnostic;
         however, there are some cases, e.g., access link
         configuration, where it should be possible to explicitly
         request a VN to be created at a given layer in the network
         (e.g., ODU VN or MPLS VN).

     . MPI Interface: The MDSC-PNC Interface (MPI) is an interface
        between an MDSC and a PNC. It communicates the creation
        requests for new connectivity or for bandwidth changes in the
        physical network. In multi-domain environments, the MDSC needs
        to establish multiple MPIs, one for each PNC, as there is one
        PNC responsible for control of each domain. The MPI could have
        different degrees of abstraction and present an abstracted
        topology hiding technology specific aspects of the network or
        convey technology specific parameters to allow for path
        computation at the MDSC level. Please refer to CCAMP Transport
        NBI work for the latter case [Transport NBI].

     . SBI Interface: This interface is out of the scope of ACTN. It
         is shown in Figure 3 for reference only.

   Note that for all three interfaces, when technology-specific
   information needs to be included, it is carried as an add-on on top
   of the general abstract topology.  From a general topology
   abstraction standpoint, all interfaces are still recursive in
   nature.
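   The recursive nature of the abstraction can be sketched as follows.
   This is an informal Python illustration with a hypothetical
   topology encoding; the point is that the PNC-to-MDSC and
   MDSC-L-to-MDSC-H abstractions are the same operation applied at
   successive levels:

```python
# Illustrative sketch: each controller exposes upward only border nodes
# and abstract links, hiding internal detail.  The topology encoding
# (dict of "nodes" and "links") is hypothetical.
def abstract(topology):
    """Collapse a domain topology to its border nodes.

    Internal nodes are hidden; border nodes are connected pairwise by
    abstract links.
    """
    borders = [n for n, attrs in topology["nodes"].items() if attrs["border"]]
    abstract_links = [(a, b) for i, a in enumerate(borders)
                      for b in borders[i + 1:]]
    return {"nodes": {n: {"border": True} for n in borders},
            "links": abstract_links}

domain = {"nodes": {"B1": {"border": True}, "B2": {"border": True},
                    "I1": {"border": False}, "I2": {"border": False}},
          "links": [("B1", "I1"), ("I1", "I2"), ("I2", "B2")]}

view = abstract(domain)  # what a PNC could present over the MPI
# Applying the same function again (MDSC-L -> MDSC-H) yields the same
# view, which is what "recursive in nature" means here.
```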

5. Advanced ACTN architectures

   This section describes advanced forms of ACTN architectures as
   possible implementation choices.

5.1. MDSC Hierarchy for scalability

   A hierarchy of MDSCs can be foreseen for many reasons, among which
   are scalability, administrative choices, or the need to put
   together different layers and technologies in the network.  In the
   case where there is a hierarchy of MDSCs, we introduce the
   higher-level MDSC (MDSC-H) and the lower-level MDSC (MDSC-L); the
   interface between them is a recursive instance of the MPI.  An
   implementation choice could foresee the usage of one MDSC-L for all
   the PNCs related to a given network layer or technology (e.g.,
   IP/MPLS), a different MDSC-L for the PNCs related to another
   layer/technology (e.g., OTN/WDM), and an MDSC-H to coordinate them.

   Figure 4 shows this case.

                                       +--------+
                                       |   CNC  |
                                       +--------+
                                            |
                                            |
                                      +----------+
                              --------|  MDSC-H  |--------
                              |       +----------+       |
                              |                          |
                         +---------+               +---------+
                         |  MDSC-L |               |  MDSC-L |
                         +---------+               +---------+

                         Figure 4: MDSC Hierarchy

   Note that both the MDSC-H and the MDSC-L in general cases implement
   all four functions of the MDSC discussed in Section 4.2.

5.2. Functional Split of MDSC Functions in Orchestrators

   Another implementation choice could foresee the separation of MDSC
   functions into two groups (i.e., one group for service-related
   functions and another group for network-related functions) which
   will result in a service orchestrator for providing service-related
   functions of MDSC and other non-ACTN functions and a network
   orchestrator for providing network-related functions of MDSC and
   other non-ACTN functions. Figure 5 shows this case and it also
   depicts the mapping between ACTN architecture and the YANG service
   model architecture described in [Service-YANG]. This mapping is
   helpful for the readers who are not familiar with some TEAS specific
   terminology used in this document. A number of key ACTN interfaces
   exist for deployment and operation of ACTN-based networks. These are
   highlighted in Figure 5 (ACTN Interfaces).

                          +------------------------------+
                          |                     Customer |
                          |   +-----+   +----------+     |
                          |   | CNC |   |Other fns.|     |
                          |   +-----+   +----------+     |
                          +------------------------------+
                                             | CMI  Customer Service Model
                                             |
                    +-----------------------------------------------+
            ********|**********************    Service Orchestrator |
            * MDSC  |  +------+  +------+ *  +-----------+          |
            *       |  | MDSC |  | MDSC | *  | Other fns.|          |
            *       |  |  F1  |  |  F2  | *  | (non-ACTN)|          |
            *       |  +------+  +------+ *  +-----------+          |
            *       +---------------------*-------------------------+
            *                             *  |  Service Delivery Model
            *                             *  |
            *       +---------------------*-------------------------+
            *       |  +------+  +------+ *  +-----------+          |
            *       |  | MDSC |  | MDSC | *  | Other fns.|          |
            *       |  |  F3  |  |  F4  | *  | (non-ACTN)|          |
            *       |  +------+  +------+ *  +-----------+          |
            ********|**********************    Network Orchestrator |
                    +-----------------------------------------------+
                                             | MPI  Network Configuration Model
                                             |
                      +-------------------------------------------+
                      |                     Domain Controller     |
                      |  +------+           +-----------+         |
                      |  | PNC  |           | Other fns.|         |
                      |  +------+           | (non-ACTN)|         |
                      |                     +-----------+         |
                      +-------------------------------------------+
                                             | SBI  Device Configuration Model
                                             |
                                         +--------+
                                         | Device |
                                         +--------+

         Figure 5: ACTN Architecture in the Context of YANG Service
                                   Models

   Note that this is a functional architecture: an implementation and
   deployment might collocate one or more of the functional
   components.

   In Figure 5, the MDSC F1 and F2 correspond to customer
   mapping/translation and virtual service coordination, respectively,
   which are the MDSC service-related functions as defined in Section
   4.  MDSC F3 and F4 correspond to multi-domain coordination and
   virtualization/abstraction, respectively, which are the MDSC
   network-related functions as defined in Section 4.  In some
   implementations, MDSC F1 and F2 can be implemented as part of a
   Service Orchestrator which may support other non-ACTN functions.
   Likewise, MDSC F3 and F4 can be implemented as part of a Network
   Orchestrator which may support other non-ACTN functions.

   Also note that the PNC is not the same as a domain controller.  A
   domain controller in general has a larger set of functions than
   that of the PNC.  The main functions of the PNC are explained in
   Section 4.3.  Likewise, the Customer has a larger set of functions
   than that of the CNC.

   The customer service model describes a service as offered or
   delivered to a customer by a network operator, as defined in
   [Service-YANG].  The CMI is a subset of a customer service model to
   support VNS.  This model encompasses other non-TE/non-ACTN models
   to control non-ACTN services (e.g., L3SM).

   The service delivery model is used by a network operator to define
   and configure how a service is provided by the network, as defined
   in [Service-YANG].  This model is similar to the MPI model, as the
   network-related functions of the MDSC, i.e., F3 and F4, provide an
   abstract topology view of the E2E network to the service-related
   functions of the MDSC, i.e., F1 and F2, which translate the
   customer's request at the CMI into the network configuration at the
   MPI.

   The network configuration model is used by a network orchestrator
   to provide a network-level configuration model to a controller, as
   defined in [Service-YANG].  The MPI is a subset of the network
   configuration model to support TE configuration.  This model
   encompasses the MPI model plus other non-TE/non-ACTN models to
   control non-ACTN services (e.g., L3VPN).

   The device configuration model is used by a controller to configure
   physical network elements.  Note that the PNC functions can be
   implemented as part of an SDN domain controller, a Network
   Management System (NMS), an Element Management System (EMS), an
   active PCE-based controller [Centralized], or any other means to
   dynamically control a set of nodes implementing an NBI compliant
   with the ACTN specification.
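   The chain of models can be illustrated with an informal Python
   sketch.  The translation functions below are hypothetical and only
   show how each stage consumes the model produced by the stage above
   it:

```python
# Illustrative sketch of the chain of models in Figure 5: a customer
# service request (CMI) is translated step by step down to device-level
# configuration.  The stage names follow [Service-YANG]; the functions
# themselves are hypothetical.
def to_service_delivery(customer_service):
    # MDSC F1/F2: customer mapping/translation, virtual service coordination
    return {"service": customer_service["vns"], "policy": "default"}

def to_network_configuration(service_delivery):
    # MDSC F3/F4: multi-domain coordination, virtualization/abstraction
    return {"te-tunnels": [service_delivery["service"]]}

def to_device_configuration(network_configuration):
    # PNC: per-node configuration pushed over the SBI
    return [{"node": n, "tunnel": t}
            for t in network_configuration["te-tunnels"]
            for n in ("node-1", "node-2")]

cfg = to_device_configuration(
    to_network_configuration(
        to_service_delivery({"vns": "vns-A"})))
```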

6. Topology Abstraction Method

   This section discusses topology abstraction factors, types and their
   context in ACTN architecture. Topology abstraction is useful in ACTN
   architecture as dynamically control a way to scale multi-domain network operation. Note set of nodes and that this is the abstraction performed by the
   implementing an NBI compliant with ACTN specification.

   A PNC to the MDSC or by domain includes all the MDSC-L to resources under the MDSC-H, and that this is control of a
   single PNC.  It can be composed of different from the VN
   Type 2 topology (that is created routing domains and negotiated between the CNC
   administrative domains, and the MDSC as part of the VNS). resources may come from different
   layers.  The purpose of topology abstraction
   discussed in this section interconnection between PNC domains is for an efficient internal network
   operation based on abstraction principle.

6.1. Abstraction Factors

   This section provides abstraction factors illustrated in the
   Figure 3.

                     _______                        _______
                   _(       )_                    _(       )_
                 _(           )_                _(           )_
                (               )     Border   (               )
               (     PNC     ------   Link   ------     PNC     )
              (   Domain X  |Border|========|Border|  Domain Y   )
              (             | Node |        | Node |             )
               (             ------          ------             )
                (_             _)              (_             _)
                  (_         _)                  (_         _)
                    (_______)                      (_______)

                         Figure 3: PNC Domain Borders

3.4. ACTN Interfaces

   Direct customer control of transport network elements and
   virtualized services is not a viable proposition for network
   providers due to security and policy concerns.  In addition, some
   networks may operate a control plane and as such it is not
   practical for the customer to directly interface with network
   elements.  Therefore, the network has to provide open, programmable
   interfaces, through which customer applications can create, replace
   and modify virtual network resources and services in an
   interactive, flexible and dynamic fashion while having no impact on
   other customers.

   Three interfaces exist in the ACTN architecture as shown in Figure
   2.

      . CMI: The CNC-MDSC Interface (CMI) is an interface between a
         CNC and an MDSC.  The CMI is a business boundary between the
         customer and the network provider.  It is used to request a
         VNS for an application.  All service-related information is
         conveyed over this interface (such as the VNS type, topology,
         bandwidth, and service constraints).  Most of the information
         over this interface is transport technology agnostic (the
         customer is unaware of the network technologies used to
         deliver the service), but there are some cases (e.g., access
         link configuration) where it is necessary to specify
         technology-specific details.

      . MPI: The MDSC-PNC Interface (MPI) is an interface between an
         MDSC and a PNC.  It communicates requests for new
         connectivity or for bandwidth changes in the physical
         network.  In multi-domain environments, the MDSC needs to
         communicate with multiple PNCs each responsible for control
         of a domain.  The MPI presents an abstracted topology to the
         MDSC hiding technology specific aspects of the network and
         hiding topology according to policy.

      . SBI: The Southbound Interface (SBI) is out of the scope of
         ACTN.  Many different SBIs have been defined for different
         environments, technologies, standards organizations, and
         vendors.  It is shown in Figure 3 for reference reasons only.
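   The split of concerns between the CMI and the lower interfaces can
   be illustrated with a small data-model sketch.  This is a
   hypothetical illustration only, not a normative encoding: the class
   and field names (VnsRequest, vns_type, and so on) are invented for
   this example, and real CMI encodings are defined by the ACTN YANG
   models.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the service-related information carried over
# the CMI: VNS type, topology endpoints, bandwidth, and constraints.
# Field names are illustrative only, not a normative ACTN encoding.
@dataclass
class VnsRequest:
    vns_type: str                 # e.g., "vn-type-1" or "vn-type-2"
    endpoints: List[str]          # customer access points (APs)
    bandwidth_mbps: int
    constraints: dict = field(default_factory=dict)

    def is_technology_agnostic(self) -> bool:
        # Most CMI information carries no transport-technology detail;
        # access-link configuration is a documented exception.
        return "access-link" not in self.constraints

request = VnsRequest(
    vns_type="vn-type-1",
    endpoints=["AP1", "AP2"],
    bandwidth_mbps=1000,
    constraints={"max-latency-ms": 20},
)
print(request.is_technology_agnostic())  # True
```

   The design point sketched here is that the customer expresses what
   it needs (endpoints, bandwidth, constraints) and not how the
   network delivers it.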

4. Advanced ACTN Architectures

   This section describes advanced configurations of the ACTN
   architecture.

4.1. MDSC Hierarchy

   A hierarchy of MDSCs can be foreseen for many reasons, among which
   are scalability, administrative choices, or putting together
   different layers and technologies in the network.  In the case
   where there is a hierarchy of MDSCs, we introduce the terms
   higher-level MDSC (MDSC-H) and lower-level MDSC (MDSC-L).  The
   interface between them is a recursion of the MPI.  An
   implementation of an MDSC-H makes provisioning requests as normal
   using the MPI, but an MDSC-L must be able to receive requests as
   normal at the CMI and also at the MPI.  The hierarchy of MDSCs can
   be seen in Figure 4.

   Another implementation choice could foresee the usage of an MDSC-L
   for all the PNCs related to a given network layer or technology
   (e.g. IP/MPLS) and a different MDSC-L for the PNCs related to
   another layer/technology (e.g. OTN/WDM) and an MDSC-H to coordinate
   them.

                                       +--------+
                                       |   CNC  |
                                       +--------+
                                            |          +-----+
                                            | CMI      | CNC |
                                      +----------+     +-----+
                               -------|  MDSC-H  |----    |
                              |       +----------+    |   | CMI
                           MPI |                   MPI |   |
                              |                       |   |
                         +---------+               +---------+
                         |  MDSC-L |               |  MDSC-L |
                         +---------+               +---------+
                       MPI |     |                   |     |
                           |     |                   |     |
                        -----   -----             -----   -----
                       | PNC | | PNC |           | PNC | | PNC |
                        -----   -----             -----   -----

                         Figure 4: MDSC Hierarchy
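   The recursion of the MPI in this hierarchy can be sketched in a few
   lines of code.  This is an illustrative toy model, not a prescribed
   implementation: the class and method names (MdscH, MdscL, handle,
   provision) are invented for this example.

```python
# Hypothetical sketch of MPI recursion in an MDSC hierarchy: an MDSC-L
# accepts requests both from a CNC (CMI) and from an MDSC-H (MPI),
# while an MDSC-H fans requests out over the MPI to its children.
class Pnc:
    def __init__(self, domain):
        self.domain = domain
    def provision(self, request):
        return f"{self.domain}:{request}"

class MdscL:
    def __init__(self, pncs):
        self.pncs = pncs
    # The same handling applies whether the request arrived at the
    # CMI (from a CNC) or at the MPI (from an MDSC-H).
    def handle(self, request):
        return [pnc.provision(request) for pnc in self.pncs]

class MdscH:
    def __init__(self, children):
        self.children = children   # MDSC-L instances, reached over MPI
    def handle(self, request):
        results = []
        for child in self.children:
            results.extend(child.handle(request))
        return results

# One MDSC-L per layer/technology, coordinated by an MDSC-H.
packet = MdscL([Pnc("IP-X"), Pnc("IP-Y")])
optical = MdscL([Pnc("OTN-X"), Pnc("OTN-Y")])
mdsc_h = MdscH([packet, optical])
print(mdsc_h.handle("vn1"))
```

   The same MdscL object could equally be driven directly by a CNC,
   which is the point of the MPI recursion: an MDSC-L does not care
   whether its client is a customer controller or a higher-level MDSC.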

4.2. Functional Split of MDSC Functions in Orchestrators

   An implementation choice could separate the MDSC functions into two
   groups, one group for service-related functions and the other for
   network-related functions.  This enables the implementation of a
   service orchestrator that provides the service-related functions of
   the MDSC and a network orchestrator that provides the network-
   related functions of the MDSC.  This split is consistent with the
   YANG service model architecture described in [Service-YANG].  Figure
   5 depicts this and shows how the ACTN interfaces may map to YANG
   models.

                                 +--------------------+
                                 |           Customer |
                                 |   +-----+          |
                                 |   | CNC |          |
                                 |   +-----+          |
                                 +--------------------+
                                          CMI |  Customer Service Model
                                              |
                         +---------------------------------------+
                         |                          Service      |
                 ********|***********************   Orchestrator |
                 * MDSC  |  +-----------------+ *                |
                 *       |  | Service-related | *                |
                 *       |  |    Functions    | *                |
                 *       |  +-----------------+ *                |
                 *       +----------------------*----------------+
                 *                              *  |  Service Delivery Model
                 *                              *  |
                 *       +----------------------*----------------+
                 *       |                      *   Network      |
                 *       |  +-----------------+ *   Orchestrator |
                 *       |  | Network-related | *                |
                 *       |  |    Functions    | *                |
                 *       |  +-----------------+ *                |
                 ********|***********************                |
                         +---------------------------------------+
                                              MPI |  Network Configuration Model
                                                  |
                                    +------------------------+
                                    |            Domain      |
                                    |  +------+  Controller  |
                                    |  | PNC  |              |
                                    |  +------+              |
                                    +------------------------+
                                              SBI |  Device Configuration Model
                                                  |
                                              +--------+
                                              | Device |
                                              +--------+

       Figure 5: ACTN Architecture in the Context of the YANG Service
                                   Models

5. Topology Abstraction Methods

   Topology abstraction is described in [RFC7926].  This section
   discusses topology abstraction factors, types, and their context in
   the ACTN architecture.

   Abstraction in ACTN is performed by the PNC when presenting
   available topology to the MDSC, or by an MDSC-L when presenting
   topology to an MDSC-H.  This function is different to the creation
   of a VN (and particularly a Type 2 VN) which is not abstraction but
   construction of virtual resources.

5.1. Abstraction Factors

   As discussed in [RFC7926], abstraction is tied with policy of the
   networks.  For instance, per an operational policy, the PNC would
   not provide any technology specific details (e.g., optical
   parameters for WSON) in the abstract topology it provides to the
   MDSC.

   There are many factors that may impact the choice of abstraction:

   - Abstraction depends on the nature of the underlying TE domain
     networks.  For instance, packet networks may be abstracted with
     fine granularity while abstraction of optical networks depends on
     the switching units (such as wavelengths) and the end-to-end
     continuity and cross-connect limitations within the network.

   - Abstraction also depends on the capability of the PNCs.  As
     abstraction requires hiding details of the underlying network
     resources, the PNC's capability to run algorithms impacts the
     feasibility of abstraction.  Some PNC may not have the ability to
     abstract native topology while other PNCs may have the ability to
     use sophisticated algorithms.

   - Abstraction is a tool that can improve scalability.  Where the
     native network resource information is of large size there is a
     specific scaling benefit to abstraction.

   - The proper abstraction level may depend on the frequency of
     topology updates and vice versa.

   - The nature of the MDSC's support for technology-specific
     parameters impacts the degree/level of abstraction.  If the MDSC
     is not capable of handling such parameters then a higher level of
     abstraction is needed.

   - In some cases, the PNC is required to hide key internal
     topological data from the MDSC.  Such confidentiality can be
     achieved through abstraction.

5.2. Abstraction Types

   This section defines the following three types of abstraction:

      . Native/White Topology (Section 5.2.1)
      . Black Topology (Section 5.2.2)
      . Grey Topology (Section 5.2.3)

5.2.1. Native/White Topology

   This is a case where the PNC provides the actual network topology
   to the MDSC without any hiding or filtering of information.  I.e.,
   no abstraction is performed.  In this case, the MDSC has the full
   knowledge of the underlying network topology and can operate on it
   directly.

5.2.2. Black Topology

   A black topology replaces a full network with a minimal
   representation of the edge-to-edge topology without disclosing any
   node internal connectivity information.  The entire domain network
   may be abstracted as a single abstract node with the network's
   access/egress links appearing as the ports to the abstract node and
   the implication that any port can be 'cross-connected' to any
   other.  Figure 6 depicts a native topology with the corresponding
   black topology with one virtual node and inter-domain links.  In
   this case, the MDSC has to make a provisioning request to the PNCs
   to establish the port-to-port connection.  If there is a large
   number of inter-connected domains, this abstraction method may
   impose a heavy coordination load at the MDSC level in order to find
   an optimal end-to-end path since the abstraction hides so much
   information that it is not possible to determine whether an end-to-
   end path is feasible without asking each PNC to set up each path
   fragment.  For this reason, the MPI might need to be enhanced to
   allow the PNCs to be queried for the practicality and
   characteristics of paths across the abstract node.
                    .....................................
                    : PNC Domain                        :
                    :  +--+     +--+     +--+     +--+  :
                 ------+  +-----+  +-----+  +-----+  +------
                    :  ++-+     ++-+     +-++     +-++  :
                    :   |        |         |        |   :
                    :   |        |         |        |   :
                    :   |        |         |        |   :
                    :   |        |         |        |   :
                    :  ++-+     ++-+     +-++     +-++  :
                 ------+  +-----+  +-----+  +-----+  +------
                    :  +--+     +--+     +--+     +--+  :
                    :....................................

                                +----------+
                             ---+          +---
                                | Abstract |
                                |   Node   |
                             ---+          +---
                                +----------+

 Figure 6: Native Topology with Corresponding Black Topology Expressed
                           as an Abstract Node
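   The collapse of a domain into a single abstract node can be
   sketched as follows.  This is an illustrative sketch only, not a
   normative algorithm or encoding: the function and key names are
   invented for this example.

```python
# Illustrative sketch of black-topology abstraction: the whole domain
# collapses to one abstract node whose ports are the domain's
# access/egress links.  All internal nodes and links are hidden, with
# the implication that any port can be cross-connected to any other.
def black_abstraction(nodes, links, access_links):
    """Return the abstract node with only its externally visible ports.

    The native nodes and links are accepted but deliberately dropped:
    internal connectivity is not disclosed to the MDSC.
    """
    return {
        "abstract-node": "domain",
        "ports": sorted(access_links),
    }

native_nodes = ["N1", "N2", "N3", "N4"]
native_links = [("N1", "N2"), ("N2", "N3"), ("N3", "N4")]
access = ["L-in-1", "L-in-2", "L-out-1", "L-out-2"]
print(black_abstraction(native_nodes, native_links, access))
```

   Because the ports are all the MDSC sees, feasibility of a path
   through the domain can only be confirmed by asking the PNC, which
   is the coordination cost described above.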

5.2.3. Grey Topology

   A grey topology represents a compromise between black and white
   topologies from a granularity point of view.  In this case the PNC
   exposes an abstract topology that comprises nodes and links.  The
   nodes and links may be physical or abstract while the abstract
   topology represents the potential of connectivity across the PNC
   domain.

   Two modes of grey topology are identified:
      . In a type A grey topology border nodes are connected by a
         full mesh of TE links (see Figure 7).
      . In a type B grey topology border nodes are connected over a
         more detailed network comprising internal abstract nodes and
         abstracted links.  This mode of abstraction supplies the MDSC
         with more information about the internals of the PNC domain
         and allows it to make more informed choices about how to
         route connectivity over the underlying network.

                  .....................................
                  : PNC Domain                        :
                  :  +--+     +--+     +--+     +--+  :
               ------+  +-----+  +-----+  +-----+  +------
                  :  ++-+     ++-+     +-++     +-++  :
                  :   |        |         |        |   :
                  :   |        |         |        |   :
                  :   |        |         |        |   :
                  :   |        |         |        |   :
                  :  ++-+     ++-+     +-++     +-++  :
               ------+  +-----+  +-----+  +-----+  +------
                  :  +--+     +--+     +--+     +--+  :
                  :....................................

                           ....................
                           : Abstract Network :
                           :                  :
                           :   +--+    +--+   :
                        -------+  +----+  +-------
                           :   ++-+    +-++   :
                           :    |  \  /  |    :
                           :    |   \/   |    :
                           :    |   /\   |    :
                           :    |  /  \  |    :
                           :   ++-+    +-++   :
                        -------+  +----+  +-------
                           :   +--+    +--+   :
                           :..................:

         Figure 7: Native Topology with Corresponding Grey Topology
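   How a PNC might derive a type A grey topology can be sketched as
   follows.  This is a simplified, hypothetical example, not a
   prescribed PNC algorithm: it summarizes each border-node pair with
   a minimum-hop-count TE metric, whereas a real PNC would apply its
   own policy and path computation.

```python
# Illustrative sketch of a type A grey abstraction: the PNC computes,
# for each pair of border nodes, an abstract TE link whose metric
# summarizes the best native path (here: minimum hop count via BFS).
from collections import deque
from itertools import combinations

def min_hops(adj, src, dst):
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no native path: no abstract TE link advertised

def grey_type_a(adj, border_nodes):
    """Full mesh of abstract TE links between the border nodes."""
    mesh = {}
    for a, b in combinations(sorted(border_nodes), 2):
        hops = min_hops(adj, a, b)
        if hops is not None:
            mesh[(a, b)] = {"te-metric": hops}
    return mesh

# Native domain: two border nodes joined through two internal nodes.
adj = {"B1": ["I1"], "I1": ["B1", "I2"], "I2": ["I1", "B2"], "B2": ["I2"]}
print(grey_type_a(adj, ["B1", "B2"]))  # {('B1', 'B2'): {'te-metric': 3}}
```

   A type B abstraction would instead retain some of the internal
   abstract nodes and links in the advertised topology rather than
   summarizing each pair into a single TE link.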

5.3. Methods of Building Grey Topologies

   This section discusses two different methods of building a grey
   topology:

      . Automatic generation of abstract topology by configuration
         (Section 5.3.1)
      . On-demand generation of supplementary topology via path
         computation request/reply (Section 5.3.2)

5.3.1. Automatic Generation of Abstract Topology by Configuration

   Automatic generation is based on the abstraction/summarization of
   the whole domain by the PNC and its advertisement on the MPI.  The
   level of abstraction can be decided based on PNC configuration
   parameters (e.g. provide the potential connectivity between any PE
   and any ASBR in an MPLS-TE network).

   Note that the configuration parameters for this abstract topology
   can include available bandwidth, latency, or any combination of
   defined parameters.  How to generate such information is beyond the
   scope of this document.

   This abstract topology may need to be periodically or incrementally
   updated when there is a change in the underlying network or the use
   of the network resources that make connectivity more or less
   available.
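   The configuration-driven behavior above can be sketched as a simple
   filter over candidate connectivity.  This is a hypothetical
   illustration: the parameter names and thresholds are invented for
   this example, and how a real PNC derives these values is out of
   scope of this document.

```python
# Illustrative sketch of configuration-driven abstract topology
# generation: the PNC advertises only the potential PE-to-ASBR
# connectivity that satisfies the configured thresholds, and
# re-advertises when the underlying characteristics change.
def advertise(potential_paths, config):
    return [
        p for p in potential_paths
        if p["bw-mbps"] >= config["min-bw-mbps"]
        and p["latency-ms"] <= config["max-latency-ms"]
    ]

config = {"min-bw-mbps": 100, "max-latency-ms": 50}
paths = [
    {"from": "PE1", "to": "ASBR1", "bw-mbps": 400, "latency-ms": 10},
    {"from": "PE1", "to": "ASBR2", "bw-mbps": 400, "latency-ms": 80},
]
print(len(advertise(paths, config)))  # 1: high-latency entry dropped

# A failure, a recovery, or the setup of new VNs changes the path
# characteristics, so the abstract topology is updated incrementally
# and re-advertised.
paths[1]["latency-ms"] = 30
print(len(advertise(paths, config)))  # 2
```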

5.3.2. On-demand Generation of Supplementary Topology via Path Compute
   Request/Reply

   While abstract topology is generated and updated automatically by
   configuration as explained in Section 5.3.1, additional
   supplementary topology may be obtained by the MDSC via a path
   compute request/reply mechanism.

   The abstract topology advertisements from PNCs give the MDSC the
   border node/link information for each domain.  Under this scenario,
   when the MDSC needs to create a new VN, the MDSC can issue path
   computation requests to PNCs with constraints matching the VN
   request as described in [ACTN-YANG].  An example is provided in
   Figure 8, where the MDSC is creating a P2P VN between AP1 and AP2.
   The MDSC could use two different inter-domain links to get from
   Domain X to Domain Y, but in order to choose the best end-to-end
   path it needs to know what domain X and Y can offer in terms of
   connectivity and constraints between the PE nodes and the border
   nodes.

                        -------                 --------
                       (       )               (        )
                      -      BrdrX.1------- BrdrY.1      -
                     (+---+       )          (       +---+)
               -+---( |PE1| Dom.X  )        (  Dom.Y |PE2| )---+-
                |    (+---+       )          (       +---+)    |
               AP1    -      BrdrX.2------- BrdrY.2      -    AP2
                       (       )               (        )
                        -------                 --------

                     Figure 8: A Multi-Domain Example
   The MDSC issues a path computation request to PNC.X asking for
   potential connectivity between PE1 and border node BrdrX.1 and
   between PE1 and BrdrX.2, with related objective functions and TE
   metric constraints.  A similar request for connectivity from the
   border nodes in Domain Y to PE2 will be issued to PNC.Y.  The MDSC
   merges the results to compute the optimal end-to-end path,
   including the inter-domain links.  The MDSC can use the result of
   this computation to request the PNCs to provision the underlying
   networks, and the MDSC can then use the end-to-end path as a
   virtual link in the VN it delivers to the customer.
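
   As a rough illustration of this merge-and-select step, the
   following Python sketch (all node names and metric values are
   invented for the example, not defined by this framework) combines
   the per-domain replies with the inter-domain links and picks the
   lowest-metric end-to-end path:

```python
import heapq

# Hypothetical sketch (node names and metric values are invented for
# illustration): the MDSC merges the per-domain path-compute replies
# with its own knowledge of the inter-domain links, then runs a
# shortest-path search over the merged abstract graph.

# Reply from PNC.X: abstract TE tunnels from PE1 to Domain X border nodes.
pncx_reply = {("PE1", "BrdrX.1"): 10, ("PE1", "BrdrX.2"): 12}   # latency, ms
# Reply from PNC.Y: abstract TE tunnels from Domain Y border nodes to PE2.
pncy_reply = {("BrdrY.1", "PE2"): 15, ("BrdrY.2", "PE2"): 8}
# Inter-domain links, known to the MDSC itself.
inter_domain = {("BrdrX.1", "BrdrY.1"): 2, ("BrdrX.2", "BrdrY.2"): 2}

def merge_and_select(*edge_sets, src="PE1", dst="PE2"):
    """Merge the edge sets into one graph; return (total_metric, path)."""
    graph = {}
    for edges in edge_sets:
        for (a, b), metric in edges.items():
            graph.setdefault(a, []).append((b, metric))
    queue = [(0, src, [src])]                 # Dijkstra over merged graph
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, metric in graph.get(node, []):
            heapq.heappush(queue, (cost + metric, nxt, path + [nxt]))
    return None

print(merge_and_select(pncx_reply, pncy_reply, inter_domain))
# (22, ['PE1', 'BrdrX.2', 'BrdrY.2', 'PE2'])
```

   Any shortest-path algorithm would do here; the point is that the
   MDSC operates only on the merged abstract graph, never on the
   internal topology of the domains.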

5.4. Hierarchical Topology Abstraction Example

   This section illustrates how topology abstraction operates in
   different levels of granularity over a hierarchy of MDSCs, as shown
   in Figure 9.

                            +-----+
                            | CNC |  CNC wants to create a VN
                            +-----+  between CE A and CE B
                               |
                               |
                   +-----------------------+
                   |         MDSC-H        |
                   +-----------------------+
                         /           \
                        /             \
                +---------+         +---------+
                | MDSC-L1 |         | MDSC-L2 |
                +---------+         +---------+
                  /    \               /    \
                 /      \             /      \
              +----+  +----+       +----+  +----+
    CE A o----|PNC1|  |PNC2|       |PNC3|  |PNC4|----o CE B
              +----+  +----+       +----+  +----+

                  Virtual Network Delivered to CNC

                    CE A o==============o CE B

                  Topology operated on by MDSC-H

                 CE A o----o==o==o===o----o CE B

    Topology operated on by MDSC-L1     Topology operated on by MDSC-L2
                  _        _                       _        _
                 ( )      ( )                     ( )      ( )
                (   )    (   )                   (   )    (   )
       CE A o--(o---o)==(o---o)==Dom.3   Dom.2==(o---o)==(o---o)--o CE B
                (   )    (   )                   (   )    (   )
                 (_)      (_)                     (_)      (_)

                             Actual Topology
                ___          ___          ___          ___
               (   )        (   )        (   )        (   )
              (  o  )      (  o  )      ( o--o)      (  o  )
             (  / \  )    (   |\  )    (  |  | )    (  / \  )
   CE A o---(o-o---o-o)==(o-o-o-o-o)==(o--o--o-o)==(o-o-o-o-o)---o CE B
             (  \ /  )    ( | |/  )    (  |  | )    (  \ /  )
              (  o  )      (o-o  )      ( o--o)      (  o  )
               (___)        (___)        (___)        (___)

              Domain 1     Domain 2     Domain 3     Domain 4

   Where
        o   is a node,
        --- is a link, and
        === is a border link

        Figure 9: Illustration of Hierarchical Topology Abstraction

   In the example depicted in Figure 9, there are four domains under
   control of the respective PNCs PNC1, PNC2, PNC3, and PNC4.  MDSC-L1
   controls PNC1 and PNC2 while MDSC-L2 controls PNC3 and PNC4.  Each
   of the PNCs provides a grey topology abstraction that presents only
   border nodes and links across and outside the domain.  The abstract
   topology that MDSC-L1 operates on is basically a combination of the
   topologies provided by PNC1 and PNC2.  Likewise, the abstract
   topology that MDSC-L2 operates on is shown in Figure 9.  Both
   MDSC-L1 and MDSC-L2 provide a black topology abstraction to MDSC-H,
   in which each PNC domain is presented as a single virtual node.
   MDSC-H combines these two topologies to create the abstraction
   topology on which it operates.  MDSC-H thus sees the whole four-
   domain network as four virtual nodes connected via virtual links,
   and may operate on a higher level of abstraction (i.e., a less
   granular level) than the lower-level MDSCs.
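
   A minimal sketch of the black-topology abstraction step follows
   (domain and node names are invented for illustration, not taken
   from this draft): each domain collapses into one virtual node, and
   only the border links survive in the topology handed up to MDSC-H.

```python
# Hypothetical sketch of black-topology abstraction: each PNC domain is
# collapsed into a single virtual node, and only inter-domain (border)
# links are kept in the abstract topology exposed upward.

def black_abstraction(domains, links):
    """domains: {domain_name: set_of_internal_nodes}
       links:   [(node_a, node_b)], intra- and inter-domain links mixed.
       Returns (virtual_nodes, virtual_links): one virtual node per
       domain plus only links whose endpoints are in different domains."""
    owner = {n: d for d, nodes in domains.items() for n in nodes}
    virtual_links = set()
    for a, b in links:
        da, db = owner[a], owner[b]
        if da != db:                        # keep border links only
            virtual_links.add(tuple(sorted((da, db))))
    return sorted(domains), sorted(virtual_links)

domains = {"Dom1": {"n11", "n12"}, "Dom2": {"n21", "n22"},
           "Dom3": {"n31"}, "Dom4": {"n41", "n42"}}
links = [("n11", "n12"), ("n12", "n21"),    # intra-Dom1 + Dom1-Dom2 border
         ("n21", "n22"), ("n22", "n31"),    # intra-Dom2 + Dom2-Dom3 border
         ("n31", "n41"), ("n41", "n42")]    # Dom3-Dom4 border + intra-Dom4
print(black_abstraction(domains, links))
# (['Dom1', 'Dom2', 'Dom3', 'Dom4'],
#  [('Dom1', 'Dom2'), ('Dom2', 'Dom3'), ('Dom3', 'Dom4')])
```

   The result mirrors the MDSC-H view in Figure 9: four virtual nodes
   in a chain connected by virtual links.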

6. Access Points and Virtual Network Access Points

   In order to map identification of connections between the
   customer's sites and the TE networks, and to scope the connectivity
   requested in the VNS, the CNC and the MDSC refer to the connections
   using the Access Point (AP) construct as shown in Figure 10.  See
   the definition of AP in Section 1.1.

                                -------------
                               (             )
                              -               -
               +---+ X       (                 )      Z +---+
               |CE1|---+----(                   )---+---|CE2|
               +---+   |     (                 )    |   +---+
                      AP1     -               -    AP2
                               (             )
                                -------------

                      Figure 10: Customer View of APs

   Let's take as an example the scenario shown in Figure 10.  CE1 is
   connected to the network via a 10 Gb link and CE2 via a 40 Gb link.
   Before the creation of any VN between AP1 and AP2, the customer
   view can be summarized as shown in Table 1.

                         +----------+------------------------+
                         |End Point | Access Link Bandwidth  |
                   +-----+----------+----------+-------------+
                   |AP id| CE,port  | MaxResBw | AvailableBw |
                   +-----+----------+----------+-------------+
                   | AP1 |CE1,portX |   10Gb   |    10Gb     |
                   +-----+----------+----------+-------------+
                   | AP2 |CE2,portZ |   40Gb   |    40Gb     |
                   +-----+----------+----------+-------------+

                       Table 1: AP - Customer View

   On the other hand, what the provider sees is shown in Figure 11.

                          -------             -------
                         (       )           (       )
                        -         -         -         -
                    W  (+---+       )      (       +---+)  Y
                 -+---( |PE1| Dom.X  )----(  Dom.Y |PE2| )---+-
                  |    (+---+       )      (       +---+)    |
                  AP1   -         -         -         -     AP2
                         (       )           (       )
                          -------             -------

                     Figure 11: Provider View of the AP

   This results in the summarization shown in Table 2.

                         +----------+------------------------+
                         |End Point | Access Link Bandwidth  |
                   +-----+----------+----------+-------------+
                   |AP id| PE,port  | MaxResBw | AvailableBw |
                   +-----+----------+----------+-------------+
                   | AP1 |PE1,portW |   10Gb   |    10Gb     |
                   +-----+----------+----------+-------------+
                   | AP2 |PE2,portY |   40Gb   |    40Gb     |
                   +-----+----------+----------+-------------+

                       Table 2: AP - Provider View

   A Virtual Network Access Point (VNAP) needs to be defined as the
   binding between an AP and a VN.  It is used to allow different VNs
   to start from the same AP.  It also allows for traffic engineering
   on the access and/or inter-domain links (e.g., keeping track of
   bandwidth allocation).  A different VNAP is created on an AP for
   each VN.

   In the simple scenario depicted above, suppose we want to create
   two virtual networks: the first with VN identifier 9 between AP1
   and AP2 with bandwidth of 1 Gbps, and the second with VN identifier
   5, again between AP1 and AP2, with bandwidth of 2 Gbps.

   The provider view would evolve as shown in Table 3.

                           +----------+------------------------+
                           |End Point |  Access Link/VNAP Bw   |
                 +---------+----------+----------+-------------+
                 |AP/VNAPid| PE,port  | MaxResBw | AvailableBw |
                 +---------+----------+----------+-------------+
                 |AP1      |PE1,portW |  10Gbps  |    7Gbps    |
                 | -VNAP1.9|          |   1Gbps  |     N.A.    |
                 | -VNAP1.5|          |   2Gbps  |     N.A     |
                 +---------+----------+----------+-------------+
                 |AP2      |PE2,portY |  40Gbps  |    37Gbps   |
                 | -VNAP2.9|          |   1Gbps  |     N.A.    |
                 | -VNAP2.5|          |   2Gbps  |     N.A     |
                 +---------+----------+----------+-------------+
       Table 3: AP and VNAP - Provider View after VNS Creation
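
   The bookkeeping behind Table 3 can be sketched as follows (the
   class and method names are hypothetical, purely for illustration):
   each VNAP reserves bandwidth on its AP, and the AP's available
   bandwidth is the maximum reservable bandwidth minus the sum of its
   VNAP reservations.

```python
# Hypothetical sketch of the bandwidth accounting behind Table 3
# (structures are illustrative): creating a VNAP on an AP reserves
# bandwidth for that VN, and AvailableBw = MaxResBw - sum(VNAP Bw).

class AccessPoint:
    def __init__(self, ap_id, pe_port, max_res_bw_gbps):
        self.ap_id = ap_id
        self.pe_port = pe_port
        self.max_res_bw = max_res_bw_gbps
        self.vnaps = {}                      # vn_id -> reserved Gbps

    def create_vnap(self, vn_id, bw_gbps):
        if bw_gbps > self.available_bw():
            raise ValueError("insufficient bandwidth on " + self.ap_id)
        self.vnaps[vn_id] = bw_gbps          # e.g., VNAP1.9, VNAP1.5

    def available_bw(self):
        return self.max_res_bw - sum(self.vnaps.values())

ap1 = AccessPoint("AP1", "PE1,portW", 10)
ap2 = AccessPoint("AP2", "PE2,portY", 40)
for ap in (ap1, ap2):
    ap.create_vnap(9, 1)                     # VN 9: 1 Gbps
    ap.create_vnap(5, 2)                     # VN 5: 2 Gbps
print(ap1.available_bw(), ap2.available_bw())  # matches Table 3: 7 37
```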

6.1. Dual-Homing Scenario

   Often there is a dual-homing relationship between a CE and a pair
   of PEs.  This case needs to be supported by the definition of VNs,
   APs, and VNAPs.  Suppose that CE1 is connected to two different PEs
   in the operator domain via AP1 and AP2 and that the customer needs
   5 Gbps of bandwidth between CE1 and CE2.  This is shown in
   Figure 12.

                                      ____________
                              AP1    (            )    AP3
                             -------(PE1)      (PE3)-------
                          W /      (                )      \ X
                      +---+/      (                  )      \+---+
                      |CE1|      (                    )      |CE2|
                      +---+\      (                  )      /+---+
                          Y \      (                )      / Z
                             -------(PE2)      (PE4)-------
                              AP2    (____________)

                      Figure 12: Dual-Homing Scenario

   In this case, the customer will request a VN between AP1, AP2, and
   AP3, specifying a dual-homing relationship between AP1 and AP2.  As
   a consequence, no traffic will flow between AP1 and AP2.  The dual-
   homing relationship would then be mapped against the VNAPs (since
   other independent VNs might have AP1 and AP2 as end points).

   The customer view would then be as shown in Table 4.

                      +----------+------------------------+
                      |End Point |  Access Link/VNAP Bw   |
            +---------+----------+----------+-------------+-----------+
            |AP/VNAPid| CE,port  | MaxResBw | AvailableBw |Dual Homing|
            +---------+----------+----------+-------------+-----------+
            |AP1      |CE1,portW |  10Gbps  |    5Gbps    |           |
            | -VNAP1.9|          |   5Gbps  |     N.A.    | VNAP2.9   |
            +---------+----------+----------+-------------+-----------+
            |AP2      |CE1,portY |  40Gbps  |    35Gbps   |           |
            | -VNAP2.9|          |   5Gbps  |     N.A.    | VNAP1.9   |
            +---------+----------+----------+-------------+-----------+
            |AP3      |CE2,portX |  40Gbps  |   35Gbps    |           |
            | -VNAP3.9|          |   5Gbps  |     N.A.    |   NONE    |
            +---------+----------+----------+-------------+-----------+

        Table 4: Dual-Homing - Customer View after VN Creation
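
   A small sketch of how the dual-homing relationship of Table 4 might
   be recorded follows (the data structures are illustrative, not
   normative): the relationship is held per VNAP rather than per AP,
   so other VNs using AP1 and AP2 are unaffected, and no traffic is
   carried between the two VNAPs of a dual-homed pair.

```python
# Hypothetical sketch of the dual-homing mapping in Table 4
# (structures are illustrative): each VNAP records its dual-homing
# partner, and traffic is never carried between a dual-homed pair.

vnaps = {
    "VNAP1.9": {"ap": "AP1", "bw": 5, "dual_homing": "VNAP2.9"},
    "VNAP2.9": {"ap": "AP2", "bw": 5, "dual_homing": "VNAP1.9"},
    "VNAP3.9": {"ap": "AP3", "bw": 5, "dual_homing": None},
}

def traffic_allowed(vnap_a, vnap_b):
    """No traffic flows between two VNAPs paired for dual homing."""
    return vnaps[vnap_a]["dual_homing"] != vnap_b

print(traffic_allowed("VNAP1.9", "VNAP3.9"),   # normal VN traffic
      traffic_allowed("VNAP1.9", "VNAP2.9"))   # dual-homed pair
# True False
```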

7. Advanced ACTN Application: Multi-Destination Service

   A further advanced application of ACTN is in the case of Data Center
   selection, where the customer requires the Data Center selection to
   be based on the network status; this is referred to as Multi-
   Destination in [ACTN-REQ].  In terms of ACTN, a CNC could request a
   connectivity service (virtual network) between a set of source APs
   and destination APs and leave it up to the network (MDSC) to decide
   which source and destination access points should be used to set up
   connectivity service (virtual network).  The candidate list of
   source and destination APs is decided by a CNC (or an entity outside
   of ACTN) based on certain factors which are outside the scope of
   ACTN.

   Based on the AP selection as determined and returned by the network
   (MDSC), the CNC (or an entity outside of ACTN) should further take
   care of any subsequent actions such as orchestration or service
   setup requirements.  These further actions are outside the scope of
   ACTN.

   Consider a case as shown in Figure 13, where three data centers are
   available, but the customer requires the data center selection to be
   based on the network status and the connectivity service setup
   between the AP1 (CE1) and one of the destination APs (AP2 (DC-A),
   AP3 (DC-B), and AP4 (DC-C)).  The MDSC (in coordination with PNCs)
   would select the best destination AP based on the constraints,
   optimization criteria, policies, etc., and setup the connectivity
   service (virtual network).

                          -------            -------
                         (       )          (       )
                        -         -        -         -
          +---+        (           )      (           )        +----+
          |CE1|---+---(  Domain X   )----(  Domain Y   )---+---|DC-A|
          +---+   |    (           )      (           )    |   +----+
                   AP1   -         -        -         -    AP2
                         (       )          (       )
                          ---+---            ---+---
                              |                  |
                          AP3-+              AP4-+
                              |                  |
                          +----+              +----+
                          |DC-B|              |DC-C|
                          +----+              +----+

          Figure 13: End-Point Selection Based on Network Status
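
   The selection step can be sketched as follows (the function names
   and metric values are illustrative assumptions, not defined by
   ACTN): the MDSC evaluates each candidate destination AP against the
   current network status and picks the best feasible one.

```python
# Hypothetical sketch of multi-destination selection (names and metric
# values are invented): the MDSC scores every candidate destination AP
# and sets up the VN toward the best feasible candidate.

def select_destination(src_ap, candidate_aps, path_metric):
    """path_metric(src, dst) -> current cost of the best constrained
    path (e.g., latency), or None if no path meets the constraints."""
    feasible = {ap: path_metric(src_ap, ap) for ap in candidate_aps}
    feasible = {ap: m for ap, m in feasible.items() if m is not None}
    return min(feasible, key=feasible.get) if feasible else None

# Illustrative network status: AP2 reaches DC-A, AP3 DC-B, AP4 DC-C.
status = {("AP1", "AP2"): 12, ("AP1", "AP3"): 7, ("AP1", "AP4"): None}
best = select_destination("AP1", ["AP2", "AP3", "AP4"],
                          lambda s, d: status[(s, d)])
print(best)  # AP3, i.e., DC-B is selected under these example metrics
```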

7.1. Pre-Planned End-Point Migration

   Furthermore, in the case of Data Center selection, the customer
   could request a backup DC to be selected, such that in case of
   failure, another DC site could provide hot stand-by protection.  As
   shown in Figure 14, DC-C is selected as a backup for DC-A.  Thus, the
   VN should be setup by the MDSC to include primary connectivity
   between AP1 (CE1) and AP2 (DC-A) as well as protection connectivity
   between AP1 (CE1) and AP4 (DC-C).

                    -------            -------
                   (       )          (       )
                   -         -        -         -
   +---+         (           )      (           )        +----+
   |CE1|---+----(  Domain X   )----(  Domain Y   )---+---|DC-A|
   +---+   |     (           )      (           )    |   +----+
           AP1    -         -        -         -    AP2    |
                   (       )          (       )            |
                    ---+---            ---+---             |
                        |                  |                |
                   AP3-+              AP4-+         HOT STANDBY
                       |                  |                |
                    +----+             +----+              |
                    |DC-D|             |DC-C|<-------------
                    +----+             +----+

                Figure 14: Pre-Planned End-Point Migration

7.2. On the Fly End-Point Migration

   Compared to pre-planned end-point migration, on-the-fly end-point
   selection is dynamic in that the migration is not pre-planned but
   decided based on network conditions.  Under this scenario, the MDSC
   would monitor the network (based on the VN SLA) and notify the CNC
   in cases where some other destination AP would be a better choice
   based on the network parameters.  The CNC should instruct the MDSC
   when it is suitable to update the VN with the new AP if it is
   required.
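
   This monitoring loop can be sketched as follows (the thresholds,
   metrics, and callback are invented for illustration; nothing here
   is mandated by ACTN): the MDSC periodically re-evaluates candidate
   APs against the VN's SLA and notifies the CNC when another
   destination would now be a better choice.

```python
# Hypothetical sketch of on-the-fly end-point migration monitoring
# (thresholds and names are illustrative): the MDSC re-checks the SLA
# and notifies the CNC when a better destination AP exists; the CNC
# decides whether and when to update the VN.

def check_migration(current_ap, candidate_aps, measure, sla_limit,
                    notify_cnc):
    """measure(ap) -> current metric (e.g., latency in ms) to that AP."""
    current = measure(current_ap)
    best_ap = min(candidate_aps, key=measure)
    # Notify only when the SLA is at risk and a better AP exists.
    if current > sla_limit and measure(best_ap) <= sla_limit:
        notify_cnc(current_ap, best_ap)
        return best_ap
    return current_ap

metrics = {"AP2": 25, "AP4": 9}              # measured latencies, ms
events = []
chosen = check_migration("AP2", ["AP2", "AP4"], metrics.get, 15,
                         lambda old, new: events.append((old, new)))
print(chosen, events)  # AP4 [('AP2', 'AP4')]
```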


8. Manageability Considerations

   The objective of ACTN is to manage traffic engineered resources and
   provide a set of mechanisms to allow customers to request virtual
   connectivity across server network resources.  ACTN supports
   multiple customers, each with its own view of and control of a
   virtual network built on the server network; the network operator
   will need to partition (or "slice") their network resources and
   manage the resources accordingly.

   The ACTN platform will, itself, need to support the request,
   response, and reservations of client and network layer connectivity.
   It will also need to provide performance monitoring and control of
   traffic engineered resources.  The management requirements may be
   categorized as follows:

     . Management of external ACTN protocols
     . Management of internal ACTN interfaces/protocols
     . Management and monitoring of ACTN components
     . Configuration of policy to be applied across the ACTN system

8.1. Policy

   Policy is expected to be an important aspect of ACTN control and
   management.  Policies are used via the components and interfaces,
   during deployment of the service, to ensure that the service is
   compliant with agreed policy factors and variations (often
   described in Service Level Agreements - SLAs); these include, but
   are not limited to: connectivity, bandwidth, geographical transit,
   technology selection, security, resilience, and economic cost.

   Depending on the deployment of the ACTN architecture, some policies
   may have local or global significance.  That is, certain policies
   may be ACTN component specific in scope, while others may have
   broader scope and interact with multiple ACTN components.  Two
   examples are provided below:

     . A local policy might limit the number, type, size, and
       scheduling of virtual network services a customer may request
       via its CNC.  This type of policy would be implemented locally
       on the MDSC.

     . A global policy might constrain certain customer types (or
       specific customer applications) to only use certain MDSCs, and
       be restricted to physical network types managed by the PNCs.  A
       global policy agent would govern these types of policies.
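
   As an illustration of a local policy of the first kind (the limits
   and identifiers are invented for the example, not specified by this
   framework), an MDSC might admit or reject virtual network service
   requests as follows:

```python
# Hypothetical sketch of a local MDSC policy (limits are illustrative):
# cap the number and size of virtual network services a given CNC may
# request, rejecting requests that exceed the locally configured policy.

LOCAL_POLICY = {"max_vns_per_cnc": 2, "max_bw_gbps": 5}

def admit_vns(cnc_id, bw_gbps, active_vns):
    """active_vns: {cnc_id: [bw of each active VNS]}"""
    if bw_gbps > LOCAL_POLICY["max_bw_gbps"]:
        return False                        # request too large
    if len(active_vns.get(cnc_id, [])) >= LOCAL_POLICY["max_vns_per_cnc"]:
        return False                        # too many services already
    return True

active = {"cnc-1": [1, 2]}
print(admit_vns("cnc-1", 1, active), admit_vns("cnc-2", 1, active))
# False True
```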

   The objective of this section is to discuss the applicability of
   ACTN policy: requirements, components, interfaces, and examples.
   This section provides an analysis and does not mandate a specific
   method for enforcing policy, or the type of policy agent that would
   be responsible for propagating policies across the ACTN components.
   It does highlight examples of how policy may be applied in the
   context of ACTN, but it is expected that further discussion in an
   applicability or solution-specific document will be required.

8.2. Policy Applied to the Customer Network Controller

   A virtual network service for a customer application will be
   requested by the CNC.  The request will reflect the application
   requirements and specific service policy needs, including
   bandwidth, traffic type, and survivability.  Furthermore,
   application access and the type of virtual network service
   requested by the CNC will need to adhere to specific access control
   policies.

8.3. Policy Applied to the Multi Domain Service Coordinator

   A key objective of the MDSC is to support the customer's expression
   of the application connectivity request via its CNC as a set of
   desired business needs; therefore, policy will play an important
   role.

   Once authorized, the virtual network service will be instantiated
   via the CNC-MDSC Interface (CMI); it will reflect the customer
   application and connectivity requirements and specific service
   transport needs.  The CNC and the MDSC components will have agreed
   upon connectivity end-points; use of these end-points should be
   defined as a policy expression when setting up or augmenting
   virtual network services.  Ensuring that permissible end-points are
   defined for CNCs
   and applications will require the MDSC to maintain a registry of
   permissible connection points for CNCs and application types.

   It may also be necessary for the MDSC to resolve policy conflicts,
   or at least flag any issues to the administrator of the MDSC itself.

   Conflicts may occur when virtual network service optimization
   criteria are in competition.  For example, to meet objectives for
   service reachability a request may require an interconnection point
   between multiple physical networks; however, this might break a
   confidentiality policy requirement for a specific type of end-to-
   end service.  Thus, an MDSC may have to balance a number of
   constraints on a service request and between different requested
   services.  It may also have to balance requested services with
   operational norms for the underlying physical networks.  This
   balancing may be resolved using configured policy and using hard and
   soft policy constraints.

8.4. Policy Applied to the Physical Network Controller

   The PNC is responsible for configuring the network elements,
   monitoring physical network resources, and exposing connectivity
   (direct or abstracted) to the MDSC.  It is therefore expected that
   policy will dictate what connectivity information will be exported
   between the PNC and the MDSC via the MDSC-PNC Interface (MPI).

   Policy interactions may arise when a PNC determines that it cannot
   compute a requested path from the MDSC, or notices that (per a
   locally configured policy) the network is low on resources (for
   example, the capacity on key links becomes exhausted).  In either
   case, the PNC will be required to notify the MDSC, which may (again
   per policy) act to construct a virtual network service across
   another physical network topology.

   Furthermore, additional forms of policy-based resource management
   will be required to provide virtual network service performance,
   security and resilience guarantees.  This will likely be implemented
   via a local policy agent and subsequent additional protocol methods.

9. Security Considerations

   The ACTN framework described in this document defines key components
   and interfaces for managed traffic engineered networks.  Securing
   the request and control of resources, confidentiality of the
   information, and availability of function should all be critical
   security considerations when deploying and operating ACTN platforms.

   Several distributed ACTN functional components are required, and
   implementations should consider encrypting data that flows between
   components, especially when they are implemented at remote nodes,
   regardless of whether these data flows are on external or internal
   network interfaces.

   The ACTN security discussion is further split into two specific
   categories described in the following sub-sections:

     . Interface between the Customer Network Controller and Multi
       Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

     . Interface between the Multi Domain Service Coordinator and
       Physical Network Controller (PNC), MDSC-PNC Interface (MPI)

   From a security and reliability perspective, ACTN may encounter
   many risks, such as malicious attacks and rogue elements attempting
   to connect to various ACTN components.  Furthermore, some ACTN
   components represent a single point of failure and threat vector,
   and must also manage policy conflicts, and eavesdropping of
   communication between different ACTN components.

   The conclusion is that all protocols used to realize the ACTN
   framework should have rich security features, and customer,
   application and network data should be stored in encrypted data
   stores.  Additional security risks may still exist.  Therefore,
   discussion and applicability of specific security functions and
   protocols will be better described in documents that are use case
   and environment specific.

9.1. CNC-MDSC Interface (CMI)

   The role of the MDSC is to detach the network and service control
   from underlying technology to help the customer express the network
   as desired by business needs.

   Data stored by the MDSC will reveal details of the virtual network
   services and which customer/application is consuming the resource.
   The data stored must therefore be considered a candidate for
   encryption.

   CNC access rights to an MDSC must be managed.  The MDSC must
   allocate resources properly, and methods to prevent policy
   conflicts, resource wastage, and denial-of-service attacks on the
   MDSC by rogue CNCs should also be considered.

   The CMI will likely be an external protocol interface.  Suitable
   authentication and authorization of each CNC connecting to the MDSC
   will be required, especially as these are likely to be implemented
   by different organizations and on separate functional nodes.  Use
   of AAA-based mechanisms would also provide role-based authorization
   methods, so that only authorized CNCs may access the different
   functions of the MDSC.
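
   A role-based authorization check of the kind described might look
   like the following sketch (the roles, operations, and identifiers
   are invented for illustration; no specific AAA protocol is
   implied):

```python
# Hypothetical sketch of AAA-style role-based authorization on the CMI
# (roles and operations are illustrative): each authenticated CNC maps
# to a role, and the MDSC permits only the operations that role grants.

ROLE_PERMISSIONS = {
    "vn-admin":  {"create-vn", "modify-vn", "delete-vn", "read-topology"},
    "vn-viewer": {"read-topology"},
}
CNC_ROLES = {"cnc-red": "vn-admin", "cnc-blue": "vn-viewer"}

def authorize(cnc_id, operation):
    """Return True only if the CNC's role grants the operation."""
    role = CNC_ROLES.get(cnc_id)
    return operation in ROLE_PERMISSIONS.get(role, set())

print(authorize("cnc-red", "create-vn"),    # True
      authorize("cnc-blue", "create-vn"))   # False
```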

9.2. MDSC-PNC Interface (MPI)

   The function of the Physical Network Controller (PNC) is to
   configure network elements, provide performance and monitoring
   functions of the physical elements, and export physical topology
   (full, partial, or abstracted) to the MDSC.

   Where the MDSC must interact with multiple (distributed) PNCs, a
   PKI-based mechanism is suggested, such as building a TLS or HTTPS
   connection between the MDSC and PNCs, to ensure trust between the
   physical network layer control components and the MDSC.

   Which MDSC the PNC exports topology information to, and the level
   of detail (full or abstracted), should also be authenticated, and
   specific access restrictions and topology views should be
   configurable and/or policy-based.

10. References

10.1. Informative References

   [RFC2702] Awduche, D., et. al., "Requirements for Traffic
             Engineering Over MPLS", RFC 2702, September 1999.

   [RFC4026] L. Andersson, T. Madsen, "Provider Provisioned Virtual
             Private Network (VPN) Terminology", RFC 4026, March 2005.

   [RFC4208] G. Swallow, J. Drake, H.Ishimatsu, Y. Rekhter,
             "Generalized Multiprotocol Label Switching (GMPLS) User-
             Network Interface (UNI): Resource ReserVation Protocol-
             Traffic Engineering (RSVP-TE) Support for the Overlay
             Model", RFC 4208, October 2005.

   [RFC4655] Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
             Computation Element (PCE)-Based Architecture", IETF RFC
             4655, August 2006.

   [RFC5654] Niven-Jenkins, B. (Ed.), D. Brungard (Ed.), and M. Betts
             (Ed.), "Requirements of an MPLS Transport Profile", RFC
             5654, September 2009.

   [RFC7149] Boucadair, M. and Jacquenet, C., "Software-Defined
             Networking: A Perspective from within a Service Provider
             Environment", RFC 7149, March 2014.

   [RFC7926] A. Farrel (Ed.), "Problem Statement and Architecture for
             Information Exchange between Interconnected Traffic-
             Engineered Networks", RFC 7926, July 2016.

   [RFC3945] Mannie, E., Ed., "Generalized Multi-Protocol Label
             Switching (GMPLS) Architecture", RFC 3945, October 2004.

   [ONF-ARCH] Open Networking Foundation, "SDN architecture", Issue
             1.1, ONF TR-521, June 2016.

   [Centralized] Farrel, A., et al., "An Architecture for Use of PCE
             and PCEP in a Network with Central Control", draft-ietf-
             teas-pce-central-control, work in progress.

   [Service-YANG] Lee, Y., Dhody, D., and Ceccarelli, D., "Traffic
             Engineering and Service Mapping Yang Model", draft-lee-
             teas-te-service-mapping-yang, work in progress.

   [ACTN-YANG] Lee, Y., et al., "A Yang Data Model for ACTN VN
             Operation", draft-lee-teas-actn-vn-yang, work in
             progress.

   [ACTN-Abstraction] Y.

   [ACTN-REQ] Lee, Y., et al., "Abstraction "Requirements for Abstraction and
             Control of TE
             Networks (ACTN) Abstraction Methods", draft-lee-teas-actn-
             abstraction, Networks", draft-ietf-teas-actn-
             requirements, work in progress.

11. Contributors

   Adrian Farrel
   Old Dog Consulting
   Email: adrian@olddog.co.uk

   Italo Busi
   Huawei
   Email: Italo.Busi@huawei.com

   Khuzema Pithewan
   Infinera
   Email: kpithewan@infinera.com

   Michael Scharf
   Nokia
   Email: michael.scharf@nokia.com


   Luyuan Fang
   eBay
   Email: luyuanf@gmail.com

   Diego Lopez
   Telefonica I+D
   Don Ramon de la Cruz, 82
   28006 Madrid, Spain
   Email: diego@tid.es

   Sergio Belotti
   Alcatel Lucent
   Via Trento, 30
   Vimercate, Italy
   Email: sergio.belotti@nokia.com

   Daniel King
   Lancaster University
   Email: d.king@lancaster.ac.uk

   Dhruv Dhody
   Huawei Technologies
   Divyashree Techno Park, Whitefield
   Bangalore, Karnataka  560066
   India
   Email: dhruv.ietf@gmail.com

   Gert Grammel
   Juniper Networks
   Email: ggrammel@juniper.net

Authors' Addresses

   Daniele Ceccarelli (Editor)
   Ericsson
   Torshamnsgatan,48
   Stockholm, Sweden
   Email: daniele.ceccarelli@ericsson.com

   Young Lee (Editor)
   Huawei Technologies
   5340 Legacy Drive
   Plano, TX 75023, USA
   Phone: (469)277-5838
   Email: leeyoung@huawei.com

APPENDIX A - Example of MDSC and PNC Functions Integrated in a
Service/Network Orchestrator

   This section provides an example of a possible deployment scenario
   in which the Service/Network Orchestrator hosts several functions:
   in the example below, the PNC functions for Domain 2 and the MDSC
   functions that coordinate PNC1 (hosted in a separate domain
   controller) and PNC2 (co-hosted in the network orchestrator).

   Customer
               +-------------------------------+
               |    +-----+                    |
               |    | CNC |                    |
               |    +-----+                    |
               +-------|-----------------------+
                       |
   Service/Network     | CMI
   Orchestrator        |
               +-------|------------------------+
               |    +------+   MPI   +------+   |
                |    | MDSC |---------| PNC2 |   |
               |    +------+         +------+   |
               +-------|------------------|-----+
                       | MPI              |
   Domain Controller   |                  |
               +-------|-----+            |
               |   +-----+   |            | SBI
               |   |PNC1 |   |            |
               |   +-----+   |            |
               +-------|-----+            |
                       v SBI              v
                     -------            -------
                    (       )          (       )
                  -         -        -         -
                 (           )      (           )
                (  Domain 1   )----(  Domain 2   )
                 (           )      (           )
                  -         -        -         -
                   (       )          (       )
                    -------            -------
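
   The split of responsibilities shown above can be illustrated with a
   short, purely hypothetical sketch (it is not part of the ACTN
   specification, and all class, method, and domain names are
   assumptions made for illustration): the MDSC invokes the co-hosted
   PNC2 directly, while PNC1 is reached through a proxy standing in
   for MPI messaging toward the separate domain controller.

   ```python
   # Illustrative sketch only: names are hypothetical, not defined by
   # ACTN. It shows an MDSC coordinating a co-hosted PNC (PNC2) and a
   # PNC in a separate domain controller (PNC1) reached over the MPI.

   class PNC:
       """Provisions tunnel segments within one domain (via its SBI)."""
       def __init__(self, domain):
           self.domain = domain

       def setup_segment(self, src, dst):
           # A real PNC would trigger SBI provisioning here.
           return f"segment({self.domain}:{src}->{dst})"


   class RemotePNCProxy:
       """Stand-in for an MPI session toward a remote PNC."""
       def __init__(self, pnc):
           self._pnc = pnc  # in practice: an MPI session, not an object

       def setup_segment(self, src, dst):
           # A real proxy would perform an MPI request/response here.
           return self._pnc.setup_segment(src, dst)


   class MDSC:
       """Coordinates per-domain PNCs to build an end-to-end path."""
       def __init__(self, pncs):
           self.pncs = pncs  # per-domain controllers, keyed by domain

       def setup_e2e(self, hops):
           # hops: (domain, ingress, egress) per traversed domain
           return [self.pncs[d].setup_segment(i, e) for d, i, e in hops]


   # The orchestrator co-hosts MDSC and PNC2; PNC1 is behind the MPI.
   pnc1 = RemotePNCProxy(PNC("Domain1"))
   pnc2 = PNC("Domain2")
   mdsc = MDSC({"Domain1": pnc1, "Domain2": pnc2})
   path = mdsc.setup_e2e([("Domain1", "A", "B1"), ("Domain2", "B2", "Z")])
   ```

   The point of the sketch is only the placement of functions: whether
   a PNC is co-hosted or remote is invisible to the MDSC's
   coordination logic.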

APPENDIX B - Example of IP + Optical Network with L3VPN Service

   This section provides a more complex deployment scenario in which
   the ACTN hierarchy is deployed to control a multi-layer network via
   an IP/MPLS PNC and an Optical PNC. The scenario is further enhanced
   by the introduction of an upper-layer service configuration (e.g.,
   L3VPN). The provisioning of the L3VPN service is outside the scope
   of ACTN, but it is worth showing how the two parts are integrated
   for end-to-end service fulfilment. An example of the service
   configuration function in the Service/Network Orchestrator is
   discussed in [I-D.dhjain-bess-bgp-l3vpn-yang].

   Customer
               +-------------------------------+
               |    +-----+                    |
               |    | CNC |                    |
               |    +-----+                    |
               +-------|--------+--------------+
                       |        | Customer Service Model
                       | CMI    | (non-ACTN interface)
   Service/Network     |        |
   Orchestrator        |        |
               +-------|--------|--------------------------+
               |       |      +-------------------------+  |
               |       |      |Service Mapping Function |  |
               |       |      +-------------------------+  |
               |       |       |         |                 |
               |    +------+   |   +---------------+       |
               |    | MDSC |---    |Service Config.|       |
               |    +------+       +---------------+       |
               +------|------------------|-----------------+
                  MPI |     +------------+ (non-ACTN Interf.)
                      |    /
              +-----------/------------+
   IP/MPLS    |          /             |
   Domain     |         /              |   Optical Domain
   Controller |        /               |       Controller
     +--------|-------/----+       +---|--------------+
     |   +-----+  +-----+  |       | +-----+          |
     |   |PNC1 |  |Serv.|  |       | |PNC2 |          |
     |   +-----+  +-----+  |       | +-----+          |
     +---------------------+       +------------------+
          SBI |                               |
              v                               |
       +---------------------------------+    | SBI
      /         IP/MPLS Network           \   |
     +-------------------------------------+  |
                                              v
        +--------------------------------------+
       /           Optical Network              \
      +------------------------------------------+
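
   The Service Mapping Function shown above sits between the non-ACTN
   customer service model and the ACTN interfaces. As a rough,
   hypothetical illustration (all field and function names are
   assumptions, not defined interfaces), it can be thought of as
   decomposing one L3VPN order into a VN request handed to the MDSC
   and a service-configuration request handed to the IP/MPLS domain's
   service module:

   ```python
   # Hypothetical sketch of the Service Mapping Function: it splits an
   # L3VPN order into (a) a VN request for the MDSC (ACTN scope) and
   # (b) a service-configuration request for the IP/MPLS controller
   # (non-ACTN scope). Names are illustrative assumptions only.

   def map_l3vpn_order(order):
       vn_request = {                  # toward the MDSC (ACTN scope)
           "endpoints": order["sites"],
           "bandwidth": order["bandwidth"],
           "latency-bound": order.get("latency"),  # optional constraint
       }
       service_config = {              # toward PNC1's service module
           "vpn-id": order["vpn-id"],
           "route-targets": order["route-targets"],
           "sites": order["sites"],
       }
       return vn_request, service_config


   order = {
       "vpn-id": "vpn-blue",
       "sites": ["CE1", "CE2"],
       "bandwidth": "1Gbps",
       "route-targets": ["65000:1"],
   }
   vn, cfg = map_l3vpn_order(order)
   ```

   The VN request drives the multi-layer path setup through the ACTN
   hierarchy, while the service configuration is applied outside ACTN,
   matching the two parallel flows in the figure.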