
Network Working Group
Internet Draft
Expiration Date: September 2001

                                            Osama Aboul-Magd (Nortel)
                                      Daniel Awduche (Movaz Networks)
                                        Curtis Brownmiller (Worldcom)
                                                   John Eaves (Tycom)
                                               Rudy Hoebeke (Alcatel)
                                   Hirokazu Ishimatsu (Japan Telecom)
                                                 Monica Lazer  (AT&T)
                                                   Guangzhi Li (AT&T)
                                               Michael Mayer (Nortel)
                                            Ananth Nagarajan (Sprint)
                                                   Lynn Neir (Sprint)
                                                Sandip Patel (Lucent)
                                                   Eve Varma (Lucent)
                                                Yangguang Xu (Lucent)
                                            Yong Xue (UUNET/WorldCom)
                                                Jennifer Yates (AT&T)

     A Framework for Generalized Multi-protocol Label Switching (GMPLS)


Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

Y. Xu, et. al.                                                  [Page 1]

draft-many-ccamp-gmpls-framework-00.txt                         Jan. 2002

Abstract

   This document presents a framework for the GMPLS control plane. It studies
   the GMPLS control plane from three different perspectives: functional
   partitioning, computational layering, and network partitioning. First, it
   studies the functional building blocks of a generic GMPLS control plane and
   illustrates, for each functional component, what it may assume, what it
   "should" accomplish (instead of "how"), and what the generic procedures and
   possible options are, so that the control plane can be generalized for
   different technologies and their specific application environments. Then,
   this document goes beyond functional partitioning and a single network. It
   illustrates computational network layering, network partitioning, and their
   implications for GMPLS control plane design and operation in the overall
   hybrid network. It also covers issues associated with control plane
   interaction between multiple networks, control plane integration, and
   control plane operation models.

   This document focuses on different aspects of the GMPLS control plane from
   those covered in the requirements document [CARR-REQ] and the GMPLS
   architecture document [GMPLS-ARCH]. It complements these drafts and other
   GMPLS related documents.

Summary for Sub-IP Area

   Please see the abstract above

   See the Reference Section

   This work fits in the Control Plane of CCAMP

   GMPLS belongs to CCAMP. Because this document presents a framework
   for GMPLS, it is logical to target this document at the CCAMP WG.

   The CCAMP charter calls for a framework document. This document is a
   revision of previous work that was considered a good fit for a GMPLS
   framework. It was written by a balanced group of authors from both
   equipment vendors and service providers. It complements the requirements
   and architecture documents for the GMPLS control plane and provides
   guidance for protocol design. So CCAMP should consider this document.


Table of Contents

    1     Specification  .........................................
    2     Acronyms   .............................................
    3     Introduction  ..........................................
    4     Background  ............................................
    4.1   Transport Technology Overview  .........................
    4.2   Transport Network Control Plane Evolution and Options ..
    4.3   Motivation for A GMPLS Control Plane  ..................
    5     GMPLS Control Plane Functions and Goals ................
    6     Architectural Components of GMPLS Control Plane  .......
    6.1   Architectural Overview   ...............................
    6.2   Data Communication Network for GMPLS Control Plane  ....
    6.3   LSR Level Resource Discovery and Link Management   .....
    6.4   GMPLS Routing   ........................................
    6.5   GMPLS Signaling   ......................................
    7     GMPLS Operation Models and Control Plane Integration  ..
    7.1   LSP Hierarchy and the Network Layering  ................
    7.2   Network Partitioning and Control Plane Interfaces   ....
    7.3   GMPLS Control Plane Operation Models   .................
    8     Security Considerations  ...............................
    9     Acknowledgment  ........................................
    10    Authors' Addresses  ....................................
    11    References  ............................................


1. Specification

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

2. Acronyms

   ASTN :   Automatic Switched Transport Network
   OTN  :   Optical Transport Network
   LSR  :   Label Switched Router
            (In this document, LSR is used as a generic term. It is
             interchangeable with Network Element)
   NE   :   Network Element
   LSP  :   Label Switched Path
   PDM  :   Packet Division Multiplexing
   TDM  :   Time Division Multiplexing
   SDM  :   Space Division Multiplexing
   PS   :   Protection Switching
   EMS  :   Element Management System
   NMS  :   Network Management System
   OCh  :   Optical Channel
   OMS  :   Optical Multiplex Section
   OTS  :   Optical Transmission Section
   PDH  :   Plesiochronous Digital Hierarchy
   SDH  :   Synchronous Digital Hierarchy
   SONET:   Synchronous Optical Network
   STM-N:   Synchronous Transport Module of level N
   STS-N:   Synchronous Transport Signal of level N
   WDM  :   Wavelength Division Multiplexing

3. Introduction

A network is composed of a data (or transport) plane, a control plane, and a
management plane. The scope of the control plane is connection setup and
processes necessary to support this, such as neighbor discovery/link management,
routing, and signaling. Other network functions, such as fault management and
performance monitoring, involve different roles and timescales and are outside
the scope of the control plane.

Despite the differences that exist in the technology and operation of different
networks, especially within their data planes, all switched networks share very
similar characteristics in their control planes for connection management. MPLS
has the ability to separate the control and forwarding aspects of routing. This
ability allows MPLS protocol extensions to encompass different forwarding
planes, including those of circuit transport networks. For this reason, there
is significant interest throughout the industry in defining a Generalized MPLS
control plane.

In general, the GMPLS control plane proposes to extend current MPLS signaling
and other IP based control protocols into different types of networks. GMPLS
should define a set of powerful tools that can be used for different
technologies and network scenarios. It should facilitate network inter-working
and simplify network operation.

Besides the commonalities, we also need to pay attention to the differences
between different types of networks. MPLS/IP based control protocols are
designed and optimized for packet switching networks. There are some
fundamentally different assumptions between packet switching networks and
circuit switching transport networks. For example, the control and transport
planes are typically independent of each other in a circuit switching network,
while in an IP network, control and user traffic are mixed together.
Furthermore, there are different service and operation requirements for
different types of networks. The GMPLS control plane, as a generalized control
plane, should be flexible enough, or provide enough tools, to accommodate all
these differences and requirements. Technology dependent extensions and
modifications should be introduced in the control plane, when necessary, to
account for inherent peculiarities of the underlying technologies and
networking contexts.

In this document, section 4 introduces the background and rationale for GMPLS
control plane. Section 5 highlights the service and implementation requirements
for GMPLS control plane from service providers' point of view. Section 6
discusses the functional building blocks of a generic GMPLS control plane and
illustrates, for each functional component, what it may assume, what it
"should" accomplish (instead of "how"), and what the generic procedures and
possible implementation options are, so that the control plane can be
generalized for different technologies and their specific application
environments. Section 7 goes beyond functional partitioning and a single
network. It illustrates the LSP hierarchy, network layering, network
partitioning, and their implications for GMPLS control plane design and
operation. This section then presents how to achieve control plane integration
and options for GMPLS control plane operation models.

4. Background

GMPLS was initiated to develop a unified control plane for multiple switching
technologies. The following sub-sections give the background of transport
technologies, transport network control plane evolution, and rationale for
selecting a GMPLS control plane.


4.1 Transport Technology Overview

Time Division Multiplexing (TDM) of several digital signals into a single higher
rate signal is used to maximize the bandwidth efficiency of digital circuit
transport networks.  The first generations of TDM systems, based on PDH
techniques, used bit interleaving and pulse stuffing for clock synchronization
across the tributaries.  The resulting hierarchy required full de-multiplexing
for identification of tributary signals at each level of the hierarchy.
Contemporary TDM schemes such as SONET/SDH (Synchronous Optical
Network/Synchronous Digital Hierarchy) and OTN (Optical Transport Network) use
byte interleaving of tributary signals and the transport overheads necessary for
network management functions to define a multiplex hierarchy of single-channel
systems in which the tributaries are more directly identifiable.  Currently, the
TDM hierarchy extends up to about 40Gbit/s.

Both SONET/SDH and OTN are optimized for optical transport. Wavelength Division
Multiplexing (WDM) may be used in combination with SONET/SDH or OTN to create
optical transport systems of many wavelengths, each wavelength serving as the
carrier of a TDM tributary channel.  A transport bandwidth of a few Tbit/s
capacity can thereby be achieved on current WDM systems.  Due to the nature of
analog impairment accumulation in WDM systems and possible limitations due to
fiber nonlinearities, there is typically a tradeoff between the maximum
tributary bit rate, the maximum number of tributaries, and the total
transmission distance achievable on a WDM system.

SONET, mostly used in North America, is defined in ANSI T1.105. It is based on a
basic building block of 51.840 Mbit/s (STS-1) and the hierarchical levels are
integrally related to that value.  However, only certain values above STS-1 are
recognized by the ANSI specification (STS-1, STS-3, STS-24, STS-48, and STS-
192). This helps alignment with SDH which has the same basic frame structure but
is defined relative to integer multiples of the unit 155.520 Mb/s (STM-1) which
is exactly 3xSTS-1.  The SDH signal structure and hierarchy is specified in ITU-
T Recommendation G.707 according to the network architectural principles for SDH
given in G.803.  Although some differences in overhead assignment usage and
networking functions exist between SONET and SDH, these systems can be made to
interwork at the transport level of their mutually compatible hierarchical
levels.

OTN is a transport capability optimized for use of WDM technologies, enabling
the network manipulation of wavelengths. The architectural foundation of OTN is
defined in ITU-T Rec. G.872 and the signal structure and hierarchy is defined in
Rec. G.709.  The optical transport network layered structure is comprised of
optical channel (OCh), optical multiplex section (OMS) and optical transmission
section (OTS) layer networks.  The optical channel layer network provides end-
to-end networking of optical channels for transparently conveying client
information of varying format (e.g. PDH, SONET/SDH, cell based ATM, etc.).  The
optical multiplex layer network provides functionality for networking of a
multi-wavelength optical signal (the OMS can also be a single-wavelength).  The
optical transmission section layer network provides functionality for
transmission of optical signals on various types of optical media.


Each of these network layers of the OTN has related overheads for ensuring
integrity of that layer and enabling relevant operations and management
functions.  In particular, the optical channel layer provides OCh supervisory
functions for enabling network level operations and management functions, such
as connection provisioning, quality of service parameter exchange and network
survivability. Similarly, the OMS and OTS overheads include the capabilities for
survivability at the optical multiplex section and optical transmission section
layers, respectively.

4.2 Transport Network Control Plane Evolution and Options

Currently, transport networks providing the capability to establish, tear down
and maintain end-to-end circuit connections have virtually no automatic network
control. Networks providing SONET/SDH and, possibly, wavelength services are
provisioned via vendor Element Management Systems (EMS) and operator Network
Management Systems (NMS).  For a variety of reasons, the steps required for the
provisioning process are generally very slow relative to the switching speed,
often inaccurate, and frustrating to users who want quick turn-up of service
and the flexibility to change the network resources dedicated to their use.
It is often costly to network operators in terms of delayed or lost revenue
and the expense required to integrate various vendors' equipment into the
network and its NMS, due to the hierarchical, centralized architecture and
proprietary implementations of the various management systems.

Connections can be established either by provisioning, signaling, or a hybrid
combination of these:

-- In the provisioning method, a connection is established by configuring every
   network element along the path with the required information to establish an
   end-to-end connection.  Provisioning is provided either by means of
   management systems or by manually creating "hard" permanent connections.  A
   network management system is used to access a database model of the network
   to first establish the most suitable route and then to send commands to each
   network element, involved in that route, to support the connection.

-- In the signaling method, a connection is established on demand by the
   communicating end points within the control plane using a dynamic protocol
   message exchange in the form of signaling messages.  The resulting switched
   connection requires network naming and addressing schemes as well as the
   supporting control plane protocols to achieve inter-working solutions.

-- In the hybrid method, a network provides a permanent connection at the edge
   of the network and utilizes a switched connection within the network to
   provide end-to-end connections between the permanent connections at the
   network edges.  Appropriate signaling and routing protocols are then used to
   establish the end-to-end network connections.  Provisioning is therefore
   required only on the edge connections.  This type of network connection is
   known as a "soft" permanent connection.  From the user's perspective, a soft
   permanent connection appears as a provisioned, management-controlled
   permanent connection.
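As an illustration, the three establishment methods above differ mainly in
which party configures which segment of the end-to-end path. The sketch below
is purely hypothetical (the function names and the "configured-by" tags are
illustrative assumptions, not GMPLS protocol elements):

```python
def provision(path):
    """Provisioned ("hard" permanent): a management system configures
    every network element along the path."""
    return [(ne, "configured-by-NMS") for ne in path]

def signal(path):
    """Switched: elements configure themselves in response to dynamic
    signaling messages exchanged within the control plane."""
    return [(ne, "configured-by-signaling") for ne in path]

def soft_permanent(edge_in, core_path, edge_out):
    """Hybrid ("soft" permanent): provisioned at the network edges,
    switched (signaled) across the core."""
    return provision([edge_in]) + signal(core_path) + provision([edge_out])

# A soft permanent connection A -> B -> C -> D:
conn = soft_permanent("A", ["B", "C"], "D")
```

From the user's point of view, `conn` looks like a single provisioned
connection, although only the two edge elements were configured by management.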

The major difference among these methods is the party that establishes the
connection.  In the case of provisioning, the connection is set up by the
network operator.  In the case of signaling, the connection set-up may be the
responsibility of the end user, but could also be performed by a third party.

The evolution of network capabilities is toward quick connection setup, which
can best be accomplished by the signaling or hybrid connection establishment
methods, supported by automated discovery of network resources. Automated
discovery of network topology, required to support identification of feasible
connections, can be accomplished by either centralized or distributed methods.

4.3 Motivation for a GMPLS Control Plane

There are many candidates for the control plane for circuit switched networks,
e.g. PNNI, SS7, etc. A GMPLS control plane offers the following attributes:

-- It is ideally suited for networks where the primary services offered are
   IP/MPLS-based, and/or the operators utilize (or are moving towards) data-
   oriented operations and management paradigms.  This simplifies network
   administration across IP/MPLS-based and non-IP/MPLS-based networks via
   increased uniformity of control and management semantics, and hence improves
   control coordination across LSRs and circuit-switched network elements.

-- Recent advances in MPLS control plane technology can be leveraged, in
   addition to accumulated operational experience with IP distributed routing
   protocols.

-- Software artifacts, originally developed for the MPLS traffic engineering
   application, can be utilized as a basis for transport control plane software,
   with appropriate extensions/modifications. Consequently, this fosters the
   rapid development and deployment of a new class of transport devices such as
   PXCs and OXCs.

-- The GMPLS control plane facilitates the introduction of control coordination
   concepts between any kind of network device: PXCs, OXCs, and LSRs. GMPLS can
   provide the efficient unified "control-plane service" needed by the IP/MPLS,
   SONET/SDH, and OTN layers. Using the same paradigm and the same signaling
   and routing protocol suite to control multiple layers will not only reduce
   the overall complexity of designing, deploying, and maintaining transmission
   networks, but also allow two or more contiguous layers to inter-operate
   regardless of the partitioning of the network.

In general, a common signaling and protocol suite for distributed control of
circuit switched networks facilitates multi-layer circuit establishment across
heterogeneous transport networks, and reduces the overall complexity of
designing, deploying, and maintaining such networks. It additionally
facilitates the introduction of new networking and service capabilities, such
as fast distributed restoration, that can bring significant infrastructure
cost savings.

5  GMPLS Control Plane Functions and Goals

The GMPLS control plane deals primarily with connection management, including
packet connection services and circuit connection services, which are
parts of the overall network management functions; the other parts
include aspects of accounting management, performance management,
security management, policy management, etc. The GMPLS control plane
is fundamentally an IP-based distributed connection control mechanism. This
does not preclude the common usage of GMPLS together with other solutions
such as centralized network management systems.

The text below highlights some high level functions and service requirements
from carriers. Detailed requirements are in [CARR-REQ].

-- Operation Automation
   The control plane encompasses distributed management functions and
   interfaces required for automatic connection management in a network.
   One of the most fundamental goals of a control plane is to automate
   operations. The GMPLS distributed control plane should provide
   carriers with enhanced network control capabilities and relieve operators
   from unnecessary, painful and time consuming manual operations. At the
   same time, it should facilitate control plane interoperation and
   integration between networks with different data plane technologies.

-- Path Selection Optimization
   The selection of the LSP explicit route in a GMPLS controlled network
   should be optimized to ensure effective and efficient network resource
   utilization and other performance objectives.

-- Fast Restoration of Data Path
   Restoration of data path may be performed by pre-selected (offline)
   or on-line real-time path computations. Off-line computation may be
   facilitated by simulation and/or network planning tools. Off-line computation
   can help provide guidance to subsequent real-time computations. It is
   expected that on-line computation and distributed restoration mechanisms
   will provide shorter restoration times than those available today with a
   centralized NMS.

   Restoration may be complicated by a simultaneous failure of the control
   plane data communication network. In order to meet the data path
   restoration requirements, the control plane DCN should be engineered for
   reliability, with built-in recovery mechanisms.
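As a sketch of the kind of on-line computation involved, the fragment below
runs a plain shortest-path search over the surviving topology after a failed
link has been pruned from the link-state view. This is only an illustration;
a real restoration computation would apply technology-specific constraints
(wavelength continuity, protection sharing, etc.) on top of such a search:

```python
import heapq

def shortest_path(links, src, dst):
    """Plain Dijkstra over a list of (node_a, node_b, cost) links.
    Returns the node sequence from src to dst, or None if unreachable."""
    graph = {}
    for a, b, cost in links:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Restore around a failed link A-B by pruning it and recomputing:
links = [("A", "B", 1), ("A", "C", 1), ("C", "B", 1)]
surviving = [l for l in links if {l[0], l[1]} != {"A", "B"}]
```

Here `shortest_path(surviving, "A", "B")` yields the restoration route via C.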

-- Alarm handling
   Alarms relating to the control plane managed entity itself must be
   reported to the management plane.  The alarm handling depends on the
   management plane. It is expected for SNMP based implementations that
   alarm reporting mechanisms and conventions be consistent with what is being
   specified in the IETF DISMAN working group.

More generically, the following requirements are expected from a GMPLS based
distributed control plane encompassing multi-layer transport networks:

-- Scalability
   The performance of the control plane should not largely depend on the
   scale of the network to which GMPLS is applied (e.g. the number of nodes,
   the number of physical links, etc.). The control plane should maintain
   constant performance as much as possible, regardless of network size.

-- Flexibility
   The control plane should have flexibility in its functionality and
   provide operators with full control and configurability.

6. Architectural Components of GMPLS Control Plane

This section studies the GMPLS control plane from the point of view of
functional partitioning. Here, we look at a generic control plane of a single
network.
Network partitioning, network layering and issues associated with control
plane interaction of multiple networks, control plane integration and control
plane operation models are discussed in section 7.

6.1 Architectural Overview

In general, a GMPLS control plane should:

-- Be applicable to all packet switching and circuit switching
   networks; e.g., IP, ATM, OTN, SONET/SDH, and PDH. In order to achieve this
   goal, it is essential to isolate technology dependent aspects from
   technology independent aspects and to address them separately.

-- Be sufficiently flexible to accommodate different network scenarios
   (service provider business models). This goal must be achieved by
   partitioning the control plane into a number of distinct functional
   components. Each component may require more than one tool for different
   network scenarios. Also, each tool needs to be configurable and
   extensible. This, for example, allows vendors and service providers to
   decide the logical placement of these components, and also allows the
   service provider to decide on security and policy.

From a functional point of view, a GMPLS control plane can be partitioned into
the following functional components: neighbor discovery and link management,
routing, and signaling. The GMPLS control plane needs a data communication
network for control messages. It also has interfaces to the management plane
and to network element hardware, such as the system controller and switching
fabric.

Details of the control plane DCN and its functional components are elaborated
in the following sub-sections.

6.2 Data Communication Network for GMPLS Control Plane

Before describing the functional components, we first need to elaborate on the
data communication network for the control plane. The GMPLS control plane
consists of a set of distributed controllers that communicate and coordinate
with each other to achieve connection operations. A Data Communication Network
(DCN) is needed to enable control messages to be exchanged between controllers.
For conventional IP networks, there is no separate control plane network,
because control and user data traffic share the same network. For circuit
based networks, an independent DCN is a fundamental requirement, because a
typical circuit switch doesn't process user data traffic in its transport
plane. In order for the GMPLS control plane to be applicable to circuit based
networks, the DCN concept MUST be introduced. The DCN supporting the GMPLS
control plane may use different types of physical media:

-- The control traffic could be carried over a communication channel embedded
   in the data-carrying circuit link between LSRs. For example, the medium
   could be SONET/SDH or OTN overhead bytes.

-- The control traffic could be carried over a separate communication channel
   that shares the physical link with the data channels. For example, the
   medium could be a dedicated wavelength, an STS-1, or a DS-1.

-- The control traffic could be carried over a dedicated communication link
   between LSRs, separate from the data-bearing links. For example, the
   medium could be a dedicated LAN or a point to point link.

6.2.1 GMPLS Control Plane Requirements for DCN

GMPLS control plane operations depend heavily on a DCN for exchanging control
messages. Thus, the DCN should satisfy the following requirements:

-- The DCN supporting the GMPLS control plane MUST support IP.

-- The DCN should provide communications (either directly or indirectly)
   between any pair of LSRs that need to exchange control traffic.


-- The DCN must be secure. This requirement stems from both signaling and
   routing perspectives. The routing information exchanged over the control
   plane is service-provider specific, and security is critical. Service
   providers may not want to expose internal network information outside
   their network boundaries, even to their own customers. Meanwhile, the
   control plane network must prevent all kinds of attacks on connection
   services.

-- The DCN must be reliable and provide fault-tolerance. The DCN is the
   transportation system for GMPLS control plane messages. If the DCN fails,
   no GMPLS messages can be communicated among GMPLS controllers. DCN
   reliability must be guaranteed, even during what might be considered
   catastrophic failure scenarios of the service transport networks. Failures
   in the user-traffic transport network that also affect the control plane
   DCN may result in the inability to restore traffic, or a degradation in
   service restoration performance.

-- The DCN should support message forwarding priority functionality. The
   overall performance of the GMPLS control plane largely depends on its
   control message transportation. Time sensitive operations, such as
   protection switching, may need certain QoS guarantees. Furthermore, message
   transportation should be guaranteed, or recovered quickly, in the event of
   a control network failure.

-- The DCN should be extensible and scalable. The performance of the DCN
   should not explicitly depend on the network size, such as the number of
   nodes and number of links, since the network supported by the GMPLS control
   plane may be extended via new network deployments.

-- Control plane DCN inter-working is the first step towards control plane
   integration. It is desirable to have a common control plane DCN architecture
   and protocol stack so that control planes of different networks could
   communicate with each other.

6.2.2 De-coupling of the Control Plane and Transport Plane

The GMPLS control plane should not make any assumptions about which type of
physical medium is used for its DCN, which implies that the GMPLS control
plane and its transport plane should be de-coupled, at least logically. This
implication affects the GMPLS control plane design in several respects:

-- The control plane DCN may have a different physical topology from its
   transport plane network, such that an LSR's transport plane neighbor may
   not be its control plane neighbor.

-- Transport plane neighbor discovery may need to rely on mechanisms other
   than DCN based neighbor discovery.

-- Control interfaces and transport interfaces may be de-coupled. It is then
   not required for two transport plane neighbors to have a direct control
   channel, as long as they are reachable through the DCN. This also implies
   that the routing of control plane messages in the control plane DCN is
   logically de-coupled from the routing of LSPs in the transport plane.

-- The health of the control plane and the transport plane should be
   maintained independently. This requires separate notifications and status
   codes for the control plane and transport plane. In the event of a control
   plane failure (for example, a communications channel or control entity
   failure), new circuit LSP (connection) operations may not be accepted, but
   existing connections shall not be dropped. Connections in progress can be
   removed, cranked back, or continued once the failure in the control plane
   is resolved. In any case, it is imperative that connections in progress not
   be left partially set up (or hanging).

-- Signaling messages do not necessarily flow along the data path, so some of
   the basic concepts in MPLS need to be re-defined with a broader scope.

-- Both IP based signaling and routing protocols should be enhanced to
   accommodate this change.

6.2.3 Control Plane DCN Management Module

To support all the requirements mentioned in section 6.2.1, a pure IP
solution needs to be enhanced: there should be a built-in module to manage
the GMPLS control plane DCN. This module should include three functions:
(1) control link/channel management, (2) control route management, and
(3) control traffic management. The main objective of this component is to
overcome link degradations (failures or congestion) and maintain DCN
reliability. To meet this objective, the signaling management function
monitors the status of each control channel and maintains pre-selected
alternate routes to overcome link degradation.
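As a rough illustration only, the control link/channel management behavior
described above could be sketched as follows. The class, route lists, and
status values are hypothetical, not part of any GMPLS specification.

```python
# Hypothetical sketch of control link/channel management: monitor each
# control channel and switch control traffic to a pre-selected alternate
# route when the primary route degrades.

class ControlChannel:
    def __init__(self, name, primary_route, alternate_route):
        self.name = name
        self.primary_route = primary_route      # list of DCN hops
        self.alternate_route = alternate_route  # pre-selected fallback
        self.status = "up"                      # "up" | "degraded" | "down"

    def active_route(self):
        # Fall back to the alternate as soon as the primary is not "up",
        # so control messages keep flowing over the DCN.
        if self.status == "up":
            return self.primary_route
        return self.alternate_route

ch = ControlChannel("LSR-A/LSR-B", ["A", "B"], ["A", "C", "B"])
assert ch.active_route() == ["A", "B"]
ch.status = "degraded"                          # monitoring detects trouble
assert ch.active_route() == ["A", "C", "B"]
```

In this sketch, control route management would maintain the alternate
routes themselves, and control traffic management would additionally react
to congestion rather than only to failures.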

Such emphasis on built-in management of the control plane DCN network is
rare; however, in the GMPLS control plane there are strong reasons for
emphasizing internal DCN management:

-- The function being specified is critical. The performance of a network's
   control plane DCN architecture affects all subscribers to the network.

-- The various networks involved must support multiple vendors' control
   traffic. Degradations in one carrier's control plane DCN will have
   repercussions beyond that carrier's borders. Thus, some mutual agreement
   on the degree of reliability of national DCNs is indicated.

-- Recovery and restoration actions may involve multiple networks. If the
   GMPLS control plane does not include failure and congestion recovery
   procedures, it would be necessary for the administration of each public
   network to enter into bilateral agreements with a number of other
   networks.


6.3 LSR Level Resource Discovery and Link Management

LSR resource discovery and link management is defined as the transaction
that establishes, verifies, updates, and maintains the LSR adjacencies and
their port-pair associations for the transport (data) plane.

6.3.1 The Output - LSR Level Resource Table

The resource discovery module is required to generate a complete LSR-level
resource map, which includes at least physical attributes, logical
attributes, neighbor identifiers, and pertinent real-time operational
status. Examples of physical attributes are signal type, payload type,
optics type, etc. Several physical attributes can be abstracted into one
logical attribute. Meanwhile, logical attributes may not be assigned to a
particular physical port directly; instead, they may describe
characteristics of a physical port pool. Examples of logical attributes
are: VPN ID, SRLG (shared risk link group) [OPT-BUND], and service type.

It should be noted that much of the information in the local resource map
might not be used by other control plane functions; it is used for fault
management, inventory management, etc.
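For illustration, one entry of such a local resource table might be
organized as below. The field names and values are assumptions made for
this example, not defined by this framework.

```python
# Illustrative LSR-level resource table entry: physical attributes,
# logical attributes, neighbor identification, and real-time status.
port_entry = {
    "port_id": 17,
    "physical": {"signal_type": "OC-48", "payload": "SONET",
                 "optics": "1550nm"},
    "logical": {"vpn_id": None, "srlg": [301, 412],
                "service_type": "protected"},
    "neighbor": {"lsr_id": "10.0.0.2", "remote_port_id": 5},
    "status": "in-service",
}

# Logical attributes such as the SRLG list need not map to a single port;
# they may equally describe a pool of ports sharing the same risk.
assert port_entry["neighbor"]["lsr_id"] == "10.0.0.2"
assert 301 in port_entry["logical"]["srlg"]
```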

Furthermore, an LSR might have different types of neighbors: its physically
adjacent neighbor may not be its switching neighbor. For example, two OXCs
could be inter-connected through DWDM transmission systems in the middle.
These two OXCs are switching neighbors even though they are not physically
adjacent. So an LSR may maintain several local LSR-level resource tables.
The GMPLS control plane only cares about the resource tables of an LSR's
switching neighbors; the other LSR-level resource tables may be used for
fault management, inventory management, etc.

6.3.2 Operation Procedures

Resource discovery and link management may include the following operation
steps:

-- Self resource awareness/discovery
   The result of self resource awareness/discovery is to populate the
   local ID, physical attributes, and logical constraint parameters in the
   element resource table.

-- Neighbor discovery and port association
   This step discovers the adjacencies in the transport plane and
   their port association and populates the neighbor LSR address and port
   ID fields in the resource table.

   Because the control plane network may be different from the transport
   plane network in a circuit switching network, LSRs that are not
   adjacent in the control plane network may be adjacent in the transport
   network. In order to unambiguously identify the transport plane neighbors
   and their port associations, it is essential to have in-band events
   (along the transport plane) coordinated with other control messages
   (along the control plane).

   An example of in-band events for LSRs capable of electrical signal
   processing in the transport plane is a byte stream containing the
   local LSR address and port ID. Bi-directional links may use in-band
   signaling between both ends. Uni-directional links may employ messages
   through the control network to coordinate with in-band signals to achieve
   bi-directional control coordination.

   For a pure OXC without O-E-O capability, an analog signal (power
   on/off) could in principle be used as the in-band event. Control
   messages over the control network can then be used to co-ordinate
   and associate additional significance with the in-band event to
   identify neighbor adjacency and port associations. Details of this
   type of scenario are for further study.

-- Resource verification and monitoring
   After neighbor resource discovery, neighbors should detect their
   operational states and verify their configurations, such as physical
   attributes, in order to ensure compatibility. Such verification can be
   done through control messages over the control plane network without
   using in-band signals. In case of any mismatch, the port should be marked
   as mis-configured, and a notification should be issued to operators.

   Resource monitoring is a continual process. Neighbor discovery and
   port association procedures are repeated periodically. Resource
   monitoring procedures update resource state and report changes to
   relevant control entities.

-- Service negotiation / discovery
   Service negotiation essentially covers all aspects of service-related
   rules/policy negotiation between neighbors.
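The in-band neighbor-discovery event described in the steps above, for LSRs
with electrical processing, can be sketched as a byte stream carrying the
local LSR address and port ID. The 4-byte address plus 2-byte port layout
below is an assumption for illustration, not a defined message format.

```python
import struct

def encode_discovery(lsr_addr, port_id):
    # 4-byte IPv4-style LSR address followed by a 2-byte port identifier,
    # sent in-band over the transport plane link.
    a, b, c, d = (int(x) for x in lsr_addr.split("."))
    return struct.pack("!BBBBH", a, b, c, d, port_id)

def decode_discovery(data):
    # The receiving LSR learns who its transport plane neighbor is and
    # which remote port the link terminates on.
    a, b, c, d, port_id = struct.unpack("!BBBBH", data)
    return f"{a}.{b}.{c}.{d}", port_id

msg = encode_discovery("10.0.0.1", 17)
assert decode_discovery(msg) == ("10.0.0.1", 17)
```

For uni-directional links, the decoded result would then be confirmed over
the control plane DCN to achieve bi-directional coordination.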

6.4 GMPLS Routing

The functionality of GMPLS control plane routing includes topology
dissemination and path selection for the LSPs in the transport/data plane.

-- Topology Information Dissemination
   Topology information dissemination distributes topology information
   throughout the network to form a consistent network-level resource view
   among LSRs. However, a consistent view does not mean an equal view. Since
   networks may be partitioned into hierarchies according to business,
   technological, geographical, and other considerations, LSRs just need to
   be equipped with enough information to perform path selection at
   different levels and granularities.


   The key issue of topology dissemination is to answer three questions:
   "what", "to whom", and "how". That is, "what" information should be
   distributed to "whom", and "how" should the information dissemination be
   controlled.

-- Path Selection
   GMPLS path selection is a constraint-based computation procedure. This
   procedure should conform to connection request constraints (such as
   physical diversity), maximize network performance, adhere to management
   policies, and conform to network-specific constraints.

   The two most commonly used path selection mechanisms are hop-by-hop and
   explicit path selection.

6.4.1 Topology Information Dissemination

The goal of topology information dissemination is to disseminate the
sufficient and necessary information effectively and efficiently to LSRs
such that they are able to select an optimized path. The major concern for
topology information dissemination is scalability.

A typical transport core switch may contain thousands of physical ports, so
the detailed link state information for an LSR could be huge. Scalability
requires minimizing global information propagation and keeping detailed
information, as well as decision-making, local as much as possible.

To achieve scalability, the routing protocol should support different levels
of abstraction:
-- At the LSR level, parallel link connections should be summarized into an
   aggregate bundle link, which hides the link connection details; only the
   summarized information is distributed to the entire network. A bundle
   is defined as a collection of link connections that share some common
   logical or physical attributes that are significant for path selection
   purposes. The specific links inside a bundle link should be equivalent
   for routing purposes. The bundling granularity should be carefully
   considered and engineered, because propagating too little information
   will hurt operational performance.

-- At the network level, the GMPLS routing protocol should support
   hierarchical routing. Topology information of a sub-network is aggregated
   and abstracted when disseminated to other sub-networks. At the AS
   boundary, information is also filtered according to local policies.
   However, the number of levels in the network hierarchy should be well
   evaluated: too many hierarchies may hurt network performance, because the
   abstraction and aggregation may not generate the "optimal" paths.
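The LSR-level summarization above can be sketched as follows. This is
illustrative only; the choice of bundling key (neighbor, signal type, SRLG
list) is an assumption made for the example.

```python
def bundle_links(links):
    # Collapse parallel link connections that are equivalent for routing
    # (same neighbor, signal type, and SRLG list) into one bundle; only
    # the summary would be advertised network-wide.
    bundles = {}
    for link in links:
        key = (link["neighbor"], link["signal_type"], tuple(link["srlg"]))
        b = bundles.setdefault(key, {"count": 0, "total_bw": 0})
        b["count"] += 1
        b["total_bw"] += link["bw"]
    return bundles

links = [
    {"neighbor": "B", "signal_type": "OC-48", "srlg": [301], "bw": 2488},
    {"neighbor": "B", "signal_type": "OC-48", "srlg": [301], "bw": 2488},
]
summary = bundle_links(links)
assert summary[("B", "OC-48", (301,))] == {"count": 2, "total_bw": 4976}
```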

Scalability also requires controlling the information update frequency. The
following optimization techniques should be considered in the GMPLS control
plane routing protocol:


-- Differentiate static and dynamic information and update only the delta.

-- Control the update frequency through different types of thresholds.
   Generally, two types of thresholds could be defined: absolute thresholds
   and relative thresholds.
   (1) An absolute threshold defines the minimum absolute resource change
       that triggers an update.
   (2) A relative threshold describes the percentage change relative to the
       previously advertised data.
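As an illustration, the two threshold types could be combined into a single
update decision as sketched below. The threshold values are arbitrary
examples, not recommended defaults.

```python
def should_advertise(prev_bw, new_bw, abs_threshold=100, rel_threshold=0.1):
    # Advertise only when the change crosses the absolute threshold or
    # the relative (percentage) threshold; otherwise suppress the update.
    delta = abs(new_bw - prev_bw)
    if delta >= abs_threshold:                          # absolute threshold
        return True
    if prev_bw and delta / prev_bw >= rel_threshold:    # relative threshold
        return True
    return False

assert should_advertise(1000, 950) is False   # 5% and 50 units: suppressed
assert should_advertise(1000, 880) is True    # 120 units >= absolute 100
assert should_advertise(200, 150) is True     # 25% >= relative 10%
```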

There is a tradeoff between accuracy of the topology and the GMPLS control plane
scalability. The GMPLS control plane should be designed such that network
operators are able to have the flexibility to adjust the balance according to
their networks' specific characteristics.

6.4.2 Path Selection

Path selection could be done either off-line at network planning time or
on-line in real time. The choice depends upon the computational complexity,
topology information availability, and the specific network context. Both
off-line
and on-line path selection may be provided. For example, operators could use on-
line computation to handle a subset of path selection decisions and use off-line
computation for complicated traffic engineering and policy related issues such
as demand planning, service scheduling, cost modeling and global optimization.

This document does not discuss specific path selection algorithms, but in
general the following inputs and outputs of path selection should be
supported.

-- Path Selection Input
   The inputs for path selection include the circuit connection end points,
   a set of requested routing constraints, and the constraints of the
   network.

   The requested constraints include bandwidth requirements, diversity
   requirements, an inclusion/exclusion hop list, etc. One of the major
   services provided by a GMPLS control plane is fast restoration.
   Restoration is often provided by either pre-computing or finding
   diversified routes for connections in real time. There are different
   levels of diversity requirements. The simplest form of diversity is node
   diversity. More complete notions of diversity can be addressed by logical
   attributes such as shared risk link groups (SRLG), which can be
   abstracted by operators
   from several physical attributes such as fiber trench ID and destructive

   There are also constraints imposed by the networks. Some of these
   constraints are generic, such as connectivity and bandwidth availability;
   some are technology dependent, such as those addressed by [OLCP].

-- Path Selection Output
   For the intra-AS case, the result of path selection is an explicitly
   routed path that satisfies the connection requirements. This path should
   provide enough information for each LSR to choose the appropriate
   physical resource for the connection operation. For example, the details
   of a hop could be at the link bundle level.

   If the path crosses multiple carriers, the output of path selection may
   include a list of ASes, or simply the next AS.
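A minimal constraint-based path selection can be sketched as pruning
infeasible links and then running a shortest-path search over what remains.
The topology and bandwidth figures below are invented for the example.

```python
import heapq

def select_path(links, src, dst, min_bw):
    # links: {(u, v): available_bandwidth}; keep only feasible links.
    adj = {}
    for (u, v), bw in links.items():
        if bw >= min_bw:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    # Uniform-cost search with unit hop costs, for brevity.
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in adj.get(node, []):
            heapq.heappush(heap, (cost + 1, nxt, path + [nxt]))
    return None   # no path satisfies the constraints

links = {("A", "B"): 100, ("B", "C"): 40, ("A", "D"): 100, ("D", "C"): 100}
assert select_path(links, "A", "C", min_bw=50) == ["A", "D", "C"]
```

A real implementation would prune on diversity constraints (e.g., SRLGs)
and policy as well, not only on bandwidth.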

6.4.3 Inter-AS vs. Intra-AS

Intra-AS routing and inter-AS routing have different requirements and may need
different tool sets.

In intra-AS routing, there is no trust boundary between network interfaces,
so link-state-based IGPs (such as OSPF and IS-IS) should be appropriate for
link state information dissemination. In order to make link state routing
scalable, an LSR's link state information should be a summary of its local
LSR-level resource information, hiding the detailed LSR-level resource
information.

The routing protocol may further decompose the link state information into
static information and dynamic information:
-- Static information does not change due to connection operations; examples
   are neighbor relationships, bundle link attributes, total bundle link
   bandwidth, etc.

-- Dynamic information changes due to connection operations; examples are
   bundle link bandwidth availability, bundle link bandwidth fragmentation,
   etc.

The design of the GMPLS control plane should consider the difference between
these two types of information. The tradeoff between routing scalability and
information availability is one of the key issues of GMPLS control plane
routing. The bottom line is that routing should provide enough link state
information that constraint-based path selection for intra-AS routing
becomes feasible.

With detailed link state information available, the path selection for
intra-domain routing is normally constraint-based explicit routing.

In inter-AS routing, there exist strong trust boundaries between carrier
networks, so policy-based EGPs should be appropriate for inter-AS routing.
An EGP distributes reachability information without internal network
information and maintains a clear separation between distinct AS networks.
Policy-based control gives operators configurable control of topology
dissemination.

The path selection in the inter-AS case is typically hop-by-hop, because
different ASes may have different policies and concerns: a provider may not
want to follow the path selection result of another provider. In the
inter-domain case, the hop granularity is the AS.


6.5 GMPLS signaling

Signaling is one of the key components of GMPLS control plane. The basic
functions include LSP creation, LSP deletion, LSP modification, LSP error
reporting, LSP error handling, as well as LSP restoration. In order to provide a
robust and efficient GMPLS control plane signaling protocol, the following are
some basic framework guidelines.

6.5.1 Basic Functions

The GMPLS control plane is assumed to support both IP networks and transport
networks. GMPLS should inherit all signaling functions of MPLS, such as LSP
creation, LSP deletion, LSP modification, signaling error handling, etc.
Beyond these, the GMPLS signaling protocol has additional functional
requirements. Because the transport network carries huge bandwidth and
supports many applications, a network failure, such as a fiber cut, will
affect many applications. Fast failure detection and fast LSP restoration
thus become basic requirements for transport networks that require support
for high reliability and availability for application connectivity. In
summary, GMPLS control plane signaling should support:
-- LSP creation
-- LSP deletion
-- LSP modification
-- LSP restoration
-- LSP exception handling

-- Creation Operation
   An LSP creation operation starts when the ingress GMPLS node receives an
   LSP request. This node usually performs some authorization processing,
   including admission control and resource verification. If the
   authorization is confirmed, the ingress GMPLS node should select a path
   for the LSP with the available information and start the creation
   operation. An LSP creation message then travels from the ingress node to
   the egress node along the selected path. In transport networks, an
   acknowledgement message from the egress node to the ingress node is
   required. For bi-directional LSPs, a three-way handshake should be
   provided in the signaling protocol to inform both sides of the
   establishment of the LSP.

   During the LSP creation operation, resources may either be allocated as
   the creation message travels forward, or allocated only after the entire
   path has been reserved. In a network where the allocation procedure may
   take some finite time, fast LSP creation may suggest that allocation be
   performed on the forward path. This has consequences if a downstream node
   denies the LSP request, causing the upstream nodes to de-allocate the
   resources.

-- Deletion Operation
   When an LSP is no longer needed, or an LSP has been given up by the
   network, the resources occupied by this LSP should be released. A
   deletion request message
   should be provided in the signaling protocol. The deletion operation may
   start from the ingress GMPLS node, the egress GMPLS node, or any middle GMPLS
   node of the LSP. The deletion signaling should be designed to handle all
   these cases.

   In transport networks, partially deleted LSPs are a serious problem.
   During LSP deletion, a failed GMPLS node along the LSP may cause
   incomplete LSP deletion. A deletion acknowledgement message is required
   by the initiating node to guarantee deletion completion. During the
   deletion operation, race conditions may occur between the deletion
   request message and the de-allocation of resources; because of this race
   condition, certain alarms may be raised at the downstream nodes. To
   support such an environment, a mechanism is needed to allow
   disabling/enabling of the alarms associated with the LSP prior to
   de-allocation of resources.

   Two alternatives may be envisioned to avoid this problem:
   -- The downstream nodes are informed through LSP frames, similar to the
      SONET APS signal.

   -- The downstream nodes are notified through part of the deletion
      message. After confirmation, they de-allocate the resource.

-- Modification operation
   An LSP may be required to modify its attributes, such as bandwidth. A basic
   requirement of LSP modification is so-called "make-before-break". The
   signaling protocol should be designed to support this functionality. A
   possible solution may involve resource sharing between the old LSP and the
   new LSP.
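The second deletion alternative above (notification via the deletion
message before de-allocation) can be sketched as a two-pass procedure. The
message names and node labels are illustrative only.

```python
def delete_lsp(nodes):
    # Pass 1: the deletion message disables alarms at every node along the
    # LSP, so the coming loss of signal raises no spurious alarms.
    events = [("disable-alarm", n) for n in nodes]
    # Pass 2: de-allocate resources, then acknowledge to the initiator so
    # it can confirm that no partially deleted LSP remains.
    events += [("de-allocate", n) for n in nodes]
    events.append(("ack-to-initiator", nodes[0]))
    return events

ev = delete_lsp(["A", "B", "C"])
assert ev[0] == ("disable-alarm", "A")
# Every de-allocation happens only after all alarms are disabled.
assert ev.index(("de-allocate", "A")) > ev.index(("disable-alarm", "C"))
```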

6.5.2 Support of Restoration

Rapid restoration from network failures is a crucial aspect of current and
future transport networks.  Rapid restoration is required by transport network
providers to support high reliability and availability for customer
connectivity. The choice of a restoration policy is a tradeoff between network
resource utilization (cost) and service interruption time. Clearly, minimized
service interruption time is desirable, but schemes achieving this usually do so
at the expense of network resource utilization, resulting in increased cost to
the provider. Different restoration schemes operate with different tradeoffs
between spare capacity requirements and service interruption time. In light of
these tradeoffs, transport providers are expected to support a range of
different service offerings, with a strong differentiating factor between these
service offerings being service interruption time in the event of network
failures. For example, a provider's highest offered service level would
generally ensure the most rapid recovery from network failures. However, such
schemes (e.g., 1+1, 1:1 protection) generally use a large amount of spare
restoration capacity, and are thus not cost effective for most customer
applications. Significant reductions in spare capacity can be achieved by
instead sharing this capacity across multiple independent failures.

GMPLS signaling should take restoration into account and support multiple
restoration schemes; different applications may have different requirements.
GMPLS restoration schemes should include at least link protection, dedicated
path protection, mesh shared path restoration, and dynamic rerouting
restoration.

6.5.3 Support of Exception handling

Different levels of exceptions may occur within the GMPLS network, impacting
both the data plane and the control plane. The following exceptions should be
considered in the GMPLS signaling design:
-- The ingress node, intermediate nodes, or the egress node may deny the LSP
   creation. If resources are allocated before LSP confirmation, the denial
   of creation should trigger the de-allocation of those resources.

-- A node detects the failure of an LSP. If restoration is provided, this
   exception should trigger the restoration process of this LSP.

-- An LSP restoration process fails, for reasons such as a GMPLS control
   plane failure along the restoration path or a lack of resources along the
   restoration path. This exception should immediately trigger the
   de-allocation of the resources partially allocated for the restoration
   LSP, so that those resources are available for other LSP creations and
   LSP restorations.

-- An LSP deletion process fails due to a control plane failure along the
   LSP path. This exception should be handled such that no partially deleted
   LSPs remain in the network.
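The first exception above (denial after forward allocation) implies a
rollback obligation on the upstream nodes, as the illustrative sketch below
shows. Node names and bandwidth units are invented for the example.

```python
def create_lsp(path, free_bw, demand):
    # Allocate hop by hop on the forward path; on a downstream denial,
    # de-allocate everything already taken upstream.
    allocated = []
    for node in path:
        if free_bw.get(node, 0) < demand:
            for n in allocated:          # rollback on denial
                free_bw[n] += demand
            return False
        free_bw[node] -= demand
        allocated.append(node)
    return True

bw = {"A": 10, "B": 10, "C": 3}
assert create_lsp(["A", "B", "C"], bw, demand=5) is False
assert bw == {"A": 10, "B": 10, "C": 3}   # fully rolled back
```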

Unexpected exceptions may also happen. GMPLS signaling should be prepared to
handle some other unexpected exceptions, such as GMPLS control plane
implementation errors, hardware errors, etc.

6.5.4 Signaling coordination

The above signaling messages do not exist independently; their functions and
operations should be well coordinated. Otherwise, one signaling message may
trigger misbehavior in other signaling operations.

GMPLS signaling should distinguish between service path and restoration path
establishment. MPLS signaling may establish two link/node disjoint LSPs at the
same time for each request. The data traffic can be either split into these two


LSPs, or forwarded onto one path with the other path used only when the
first path fails. This is feasible because the second LSP does not consume
any bandwidth if no data is forwarded on it. In transport networks, the
situation is different: in order to use bandwidth efficiently, the
restoration path may need to be established only after the service path
fails. The restoration path could be pre-planned or computed on the fly.
Another difference between service path establishment and restoration path
establishment is the time delay; specifically, restoration path
establishment has a more stringent time delay requirement than service path
establishment.

GMPLS should be able to support both unidirectional and bi-directional LSPs.
Traditional telecommunication circuits are typically bi-directional, while
current research shows that data traffic in the Internet is asymmetric, so
unidirectional LSPs are more reasonable for data traffic. Although services
mixing uni-directional and bi-directional LSPs are not well understood yet,
the GMPLS signaling protocol should be prepared to support both applications
in the same framework. This may create label contention between a
uni-directional LSP and a bi-directional LSP, as well as between two
bi-directional LSPs from opposite directions. A consistent solution should
be provided to solve the label contention problem.

The LSP deletion operation and the LSP fast restoration process should be
well coordinated. The GMPLS control plane is a distributed system, and one
node's operation should not be misinterpreted by other nodes. For example,
if a node in an all-optical network uses detection of loss of light to
trigger the fast restoration process, a deletion operation at the far-end
node may cause this node to trigger a spurious restoration attempt. The
GMPLS signaling protocol should be designed to avoid this situation.

A GMPLS control plane may fail. After recovery, the signaling protocol
should provide mechanisms to synchronize the LSP management information
bases (MIBs) and link resources with the node's neighbors, since LSPs may
have been deleted and link resources updated during the control plane
failure. Two possible solutions are:
-- Using neighbor query messages to synchronize its information base.
-- Using LSP refresh messages to synchronize its information base.
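The neighbor-query approach can be sketched as below; the data structures
are illustrative assumptions, not a defined MIB layout.

```python
def resync_lsp_mib(local_mib, neighbor_view):
    # After a control plane restart, adopt the neighbor's view of the
    # shared LSPs and report which entries disappeared during the outage.
    synced = dict(neighbor_view)
    removed = sorted(set(local_mib) - set(neighbor_view))
    return synced, removed

local = {"lsp-1": "up", "lsp-2": "up"}
neighbor = {"lsp-1": "up"}        # lsp-2 was deleted during the outage
mib, removed = resync_lsp_mib(local, neighbor)
assert removed == ["lsp-2"] and "lsp-2" not in mib
```

A refresh-based approach would instead rebuild the same state incrementally
from periodic LSP refresh messages.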


7. GMPLS Operation Models and Control Plane Integration

Section 6 describes the functional building blocks of a GMPLS control plane of a
single network. This section discusses network partitions, network layering and
issues associated with control plane interaction of multiple networks, control
plane integration, and control plane operation models.

Functional partitioning, network layering, and network partitioning look at
the problem from different angles and together provide an overall picture of
the GMPLS control plane. They are designed for different purposes and
complement each other. It should be noted that the computational model
(section 7.1) may or may not translate directly into a specific protocol
design and network partitioning (section 7.2). Protocols and network
partitions are usually designed from the engineering view, as they take into
account the decisions made to keep layers separate or to combine them into
single engineering components. Logical control entities in the computational
model may be collapsed or integrated into a single entity. However, the
computational model is a good tool to analyze the problem and provides
important inputs for protocol design and network partitioning.

7.1 LSP Hierarchy and the Network Layering

GMPLS [GMPLS-ARCH] defines five types of LSR interfaces: PSC interface, L2SC
interface, TDM interface, LSC interface and FSC interface.  Accordingly, we
could view the network as having different conceptual layers: PSC layer, L2SC
layer, TDM layer, LSC layer and FSC layer.  Within each of these layers multiple
sub-layers are possible.  For example, the TDM layer may contain SONET/SDH,
which would be broken down into the LOVC and HOVC layers. Depending upon the
particular scenario, some layers (and sub-layers) might not be present.  For
example, in a traditional IP over SONET scenario with no WDM functionality, the
LSC layer would not be present.

The layering shown in figure 1 is intended to depict the inter-layer relationships
in the data (transport) plane.  That is, the PSC layer can run over a TDM and
L2SC layer or it might also run directly over a LSC layer.  A PSC layer would
not run directly over a FSC layer but instead either a TDM or LSC layer would be
in between.  All layers eventually run over a FSC layer (here we are assuming an
optical fiber based network).  TDM may run directly over FSC but most likely
would run over an LSC layer.

The layering in figure 1 also depicts a client/server relationship among the
layers.  Here the upper layer plays the role of the client and the lower layer
plays the role of the server.  For example, the PSC acts as a client to the TDM
and LSC.  The LSC and TDM layers play the role of a server to the PSC layer.  As
well, the LSC and TDM layers play the role of a client to the FSC layer.


                        |         PSC          |
                           |               |
                   +-------+               |
                   |       V               |
                   |    +------+           |
                   |    | L2SC |--\        |
                   |    +------+   \       |
                   |        |       \      |
                   +--------+        \     |
                            V         \    |
                         +-------+    |    |
                         |  TDM  |    |    |
                         +-------+    |    |
                            |  |      V    V
                            |  |    +---------+
                            |  +--->|   LSC   |
                            |       +---------+
                            V           /
                         +--------+    /
                         |   FSC  |<--/

                Figure 1  Network Layering and Relationships

GMPLS does not place any restrictions on the mapping between control layers
and transport layers; e.g., a control plane instance may control several
transport layers. However, the network layering has a critical impact on the
control plane design. LSPs at different layers are triggered by different
events and may be created at different times because of several mismatches,
such as switching granularity and time-scale mismatches. For example, the
bandwidth granularities in different layers are different: an IP link may be
on the order of kbps, whereas the size of an LSC link may be in units of
2.5G, 10G, or higher. So it is impractical to set up an OC-48c pipe solely
because of a 56 kbps data LSP request.

These issues exist independently of network partitioning or control plane
operation models. For example, even when IP routers and OXCs are provisioned
in a single OSPF/ISIS area, the mismatches still exist. In order to achieve
control plane integration, we need a new functional module to serve as a
Bandwidth Broker between networks of different layers.

We can look at this Bandwidth Broker function from another perspective.
There are two sets of functions associated with setting up an LSP.

1. "How" to automatically create/delete/modify a LSP and thus change the LSR
   adjacency and client network topology?


The majority of current control plane work focuses on this area. The functional
components described in section 6 are for this "how" purpose.

2. "When" and "Where" to create/delete/modify a "what" LSP and thus change the
   client network's LSR adjacency and network topology?

This set of functions makes up the Bandwidth Broker. This module resides in
the client network and decides "when" and "where" to create/delete/modify
"what" circuits through the server network. It can also be viewed as an
automation of part of a broader Network Planning function for the client
layer. (Network Planning makes decisions on installing fiber and network
equipment for a network based on forecast traffic demands.)

For an IP network, the conventional IP layer control plane and Traffic
Engineering function assume that the IP network's physical topology is
"static". However, the GMPLS control plane enables an automatically switched
transport network. This new capability of the optical network changes the
previous assumption of the IP layer. The Bandwidth Broker in the IP layer
determines how to use this new capability of the transport network and to
request transport capacity where the IP traffic needs it.
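As a toy sketch of this "when", "where" and "what" role, the following Python
fragment watches client-layer link utilization and proposes server-layer
circuit changes (the thresholds, the link statistics and the one-circuit
action granularity are all invented for illustration, not taken from this
draft):

```python
from dataclasses import dataclass


@dataclass
class LinkStats:
    """Offered load on one client-layer (e.g. IP) adjacency."""
    src: str
    dst: str
    capacity_kbps: int
    offered_kbps: int


def broker_decisions(links, high=0.8, low=0.1):
    """Decide "when" (utilization crosses a threshold), "where" (the
    affected router pair) and "what" (add or remove one server-layer
    circuit).  The thresholds are arbitrary illustrations."""
    actions = []
    for link in links:
        util = link.offered_kbps / link.capacity_kbps
        if util > high:
            actions.append(("create", link.src, link.dst))
        elif util < low:
            actions.append(("delete", link.src, link.dst))
    return actions


stats = [LinkStats("A", "B", 2_488_320, 2_300_000),   # congested
         LinkStats("A", "C", 2_488_320, 100_000)]     # nearly idle
assert broker_decisions(stats) == [("create", "A", "B"),
                                   ("delete", "A", "C")]
```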

7.2 Network Partitioning and Control Plane Interfaces

For different network scenarios and partitions, GMPLS may need to provide a
different set of tools (a profile). The reason for studying network
partitioning is to understand what types of control plane interfaces need to
be supported and the tools and information flows of each interface.

Network partitioning refers to the administrative separation of a network
into separate sub-networks; it reflects the operator's implementation point
of view. The network partitioning concept is recursive: the limiting case of
partitioning a network (or sub-network) is the network element itself (and
thus the fabric that provides a flexible connection function). We view the
overall network as a collection of sub-networks, each of which is a
collection of same or different types of NEs and/or sub-networks.

In order to understand network partitioning, we first need to discuss the
control plane interfaces. Communication is required among all entities that
make up a network's control plane. This includes communication between
signaling functions (one application in which communication is carried over
an interface) as well as communication between routing functions, among
others. In the context of this framework document, control plane interfaces
represent logical relationships between control plane entities and are
defined by the information flow between these entities, as illustrated in the
figure below. Such a relationship allows distribution of these entities in
support of different equipment implementations and network architectures,
based on how the functions are partitioned, how networks are partitioned, and
what layer networks are considered.

                     ------                     ------
                    /      \                   /      \
                   /        \                 /        \
                  |  Control |               |  Control |
                  |  Plane   |   Interface   |  Plane   |
                  |  Entity  +---------------+  Entity  |
                  |          |               |          |
                   \        /                 \        /
                    \      /                   \      /
                     ------                     ------

                           Figure 2: Interface

The above information elements can be aggregated to form three types of
control plane interfaces, UNI, I-NNI and E-NNI, as defined by ITU
[G.807/Y.1301]:
-- operations between user and network provider domains; user-network
   interface, UNI
-- multi-vendor operation within a domain (administrative, vendor,
   technology, political, etc.); interior node-node interface, I-NNI
-- multi-domain operation for a single network provider, or multi-domain
   operation among different network providers; exterior node-node interface,
   E-NNI

For these interfaces, although similar types of information may flow over the
interface (e.g., connection service messages), the information content for
both routing and signaling may differ entirely.
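A minimal sketch of how an implementation might represent this per-interface
difference in information content (Python; the specific routing-information
policies below are assumptions for illustration, since the draft only states
that the content may differ):

```python
from enum import Enum


class Interface(Enum):
    UNI = "user-network interface"
    I_NNI = "interior node-node interface"
    E_NNI = "exterior node-node interface"


# Hypothetical policy of what routing information crosses each interface
# type; these choices are illustrative assumptions, not draft text.
ROUTING_INFO = {
    Interface.UNI:   "reachability only, no topology",
    Interface.I_NNI: "full topology within the sub-network",
    Interface.E_NNI: "summarized topology between sub-networks",
}

for itf in Interface:
    print(f"{itf.name:6} -> {ROUTING_INFO[itf]}")
```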

Operators may partition their networks according to administrative,
technological and/or geographical considerations. To understand the
principles of network partitioning, consider an example. In the case of a
single provider network, there are situations in which an E-NNI is preferable
to an I-NNI. Examples include cases where the provider network comprises
multiple partitions arising from:
-- The use of different types of sub-networks (technology based), e.g. IP,
   ATM, SONET, OTN, etc.
-- The use of different vendors' sub-networks
-- Interconnection of sub-networks administered by different business units
-- Control plane scalability considerations and the intention to reduce the
   volume of topology information exchanged

As such, the type of interface between two applications is chosen as a result
of the type of information flow that is desired and the capabilities the LSRs
support, not necessarily the administrative boundaries. In general, the
choice of network partitioning follows from the choice of interface type,
which in turn follows from the type of information flow between control plane
entities. The GMPLS control plane should accommodate different network
partitions. For each interface, GMPLS should provide a profile (a set of
tools) to meet the specific requirements of that interface.

From the network partitioning point of view, the control plane could operate
in different ways:
(1) Different transport layers could be partitioned into a single
    sub-network. There could be one unified view for the entire span of the
    GMPLS layers (PSC, TDM, LSC, FSC).

    This implies that the GMPLS functional components would need to
    understand all layers (e.g., extending the routing protocol to have
    knowledge of all layers). In essence, there would be one type of
    functional component that works for all layers.

    This variation may cause scalability problems. For example, a core
    circuit switch typically has very limited packet processing power
    compared to a core IP router. If it is partitioned into the same AS as
    core IP routers, the millions of IP routes that the circuit switch does
    not care about can easily overload it.

(2) Different transport layers could be partitioned into different
    sub-networks using a client/server model to request resources between the
    layers. They may not know each other's detailed topology; however,
    networks at different layers are aware of each other through their
    control planes. Topology dissemination across the sub-network boundary is
    controlled by policy-based routing to guarantee that the right LSR gets
    the right information, and only the right information.

(3) The network could also be partitioned somewhere between (1) and (2).
    It might make sense in some cases to partition LSRs of specific layers or
    sub-layers into a single sub-network. For example, the cell and packet
    layers within the PSC layer might be unified, or it might make sense to
    unify the TDM and LSC layers in some implementations.
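The policy-controlled topology dissemination described above can be sketched
as a simple advertisement filter at the sub-network boundary (Python; the
route tagging scheme and the policy predicate are hypothetical, not defined
by GMPLS):

```python
def filter_advertisements(routes, boundary_policy):
    """Pass only the advertisements that the per-boundary policy allows,
    so each LSR gets the right information and only the right
    information.  The route format below is a hypothetical
    (identifier, layer) tagging."""
    return [route for route in routes if boundary_policy(route)]


# Advertisements tagged with the layer they describe.
routes = [("10.0.0.0/8", "PSC"),
          ("lambda-17", "LSC"),
          ("10.1.0.0/16", "PSC")]


# A core circuit switch in an LSC sub-network should not be flooded
# with the PSC layer's IP routes.
def lsc_policy(route):
    return route[1] == "LSC"


assert filter_advertisements(routes, lsc_policy) == [("lambda-17", "LSC")]
```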

7.3 Control Plane Operation Models

GMPLS should allow for a wide variety of control plane operation models and
architectures that meet different network operation scenarios. In sections 6,
7.1, and 7.2, we studied the GMPLS control plane from different perspectives.
Now we discuss different operation models of the GMPLS control plane.

From the functional partitioning point of view, two types of control plane
can be identified:

(1) A fully-automated control plane, which has all the functional components
    implemented in the GMPLS control plane; and

(2) A semi-automated control plane, in which the management plane implements
    some of the control plane functions.

From the network layering point of view, we can identify two control plane
integration models:

(1) Fully-integrated control plane.
    In this model, the Bandwidth Broker mentioned in section 7.1 is
    integrated into the client LSR to handle the interaction between the
    network layers. Thus, from the operator's point of view, he/she only
    needs to focus on the service level functions and is not aware of the
    existence of multiple layers.

(2) Semi-integrated control plane.
    In this model, each computational layer is partitioned into independent
    networks and has one independent control plane entity. Management systems
    and operator interventions are needed to coordinate the various control
    plane entities.

Details of these operation models will be elaborated further in later versions.

8. Security Considerations

Security is critical where different business domains interact. When service
invocations happen between business domains, encryption and authentication
mechanisms are required at the service interfaces. Within a single business
domain, the need for strict security mechanisms requires further study. In
any case, the control plane network should be isolated from user traffic so
that sensitive information is not leaked. Many details need further study.

9. Acknowledgment

The authors would like to thank Patrice Lamy, Harvey Epstein, Wesam Alanqar,
Tammy Ferris, Mark Jones, Maarten Vissers and Dimitri Papadimitriou for their
helpful suggestions and comments on this work.

10. Authors' Addresses

Yangguang Xu 21-2A41,                     Eve Varma 3C 509,
1600 Osgood Street                        101 Crawfords Corner Rd
Lucent Technologies, Inc.                 Lucent Technologies, Inc.
North Andover, MA 01845                   Holmdel, NJ 07733-3030
Email: xuyg@lucent.com                    Email: evarma@lucent.com

Rudy Hoebeke                              Hirokazu Ishimatsu
Alcatel                                   Japan Telecom Co. Ltd.
Francis Wellesplein 1, 2018 Antwerp,      2-9-1 Hatchobori, Chuo-ku,
Belgium                                   Tokyo, 104-0032 Japan
Email: rudy.hoebeke@alcatel.be            E-mail: hirokazu@japan-telecom.co.jp

Osama S. Aboul-Magd                       Michael Mayer
Nortel Networks P.O. Box 3511,            Nortel Networks P.O. Box 402
Station "C" Ottawa,                       Ogdensburg, NY 13669
Ontario, Canada K1Y - 4H7                 Email: mgm@nortelnetworks.com
Email: osama@nortelnetworks.com

Daniel O. Awduche                         Yong Xue
Movaz                                     Global Network Architecture
Email: awduche@movaz.net                  UUNET/WorldCom Ashburn, Virginia
                                          Email: yxue@uu.net

Curtis Brownmiller                        John Eaves
WorldCom                                  TyCom
Richardson, TX 75082                      1C-240, 250 Industrial Way West
Email: curtis.brownmiller@wcom.com        Eatontown, NJ 07724
                                          Email: jeaves@tycomltd.com

Guangzhi Li                               Lily Cheng
AT&T                                      Lucent
Email: gli@research.att.com               Email: lilycheng@lucent.com

Ananth Nagarajan                          Lynn Neir
Sprint                                    Sprint
9300 Metcalf Ave                          9300 Metcalf Ave
Overland Park Ks 66212, USA.              Overland Park Ks 66212, USA.
ananth.nagarajan@mail.sprint.com          lynn.neir@mail.sprint.com

Jennifer Yates,                           Monica Lazer
AT&T Labs                                 AT&T
180 PARK AVE, P.O. BOX 971                900 ROUTE 202/206N PO BX 752
FLORHAM PARK, NJ  07932-0000              BEDMINSTER, NJ  07921-0000
jyates@research.att.com                   mlazer@att.com

Zhi-wei Lin
101 Crawfords Corner Rd
Holmdel, NJ  07733-3030

11. References

   [G.807/Y.1301]  ITU-T Recommendation G.807/Y.1301, "Requirements for
                   automatic switched transport networks (ASTN)".

