TEAS Working Group                                             Y. Zhuang
Internet-Draft                                                     Q. Wu
Intended status: Informational                                   H. Chen
Expires: August 24, 2018                                          Huawei
                                                               A. Farrel
                                                        Juniper Networks
                                                       February 20, 2018

                Framework for Scheduled Use of Resources
                draft-ietf-teas-scheduled-resources-06
Abstract

Time-scheduled reservation of traffic engineering (TE) resources can be used to provide resource booking for TE Label Switched Paths so as to better guarantee services for customers and to improve the efficiency of network resource usage at any moment in time, including future planned network usage.  This document provides a framework that describes and discusses the architecture for supporting scheduled reservation of TE resources.  This document does not describe specific protocols or protocol extensions needed to realize this service.
Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on August 24, 2018.
Copyright Notice

Copyright (c) 2018 IETF Trust and the persons identified as the document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Table of Contents

1.  Introduction
2.  Problem Statement
    2.1.  Provisioning TE-LSPs and TE Resources
    2.2.  Selecting the Path of an LSP
    2.3.  Planning Future LSPs
    2.4.  Looking at Future Demands on TE Resources
    2.5.  Requisite State Information
3.  Architectural Concepts
    3.1.  Where is Scheduling State Held?
    3.2.  What State is Held?
    3.3.  Enforcement of Operator Policy
4.  Architecture Overview
    4.1.  Service Request
          4.1.1.  Reoptimization After TED Updates
    4.2.  Initialization and Recovery
    4.3.  Synchronization Between PCEs
5.  Multi-Domain Considerations
6.  Security Considerations
7.  IANA Considerations
8.  Acknowledgements
9.  Contributors
10. Informative References
Authors' Addresses
1.  Introduction

Traffic Engineering Label Switched Paths (TE-LSPs) are connection-oriented tunnels in packet and non-packet networks [RFC3209], [RFC3945].  TE-LSPs may reserve network resources for use by the traffic they carry, thus providing some guarantees of service delivery and allowing a network operator to plan the use of the resources across the whole network.
In some technologies (such as wavelength switched optical networks) the resource is synonymous with the label that is switched on the path of the LSP, so that it is not possible to establish an LSP that can carry traffic without assigning a physical resource to the LSP.  In other technologies (such as packet switched networks) the resources assigned to an LSP are a measure of the capacity of a link that is dedicated for use by the traffic on the LSP.
In all cases, network planning consists of selecting paths for LSPs through the network so that there will be no contention for resources.  LSP establishment is the act of setting up an LSP and reserving resources within the network.  Network optimization or re-optimization is the process of re-positioning LSPs in the network to make the unreserved network resources more useful for potential future LSPs while ensuring that the established LSPs continue to fulfill their objectives.
It is often the case that it is known that an LSP will be needed at some specific time in the future.  While a path for that LSP could be computed using knowledge of the currently established LSPs and the currently available resources, this does not give any degree of certainty that the necessary resources will be available when it is time to set up the new LSP.  Yet setting up the LSP ahead of the time when it is needed (which would guarantee the availability of the resources) is wasteful since the network resources could be used for some other purpose in the meantime.
Similarly, it may be known that an LSP will no longer be needed after some future time and that it will be torn down, releasing the network resources that were assigned to it.  This information can be helpful in planning how a future LSP is placed in the network.
Time-Scheduled (TS) reservation of TE resources can be used to provide resource booking for TE-LSPs so as to better guarantee services for customers and to improve the efficiency of network resource usage into the future.  This document provides a framework that describes the problem and discusses the architecture for the scheduled reservation of TE resources.  This document does not describe specific protocols or protocol extensions needed to realize this service.
2.  Problem Statement

2.1.  Provisioning TE-LSPs and TE Resources
TE-LSPs in existing networks are provisioned using a variety of techniques.  They may be set up using RSVP-TE as a signaling protocol [RFC3209] [RFC3473].  Alternatively, they could be established by direct control of network elements such as in the Software Defined Networking (SDN) paradigm.  They could also be provisioned using the PCE Communication Protocol (PCEP) [RFC5440] as a control protocol to communicate with the network elements.
TE resources are reserved at the point of use.  That is, the resources (wavelengths, timeslots, bandwidth, etc.) are reserved for use on a specific link and are tracked by the Label Switching Routers (LSRs) at the end points of the link.  Those LSRs learn which resources to reserve during the LSP setup process.
The use of TE resources can be varied by changing the parameters of the LSP that uses them, and the resources can be released by tearing down the LSP.
Resources that have been reserved in the network for use by one LSP may be pre-empted for use by another LSP.  If RSVP-TE signaling is in use, a holding priority and a pre-emption priority are used to determine which LSPs may pre-empt the resources in use for which other LSPs.  If direct (central) control is in use, the controller is able to make pre-emption decisions.  In either case, operator policy forms a key part of pre-emption since there is a trade-off between disrupting existing LSPs and enabling new LSPs.
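The priority comparison described above can be sketched in a few lines.  The following is an illustrative sketch only: the function name and structure are assumptions for this document, not drawn from any protocol specification, though the 0..7 priority range and the comparison rule follow the RSVP-TE model.

```python
# Illustrative sketch of the pre-emption rule described above: a new
# LSP may pre-empt resources held by an established LSP only if the
# new LSP's setup (pre-emption) priority is numerically lower (i.e.,
# more important) than the established LSP's holding priority.
# RSVP-TE priorities range from 0 (most important) to 7.

def may_preempt(new_setup_priority: int, existing_holding_priority: int) -> bool:
    if not (0 <= new_setup_priority <= 7 and 0 <= existing_holding_priority <= 7):
        raise ValueError("RSVP-TE priorities are in the range 0..7")
    return new_setup_priority < existing_holding_priority
```

Note that even when this test passes, the sketch above is only the mechanical rule; as the text says, whether pre-emption is actually performed remains subject to operator policy.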
2.2.  Selecting the Path of an LSP

Although TE-LSPs can determine their paths hop-by-hop using the shortest path toward the destination to route the signaling protocol messages [RFC3209], in practice this option is not applied because it does not look far enough ahead into the network to verify that the desired resources are available.  Instead, the full length of the path of an LSP is usually computed ahead of time, either by the head-end LSR of a signaled LSP or by Path Computation Element (PCE) functionality in a dedicated server or built into network management software [RFC4655].
Such full-path computation is applied in order that an end-to-end view of the available resources in the network can be used to determine the best likelihood of establishing a viable LSP that meets the service requirements.  Even in this situation, however, it is possible that two LSPs being set up at the same time will compete for scarce network resources, meaning that one or both of them will fail to be established.  This situation is avoided by using a centralized PCE that is aware of the LSP setup requests that are in progress.
Path selection may make allowance for pre-emption as described in
Section 2.1. That is, when selecting a path, the decision may be
made to choose a path that will result in the pre-emption of an
existing LSP. The trade-off between selecting a less optimal path,
failing to select any path at all, and pre-empting an existing LSP
must be subject to operator policy.
Path computation is subject to "objective functions" that define what
criteria are to be met when the LSP is placed [RFC4655]. These can
be criteria that apply to the LSP itself (such as shortest path to
destination) or to the network state after the LSP is set up (such as
maximized residual link bandwidth). The objective functions may be
requested by the application requesting the LSP, and may be filtered
and enhanced by the computation engine according to operator policy.
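As a minimal sketch of an objective function in action, a computation engine might select, from a set of candidate paths, the one that maximizes the residual bandwidth left on the most loaded link after the LSP is placed.  All names and data structures below are illustrative assumptions, not defined by this framework.

```python
# Objective function sketch: maximize the minimum residual link
# bandwidth after placing an LSP with the given bandwidth demand.
# `capacity` and `reserved` map link identifiers to bandwidth values;
# a path is a list of link identifiers.

def min_residual(path, demand, capacity, reserved):
    """Smallest residual bandwidth over the path's links if the demand is placed."""
    return min(capacity[link] - reserved[link] - demand for link in path)

def place_lsp(candidate_paths, demand, capacity, reserved):
    """Choose the feasible candidate path that maximizes min_residual."""
    feasible = [p for p in candidate_paths
                if min_residual(p, demand, capacity, reserved) >= 0]
    if not feasible:
        return None  # no candidate path can carry the demand
    return max(feasible, key=lambda p: min_residual(p, demand, capacity, reserved))
```

A real computation engine would combine several such criteria and, as the text notes, filter them according to operator policy.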
2.3.  Planning Future LSPs

LSPs may be established "on demand" when the requester determines that a new LSP is needed.  In this case, the path of the LSP is computed as described in Section 2.2.

However, in many situations, the requester knows in advance that an LSP will be needed at a particular time in the future.  For example, the requester may be aware of a large traffic flow that will start at a well-known time, perhaps for a database synchronization or for the
skipping to change at page 5, line 41
approach has a number of drawbacks because it is not possible to determine in advance whether it will be possible to deliver the LSP since the resources it needs might be used by other LSPs in the network.  Thus, at the time the requester asks for the future LSP, the NMS can only make a best-effort guarantee that the LSP will be set up at the desired time.
A better solution, therefore, is for the requests for future LSPs to be serviced at once.  The paths of the LSPs can be computed ahead of time and converted into reservations of network resources during specific windows in the future.  That is, while the path of the LSP is computed and the network resources are reserved, the LSP is not established in the network until the time for which it is scheduled.

Some aspects of this need to be subject to operator policy, such as the amount of capacity made available for scheduling future reservations, and how scheduled resources are treated during rapid changes in traffic demand or a complex (multiple nodes/links) failure event, so as to protect against network destabilization.  Operator policy is discussed further in Section 3.3.
2.4.  Looking at Future Demands on TE Resources

While path computation as described in Section 2.2 takes account of the currently available network resources, and can act to place LSPs in the network so that there is the best possibility of future LSPs being accommodated, it cannot handle all eventualities.  It is simple to construct scenarios where LSPs that are placed one at a time lead to future LSPs being blocked, but where foreknowledge of all of the LSPs would have made it possible for them all to be set up.
If, therefore, we were able to know in advance what LSPs were going to be requested, we could plan for them and ensure resources were available.  Furthermore, such an approach enables a commitment to be made to a service user that an LSP will be set up and available at a specific time.

A reservation service can be achieved by tracking the current use of network resources and also having a future view of the resource usage.  We call this Time-Scheduled TE (TS-TE) resource reservation.
2.5.  Requisite State Information

In order to achieve TS-TE resource reservation, the use of resources on the path needs to be scheduled.  Scheduling state is used to indicate when resources are reserved and when they are available for use.

A simple information model for one piece of scheduling state is as follows:
   {
     link id;
     resource id or reserved capacity;
     reservation start time;
     reservation end time
   }
The resource that is scheduled could be link capacity, physical resources on a link, buffers on an interface, etc., and could include advanced considerations such as CPU utilization and the availability of memory at nodes within the network.  The resource-related information might also include the maximal unreserved bandwidth of the link over a time interval.  That is, the intention is to book (reserve) a percentage of the residual (unreserved) bandwidth of the link.  This could be used, for example, to reserve bandwidth for a particular class of traffic (such as IP) that doesn't have a provisioned LSP.

For any one resource there could be multiple pieces of scheduling state, and for any one link, the timing windows might overlap.
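The information model above might be realized as sketched below.  This is an illustrative sketch only, with assumed field names and an assumed flat-list representation, showing how multiple, possibly overlapping, windows on the same link are summed to give the total booking at a given instant.

```python
from dataclasses import dataclass

@dataclass
class SchedulingState:
    """One piece of scheduling state, mirroring the information model above."""
    link_id: str
    reserved_capacity: float   # could instead be a resource id
    start: int                 # reservation start time (e.g., seconds)
    end: int                   # reservation end time

def capacity_reserved(state: list, link_id: str, at_time: int) -> float:
    """Total capacity booked on a link at one instant.
    Windows for the same link may overlap, so contributions are summed."""
    return sum(s.reserved_capacity for s in state
               if s.link_id == link_id and s.start <= at_time < s.end)
```

A deployed system would index this state per link rather than scanning a flat list, but the semantics are the same.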
There are multiple ways to realize this information model and different ways to store the data.  The resource state could be expressed as a start time and an end time as shown above, or could be expressed as a start time and a duration.  Multiple reservation periods, possibly of different lengths, may need to be recorded for each resource.  Furthermore, the current state of network reservation could be kept separate from the scheduled usage, or everything could be merged into a single TS database.
An application may make a reservation request for immediate resource usage, or to book resources for future use so as to maximize the chance of services being delivered and to avoid contention for resources in the future.  A single reservation request may book resources for multiple periods and might request a reservation that repeats on a regular cycle.
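A request that repeats on a regular cycle might be expanded into individual reservation windows as sketched here.  The parameter names and the start-plus-duration representation are assumptions for illustration.

```python
# Hypothetical expansion of a single repeating reservation request
# into individual (start, end) windows.  The request is expressed as
# a first start time, a duration, a repetition period, and a count.

def expand_request(start: int, duration: int, period: int, repetitions: int):
    """Return the (start, end) window for each repetition of the request."""
    return [(start + i * period, start + i * period + duration)
            for i in range(repetitions)]
```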
A computation engine (that is, a PCE) may use the scheduling state information to help optimize the use of resources into the future and reduce contention or blocking when the resources are actually needed.
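One such use of the scheduling state is checking whether a link can accommodate an additional booking throughout a future time window.  The sketch below is illustrative only, with reservations assumed to be (start, end, capacity) tuples; it exploits the fact that the total booked capacity only changes at reservation boundaries.

```python
# Illustrative feasibility check for a future booking on one link.
# `reservations` is a list of (start, end, capacity) tuples already
# scheduled on the link.

def window_is_free(reservations, link_capacity: float,
                   demand: float, start: int, end: int) -> bool:
    """True if `demand` fits on the link at every instant in [start, end)."""
    # Booked capacity is piecewise constant, changing only at reservation
    # boundaries, so checking the window start plus every boundary that
    # falls inside the window is sufficient.
    instants = {start} | {t for (s, e, _) in reservations
                          for t in (s, e) if start <= t < end}
    for t in instants:
        booked = sum(c for (s, e, c) in reservations if s <= t < e)
        if booked + demand > link_capacity:
            return False
    return True
```

A path computation for a scheduled LSP would apply this kind of test to every link of a candidate path over the requested window.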
Note that it is also necessary to store the information about future LSPs as distinct from the specific resource scheduling.  This information is held to allow the LSPs to be instantiated when they are due, using the paths/resources that have been computed for them, but also to provide correlation with the TS-TE resource reservations so that it is clear why resources were reserved, allowing pre-emption and handling the release of reserved resources in the event of cancellation of future LSPs.  See Section 3.2 for further discussion of the distinction between scheduled resource state and scheduled LSP state.
Network performance factors (such as maximum link utilization and the residual capacity of the network) need to be taken into account when supporting scheduled reservations, and are subject to operator policy.
3.  Architectural Concepts

This section examines several important architectural concepts to understand the design decisions reached in this document to achieve TS-TE in a scalable and robust manner.
3.1.  Where is Scheduling State Held?

The scheduling state information described in Section 2.5 has to be held somewhere.  There are two places where this makes sense:

o  In the network nodes where the resources exist;

o  In a central scheduling controller where decisions about resource allocation are made.
The first of these makes policing of resource allocation easier.  It means that many points in the network can request immediate or scheduled LSPs with the associated resource reservation, and that all such requests can be correlated at the point where the resources are allocated.  However, this approach has some scaling and technical problems:
o  The most obvious issue is that each network node must retain the full time-based state for all of its resources.  In a busy network with a high arrival rate of new LSPs and a low hold time for each LSP, this could be a lot of state.  Network nodes are normally implemented with minimal spare memory.
o  In order that path computation can be performed, the computing entity normally known as a Path Computation Element (PCE) [RFC4655] needs access to a database of available links and nodes in the network, and of the TE properties of the links.  This database is known as the Traffic Engineering Database (TED) and is usually populated from information advertised in the IGP by each of the network nodes or exported using BGP-LS [RFC7752].  To be able to compute a path for a future LSP, the PCE needs to populate
skipping to change at page 8, line 42 skipping to change at page 10, line 5
challenge may be mitigated by the centralized server being challenge may be mitigated by the centralized server being
dedicated hardware, but there remains the problem of collecting dedicated hardware, but there remains the problem of collecting
the information from the network in a timely way when there is the information from the network in a timely way when there is
potentially a very large amount of information to be collected and potentially a very large amount of information to be collected and
when the rate of change of that information is high. This latter when the rate of change of that information is high. This latter
challenge is only solved if the central server has full control of challenge is only solved if the central server has full control of
the booking of resources and the establishment of new LSPs so that the booking of resources and the establishment of new LSPs so that
the information from the network only serves to confirm what the the information from the network only serves to confirm what the
central server expected. central server expected.
Thus, considering these tradeoffs, the architectural conclusion is
that scheduling state should be held centrally at the point of use
and not in the network devices.
3.2. What State is Held?

As already described, the PCE needs access to an enhanced, time-based
TED. It stores the traffic engineering (TE) information such as
bandwidth for every link for a series of time intervals. There are a
few ways to store the TE information in the TED. For example,
suppose that the amount of the unreserved bandwidth at a priority
level for a link is Bj in a time interval from time Tj to Tk (k =
j+1), where j = 0, 1, 2, ....
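As a purely illustrative sketch (this framework does not specify any data structure or encoding), the per-link interval list above can be maintained as follows; the `LinkTimeline` class, its method names, and the units are all assumptions made for this example:

```python
import bisect

# Illustrative sketch only: a time-based TED entry holding the unreserved
# bandwidth Bj for each interval [Tj, Tj+1) on one link.  The class and
# method names are hypothetical, not taken from any specification.
class LinkTimeline:
    def __init__(self, capacity):
        self.points = [0]            # interval start times T0, T1, ...
        self.bandwidth = [capacity]  # Bj, unreserved in [Tj, Tj+1)

    def _split(self, t):
        # Make t an interval boundary by splitting the interval containing it.
        i = bisect.bisect_right(self.points, t) - 1
        if self.points[i] != t:
            self.points.insert(i + 1, t)
            self.bandwidth.insert(i + 1, self.bandwidth[i])

    def reserve(self, start, end, bw):
        # Book bw in every interval overlapping [start, end); refuse the
        # booking if any of those intervals lacks unreserved bandwidth.
        self._split(start)
        self._split(end)
        idx = [i for i, t in enumerate(self.points) if start <= t < end]
        if any(self.bandwidth[i] < bw for i in idx):
            return False
        for i in idx:
            self.bandwidth[i] -= bw
        return True

    def unreserved(self, t):
        # Unreserved bandwidth at time t (the Bj for the interval holding t).
        return self.bandwidth[bisect.bisect_right(self.points, t) - 1]
```

A scheduled TED would hold one such timeline per link (and per priority level); a path computation for a future LSP then checks the unreserved bandwidth across the whole requested time interval.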
skipping to change at page 11, line 20

the constraints in the time interval, and the resources such as
bandwidth reserved for the LSP in the time interval. See also
Section 2.3.

It is an implementation choice how the TED and LSP-DB are stored both
for dynamic use and for recovery after failure or restart, but it may
be noted that all of the information in the scheduled TED can be
recovered from the active network state and from the scheduled LSP-
DB.
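That recovery property can be sketched as follows; the dictionary shapes and field names here are invented for illustration and are not mandated by this document:

```python
def rebuild_scheduled_ted(link_capacities, scheduled_lsp_db):
    # Replay every booking recorded in the scheduled LSP-DB against link
    # capacities learned from the active network state, reconstructing the
    # scheduled TED after a failure or restart.  (Hypothetical structures.)
    ted = {link: [] for link in link_capacities}
    for lsp in scheduled_lsp_db:
        for link in lsp["path"]:
            ted[link].append((lsp["start"], lsp["end"], lsp["bandwidth"]))
    return ted

def unreserved_at(ted, link_capacities, link, t):
    # Unreserved bandwidth on a link at time t: capacity minus every
    # recovered booking whose interval covers t.
    booked = sum(bw for s, e, bw in ted[link] if s <= t < e)
    return link_capacities[link] - booked
```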
3.3. Enforcement of Operator Policy
Computation requests for LSPs are serviced according to operator
policy. For example, a PCE may refuse a computation request because
the application making the request does not have sufficient
permissions, or because servicing the request might take specific
resource usage over a given threshold.
Furthermore, the pre-emption and holding priorities of any particular
computation request may be subject to the operator's policies. The
request could be rejected if it does not conform to the operator's
policies, or (possibly more likely) the priorities could be set/
overwritten according to the operator's policies.
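A minimal sketch of such enforcement, assuming invented request and policy fields (nothing here is prescribed by this framework):

```python
def apply_operator_policy(request, policy):
    # Sketch of operator-policy enforcement at the PCE.  A request may be
    # rejected for insufficient permissions or excessive resource usage,
    # or accepted with its priorities set/overwritten per policy.
    if request["application"] not in policy["authorized_applications"]:
        return None  # requester lacks sufficient permissions
    if request["bandwidth"] > policy["bandwidth_threshold"]:
        return None  # would take resource usage over a given threshold
    adjusted = dict(request)
    # Possibly more likely than rejection: overwrite the priorities.
    adjusted["setup_priority"] = policy["forced_setup_priority"]
    adjusted["holding_priority"] = policy["forced_holding_priority"]
    return adjusted
```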
Additionally, the objective functions (OFs) of a computation request
(such as maximizing residual bandwidth) are also subject to operator
policies. It is highly likely that the choice of OFs is not
available to an application and is selected by the PCE or management
system subject to operator policies and knowledge of the application.
None of these statements is new to scheduled resources. They apply
to stateless, stateful, passive, and active PCEs, and they continue
to apply to scheduling of resources.
An operator may choose to configure special behavior for a PCE that
handles resource scheduling. For example, an operator might want
only a certain percentage of any resource to be bookable. And an
operator might want the pre-emption of booked resources to be an
inverse function of how far in the future the resources are needed
for the first time.
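Such behaviors could be configured, for example, as below; both the percentage and the lead-time mapping are invented illustrations, not recommendations:

```python
def bookable_bandwidth(link_capacity, bookable_fraction=0.8):
    # Only a configured percentage of any resource may be booked ahead;
    # the remainder stays available for immediate, unscheduled LSPs.
    return link_capacity * bookable_fraction

def scheduled_priority(lead_time_hours, num_levels=8):
    # Example inverse mapping: the further in the future the resources are
    # first needed, the numerically higher (less important) the priority,
    # making distant bookings easier to pre-empt.  One level per day of
    # lead time, clamped to the available priority levels.
    return min(int(lead_time_hours // 24), num_levels - 1)
```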
It is a general assumption about the architecture described in
Section 4 that a PCE is under the operational control of the operator
that owns the resources that the PCE manipulates. Thus the operator
may configure any amount of (potentially complex) policy at the PCE.
This configuration would also include policy points surrounding re-
optimization of existing and planned LSPs in the event of changes in
the current and future (planned) resource availability.
The granularity of the timing window offered to an application will
depend on an operator's policy as well as the implementation in the
PCE, and goes to define the operator's service offerings. Different
granularities and different lengths of pre-booking may be offered to
different applications.
4. Architecture Overview

The architectural considerations and conclusions described in the
previous section lead to the architecture described in this section
and illustrated in Figure 2. The interfaces and interactions shown
on the figure and labeled (a) through (f) are described in
Section 4.1.

 -------------------
| Service Requester |
skipping to change at page 13, line 42

4.1. Service Request

As shown in Figure 2, some component in the network requests a
service. This may be an application, an NMS, an LSR, or any
component that qualifies as a Path Computation Client (PCC). We show
this on the figure as the "Service Requester" and it sends a request
to the PCE for an LSP to be set up at some time (either now or in the
future). The request, indicated on Figure 2 by the arrow (a),
includes all of the parameters of the LSP that the requester wishes
to supply such as priority, bandwidth, start time, and end time.
Note that the requester in this case may be the LSR shown in the
figure or may be a distinct system.
The PCE enters the LSP request in its LSP-DB (b), and uses
information from its TED (c) to compute a path that satisfies the
constraints (such as bandwidth) for the LSP in the time interval from
the start time to the end time. It updates the future resource
availability in the TED so that further path computations can take
account of the scheduled resource usage. It stores the path for the
LSP into the LSP-DB (b).
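The sequence (a)-(c) can be sketched as below; the booking model (a per-link list of `(start, end, bandwidth)` tuples) and the trivial single-link "path computation" are invented stand-ins for a real constraint-based computation:

```python
def fits(bookings, capacity, start, end, bw):
    # Conservative check: subtract every existing booking that overlaps
    # the requested [start, end) window from the link capacity.
    booked = sum(b for s, e, b in bookings if s < end and e > start)
    return capacity - booked >= bw

def handle_request(request, lsp_db, ted, capacities):
    lsp_db.append(request)              # (b) enter the request in the LSP-DB
    path = None
    # (c) placeholder path computation: any one link that fits; a real
    # PCE would run constraint-based computation over the whole topology.
    for link, bookings in ted.items():
        if fits(bookings, capacities[link],
                request["start"], request["end"], request["bandwidth"]):
            path = [link]
            break
    if path is None:
        lsp_db.remove(request)
        return None
    for link in path:                   # update future availability in the TED
        ted[link].append((request["start"], request["end"], request["bandwidth"]))
    request["path"] = path              # (b) store the path in the LSP-DB
    return path
```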
When it is time (i.e., at the start time) for the LSP to be set up,

skipping to change at page 14, line 33

Requester has cancelled the request, or because the LSP's scheduled
lifetime has expired) the PCE can remove it. If the LSP is currently
active, the PCE instructs the head-end LSR to tear it down (d), and
the network resource usage will be updated by the IGP and advertised
back to the PCE through the IGP or BGP-LS (e). Once the LSP is no
longer active, the PCE can remove it from the LSP-DB (b).
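Assuming a per-link list of `(start, end, bandwidth)` bookings (an invented representation), removal of an LSP whose scheduled lifetime has expired, or whose request was cancelled, might look like:

```python
def remove_lsp(lsp, lsp_db, ted, active, teardown):
    # Sketch of LSP removal: if the LSP is currently active, instruct the
    # head-end LSR to tear it down (d); then release its booking from each
    # link of the path and drop the entry from the LSP-DB.  'teardown' is
    # a hypothetical stand-in for the PCEP exchange with the head end.
    if active:
        teardown(lsp)
    for link in lsp["path"]:
        ted[link].remove((lsp["start"], lsp["end"], lsp["bandwidth"]))
    lsp_db.remove(lsp)
```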
4.1.1. Reoptimization After TED Updates

When the TED is updated as indicated in Section 4.1, the PCE may
perform reoptimization of the LSPs for which it has computed paths
depending on operator policy so as to minimize network perturbations.
These LSPs may be already provisioned, in which case the PCE issues
PCEP Update request messages for the LSPs that should be adjusted.
Additionally, the LSPs being reoptimized may be scheduled LSPs that
have not yet been provisioned, in which case reoptimization involves
updating the store of scheduled LSPs and resources.
In all cases, the purpose of reoptimization is to take account of the
resource usage and availability in the network and to compute paths
for the current and future LSPs that best satisfy the objectives of
those LSPs while keeping the network as clear as possible to support
further LSPs. Since reoptimization may perturb established LSPs, and
since frequent changes impact the stability of the network, the
extent and impact of any reoptimization are subject to operator
oversight and policy.
Additionally, the status of the reserved resources (alarms) can
enhance the computation and planning for future LSPs, and may
influence repair and reoptimization. Control of recalculations based
on failures and notifications to the operator is also subject to
policy.
See Section 3.3 for further discussion of operator policy.
4.2. Initialization and Recovery

When a PCE in the architecture shown in Figure 2 is initialized, it
must learn state from the network, from its stored databases, and
potentially from other PCEs in the network.

The first step is to get an accurate view of the topology and
resource availability in the network. This would normally involve
reading the state directly from the network via the IGP or BGP-LS (e),

skipping to change at page 19, line 8

[RFC8253]. Furthermore, the path key mechanism described in
[RFC5520] can be used to enhance privacy and security.
Similarly, there is no change to the security implications for the
signaling of scheduled LSPs. A discussion of the security of the
signaling protocols that would be used is found in [RFC5920].

However, the use of scheduled LSPs extends the attack surface for a
PCE-enabled TE system by providing a larger (logically infinite)
window during which an attack can be initiated or planned. That is,
if bogus scheduled LSPs can be requested and entered into the LSP-DB,
then a large number of LSPs could be launched, and significant
network resources could be blocked. Control of scheduling requests
needs to be subject to operator policy and additional authorization
needs to be applied for access to LSP scheduling. Diagnostic tools
need to be provided to inspect the LSP-DB to spot attacks.
7. IANA Considerations

This architecture document makes no request for IANA action.

8. Acknowledgements

This work has benefited from discussions of resource scheduling over
the years. In particular, the DRAGON project [DRAGON] and
[I-D.yong-ccamp-ason-gmpls-autobw-service] both provide approaches
to auto-bandwidth services in GMPLS networks.

Mehmet Toy, Lei Liu, and Khuzema Pithewan contributed to an earlier
version of [I-D.chen-teas-frmwk-tts]. We would like to thank the
authors of that draft on Temporal Tunnel Services for material that
assisted in thinking about this document.
Thanks to Michael Scharf and Daniele Ceccarelli for useful comments
on this work.

Jonathan Hardwick provided a helpful Routing Directorate review.

Deborah Brungard suggested many changes during her AD review.

9. Contributors

The following people contributed to discussions that led to the
development of this document:

Dhruv Dhody
Email: dhruv.dhody@huawei.com

10. Informative References
 End of changes. 31 change blocks. 
88 lines changed or deleted 201 lines changed or added

This html diff was produced by rfcdiff 1.46. The latest version is available from http://tools.ietf.org/tools/rfcdiff/