Internet Engineering Task Force (IETF)                         Y. Zhuang
Request for Comments: 8413                                         Q. Wu
Category: Informational                                          H. Chen
ISSN: 2070-1721                                                   Huawei
                                                               A. Farrel
                                                        Juniper Networks
                                                               July 2018

               Framework for Scheduled Use of Resources
Abstract

Time-Scheduled (TS) reservation of Traffic Engineering (TE) resources can be used to provide resource booking for TE Label Switched Paths so as to better guarantee services for customers and to improve the efficiency of network resource usage at any moment in time, including network usage that is planned for the future. This document provides a framework that describes and discusses the architecture for supporting scheduled reservation of TE resources. This document does not describe specific protocols or protocol extensions needed to realize this service.
Status of This Memo

This document is not an Internet Standards Track specification; it is published for informational purposes.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are candidates for any level of Internet Standard; see Section 2 of RFC 7841.

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc8413.
Copyright Notice

Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Table of Contents

1. Introduction
2. Problem Statement
   2.1. Provisioning TE-LSPs and TE Resources
   2.2. Selecting the Path of an LSP
   2.3. Planning Future LSPs
   2.4. Looking at Future Demands on TE Resources
      2.4.1. Interaction between Time-Scheduled and Ad Hoc Reservations
   2.5. Requisite State Information
3. Architectural Concepts
   3.1. Where is Scheduling State Held?
   3.2. What State is Held?
   3.3. Enforcement of Operator Policy
4. Architecture Overview
   4.1. Service Request
      4.1.1. Reoptimization After TED Updates
   4.2. Initialization and Recovery
   4.3. Synchronization Between PCEs
5. Multi-domain Considerations
6. Security Considerations
7. IANA Considerations
8. Informative References
Acknowledgements
Contributors
Authors' Addresses
1. Introduction

Traffic Engineering Label Switched Paths (TE-LSPs) are connection-oriented tunnels in packet and non-packet networks [RFC3209] [RFC3945]. TE-LSPs may reserve network resources for use by the traffic they carry, thus providing some guarantees of service delivery and allowing a network operator to plan the use of the resources across the whole network.
In some technologies (such as wavelength switched optical networks) the resource is synonymous with the label that is switched on the path of the LSP so that it is not possible to establish an LSP that can carry traffic without assigning a physical resource to the LSP. In other technologies (such as packet switched networks), the resources assigned to an LSP are a measure of the capacity of a link that is dedicated for use by the traffic on the LSP.
In all cases, network planning consists of selecting paths for LSPs through the network so that there will be no contention for resources. LSP establishment is the act of setting up an LSP and reserving resources within the network. Network optimization or reoptimization is the process of repositioning LSPs in the network to make the unreserved network resources more useful for potential future LSPs while ensuring that the established LSPs continue to fulfill their objectives.
It is often the case that it is known that an LSP will be needed at some specific time in the future. While a path for that LSP could be computed using knowledge of the currently established LSPs and the currently available resources, this does not give any degree of certainty that the necessary resources will be available when it is time to set up the new LSP. Yet, setting up the LSP ahead of the time when it is needed (which would guarantee the availability of the resources) is wasteful since the network resources could be used for some other purpose in the meantime.
Similarly, it may be known that an LSP will no longer be needed after some future time and that it will be torn down, which will release the network resources that were assigned to it. This information can be helpful in planning how a future LSP is placed in the network.
Time-Scheduled (TS) reservation of TE resources can be used to provide resource booking for TE-LSPs so as to better guarantee services for customers and to improve the efficiency of network resource usage into the future. This document provides a framework that describes the problem and discusses the architecture for the scheduled reservation of TE resources. This document does not describe specific protocols or protocol extensions needed to realize this service.
2. Problem Statement

2.1. Provisioning TE-LSPs and TE Resources

TE-LSPs in existing networks are provisioned using a variety of techniques. They may be set up using RSVP-TE as a signaling protocol [RFC3209] [RFC3473]. Alternatively, they could be established by direct control of network elements such as in the Software-Defined Networking (SDN) paradigm. They could also be provisioned using the PCE Communication Protocol (PCEP) [RFC5440] as a control protocol to communicate with the network elements.
TE resources are reserved at the point of use. That is, the resources (wavelengths, timeslots, bandwidth, etc.) are reserved for use on a specific link and are tracked by the Label Switching Routers (LSRs) at the end points of the link. Those LSRs learn which resources to reserve during the LSP setup process.
The use of TE resources can be varied by changing the parameters of the LSP that uses them, and the resources can be released by tearing down the LSP.
Resources that have been reserved in the network for use by one LSP may be preempted for use by another LSP. If RSVP-TE signaling is in use, a holding priority and a preemption priority are used to determine which LSPs may preempt the resources that are in use for which other LSPs. If direct (central) control is in use, the controller is able to make preemption decisions. In either case, operator policy forms a key part of preemption since there is a trade between disrupting existing LSPs and enabling new LSPs.
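As a non-normative illustration of the RSVP-TE preemption rule referenced here, the following sketch compares the setup priority of a new LSP against the holding priority of an existing one (priorities range from 0, best, to 7, worst, per [RFC3209]); the function name is invented for this example, and whether preemption is actually performed remains subject to operator policy:

```python
def may_preempt(new_setup_priority: int, existing_holding_priority: int) -> bool:
    """An LSP may preempt resources held by an existing LSP only if its
    setup priority is better (numerically smaller) than the existing
    LSP's holding priority.  Priorities range from 0 (best) to 7 (worst)."""
    return new_setup_priority < existing_holding_priority

# A priority-2 setup can displace a priority-5 hold...
print(may_preempt(2, 5))  # True
# ...but an equal priority cannot preempt.
print(may_preempt(5, 5))  # False
```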
2.2. Selecting the Path of an LSP

Although TE-LSPs can determine their paths hop by hop using the shortest path toward the destination to route the signaling protocol messages [RFC3209], in practice this option is not applied because it does not look far enough ahead into the network to verify that the desired resources are available. Instead, the full length of the path of an LSP is usually computed ahead of time either by the head-end LSR of a signaled LSP or by Path Computation Element (PCE) functionality that is in a dedicated server or built into network management software [RFC4655].
Such full-path computation is applied in order that an end-to-end view of the available resources in the network can be used to determine the best likelihood of establishing a viable LSP that meets the service requirements. Even in this situation, however, it is possible that two LSPs being set up at the same time will compete for scarce network resources, which means that one or both of them will fail to be established. This situation is avoided by using a centralized PCE that is aware of the LSP setup requests that are in progress.
Path selection may make allowance for preemption as described in Section 2.1. That is, when selecting a path, the decision may be made to choose a path that will result in the preemption of an existing LSP. The trade-off between selecting a less optimal path, failing to select any path at all, and preempting an existing LSP must be subject to operator policy.
Path computation is subject to "objective functions" that define what criteria are to be met when the LSP is placed [RFC4655]. These can be criteria that apply to the LSP itself (such as the shortest path to the destination) or to the network state after the LSP is set up (such as the maximized residual link bandwidth). The objective functions may be requested by the application requesting the LSP and may be filtered and enhanced by the computation engine according to operator policy.
2.3. Planning Future LSPs

LSPs may be established "on demand" when the requester determines that a new LSP is needed. In this case, the path of the LSP is computed as described in Section 2.2.
However, in many situations, the requester knows in advance that an LSP will be needed at a particular time in the future. For example, the requester may be aware of a large traffic flow that will start at a well-known time, perhaps for a database synchronization or for the exchange of content between streaming sites. Furthermore, the requester may also know for how long the LSP is required before it can be torn down.
The set of requests for future LSPs could be collected and held in a central database (such as at a Network Management System (NMS)): when the time comes for each LSP to be set up, the NMS can ask the PCE to compute a path and can then request the LSP to be provisioned. This approach has a number of drawbacks because it is not possible to determine in advance whether it will be possible to deliver the LSP since the resources it needs might be used by other LSPs in the network. Thus, at the time the requester asks for the future LSP, the NMS can only make a best-effort guarantee that the LSP will be set up at the desired time.
A better solution, therefore, is for the requests for future LSPs to be serviced at once. The paths of the LSPs can be computed ahead of time and converted into reservations of network resources during specific windows in the future. That is, while the path of the LSP is computed and the network resources are reserved, the LSP is not established in the network until the time for which it is scheduled.
There is a need to take into account items that need to be subject to operator policy, such as 1) the amount of capacity available for scheduling future reservations, 2) the operator preference for the measures that are used with respect to the use of scheduled resources during rapid changes in traffic demand events, or 3) a complex (multiple nodes/links) failure event so as to protect against network destabilization. Operator policy is discussed further in Section 3.3.
2.4. Looking at Future Demands on TE Resources

While path computation, as described in Section 2.2, takes account of the currently available network resources and can act to place LSPs in the network so that there is the best possibility of future LSPs being accommodated, it cannot handle all eventualities. It is simple to construct scenarios where LSPs that are placed one at a time lead to future LSPs being blocked, but where foreknowledge of all of the LSPs would have made it possible for them all to be set up.
If, therefore, we were able to know in advance what LSPs were going to be requested, we could plan for them and ensure resources were available. Furthermore, such an approach enables a commitment to be made to a service user that an LSP will be set up and available at a specific time.
A reservation service can be achieved by tracking the current use of network resources and also having a future view of the resource usage. We call this Time-Scheduled TE (TS-TE) resource reservation.
2.4.1. Interaction between Time-Scheduled and Ad Hoc Reservations

There will, of course, be a mixture of resource uses in a network. For example, normal unplanned LSPs may be requested alongside TS-TE LSPs. When an unplanned LSP is requested, no prior accommodation can be made to arrange resource availability, so the LSP can be placed no better than would be the case without TS-TE. However, the new LSP can be placed considering the future demands of TS-TE LSPs that have already been requested. Of course, the unplanned LSP has no known end time and so any network planning must assume that it will consume resources forever.
2.5. Requisite State Information

In order to achieve the TS-TE resource reservation, the use of resources on the path needs to be scheduled. The scheduling state is used to indicate when resources are reserved and when they are available for use.
A simple information model for one piece of the scheduling state is as follows:

{
   link id;
   resource id or reserved capacity;
   reservation start time;
   reservation end time
}
The resource that is scheduled could be link capacity, physical resources on a link, buffers on an interface, etc., and could include advanced considerations such as CPU utilization and the availability of memory at nodes within the network. The resource-related information might also include the maximal unreserved bandwidth of the link over a time interval. That is, the intention is to book (reserve) a percentage of the residual (unreserved) bandwidth of the link. This could be used, for example, to reserve bandwidth for a particular class of traffic (such as IP) that doesn't have a provisioned LSP.
For any one resource, there could be multiple pieces of the scheduling state, and for any one link, the timing windows might overlap.
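As an illustrative, non-normative sketch, the information model above can be rendered as a small data structure together with the window-overlap test implied by this paragraph. All field and function names here are invented for the example; this framework does not define a concrete encoding:

```python
from dataclasses import dataclass

@dataclass
class ScheduleEntry:
    """One piece of scheduling state, mirroring the information model above.
    Times are illustrative (e.g., seconds since an epoch)."""
    link_id: str
    reserved_capacity: float  # e.g., bandwidth in Mb/s
    start_time: int           # reservation start time
    end_time: int             # reservation end time

def windows_overlap(a: ScheduleEntry, b: ScheduleEntry) -> bool:
    """Two timing windows on the same link overlap if neither window
    ends before the other starts."""
    return (a.link_id == b.link_id
            and a.start_time < b.end_time
            and b.start_time < a.end_time)

# Two reservations on the same link whose windows overlap from 1500 to 2000
r1 = ScheduleEntry("link-1", 100.0, 1000, 2000)
r2 = ScheduleEntry("link-1", 50.0, 1500, 2500)
print(windows_overlap(r1, r2))  # True
```

During the interval in which the windows overlap, both reservations consume capacity on the link, so any availability check must consider them together.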
There are multiple ways to realize this information model and different ways to store the data. The resource state could be expressed as a start time and an end time (as shown above), or it could be expressed as a start time and a duration. Multiple reservation periods, possibly of different lengths, may need to be recorded for each resource. Furthermore, the current state of network reservation could be kept separate from the scheduled usage, or everything could be merged into a single TS database.
An application may make a reservation request for immediate resource usage or to book resources for future use so as to maximize the chance of services being delivered and to avoid contention for resources in the future. A single reservation request may book resources for multiple periods and might request a reservation that repeats on a regular cycle.
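A request that repeats on a regular cycle can be expanded into the concrete reservation windows it implies. The following non-normative sketch assumes an invented request format with a fixed period and repetition count; real protocol encodings for such requests are out of scope for this framework:

```python
from dataclasses import dataclass

@dataclass
class RecurringRequest:
    """A reservation request that repeats on a regular cycle.
    All field names are hypothetical; times are in seconds."""
    link_id: str
    capacity: float
    first_start: int
    duration: int     # length of each reservation window
    period: int       # cycle length between successive windows
    repetitions: int

def expand(req: RecurringRequest) -> list[tuple[int, int]]:
    """Expand a repeating request into concrete (start, end) windows."""
    return [(req.first_start + i * req.period,
             req.first_start + i * req.period + req.duration)
            for i in range(req.repetitions)]

# A one-hour reservation repeated daily for three days
req = RecurringRequest("link-7", 200.0, first_start=0, duration=3600,
                       period=86400, repetitions=3)
print(expand(req))  # [(0, 3600), (86400, 90000), (172800, 176400)]
```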
A computation engine (that is, a PCE) may use the scheduling state information to help optimize the use of resources into the future and reduce contention or blocking when the resources are actually needed.
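As a non-normative sketch of how such a computation engine might consult the scheduling state, the following computes the worst-case unreserved capacity on a single link over a future window. The flat tuple representation of the state is an assumption made for this example, not something defined by this framework:

```python
def available_capacity(link_capacity: float,
                       entries: list[tuple[float, int, int]],
                       start: int, end: int) -> float:
    """Worst-case unreserved capacity on one link during [start, end).

    entries holds (reserved_capacity, start_time, end_time) tuples for
    that link.  The set of active reservations can only change at the
    window start or at a reservation start inside the window, so it is
    enough to sum the active reservations at each such instant and take
    the largest total.
    """
    instants = {start} | {s for _, s, _ in entries if start <= s < end}
    worst = 0.0
    for t in instants:
        in_use = sum(cap for cap, s, e in entries if s <= t < e)
        worst = max(worst, in_use)
    return link_capacity - worst

# Two overlapping reservations: 40 + 30 units are in use from time 50 to 100
entries = [(40.0, 0, 100), (30.0, 50, 150)]
print(available_capacity(100.0, entries, 0, 120))  # 30.0
```

A PCE could apply such a check to every link on a candidate path to decide whether a future LSP can be committed for its scheduled window.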
Note that it is also necessary to store the information about future LSPs as distinct from the specific resource scheduling. This information is held to allow the LSPs to be instantiated when they are due, and use the paths/resources that have been computed for them, and also to provide correlation with the TS-TE resource reservations so that it is clear why resources were reserved, thus allowing preemption and handling the release of reserved resources in the event of cancellation of future LSPs. See Section 3.2 for further discussion of the distinction between scheduled resource state and scheduled LSP state.
Network performance factors (such as maximum link utilization and the
residual capacity of the network), with respect to supporting
scheduled reservations, need to be supported and are subject to
operator policy.
3.  Architectural Concepts

This section examines several important architectural concepts to
understand the design decisions reached in this document to achieve
TS-TE in a scalable and robust manner.
3.1.  Where is Scheduling State Held?

The scheduling state information described in Section 2.5 has to be
held somewhere.  There are two places where this makes sense:

o  in the network nodes where the resources exist; or,

o  in a central scheduling controller where decisions about resource
   allocation are made.
The first of these makes policing of resource allocation easier.  It
means that many points in the network can request immediate or
scheduled LSPs with the associated resource reservation, and that all
such requests can be correlated at the point where the resources are
allocated.  However, this approach has some scaling and technical
problems:
o  The most obvious issue is that each network node must retain the
   full time-based state for all of its resources.  In a busy network
   with a high arrival rate of new LSPs and a low hold time for each
   LSP, this could be a lot of state.  Network nodes are normally
   implemented with minimal spare memory.
o  In order that path computation can be performed, the computing
   entity normally known as a Path Computation Element (PCE)
   [RFC4655] needs access to a database of available links and nodes
   in the network (as well as the TE properties of said links).  This
   database is known as the Traffic Engineering Database (TED) and is
   usually populated from information advertised in the IGP by each
   of the network nodes or exported using BGP Link State (BGP-LS)
   [RFC7752].  To be able to compute a path for a future LSP, the PCE
   needs to populate the TED with all of the future resource
   availability: if this information is held on the network nodes, it
   must also be advertised in the IGP.  This could be a significant
   scaling issue for the IGP and the network nodes, as all of the
   advertised information is held at every network node and must be
   periodically refreshed by the IGP.
o  When a normal node restarts, it can recover the resource
   reservation state from the forwarding hardware, from Non-Volatile
   Random-Access Memory (NVRAM), or from adjacent nodes through the
   signaling protocol [RFC5063].  If the scheduling state is held at
   the network nodes, it must also be recovered after the restart of
   a network node.  This cannot be achieved from the forwarding
   hardware because the reservation will not have been made, could
   require additional expensive NVRAM, or might require that all
   adjacent nodes also have the scheduling state in order to
   reinstall it on the restarting node.  This is potentially complex
   processing with scaling and cost implications.
Conversely, if the scheduling state is held centrally, it is easily
available at the point of use.  That is, the PCE can utilize the
state to plan future LSPs and can update that stored information with
the scheduled reservation of resources for those future LSPs.  This
approach also has several issues:
o  If there are multiple controllers, then they must synchronize
   their stored scheduling state as they each plan future LSPs and
   they must have a mechanism to resolve resource contention.  This
   is relatively simple and is mitigated by the fact that there is
   ample processing time to replan future LSPs in the case of
   resource contention.
o  If other sources of immediate LSPs are allowed (for example, other
   controllers or autonomous action by head-end LSRs), then the
   changes in resource availability caused by the setup or tear down
   of these LSPs must be reflected in the TED (by use of the IGP as
   already normally done) and may have an impact on planned future
   LSPs.  This impact can be mitigated by replanning future LSPs or
   through LSP preemption.
o  If the scheduling state is held centrally at a PCE, the state must
   be held and restored after a system restart.  This is relatively
   easy to achieve on a central server that can have access to non-
   volatile storage.  The PCE could also synchronize the scheduling
   state with other PCEs after restart.  See Section 4.2 for details.
o  Of course, a centralized system must store information about all
   of the resources in the network.  In a busy network with a high
   arrival rate of new LSPs and a low hold time for each LSP, this
   could be a lot of state.  This is multiplied by the size of the
   network measured both by the number of links and nodes and by the
   number of trackable resources on each link or at each node.  This
   challenge may be mitigated by the centralized server being
   dedicated hardware, but there remains the problem of collecting
   the information from the network in a timely way when there is
   potentially a very large amount of information to be collected and
   when the rate of change of that information is high.  This latter
   challenge is only solved if the central server has full control of
   the booking of resources and the establishment of new LSPs so that
   the information from the network only serves to confirm what the
   central server expected.
Thus, considering these trade-offs, the architectural conclusion is
that the scheduling state should be held centrally at the point of
use and not in the network devices.
3.2.  What State is Held?

As already described, the PCE needs access to an enhanced, time-based
TED.  It stores the Traffic Engineering (TE) information, such as
bandwidth, for every link for a series of time intervals.  There are
a few ways to store the TE information in the TED.  For example,
suppose that the amount of the unreserved bandwidth at a priority
level for a link is Bj in a time interval from time Tj to Tk (k =
j+1), where j = 0, 1, 2, ....
[Figure body elided: a staircase plot of unreserved bandwidth against
time, showing bandwidth levels B0, B1, B2, B3, B4, ... over
successive time intervals beginning at T0, T1, T2, T3, T4, ...]
Figure 1: A Plot of Bandwidth Usage against Time
The unreserved bandwidth for the link can be represented and stored
in the TED as [T0, B0], [T1, B1], [T2, B2], [T3, B3], ... as shown in
Figure 1.
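The breakpoint representation above can be sketched in a few lines of
code.  This is an illustrative sketch only, not part of any protocol:
the class and method names are invented here, and the representation
assumes each bandwidth value Bj applies on the half-open interval
[Tj, Tj+1).

```python
import bisect

class LinkBandwidthTimeline:
    """Piecewise-constant unreserved bandwidth for one link, stored
    as breakpoints [T0, B0], [T1, B1], ... as in the time-based TED
    sketch of Figure 1."""

    def __init__(self, breakpoints):
        # breakpoints: sorted list of (Tj, Bj); Bj applies on [Tj, Tj+1)
        self.times = [t for t, _ in breakpoints]
        self.bandwidths = [b for _, b in breakpoints]

    def unreserved_at(self, t):
        """Return the unreserved bandwidth Bj for the interval
        containing time t."""
        j = bisect.bisect_right(self.times, t) - 1
        if j < 0:
            raise ValueError("time precedes first breakpoint")
        return self.bandwidths[j]

# Hypothetical values: [T0,B0]=[0,100], [T1,B1]=[10,80],
# [T2,B2]=[20,50], [T3,B3]=[30,90]
timeline = LinkBandwidthTimeline([(0, 100), (10, 80), (20, 50), (30, 90)])
```

A path computation for a future LSP would query `unreserved_at` at
the requested start time (and at every breakpoint up to the end time)
to check that sufficient bandwidth is available throughout the
scheduled interval.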
But it must be noted that service requests for future LSPs are known
in terms of the LSPs whose paths are computed and for which resources
are scheduled.  For example, if the requester of a future LSP decides
to cancel the request or to modify the request, the PCE must be able
to map this to the resources that were reserved.  When the LSP (or
the request for the LSP with a number of time intervals) is canceled,
the PCE must release the resources that were reserved on each of the
links along the path of the LSP in every time interval from the TED.
If the bandwidth that had been reserved for the LSP on a link was B
from time T2 to T3 and the unreserved bandwidth on the link was B2
from T2 to T3, then B is added back to the link for the time interval
from T2 to T3 and the unreserved bandwidth on the link from T2 to T3
will be seen to be B2 + B.
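The release arithmetic described above (B2 becomes B2 + B on the
affected interval) can be sketched as follows.  The function name is
illustrative, and the sketch assumes that the reservation interval
boundaries already appear as breakpoints, as they would if the
reservation was made over exactly that interval.

```python
def release_bandwidth(breakpoints, t_start, t_end, b):
    """Add bandwidth b back onto a breakpoint list [(Tj, Bj), ...]
    for the interval [t_start, t_end), as when a scheduled LSP is
    canceled and its reserved resources must be released on a link."""
    result = []
    for t, unreserved in breakpoints:
        if t_start <= t < t_end:
            unreserved += b  # reserved bandwidth becomes unreserved again
        result.append((t, unreserved))
    return result

# With unreserved bandwidth B2 = 50 on [T2, T3) = [20, 30) and a
# canceled reservation of B = 30, the interval becomes B2 + B = 80.
before = [(0, 100), (10, 80), (20, 50), (30, 90)]
after = release_bandwidth(before, 20, 30, 30)
```

The PCE would apply this operation to every link along the canceled
LSP's path and for every time interval of the request.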
This suggests that the PCE needs an LSP Database (LSP-DB) [RFC8231]
that contains information not only about LSPs that are active in the
network but also those that are planned.  For each time interval that
applies to the LSP, the information for an LSP stored in the LSP-DB
includes: the time interval, the paths computed for the LSP
satisfying the constraints in the time interval, and the resources
(such as bandwidth) reserved for the LSP in the time interval.  See
also Section 2.3.
It is an implementation choice how the TED and LSP-DB are stored both
for dynamic use and for recovery after failure or restart, but it may
be noted that all of the information in the scheduled TED can be
recovered from the active network state and from the scheduled
LSP-DB.
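The recovery relationship stated above (the scheduled TED is
derivable from the scheduled LSP-DB) can be sketched as below.  The
record fields follow the per-interval information listed earlier in
this section, but the type names and field names are illustrative
assumptions, not a defined data model.

```python
from dataclasses import dataclass, field

@dataclass
class ScheduledInterval:
    start: int        # interval start time
    end: int          # interval end time
    path: list        # links computed for the LSP in this interval
    bandwidth: int    # bandwidth reserved in this interval

@dataclass
class ScheduledLsp:
    name: str
    intervals: list = field(default_factory=list)

def rebuild_scheduled_ted(lsps):
    """Reconstruct per-link scheduled reservations from the LSP-DB,
    returning {link: [(start, end, bandwidth), ...]}; subtracting
    these from link capacity yields the scheduled TED."""
    ted = {}
    for lsp in lsps:
        for iv in lsp.intervals:
            for link in iv.path:
                ted.setdefault(link, []).append(
                    (iv.start, iv.end, iv.bandwidth))
    return ted

lsp = ScheduledLsp("lsp1", [ScheduledInterval(20, 30, ["A-B", "B-C"], 30)])
ted = rebuild_scheduled_ted([lsp])
```

This is why persisting only the LSP-DB to non-volatile storage can be
sufficient: after a restart, the scheduled TED can be replayed from
the stored reservations.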
3.3.  Enforcement of Operator Policy

Computation requests for LSPs are serviced according to operator
policy.  For example, a PCE may refuse a computation request because
the application making the request does not have sufficient
permissions or because servicing the request might take specific
resource usage over a given threshold.
Furthermore, the preemption and holding priorities of any particular
computation request may be subject to the operator's policies.  The
request could be rejected if it does not conform to the operator's
policies, or (possibly more likely) the priorities could be set/
overwritten according to the operator's policies.
Additionally, the Objective Functions (OFs) of a computation request
(such as maximizing residual bandwidth) are also subject to operator
policies.  It is highly likely that the choice of OFs is not
available to an application and is selected by the PCE or management
system subject to operator policies and knowledge of the application.
None of these statements is new to scheduled resources.  They apply
to stateless, stateful, passive, and active PCEs, and they continue
to apply to scheduling of resources.
An operator may choose to configure special behavior for a PCE that
handles resource scheduling.  For example, an operator might want
only a certain percentage of any resource to be bookable.  And an
operator might want the preemption of booked resources to be an
inverse function of how far in the future the resources are needed
for the first time.
It is a general assumption about the architecture described in
Section 4 that a PCE is under the operational control of the operator
that owns the resources that the PCE manipulates.  Thus, the operator
may configure any amount of (potentially complex) policy at the PCE.
This configuration would also include policy points surrounding
reoptimization of existing and planned LSPs in the event of changes
in the current and future (planned) resource availability.
The granularity of the timing window offered to an application will
depend on an operator's policy as well as the implementation in the
PCE and goes to define the operator's service offerings.  Different
granularities and different lengths of prebooking may be offered to
different applications.
4.  Architecture Overview

The architectural considerations and conclusions described in the
previous section lead to the architecture described in this section
and illustrated in Figure 2.  The interfaces and interactions shown
in the figure and labeled (a) through (f) are described in
Section 4.1.
[Figure body elided: the Service Requester sits above the PCE and
connects to it via interface (a); the PCE connects to the LSP-DB (b)
and the TED (c); the PCE connects to the head-end LSR (d); network
state reaches the PCE via the IGP or BGP-LS (e); and LSRs signal LSPs
between themselves (f).]
Figure 2: Reference Architecture for Scheduled Use of Resources
4.1.  Service Request

As shown in Figure 2, some component in the network requests a
service.  This may be an application, an NMS, an LSR, or any
component that qualifies as a Path Computation Client (PCC).  We show
this on the figure as the "Service Requester", and it sends a request
to the PCE for an LSP to be set up at some time (either now or in the
future).  The request, indicated on Figure 2 by the arrow (a),
includes all of the parameters of the LSP that the requester wishes
to supply, such as priority, bandwidth, start time, and end time.
Note that the requester in this case may be the LSR shown in the
figure or may be a distinct system.
The PCE enters the LSP request in its LSP-DB (b) and uses information
from its TED (c) to compute a path that satisfies the constraints
(such as bandwidth) for the LSP in the time interval from the start
time to the end time.  It updates the future resource availability in
the TED so that further path computations can take account of the
scheduled resource usage.  It stores the path for the LSP into the
LSP-DB (b).
When it is time (i.e., at the start time) for the LSP to be set up,
the PCE sends a PCEP Initiate request to the head-end LSR (d), which
provides the path to be signaled as well as other parameters, such as
the bandwidth of the LSP.
As the LSP is signaled between LSRs (f), the use of resources in the
network is updated and distributed using the IGP.  This information
is shared with the PCE either through the IGP or using BGP-LS (e),
and the PCE updates the information stored in its TED (c).
After the LSP is set up, the head-end LSR sends a PCEP LSP State
Report (PCRpt) message to the PCE (d).  The report contains the
resources, such as bandwidth usage, for the LSP.  The PCE updates the
status of the LSP in the LSP-DB according to the report.
When an LSP is no longer required (either because the Service
Requester has canceled the request or because the LSP's scheduled
lifetime has expired), the PCE can remove it.  If the LSP is
currently active, the PCE instructs the head-end LSR to tear it down
(d), and the network resource usage will be updated by the IGP and
advertised back to the PCE through the IGP or BGP-LS (e).  Once the
LSP is no longer active, the PCE can remove it from the LSP-DB (b).
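The provisioning and removal steps above can be summarized as a
simple event loop run by the PCE.  This is a sketch under stated
assumptions: `pcep` is a stand-in for a real PCEP interface (its
`initiate` and `teardown` methods are invented names for sending a
PCInitiate request and instructing a teardown), and LSP records are
shown as plain dictionaries rather than any defined encoding.

```python
def service_scheduled_lsps(now, lsp_db, pcep):
    """At each tick, instantiate scheduled LSPs whose start time has
    arrived and tear down LSPs whose scheduled lifetime has expired."""
    for lsp in lsp_db:
        if lsp["state"] == "scheduled" and now >= lsp["start"]:
            # Start time reached: send the path and parameters to the
            # head-end LSR over interface (d).
            pcep.initiate(lsp["head_end"], lsp["path"], lsp["bandwidth"])
            lsp["state"] = "active"
        elif lsp["state"] == "active" and now >= lsp["end"]:
            # Scheduled lifetime expired: instruct teardown (d); the
            # entry can then be removed from the LSP-DB (b).
            pcep.teardown(lsp["head_end"], lsp["name"])
            lsp["state"] = "removed"
```

In a real deployment, the state transitions would also be driven by
the PCRpt messages received from the head-end LSR rather than assumed
to succeed immediately.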
4.1.1.  Reoptimization After TED Updates

When the TED is updated as indicated in Section 4.1, depending on
operator policy (so as to minimize network perturbations), the PCE
may perform reoptimization of the LSPs for which it has computed
paths.  These LSPs may be already provisioned, in which case the PCE
issues PCEP Update request messages for the LSPs that should be
adjusted.  Additionally, the LSPs being reoptimized may be scheduled
LSPs that have not yet been provisioned, in which case reoptimization
involves updating the store of scheduled LSPs and resources.
In all cases, the purpose of reoptimization is to take account of the
resource usage and availability in the network and to compute paths
for the current and future LSPs that best satisfy the objectives of
those LSPs while keeping the network as clear as possible to support
further LSPs.  Since reoptimization may perturb established LSPs, it
is subject to operator oversight and policy.  As the stability of the
network will be impacted by frequent changes, the extent and impact
of any reoptimization needs to be subject to operator policy.
Additionally, the status of the reserved resources (alarms) can
enhance the computation and planning for future LSPs and may
influence repair and reoptimization.  Control of recalculations based
on failures and notifications to the operator is also subject to
policy.
See Section 3.3 for further discussion of operator policy.
4.2.  Initialization and Recovery

When a PCE in the architecture shown in Figure 2 is initialized, it
must learn the state from the network, from its stored databases, and
potentially from other PCEs in the network.
The first step is to get an accurate view of the topology and
resource availability in the network.  This would normally involve
reading the state directly from the network via the IGP or BGP-LS
(e), but it might include receiving a copy of the TED from another
PCE.  Note that a TED stored from a previous instantiation of the PCE
is unlikely to be valid.
Next, the PCE must construct a time-based TED to show scheduled
resource usage.  How it does this is implementation specific, and
this document does not dictate any particular mechanism: it may
recover a time-based TED previously saved to non-volatile storage, or
it may reconstruct the time-based TED from information retrieved from
the LSP-DB previously saved to non-volatile storage.  If there is
more than one PCE active in the network, the recovering PCE will need
to synchronize the LSP-DB and time-based TED with other PCEs (see
Section 4.3).
Note that the stored LSP-DB needs to include the intended state and
actual state of the LSPs so that when a PCE recovers, it is able to
determine what actions are necessary.
4.3.  Synchronization Between PCEs

If more than one PCE that supports scheduling is active in the
network, it is important to achieve some consistency between the
scheduled TED and scheduled LSP-DB held by the PCEs.
[RFC7399] answers various questions around synchronization between
the PCEs.  It should be noted that the time-based "scheduled"
information adds another dimension to the issue of synchronization
between PCEs.  It should also be noted that a deployment may use a
primary PCE and then have other PCEs as backup, where a backup PCE
can take over only in the event of a failure of the primary PCE.
Alternatively, the PCEs may share the load at all times.  The choice
of the synchronization technique is largely dependent on the
deployment of PCEs in the network.
One option for ensuring that multiple PCEs use the same scheduled
information is simply to have the PCEs driven from the same shared
database, but this is likely to be inefficient, and interoperation
between multiple implementations will be harder.
Another option is for each PCE to be responsible for its own
scheduled database and to utilize some distributed database
synchronization mechanism to have consistent information.  Depending
on the implementation, this could be efficient, but interoperation
between heterogeneous implementations is still hard.
A further approach is to utilize PCEP messages to synchronize the
scheduled state between PCEs.  This approach would work well if the
number of PCEs that support scheduling is small, but as the number
increases, considerable message exchange needs to happen to keep the
scheduled databases synchronized.  Future solutions could also
utilize some synchronization optimization techniques for efficiency.
Another variation would be to request information from other PCEs for
a particular time slice, but this might have an impact on the
optimization algorithm.
5.  Multi-domain Considerations
Multi-domain path computation usually requires some form of
cooperation between PCEs, each of which has responsibility for
determining a segment of the end-to-end path in the domain for which
it has computational responsibility.  When computing a scheduled
path, resources need to be booked in all of the domains that the path
will cross so that they are available when the LSP is finally
signaled.
Per-domain path computation [RFC5152] is not an appropriate mechanism
when a scheduled LSP is being computed because the computation
requests at downstream PCEs are only triggered by signaling.
However, a similar mechanism could be used where cooperating PCEs
exchange Path Computation Request (PCReq) messages for a scheduled
LSP, as shown in Figure 3.  In this case, the service requester asks
for a scheduled LSP that will span two domains (a).  PCE1 computes a
path across Domain 1 and reserves the resources and also asks PCE2 to
compute and reserve in Domain 2 (b).  PCE2 may return a full path or
could return a path key [RFC5520].  When it is time for LSP setup,
PCE1 triggers the head-end LSR (c), and the LSP is signaled (d).  If
a path key is used, the entry LSR in Domain 2 will consult PCE2 for
the path expansion (e) before completing signaling (f).
                -------------------
               | Service Requester |
                -------------------
                         ^
                        a|
                         v
                ------    b    ------
               |      |<----->|      |
               | PCE1 |       | PCE2 |
                ------         ------
                  ^               ^
                 c|              e|
                  v               v
       ----------------------  ----------------------
      |       Domain 1       ||       Domain 2       |
      |  v                   ||                   v  |
      |  ----- d -----       ||       ----- f -----  |
      | | LSR |<--->| LSR |<-+--+->| LSR |<--->| LSR | |
      |  -----       -----   ||   -----       -----  |
       ----------------------  ----------------------

       Figure 3: Per-Domain Path Computation for Scheduled LSPs
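Steps (a) and (b) of Figure 3 can be sketched in code.  This is a toy
illustration of the cooperation pattern only: the class, method names,
and return shapes are invented for the example and do not correspond
to any real PCEP API.

```python
# Illustrative sketch of steps a-b of Figure 3: PCE1 reserves in its
# own domain and asks PCE2 to compute and reserve downstream.  All
# names here are assumptions made for the example.
class SchedulingPCE:
    def __init__(self, name):
        self.name = name
        self.bookings = []                  # (lsp_id, start, end) tuples

    def compute_and_reserve(self, lsp_id, start, end, use_path_key=False):
        self.bookings.append((lsp_id, start, end))
        if use_path_key:
            # Hide the domain-internal path behind an opaque key, in the
            # spirit of the path-key mechanism of RFC 5520.
            return {"path_key": f"{self.name}:{lsp_id}"}
        return {"path": [f"{self.name}-segment"]}

def request_scheduled_lsp(pce1, pce2, lsp_id, start, end):
    """Step a: requester asks PCE1; step b: PCE1 asks PCE2 downstream."""
    seg1 = pce1.compute_and_reserve(lsp_id, start, end)
    seg2 = pce2.compute_and_reserve(lsp_id, start, end, use_path_key=True)
    return {**seg1, **seg2}   # head-end gets its segment plus the key
```

At setup time (steps c-f), the head-end LSR would signal along the
returned segment, and the Domain 2 entry LSR would present the key to
PCE2 for expansion.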
Another mechanism for PCE cooperation in multi-domain LSP setup is
Backward-Recursive PCE-Based Computation (BRPC) [RFC5441].  This
approach relies on the downstream domain to supply a variety of
potential paths to the upstream domain.  Although BRPC can arrive at
a more optimal end-to-end path than per-domain path computation, it
is not well suited to LSP scheduling because the downstream PCE would
need to reserve resources on all of the potential paths and then
release those that the upstream PCE announced it did not plan to use.
Finally, we should consider hierarchical PCE (H-PCE) [RFC6805].  This
mode of operation is similar to that shown in Figure 3, but a parent
PCE is used to coordinate the requests to the child PCEs, which then
results in better visibility of the end-to-end path and better
coordination of the resource booking.  The sequenced flow of control
is shown in Figure 4.
                -------------------
               | Service Requester |
                -------------------
                         ^
                        a|
                         v
                     --------
                    |        |
                    | Parent |
                    |  PCE   |
                     --------
                    ^        ^
                  b|          |b
                    v        v
                ------        ------
               | PCE1 |      | PCE2 |
                ------        ------
                  ^               ^
                 c|              e|
                  v               v
       ----------------------  ----------------------
      |       Domain 1       ||       Domain 2       |
      |  ----- d -----       ||       ----- f -----  |
      | | LSR |<--->| LSR |<-+--+->| LSR |<--->| LSR | |
      |  -----       -----   ||   -----       -----  |
       ----------------------  ----------------------

    Figure 4: Hierarchical PCE for Path Computation for Scheduled LSPs
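The coordination benefit of H-PCE comes from the parent seeing all of
the per-domain bookings at once, so it can roll back reservations if
any child fails.  The sketch below illustrates that fan-out-with-
rollback pattern; the classes, method names, and failure model are
assumptions invented for the example, not defined by [RFC6805] or
this document.

```python
# Illustrative H-PCE sketch: a parent PCE fans a scheduled request out
# to its child PCEs and releases already-made bookings if any domain
# cannot satisfy the request.  All names are example assumptions.
class ChildPCE:
    def __init__(self, name, fail=False):
        self.name, self.fail, self.bookings = name, fail, []

    def compute_and_reserve(self, lsp_id, start, end):
        if self.fail:
            raise RuntimeError(f"{self.name}: no path in time window")
        self.bookings.append((lsp_id, start, end))
        return f"{self.name}-segment"

    def release(self, lsp_id):
        self.bookings = [b for b in self.bookings if b[0] != lsp_id]

def parent_compute(children, lsp_id, start, end):
    """Fan out to each child; on failure, roll back earlier bookings
    so no domain is left holding resources for an unsignaled path."""
    segments, reserved = [], []
    try:
        for child in children:
            segments.append(child.compute_and_reserve(lsp_id, start, end))
            reserved.append(child)
    except RuntimeError:
        for child in reserved:
            child.release(lsp_id)
        raise
    return segments
```

This cleanup-on-failure step is exactly what BRPC makes awkward for
scheduling, since BRPC would reserve on many candidate paths before
any are chosen.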
6.  Security Considerations
The protocol implications of scheduled resources are unchanged from
"on demand" LSP computation and setup.  A discussion of securing PCEP
is found in [RFC5440], and work to extend that security is provided
in [RFC8253].  Furthermore, the path key mechanism described in
[RFC5520] can be used to enhance privacy and security.
Similarly, there is no change to the security implications for the
signaling of scheduled LSPs.  A discussion of the security of the
signaling protocols that would be used is found in [RFC5920].
However, the use of scheduled LSPs extends the attack surface for a
PCE-enabled TE system by providing a larger (logically infinite)
window during which an attack can be initiated or planned.  That is,
if bogus scheduled LSPs can be requested and entered into the LSP-DB,
then a large number of LSPs could be launched and significant network
resources could be blocked.  Control of scheduling requests needs to
be subject to operator policy, and additional authorization needs to
be applied for access to LSP scheduling.  Diagnostic tools need to be
provided to inspect the LSP-DB to spot attacks.
7.  IANA Considerations
This document has no IANA actions.
8.  Informative References

[AUTOBW]   Yong, L. and Y. Lee, "ASON/GMPLS Extension for Reservation
           and Time Based Automatic Bandwidth Service", Work in
           Progress, draft-yong-ccamp-ason-gmpls-autobw-service-00,
           October 2006.
[DRAGON]   National Science Foundation, "The DRAGON Project: Dynamic
           Resource Allocation via GMPLS Optical Networks", Overview
           and Status Presentation at ONT3, September 2006,
           <http://www.maxgigapop.net/wp-content/uploads/
           The-DRAGON-Project.pdf>.
[FRAMEWORK-TTS]
           Chen, H., Toy, M., Liu, L., and K. Pithewan, "Framework
           for Temporal Tunnel Services", Work in Progress, draft-
           chen-teas-frmwk-tts-01, March 2016.
[RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
           and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
           Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
           <https://www.rfc-editor.org/info/rfc3209>.
[RFC3473]  Berger, L., Ed., "Generalized Multi-Protocol Label
           Switching (GMPLS) Signaling Resource ReserVation Protocol-
           Traffic Engineering (RSVP-TE) Extensions", RFC 3473,
           DOI 10.17487/RFC3473, January 2003,
           <https://www.rfc-editor.org/info/rfc3473>.
           Extensions for Stateful PCE", RFC 8231,
           DOI 10.17487/RFC8231, September 2017,
           <https://www.rfc-editor.org/info/rfc8231>.
[RFC8253]  Lopez, D., Gonzalez de Dios, O., Wu, Q., and D. Dhody,
           "PCEPS: Usage of TLS to Provide a Secure Transport for the
           Path Computation Element Communication Protocol (PCEP)",
           RFC 8253, DOI 10.17487/RFC8253, October 2017,
           <https://www.rfc-editor.org/info/rfc8253>.
Acknowledgements
This work has benefited from the discussions of resource scheduling
over the years.  In particular, the DRAGON project [DRAGON] and
[AUTOBW] both provide approaches to auto-bandwidth services
in GMPLS networks.
Mehmet Toy, Lei Liu, and Khuzema Pithewan contributed to an earlier
version of [FRAMEWORK-TTS]. We would like to thank the authors of
that document on Temporal Tunnel Services for material that assisted
in thinking about this document.
Thanks to Michael Scharf and Daniele Ceccarelli for useful comments
on this work.
Jonathan Hardwick provided a helpful Routing Directorate review.
Deborah Brungard, Mirja Kuehlewind, and Benjamin Kaduk suggested many
changes during their Area Director reviews.
Contributors
The following person contributed to discussions that led to the
development of this document:
Dhruv Dhody
Email: dhruv.dhody@huawei.com
Authors' Addresses
Yan Zhuang
Huawei
101 Software Avenue, Yuhua District
Nanjing, Jiangsu 210012
China

Email: zhuangyan.zhuang@huawei.com
Qin Wu
Huawei
101 Software Avenue, Yuhua District
Nanjing, Jiangsu 210012
China

Email: bill.wu@huawei.com
Huaimo Chen
Huawei
Boston, MA
United States of America

Email: huaimo.chen@huawei.com
Adrian Farrel
Juniper Networks

Email: afarrel@juniper.net