TEAS Working Group                                              Y. Zhuang
Internet-Draft                                                      Q. Wu
Intended status: Standards Track                                  H. Chen
Expires: June 3, 2017                                              Huawei
                                                                A. Farrel
                                                         Juniper Networks
                                                        November 30, 2016

             Architecture for Scheduled Use of Resources
               draft-ietf-teas-scheduled-resources-01

Abstract

Time-scheduled reservation of traffic engineering (TE) resources can
be used to provide resource booking for TE Label Switched Paths so as
to better guarantee services for customers and to improve the
efficiency of network resource usage into the future. This document
provides a framework that describes and discusses the architecture
for the scheduled reservation of TE resources. This document does
not describe specific protocols or protocol extensions needed to
realize this service.

Status of This Memo

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on June 3, 2017.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Problem Statement
      2.1. Provisioning TE-LSPs and TE Resources
      2.2. Selecting the Path of an LSP
      2.3. Planning Future LSPs
      2.4. Looking at Future Demands on TE Resources
      2.5. Requisite State Information
   3. Architectural Concepts
      3.1. Where is Scheduling State Held?
      3.2. What State is Held?
   4. Architecture Overview
      4.1. Service Request
      4.2. Initialization and Recovery
      4.3. Synchronization Between PCEs
   5. Multi-Domain Considerations
   6. Security Considerations
   7. IANA Considerations
   8. Acknowledgements
   9. Contributors
   10. Informative References
   Authors' Addresses

1. Introduction

Traffic Engineering Label Switched Paths (TE-LSPs) are connection
oriented tunnels in packet and non-packet networks [RFC3209],
[RFC3945]. TE-LSPs may reserve network resources for use by the
traffic they carry, thus providing some guarantees of service
delivery and allowing a network operator to plan the use of the
resources across the whole network.

Time-Scheduled (TS) reservation of TE resources can be used to
provide resource booking for TE-LSPs so as to better guarantee
services for customers and to improve the efficiency of network
resource usage into the future. This document provides a framework
that describes and discusses the architecture for the scheduled
reservation of TE resources. This document does not describe
specific protocols or protocol extensions needed to realize this
service.

2. Problem Statement

2.1. Provisioning TE-LSPs and TE Resources

TE-LSPs in existing networks are provisioned using RSVP-TE as a
signaling protocol [RFC3209] [RFC3473], by direct control of network
elements such as in the Software Defined Networking (SDN) paradigm,
and using the PCE Communication Protocol (PCEP) [RFC5440] as a
control protocol.

TE resources are reserved at the point of use. That is, the

2.3. Planning Future LSPs

LSPs may be established "on demand" when the requester determines
that a new LSP is needed. In this case, the path of the LSP is
computed as described in Section 2.2.

However, in many situations, the requester knows in advance that an
LSP will be needed at a particular time in the future. For example,
the requester may be aware of a large traffic flow that will start at
a well-known time, perhaps for a database synchronization or for the
exchange of content between streaming sites. Furthermore, the
requester may also know for how long the LSP is required before it
can be torn down.

The set of requests for future LSPs could be collected and held in a
central database (such as at a Network Management System - NMS): when
the time comes for each LSP to be set up the NMS can ask the PCE to
compute a path and can then request the LSP to be provisioned. This
approach has a number of drawbacks because it is not possible to
determine in advance whether it will be possible to deliver the LSP
since the resources it needs might be used by other LSPs in the
network. Thus, at the time the requester asks for the future LSP,
the NMS can only make a best-effort guarantee that the LSP will be
set up at the desired time.

A better solution, therefore, is for the requests for future LSPs to
be serviced at once. The paths of the LSPs can be computed ahead of
time and converted into reservations of network resources during

LSPs would have made it possible for them all to be set up.

If, therefore, we were able to know in advance what LSPs were going
to be requested we could plan for them and ensure resources were
available. Furthermore, such an approach enables a commitment to be
made to a service user that an LSP will be set up and available at a
specific time.

This service can be achieved by tracking the current use of network
resources and also a future view of the resource usage. We call this
Time-Scheduled TE (TS-TE) resource reservation.

2.5. Requisite State Information

In order to achieve the TS-TE resource reservation, the use of
resources on the path needs to be scheduled. Scheduling state is
used to indicate when resources are reserved and when they are
available for use.

A simple information model for one piece of scheduling state is as
follows:

   {
      link id;
      resource id or reserved capacity;
      reservation start time;
      reservation end time
   }

The resource that is scheduled can be link capacity, physical
resources on a link, CPU utilization, memory, buffers on an
interface, etc. The resource might also be the maximal unreserved
bandwidth of the link over a time interval. For any one resource
there could be multiple pieces of scheduling state, and for any one
link, the timing windows might overlap.

There are multiple ways to realize this information model and
different ways to store the data. The resource state could be
expressed as a start time and an end time as shown above, or could be
expressed as a start time and a duration. Multiple periods, possibly
of different lengths, may be associated with one reservation request,
and a reservation might repeat on a regular cycle. Furthermore, the
current state of network reservation could be kept separate from the
scheduled usage, or everything could be merged into a single TS
database.
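
Purely as an illustration of the information model above (and not as
part of any protocol or data model defined by this document), the
following Python sketch shows one possible in-memory form of a single
piece of scheduling state. The class and field names are invented
for this example, and a start-time-plus-duration request is simply
normalized into the start/end form shown above.

   # Illustrative sketch only; all names here are hypothetical.
   from dataclasses import dataclass

   @dataclass
   class ScheduledReservation:
       link_id: str          # link to which the reservation applies
       reserved_mbps: float  # reserved capacity (or another resource)
       start: int            # reservation start time (seconds)
       end: int              # reservation end time (seconds)

       @classmethod
       def from_duration(cls, link_id, mbps, start, duration):
           # Normalize a start-time-plus-duration request to the
           # start/end form used by the information model above.
           return cls(link_id, mbps, start, start + duration)

       def overlaps(self, window_start, window_end):
           # True if this reservation intersects the given time window.
           return self.start < window_end and window_start < self.end

A reservation that repeats on a regular cycle could be held as a list
of such records or expanded into them on demand.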

This scheduling state information can be used by applications to book
resources for the future or for immediate use, so as to maximize the
chance of services being delivered and to avoid contention between
LSPs for the same resources.

Note that it is also necessary to store the information about future
LSPs. This information is held to allow the LSPs to be instantiated
when they are due and using the paths/resources that have been
computed for them, but also to provide correlation with the TS-TE
resource reservations so that it is clear why resources were
reserved, allowing pre-emption and the handling of the release of
reserved resources in the event of cancellation of future LSPs.
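
To make this relationship concrete, the following sketch (again
purely illustrative, with invented field names, and reusing the
hypothetical ScheduledReservation record above) shows what one stored
record for a future LSP might contain so that the LSP can be
instantiated when due and correlated with the TS-TE reservations made
for it.

   # Illustrative sketch only; field names are hypothetical and the
   # ScheduledReservation record is the one sketched above.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class ScheduledLsp:
       lsp_id: str
       bandwidth_mbps: float
       start: int                    # when the LSP is due to be set up
       end: int                      # when the LSP can be torn down
       path: List[str] = field(default_factory=list)  # computed hops
       reservations: List[ScheduledReservation] = field(
           default_factory=list)     # correlated TS-TE reservations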

3. Architectural Concepts

This section examines several important architectural concepts that
lead to design decisions that will influence how networks can achieve
TS-TE in a scalable and robust manner.

3.1. Where is Scheduling State Held?

The scheduling state information described in Section 2.5 has to be

with a high arrival rate of new LSPs and a low hold time for each
LSP, this could be a lot of state. Yet network nodes are normally
implemented with minimal spare memory.

o  In order that path computation can be performed, the computing
   entity normally known as a Path Computation Element (PCE)
   [RFC4655] needs access to a database of available links and nodes
   in the network, and of the TE properties of the links. This
   database is known as the Traffic Engineering Database (TED) and is
   usually populated from information advertised in the IGP by each
   of the network nodes or exported using BGP-LS [RFC7752]. To be
   able to compute a path for a future LSP the PCE needs to populate
   the TED with all of the future resource availability: if this
   information is held on the network nodes it must also be
   advertised in the IGP. This could be a significant scaling issue
   for the IGP and the network nodes as all of the advertised
   information is held at every network node and must be periodically
   refreshed by the IGP.

o  When a normal node restarts it can recover resource reservation
   state from the forwarding hardware, from Non-Volatile Random-
   Access Memory (NVRAM), or from adjacent nodes through the
   signaling protocol [RFC5063]. If scheduling state is held at the
   network nodes it must also be recovered after the restart of a
   network node. This cannot be achieved from the forwarding
   hardware because the reservation will not have been made, could
   require additional expensive NVRAM, or might require that all
   adjacent nodes also have the scheduling state in order to re-
   install it on the restarting node. This is potentially complex
   processing with scaling and cost implications.

Conversely, if the scheduling state is held centrally it is easily
available at the point of use. That is, the PCE can utilize the
state to plan future LSPs and can update that stored information with
the scheduled reservation of resources for those future LSPs. This
approach also has several issues:

o  If there are multiple controllers then they must synchronize their
   stored scheduling state as they each plan future LSPs, and must
   have a mechanism to resolve resource contention. This is
   relatively simple and is mitigated by the fact that there is ample
   processing time to re-plan future LSPs in the case of resource
   contention.

o  If other sources of immediate LSPs are allowed (for example, other
   controllers or autonomous action by head-end LSRs) then the
   changes in resource availability caused by the setup or tear down
   of these LSPs must be reflected in the TED (by use of the IGP as
   currently) and may have an impact on planned future LSPs. This
   impact can be mitigated by re-planning future LSPs or through LSP
   preemption.

o  If other sources of planned LSPs are allowed, they can request
   path computation and resource reservation from the centralized PCE
   using PCEP [RFC5440].

o  If the scheduling state is held centrally at a PCE, the state must
   be held and restored after a system restart. This is relatively
   easy to achieve on a central server that can have access to non-
   volatile storage. The PCE could also synchronize the scheduling
   state with other PCEs after restart. See Section 4.2 for details.

o  Of course, a centralized system must store information about all
   of the resources in the network. In a busy network with a high
   arrival rate of new LSPs and a low hold time for each LSP, this
   could be a lot of state. This is multiplied by the size of the
   network measured both by the number of links and nodes, and by the
   number of trackable resources on each link or at each node. The
   challenge may be mitigated by the centralized server being
   dedicated hardware, but the problem of collecting the information
   from the network is only solved if the central server has full
   control of the booking of resources and the establishment of new
   LSPs.

Thus the architectural conclusion is that scheduling state should be
held centrally at the point of use and not in the network devices.

3.2. What State is Held?

As already described, the PCE needs access to an enhanced, time-based
TED. It stores the traffic engineering (TE) information such as
bandwidth for every link for a series of time intervals. There are a

It is an implementation choice how the TED and LSP-DB are stored both
for dynamic use and for recovery after failure or restart, but it may
be noted that all of the information in the scheduled TED can be
recovered from the active network state and from the scheduled
LSP-DB.
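
As an informal illustration of how a time-based TED might be used
during path computation, the sketch below (reusing the hypothetical
ScheduledReservation record sketched in Section 2.5) computes the
bandwidth guaranteed to be free on one link throughout a requested
time window. The function and parameter names are assumptions made
for this example, not interfaces defined by this document.

   # Illustrative sketch only: worst-case free bandwidth on one link
   # over the window [start, end), given its scheduled reservations.
   def available_bandwidth(link_capacity_mbps, reservations, start, end):
       # Time points at which the set of active reservations can change.
       points = {start, end}
       for r in reservations:
           if r.overlaps(start, end):
               points.add(max(r.start, start))
               points.add(min(r.end, end))
       free = link_capacity_mbps
       for t in sorted(points)[:-1]:
           booked = sum(r.reserved_mbps for r in reservations
                        if r.start <= t < r.end)
           free = min(free, link_capacity_mbps - booked)
       return free

A path computation for a future LSP would apply a check of this kind
to every candidate link for the requested time interval.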

4. Architecture Overview

The architectural considerations and conclusions described in the
previous section lead to the architecture described in this section
and illustrated in Figure 2. The interfaces and interactions shown
on the figure and labeled (a) through (f) are described in
Section 4.1.

                     -------------------
                    | Service Requester |
                     -------------------
                              ^
                             a|
                              v
                       -------  b   --------
                      |       |<--->| LSP-DB |
                      |       |      --------

   Figure 2: Reference Architecture for Scheduled Use of Resources

4.1. Service Request

As shown in Figure 2, some component in the network requests a
service. This may be an application, an NMS, an LSR, or any
component that qualifies as a Path Computation Client (PCC). We show
this on the figure as the "Service Requester" and it sends a request
to the PCE for an LSP to be set up at some time (either now or in the
future). The request, indicated on Figure 2 by the arrow (a),
includes all of the parameters of the LSP that the requester wishes
to supply such as bandwidth, start time, and end time. Note that the
requester in this case may be the LSR shown in the figure or may be a
distinct system.

The PCE enters the LSP request in its LSP-DB (b), and uses
information from its TED (c) to compute a path that satisfies the
constraints (such as bandwidth) for the LSP in the time interval from
the start time to the end time. It updates the future resource
availability in the TED so that further path computations can take
account of the scheduled resource usage. It stores the path for the
LSP into the LSP-DB (b).

When it is time (i.e., at the start time) for the LSP to be set up,
the PCE sends a PCEP Initiate request to the head end LSR (d)
providing the path to be signaled as well as other parameters such as
the bandwidth of the LSP.

As the LSP is signaled between LSRs (f) the use of resources in the
network is updated and distributed using the IGP. This information
is shared with the PCE either through the IGP or using BGP-LS (e),
and the PCE updates the information stored in its TED (c).

After the LSP is set up, the head end LSR sends a PCEP LSP State
Report (PCRpt message) to the PCE (d). The report contains details
of the resources (such as the bandwidth) used by the LSP. The PCE
updates the status of the LSP in the LSP-DB according to the report.

When an LSP is no longer required (either because the Service
Requester has cancelled the request, or because the LSP's scheduled
lifetime has expired) the PCE can remove it. If the LSP is currently
active, the PCE instructs the head-end LSR to tear it down (d), and
the network resource usage will be updated by the IGP and advertised
back to the PCE through the IGP or BGP-LS (e). Once the LSP is no
longer active, the PCE can remove it from the LSP-DB (b).
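
The sequence of interactions (a) through (d) described above can be
summarized in the following sketch of PCE-side control logic. This
is illustrative only: the helper functions (compute_path,
book_resources, release_resources, pcep_initiate, and schedule_at)
are assumptions made for the example and are not interfaces defined
by this document.

   # Illustrative sketch only; the helper functions are hypothetical.
   def handle_service_request(lsp_db, ted, req):
       lsp_db.store(req)                              # (b) record the request
       path = compute_path(ted, req.constraints,      # (c) path must satisfy
                           req.start, req.end)        #     the whole interval
       if path is None:
           lsp_db.remove(req)
           return False                               # cannot commit service
       book_resources(ted, path, req.bandwidth,       # (c) scheduled booking
                      req.start, req.end)
       lsp_db.store_path(req, path)                   # (b) remember the path
       schedule_at(req.start,                         # (d) PCEP Initiate at
                   lambda: pcep_initiate(path, req))  #     the start time
       schedule_at(req.end,
                   lambda: release_resources(ted, lsp_db, req))
       return True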

4.2. Initialization and Recovery

Next, the PCE must construct a time-based TED to show scheduled
resource usage. How it does this is implementation specific and this
document does not dictate any particular mechanism: it may recover a
time-based TED previously saved to non-volatile storage, or it may
reconstruct the time-based TED from information retrieved from the
LSP-DB previously saved to non-volatile storage. If there is more
than one PCE active in the network, the recovering PCE will need to
synchronize the LSP-DB and time-based TED with other PCEs (see
Section 4.3).

Note that the stored LSP-DB needs to include the intended state and
actual state of the LSPs so that when a PCE recovers it is able to
determine what actions are necessary.
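
As a sketch of what such recovery could look like (the mechanism
remains implementation specific, as noted above), a PCE might replay
the reservations recorded in the recovered LSP-DB to rebuild the
time-based TED and then reconcile intended state with actual state.
The helper and field names used here are illustrative assumptions
only.

   # Illustrative sketch only; book_resources, pcep_initiate, and
   # now() are hypothetical helpers.
   def recover(lsp_db, ted):
       for lsp in lsp_db.all_entries():
           # Re-apply each stored scheduled reservation to the TED.
           book_resources(ted, lsp.path, lsp.bandwidth,
                          lsp.start, lsp.end)
           # If the LSP is intended to be active now but is not
           # reported as set up, re-issue the PCEP Initiate request.
           if lsp.start <= now() < lsp.end and not lsp.actually_active:
               pcep_initiate(lsp.path, lsp)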

4.3. Synchronization Between PCEs

If there is more than one PCE that supports scheduling active in the
network, it is important to achieve some consistency between the
scheduled TED and scheduled LSP-DB held by the PCEs.

[RFC7399] answers various questions around synchronization between
the PCEs. It should be noted that the time-based "scheduled"
information adds another dimension to the issue of synchronization
between PCEs. It should also be noted that a deployment may use a
primary PCE and have other PCEs as backup, where a backup PCE can
take over only in the event of a failure of the primary PCE.
Alternatively, the PCEs may share the load at all times. The choice
of the synchronization technique is largely dependent on the
deployment of PCEs in the network.

One option for ensuring that multiple PCEs use the same scheduled
information is simply to have the PCEs driven from the same shared
database, but this is likely to be inefficient, and interoperation
between multiple implementations will be harder.

Another option is for each PCE to be responsible for its own
scheduled database and to utilize some distributed database
synchronization mechanism to have consistent information. Depending
on the implementation, this could be efficient, but interoperation
between heterogeneous implementations is still hard.

A further approach is to utilize PCEP messages to synchronize the
scheduled state between PCEs. This approach would work well if the
number of PCEs which support scheduling is small, but as the number
increases, considerable message exchange is needed to keep the
scheduled databases synchronized. Future solutions could also
utilize some synchronization optimization techniques for efficiency.
Another variation would be to request information from other PCEs for
a particular time slice, but this might have an impact on the
optimization algorithm.

5. Multi-Domain Considerations

TBD

6. Security Considerations

TBD

7. IANA Considerations

This architecture document makes no request for IANA action.

8. Acknowledgements

This work has benefited from the discussions of resource scheduling
over the years. In particular, the DRAGON project [DRAGON] and
[I-D.yong-ccamp-ason-gmpls-autobw-service] both provide approaches to
auto-bandwidth services in GMPLS networks.

Mehmet Toy, Lei Liu, and Khuzema Pithewan contributed the earlier
version of [I-D.chen-teas-frmwk-tts]. We would like to thank the
authors of that draft on Temporal Tunnel Services.

Thanks to Michael Scharf and Daniele Ceccarelli for useful comments
on this work.

9. Contributors

The following people contributed to discussions that led to the
development of this document:

   Dhruv Dhody
   Email: dhruv.dhody@huawei.com

10. Informative References

[DRAGON]   National Science Foundation, "http://www.maxgigapop.net/
           wp-content/uploads/The-DRAGON-Project.pdf".

[I-D.chen-teas-frmwk-tts]
           Chen, H., Toy, M., Liu, L., and K. Pithewan, "Framework
           for Temporal Tunnel Services", draft-chen-teas-frmwk-
           tts-01 (work in progress), March 2016.

[I-D.ietf-pce-stateful-pce]
           Crabbe, E., Minei, I., Medved, J., and R. Varga, "PCEP
           Extensions for Stateful PCE", draft-ietf-pce-stateful-
           pce-17 (work in progress), November 2016.

[I-D.yong-ccamp-ason-gmpls-autobw-service]
           Yong, L. and Y. Lee, "ASON/GMPLS Extension for Reservation
           and Time Based Automatic Bandwidth Service", draft-yong-
           ccamp-ason-gmpls-autobw-service-00 (work in progress),
           October 2006.

[RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
           and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
           Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
           <http://www.rfc-editor.org/info/rfc3209>.

[RFC5440]  Vasseur, JP., Ed. and JL. Le Roux, Ed., "Path Computation
           Element (PCE) Communication Protocol (PCEP)", RFC 5440,
           DOI 10.17487/RFC5440, March 2009,
           <http://www.rfc-editor.org/info/rfc5440>.

[RFC7399]  Farrel, A. and D. King, "Unanswered Questions in the Path
           Computation Element Architecture", RFC 7399,
           DOI 10.17487/RFC7399, October 2014,
           <http://www.rfc-editor.org/info/rfc7399>.

[RFC7752]  Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A., and
           S. Ray, "North-Bound Distribution of Link-State and
           Traffic Engineering (TE) Information Using BGP", RFC 7752,
           DOI 10.17487/RFC7752, March 2016,
           <http://www.rfc-editor.org/info/rfc7752>.

Authors' Addresses

Yan Zhuang
Huawei
101 Software Avenue, Yuhua District
Nanjing, Jiangsu 210012
China

Email: zhuangyan.zhuang@huawei.com

Qin Wu
Huawei
101 Software Avenue, Yuhua District

Huaimo Chen
Huawei
Boston, MA
US

Email: huaimo.chen@huawei.com

Adrian Farrel
Juniper Networks

Email: afarrel@juniper.net