Network Working Group                                          Jerry Ash
Internet Draft                                                      AT&T
Category: Experimental
<draft-ietf-tewg-diff-te-mar-03.txt>
Expiration Date: July 2004
                                                           January, 2004

      Max Allocation with Reservation Bandwidth Constraints Model for
    DiffServ-aware MPLS Traffic Engineering & Performance Comparisons

                   <draft-ietf-tewg-diff-te-mar-03.txt>

Status of this Memo

This document is an Internet-Draft and is in full conformance with
all provisions of Section 10 of RFC2026.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups.  Note that other
groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Abstract

This document complements the DiffServ-aware MPLS TE (DS-TE) requirements
document by giving a functional specification for the Maximum Allocation
with Reservation (MAR) Bandwidth Constraints Model.  Assumptions,
applicability, and examples of the operation of the MAR Bandwidth
Constraints Model are presented.  MAR performance is analyzed relative to
the criteria for selecting a Bandwidth Constraints Model, in order to
provide guidance to users implementing the model in their networks.

Table of Contents

1.  Introduction
2.  Definitions
3.  Assumptions & Applicability
4.  Functional Specification of the MAR Bandwidth Constraints Model
5.  Setting Bandwidth Constraints
6.  Example of MAR Operation
7.  Summary
8.  Security Considerations
9.  Acknowledgments
10. IANA Considerations
11. Normative References
12. Informative References
13. Intellectual Property Statement
14. Authors' Addresses
Appendix A. MAR Operation & Performance Analysis
Appendix B. Bandwidth Prediction for Path Computation

Specification of Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].

1. Introduction

DiffServ-aware MPLS traffic engineering (DS-TE) requirements and protocol
extensions are specified in [DSTE-REQ, DSTE-PROTO].  A requirement for
DS-TE implementation is the specification of Bandwidth Constraints Models
for use with DS-TE.  The Bandwidth Constraints Model provides the 'rules'
to support the allocation of bandwidth to individual class types (CTs).
CTs are groupings of service classes in the DS-TE model, which are
provided separate bandwidth allocations, priorities, and QoS objectives.
Several CTs can share a common bandwidth pool on an integrated,
multiservice MPLS/DiffServ network.

This document is intended to complement the DS-TE requirements document
[DSTE-REQ] by giving a functional specification for the Maximum
Allocation with Reservation (MAR) Bandwidth Constraints Model.  Examples
of the operation of the MAR Bandwidth Constraints Model are presented.
MAR performance is analyzed relative to the criteria for selecting a
Bandwidth Constraints Model, in order to provide guidance to users
implementing the model in their networks.

Two other Bandwidth Constraints Models are being specified for use in
DS-TE:

1. Maximum Allocation Model (MAM) [MAM] - the maximum allowable
   bandwidth usage of each CT is explicitly specified.
2. Russian Doll Model (RDM) [RDM] - the maximum allowable bandwidth
   usage is done cumulatively, by grouping successive CTs according to
   priority classes.

MAR is similar to MAM in that a maximum bandwidth allocation is given to
each CT.  However, through the use of bandwidth reservation and
protection mechanisms, CTs are allowed to exceed their bandwidth
allocations under conditions of no congestion, but revert to their
allocated bandwidths when overload and congestion occur.

All Bandwidth Constraints Models should meet these objectives:

1. applies equally when preemption is either enabled or disabled (when
   preemption is disabled, the model still works 'reasonably' well),
2. bandwidth efficiency, i.e., good bandwidth sharing among CTs under
   both normal and overload conditions,
3. bandwidth isolation, i.e., a CT cannot hog the bandwidth of another
   CT under overload conditions,
4. protection against QoS degradation, at least of the high-priority CTs
   (e.g., high-priority voice, high-priority data, etc.), and
5. reasonably simple, i.e., does not require additional IGP extensions
   and minimizes signaling load processing requirements.

In Appendix A, modeling analysis is presented which shows that the MAR
Model meets all these objectives and provides good network performance
relative to MAM and full sharing models, under normal and abnormal
operating conditions.  It is demonstrated that MAR simultaneously
achieves bandwidth efficiency, bandwidth isolation, and protection
against QoS degradation without preemption.

In Section 3 we give the assumptions and applicability, in Section 4 a
functional specification of the MAR Bandwidth Constraints Model, in
Section 5 guidance on setting the bandwidth constraints, and in Section 6
examples of its operation.  In Appendix A, MAR performance is analyzed
relative to the criteria for selecting a Bandwidth Constraints Model, in
order to provide guidance to users implementing the model in their
networks.  In Appendix B, bandwidth prediction for path computation is
discussed.

2. Definitions

For readability a number of definitions from [DSTE-REQ, DSTE-PROTO] are
repeated here:

Traffic Trunk: an aggregation of traffic flows of the same class (i.e.,
which are to be treated equivalently from the DS-TE perspective) which
are placed inside an LSP.

Class-Type (CT): the set of Traffic Trunks crossing a link that is
governed by a specific set of bandwidth constraints.  CT is used for the
purposes of link bandwidth allocation, constraint-based routing, and
admission control.  A given Traffic Trunk belongs to the same CT on all
links.

Up to 8 CTs (MaxCT = 8) are supported.  They are referred to as CTc,
0 <= c <= MaxCT-1 = 7.  Each CT is assigned either a Bandwidth
Constraint (BC) or a set of Bandwidth Constraints.  Up to 8 BCs
(MaxBC = 8) are supported; they are referred to as BCc,
0 <= c <= MaxBC-1 = 7.

TE-Class: A pair of: i. a CT, and ii. a preemption priority allowed for
that CT.  This means that an LSP transporting a Traffic Trunk from that
CT can use that preemption priority as the set-up priority, as the
holding priority, or both.

MAX_RESERVABLE_BWk: maximum reservable bandwidth on link k specifies the
maximum bandwidth that may be reserved; this may be greater than the
maximum link bandwidth, in which case the link may be oversubscribed
[OSPF-TE].

BCck: bandwidth constraint for CTc on link k = allocated (minimum
guaranteed) bandwidth for CTc on link k (see Section 4).

RBW_THRESk: reservation bandwidth threshold for link k (see Section 4).

RESERVED_BWck: reserved bandwidth-in-progress on CTc on link k
(0 <= c <= MaxCT-1); RESERVED_BWck = total amount of the bandwidth
reserved by all the established LSPs which belong to CTc.

UNRESERVED_BWk: unreserved link bandwidth on link k specifies the
amount of bandwidth not yet reserved for any CT,

   UNRESERVED_BWk = MAX_RESERVABLE_BWk -
                    sum [RESERVED_BWck (0 <= c <= MaxCT-1)]

UNRESERVED_BWck: unreserved link bandwidth on CTc on link k specifies
the amount of bandwidth not yet reserved for CTc,

   UNRESERVED_BWck = UNRESERVED_BWk - delta0/1(CTck) * RBW_THRESk

   where

   delta0/1(CTck) = 0 if RESERVED_BWck <  BCck
   delta0/1(CTck) = 1 if RESERVED_BWck >= BCck
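
The two unreserved-bandwidth quantities above follow directly from the
advertised and reserved values.  A minimal Python sketch of the
arithmetic is given below; the function and variable names are
illustrative assumptions, not part of this specification.

   def unreserved_bw(max_reservable_bw, reserved_bw):
       # UNRESERVED_BWk = MAX_RESERVABLE_BWk minus the bandwidth reserved
       # by established LSPs across all CTs on link k.
       return max_reservable_bw - sum(reserved_bw)

   def unreserved_bw_ct(c, max_reservable_bw, reserved_bw, bc, rbw_thres):
       # UNRESERVED_BWck = UNRESERVED_BWk - delta0/1(CTck) * RBW_THRESk,
       # where delta0/1(CTck) becomes 1 once CTc has reached its constraint.
       delta = 1 if reserved_bw[c] >= bc[c] else 0
       return unreserved_bw(max_reservable_bw, reserved_bw) - delta * rbw_thres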

A number of recovery mechanisms under investigation in the IETF take
advantage of the concept of bandwidth sharing across particular sets of
LSPs.  "Shared Mesh Restoration" in [GMPLS-RECOV] and the "Facility-based
Computation Model" in [MPLS-BACKUP] are example mechanisms which
increase bandwidth efficiency by sharing bandwidth across backup LSPs
protecting against independent failures.  To ensure that the notion of
RESERVED_BWck introduced in [DSTE-REQ] is compatible with such a concept
of bandwidth sharing across multiple LSPs, the wording of the definition
provided in [DSTE-REQ] is generalized.  With this generalization, the
definition is compatible with Shared Mesh Restoration defined in
[GMPLS-RECOV], so that DS-TE and Shared Mesh Restoration can operate
simultaneously, under the assumption that Shared Mesh Restoration
operates independently within each DS-TE Class-Type and does not operate
across Class-Types.  For example, backup LSPs protecting primary LSPs of
CTc must also belong to CTc; excess traffic LSPs sharing bandwidth with
backup LSPs of CTc must also belong to CTc.

3. Assumptions & Applicability

In general, DS-TE is a bandwidth allocation mechanism for different
classes of traffic allocated to various CTs (e.g., voice, normal data,
best-effort data).  Network operations functions such as capacity
design, bandwidth allocation, routing design, and network planning are
normally based on measured traffic load and forecast [ASH1].

As such, the following assumptions are made according to the operation
of MAR:

1. Connection admission control (CAC) allocates bandwidth for network
   flows/LSPs according to the traffic load assigned to each CT, based on
   measured and/or forecast traffic load.

   ...

   this adjustment; it could be short term (hours), daily, weekly,
   monthly, or otherwise.

5. No assumption is made on the order in which traffic is allocated to
   various CTs; again, traffic allocation is assumed to be based only on
   traffic load as it is measured and/or forecast.

6. If link bandwidth is exhausted on a given path for a flow/LSP/traffic
   trunk, alternate paths may be attempted to satisfy CT bandwidth
   allocation.

Note that the above assumptions are not unique to MAR, but are generic,
common assumptions for all BC Models.

4. Functional Specification of the MAR Bandwidth Constraints Model

A DS-TE LSR implementing MAR MUST support enforcement of bandwidth
constraints in compliance with the specifications in this Section.

In the MAR Bandwidth Constraints Model, the bandwidth allocation control
for each CT is based on estimated bandwidth needs, bandwidth use, and
status of links.  The LER makes needed bandwidth allocation changes and
uses [RSVP-TE], for example, to determine if link bandwidth can be
allocated to a CT.  Bandwidth allocated to individual CTs is protected as
needed but otherwise shared.  Under normal non-congested network
conditions, all CTs/services fully share all available bandwidth.  When
congestion occurs for a particular CTc, bandwidth reservation acts to
prohibit traffic from other CTs from seizing the allocated capacity for
CTc.

On a given link k, a small amount of bandwidth RBW_THRESk, the
reservation bandwidth threshold for link k, is reserved and governs the
admission control on link k.  Also associated with each CTc on link k
are the allocated bandwidth constraints BCck, which govern bandwidth
allocation and protection.  The reservation bandwidth on a link
(RBW_THRESk) can be accessed by CTc when its reserved bandwidth
(RESERVED_BWck) is below its allocated bandwidth constraint (BCck);
if RESERVED_BWck exceeds BCck, the reservation bandwidth cannot be
accessed.  In this way,
bandwidth can be fully shared among CTs if available, but is otherwise
protected by bandwidth reservation methods.

Bandwidth can be accessed for a bandwidth request = DBW for CTc on a
given link k based on the following rules:

Table 1: Rules for Admitting LSP Bandwidth Request = DBW on Link k

For an LSP on a high-priority or normal-priority CTc:

   If RESERVED_BWck <= BCc: admit if DBW <= UNRESERVED_BWk
   If RESERVED_BWck >  BCc: admit if DBW <= UNRESERVED_BWk - RBW_THRESk

   or, equivalently:

   If DBW <= UNRESERVED_BWck, admit the LSP.

For an LSP on a best-effort priority CTc:

   allocated bandwidth BCc = 0;
   DiffServ queuing admits BE packets only if there is available link
   bandwidth.

The normal semantics of setup and holding priority are applied in the
MAR Bandwidth Constraints Model, and cross-CT preemption is permitted
when preemption is enabled.

The bandwidth allocation rules defined in Table 1 are illustrated with
an example in Section 6 and simulation analysis in Appendix A.
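
To make the Table 1 check concrete, a minimal Python sketch for a
high-priority or normal-priority CTc follows; the function and variable
names are illustrative assumptions, not defined by this specification.

   def admit_lsp(dbw, c, reserved_bw, bc, max_reservable_bw, rbw_thres):
       # While CTc is at or below its constraint BCc, a request may use any
       # unreserved bandwidth on the link; once CTc exceeds BCc, it must also
       # leave the reservation threshold RBW_THRESk untouched (Table 1).
       unreserved_bw = max_reservable_bw - sum(reserved_bw)
       if reserved_bw[c] <= bc[c]:
           return dbw <= unreserved_bw
       return dbw <= unreserved_bw - rbw_thres

A best-effort CTc is not checked this way: its BCc is 0, and its packets
are admitted only by DiffServ queuing when link bandwidth is available.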

5. Setting Bandwidth Constraints

For a normal priority CTc, the bandwidth constraints BCck on link k are
set by allocating the maximum reservable bandwidth (MAX_RESERVABLE_BWk)
in proportion to the forecast or measured traffic load bandwidth
(TRAF_LOAD_BWck) for CTc on link k.  That is:

   PROPORTIONAL_BWck = TRAF_LOAD_BWck /
                       [sum {TRAF_LOAD_BWck, c = 0, MaxCT-1}] X
                       MAX_RESERVABLE_BWk
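
A small Python sketch of this proportional computation is given below;
the names and the traffic loads in the example are illustrative
assumptions only.

   def proportional_bc(traf_load_bw, max_reservable_bw):
       # BCck for a normal priority CTc is MAX_RESERVABLE_BWk scaled by the
       # share of CTc in the total measured/forecast load on link k.
       total_load = sum(traf_load_bw)
       return [load / total_load * max_reservable_bw for load in traf_load_bw]

   # Example: loads of 30, 50, and 20 units on a link with
   # MAX_RESERVABLE_BWk = 100 give BCck = [30.0, 50.0, 20.0].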

   ...

   RESERVED_BW0 > BC0 (50 > 30), and
   DBW > UNRESERVED_BW - RBW_THRES (i.e., 5 > 10 - 10)

Table 1 says the LSP is rejected/blocked.

Now let's say an LSP arrives for CT2 needing 5 units of bandwidth (i.e.,
DBW = 5).  We need to decide based on Table 1 whether to admit this
LSP or not.  Since for CT2

   RESERVED_BW2 < BC2 (10 < 20), and
   DBW < UNRESERVED_BW (i.e., 5 < 10)

Table 1 says to admit the LSP.

Hence, in the above example, in the current state of the link and the
current CT loading, CT0 and CT1 can no longer increase their bandwidth
on the link, since they are above their BCc values and there is only
RBW_THRES = 10 units of spare bandwidth left on the link.  But CT2 can
take the additional bandwidth (up to 10 units) if the demand arrives,
since it is below its BCc value.
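
The two decisions above can be reproduced mechanically with the Table 1
check.  The following Python sketch plugs in the example's link state;
the code is illustrative only.

   def admit(dbw, reserved_bw_c, bc_c, unreserved_bw, rbw_thres):
       # Table 1 check for one CT, given the link-wide unreserved bandwidth.
       if reserved_bw_c <= bc_c:
           return dbw <= unreserved_bw
       return dbw <= unreserved_bw - rbw_thres

   # Link state from the example: UNRESERVED_BW = 10, RBW_THRES = 10.
   print(admit(5, 50, 30, 10, 10))  # CT0: False, since 50 > 30 and 5 > 10 - 10
   print(admit(5, 10, 20, 10, 10))  # CT2: True, since 10 < 20 and 5 < 10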

7. Summary

The proposed MAR Bandwidth Constraints Model includes the following: a)
allocate bandwidth to individual CTs, b) protect allocated bandwidth by
bandwidth reservation methods, as needed, but otherwise fully share
bandwidth, c) differentiate high-priority, normal-priority, and
best-effort priority services, and d) provide admission control to
reject connection requests when needed to meet performance objectives.

Modeling results presented in Appendix A show that MAR bandwidth
allocation a) achieves greater efficiency in bandwidth sharing while
still providing bandwidth isolation and protection against QoS
degradation, and b) achieves service differentiation for high-priority,
normal-priority, and best-effort priority services.

8. Security Considerations

Security considerations related to the use of DS-TE are discussed in
[DSTE-PROTO].  They apply independently of the Bandwidth Constraints
Model in use, including the MAR Model specified in this document.

9. Acknowledgements

DS-TE and Bandwidth Constraints Models have been an active area of
discussion in the TEWG.  I would like to thank Wai Sum Lai for his
support and review of this draft.  I also appreciate helpful discussions
with Francois Le Faucheur.

10. IANA Considerations

[DSTE-PROTO] defines a new name space for "Bandwidth Constraints Model
Id".  The guidelines for allocation of values in that name space are
detailed in Section 14 of [DSTE-PROTO].  In accordance with these
guidelines, IANA is requested to assign a Bandwidth Constraints Model
Id for MAR from the range 0-127 (which is to be managed as per the
"Specification Required" policy defined in [IANA-CONS]).

Bandwidth Constraints Model Id = TBD is to be allocated by IANA to MAR.

<IANA-note> To be removed by the RFC editor at the time of publication:
We request IANA to assign value 2 for the MAR Model.  Once the value
has been assigned, please replace "TBD" above by the assigned value.
</IANA-note>

11. Normative References

[DSTE-REQ] Le Faucheur, F., Lai, W., et al., "Requirements for Support
of Diff-Serv-aware MPLS Traffic Engineering," RFC 3564, July 2003.

[DSTE-PROTO] Le Faucheur, F., et al., "Protocol Extensions for Support
of Diff-Serv-aware MPLS Traffic Engineering," work in progress.

[RFC2119] Bradner, S., "Key Words for Use in RFCs to Indicate Requirement
Levels," BCP 14, RFC 2119, March 1997.

12. Informative References

[AKI] Akinpelu, J. M., "The Overload Performance of Engineered Networks
with Nonhierarchical & Hierarchical Routing," BSTJ, Vol. 63, 1984.

[ASH1] Ash, G. R., "Dynamic Routing in Telecommunications Networks,"
McGraw-Hill, 1998.

[ASH2] Ash, G. R., et al., "Routing Evolution in Multiservice Integrated
Voice/Data Networks," Proceedings of ITC-16, Edinburgh, June 1999.

[ASH3] Ash, G. R., "Performance Evaluation of QoS-Routing Methods for
IP-Based Multiservice Networks," Computer Communications Magazine,
May 2003.

[BUR] Burke, P. J., "Blocking Probabilities Associated with Directional
Reservation," unpublished memorandum, 1961.

[DSTE-PERF] Lai, W., "Bandwidth Constraints Models for DiffServ-TE:
Performance Evaluation," work in progress.

[E.360.1 --> E.360.7] ITU-T Recommendations, "QoS Routing & Related
Traffic Engineering Methods for Multiservice TDM-, ATM-, & IP-Based
Networks."

[GMPLS-RECOV] Lang, J., et al., "Generalized MPLS Recovery Functional
Specification," work in progress.

[IANA-CONS] Narten, T., Alvestrand, H., "Guidelines for Writing an IANA
Considerations Section in RFCs," BCP 26, RFC 2434, October 1998.

[KRU] Krupp, R. S., "Stabilization of Alternate Routing Networks,"
Proceedings of ICC, Philadelphia, 1982.

[LAI] Lai, W., "Traffic Engineering for MPLS," Internet Performance and
Control of Network Systems III Conference, SPIE Proceedings Vol. 4865,
pp. 256-267, Boston, Massachusetts, USA, 29 July-1 August 2002
(http://www.columbia.edu/~ffl5/waisum/bcmodel.pdf).

[MAM] Le Faucheur, F., Lai, W., "Maximum Allocation Bandwidth
Constraints Model for Diff-Serv-aware MPLS Traffic Engineering," work in
progress.

[MPLS-BACKUP] Vasseur, J. P., et al., "MPLS Traffic Engineering Fast
Reroute: Bypass Tunnel Path Computation for Bandwidth Protection," work
in progress.

[MUM] Mummert, V. S., "Network Management and Its Implementation on the
No. 4ESS," International Switching Symposium, Japan, 1976.

[NAK] Nakagome, Y., Mori, H., "Flexible Routing in the Global
Communication Network," Proceedings of ITC-7, Stockholm, 1973.

[OSPF-TE] Katz, D., et al., "Traffic Engineering (TE) Extensions to
OSPF Version 2," RFC 3630, September 2003.

[RDM] Le Faucheur, F., "Russian Dolls Bandwidth Constraints Model for
Diff-Serv-aware MPLS Traffic Engineering," work in progress.

[RFC2026] Bradner, S., "The Internet Standards Process -- Revision 3,"
BCP 9, RFC 2026, October 1996.

[RSVP-TE] Awduche, D., et al., "RSVP-TE: Extensions to RSVP for LSP
Tunnels," RFC 3209, December 2001.

13. Intellectual Property Statement

AT&T Corporation may own intellectual property applicable to this
contribution.  The IETF has been notified of AT&T's licensing intent
for the specification contained in this document.  See
http://www.ietf.org/ietf/IPR/ATT-GENERAL.txt for AT&T's IPR statement.

14. Authors' Addresses

Jerry Ash
AT&T
Room MT D5-2A01
200 Laurel Avenue
Middletown, NJ 07748, USA
Phone: +1 732-420-4578
Email: gash@att.com

Appendix A. MAR Operation & Performance Analysis

A.1 MAR Operation

In the MAR Bandwidth Constraints Model, the bandwidth allocation control
for each CT is based on estimated bandwidth needs, bandwidth use, and
status of links.  The LER makes needed bandwidth allocation changes and
uses [RSVP-TE], for example, to determine if link bandwidth can be
allocated to a CT.  Bandwidth allocated to individual CTs is protected as
needed but otherwise shared.  Under normal non-congested network
conditions, all CTs/services fully share all available bandwidth.  When
congestion occurs for a particular CTc, bandwidth reservation acts to
prohibit traffic from other CTs from seizing the allocated capacity for
CTc.  Associated with each CT is the allocated bandwidth constraint
(BCc), which governs bandwidth allocation and protection; these
parameters are illustrated with examples in this Appendix.

In performing MAR bandwidth allocation for a given flow/LSP, the LER
first determines the egress LSR address, service-identity, and CT.  The
connection request is allocated an equivalent bandwidth to be routed on
a particular CT.  The LER then accesses the CT priority, QoS/traffic
parameters, and routing table between the LER and egress LSR, and sets
up the connection request using the MAR bandwidth allocation rules.  The
LER selects a first choice path and determines if bandwidth can be
allocated on the path based on the MAR bandwidth allocation rules given
in Section 4.  If the first choice path has insufficient bandwidth, the

   ...

another LSP could be tried, such as A-C-D-E.  Hence determination of the
link load state is necessary for MAR bandwidth allocation, and two link
load states are distinguished: available (non-reserved) bandwidth
(ABW_STATE) and reserved-bandwidth (RBW_STATE).  Management of CT
capacity uses the link state and the allowed load state threshold to
determine if a bandwidth allocation request can be accepted on a given
CT.
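
The path selection behavior described above (try the first choice path,
then alternate paths, admitting on a path only if every link passes the
per-CT check) can be sketched as follows in Python.  The helper names and
the link-state representation are illustrative assumptions; the per-link
load-state machinery (ABW_STATE/RBW_STATE) is simplified to the Table 1
test.

   def admit_on_link(dbw, reserved_bw_c, bc_c, unreserved_bw, rbw_thres):
       # Table 1 admission check for the requesting CTc on one link.
       if reserved_bw_c <= bc_c:
           return dbw <= unreserved_bw
       return dbw <= unreserved_bw - rbw_thres

   def select_path(candidate_paths, dbw, link_state):
       # candidate_paths lists the first choice path first, then alternates;
       # each path is a list of link ids, and link_state[k] is a tuple
       # (reserved_bw_c, bc_c, unreserved_bw, rbw_thres) for link k.
       for path in candidate_paths:
           if all(admit_on_link(dbw, *link_state[k]) for k in path):
               return path
       return None  # no candidate path can accommodate the request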

A.2 Analysis of MAR Performance

In this Appendix, modeling analysis is presented in which MAR bandwidth
allocation is shown to provide good network performance relative to full
sharing models, under normal and abnormal operating conditions.  A
large-scale DiffServ-aware MPLS traffic engineering simulation model is
used, in which several CTs with different priority classes share the
pool of bandwidth on a multiservice, integrated voice/data network.  MAR
methods have also been analyzed in practice for TDM-based networks
[ASH1], and in modeling studies for IP-based networks [ASH2, ASH3,
E.360].

All Bandwidth Constraints Models should meet these objectives:

1. applies equally when preemption is either enabled or disabled (when
   preemption is disabled, the model still works 'reasonably' well),
2. bandwidth efficiency, i.e., good bandwidth sharing among CTs under
   both normal and overload conditions,
3. bandwidth isolation, i.e., a CT cannot hog the bandwidth of another
   CT under overload conditions,
4. protection against QoS degradation, at least of the high-priority CTs
   (e.g., high-priority voice, high-priority data, etc.), and
5. reasonably simple, i.e., does not require additional IGP extensions
   and minimizes signaling load processing requirements.

The use of any given Bandwidth Constraints Model has significant impacts
on the performance of a network, as explained later.  Therefore, the
criteria used to select a model must enable us to evaluate how a
particular model delivers its performance, relative to other models.
Lai [LAI, DSTE-PERF] has analyzed the MAM and RDM Models and provided
valuable insights into the relative performance of these models under
various network conditions.

In environments where preemption is not used, MAM is attractive because
a) it is good at achieving isolation, and b) it achieves reasonable
bandwidth efficiency with some QoS degradation of lower classes.  When
preemption is used, RDM is attractive because it can achieve bandwidth
efficiency under normal load.  However, RDM cannot provide service
isolation under high load or when preemption is not used.

Our performance analysis of MAR bandwidth allocation methods is based on
a full-scale, 135-node simulation model of a national network together
with a multiservice traffic demand model to study various scenarios and
tradeoffs [ASH3, E.360].  Three levels of traffic priority - high,
normal, and best effort - are given across 5 CTs: normal-priority voice,
high-priority voice, normal-priority data, high-priority data, and
best-effort data.

The performance analyses for overloads and failures include a) the MAR
Bandwidth Constraints Model, as specified in Section 4, b) the MAM
Bandwidth Constraints Model, and c) the No-DSTE Bandwidth Constraints
Model.

The allocated bandwidth constraints for MAR are as described in Section
5:

   Normal priority CTs:      BCck = PROPORTIONAL_BWck
   High priority CTs:        BCck = FACTOR X PROPORTIONAL_BWck
   Best-effort priority CTs: BCck = 0

In the MAM Bandwidth Constraints Model, the bandwidth constraints for
each CT are set to a multiple of the proportional bandwidth allocation:

   Normal priority CTs:      BCck = FACTOR1 X PROPORTIONAL_BWck
   High priority CTs:        BCck = FACTOR2 X PROPORTIONAL_BWck
   Best-effort priority CTs: BCck = 0

Simulations show that, for MAM, the sum of the BCc values should exceed
MAX_RESERVABLE_BWk for better efficiency, as follows:

1. For the normal priority CTs, the BCc values need to be over-allocated
   to get reasonable performance.  It was found that over-allocating by
   100%, that is, setting FACTOR1 = 2, gave reasonable performance.
2. The high priority CTs can be over-allocated by a larger multiple
   FACTOR2 in MAM, and this gives better performance.
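
As a compact restatement of these settings, here is a Python sketch.  It
is illustrative only: the draft fixes FACTOR1 = 2 for the MAM runs, while
the MAR FACTOR and MAM FACTOR2 values are tuning parameters whose values
are not given here.

   def mar_bc(proportional_bw, priority, factor):
       # MAR: normal CTs get their proportional share, high priority CTs get
       # FACTOR times that share, and best-effort CTs get BCc = 0.
       if priority == "best-effort":
           return 0.0
       return factor * proportional_bw if priority == "high" else proportional_bw

   def mam_bc(proportional_bw, priority, factor1, factor2):
       # MAM: normal and high priority CTs are over-allocated multiples of the
       # proportional share (FACTOR1 = 2 gave reasonable performance; a larger
       # FACTOR2 for high priority CTs performed better).
       if priority == "best-effort":
           return 0.0
       return (factor2 if priority == "high" else factor1) * proportional_bw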

The rather large amount of over-allocation improves efficiency, but
somewhat defeats the 'bandwidth protection/isolation' needed with a BC
Model, since one CT can now invade the bandwidth allocated to another
CT.  Each CT is restricted to its allocated bandwidth constraint BCck,
which is the maximum level of bandwidth allocated to each CT on each
link, as in normal operation of MAM.

In the No-DSTE Bandwidth Constraints Model, no reservation or protection
of CT bandwidth is applied, and bandwidth allocation requests are
admitted if bandwidth is available.  Furthermore, no queuing priority
is applied to any of the CTs in the No-DSTE Bandwidth Constraints Model.

Table 2 gives performance results for a six-times overload on a single
network node at Oakbrook, IL.  The numbers given in the table are the
total network percent lost (blocked) or delayed traffic.  Note that in
the focused overload scenario studied here, the percent lost/delayed
traffic on the Oakbrook node is much higher than the network-wide
average values given.

                               Table 2
           Performance Comparison for MAR, MAM, & No-DSTE
                 Bandwidth Constraints (BC) Models
                  6X Focused Overload on Oakbrook
             (Total Network % Lost/Delayed Traffic)

   Class Type                  MAR BC    MAM BC    No-DSTE BC
                               Model     Model     Model
   NORMAL PRIORITY VOICE        0.00      1.97      10.30
   HIGH PRIORITY VOICE          0.00      0.00       7.05
   NORMAL PRIORITY DATA         0.00      6.63      13.30
   HIGH PRIORITY DATA           0.00      0.00       7.05
   BEST EFFORT PRIORITY DATA   12.33     11.92       9.65

Clearly the performance is better with MAR bandwidth allocation, and the
results show that performance improves when bandwidth reservation is
used.  The reason for the poor performance of the No-DSTE Model, without
bandwidth reservation, is the lack of protection of allocated bandwidth.
If we add the bandwidth reservation mechanism, then performance of the
network is greatly improved.

The simulations showed that the performance of MAM is quite sensitive to
the over-allocation factors discussed above.  For example, if the BCc
values are proportionally allocated with FACTOR1 = 1, then the results
are much worse, as shown in Table 3:

                               Table 3
      Performance Comparison for MAM Bandwidth Constraints Model
                with Different Over-allocation Factors
                  6X Focused Overload on Oakbrook
             (Total Network % Lost/Delayed Traffic)

   Class Type                  (FACTOR1 = 1)   (FACTOR1 = 2)
   NORMAL PRIORITY VOICE           31.69            1.97
   HIGH PRIORITY VOICE              0.00            0.00
   NORMAL PRIORITY DATA            31.22            6.63
   HIGH PRIORITY DATA               0.00            0.00
   BEST EFFORT PRIORITY DATA        8.76           11.92

Table 4 illustrates the performance of the MAR, MAM, and No-DSTE
Bandwidth Constraints Models for a high-day network load pattern with a
50% general overload.  The numbers given in the table are the total
network percent lost (blocked) or delayed traffic.

                               Table 4
           Performance Comparison for MAR, MAM, & No-DSTE
                 Bandwidth Constraints (BC) Models
                       50% General Overload
             (Total Network % Lost/Delayed Traffic)

   Class Type                  MAR BC    MAM BC    No-DSTE BC
                               Model     Model     Model
   NORMAL PRIORITY VOICE        0.02      0.13       7.98
   HIGH PRIORITY VOICE          0.00      0.00       8.94
   NORMAL PRIORITY DATA         0.00      0.26       6.93
   HIGH PRIORITY DATA           0.00      0.00       8.94
   BEST EFFORT PRIORITY DATA   10.41     10.39       8.40

Again, we can see the performance is always better when MAR bandwidth
allocation and reservation is used.

Table 5 illustrates the performance of the MAR, MAM, and No-DSTE
Bandwidth Constraints Models for a single link failure scenario
(2 OC-48).  The numbers given in the table are the total network percent
lost (blocked) or delayed traffic.

                               Table 5
           Performance Comparison for MAR, MAM, & No-DSTE
                 Bandwidth Constraints (BC) Models
                   Single Link Failure (2 OC-48)
             (Total Network % Lost/Delayed Traffic)

   Class Type                  MAR BC    MAM BC    No-DSTE BC
                               Model     Model     Model
   NORMAL PRIORITY VOICE        0.00      0.62       0.63
   HIGH PRIORITY VOICE          0.00      0.31       0.32
   NORMAL PRIORITY DATA         0.00      0.48       0.50
   HIGH PRIORITY DATA           0.00      0.31       0.32
   BEST EFFORT PRIORITY DATA    0.12      0.72       0.63

Again, we can see the performance is always better when MAR bandwidth
allocation and reservation is used.

Table 6 illustrates the performance of the MAR, MAM, and No-DSTE
Bandwidth Constraints Models for a multiple link failure scenario (3
links with 2 OC-48, 2 OC-12, and 1 OC-12 capacity, respectively).  The
numbers given in the table are the total network percent lost (blocked)
or delayed traffic.

                               Table 6
           Performance Comparison for MAR, MAM, & No-DSTE
                 Bandwidth Constraints (BC) Models
      Multiple Link Failure (3 Links with 2 OC-48, 2 OC-12, 1 OC-12,
                            Respectively)
             (Total Network % Lost/Delayed Traffic)

   Class Type                  MAR BC    MAM BC    No-DSTE BC
                               Model     Model     Model
   NORMAL PRIORITY VOICE        0.00      0.91       0.92
   HIGH PRIORITY VOICE          0.00      0.44       0.44
   NORMAL PRIORITY DATA         0.00      0.70       0.72
   HIGH PRIORITY DATA           0.00      0.44       0.44
   BEST EFFORT PRIORITY DATA    0.14      1.03       1.04

Again, we can see the performance is always better when MAR bandwidth
allocation and reservation is used.

Lai's results [LAI, DSTE-PERF] show the trade-off between bandwidth
sharing and service protection/isolation, using an analytic model of a
single link.  He shows that RDM has a higher degree of sharing than MAM.
Furthermore, for a single link, the overall loss probability is the
smallest under full sharing and largest under MAM, with RDM being
intermediate.  Hence, on a single link, Lai shows that the full sharing
model yields the highest link efficiency and MAM the lowest, and that
full sharing has the poorest service protection capability.
The results of the present study show that when considering a network
context, in which there are many links and multiple-link routing paths
are used, full sharing does not necessarily lead to maximum network-wide
bandwidth efficiency. In fact, the results in Table 4 show that the
No-DSTE Model not only degrades total network throughput, but also
degrades the performance of every CT that should be protected. Allowing
more bandwidth sharing may improve performance up to a point, but can
severely degrade performance if care is not taken to protect allocated
bandwidth under congestion.

Both Lai's study and this study show that increasing the degree of
bandwidth sharing among the different CTs leads to a tighter coupling
between CTs. Under normal loading conditions, there is adequate capacity
for each CT, which minimizes the effect of such coupling. Under overload
conditions, when there is a scarcity of capacity, such coupling can
cause severe degradation of service, especially for the lower priority
CTs.

Thus, the objective of maximizing efficient bandwidth usage, as stated
in the Bandwidth Constraints Model objectives, must be exercised with
care. Due consideration also needs to be given to achieving bandwidth
isolation under overload, in order to minimize the effect of
interactions among the different CTs. The proper tradeoff of bandwidth
sharing and bandwidth isolation needs to be achieved in the selection of
a Bandwidth Constraints Model. Bandwidth reservation supports greater
efficiency in bandwidth sharing while still providing bandwidth
isolation and protection against QoS degradation.

In summary, the proposed MAR Bandwidth Constraints Model includes the
following: a) allocate bandwidth to individual CTs, b) protect allocated
bandwidth by bandwidth reservation methods, as needed, but otherwise
fully share bandwidth, c) differentiate high-priority, normal-priority,
and best-effort priority services, and d) provide admission control to
reject connection requests when needed to meet performance objectives.
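
As an informal illustration only (not part of the MAR specification),
the following Python sketch shows how items (a), (b), and (d) above can
combine into a per-link admission decision, using the UNRESERVED_BWck
computation given in Appendix B. All names and numbers are illustrative
assumptions, not normative values.

   # Illustrative sketch only: per-link MAR-style admission check.
   def mar_admit(req_bw, ct, reserved_bw, bc, max_reservable_bw, rbw_thres):
       """Return True if a req_bw request for class type ct fits on the link.

       reserved_bw[c] -- bandwidth currently reserved by CTc on this link
       bc[c]          -- bandwidth constraint (allocation) BCc for CTc
       rbw_thres      -- reservation bandwidth threshold RBW_THRES
       """
       # Unreserved bandwidth on the link as a whole.
       unreserved_bw_k = max_reservable_bw - sum(reserved_bw.values())
       # Once CTc has reached its allocation BCc, RBW_THRES is set aside
       # for CTs that are still below their allocations.
       delta = 1 if reserved_bw[ct] >= bc[ct] else 0
       unreserved_bw_ck = unreserved_bw_k - delta * rbw_thres
       return req_bw <= unreserved_bw_ck

   # Example (all values assumed, in Mb/s):
   reserved = {0: 900.0, 1: 1400.0}     # bandwidth reserved per CT
   bc       = {0: 1000.0, 1: 1300.0}    # BCc allocations
   mar_admit(100.0, 1, reserved, bc, 2400.0, 100.0)   # False: CT1 over BC1
   mar_admit(100.0, 0, reserved, bc, 2400.0, 100.0)   # True:  CT0 under BC0
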
In the modeling results, the MAR Bandwidth Constraints Model compares
favorably with methods that do not use bandwidth reservation. In
particular, some of the conclusions from the modeling are as follows:

o  MAR bandwidth allocation is effective in improving performance over
   methods that lack bandwidth reservation and that allow more bandwidth
   sharing under congestion,

o  MAR achieves service differentiation for high-priority,
   normal-priority, and best-effort priority services,

o  bandwidth reservation supports greater efficiency in bandwidth
   sharing while still providing bandwidth isolation and protection
   against QoS degradation, and is critical to stable and efficient
   network performance.

Appendix B. Bandwidth Prediction for Path Computation

As discussed in [DSTE-PROTO], there are potential advantages for a
Head-end in trying to predict the impact of an LSP on the unreserved
bandwidth when computing the path for the LSP. One example would be to
perform better load-distribution of multiple LSPs across multiple
paths. Another example would be to avoid CAC rejection when the LSP
would no longer fit on a link after establishment.

Where such predictions are used by Head-ends, the optional Bandwidth
Constraints sub-TLV and the optional Maximum Reservable Bandwidth
sub-TLV MAY be advertised in the IGP. These can be used by Head-ends
to predict how an LSP affects unreserved bandwidth values. Such
predictions can be made with MAR by using the unreserved bandwidth
values advertised by the IGP, as discussed in Sections 2 and 4:

   UNRESERVED_BWck = MAX_RESERVABLE_BWk - RESERVED_BWk -
                     delta0/1(CTck) * RBW_THRESk

   where

   RESERVED_BWk   = total bandwidth reserved by all CTs on link k
                    (the sum of RESERVED_BWck over all c)
   delta0/1(CTck) = 0 if RESERVED_BWck < BCck
   delta0/1(CTck) = 1 if RESERVED_BWck >= BCck

Furthermore, the following estimate can be made for RBW_THRESk:
RBW_THRESk = RBW_% * MAX_RESERVABLE_BWk,
where RBW_% is a locally configured variable, which could take on
different values for different link speeds. This information
could be used in conjunction with the BC sub-TLV,
MAX_RESERVABLE_BW sub-TLV, and UNRESERVED_BW sub-TLV to make
predictions of available bandwidth on each link for each CT.
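
For concreteness, the following Python sketch (again illustrative only)
applies the two formulas above at a Head-end to predict how a candidate
LSP would change UNRESERVED_BWck on a link. RBW_PERCENT plays the role
of the locally configured RBW_% variable, and the per-CT reserved
bandwidth estimates are assumed to have been derived from the
advertised BC, MAX_RESERVABLE_BW, and UNRESERVED_BW sub-TLV values.

   # Illustrative sketch only: Head-end prediction of UNRESERVED_BWck.
   def rbw_thres(max_reservable_bw, rbw_percent):
       # RBW_THRESk = RBW_% * MAX_RESERVABLE_BWk
       return rbw_percent * max_reservable_bw

   def predict_unreserved_bw(ct, reserved_bw, bc, max_reservable_bw,
                             rbw_percent):
       # UNRESERVED_BWck = MAX_RESERVABLE_BWk - RESERVED_BWk
       #                   - delta0/1(CTck) * RBW_THRESk
       reserved_bw_k = sum(reserved_bw.values())
       delta = 1 if reserved_bw[ct] >= bc[ct] else 0
       return (max_reservable_bw - reserved_bw_k
               - delta * rbw_thres(max_reservable_bw, rbw_percent))

   # Example (all values assumed, in Mb/s): effect of a 50 Mb/s CT1 LSP.
   reserved = {0: 800.0, 1: 600.0}        # estimated per-CT reservations
   bc       = {0: 1000.0, 1: 700.0}       # BCck from the BC sub-TLV
   before = predict_unreserved_bw(1, reserved, bc, 2400.0, 0.05)  # 1000.0
   reserved[1] += 50.0                    # tentatively place the LSP
   after  = predict_unreserved_bw(1, reserved, bc, 2400.0, 0.05)  #  950.0
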
Since admission control algorithms are left for vendor differentiation,
predictions can only be performed effectively when the Head-end LSR
bases them on the same (or a very similar) admission control algorithm
as is used by the other LSRs.

There may be occasional rejected LSPs when Head-ends are establishing
LSPs through a common link. As an example, consider some link L and two
Head-ends H1 and H2. If only H1 or only H2 is establishing LSPs through
L, then the prediction is accurate. But if both H1 and H2 are
establishing LSPs through L at the same time, then the prediction would
not work perfectly. That is, CAC will occasionally reject an LSP on a
link where such 'race' conditions occur. Also, as mentioned in Appendix
A, such prediction is optional and outside the scope of this document.

Full Copyright Statement

Copyright (C) The Internet Society (2004). All Rights Reserved.

This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it or
assist in its implementation may be prepared, copied, published and
distributed, in whole or in part, without restriction of any kind,
provided that the above copyright notice and this paragraph are included
on all such copies and derivative works.

However, this document itself may not be modified in any way, such as by
removing the copyright notice or references to the Internet Society or
other Internet organizations, except as needed for the purpose of
developing Internet standards in which case the procedures for
copyrights defined in the Internet Standards process must be followed,
or as required to translate it into languages other than English.