Network Working Group                                          Jerry Ash
Internet Draft                                                      AT&T
Category: Experimental
<draft-ietf-tewg-diff-te-mar-01.txt>
Expiration Date: December 2003
                                                              June, 2003

     Max Allocation with Reservation Bandwidth Constraint Model for
                MPLS/DiffServ TE & Performance Comparisons
                  <draft-ietf-tewg-diff-te-mar-01.txt>
Status of this Memo

This document is an Internet-Draft and is in full conformance with
all provisions of Section 10 of RFC2026.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that other
groups may also distribute working documents as Internet-Drafts.

skipping to change at line 50

constraint model are presented. MAR performance is analyzed relative to
the criteria for selecting a bandwidth constraint model, in order to
provide guidance to user implementation of the model in their networks.
Table of Contents

1. Introduction
2. Definitions
3. Assumptions & Applicability
4. Functional Specification of the MAR Bandwidth Constraint Model
5. Setting Bandwidth Constraints
6. Example of MAR Operation
7. Summary
8. Security Considerations
9. Acknowledgments
10. References
11. Authors' Addresses
ANNEX A. MAR Operation & Performance Analysis
1. Introduction

DiffServ-aware MPLS traffic engineering (DSTE) requirements and protocol
extensions are specified in [DSTE-REQ, DSTE-PROTO]. A requirement for
DSTE implementation is the specification of bandwidth constraint models
for use with DSTE. The bandwidth constraint model provides the 'rules'
to support the allocation of bandwidth to individual class types (CTs).
CTs are groupings of service classes in the DSTE model, which are

skipping to change at line 82

[DSTE-REQ] by giving a functional specification for the Maximum
Allocation with Reservation (MAR) bandwidth constraint model. Examples
of the operation of the MAR bandwidth constraint model are presented.
MAR performance is analyzed relative to the criteria for selecting a
bandwidth constraint model, in order to provide guidance to user
implementation of the model in their networks.

Two other bandwidth constraint models are being specified for use in
DSTE:
1. maximum allocation model (MAM) [MAM1, MAM2] - the maximum allowable
bandwidth usage of each CT is explicitly specified.
2. Russian doll model (RDM) [RDM] - the maximum allowable bandwidth
usage is done cumulatively by grouping successive CTs according to
priority classes.

MAR is similar to MAM in that a maximum bandwidth allocation is given to
each CT. However, through the use of bandwidth reservation and
protection mechanisms, CTs are allowed to exceed their bandwidth
allocations under conditions of no congestion but revert to their
allocated bandwidths when overload and congestion occurs.
All bandwidth constraint models should meet these objectives:

1. applies equally when preemption is either enabled or disabled (when
preemption is disabled, the model still works 'reasonably' well),
2. bandwidth efficiency, i.e., good bandwidth sharing among CTs under
both normal and overload conditions,
3. bandwidth isolation, i.e., a CT cannot hog the bandwidth of another
CT under overload conditions,
4. protection against QoS degradation, at least of the high-priority CTs
(e.g., high-priority voice, high-priority data, etc.), and
5. reasonably simple, i.e., does not require additional IGP extensions
and minimizes signaling load processing requirements.
In Annex A, modeling analysis is presented which shows that the MAR
model meets all these objectives, and provides good network performance
relative to MAM and full sharing models, under normal and abnormal
operating conditions. It is demonstrated that MAR simultaneously
achieves bandwidth efficiency, bandwidth isolation, and protection
against QoS degradation without preemption.

In Section 3 we give the assumptions and applicability, in Section 4 a
functional specification of the MAR bandwidth constraint model, in
Section 5 guidance on setting the bandwidth constraints, and in
Section 6 an example of its operation. In Annex A, MAR performance is
analyzed relative to the criteria for selecting a bandwidth constraint
model, in order to provide guidance to user implementation of the model
in their networks.
2. Definitions

For readability a number of definitions from [DSTE-REQ, DSTE-PROTO] are
repeated here:

skipping to change at line 136

Traffic Trunk: an aggregation of traffic flows of the same class (i.e.,
which are to be treated equivalently from the DSTE perspective) which
are placed inside an LSP.

Class-Type (CT): the set of Traffic Trunks crossing a link that is
governed by a specific set of Bandwidth Constraints. CT is used for the
purposes of link bandwidth allocation, constraint-based routing, and
admission control. A given Traffic Trunk belongs to the same CT on all
links.
Up to 8 CTs (MaxCT = 8) are supported. They are referred to as CTc,
0 <= c <= MaxCT-1 = 7. Each CT is assigned either a Bandwidth
Constraint, or a set of Bandwidth Constraints. Up to 8 Bandwidth
Constraints (MaxBC = 8) are supported and they are referred to as BCc,
0 <= c <= MaxBC-1 = 7.

TE-Class: A pair of: i. a CT ii. a preemption priority allowed for that
CT. This means that an LSP transporting a Traffic Trunk from that CT can
use that preemption priority as the set-up priority, as the holding
priority, or both.
MAX_RESERVABLE_BWk: maximum reservable bandwidth on link k specifies the
maximum bandwidth that may be reserved; this may be greater than the
maximum link bandwidth, in which case the link may be oversubscribed
[KATZ-YEUNG].

RESERVED_BWck: reserved bandwidth-in-progress on CTc on link k (0 <= c
<= MaxCT-1), RESERVED_BWck = sum of the bandwidth reserved by all
established LSPs which belong to CTc.

UNRESERVED_BWck: unreserved link bandwidth on CTc on link k specifies
the amount of bandwidth not yet reserved for CTc, UNRESERVED_BWck =
MAX_RESERVABLE_BWk - sum [RESERVED_BWck (0 <= c <= MaxCT-1)].

BCck: bandwidth constraint for CTc on link k = allocated (minimum
guaranteed) bandwidth for CTc on link k (see Section 4).

RBW_THRESk: reservation bandwidth threshold for link k (see Section 4).
3. Assumptions & Applicability

In general, DSTE is a bandwidth allocation mechanism for different
classes of traffic allocated to various CTs (e.g., voice, normal data,
best-effort data). Network operations functions such as capacity
design, bandwidth allocation, routing design, and network planning are
normally based on measured traffic load and forecast [ASH1].

As such, the following assumptions are made according to the operation

skipping to change at line 188

traffic measurement and forecast.
2. CAC could allocate bandwidth per flow, per LSP, per traffic trunk, or
otherwise. That is, no specific assumption is made on a specific CAC
method, only that CT bandwidth allocation is related to the
measured/forecast traffic load, as per assumption #1.
3. CT bandwidth allocation is adjusted up or down according to
measured/forecast traffic load. No specific time period is assumed for
this adjustment; it could be short term (hours), daily, weekly, monthly,
or otherwise.
4. Capacity management and CT bandwidth allocation thresholds (e.g.,
BCc) are designed according to traffic load, and are based on traffic
measurement and forecast. Again, no specific time period is assumed for
this adjustment; it could be short term (hours), daily, weekly, monthly,
or otherwise.
5. No assumption is made on the order in which traffic is allocated to
various CTs; again, traffic allocation is assumed to be based only on
traffic load as it is measured and/or forecast.
6. If link bandwidth is exhausted on a given path for a flow/LSP/traffic
trunk, alternate paths may be attempted to satisfy CT bandwidth
allocation.

Note that the above assumptions are not unique to MAR, but are generic,
common assumptions for all BC models.
4. Functional Specification of the MAR Bandwidth Constraint Model

In the MAR bandwidth constraint model, the bandwidth allocation control
for each CT is based on estimated bandwidth needs, bandwidth use, and
the status of links. The LER makes needed bandwidth allocation changes,
and uses [RSVP-TE], for example, to determine if link bandwidth can be
allocated to a CT. Bandwidth allocated to individual CTs is protected
as needed but otherwise shared. Under normal non-congested network
conditions, all CTs/services fully share all available bandwidth. When
congestion occurs for a particular CTc, bandwidth reservation acts to
prohibit traffic from other CTs from seizing the allocated capacity for
CTc.
On a given link k, a small amount of bandwidth RBW_THRESk, the
reservation bandwidth threshold for link k, is reserved and governs the
admission control on link k. Also associated with each CTc on link k
are the allocated bandwidth constraints BCck to govern bandwidth
allocation and protection. The reservation bandwidth on a link,
RBW_THRESk, can be accessed when a given CTc has bandwidth-in-use
RESERVED_BWck below its allocated bandwidth constraint BCck. However,
if RESERVED_BWck exceeds its allocated bandwidth constraint BCck, then
the reservation bandwidth RBW_THRESk cannot be accessed. In this way,
bandwidth can be fully shared among CTs if available, but is otherwise
protected by bandwidth reservation methods.

Bandwidth can be accessed for a bandwidth request = DBW for CTc on a
given link k based on the following rules:

Table 1: Rules for Admitting LSP Bandwidth Request = DBW on Link k

For an LSP on a high-priority or normal-priority CTc:
If RESERVED_BWck <= BCck: admit if DBW <= UNRESERVED_BWk
If RESERVED_BWck > BCck:  admit if DBW <= UNRESERVED_BWk - RBW_THRESk

For an LSP on a best-effort priority CTc:
allocated bandwidth BCck = 0;
DiffServ queuing admits BE packets only if there is available link
bandwidth.

The normal semantics of setup and holding priority are applied in the
MAR bandwidth constraint model, and cross-CT preemption is permitted
when preemption is enabled.

The bandwidth allocation rules defined in Table 1 are illustrated with
an example in Section 6 and simulation analysis in ANNEX A.
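As an informal illustration (not part of this specification), the
Table 1 rule for a high- or normal-priority CTc can be sketched in
Python; the parameter names mirror the quantities defined in Section 2
but are otherwise assumptions of this sketch:

```python
def mar_admit(dbw, reserved_bw_ct, bc_ct, unreserved_bw, rbw_thres):
    """Table 1 admission check for an LSP of a high- or normal-priority
    CTc requesting DBW units of bandwidth on link k (informal sketch).

    dbw            -- DBW, the requested bandwidth
    reserved_bw_ct -- RESERVED_BWck, bandwidth already reserved for CTc
    bc_ct          -- BCck, the bandwidth constraint for CTc
    unreserved_bw  -- UNRESERVED_BWk, unreserved bandwidth on link k
    rbw_thres      -- RBW_THRESk, the reservation bandwidth threshold
    """
    if reserved_bw_ct <= bc_ct:
        # CTc is within its bandwidth constraint BCck, so it may also
        # use the reservation bandwidth RBW_THRESk.
        return dbw <= unreserved_bw
    # CTc already exceeds BCck: the reservation bandwidth RBW_THRESk
    # cannot be accessed.
    return dbw <= unreserved_bw - rbw_thres
```

The two branches correspond one-to-one to the two Table 1 rules: the
reservation bandwidth is withheld only from CTs that are already over
their constraint.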
5. Setting Bandwidth Constraints

For a normal priority CTc, the bandwidth constraints BCck on link k are
set by allocating the maximum reservable bandwidth (MAX_RESERVABLE_BWk)
in proportion to the forecast or measured traffic load bandwidth
TRAF_LOAD_BWck for CTc on link k. That is:

PROPORTIONAL_BWck = TRAF_LOAD_BWck/[sum {TRAF_LOAD_BWck, c=0,MaxCT-1}]
                    X MAX_RESERVABLE_BWk

For normal priority CTc:
BCck = PROPORTIONAL_BWck

For a high priority CTc, the bandwidth constraint BCck is set to a
multiple of the proportional bandwidth. That is:

For high priority CTc:
BCck = FACTOR X PROPORTIONAL_BWck

where FACTOR is a multiplier applied to the proportional bandwidth
(e.g., FACTOR = 2 or 3 is typical). This results in some
'over-allocation' of the maximum reservable bandwidth, and gives
priority to the high priority CTs. Normally the bandwidth allocated to
high priority CTs should be a relatively small fraction of the total
link bandwidth, with a maximum of 10-15 percent being a reasonable
guideline.

As stated in Section 4, the bandwidth allocated to a best-effort
priority CTc should be set to zero. That is:

For best-effort priority CTc:
BCck = 0
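The constraint-setting rules above can be sketched as follows (an
informal illustration; TRAF_LOAD_BWck is represented as a per-CT list,
and the function and parameter names are assumptions of this sketch):

```python
def set_bandwidth_constraints(traf_load_bw, max_reservable_bw,
                              high_priority_cts=(), factor=2):
    """Compute BCck for each CTc on a link per Section 5.

    traf_load_bw      -- list of TRAF_LOAD_BWck values, indexed by CTc
    max_reservable_bw -- MAX_RESERVABLE_BWk for the link
    high_priority_cts -- indices of the high-priority CTs
    factor            -- FACTOR (2 or 3 is typical)
    """
    total_load = sum(traf_load_bw)
    bc = []
    for c, load in enumerate(traf_load_bw):
        # PROPORTIONAL_BWck: CTc's share of the maximum reservable
        # bandwidth, in proportion to its traffic load.
        proportional = load / total_load * max_reservable_bw
        # High-priority CTs get FACTOR times their proportional share
        # (over-allocation); normal-priority CTs get exactly it.
        bc.append(factor * proportional if c in high_priority_cts
                  else proportional)
    return bc
```

A best-effort CT would simply be given a zero entry in traf_load_bw
handling, per the rule BCck = 0; it is omitted here for brevity.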
6. Example of MAR Operation

In the example, assume there are three class-types: CT0, CT1, CT2. We
consider a particular link with

MAX_RESERVABLE_BW = 100

and with the allocated bandwidth constraints set as follows:

BC0 = 30
BC1 = 20
BC2 = 20

These bandwidth constraints are based on the normal traffic loads, as
discussed in Section 5. With MAR, any of the CTs is allowed to exceed
its bandwidth constraint BCc as long as there is at least RBW_THRES
(reservation bandwidth threshold on the link) units of spare bandwidth
remaining. Let's assume

RBW_THRES = 10

So under overload, if

RESERVED_BW0 = 50
RESERVED_BW1 = 30
RESERVED_BW2 = 10

then for this loading

UNRESERVED_BW = 100 - 50 - 30 - 10 = 10

CT0 and CT1 can no longer increase their bandwidth on the link, since
they are above their BC values and there is only RBW_THRES = 10 units
of spare bandwidth left on the link. But CT2 can take the additional
bandwidth (up to 10 units) if the demand arrives, since it is below its
BC value.

As also discussed in Section 4, if best effort traffic is present, it
can always seize whatever spare bandwidth is available on the link at
the moment, but is subject to being lost at the queues in favor of the
higher priority traffic.

Let's say an LSP arrives for CT0 needing 5 units of bandwidth (i.e.,
DBW = 5). We need to decide based on Table 1 whether to admit this LSP
or not. Since for CT0

RESERVED_BW0 > BC0 (50 > 30), and
DBW > UNRESERVED_BW - RBW_THRES (i.e., 5 > 10 - 10)

Table 1 says the LSP is rejected/blocked.

Now let's say an LSP arrives for CT2 needing 5 units of bandwidth
(i.e., DBW = 5). We need to decide based on Table 1 whether to admit
this LSP or not. Since for CT2

RESERVED_BW2 < BC2 (10 < 20), and
DBW <= UNRESERVED_BW (i.e., 5 <= 10)

Table 1 says to admit the LSP.

Hence, in the above example, in the current state of the link and the
current CT loading, CT0 and CT1 can no longer increase their bandwidth
on the link, since they are above their BCc values and there is only
RBW_THRES = 10 units of spare bandwidth left on the link. But CT2 can
take the additional bandwidth (up to 10 units) if the demand arrives,
since it is below its BCc value.
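The worked example above can be checked with a short, self-contained
sketch (informal illustration only; the Python variable names mirror
the draft's quantities but are assumptions of this sketch):

```python
# Section 6 example values.
MAX_RESERVABLE_BW = 100
RBW_THRES = 10
BC = {0: 30, 1: 20, 2: 20}            # allocated bandwidth constraints
RESERVED_BW = {0: 50, 1: 30, 2: 10}   # reserved bandwidth under overload

# UNRESERVED_BW = 100 - 50 - 30 - 10 = 10
UNRESERVED_BW = MAX_RESERVABLE_BW - sum(RESERVED_BW.values())

def admit(ct, dbw):
    """Apply the Table 1 rule for a normal-priority CTc."""
    if RESERVED_BW[ct] <= BC[ct]:
        return dbw <= UNRESERVED_BW
    return dbw <= UNRESERVED_BW - RBW_THRES

# CT0 is over its constraint (50 > 30), so only
# UNRESERVED_BW - RBW_THRES = 0 units are usable and a DBW = 5 request
# is blocked; CT2 is under its constraint (10 < 20), so all 10
# unreserved units are usable and its DBW = 5 request is admitted.
```

Running the two checks reproduces the reject/admit decisions derived
in the text.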
7. Summary

The proposed MAR bandwidth constraint model includes the following: a)
allocate bandwidth to individual CTs, b) protect allocated bandwidth by
bandwidth reservation methods, as needed, but otherwise fully share
bandwidth, c) differentiate high-priority, normal-priority, and
best-effort priority services, and d) provide admission control to
reject connection requests when needed to meet performance objectives.
Modeling results presented in Annex A show that MAR bandwidth allocation
a) achieves greater efficiency in bandwidth sharing while still
providing bandwidth isolation and protection against QoS degradation,
and b) achieves service differentiation for high-priority,
normal-priority, and best-effort priority services.
8. Security Considerations

No new security considerations are raised by this document; they are the
same as in the DSTE requirements document [DSTE-REQ].
9. Acknowledgements

DSTE and bandwidth constraint models have been an active area of
discussion in the TEWG. I would like to thank Wai Sum Lai for his
support and review of this draft. I also appreciate helpful discussions
with Francois Le Faucheur.
8. References 10. References
[AKI] Akinpelu, J. M., The Overload Performance of Engineered Networks [AKI] Akinpelu, J. M., The Overload Performance of Engineered Networks
with Nonhierarchical & Hierarchical Routing, BSTJ, Vol. 63, 1984. with Nonhierarchical & Hierarchical Routing, BSTJ, Vol. 63, 1984.
[ASH1] Ash, G. R., Dynamic Routing in Telecommunications Networks, [ASH1] Ash, G. R., Dynamic Routing in Telecommunications Networks,
McGraw-Hill, 1998. McGraw-Hill, 1998.
[ASH2] Ash, G. R., et. al., Routing Evolution in Multiservice Integrated [ASH2] Ash, G. R., et. al., Routing Evolution in Multiservice Integrated
Voice/Data Networks, Proceeding of ITC-16, Edinburgh, June 1999. Voice/Data Networks, Proceeding of ITC-16, Edinburgh, June 1999.
[ASH3] Ash, G. R., Traffic Engineering & QoS Methods for IP-, ATM-, & [ASH3] Ash, G. R., Traffic Engineering & QoS Methods for IP-, ATM-, &
TDM-Based Multiservice Networks, work in progress. TDM-Based Multiservice Networks, work in progress.
[BUR] Burke, P. J., Blocking Probabilities Associated with Directional [BUR] Burke, P. J., Blocking Probabilities Associated with Directional
Reservation, unpublished memorandum, 1961. Reservation, unpublished memorandum, 1961.
[E.360] ITU-T Recommendations, QoS Routing & Related Traffic Engineering
Methods for Multiservice TDM-, ATM-, & IP-Based Networks.
[DIFF-MPLS] Le Faucheur, F., et. al., "MPLS Support of Diff-Serv", RFC
3270, May 2002.
[DSTE-REQ] Le Faucheur, F., et. al., "Requirements for Support of
Diff-Serv-aware MPLS Traffic Engineering", work in progress.
[DSTE-PROTO] Le Faucheur, F., et. al., "Protocol Extensions for Support
of Diff-Serv-aware MPLS Traffic Engineering", work in progress.
[DIFFSERV] Blake, S., et. al., "An Architecture for Differentiated
Services", RFC 2475, December 1998.
[E.360.1 --> E.360.7] ITU-T Recommendations, "QoS Routing & Related
Traffic Engineering Methods for Multiservice TDM-, ATM-, & IP-Based
Networks".
[KATZ-YEUNG] Katz, D., Yeung, D., Kompella, K., "Traffic Engineering
Extensions to OSPF Version 2," work in progress.
[KEY] Bradner, S., "Key words for Use in RFCs to Indicate Requirement
Levels", RFC 2119, March 1997.
[KRU] Krupp, R. S., "Stabilization of Alternate Routing Networks",
Proceedings of ICC, Philadelphia, 1982.
[LAI] Lai, W., "Traffic Engineering for MPLS", Internet Performance and
Control of Network Systems III Conference, SPIE Proceedings Vol. 4865,
pp. 256-267, Boston, Massachusetts, USA, 29 July-1 August 2002
(http://www.columbia.edu/~ffl5/waisum/bcmodel.pdf).
[MAM1] Lai, W., "Maximum Allocation Bandwidth Constraints Model for
Diffserv-TE & Performance Comparisons", work in progress.
[MAM2] Lai, W., Le Faucheur, F., "Maximum Allocations Bandwidth
Constraints Model for Diff-Serv-aware MPLS Traffic Engineering", work
in progress.
[MUM] Mummert, V. S., "Network Management and Its Implementation on the
No. 4ESS", International Switching Symposium, Japan, 1976.
[NAK] Nakagome, Y., Mori, H., "Flexible Routing in the Global
Communication Network", Proceedings of ITC-7, Stockholm, 1973.
[MPLS-ARCH] Rosen, E., et. al., "Multiprotocol Label Switching
Architecture", RFC 3031, January 2001.
[RDM] Le Faucheur, F., "Russian Dolls Bandwidth Constraints Model for
Diff-Serv-aware MPLS Traffic Engineering", work in progress.
[RFC2026] Bradner, S., "The Internet Standards Process -- Revision 3",
BCP 9, RFC 2026, October 1996.
[RSVP-TE] Awduche, D., et. al., "RSVP-TE: Extensions to RSVP for LSP
Tunnels", RFC 3209, December 2001.
11. Authors' Addresses
Jerry Ash
AT&T
Room MT D5-2A01
200 Laurel Avenue
Middletown, NJ 07748, USA
Phone: +1 732-420-4578
Email: gash@att.com
ANNEX A - MAR Operation & Performance Analysis
A.1 MAR Operation
In the MAR bandwidth constraint model, the bandwidth allocation control
for each CT is based on estimated bandwidth needs, bandwidth use, and
status of links. The LER makes needed bandwidth allocation changes, and
uses [RSVP-TE], for example, to determine if link bandwidth can be
allocated to a CT. Bandwidth allocated to individual CTs is protected
as needed but otherwise shared. Under normal non-congested network
conditions, all CTs/services fully share all available bandwidth. When
congestion occurs for a particular CTc, bandwidth reservation acts to
prohibit traffic from other CTs from seizing the allocated capacity for
CTc. Associated with each CT is the allocated bandwidth constraint
(BCc) to govern bandwidth allocation and protection; these parameters
are illustrated with examples in this Annex.
In performing MAR bandwidth allocation for a given flow/LSP, the LER
first determines the egress LSR address, service-identity, and CT. The
connection request is allocated an equivalent bandwidth to be routed on
a particular CT. The LER then accesses the CT priority, QoS/traffic
parameters, and routing table between the LER and egress LSR, and sets
up the connection request using the MAR bandwidth allocation rules. The
LER selects a first choice path and determines if bandwidth can be
allocated on the path based on the MAR bandwidth allocation rules given
in Section 4. If the first choice path has insufficient bandwidth, the
LER may then try alternate paths, again applying the MAR bandwidth
allocation rules, as described below.
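The first-choice/alternate-path procedure can be sketched as follows (an
illustrative sketch, not part of the draft; the function names and the
admit_on_link callable, which stands in for the per-link MAR allocation
rules of Section 4, are hypothetical):

```python
def route_flow(candidate_paths, dbw, admit_on_link):
    """Try the first-choice path, then alternates, for a request of DBW.

    candidate_paths -- paths in order of preference, each a list of links
    dbw             -- equivalent bandwidth of the flow/LSP request
    admit_on_link   -- callable applying the MAR allocation rules per link
    """
    for path in candidate_paths:
        # The path is usable only if every link can allocate DBW.
        if all(admit_on_link(link, dbw) for link in path):
            return path
    return None  # admission control: reject the connection request
```

For example, with a first-choice path A-B-E and alternate A-C-D-E, the
flow falls through to the alternate only when some link on A-B-E fails
the allocation rules.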
MAR bandwidth allocation is done on a per-CT basis, in which aggregated
CT bandwidth is managed to meet the overall bandwidth requirements of
CT service needs. Individual flows/LSPs are allocated bandwidth in the
corresponding CT according to CT bandwidth availability. A fundamental
principle applied in MAR bandwidth allocation methods is the use of
bandwidth reservation techniques.
Bandwidth reservation gives preference to the preferred traffic by
allowing it to seize any idle bandwidth on a link, while allowing the
non-preferred traffic to seize bandwidth only if there is a minimum
level of idle bandwidth available, called the reservation bandwidth
threshold RBW_THRES. Burke [BUR] first analyzed bandwidth reservation
behavior from the solution of the birth-death equations for the
bandwidth reservation model. Burke's model showed the relative
lost-traffic level for preferred traffic, which is not subject to
bandwidth reservation restrictions, as compared to non-preferred
traffic, which is subject to the restrictions. Bandwidth reservation
protection is robust to traffic variations and provides significant
dynamic protection of particular streams of traffic. It is widely used
in large-scale network applications [ASH1, MUM, AKI, KRU, NAK].
Bandwidth reservation is used in MAR bandwidth allocation to control
sharing of link bandwidth across different CTs. On a given link, a
small amount of bandwidth RBW_THRES is reserved (say 1% of the total
link bandwidth), and the reservation bandwidth can be accessed when a
given CT has reserved bandwidth-in-progress RESERVED_BW below its
allocated bandwidth BC. That is, if the available link bandwidth
(unreserved idle link bandwidth UNRESERVED_BW) exceeds RBW_THRES, then
any CT is free to access the available bandwidth on the link. However,
if UNRESERVED_BW is less than RBW_THRES, then the CT can utilize the
available bandwidth only if its current bandwidth usage is below the
allocated amount BC. In this way, bandwidth can be fully shared among
CTs if available, but is protected by bandwidth reservation if below
the reservation level.
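The sharing rule just described can be sketched as follows (an
illustrative sketch using the draft's parameter names as arguments; the
function itself and its signature are hypothetical, not defined by the
draft):

```python
def ct_may_allocate(unreserved_bw, rbw_thres, ct_reserved_bw, ct_bc, dbw):
    """MAR sharing rule for one CT on one link (sketch).

    unreserved_bw  -- idle, unreserved bandwidth on the link (UNRESERVED_BW)
    rbw_thres      -- reservation bandwidth threshold (RBW_THRES, e.g. 1%)
    ct_reserved_bw -- bandwidth-in-progress of this CT (RESERVED_BW)
    ct_bc          -- allocated bandwidth constraint of this CT (BC)
    dbw            -- bandwidth requested by the flow/LSP (DBW)
    """
    if dbw > unreserved_bw:
        return False  # not enough idle bandwidth on the link at all
    if unreserved_bw > rbw_thres:
        return True   # above the reservation level: fully shared
    # Below the reservation level: only a CT under its allocation
    # may take the remaining bandwidth.
    return ct_reserved_bw + dbw <= ct_bc
```

Under congestion the idle bandwidth falls below RBW_THRES, so a CT
already at or above its BC is refused while an under-allocated CT can
still be admitted.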
Through the bandwidth reservation mechanism, MAR bandwidth allocation
also gives preference to high-priority CTs, in comparison to
normal-priority and best-effort priority CTs.
Hence, bandwidth allocated to each CT is protected by bandwidth
reservation methods, as needed, but otherwise shared. Each LER monitors
CT bandwidth use on each CT, and determines if connection requests can
be allocated to the CT bandwidth. For example, for a bandwidth request
of DBW on a given flow/LSP, the LER determines the CT priority (high,
normal, or best-effort), CT bandwidth-in-use, and CT bandwidth
allocation thresholds, and uses these parameters to determine the
allowed load state threshold to which capacity can be allocated. In
allocating bandwidth DBW to a CT on a given LSP, say A-B-E, each link
in the path is checked for available bandwidth in comparison to the
allowed load state. If bandwidth is unavailable on any link in path
A-B-E, another LSP could be tried, such as A-C-D-E. Hence determination
of the link load state is necessary for MAR bandwidth allocation, and
two link load states are distinguished: available (non-reserved)
bandwidth (ABW_STATE) and reserved-bandwidth (RBW_STATE). Management of
CT capacity uses the link state and the allowed load state threshold to
determine if a bandwidth allocation request can be accepted on a given
CT.
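The two link load states and the per-path check can be illustrated as
follows (a sketch; the numeric encoding of the states and the
comparison against the allowed load state threshold are assumptions for
illustration, not prescribed by the draft):

```python
ABW_STATE = 0  # available (non-reserved) bandwidth: UNRESERVED_BW > RBW_THRES
RBW_STATE = 1  # reserved-bandwidth: UNRESERVED_BW <= RBW_THRES

def link_load_state(unreserved_bw, rbw_thres):
    """Classify a link into one of the two MAR load states."""
    return ABW_STATE if unreserved_bw > rbw_thres else RBW_STATE

def path_allowed(links, allowed_load_state):
    """Check each link of an LSP (say A-B-E) against the allowed load
    state threshold; links are (unreserved_bw, rbw_thres) pairs. If any
    link is more heavily loaded than the threshold allows, another LSP
    (say A-C-D-E) could be tried."""
    return all(link_load_state(u, t) <= allowed_load_state
               for u, t in links)
```

In this sketch a high-priority CT would be given the RBW_STATE
threshold (may allocate even into reserved bandwidth), while a
normal-priority CT over its allocation would be held to ABW_STATE.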
A.2 Analysis of MAR Performance
In this Annex, modeling analysis is presented in which MAR bandwidth
allocation is shown to provide good network performance relative to
full sharing models, under normal and abnormal operating conditions. A
large-scale MPLS/DiffServ TE simulation model is used, in which several
CTs with different priority classes share the pool of bandwidth on a
multiservice, integrated voice/data network. MAR methods have also been
analyzed in practice for TDM-based networks [ASH1], and in modeling
and minimizes signaling load processing requirements. and minimizes signaling load processing requirements.
The use of any given bandwidth constraint model has significant impacts
on the performance of a network, as explained later. Therefore, the
criteria used to select a model must enable us to evaluate how a
particular model delivers its performance, relative to other models.
Lai [LAI, MAM1] has analyzed the MAM and RDM models and provided
valuable insights into the relative performance of these models under
various network conditions.
In environments where preemption is not used, MAM is attractive because
a) it is good at achieving isolation, and b) it achieves reasonable
bandwidth efficiency with some QoS degradation of lower classes. When
preemption is used, RDM is attractive because it can achieve bandwidth
efficiency under normal load. However, RDM cannot provide service
isolation under high load or when preemption is not used.
Our performance analysis of MAR bandwidth allocation methods is based
on a full-scale, 135-node simulation model of a national network
together with a multiservice traffic demand model to study various
scenarios and tradeoffs [ASH3]. Three levels of traffic priority --
high, normal, and best effort -- are given across 5 CTs: normal
priority voice, high priority voice, normal priority data, high
priority data, and best effort data.
The performance analyses for overloads and failures include a) the MAR
bandwidth constraint model, as specified in Section 4, b) the MAM
bandwidth constraint model, and c) the No-DSTE bandwidth constraint
model.
The allocated bandwidth constraints for MAR are as described in Section
5:
Normal priority CTs: BCck = PROPORTIONAL_BWk,
High priority CTs: BCck = FACTOR X PROPORTIONAL_BWk
Best-effort priority CTs: BCck = 0
In the MAM bandwidth constraint model, the bandwidth constraints for
each CT are set to a multiple of the proportional bandwidth allocation:
Normal priority CTs: BCck = FACTOR1 X PROPORTIONAL_BWk,
High priority CTs: BCck = FACTOR2 X PROPORTIONAL_BWk
Best-effort priority CTs: BCck = 0
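The two allocation schemes above can be computed side by side as
follows (an illustrative sketch; the function names and the default
FACTOR values are placeholders, not values fixed by the draft -- the
FACTOR1/FACTOR2 settings actually studied are discussed next):

```python
def mar_bc(proportional_bw, priority, factor=1.5):
    """Allocated bandwidth constraint BCck under MAR (per the rules above)."""
    if priority == "high":
        return factor * proportional_bw    # FACTOR X PROPORTIONAL_BWk
    if priority == "normal":
        return proportional_bw             # PROPORTIONAL_BWk
    return 0.0                             # best-effort: BCck = 0

def mam_bc(proportional_bw, priority, factor1=2.0, factor2=3.0):
    """Bandwidth constraint BCck used for MAM in the simulations (sketch)."""
    if priority == "high":
        return factor2 * proportional_bw   # FACTOR2 X PROPORTIONAL_BWk
    if priority == "normal":
        return factor1 * proportional_bw   # FACTOR1 X PROPORTIONAL_BWk
    return 0.0                             # best-effort: BCck = 0
```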
Simulations show that for MAM, the sum of the BCc values should exceed
MAX_RESERVABLE_BWk for better efficiency, as follows:
1. For the normal priority CTs, the BCc values need to be
over-allocated to get reasonable performance. It was found that
over-allocating by 100%, that is, setting FACTOR1 = 2, gave reasonable
performance.
2. The high priority CTs can be over-allocated by a larger multiple
FACTOR2 in MAM and this gives better performance.
The rather large amount of over-allocation improves efficiency but
somewhat defeats the 'bandwidth protection/isolation' needed with a BC
model, since one CT can now invade the bandwidth allocated to another
CT. Each CT is restricted to its allocated bandwidth constraint BCck,
which is the maximum level of bandwidth allocated to each CT on each
link, as in normal operation of MAM.
In the No-DSTE bandwidth constraint model, no reservation or protection
of CT bandwidth is applied, and bandwidth allocation requests are
admitted if bandwidth is available. Furthermore, no queueing priority
is applied to any of the CTs in the No-DSTE bandwidth constraint model.
Table 2 gives performance results for a six-times overload on a single
network node at Oakbrook IL. The numbers given in the table are the
total network percent lost (blocked) or delayed traffic. Note that in
the focused overload scenario studied here, the percent lost/delayed
traffic on the Oakbrook node is much higher than the network-wide
average values given.
Table 2
Performance Comparison for MAR, MAM, & No-DSTE
Bandwidth Constraint (BC) Models
6X Focused Overload on Oakbrook (Total Network % Lost/Delayed Traffic)
Class Type                  MAR BC   MAM BC   No-DSTE BC
                            Model    Model    Model
NORMAL PRIORITY VOICE        0.00     1.97    10.30
HIGH PRIORITY VOICE          0.00     0.00     7.05
NORMAL PRIORITY DATA         0.00     6.63    13.30
HIGH PRIORITY DATA           0.00     0.00     7.05
BEST EFFORT PRIORITY DATA   12.33    11.92     9.65
Clearly the performance is better with MAR bandwidth allocation, and
the results show that performance improves when bandwidth reservation
is used. The reason for the poor performance of the No-DSTE model,
without bandwidth reservation, is the lack of protection of allocated
bandwidth. If we add the bandwidth reservation mechanism, then
performance of the network is greatly improved.
The simulations showed that the performance of MAM is quite sensitive
to the over-allocation factors discussed above. For example, if the BCc
values are proportionally allocated with FACTOR1 = 1, then the results
are much worse, as shown in Table 3:
Table 3
Performance Comparison for MAM Bandwidth Constraint Model
with Different Over-allocation Factors
6X Focused Overload on Oakbrook (Total Network % Lost/Delayed Traffic)
Class Type                  (FACTOR1 = 1)  (FACTOR1 = 2)
NORMAL PRIORITY VOICE           31.69          1.97
HIGH PRIORITY VOICE              0.00          0.00
NORMAL PRIORITY DATA            31.22          6.63
HIGH PRIORITY DATA               0.00          0.00
BEST EFFORT PRIORITY DATA        8.76         11.92
Table 4 illustrates the performance of the MAR, MAM, and No-DSTE
bandwidth constraint models for a high-day network load pattern with a
50% general overload. The numbers given in the table are the total
network percent lost (blocked) or delayed traffic.
Table 4
Performance Comparison for MAR, MAM, & No-DSTE
Bandwidth Constraint (BC) Models
50% General Overload (Total Network % Lost/Delayed Traffic)
Class Type                  MAR BC   MAM BC   No-DSTE BC
                            Model    Model    Model
NORMAL PRIORITY VOICE        0.02     0.13     7.98
HIGH PRIORITY VOICE          0.00     0.00     8.94
NORMAL PRIORITY DATA         0.00     0.26     6.93
HIGH PRIORITY DATA           0.00     0.00     8.94
BEST EFFORT PRIORITY DATA   10.41    10.39     8.40
Again, we can see the performance is always better when MAR bandwidth
allocation and reservation is used.
Table 5 illustrates the performance of the MAR, MAM, and No-DSTE
bandwidth constraint models for a single link failure scenario (3
OC-48). The numbers given in the table are the total network percent
lost (blocked) or delayed traffic.
Table 5
Performance Comparison for MAR, MAM, & No-DSTE
Bandwidth Constraint (BC) Models
Single Link Failure (3 OC-48s)
(Total Network % Lost/Delayed Traffic)
Class Type                  MAR BC   MAM BC   No-DSTE BC
                            Model    Model    Model
NORMAL PRIORITY VOICE        0.00     0.62     0.58
HIGH PRIORITY VOICE          0.00     0.31     0.29
NORMAL PRIORITY DATA         0.00     0.48     0.46
HIGH PRIORITY DATA           0.00     0.31     0.29
BEST EFFORT PRIORITY DATA    0.12     0.72     0.66
Again, we can see the performance is always better when MAR bandwidth
allocation and reservation is used.
Table 6 illustrates the performance of the MAR, MAM, and No-DSTE
bandwidth constraint models for a multiple link failure scenario (3
links with 3 OC-48, 3 OC-3, 4 OC-3 capacity, respectively). The numbers
given in the table are the total network percent lost (blocked) or
delayed traffic.
Table 6
Performance Comparison for MAR, MAM, & No-DSTE
Bandwidth Constraint (BC) Models
Multiple Link Failure (3 Links with 3 OC-48, 3 OC-3, 4 OC-3, Respectively)
(Total Network % Lost/Delayed Traffic)
Class Type                  MAR BC   MAM BC   No-DSTE BC
                            Model    Model    Model
NORMAL PRIORITY VOICE        0.00     0.91     0.86
HIGH PRIORITY VOICE          0.00     0.44     0.42
NORMAL PRIORITY DATA         0.00     0.70     0.64
HIGH PRIORITY DATA           0.00     0.44     0.42
BEST EFFORT PRIORITY DATA    0.14     1.03     0.98
Again, we can see the performance is always better when MAR bandwidth
allocation and reservation is used.
Lai's results [LAI, MAM1] show the trade-off between bandwidth sharing
and service protection/isolation, using an analytic model of a single
link. He shows that RDM has a higher degree of sharing than MAM.
Furthermore, for a single link, the overall loss probability is the
smallest under full sharing and largest under MAM, with RDM being
intermediate. Hence, on a single link, Lai shows that the full sharing
model yields the highest link efficiency and MAM the lowest, and that
full sharing has the poorest service protection capability.
The results of the present study show that when considering a network
context, in which there are many links and multiple-link routing paths
are used, full sharing does not necessarily lead to maximum
network-wide bandwidth efficiency. In fact, the results in Table 4 show
that the No-DSTE model not only degrades total network throughput, but
also degrades the performance of every CT that should be protected.
Allowing more bandwidth sharing may improve performance up to a point,
but can severely degrade performance if care is not taken to protect
allocated bandwidth under congestion.
Both Lai's study and this study show that increasing the degree of
bandwidth sharing among the different CTs leads to a tighter coupling
between CTs. Under normal loading conditions, there is adequate
capacity for each CT, which minimizes the effect of such coupling.
Under overload conditions, when there is a scarcity of capacity, such
coupling can cause severe degradation of service, especially for the
lower priority CTs.
Thus, the objective of maximizing efficient bandwidth usage, as stated
in bandwidth constraint model objectives, must be exercised with care.
Due consideration needs to be given also to achieving bandwidth
isolation under overload, in order to minimize the effect of
interactions among the different CTs. The proper tradeoff of bandwidth
sharing and bandwidth isolation needs to be achieved in the selection
of a bandwidth constraint model. Bandwidth reservation supports greater
efficiency in bandwidth sharing while still providing bandwidth
isolation and protection against QoS degradation.
In summary, the proposed MAR bandwidth constraint model includes the
following: a) allocate bandwidth to individual CTs, b) protect allocated
bandwidth by bandwidth reservation methods, as needed, but otherwise
fully share bandwidth, c) differentiate high-priority, normal-priority,
and best-effort priority services, and d) provide admission control to
reject connection requests when needed to meet performance objectives.
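The allocation-with-reservation behavior summarized in items a), b), and
d) can be sketched as a simple admission check. This is a minimal,
illustrative paraphrase of the model, not normative text from the
functional specification: the function name mar_admit, the dictionary
representation of per-CT state, and all numeric values are assumptions
made for this sketch.

```python
def mar_admit(dbw, ct, reserved_bw, bc, unreserved_bw, rbw_thres):
    """Sketch of a MAR-style admission decision for an LSP of class
    type `ct` requesting `dbw` units of bandwidth on a link.

    While a CT is below its allocated bandwidth constraint (BCc), it
    may freely take unreserved bandwidth. Once it exceeds BCc, it may
    only be admitted if doing so still leaves `rbw_thres` units of
    bandwidth in reserve for under-allocated CTs.
    """
    if reserved_bw[ct] < bc[ct]:
        # CT is within its allocation: plain availability check.
        return unreserved_bw >= dbw
    # CT has exceeded its allocation: must leave the reservation intact.
    return unreserved_bw >= dbw + rbw_thres


# Illustrative numbers only: two CTs on one link, 15 units unreserved,
# and a reservation threshold of 10.
reserved = {"CT0": 40, "CT1": 80}
bc = {"CT0": 50, "CT1": 50}

print(mar_admit(10, "CT0", reserved, bc, 15, 10))  # CT0 under BC0 -> True
print(mar_admit(10, "CT1", reserved, bc, 15, 10))  # CT1 over BC1 -> False
```

The example shows the bandwidth-isolation effect under overload: the
over-allocated CT1 is rejected even though raw unreserved bandwidth
exists, while CT0, still within its allocation, is admitted.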
In the modeling results, the MAR bandwidth constraint model compares
favorably with methods that do not use bandwidth reservation. In
particular, some of the conclusions from the modeling are as follows:
o MAR bandwidth allocation is effective in improving performance over
methods that lack bandwidth reservation and that allow more bandwidth
sharing under congestion,
o MAR achieves service differentiation for high-priority,
normal-priority, and best-effort priority services,
o bandwidth reservation supports greater efficiency in bandwidth sharing
while still providing bandwidth isolation and protection against QoS
degradation, and is critical to stable and efficient network
performance.
Full Copyright Statement
Copyright (C) The Internet Society (2003). All Rights Reserved.
This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it or
 End of changes. 

This html diff was produced by rfcdiff 1.25, available from http://www.levkowetz.com/ietf/tools/rfcdiff/