Internet Engineering Task Force                            Van Jacobson
Differentiated Services Working Group                  Kathleen Nichols
Internet Draft                                             Cisco Systems
Expires August, 1999                                    Kedarnath Poduri
                                                            Bay Networks
                                                          February, 1999

                       An Expedited Forwarding PHB
                   <draft-ietf-diffserv-phb-ef-02.txt>

Status of this Memo

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC2026.

This document is a product of the IETF Differentiated Services Working
Group.  Comments are solicited and should be directed to the working
group mailing list.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups.  Note that other
groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Distribution of this memo is unlimited.

Abstract

The definition of PHBs (per-hop forwarding behaviors) is a critical
part of the work of the Diffserv Working Group.  This document
describes a PHB called Expedited Forwarding.  We show the generality
of this PHB by noting that it can be produced by more than one
mechanism and give an example of its use to produce at least one
service, a Virtual Leased Line.  A recommended codepoint for this PHB
is given.

A pdf version of this document is available at
ftp://ftp.ee.lbl.gov/papers/ef_phb.pdf

1. Introduction

Network nodes that implement the differentiated services enhancements
to IP use a codepoint in the IP header to select a per-hop behavior
(PHB) as the specific forwarding treatment for that packet [RFC2474,
RFC2475].  This draft describes a particular PHB called expedited
forwarding (EF).  The EF PHB can be used to build a low loss, low
latency, low jitter, assured bandwidth, end-to-end service through DS
domains.  Such a service appears to the endpoints like a
point-to-point connection or a "virtual leased line".  This service
has also been described as Premium service [2BIT].

Loss, latency and jitter are all due to the queues traffic experiences
while transiting the network.  Therefore, providing low loss, latency
and jitter for some traffic aggregate means ensuring that the
aggregate sees no (or very small) queues.  Queues arise when the
(short-term) traffic arrival rate exceeds the departure rate at some
node.  Thus a service that ensures no queues for some aggregate is
equivalent to bounding rates such that, at every transit node, the
aggregate's maximum arrival rate is less than that aggregate's minimum
departure rate.

Creating such a service has two parts:

1) Configuring nodes so that the aggregate has a well-defined minimum
departure rate.  ("Well-defined" means independent of the dynamic
state of the node.  In particular, independent of the intensity of
other traffic at the node.)

2) Conditioning the aggregate (via policing and shaping) so that its
arrival rate at any node is always less than that node's configured
minimum departure rate.

The EF PHB provides the first part of the service.  The network
boundary traffic conditioners described in [RFC2475] provide the
second part.

The EF PHB is not a mandatory part of the Differentiated Services
architecture, i.e., a node is not required to implement the EF PHB in
order to be considered DS-compliant.  However, when a DS-compliant
node claims to implement the EF PHB, the implementation must conform
to the specification given in this document.

The next sections describe the EF PHB in detail and give examples of
how it might be implemented.  The keywords "MUST", "MUST NOT",
"REQUIRED", "SHOULD", "SHOULD NOT", and "MAY" that appear in this
document are to be interpreted as described in [Bradner97].

2. Description of EF per-hop behavior

The EF PHB is defined as a forwarding treatment for a particular
diffserv aggregate where the departure rate of the aggregate's packets
from any diffserv node must equal or exceed a configurable rate.  The
EF traffic SHOULD receive this rate independent of the intensity of
any other traffic attempting to transit the node.  It SHOULD average
at least the configured rate when measured over any time interval
equal to or longer than the time it takes to send an output link MTU
sized packet at the configured rate.  (Behavior at time scales shorter
than a packet time at the configured rate is deliberately not
specified.)  The configured minimum rate MUST be settable by a network
administrator (using whatever mechanism the node supports for
non-volatile configuration).
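
As an informal illustration only (not part of the PHB definition), the
following Python sketch checks a departure trace against a simplified
reading of the rule above: over every window one output-link-MTU
packet time long, the EF aggregate should have departed at an average
rate of at least the configured rate.  It assumes the EF queue was
continuously backlogged during the trace; the trace format and the
function name are assumptions of this sketch.

    import bisect

    def ef_rate_conforms(departures, rate_bps, mtu_bytes):
        """departures: sorted list of (time_sec, size_bytes) for EF
        packets leaving one interface.  Checks each window of one MTU
        packet time ending at a departure."""
        pkt_time = mtu_bytes * 8.0 / rate_bps      # one MTU packet time at the rate
        times = [t for t, _ in departures]
        cum, total = [], 0
        for _, size in departures:                 # cumulative bits departed
            total += size * 8
            cum.append(total)
        for j, tj in enumerate(times):
            if tj - pkt_time < times[0]:           # window precedes the trace
                continue
            i = bisect.bisect_left(times, tj - pkt_time)
            bits = cum[j] - (cum[i - 1] if i > 0 else 0)
            if bits < rate_bps * pkt_time:         # less than one MTU's worth
                return False
        return True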

If the EF PHB is implemented by a mechanism that allows unlimited
preemption of other traffic (e.g., a priority queue), the
implementation MUST include some means to limit the damage EF traffic
could inflict on other traffic (e.g., a token bucket rate limiter).
Traffic that exceeds this limit MUST be discarded.  This maximum EF
rate, and burst size if appropriate, MUST be settable by a network
administrator (using whatever mechanism the node supports for
non-volatile configuration).  The minimum and maximum rates may be the
same and configured by a single parameter.

The Appendix describes how this PHB can be used to construct
end-to-end services.

2.2 Example Mechanisms to Implement the EF PHB

Several types of queue scheduling mechanisms may be employed to
deliver the forwarding behavior described in section 2.1 and thus
implement the EF PHB.  A simple priority queue will give the
appropriate behavior as long as there is no higher priority queue that
could preempt the EF for more than a packet time at the configured
rate.  (This could be accomplished by having a rate policer such as a
token bucket associated with each priority queue to bound how much the
queue can starve other traffic.)
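
One way to picture the combination of a priority queue with a token
bucket rate limiter is the Python sketch below.  It is a rough
illustration under assumed class and parameter names, not the behavior
of any particular router: EF packets (byte strings here) are policed
against the configured maximum EF rate at enqueue time, excess is
discarded as section 2 requires for a preemptive implementation, and
conforming EF packets then receive strict priority at dequeue time.

    import time
    from collections import deque

    class TokenBucket:
        """Token bucket policer: rate in bytes/sec, burst in bytes."""
        def __init__(self, rate, burst):
            self.rate, self.burst = rate, burst
            self.tokens, self.last = burst, time.monotonic()

        def conforms(self, nbytes):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False

    class EFPriorityScheduler:
        """Strict priority for EF, bounded by the token bucket so EF
        cannot starve other traffic beyond the configured maximum."""
        def __init__(self, ef_max_rate, ef_burst):
            self.ef_q, self.other_q = deque(), deque()
            self.limiter = TokenBucket(ef_max_rate, ef_burst)

        def enqueue_ef(self, pkt):
            if self.limiter.conforms(len(pkt)):
                self.ef_q.append(pkt)
            # else: exceeds the configured maximum EF rate -> discard

        def enqueue_other(self, pkt):
            self.other_q.append(pkt)

        def dequeue(self):
            if self.ef_q:                  # conforming EF goes first
                return self.ef_q.popleft()
            return self.other_q.popleft() if self.other_q else None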

It's also possible to use a single queue in a group of queues serviced
by a weighted round robin scheduler where the share of the output
bandwidth assigned to the EF queue is equal to the configured rate.
This could be implemented, for example, using one PHB of a Class
Selector Compliant set of PHBs [RFC2474].
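
As a rough sketch of how such a share might be derived from the
configured rate, the Python fragment below turns an EF rate and link
bandwidth into per-round byte quanta for a deficit-round-robin style
scheduler.  The function name, the 500 byte base quantum, and the
equal split among non-EF queues are illustrative assumptions, not part
of the PHB definition.

    def wrr_quanta(link_bps, ef_rate_bps, n_other_queues, base_quantum=500):
        """Per-round byte quanta giving the EF queue a share of the
        output link equal to its configured rate."""
        ef_share = ef_rate_bps / link_bps          # e.g. 450 Kbps of 1.5 Mbps -> 0.30
        other_share = (1.0 - ef_share) / n_other_queues
        round_bytes = base_quantum / other_share   # size one round so the
        return {                                   # smallest queues get base_quantum
            "EF": round(round_bytes * ef_share),
            "other": round(round_bytes * other_share),
        }

    # Topology used in Appendix A.3: 1.5 Mbps bottleneck, 450 Kbps of EF
    # traffic, four non-EF queues.
    print(wrr_quanta(1_500_000, 450_000, 4))       # {'EF': 857, 'other': 500}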

Another possible implementation is a CBQ [CBQ] scheduler that gives
the EF queue priority up to the configured rate.

All of these mechanisms have the basic properties required for the EF
PHB though different choices result in different ancillary behavior
such as jitter seen by individual microflows.  See Appendix A.3 for
simulations that quantify some of these differences.

2.3 Recommended codepoint for this PHB

Codepoint 101110 is recommended for the EF PHB.

2.4 Mutability

Packets marked for EF PHB MAY be remarked at a DS domain boundary only
to other codepoints that satisfy the EF PHB.  Packets marked for EF
PHBs SHOULD NOT be demoted or promoted to another PHB by a DS domain.

2.5 Tunneling

When EF packets are tunneled, the tunneling packets must be marked as
EF.

2.6 Interaction with other PHBs

Other PHBs and PHB groups may be deployed in the same DS node or
domain with the EF PHB as long as the requirement of section 2.1 is
met.

3. Security Considerations

To protect itself against denial of service attacks, the edge of a DS
domain MUST strictly police all EF marked packets to a rate negotiated
with the adjacent upstream domain.  (This rate must be <= the EF PHB
configured rate.)  Packets in excess of the negotiated rate MUST be
dropped.  If two adjacent domains have not negotiated an EF rate, the
downstream domain MUST use 0 as the rate (i.e., drop all EF marked
packets).

Since the end-to-end premium service constructed from the EF PHB
requires that the upstream domain police and shape EF marked traffic
to meet the rate negotiated with the downstream domain, the downstream
domain's policer should never have to drop packets.  Thus these drops
SHOULD be noted (e.g., via SNMP traps) as possible security violations
or serious misconfiguration.  Similarly, since the aggregate EF
traffic rate is constrained at every interior node, the EF queue
should never overflow, so if it does the drops SHOULD be noted as
possible attacks or serious misconfiguration.
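
A minimal sketch of this boundary behavior, assuming hypothetical
class and parameter names: EF-marked packets arriving from an upstream
domain are policed to the negotiated rate (zero, i.e. all dropped, if
no rate was negotiated), and every drop is counted and logged so that
a management system could raise an alarm such as an SNMP trap.  The
1500 byte default burst is an assumption of the sketch.

    import logging
    import time

    log = logging.getLogger("ef-boundary")

    class EFBoundaryPolicer:
        """Police EF packets from an adjacent upstream domain to the
        negotiated rate (bytes/sec); drops indicate possible attack or
        misconfiguration and are therefore logged."""
        def __init__(self, negotiated_rate=0, burst=1500):
            self.rate, self.burst = negotiated_rate, burst
            self.tokens = burst if negotiated_rate else 0
            self.last = time.monotonic()
            self.drops = 0

        def accept(self, nbytes):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.rate and self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            self.drops += 1        # upstream should have shaped this away
            log.warning("EF boundary drop #%d: possible attack or "
                        "misconfiguration", self.drops)
            return False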

4. References

[Bradner97] S. Bradner, "Key words for use in RFCs to Indicate
Requirement Levels", Internet RFC 2119, March 1997.

[RFC2474] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of
the Differentiated Services Field (DS Field) in the IPv4 and IPv6
Headers", Internet RFC 2474, December 1998.

[RFC2475] D. Black, S. Blake, M. Carlson, E. Davies, Z. Wang, and
W. Weiss, "An Architecture for Differentiated Services", Internet RFC
2475, December 1998.

[2BIT] K. Nichols, V. Jacobson, and L. Zhang, "A Two-bit
Differentiated Services Architecture for the Internet", Internet Draft
<draft-nichols-diff-svc-arch-00.txt>, November 1997,
ftp://ftp.ee.lbl.gov/papers/dsarch.pdf

[CBQ] S. Floyd and V. Jacobson, "Link-sharing and Resource Management
Models for Packet Networks", IEEE/ACM Transactions on Networking,
Vol. 3 no. 4, pp. 365-386, August 1995.

[RFC2415] K. Poduri and K. Nichols, "Simulation Studies of Increased
Initial TCP Window Size", Internet RFC 2415, September 1998.

[LCN] K. Nichols, "Improving Network Simulation with Feedback",
Proceedings of LCN '98, October 1998.

5. Authors' Addresses

Van Jacobson
Cisco Systems, Inc
170 W. Tasman Drive
San Jose, CA 95134-1706
van@cisco.com

Kathleen Nichols

Kedarnath Poduri
Bay Networks, Inc.
4401 Great America Parkway
Santa Clara, CA 95052-8185
kpoduri@baynetworks.com

Appendix A: Example use of and experiences with the EF PHB

A.1 Virtual Leased Line Service

A VLL Service, also known as Premium service [2BIT], is quantified by
a peak bandwidth.

A.2 Experiences with its use in ESNET

A prototype of the VLL service has been deployed on DOE's ESNet
backbone.  This uses the weighted-round-robin queuing features of
Cisco 75xx series routers to implement the EF PHB.  The early tests
have been very successful and work is in progress to make the service
available on a routine production basis (see
ftp://ftp.ee.lbl.gov/talks/vj-doeqos.pdf and
ftp://ftp.ee.lbl.gov/talks/vj-i2qos-may98.pdf for details).

A.3 Simulation Results

A.3.1 Jitter variation

In section 2.2, we pointed out that a number of mechanisms might be
used to implement the EF PHB.  The simplest of these is a priority
queue (PQ) where the arrival rate of the queue is strictly less than
its service rate.  As jitter comes from the queuing delay along the
path, a feature of this implementation is that EF-marked microflows
will see very little jitter at their subscribed rate since packets
spend little time in queues.  The EF PHB does not have an explicit
jitter requirement, but it is clear from the definition that the
expected jitter in a packet stream that uses a service based on the EF
PHB will be less with PQ than with best-effort delivery.  We used
simulation to explore how weighted round-robin (WRR) compares to PQ in
jitter.  We chose these two because PQ and WRR appear to be the best
and worst cases, respectively, for jitter, and we wanted to supply
rough guidelines for EF implementers choosing to use WRR or similar
mechanisms.

Our simulation model is implemented in a modified ns-2, as described
in [RFC2415] and [LCN].  We used the CBQ modules included with ns-2 as
a basis to implement priority queuing and WRR.  Our topology has six
hops with decreasing bandwidth in the direction of a single 1.5 Mbps
bottleneck link (see figure 6).  Sources produce EF-marked packets at
an average bit rate equal to their subscribed packet rate.  Packets
are produced with a variation of +/-10% from the interpacket spacing
at the subscribed packet rate.  The individual source rates were
picked to aggregate to 30% of the bottleneck link, or 450 Kbps.  A
mixture of FTPs and HTTPs is then used to fill the link.  Individual
EF packet sources produce either all 160 byte packets or all 1500 byte
packets.  Though we present the statistics of flows with one size of
packet, all of the experiments used a mixture of short and long packet
EF sources so the EF queues had a mix of both packet lengths.

We defined jitter as the absolute value of the difference between the
arrival times of two adjacent packets minus their departure times,
|(aj-dj) - (ai-di)|.  For the target flow of each experiment, we
record the median and 90th percentile values of jitter (expressed as a
percentage of a packet time at the subscribed EF rate) in a table.
The pdf version of this document contains graphs of the jitter
percentiles.
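
For concreteness, the statistic can be computed from per-packet
departure and arrival timestamps as in the Python sketch below.  The
list layout and function name are assumptions of this sketch, not of
the simulator.

    def jitter_percentiles(departures, arrivals, pkt_time):
        """departures, arrivals: timestamps (seconds) for one microflow's
        packets, in order (at least two packets assumed).  Returns the
        (median, 90th percentile) jitter as a percentage of one packet
        time at the subscribed rate."""
        delays = [a - d for a, d in zip(arrivals, departures)]
        jitters = sorted(abs(delays[i] - delays[i - 1])
                         for i in range(1, len(delays)))
        def pct(p):
            return 100.0 * jitters[int(p * (len(jitters) - 1))] / pkt_time
        return pct(0.50), pct(0.90)

    # Packet times at the 56 Kbps subscribed rate used below:
    pkt_time_1500 = 1500 * 8 / 56_000     # ~0.214 s (214 ms)
    pkt_time_160 = 160 * 8 / 56_000       # ~0.023 s (23 ms)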

Our experiments compared the jitter of WRR and PQ implementations of
the EF PHB and assessed the effect of different choices of WRR queue
weight and number of queues on jitter.  For WRR, we define the
service-to-arrival rate ratio as the WRR share of the EF queue (that
is, the queue's minimum fraction of the output link) times the output
link bandwidth, divided by the peak arrival rate of EF-marked packets
at the queue.  Results will not be stable if the WRR weight is chosen
to exactly balance arrival and departure rates, so we used a minimum
service-to-arrival ratio of 1.03.  In our simulations this means that
the EF queue gets a weight of at least 31% of each output link.  In
the WRR simulations we kept the link full with other traffic as
described above, splitting the non-EF-marked traffic among the non-EF
queues.  (It should be clear from the experiment description that we
are attempting to induce worst-case jitter and do not expect these
settings or traffic to represent a "normal" operating point.)
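
A small worked example of the ratio, using the numbers above (the
function itself is only illustrative):

    def service_to_arrival(ef_link_share, link_bps, ef_peak_arrival_bps):
        """Service-to-arrival ratio: the EF queue's minimum share of the
        output link, expressed as a rate, divided by the peak EF arrival
        rate at that queue."""
        return ef_link_share * link_bps / ef_peak_arrival_bps

    # The EF aggregate is 30% of the 1.5 Mbps bottleneck (450 Kbps peak).
    # Giving the EF queue a 31% weight yields the minimum ratio used here:
    print(service_to_arrival(0.31, 1_500_000, 450_000))   # ~1.03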

Our first set of experiments uses the minimal service-to-arrival ratio
of 1.06 and we vary the number of individual microflows composing the
EF aggregate from 2 to 36.  We compare these to a PQ implementation
with 24 flows.  First, we examine a microflow at a subscribed rate of
56 Kbps sending 1500 byte packets, then one at the same rate but
sending 160 byte packets.  Table 1 shows the 50th and 90th percentile
jitter as a percentage of a packet time at the subscribed rate.
Figure 1 plots the 1500 byte flows and figure 2 the 160 byte flows.
Note that a packet-time for a 1500 byte packet at 56 Kbps is 214 ms
and for a 160 byte packet 23 ms.  The jitter for the large packets
rarely exceeds half a subscribed rate packet-time, though most jitters
for the small packets are at least one subscribed rate packet-time.
Keep in mind that the EF aggregate is a mixture of small and large
packets in all cases, so short packets can wait for long packets in
the EF queue.  PQ gives a very low jitter.

Table 1: Variation in jitter with number of EF flows:
Service/arrival ratio of 1.06 and subscription rate of 56 Kbps
(all values given as % of a packet time at the subscribed rate)

                   1500 byte packets    160 byte packets
      # EF flows    50th %   90th %      50th %   90th %
      PQ (24)          1        5          17       43
       2              11       47          96      513
       4              12       35         100      278
       8              10       25          96      126
      24              18       47          96      143

Next we look at the effects of increasing the service-to-arrival
ratio.  This means that EF packets should remain enqueued for less
time though the bandwidth available to the other queues remains the
same.  In this set of experiments the number of flows in the EF
aggregate was fixed at eight and the total number of queues at five
(four non-EF queues).  Table 2 shows the results for 1500 and 160 byte
flows.  Figure 3 plots the 1500 byte results and figure 4 the 160 byte
results.  Performance gains leveled off at service-to-arrival ratios
of 1.5.  Note that the higher service-to-arrival ratios do not give
the same performance as PQ, but now 90% of packets experience less
than a subscribed packet-time of jitter even for the small packets.

Table 2: Variation in Jitter of EF flows: service/arrival ratio
varies, 8 flow aggregate, 56 Kbps subscribed rate

      WRR          1500 byte packets    160 byte packets
      Ser/Arr       50th %   90th %      50th %   90th %
      PQ               1        3          17       43
      1.03            14       27         100      178
      1.30             7       21          65      113
      1.50             5       13          57      104
      1.70             5       13          57      100
      2.00             5       13          57      104
      3.00             5       13          57      100

Increasing the number of queues at the output interfaces can lead
to more variability in the service time for EF packets so we
carried out an experiment varying the number of queues at each
output port. We fixed the number of flows in the aggregate to
eight and used the minimal 1.03 service-to-arrival ratio. Results
are shown in figure 5 and table 3. Figure 5 includes PQ with 8
flows as a baseline.

Table 3: Variation in Jitter with Number of Queues at Output
Interface: Service-to-arrival ratio is 1.03, 8 flow aggregate

                     1500 byte packets
      # queues       50th %   90th %
      PQ (8)            1        3
       2                7       21
       4                7       21
       6                8       22
       8               10       23

It appears that most jitter for WRR is low and can be reduced by
a proper choice of the EF queue's WRR share of the output link
with respect to its subscribed rate. As noted, WRR is a worst
case while PQ is the best case. Other possibilities include WFQ
or CBQ with a fixed rate limit for the EF queue but giving it
priority over other queues. We expect the latter to have
performance nearly identical with PQ though future simulations
are needed to verify this. We have not yet systematically
explored effects of hop count, EF allocations other than 30% of
the link bandwidth, or more complex topologies. The information
in this section is not part of the EF PHB definition but provided
simply as background to guide implementers.

A.3.2 VLL service

We used simulation to see how well a VLL service built from the EF PHB
behaved, that is, does it look like a "leased line" at the subscribed
rate.  In the simulations of the last section, none of the EF packets
were dropped in the network and the target rate was always achieved
for those CBR sources.  However, we wanted to see if a VLL really
looks like a "wire" to a TCP using it.  So we simulated long-lived
FTPs using a VLL service.  Table 4 gives the percentage of each link
allocated to EF traffic (bandwidths are lower on the links with fewer
EF microflows), the subscribed VLL rate, the average rate for the same
type of sender-receiver pair connected by a full duplex dedicated link
at the subscribed rate, and the average rate of the VLL flows for each
simulation (all sender-receiver pairs had the same value).  Losses
occur only when the input shaping buffer overflows, not in the
network.  The target rate is not achieved due to well-known TCP
behavior; note that the dedicated line shows the same shortfall.

Table 4: Performance of FTPs using a VLL service

      % link           Average delivered rate (Kbps)
      to EF       Subscribed     Dedicated      VLL
        20            100            90          90
        40            150           143         143
        60            225           213         215