Network Working Group                                       Debra Stopp
                                                             Hardev Soor
INTERNET-DRAFT                                                      IXIA
Expires in: November 2002

              Methodology for IP Multicast Benchmarking
                   <draft-ietf-bmwg-mcastm-09.txt>

Status of this Memo

This document is an Internet-Draft and is in full conformance with
all provisions of Section 10 of RFC2026.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups.  Note that
other groups may also distribute working documents as Internet-
Drafts.
skipping to change at page 2, line 7
The BMWG produces two major classes of documents: Benchmarking
Terminology documents and Benchmarking Methodology documents.  The
Terminology documents present the benchmarks and other related
terms.  The Methodology documents define the procedures required to
collect the benchmarks cited in the corresponding Terminology
documents.

Table of Contents
1. INTRODUCTION....................................................3
2. KEY WORDS TO REFLECT REQUIREMENTS...............................3
3. TEST SET UP.....................................................3
3.1. Test Considerations...........................................5
3.1.1. IGMP Support...............................................5
3.1.2. Group Addresses............................................5
3.1.3. Frame Sizes................................................6
3.1.4. TTL........................................................6
3.1.5. Trial Duration.............................................6
3.2. Layer 2 Support...............................................6
4. FORWARDING AND THROUGHPUT.......................................6
4.1. Mixed Class Throughput........................................6
4.2. Scaled Group Forwarding Matrix................................8
4.3. Aggregated Multicast Throughput...............................8
4.4. Encapsulation/Decapsulation (Tunneling) Throughput............9
4.4.1. Encapsulation Throughput...................................9
4.4.2. Decapsulation Throughput..................................10
4.4.3. Re-encapsulation Throughput...............................10
5. FORWARDING LATENCY.............................................11
5.1. Multicast Latency............................................11
5.2. Min/Max Multicast Latency....................................14
6. OVERHEAD.......................................................15
6.1. Group Join Delay.............................................15
6.2. Group Leave Delay............................................15
7. CAPACITY.......................................................16
7.1. Multicast Group Capacity.....................................16
8. INTERACTION....................................................17
8.1. Forwarding Burdened Multicast Latency........................17
8.2. Forwarding Burdened Group Join Delay.........................18
9. SECURITY CONSIDERATIONS........................................19
10. ACKNOWLEDGEMENTS..............................................19
11. REFERENCES....................................................20
12. AUTHOR'S ADDRESSES............................................21
13. FULL COPYRIGHT STATEMENT......................................21
1. Introduction

This document defines a specific set of tests that vendors can use
to measure and report the performance characteristics and
forwarding capabilities of network devices that support IP
multicast protocols.  The results of these tests will provide the
user comparable data from different vendors with which to evaluate
these devices.

A previous document, "Terminology for IP Multicast Benchmarking"
skipping to change at page 5, line 21
The procedures outlined below are written without regard for
specific physical layer or link layer protocols.  The methodology
further assumes a uniform medium topology.  Issues regarding mixed
transmission media, such as speed mismatch, header differences,
etc., are not specifically addressed.  Flow control, QoS and other
traffic-affecting mechanisms MUST be disabled.  Modifications to
the specified collection procedures might need to be made to
accommodate the transmission media actually tested.  These
accommodations MUST be presented with the test results.
An actual flow of test traffic may be required to prime related
mechanisms (e.g., process RPF events, build device caches, etc.)
to optimally forward subsequent traffic. Therefore, before an
initial, measured forwarding test trial, the test apparatus MUST
generate test traffic utilizing the same addressing characteristics
to the DUT/SUT that will subsequently be used to measure the
DUT/SUT response. The test monitor should ensure the correct
forwarding of traffic by the DUT/SUT. The priming action need only
be repeated to keep the associated information current.
3.1.1. IGMP Support

Each of the destination ports should support and be able to test
all IGMP versions 1, 2 and 3.  The minimum requirement, however, is
IGMP version 2.

Each destination port should be able to respond to IGMP queries
during the test.

Each destination port should also send LEAVE (running IGMP version
2) after each test.
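As an informative illustration only (not part of the required
methodology), the sketch below shows one way a test port might
construct the raw IGMPv2 Membership Report and Leave Group messages
referred to above; the group address and helper names are arbitrary
examples, and transmission details are left to the test tool.

   # Illustrative sketch: build IGMPv2 Membership Report (type 0x16)
   # and Leave Group (type 0x17) payloads per RFC 2236.  Sending them
   # (IP protocol 2, TTL 1, Router Alert option) is up to the tester.
   import socket
   import struct

   def igmp_checksum(data: bytes) -> int:
       # One's-complement sum of 16-bit words.
       total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
       total = (total & 0xFFFF) + (total >> 16)
       return ~total & 0xFFFF

   def igmpv2_message(msg_type: int, group: str) -> bytes:
       group_bytes = socket.inet_aton(group)
       unchecked = struct.pack("!BBH4s", msg_type, 0, 0, group_bytes)
       return struct.pack("!BBH4s", msg_type, 0,
                          igmp_checksum(unchecked), group_bytes)

   join  = igmpv2_message(0x16, "239.1.1.1")  # Membership Report
   leave = igmpv2_message(0x17, "239.1.1.1")  # Leave Group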
3.1.2. Group Addresses

It is intended that the collection of benchmarks prescribed in
this document be executed in an isolated lab environment.  That
is to say, the test traffic offered the tested devices MUST NOT
traverse a live internet, intranet, or other user-oriented network.

Assuming the above, there is no restriction to the use of multicast
addresses to compose the test traffic other than those assignments
imposed by IANA.  The IANA assignments MUST be regarded for
operational consistency.  For multicast address assignments see:

http://www.iana.org/assignments/multicast-addresses

It should be noted that address selection need not be restricted to
Administratively Scoped IP Multicast addresses.
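Purely as an illustration of the point above, the following sketch
enumerates candidate group addresses for a test while skipping the
224.0.0.0/24 Local Network Control Block that IANA reserves for
link-local protocols; the starting address is an arbitrary
assumption, and nothing limits a tester to this approach.

   # Illustrative sketch: pick 'count' test group addresses, avoiding
   # the link-local 224.0.0.0/24 block.  Administratively Scoped
   # (239.0.0.0/8) addresses may be used but are not required.
   import ipaddress

   def test_groups(count, start="225.0.1.0"):
       groups, addr = [], ipaddress.IPv4Address(start)
       local_control = ipaddress.IPv4Network("224.0.0.0/24")
       while len(groups) < count:
           if addr not in local_control:
               groups.append(str(addr))
           addr += 1
       return groups

   print(test_groups(3))  # ['225.0.1.0', '225.0.1.1', '225.0.1.2']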
3.1.3. Frame Sizes

Each test SHOULD be run with different Multicast Frame Sizes.  The
recommended frame sizes are 64, 128, 256, 512, 1024, 1280, and 1518
byte frames.

3.1.4. TTL

The source frames should have a TTL value large enough to
skipping to change at page 6, line 49
This section contains the description of the tests that are related
to the characterization of the packet forwarding of a DUT/SUT in a
multicast environment.  Some metrics extend the concept of
throughput presented in RFC 1242.  The notion of Forwarding Rate is
cited in RFC 2285.

4.1. Mixed Class Throughput

Objective

To determine the throughput of a DUT/SUT when both unicast class
frames and multicast class frames are offered simultaneously to a
fixed number of ports as defined in RFC 2432.

Procedure

Multicast and unicast traffic are mixed together in the same
aggregated traffic stream in order to simulate the non-homogenous
networking environment.  The DUT/SUT MUST learn the appropriate
unicast IP addresses, either by sending ARP frames from each
unicast address, sending a RIP packet or by assigning static
entries into the DUT/SUT address table.
The relationship between the intended load [Ma98] of multicast
class frames vs. unicast class frames MUST be specified:

   a) As an independent rate for unicast class and multicast
      class of traffic OR
   b) As an aggregate rate comprised of a ratio of multicast
      class to unicast class of traffic.

The offered load per each DUT/SUT port MUST NOT exceed the maximum
bandwidth capacity of any configured receive DUT/SUT ports.

All DUT/SUT ports configured to receive multicast traffic MUST join
all configured multicast groups prior to transmitting test frames.
Joining a group is accomplished by sending an IGMP Join Group
message.  All DUT/SUT ports configured to receive unicast traffic
MUST send learning frames prior to transmitting test frames (see
section 3 for more information).

Unicast traffic distribution can either be non-meshed or meshed
[Ma98] as specified in RFC2544 or RFC2889.  A minimum of one
unicast transmit port MUST be configured to transmit unicast
traffic to a DUT/SUT port that is configured to receive unicast and
multicast traffic.

Multicast traffic distribution MUST be configured to transmit
traffic in a one-to-many mesh [Ma98] configuration.  A minimum of
one multicast transmit port MUST be configured to transmit
multicast traffic to a DUT/SUT port that is configured to receive
multicast traffic.
Throughput measurement is defined in RFC1242 [Br91].  A search
algorithm MUST be utilized to determine the maximum offered frame
rate with a zero frame loss rate.
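The search algorithm itself is not mandated; the sketch below shows
one common choice, an RFC 2544 style binary search for the highest
zero-loss rate, together with option (b)'s arithmetic for splitting
an aggregate rate by a multicast:unicast ratio.  The offer_trial()
hook is a hypothetical tester interface, not an API defined by this
document.

   # Illustrative sketch, not a normative procedure.
   def split_aggregate(aggregate_fps, mcast_fraction):
       # Option (b): e.g. 10,000 fps at 0.8 -> 8,000 multicast fps
       # and 2,000 unicast fps.
       return (aggregate_fps * mcast_fraction,
               aggregate_fps * (1.0 - mcast_fraction))

   def zero_loss_throughput(offer_trial, max_fps, resolution=100.0):
       # offer_trial(rate) -> (frames_offered, frames_received);
       # hypothetical hook into the tester.
       lo, hi, best = 0.0, float(max_fps), 0.0
       while hi - lo > resolution:
           rate = (lo + hi) / 2.0
           offered, received = offer_trial(rate)
           if received == offered:      # zero frame loss
               best, lo = rate, rate
           else:
               hi = rate
       return best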
Result

Parameters to be measured MUST include the aggregate offered load,
number of multicast frames offered, number of unicast frames
offered, number of multicast frames received, number of unicast
frames received and transmit duration of offered frames.
4.2. Scaled Group Forwarding Matrix

Objective

To determine Forwarding Rate as a function of tested multicast
groups for a fixed number of tested DUT/SUT ports.

Procedure

Multicast traffic is sent at a fixed percent of maximum offered
load with a fixed number of receive ports of the tester at a fixed
frame length.

On each iteration, the receive ports SHOULD incrementally join 10
multicast groups until a user defined maximum number of groups is
reached.
skipping to change at page 8, line 46
iteration: the number of frames offered, number of frames received
per each group, number of multicast groups and forwarding rate, in
frames per second, and transmit duration of offered frames.

Constructing a table that contains the forwarding rate vs. number
of groups is desirable.
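A minimal sketch of the iteration and table construction described
above, assuming a hypothetical run_trial() hook that joins the
requested number of groups, offers the fixed load for the trial
duration and returns the number of frames received.

   # Illustrative sketch: forwarding rate vs. number of groups.
   def scaled_group_matrix(run_trial, max_groups, duration_s,
                           step=10):
       table = []                    # rows of (groups, rate in fps)
       groups = step
       while groups <= max_groups:
           received = run_trial(num_groups=groups)
           table.append((groups, received / duration_s))
           groups += step
       return table

   # e.g.: for n, fps in scaled_group_matrix(trial, 100, 120):
   #           print("%5d groups   %10.1f fps" % (n, fps))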
4.3. Aggregated Multicast Throughput

Objective

To determine the maximum rate at which none of the offered frames
to be forwarded through N destination interfaces of the same
multicast group is dropped.
Procedure

Multicast traffic is sent at a fixed percent of maximum offered
load with a fixed number of groups at a fixed frame length for a
fixed duration of time.

The initial number of receive ports of the tester will join the
group(s) and the sender will transmit to the same groups after a
certain delay (a few seconds).
skipping to change at page 9, line 41
4.4. Encapsulation/Decapsulation (Tunneling) Throughput

This sub-section provides the description of tests that help in
obtaining throughput measurements when a DUT/SUT or a set of DUTs
are acting as tunnel endpoints.

4.4.1. Encapsulation Throughput

Objective

To determine the maximum rate at which frames offered a DUT/SUT are
encapsulated and correctly forwarded by the DUT/SUT without loss.

Procedure

Traffic is sent through a DUT/SUT that has been configured to
encapsulate the frames.  Traffic is received on a test port prior
to decapsulation and throughput is calculated based on RFC2544.

Results

Parameters to be measured SHOULD include the measured throughput
per tunnel.

The nature of the traffic stream contributing to the result MUST be
reported.  All required reporting parameters of encapsulation
throughput MUST be reflected in the results report, such as the
transmitted packet size(s), offered load of the packet stream and
transmit duration of offered frames.
4.4.2. Decapsulation Throughput

Objective

To determine the maximum rate at which frames offered a DUT/SUT are
decapsulated and correctly forwarded by the DUT/SUT without loss.

Procedure

Encapsulated traffic is sent through a DUT/SUT that has been
configured to decapsulate the frames.  Traffic is received on a
test port after decapsulation and throughput is calculated based on
RFC2544.

Results
skipping to change at page 10, line 40
The nature of the traffic stream contributing to the result MUST be
reported.  All required reporting parameters of decapsulation
throughput MUST be reflected in the results report, such as the
transmitted packet size(s), offered load of the packet stream and
transmit duration of offered frames.

4.4.3. Re-encapsulation Throughput

Objective

To determine the maximum rate at which frames of one encapsulated
format offered a DUT/SUT are converted to another encapsulated
format and correctly forwarded by the DUT/SUT without loss.
Procedure

Traffic is sent through a DUT/SUT that has been configured to
encapsulate frames into one format, then re-encapsulate the frames
into another format.  Traffic is received on a test port after all
decapsulation is complete and throughput is calculated based on
RFC2544.

Results
skipping to change at page 11, line 19
transmit duration of offered frames.

5. Forwarding Latency

This section presents methodologies relating to the
characterization of the forwarding latency of a DUT/SUT in a
multicast environment.  It extends the concept of latency
characterization presented in RFC 2544.
In order to lessen the effect of packet buffering in the DUT/SUT,
the latency tests MUST be run at the measured multicast throughput
level of the DUT; multicast latency at other offered loads is
optional.
Lastly, RFC 1242 and RFC 2544 draw distinction between two classes
of devices: "store and forward" and "bit-forwarding."  Each class
impacts how latency is collected and subsequently presented.  See
the related RFCs for more information.  In practice, much of the
test equipment will collect the latency measurement for one class
or the other, and, if needed, mathematically derive the reported
value by the addition or subtraction of values accounting for
medium propagation delay of the packet, bit times to the timestamp
trigger within the packet, etc.  Test equipment vendors SHOULD
skipping to change at page 11, line 46
vendors.  (E.g., If test vendor A presents a "store and forward"
latency result and test vendor B presents a "bit-forwarding"
latency result, the user may erroneously conclude the DUT has two
differing sets of latency values.)

5.1. Multicast Latency

Objective

To produce a set of multicast latency measurements from a single,
multicast ingress port of a DUT/SUT through multiple, egress
multicast ports of that same DUT/SUT as provided for by the metric
"Multicast Latency" in RFC 2432.
The procedures highlighted below attempt to draw from the
collection methodology for latency in RFC 2544 to the degree
possible.  The methodology addresses two topological scenarios: one
for a single device (DUT) characterization; a second scenario is
presented for multiple device (SUT) characterization.

Procedure

If the test trial is to characterize latency across a single Device
skipping to change at page 12, line 28
If the multicast latencies are to be taken across multiple devices
forming a System Under Test (SUT), an example test topology might
take the form of Figure 2 in section 3.

The trial duration SHOULD be 120 seconds.  Departures to the
suggested traffic class guidelines MUST be disclosed with the
respective trial results.  The nature of the latency measurement,
"store and forward" or "bit forwarding," MUST be associated with
the related test trial(s) and disclosed in the results report.
End-to-end reachability of the test traffic path MUST be verified
prior to the engagement of a test trial.  This implies that
subsequent measurements are intended to characterize the latency
across the tested device's or devices' normal traffic forwarding
path (e.g., faster hardware-based engines) of the device(s) as
opposed to a non-standard traffic processing path (e.g., slower,
software-based exception handlers).  If the test trial is to be
executed with the intent of characterizing a non-optimal forwarding
condition, then a description of the exception processing
conditions being characterized MUST be included with the trial's
results.
A test traffic stream is presented to the DUT.  At the mid-point of
the trial's duration, the test apparatus MUST inject a uniquely
identifiable ("tagged") packet into the test traffic packets being
presented.  This tagged packet will be the basis for the latency
measurements.  By "uniquely identifiable," it is meant that the
test apparatus MUST be able to discern the "tagged" packet from the
skipping to change at page 14, line 11
The Offered Load of the test traffic presented the DUT/SUT, size of
the "tagged" packet, transmit duration of offered frames and nature
(i.e., store-and-forward or bit-forwarding) of the trial's
measurement MUST be associated with any reported test trial's
result.
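A minimal sketch of the per-egress-port latency computation,
assuming the tester supplies the tagged packet's ingress timestamp
and an egress timestamp per port, already adjusted for the
store-and-forward or bit-forwarding convention in use; the port
names and timestamp values are arbitrary examples.

   # Illustrative sketch: build the latency set M used in section 5.2.
   def multicast_latency_set(ingress_ts, egress_ts_by_port):
       # egress_ts_by_port: {"E1": t1, "E2": t2, ...}
       return {port: ts - ingress_ts
               for port, ts in egress_ts_by_port.items()}

   M = multicast_latency_set(
           10.000000,
           {"E1": 10.000045, "E2": 10.000052, "E3": 10.000061})
   # M maps each egress port to roughly 45, 52 and 61 microseconds.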
5.2. Min/Max Multicast Latency

Objective

To determine the difference between the maximum latency measurement
and the minimum latency measurement from a collected set of
latencies produced by the Multicast Latency benchmark.
Procedure

Collect a set of multicast latency measurements, as prescribed in
section 5.1.  This will produce a set of multicast latencies, M,
where M is composed of individual forwarding latencies between DUT
packet ingress and DUT packet egress port pairs.  E.g.:

   M = {L(I,E1), L(I,E2), ..., L(I,En)}

where L is the latency between a tested ingress port, I, of the
DUT, and Ex a specific, tested multicast egress port of the DUT.
E1 through En are unique egress ports on the DUT.
From the collected multicast latency measurements in set M,
identify MAX(M), where MAX is a function that yields the largest
latency value from set M.

Identify MIN(M), where MIN is a function that yields the smallest
latency value from set M.
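Expressed as a short sketch over the set M collected above, reusing
the hypothetical structure from the section 5.1 example:

   # Illustrative sketch: Min/Max Multicast Latency is the spread of
   # the latency set M.
   def min_max_latency(M):
       values = list(M.values())
       return max(values) - min(values)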
skipping to change at page 15, line 15
6. Overhead

This section presents methodology relating to the characterization
of the overhead delays associated with explicit operations found in
multicast environments.

6.1. Group Join Delay

Objective

To determine the time duration it takes a DUT/SUT to start
forwarding multicast packets from the time a successful IGMP group
membership report has been issued to the DUT/SUT.
Procedure

Traffic is sent on the source port at the same time as the IGMP
JOIN Group message is transmitted from the destination ports.  The
join delay is the difference in time from when the IGMP Join is
sent (timestamp A) and the first frame is forwarded to a receiving
member port (timestamp B).

   Group Join delay = timestamp B - timestamp A
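A trivial sketch of the calculation above; the timestamps are
assumed to come from the tester, per group and per destination
port.  The leave delay of section 6.2 is computed the same way,
with timestamp B taken from the last forwarded frame.

   # Illustrative sketch only.
   def group_join_delay(join_sent_ts, first_frame_ts):
       # timestamp B - timestamp A
       return first_frame_ts - join_sent_ts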
skipping to change at page 15, line 48
The parameter to be measured is the join delay time for each
multicast group address per destination port.  In addition, the
number of frames transmitted and received and percent loss may be
reported.

6.2. Group Leave Delay

Objective

To determine the time duration it takes a DUT/SUT to cease
forwarding multicast packets after a corresponding IGMP "Leave
Group" message has been successfully offered to the DUT/SUT.
Procedure

Traffic is sent on the source port at the same time as the IGMP
Leave Group messages are transmitted from the destination ports.
The leave delay is the difference in time from when the IGMP leave
is sent (timestamp A) and the last frame is forwarded to a
receiving member port (timestamp B).

   Group Leave delay = timestamp B - timestamp A
skipping to change at page 16, line 38
7. Capacity

This section offers terms relating to the identification of
multicast group limits of a DUT/SUT.

7.1. Multicast Group Capacity

Objective

To determine the maximum number of multicast groups a DUT/SUT can
support while maintaining the ability to forward multicast frames
to all multicast groups registered to that DUT/SUT.
Procedure

One or more destination ports of DUT/SUT will join an initial
number of groups.

Then after a delay (enough time for all ports to join) the source
port will transmit to each group at a transmission rate that the
DUT/SUT can handle without dropping IP Multicast frames.
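One possible, non-normative way to drive the capacity search is
sketched below; verify_all_groups() is a hypothetical hook that
joins the requested number of groups, transmits to each and reports
whether every group's frames were forwarded.

   # Illustrative sketch: grow the group count until some registered
   # group stops being forwarded; the last fully forwarded count is
   # the reported Multicast Group Capacity.
   def multicast_group_capacity(verify_all_groups, start=10,
                                step=10, limit=10000):
       supported, groups = 0, start
       while groups <= limit:
           if not verify_all_groups(num_groups=groups):
               break
           supported = groups
           groups += step
       return supported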
skipping to change at page 17, line 32
Network forwarding devices are generally required to provide more
functionality than just the forwarding of traffic.  Moreover,
network-forwarding devices may be asked to provide those functions
in a variety of environments.  This section offers terms to assist
in the characterization of DUT/SUT behavior in consideration of
potentially interacting factors.

8.1. Forwarding Burdened Multicast Latency
Objective

To produce a set of multicast latency measurements from a single,
multicast ingress port of a DUT/SUT through multiple, egress
multicast ports of that same DUT/SUT as provided for by the metric
"Multicast Latency" in RFC 2432, while burdening the DUT/SUT by
injecting addresses into the DUT/SUT address table.

Procedure
The Multicast Latency metrics can be influenced by forcing the
DUT/SUT to perform extra processing of packets while multicast
class traffic is being forwarded for latency measurements.  As
described in Section 5.1, a set of ports on the tester will be
designated to be the source and destination in this test.  In
addition to this setup, another set of ports will be selected to
transmit some multicast class traffic that is destined to multicast
group addresses that have not been joined by this additional set of
ports.

For example, ports 1, 2, 3, and 4 form the burdened response setup
(setup A) which is used to obtain the latency metrics and ports 5,
6, 7, and 8 form the non-burdened response setup (setup B) which
will afflict the burdened response setup.  Setup B traffic will
then be sent to multicast group addresses not joined by the ports
in this setup.  By sending such multicast class traffic, the
DUT/SUT will perform a lookup on the packets that will affect the
processing of setup A traffic.
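A sketch of the port plan described in the example above; the group
addresses are arbitrary placeholders, and the only constraint
illustrated is that setup B transmits to groups no tester port has
joined.

   # Illustrative sketch of the burdened-latency configuration.
   SETUP_A = {                     # latency measurement, section 5.1
       "ports":  [1, 2, 3, 4],
       "groups": ["239.1.1.1", "239.1.1.2"],   # joined by receivers
   }
   SETUP_B = {                     # burden traffic, never joined
       "ports":  [5, 6, 7, 8],
       "groups": ["239.2.2.1", "239.2.2.2"],
   }
   assert not set(SETUP_A["groups"]) & set(SETUP_B["groups"])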
Results

Result reports MUST include the following parameters for each
iteration: transmitted packet size, the number of frames offered,
number of frames received per each group, number of multicast
groups and forwarding rate in frames per second, number of
addresses injected into address table for that iteration, and
transmit duration of offered frames.  The result report must also
specify the number of source and destination ports within the
multicast group, as well as the ports designated to inject
addresses throughout the test.

The following metrics MUST be reported:

   1) The set of latency measurements
   2) The nature of latency measured (i.e., store-and-forward or
      bit-forwarding)
   3) The significant environmental, methodological, or device
      particulars giving insight into the test or its results.

Constructing a table that contains the latency vs. number of
injected addresses is desirable.
8.2. Forwarding Burdened Group Join Delay
Objective

To determine the time duration it takes a DUT/SUT to start
forwarding multicast packets from the time a successful IGMP group
membership report has been issued to the DUT/SUT while burdening
the DUT/SUT by injecting addresses into the DUT/SUT address table
on an unrelated set of ports.

Procedure
The port configuration in this test is similar to the one described
in Sections 6.1 and 8.1; however, the additional set of transmit
ports, which comprises setup B, does not send multicast class
traffic.  Setup A traffic must be influenced in such a way that
will affect the DUT's/SUT's ability to process Group Join messages.
Therefore, in this test, the ports in setup B will send a set of
IGMP Group Join messages while the ports in setup A are
simultaneously joining their own set of group addresses.  Since the
two sets of group addresses are independent of each other, the
group join delay for setup A may be different than in the case when
there were no other group addresses being joined.
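A brief sketch, under the same hypothetical tester hooks as the
earlier examples, of issuing the two independent sets of joins back
to back and measuring join delay only for setup A's groups.

   # Illustrative sketch only.
   def burdened_join_delay(send_join, first_frame_ts, groups_a,
                           groups_b):
       # send_join(group) returns the transmit timestamp (A);
       # first_frame_ts(group) returns the first-frame timestamp (B).
       sent_at = {}
       for group in groups_a + groups_b:   # joins sent back to back
           sent_at[group] = send_join(group)
       return {g: first_frame_ts(g) - sent_at[g] for g in groups_a}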
Results

Similar to Section 6.1, the parameter to be measured is the join
delay time for each multicast group address per destination port.

Result reports MUST specify the number of multicast groups joined
in the join delay port group, the number of groups joined by the
unrelated ports, the number of source and destination ports within
the join delay port group, and the number of unrelated ports
designated to inject addresses throughout the test.

Constructing a table that contains the join delay time vs. number
of injected addresses is desirable.
9. Security Considerations

As this document is solely for the purpose of providing metric
methodology and describes neither a protocol nor a protocol's
implementation, there are no security considerations associated
with this document.
10. Acknowledgements

The authors would like to acknowledge the following individuals for
their help and participation in the compilation and editing of this
document: Ralph Daniels, Netcom Systems, who made significant
contributions to earlier versions of this draft, Michele Bustos,
IXIA, and Kevin Dubray, Juniper Networks.
11. References

[Br91] Bradner, S., "Benchmarking Terminology for Network
       Interconnection Devices", RFC 1242, July 1991.

[Br96] Bradner, S., and J. McQuaid, "Benchmarking Methodology for
       Network Interconnect Devices", RFC 2544, March 1999.

[Br97] Bradner, S. "Use of Keywords in RFCs to Reflect Requirement