Network Working Group                                       Hardev Soor
INTERNET-DRAFT                                              Debra Stopp
Expires in: August 2000                              Ixia Communications

                                                          Ralph Daniels
                                                         Netcom Systems

                                                             March 2000

               Methodology for IP Multicast Benchmarking
                    <draft-ietf-bmwg-mcastm-03.txt>

Status of this Memo

This document is an Internet-Draft and is in full conformance with
all provisions of Section 10 of RFC 2026.

Internet-Drafts are working documents  of  the  Internet  Engineering
Task  Force  (IETF),  its  areas,  and its working groups.  Note that
other groups may  also  distribute  working  documents  as  Internet-
Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and  may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The  list   of   current   Internet-Drafts   can   be   accessed   at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow  Directories  can  be  accessed  at
http://www.ietf.org/shadow.html.

Abstract

The purpose of this draft is to describe methodology specific to  the
benchmarking  of  multicast IP forwarding devices. It builds upon the
tenets set forth in RFC 2544, RFC 2432 and  other  IETF  Benchmarking
Methodology  Working  Group  (BMWG)  efforts.  This document seeks to
extend these efforts to the multicast paradigm.

The BMWG  produces  two  major  classes  of  documents:  Benchmarking
Terminology  documents  and  Benchmarking  Methodology documents. The
Terminology documents present the benchmarks and other related terms.
The  Methodology  documents define the procedures required to collect
the benchmarks cited in the corresponding Terminology documents.

1 Introduction

This document defines a specific set of tests that vendors can use to
measure  and  report  the  performance characteristics and forwarding
capabilities of network devices that support IP multicast  protocols.
The results of these tests will provide the user with comparable data
from different vendors with which to evaluate these devices.

A previous document, "Terminology for IP Multicast Benchmarking"
(RFC 2432), defined many of the terms that are used in this document.
The terminology document should be  consulted  before  attempting  to
make use of this document.

This methodology will focus  on  one  source  to  many  destinations,
although  many of the tests described may be extended to use multiple
source to multiple destination IP multicast communication.

2 Key Words to Reflect Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL",  "SHALL  NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.

3 Test set up

Figure 1 shows a typical setup for an IP  multicast  test,  with  one
source  to  multiple  destinations,  although this MAY be extended to
multiple source to multiple destinations.

                                                   +----------------+
                           +------------+          |                |
        +--------+         |            |--------->| destination(1) |
        |        |         |            |          |                |
        | source |-------->|            |          +----------------+
        |        |         |            |          +----------------+
        +--------+         |   D U T    |--------->|                |
                           |            |          | destination(2) |
                           |            |          |                |
                           |            |          +----------------+
                           |            |               . . .
                           |            |          +----------------+
                           |            |          |                |
                           |            |--------->| destination(n) |
                           |            |          |                |
                           |            |          +----------------+
                           |            |
                           +------------+

                               Figure 1

Generally, the destination ports first join the desired number of
multicast groups by sending IGMP Join Group messages to the DUT/SUT. To
verify that all destination ports successfully joined the appropriate
groups, the source port MUST transmit IP multicast frames destined for
these groups. The destination ports MAY send IGMP Leave Group messages
after the transmission of IP Multicast frames to clear the IGMP table of
the DUT/SUT.
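As a non-normative illustration of the setup traffic described above, the following Python sketch builds raw IGMPv2 Membership Report and Leave Group messages (per the RFC 2236 message layout) that a tester's destination ports would emit; actually transmitting them on the wire is left to the test equipment, and the helper names are illustrative only.

```python
import struct

def inet_checksum(data: bytes) -> int:
    # RFC 1071 ones'-complement checksum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def igmpv2_message(msg_type: int, group: str) -> bytes:
    # IGMPv2 message: type, max response time, checksum, group address.
    gaddr = bytes(int(octet) for octet in group.split("."))
    unchecksummed = struct.pack("!BBH4s", msg_type, 0, 0, gaddr)
    return struct.pack("!BBH4s", msg_type, 0,
                       inet_checksum(unchecksummed), gaddr)

def membership_report(group: str) -> bytes:
    return igmpv2_message(0x16, group)   # Version 2 Membership Report

def leave_group(group: str) -> bytes:
    return igmpv2_message(0x17, group)   # Leave Group
```

A receiver can validate such a message by checksumming the whole 8-byte packet; a correct IGMP checksum makes that sum fold to zero.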

In addition, all transmitted frames MUST contain a recognizable pattern
that can be filtered on in order to ensure the receipt of only the
frames that are involved in the test.

  3.1    Test Considerations

  3.2    IGMP Support

  Each of the receiving ports of the tester should support and be able
  to test all IGMP versions 1, 2 and 3. The minimum requirement,
  however, is IGMP version 2.

  Each receiving port should be able to respond to IGMP queries during
  the test.

  Each receiving port should also send an IGMP Leave Group message
  (when running IGMP version 2) after each test.

  3.3    Group Addresses

  The Class D Group address SHOULD be changed between tests.  Many DUTs
  have memory or cache that is not cleared properly and can bias the
  results.

  The following group addresses are recommended for use in a test:

          224.0.1.27-224.0.1.255
          224.0.5.128-224.0.5.255
          224.0.6.128-224.0.6.255

  If the number of group addresses accommodated by these ranges does
  not satisfy the requirements of the test, then these ranges may be
  overlapped.

  The total number of configured group addresses must be less than or
  equal to the IGMP table size of the DUT/SUT.
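The recommended ranges above can be enumerated programmatically. The following non-normative Python sketch (the helper name is illustrative) draws group addresses from the recommended ranges in order and refuses to overlap them:

```python
import ipaddress

# Recommended multicast group ranges from section 3.3 of this document.
RANGES = [
    ("224.0.1.27", "224.0.1.255"),
    ("224.0.5.128", "224.0.5.255"),
    ("224.0.6.128", "224.0.6.255"),
]

def group_addresses(count: int) -> list:
    # Return `count` group addresses drawn in order from the
    # recommended ranges; raise once the ranges are exhausted.
    out = []
    for lo, hi in RANGES:
        start = int(ipaddress.IPv4Address(lo))
        end = int(ipaddress.IPv4Address(hi))
        for n in range(start, end + 1):
            out.append(str(ipaddress.IPv4Address(n)))
            if len(out) == count:
                return out
    raise ValueError("recommended ranges exhausted; overlap required")
```

The three ranges provide 229 + 128 + 128 = 485 distinct addresses before overlapping becomes necessary.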

  3.4    Frame Sizes

  Each test should be run with different Multicast Frame Sizes. The
  recommended frame sizes are 64, 128, 256, 512, 1024, 1280, and 1518
  byte frames.

  3.5     TTL
  The source frames should have a TTL value large enough to accommodate
  the DUT/SUT.

  3.6     Layer 2 Support

  Each of the receiving ports of the tester should support GARP/GMRP
  protocols to join groups on Layer 2 DUTs/SUTs.

4 Forwarding and Throughput

This section contains the description of the tests that are related to
the characterization of the packet forwarding of a DUT/SUT in a
multicast environment. Some metrics extend the concept of throughput
presented in RFC 1242. The notion of Forwarding Rate is cited in RFC
2285.

  4.1    Mixed Class Throughput

  Definition

   The maximum rate at which none of the offered frames, comprised from
   a unicast Class and a multicast Class, to be forwarded are dropped by
   the device across a fixed number of ports.

  Procedure

   Multicast and unicast traffic are mixed together in the same
   aggregated traffic stream in order to simulate the non-homogenous
   networking environment. While the multicast traffic is transmitted
   from one source to multiple destinations, the unicast traffic MAY be
   evenly distributed across the DUT/SUT architecture. In addition, the
   DUT/SUT SHOULD learn the appropriate unicast IP addresses, either by
   sending ARP frames from each unicast address, sending a RIP packet or
   by assigning static entries into the DUT/SUT address table.

   The rates at which traffic is transmitted for both multicast and
   unicast traffic classes MUST be set up in one of two ways:

      a) A percentage of the bandwidth is allocated for each traffic
         class and frames for each class are transmitted at the rate
         equal to the allocated bandwidth. For example, 64 byte frames
         can be transmitted at a theoretical maximum rate of 148810
         frames/second. If 80 percent of the total bandwidth is
         allocated for unicast traffic and 20 percent for multicast
         traffic, then unicast traffic will be sent at a rate of 119048
         frames/second and the multicast traffic at a rate of 29762
         frames/second.

      b) Transmission rate is fixed for both traffic classes and the
         number of frames for each traffic class is specified. For
         example, if a fixed rate of 100% of theoretical maximum is
         desired, then 64 byte frames will be sent at 148810
         frames/second for both unicast and multicast traffic. If 80
         percent of the frames are to be unicast and 20 percent
         multicast, then for a duration of 30 seconds, 3571440 frames
         of unicast and 892860 frames of multicast will be sent. This
         fixed rate scenario actually over-subscribes the bandwidth,
         potentially causing congestion in the DUT/SUT.
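The arithmetic in the two examples above can be sketched as follows (non-normative Python; the function names are illustrative):

```python
def class_rates(max_fps: float, unicast_pct: float):
    # Method (a): split a line's theoretical maximum frame rate
    # between the unicast and multicast traffic classes.
    uni = max_fps * unicast_pct / 100.0
    return uni, max_fps - uni

def class_frame_counts(max_fps: float, unicast_pct: float, seconds: float):
    # Method (b): fixed-rate frame counts per class over one trial.
    # Both classes transmit at max_fps, so bandwidth is over-subscribed.
    total = max_fps * seconds
    uni = total * unicast_pct / 100.0
    return uni, total - uni
```

For 64 byte frames this reproduces the numbers cited in the text: 119048/29762 frames per second for method (a), and 3571440/892860 frames over 30 seconds for method (b).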

  The transmission of the frames MUST be set up so that they form a
  deterministic distribution while still maintaining the specified
  bandwidth and transmission rates. See Appendix A for a discussion on
  determining an even distribution.
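One simple way to obtain such a deterministic, even distribution is a credit (error-diffusion) scheduler, sketched below in non-normative Python; 'U' marks a unicast frame slot and 'M' a multicast slot, and exact fractions avoid rounding drift over long transmissions:

```python
from fractions import Fraction

def interleave_schedule(n_frames: int, unicast_pct: int) -> str:
    # Build a deterministic per-frame class schedule that spreads the
    # two classes evenly rather than sending them in back-to-back runs.
    ratio = Fraction(unicast_pct, 100)
    sched, credit = [], Fraction(0)
    for _ in range(n_frames):
        credit += ratio
        if credit >= 1:          # a unicast frame is "due"
            sched.append("U")
            credit -= 1
        else:
            sched.append("M")
    return "".join(sched)
```

For an 80/20 unicast/multicast split, every window of five frame slots carries exactly four unicast frames and one multicast frame.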

  Similar to the Frame loss rate test in RFC 2544, the first trial
  SHOULD be run for the frame rate that corresponds to 100% of the
  maximum rate for the frame size on the input media. Repeat the
  procedure for the rate that corresponds to 90% of the maximum rate
  used and then for 80% of this rate. This sequence SHOULD be continued
  (at reducing 10% intervals) until there are two successive trials in
  which no frames are lost.  The maximum granularity of the trials MUST
  be 10% of the maximum rate; a finer granularity is encouraged.
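The trial sequence above can be sketched as a simple search loop (non-normative Python; run_trial is a stand-in for one complete trial at the given rate and returns the number of frames lost):

```python
def find_zero_loss_rate(run_trial, max_rate: float, step_pct: float = 10.0):
    # Step from 100% of max_rate downward in step_pct decrements until
    # two successive trials show no frame loss; return the rate of the
    # second such trial, or None if no such pair is found.
    successive = 0
    pct = 100.0
    while pct > 0:
        rate = max_rate * pct / 100.0
        if run_trial(rate) == 0:
            successive += 1
            if successive == 2:
                return rate
        else:
            successive = 0
        pct -= step_pct
    return None
```

With a DUT that drops frames above 70% of line rate, the loop passes at 70% and 60% and stops at the 60% trial.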

  Result

  Parameters to be measured SHOULD include the frame loss and percent
  loss for each class of traffic per destination port.  The ratio of
  unicast traffic to multicast traffic MUST be reported.

  In addition, the transmit and receive rates in frames per second for
  each source and destination port for both unicast and multicast
  traffic, together with the number of frames transmitted and received
  per port per class type (unicast and multicast traffic) SHOULD be
  reported.

  4.2    Scaled Group Forwarding Matrix

  Definition

  A table that demonstrates Forwarding Rate as a function of tested
  multicast groups for a fixed number of tested DUT/SUT ports.

  Procedure

  Multicast traffic is sent at a fixed percent of line rate with a
  fixed number of receive ports of the tester at a fixed frame length.

  The receive ports will join an initial number of groups and SHOULD
  continue joining incrementally by 10 multicast groups until a user
  defined maximum is reached.

  Results

  Parameters to be measured SHOULD include the frame loss and percent
  loss per destination port for each multicast group address.

  In addition, the transmit and receive rates in frames per second for
  each source and destination port for all multicast groups, together
  with the number of frames transmitted and received per port per
  multicast group SHOULD be reported.

  4.3    Aggregated Multicast Throughput

  Definition

  The maximum rate at which none of the offered frames to be forwarded
  through N destination interfaces of the same multicast group are
  dropped.

  Procedure

  Multicast traffic is sent at a fixed percent of line rate with a
  fixed number of groups at a fixed frame length for a fixed duration
  of time.

  The initial number of receive ports of the tester will join the
  group(s) and the sender will transmit to the same groups after a
  certain delay (a few seconds).

  Then an incremental or decremental number of receive ports will join
  the same groups and the multicast traffic is sent as stated.

  The receive ports will continue to be added or deleted and the
  multicast traffic sent until a user defined maximum number of ports
  is reached.

  Results

  Parameters to be measured SHOULD include the frame loss and percent
  loss per destination port for each multicast group address.

  In addition, the transmit and receive rates in frames per second for
  each source and destination port for all multicast groups, together
  with the number of frames transmitted and received per port per
  multicast group SHOULD be reported.

  4.4    Encapsulation (Tunneling) Throughput

  This sub-section provides the description of tests that help in
  obtaining throughput measurements when a DUT/SUT or a set of DUTs are
  acting as tunnel endpoints. The following Figure 2 presents the
  scenario for the tests.

   Client A      DUT/SUT A      Network      DUT/SUT B      Client B

                 ----------                   ----------
                 |        |      ------       |        |
    -----(a)  (b)|        |(c)  (      )   (d)|        |(e) (f)-----
    ||||| -----> |        |---->(      )----->|        |-----> |||||
    -----        |        |      ------       |        |       -----
                 |        |                   |        |
                 ----------                   ----------

                                Figure 2
                                --------

  A tunnel is created between DUT/SUT A (the encapsulator) and DUT/SUT
  B (the decapsulator). Client A is acting as a source and Client B is
  the destination. Client B joins a multicast group (for example,
  224.0.1.1) and it sends an IGMP Join message to DUT/SUT B to join
  that group. Client A now wants to transmit some traffic to Client B.
  It will send the multicast traffic to DUT/SUT A which encapsulates
  the multicast frames, sends it to DUT/SUT B which will decapsulate
  the same frames and forward them to Client B.

  4.4.1      Encapsulation Throughput

     Definition

     The maximum rate at which frames offered a DUT/SUT are
     encapsulated and correctly forwarded by the DUT/SUT without loss.

     Procedure

      To test the forwarding rate of the DUT/SUT when it has to go
      through the process of encapsulation, a test port B is inserted
      at the other end of DUT/SUT A (Figure 3) that will receive the
      encapsulated frames and measure the throughput. Also, a test port
      A is used to generate multicast frames that will be passed through
      the tunnel.

      The following is the test setup:

      Test port A     DUT/SUT A              Test port B

                     ---------- (c')      (d')---------
                     |        |-------------->|       |
      -------(a)  (b)|        |               |       |
      ||||||| -----> |        |      ------   ---------
      -------        |        |(c)  ( N/W  )
                     |        |---->(      )
                     ----------      ------
                                   Figure 3
                                   --------

      In Figure 2, a tunnel is created with the local IP address of
      DUT/SUT A as the beginning of the tunnel (point c) and the IP
      address of DUT/SUT B as the end of the tunnel (point d). DUT/SUT B
      is assumed to have the tunneling protocol enabled so that the
      frames can be decapsulated. When the test port B is inserted in
      between the DUT/SUT A and DUT/SUT B (Figure 3), the endpoint of
      tunnel has to be re-configured to be directed to the test port B's
      IP address. For example, in Figure 3, point c' would be  assigned
      as the beginning of the tunnel and point d' as the end of the
      tunnel. The test port B is acting as the end of the tunnel, and it
      does not have to support any tunneling protocol since the frames
      do not have to be decapsulated. Instead, the received encapsulated
      frames are used to calculate the throughput and other necessary
      measurements.
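As a non-normative illustration of what the encapsulator does, the sketch below wraps an inner (multicast) IP packet in an outer IPv4 header using IP-in-IP (protocol 4, per RFC 2003) as an example tunneling scheme; this document does not mandate any particular encapsulation, and the helper names are illustrative:

```python
import struct

def inet_checksum(data: bytes) -> int:
    # RFC 1071 ones'-complement checksum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def ipip_encapsulate(inner: bytes, src: str, dst: str, ttl: int = 64) -> bytes:
    # Wrap an inner IP packet in an outer IPv4 header addressed to the
    # tunnel endpoint; protocol number 4 marks IP-in-IP.
    def addr(a: str) -> bytes:
        return bytes(int(octet) for octet in a.split("."))
    total_len = 20 + len(inner)
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0, total_len,   # version/IHL, TOS, total length
                      0, 0,                 # identification, flags/fragment
                      ttl, 4, 0,            # TTL, protocol=IPIP, checksum=0
                      addr(src), addr(dst))
    csum = inet_checksum(hdr)
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:] + inner
```

Test port B in Figure 3 never decapsulates; it only counts and timestamps such outer packets, which is why it needs no tunneling protocol support.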

      Result

      Parameters to be measured SHOULD include the frame loss and
      percent loss per destination port for each multicast group
      address.

      In addition, the transmit and receive rates in frames per second
      for each source and destination port for all multicast groups,
      together with the number of frames transmitted and received per
      port per multicast group SHOULD be reported.

  4.4.2      Decapsulation Throughput

     Definition

      The maximum rate at which frames offered a DUT/SUT are
      decapsulated and correctly forwarded by the DUT/SUT without loss.

      Procedure

      The decapsulation process returns the tunneled unicast frames back
      to their multicast format. This test measures the throughput of
      the DUT/SUT when it has to perform the process of decapsulation,
      therefore, a test port C is used at the end of the tunnel to
      receive the decapsulated frames (Figure 4).

      Test port A  DUT/SUT A    Test port B     DUT/SUT B   Test port C

                   ----------                 ----------
                   |        |                 |        |
      -----(a)  (b)|        |(c)  ----    (d)|        |(e) (f)-----
      ||||| -----> |        |---->|||| ----->|        |-----> |||||
      -----        |        |     ----       |        |       -----
                   |        |                 |        |
                   ----------                 ----------

                                  Figure 4
                                  --------

      In Figure 4, the encapsulation process takes place in DUT/SUT A.
      This may affect the throughput of DUT/SUT B.  Therefore, two
      test ports should be used to separate the encapsulation and
      decapsulation processes. Client A is replaced with the test port A
      which will generate a multicast frame that will be encapsulated by
      DUT/SUT A. Another test port B is inserted between DUT/SUT A and
      DUT/SUT B that will receive the encapsulated frames and forward it
      to DUT/SUT B. Test port C will receive the decapsulated frames and
      measure the throughput.

      Result

      Throughput
      Parameters to be measured SHOULD include the frame loss and
      percent loss per destination port for each multicast group
      address.

      In addition, the transmit and receive rates in frames per second
      for each source and destination port. The
      results should also contain port for all multicast groups,
      together with the number of frames transmitted and received per port.
      port per multicast groups SHOULD be reported.

  4.4.3      Re-encapsulation Throughput

     Definition

      The maximum rate at which frames of one encapsulated format
      offered a DUT/SUT are converted to another encapsulated format and
      correctly forwarded by the DUT/SUT without loss.

      Procedure

      Re-encapsulation takes place in DUT/SUT B after test port C has
      received the decapsulated frames. These decapsulated frames will
      be re-inserted with a new encapsulation frame and sent to test
      port B which will measure the throughput. See Figure 5.

       Test port A   DUT/SUT A    Test port B   DUT/SUT B  Test port C

                     ----------                 ----------
                     |        |                 |        |
        -----(a)  (b)|        |(c)  ----    (d)|        |(e) (f)-----
        ||||| -----> |        |---->|||| <---->|        |<----> |||||
        -----        |        |     ----       |        |       -----
                     |        |                 |        |
                     ----------                 ----------

                                Figure 5
                                --------
      Result

      Parameters to be measured SHOULD include the frame loss and
      percent loss per destination port for each multicast group
      address.

      In addition, the transmit and receive rates in frames per second
      for each source and destination port for all multicast groups,
      together with the number of frames transmitted and received per
      port per multicast group SHOULD be reported.

5 Forwarding Latency

This section presents methodologies relating to the characterization of
the forwarding latency of a DUT/SUT in a multicast environment. It
extends the concept of latency characterization presented in RFC 2544.

  5.1    Multicast Latency

  Definition

  The set of individual latencies from a single input port on the
  DUT/SUT or SUT to all tested ports belonging to the destination
  multicast group.

  Procedure

  According to RFC 2544, a tagged frame is sent half way through the
  transmission that contains a timestamp used for calculation of
  latency. In the multicast situation, a tagged frame is sent to all
  destinations for each multicast group and latency calculated on a per
  multicast group basis. Note that this test MUST be run using a
  transmission rate that is less than the multicast throughput of the
  DUT/SUT. Also, the test should take into account the DUT's/SUT's need
  to cache the traffic in its IP cache, fastpath cache or shortcut
  tables since the initial part of the traffic will be utilized to
  build these tables.

  Result

  The parameter to be measured is the latency value for each multicast
  group address per destination port. An aggregate latency MAY also be
  reported.

  5.2    Min/Max/Average Multicast Latency

  Definition

  The difference between the maximum latency measurement and the
  minimum latency measurement from the set of latencies produced by the
  Multicast Latency benchmark.

  Procedure

  First determine the throughput for the DUT/SUT at each of the listed
  frame sizes as determined by the forwarding and throughput tests of
  section 4.  Send a stream of frames to a fixed number of multicast
  groups through the DUT at the determined throughput rate.  An
  identifying tag SHOULD be included in all frames to ensure proper
  identification of the transmitted frame on the receive side, the
  type of tag being implementation dependent.

  Latencies for each transmitted frame are calculated based on the
  description of latencies in RFC 2544.  The average latency is the
  total of all accumulated latency values divided by the total number
  of those values.  The minimum latency is the smallest latency and
  the maximum latency is the largest latency of all accumulated
  latency values.
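The reduction described above can be sketched as follows (non-normative Python, applied per multicast group per destination port; the max-min "range" field corresponds to the difference cited in the definition):

```python
def latency_stats(latencies):
    # Reduce per-frame latencies (seconds) for one (group, port) pair
    # to the reported minimum, maximum, average, and max-min range.
    if not latencies:
        raise ValueError("no latency samples")
    lo, hi = min(latencies), max(latencies)
    return {"min": lo, "max": hi,
            "avg": sum(latencies) / len(latencies),
            "range": hi - lo}
```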

  Results

  The parameters to be measured are the minimum, maximum and average
  latency values for each multicast group address per destination port.

6 Overhead

This section presents methodology relating to the characterization of
the overhead delays associated with explicit operations found in
multicast environments.

  6.1    Group Join Delay

  Definition

  The time duration it takes a DUT/SUT to start forwarding multicast
  packets from the time a successful IGMP group membership report has
  been issued to the DUT/SUT.

  Procedure

  Traffic is sent on the source port at the same time as the IGMP Join
  Group message is transmitted from the destination ports.  The join
  delay is the difference in time from when the IGMP Join is sent
  (timestamp A) and the first frame is forwarded to a receiving member
  port (timestamp B).

            Group Join delay = timestamp B - timestamp A

  One of the keys is to transmit at the fastest rate the DUT/SUT can
  handle multicast frames.  This is to get the best resolution and the
  least margin of error in the Join Delay.

  However, you do not want to transmit the frames so fast that frames
  are dropped by the DUT/SUT.  Traffic should be sent at the throughput
  rate determined by the forwarding tests of section 4.
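The join delay computation, applied per (group, port) pair, can be sketched as follows (non-normative Python; timestamps are assumed to come from a single tester clock, and a missing receive timestamp means the DUT/SUT never forwarded to that member port):

```python
def join_delays(join_ts: dict, first_rx_ts: dict) -> dict:
    # join_ts:     (group, port) -> time the IGMP Join was sent (A)
    # first_rx_ts: (group, port) -> time the first frame was forwarded
    #              to that member port (B); absent if never received
    delays = {}
    for key, t_a in join_ts.items():
        t_b = first_rx_ts.get(key)
        delays[key] = None if t_b is None else t_b - t_a
    return delays
```

The same shape applies to the Group Leave Delay of section 6.2, with timestamp B taken from the last frame forwarded after the Leave is sent.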

  Results

  The parameter to be measured is the join delay time for each
  multicast group address per destination port.  In addition, the
  number of frames transmitted and received and percent loss may be
  reported.

  6.2    Group Leave Delay

  Definition

  The time duration it takes a DUT/SUT to cease forwarding multicast
  packets after a corresponding IGMP "Leave Group" message has been
  successfully offered to the DUT/SUT.

  Procedure

  Traffic is sent on the source port at the same time as the IGMP Leave
  Group messages are transmitted from the destination ports.  The leave
  delay is the difference in time from when the IGMP Leave is sent
  (timestamp A) and the last frame is forwarded to a receiving member
  port (timestamp B).

            Group Leave delay = timestamp B - timestamp A

  One of the keys is to transmit at the fastest rate the DUT/SUT can
  handle multicast frames.  This is to get the best resolution and
  least margin of error in the Leave Delay.  However, you do not want
  to transmit the frames so fast that frames are dropped by the
  DUT/SUT.  Traffic should be sent at the throughput rate determined by
  the forwarding tests of section 4.

  Result

  The parameter to be measured is the leave delay time for each
  multicast group address per destination port.  In addition, the
  number of frames transmitted and received and percent loss may be
  reported.

7 Capacity

This section offers terms relating to the identification of multicast
group limits of a DUT/SUT.

  7.1    Multicast Group Capacity

  Definition

  The maximum number of multicast groups a DUT/SUT can support while
  maintaining the ability to forward multicast frames to all multicast
  groups registered to that DUT/SUT.

  Procedure

  One or more receiving ports will join an initial number of groups.

  Then after a delay (enough time for all ports to join) the source
  port will transmit to each group at a transmission rate that the
  DUT/SUT can handle without dropping IP Multicast frames.

  If all frames sent are forwarded by the DUT/SUT and received, the
  test iteration is said to pass at the current capacity.

  If the iteration passes at the capacity, the test will add a user
  defined incremental value of groups.  The new group level will be
  tested as stated above.

  Once the test fails at a capacity, the test should be re-run with
  the last group level that passed in order to verify the capacity.

  Results

  The parameter to be measured is the total number of group addresses
  that were successfully forwarded with no loss.
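The iterative procedure above can be sketched as follows (non-normative Python; run_iteration is a stand-in for one complete join-and-transmit iteration at the given group count, returning True when no frames are lost):

```python
def group_capacity(run_iteration, initial: int, increment: int,
                   maximum: int):
    # Grow the number of joined groups until an iteration drops frames
    # or the user-defined maximum is reached; return the largest group
    # count at which all frames were forwarded, or None if none passed.
    passed = None
    n = initial
    while n <= maximum:
        if not run_iteration(n):
            break
        passed = n
        n += increment
    return passed
```

The resolution of the reported capacity is the increment value; a re-run at the last passing level, as described above, confirms the result.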

8 Interaction

Network forwarding devices are generally required to provide more
functionality than just the forwarding of traffic.  Moreover, network
forwarding devices may be asked to provide those functions in a variety
of environments.  This section offers terms to assist in the
characterization of DUT/SUT behavior in consideration of potentially
interacting factors.

  8.1    Forwarding Burdened Multicast Latency

  The Multicast Latency metrics can be influenced by forcing the
  DUT/SUT to perform extra processing of packets while multicast
  traffic is being forwarded for latency measurements. In this test, a
  set of ports on the tester will be designated to be source and
  destination similar to the generic IP Multicast test setup. In
  addition to this setup, another set of ports will be selected to
  transmit some multicast traffic that is destined to multicast group
  addresses that have not been joined by these additional set of ports.

  For example, if ports 1, 2, 3, and 4 form the burdened response setup
  (setup A) which is used to obtain the latency metrics and ports 5, 6,
  7, and 8 form the non-burdened response setup (setup B) which will
  afflict the burdened response setup, then setup B traffic will be
  sent to multicast group addresses not joined by the ports in this
  setup.  By
  sending such multicast traffic, the DUT/SUT will perform a lookup on
  the packets that will affect the processing of setup A traffic.

  8.2    Forwarding Burdened Group Join Delay

  The port configuration in this test is similar to the one described
  in section 8.1, but in this test, the multicast traffic is not sent
  by the ports in setup B. In this test, the setup A traffic must be
  influenced in such a way that will affect the DUT's/SUT's ability to
  process Group Join messages. Therefore, in this test, the ports in
  setup B will send a set of IGMP Group Join messages while the ports
  in setup A are also joining their own set of group addresses. Since
  the two sets of group addresses are independent of each other, the
  group join delay for setup A may differ from the case in which no
  other group addresses were being joined.
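  The concurrent join plan described above can be sketched as follows.
  The port numbers and group addresses are illustrative assumptions,
  not values prescribed by this document.

```python
# Hypothetical sketch of the burdened group-join plan: setup B ports
# issue IGMP joins for their own, independent group set while setup A
# ports join theirs.  Ports and addresses are illustrative assumptions.

SETUP_A_JOINS = {p: f"224.1.1.{p}" for p in (1, 2, 3, 4)}   # setup A joins
SETUP_B_JOINS = {p: f"224.1.2.{p}" for p in (5, 6, 7, 8)}   # concurrent burden

def concurrent_join_events(*join_maps):
    """Merge per-port joins into a single list sent at the same time."""
    return [(port, group) for m in join_maps for port, group in m.items()]

events = concurrent_join_events(SETUP_A_JOINS, SETUP_B_JOINS)

# The two group sets are independent, so any increase in setup A's group
# join delay is attributable to the concurrent setup B joins.
assert set(SETUP_A_JOINS.values()).isdisjoint(SETUP_B_JOINS.values())
```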

9 Security Considerations

As this document is solely for the purpose of providing metric
methodology and describes neither a protocol nor a protocol's
implementation, there are no security considerations associated with
this document.

10 References

[Br91] Bradner, S., "Benchmarking Terminology for Network
       Interconnection Devices", RFC 1242, July 1991.

[Br96] Bradner, S., and J. McQuaid, "Benchmarking Methodology for
       Network Interconnect Devices", RFC 2544, March 1999.

[Br97] Bradner, S., "Key words for use in RFCs to Indicate Requirement
       Levels", RFC 2119, March 1997.

[Du98] Dubray, K., "Terminology for IP Multicast Benchmarking", RFC
       2432, October 1998.

[Hu95] Huitema, C.  "Routing in the Internet."  Prentice-Hall, 1995.

[Ka98] Kosiur, D., "IP Multicasting: the Complete Guide to Interactive
       Corporate Networks", John Wiley & Sons, Inc., 1998.

[Ma98] Mandeville, R., "Benchmarking Terminology for LAN Switching
       Devices", RFC 2285, February 1998.

[Mt98] Maufer, T.  "Deploying IP Multicast in the Enterprise." Prentice-
       Hall, 1998.

[Se98] Semeria, C. and Maufer, T.  "Introduction to IP Multicast
       Routing."  http://www.3com.com/nsc/501303.html  3Com Corp., 1998.

11 Authors' Addresses

Hardev Soor
Ixia Communications
4505 Las Virgenes Road, Suite 209
Calabasas, CA  91302
USA

Phone: 818 871 1800
EMail: hardev@ixia.com

Debra Stopp
Ixia Communications
4505 Las Virgenes Road, Suite 209
Calabasas, CA  91302
USA

Phone: 818 871 1800
EMail: debby@ixia.com

Ralph Daniels
Netcom Systems
948 Loop Road
Clayton, NC 27520
USA

Phone: 919 550 9475
EMail: Ralph_Daniels@NetcomSystems.com

Appendix A: Determining an even distribution

A.1  Scope Of This Appendix

This appendix discusses the suggested approach to configuring the
deterministic distribution methodology for tests that involve both
multicast and unicast traffic classes in an aggregated traffic stream.
As such, this appendix MUST NOT be read as an amendment to the
methodology described in the body of this document but as a guide to
testing practice.

It is important to understand and fully define the distribution of
frames among all multicast and unicast destinations.  If the
distribution is not well defined or understood, the throughput and
forwarding metrics are not meaningful.

In a homogeneous environment, a large, single burst of multicast frames
may be followed by a large burst of unicast frames. This is a very
different distribution than that of a non-homogeneous environment, where
the multicast and unicast frames are intermingled
throughout the entire transmission.

The recommended distribution is that of the non-homogeneous environment
because it more closely represents a real-world scenario. The
distribution is modeled by calculating the number of multicast frames
per destination port as a burst, then calculating the number of unicast
frames to transmit as a percentage of the total frames transmitted. The
overall effect of the distribution is small bursts of multicast frames
intermingled with small bursts of unicast frames.

Example

This example illustrates the distribution algorithm for a 100 Mbps rate.

Frame size = 64
Duration of test = 30 seconds
Intended Load (ILOAD) = 100% of maximum rate
Mapping for unicast traffic:    Port 1 to Port 2
                                Port 3 to port 4
Mapping for multicast traffic:  Port 1 to Ports 2,3,4
Number of Multicast group addresses per destination port = 3
Multicast groups joined by Port 2: 224.0.1.27
                                   224.0.1.28
                                   224.0.1.29
Multicast groups joined by Port 3: 224.0.1.30
                                   224.0.1.31
                                   224.0.1.32

Multicast groups joined by Port 4: 224.0.1.33
                                   224.0.1.34
                                   224.0.1.35

Percentage of Unicast frames = 20

Percentage of Multicast frames = 80
Total number of frames to be transmitted = 148810 fps * 30 sec
                                         = 4464300 frames
Number of unicast frames = 20/100 * 4464300 = 892860 frames
Number of multicast frames = 80/100 * 4464300 = 3571440 frames

Unicast burst size = 20 * 9 = 180
Multicast burst size = 80 * 9 = 720
Loop counter = 4464300 / 900 = 4960.3333 (round it off to 4960)

Therefore, the actual number of frames that will be transmitted:
  Unicast frames = 4960 * 180 = 892800 frames
  Multicast frames = 4960 * 720 = 3571200 frames

The following pattern will be established:

UUUMMMMMMMMMMMMUUUMMMMMMMMMMMMUUUMMMMMMMMMMMMUUUMMMMMMMMMMMM

where     U represents 60 unicast frames (UUU = 180 frames)
          M represents 60 multicast frames (MMMMMMMMMMMM = 720 frames)
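The arithmetic in the example above can be sketched as follows.  This
is an illustrative aid, not normative text; the assumed per-frame
overhead is the Ethernet 8-byte preamble plus 12-byte interframe gap,
and the burst scaling factor of 9 matches the example's 3 destination
ports with 3 groups each.

```python
# Illustrative sketch of the deterministic distribution arithmetic.
# Per-frame overhead (preamble + interframe gap) and the burst factor
# of 9 are assumptions matching the example above.

def distribution(frame_size=64, duration_s=30, link_bps=100_000_000,
                 unicast_pct=20, multicast_pct=80, burst_unit=9):
    bits_per_frame = (frame_size + 8 + 12) * 8       # wire time per frame
    max_fps = round(link_bps / bits_per_frame)       # 148810 fps at 64 bytes
    total = max_fps * duration_s                     # 4464300 frames offered
    uni_burst = unicast_pct * burst_unit             # 180-frame unicast bursts
    mc_burst = multicast_pct * burst_unit            # 720-frame multicast bursts
    loops = total // (uni_burst + mc_burst)          # 4960 burst pairs
    return {"total": total, "loops": loops,
            "unicast_frames": loops * uni_burst,     # 892800 transmitted
            "multicast_frames": loops * mc_burst}    # 3571200 transmitted
```

With the defaults above, the sketch reproduces the example's figures:
4464300 total frames, of which 892800 unicast and 3571200 multicast
frames are actually transmitted over 4960 loops.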

12 Full Copyright Statement

Copyright (C) The Internet Society (date). All Rights Reserved.

This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it
or assist in its implementation may be prepared, copied, published
and distributed, in whole or in part, without restriction of any
kind, provided that the above copyright notice and this paragraph are
included on all such copies and derivative works.  However, this
document itself may not be modified in any way, such as by removing
the copyright notice or references to the Internet Society or other
Internet organizations, except as needed for the purpose of
developing Internet standards in which case the procedures for
copyrights defined in the Internet Standards process must be
followed, or as required to translate it into languages other than
English.