Benchmarking Methodology Working Group                         C. Davids
Internet-Draft                          Illinois Institute of Technology
Intended status: Informational                                V. Gurbani
Expires: July 12, 2013                   Bell Laboratories, Alcatel-Lucent
                                                             S. Poretsky
                                                     Allot Communications
                                                         January 8, 2013

          Methodology for Benchmarking SIP Networking Devices
                   draft-ietf-bmwg-sip-bench-meth-08

Abstract

   This document describes the methodology for benchmarking Session
   Initiation Protocol (SIP) performance as described in the SIP
   benchmarking terminology document.  The methodology and terminology
   are to be used for benchmarking signaling plane performance with
   varying signaling and media load.  Both scale and session
   establishment rate are measured as aspects of signaling plane
   performance.  The SIP Devices to be benchmarked may be a single
   device under test (DUT) or a system under
skipping to change at page 1, line 42
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on July 12, 2013.
Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
skipping to change at page 3, line 27
     4.7.   Attempted Sessions per Second  . . . . . . . . . . . . .   6
     4.8.   Stress Testing . . . . . . . . . . . . . . . . . . . . .   6
     4.9.   Benchmarking algorithm . . . . . . . . . . . . . . . . .   6
   5.  Reporting Format . . . . . . . . . . . . . . . . . . . . . . .  9
     5.1.   Test Setup Report  . . . . . . . . . . . . . . . . . . .   9
     5.2.   Device Benchmarks for IS . . . . . . . . . . . . . . . .  10
     5.3.   Device Benchmarks for NS . . . . . . . . . . . . . . . .  10
   6.  Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . 10
     6.1.   Baseline Session Establishment Rate of the test bed  . .  10
     6.2.   Session Establishment Rate without media . . . . . . . .  11
     6.3.   Session Establishment Rate with Media not on DUT/SUT . .  11
     6.4.   Session Establishment Rate with Media on DUT/SUT . . . .  12
     6.5.   Session Establishment Rate with Loop Detection Enabled .  13
     6.6.   Session Establishment Rate with Forking  . . . . . . . .  13
     6.7.   Session Establishment Rate with Forking and Loop
            Detection  . . . . . . . . . . . . . . . . . . . . . . .  14
     6.8.   Session Establishment Rate with TLS Encrypted SIP  . . .  14
     6.9.   Session Establishment Rate with IPsec Encrypted SIP  . .  15
     6.10.  Session Establishment Rate with SIP Flooding . . . . . .  16
     6.11.  Maximum Registration Rate  . . . . . . . . . . . . . . .  16
     6.12.  Maximum Re-Registration Rate . . . . . . . . . . . . . .  16
     6.13.  Maximum IM Rate  . . . . . . . . . . . . . . . . . . . .  17
skipping to change at page 4, line 40
   The SIP Devices to be benchmarked may be a single device under test
   (DUT) or a system under test (SUT).  The DUT is a SIP Server, which
   may be any [RFC3261] conforming device.  The SUT can be any device
   or group of devices containing RFC 3261 conforming functionality
   along with Firewall and/or NAT functionality.  This enables
   benchmarks to be obtained and compared for different types of
   devices such as a SIP Proxy Server, an SBC, or a SIP proxy server
   paired with a media relay or Firewall/NAT device.  SIP Associated
   Media benchmarks can also be made when testing SUTs.

   The test cases provide benchmark metrics of Registration Rate, SIP
   Session Establishment Rate, Session Capacity, and IM Rate.  These
   can be benchmarked with or without Associated Media.  Some cases are
   also included to cover Forking, Loop Detection, Encrypted SIP, and
   SIP Flooding.  The test topologies that can be used are described in
   the Test Setup section.  Topologies are provided for benchmarking of
   a DUT or SUT.  Benchmarking with Associated Media can be performed
   when using a SUT.

   SIP permits a wide range of configuration options that are explained
   in Section 4 and Section 2 of [I-D.sip-bench-term].  Benchmark
   metrics may be impacted by Associated Media.  The selected values
   for Session Duration and Media Streams per Session enable benchmark
   metrics to be benchmarked without Associated Media.  Session Setup
   Rate may be impacted by the selected value for Maximum Sessions
   Attempted.  The benchmark for Session Establishment Rate is measured
   with a fixed value for Maximum Session Attempts.

   Finally, the overall value of these tests is to serve as a
   comparison function between multiple SIP implementations.  One way
   to use these tests is to derive benchmarks with SIP devices from
   Vendor-A, derive a new set of benchmarks with similar SIP devices
   from Vendor-B, and compare the results of Vendor-A and Vendor-B.
   This document does not make any claims on the interpretation of
   such results.

3.  Benchmarking Topologies

skipping to change at page 7, line 45
     ; ---- Parameters of test, adjust as needed
     t := 5000  ; local maximum; used to figure out largest
                ; value
     T := 50000 ; global maximum; once largest value has been
                ; figured out, pump this many requests before calling
                ; the test a success
     m := {...} ; other attributes that affect testing, such
                ; as media streams, etc.
     s := 100   ; initial session attempt rate (in sessions/sec)
     G := 5     ; granularity of results - the margin of error in sps
     C := 0.05  ; calibration amount: how much to back down if we
                ; have found candidate s but cannot send at rate s for
                ; time T without failures
     ; ---- End of parameters of test

     ; ---- Initialization of flags, candidate values and upper bounds
     f := false ; indicates that you had a success after the upper limit
     F := false ; indicates that test is done
     c := 0     ; indicates that we have found an upper limit

     proc main
         find_largest_value ; First, figure out the largest value.

         ; Now that the largest value (saved in s) has been figured
         ; out, use it for sending out s requests/s and send out T
         ; requests.
skipping to change at page 8, line 38
         do {
             send_traffic(s, m, t)  ; Send s requests/sec with m
                                    ; characteristics until t requests
                                    ; have been sent
             if (all requests succeeded) {
                 s' := s            ; save candidate value of metric
                 if (c == 0) {
                     s := s + (0.5 * s)
                 } else if ((c == 1) && ((s'' - s') > 2*G)) {
                     s := s + (0.5 * (s'' - s))
                 } else if ((c == 1) && ((s'' - s') <= 2*G)) {
                     f := true
                 }
             } else if (one or more requests fail) {
                 c := 1             ; we have found an upper bound for
                                    ; the metric
                 s'' := s           ; save new upper bound
                 s := s - (0.5 * (s - s'))
             }
         } while (f == false)
     end proc
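   As a cross-check, the search above can be sketched in executable
   form.  This is a minimal sketch, not the normative algorithm: the
   send_traffic callable is a hypothetical hook standing in for the
   Tester, returning True when all t requests sent at the given rate
   succeed, and s, G, s' and s'' play the same roles as in the
   pseudocode.  As in the pseudocode, the search assumes some failing
   rate exists, so an upper bound is eventually found.

```python
def calibrate(send_traffic, s=100.0, G=5.0):
    """Sketch of the Section 4.9 search for the largest attempt rate.

    send_traffic(rate): hypothetical Tester hook; True iff all t
    requests sent at `rate` sessions/sec succeeded.
    s: initial session attempt rate.  G: margin of error (sps).
    """
    s_cand = None   # s'  : last known-good rate (candidate metric)
    s_upper = None  # s'' : smallest known-failing rate (upper bound)
    while True:
        if send_traffic(s):
            s_cand = s
            if s_upper is None:
                s = s + 0.5 * s                  # no failure yet: grow rate
            elif (s_upper - s_cand) > 2 * G:
                s = s + 0.5 * (s_upper - s)      # bisect toward upper bound
            else:
                return s_cand                    # bracketed within 2*G: done
        else:
            s_upper = s                          # found/tightened upper bound
            s = s - 0.5 * (s - (s_cand or 0.0))  # back off toward candidate
```

   For instance, against a device whose true maximum rate is 437
   sessions/sec, the sketch returns a candidate no higher than 437 and
   within 2*G of it.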
5.  Reporting Format

5.1.  Test Setup Report

      SIP Transport Protocol = ___________________________
        (valid values: TCP|UDP|TLS|SCTP|specify-other)
skipping to change at page 10, line 10
   trivial benchmarks, as all attempt rates will lead to a failure
   after the first attempt.

   Note 3: When the Authentication Option is "on" the DUT/SUT uses two
   transactions instead of one when it is establishing a session or
   accomplishing a registration.  The first transaction ends with the
   401 or 407 response.  The second ends with the 200 OK or another
   failure message.  A Test Organization interested in knowing how many
   times the EA intended to send a REGISTER, as distinct from how many
   times the EA actually sent one, may wish to record the following
   data as well:

      Number of responses of the following type:
         401: _____________ (if authentication turned on; N/A
              otherwise)
         407: _____________ (if authentication turned on; N/A
              otherwise)
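   These counts let the actual number of REGISTER requests be
   recovered: with authentication on, each request that drew a 401 or
   407 challenge is re-sent with credentials, so every recorded
   challenge adds one REGISTER beyond those the EA intended to send.  A
   small illustrative helper (hypothetical, not part of the reporting
   format) makes the arithmetic explicit:

```python
def registers_actually_sent(intended, n_401, n_407):
    """With the Authentication Option "on", each REGISTER that drew a
    401 or 407 challenge is retried with credentials, so the EA emits
    one additional REGISTER per challenge recorded above."""
    return intended + n_401 + n_407
```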
5.2.  Device Benchmarks for IS

      Registration Rate = _______________________________
        (registrations per second)
      Re-registration Rate = ____________________________
        (registrations per second)
      Session Capacity = _________________________________
        (sessions)
      Session Overload Capacity = ________________________
skipping to change at page 11, line 47
          7 in [I-D.sip-bench-term].
      3.  Set media streams per session to 0.
      4.  Execute benchmarking algorithm as defined in Section 4.9 to
          get the session establishment rate.  This rate MUST be
          recorded using any pertinent parameters as shown in the
          reporting format of Section 5.1.

   Expected Results: This is the scenario to obtain the maximum Session
      Establishment Rate of the DUT/SUT.
6.3.  Session Establishment Rate with Media not on DUT/SUT

   Objective:
      To benchmark the Session Establishment Rate of the DUT/SUT with
      zero failures when Associated Media is included in the benchmark
      test but the media is not running through the DUT/SUT.

   Procedure:
      1.  If the DUT is being benchmarked as a proxy or B2BUA,
          configure the DUT in the test topology shown in Figure 7 in
          [I-D.sip-bench-term].
      2.  Configure a SUT according to the test topology shown in
          Figure 8 in [I-D.sip-bench-term].
      3.  Set media streams per session to 1.
      4.  Execute benchmarking algorithm as defined in Section 4.9 to
          get the session establishment rate with media.  This rate
          MUST be recorded using any pertinent parameters as shown in
          the reporting format of Section 5.1.

   Expected Results: Session Establishment Rate results obtained with
      Associated Media with any number of media streams per SIP
      session are expected to be identical to the Session
      Establishment Rate results obtained without media in the case
      where the server is running on a platform separate from the
      platform on which the Media Relay, NAT or Firewall is running.
6.4.  Session Establishment Rate with Media on DUT/SUT

   Objective:
      To benchmark the Session Establishment Rate of the DUT/SUT with
      zero failures when Associated Media is included in the benchmark
      test and the media is running through the DUT/SUT.

   Procedure:
      1.  If the DUT is being benchmarked as a user agent client or a
          user agent server, configure the DUT in the test topology
          shown in Figure 3 or Figure 4 of [I-D.sip-bench-term].
          Alternatively, if the DUT is being benchmarked as a B2BUA,
          configure the DUT in the test topology shown in Figure 6 in
          [I-D.sip-bench-term].
      2.  Configure a SUT according to the test topology shown in
          Figure 9 in [I-D.sip-bench-term].
      3.  Set media streams per session to 1.
      4.  Execute benchmarking algorithm as defined in Section 4.9 to
          get the session establishment rate with media.  This rate
          MUST be recorded using any pertinent parameters as shown in
          the reporting format of Section 5.1.

   Expected Results: Session Establishment Rate results obtained with
      Associated Media may be lower than those obtained without media
      in the case where the server and the NAT, Firewall or Media
      Relay are running on the same platform.
6.5.  Session Establishment Rate with Loop Detection Enabled

   Objective:
      To benchmark the Session Establishment Rate of the DUT/SUT with
      zero failures when the Loop Detection option is enabled and no
      media streams are present.

   Procedure:
      1.  If the DUT is being benchmarked as a proxy or B2BUA, and loop
skipping to change at page 14, line 45
      6.  Execute benchmarking algorithm as defined in Section 4.9 to
          get the session establishment rate with forking and loop
          detection.  This rate MUST be recorded using any pertinent
          parameters as shown in the reporting format of Section 5.1.

   Expected Results: Session Establishment Rate results obtained with
      Forking and Loop Detection may be lower than those obtained with
      only Forking or Loop Detection enabled.
6.8.  Session Establishment Rate with TLS Encrypted SIP

   Objective:
      To benchmark the Session Establishment Rate of the DUT/SUT with
      zero failures when using TLS encrypted SIP signaling.

   Procedure:
      1.  If the DUT is being benchmarked as a proxy or B2BUA, then
          configure the DUT in the test topology shown in Figure 5 in
          [I-D.sip-bench-term].
      2.  Configure a SUT according to the test topology shown in
          Figure 8 of [I-D.sip-bench-term].
      3.  Set media streams per session to 0 (media is not used in
          this test).
      4.  Configure the Tester to enable TLS over the transport being
          benchmarked.  Make a note of the transport when compiling
          results; the test may need to be run once for each transport
          of interest.
      5.  Execute benchmarking algorithm as defined in Section 4.9 to
          get the session establishment rate with encryption.  This
          rate MUST be recorded using any pertinent parameters as
          shown in the reporting format of Section 5.1.

   Expected Results: Session Establishment Rate results obtained with
      TLS Encrypted SIP may be lower than those obtained with
      plaintext SIP.
6.9.  Session Establishment Rate with IPsec Encrypted SIP

   Objective:
      To benchmark the Session Establishment Rate of the DUT/SUT with
      zero failures when using IPsec Encrypted SIP signaling.

   Procedure:
      1.  If the DUT is being benchmarked as a proxy or B2BUA, then
          configure the DUT in the test topology shown in Figure 5 in
          [I-D.sip-bench-term].
      2.  Configure a SUT according to the test topology shown in
          Figure 8 of [I-D.sip-bench-term].
      3.  Set media streams per session to 0 (media is not used in
          this test).
      4.  Configure the Tester for IPsec.
      5.  Execute benchmarking algorithm as defined in Section 4.9 to
          get the session establishment rate with encryption.  This
          rate MUST be recorded using any pertinent parameters as
          shown in the reporting format of Section 5.1.

   Expected Results: Session Establishment Rate results obtained with
      IPsec Encrypted SIP may be lower than those obtained with
      plaintext SIP.
skipping to change at page 16, line 18
      To benchmark the Session Establishment Rate of the SUT with zero
      failures when SIP Flooding is occurring.

   Procedure:
      1.  If the DUT is being benchmarked as a proxy or B2BUA, then
          configure the DUT in the test topology shown in Figure 5 in
          [I-D.sip-bench-term].
      2.  Configure a SUT according to the test topology shown in
          Figure 8 of [I-D.sip-bench-term].
      3.  Set media streams per session to 0.
      4.  Set s to a high value (e.g., 500) (cf. Section 4.9).
      5.  Execute benchmarking algorithm as defined in Section 4.9 to
          get the session establishment rate with flooding.  This rate
          MUST be recorded using any pertinent parameters as shown in
          the reporting format of Section 5.1.

   Expected Results: Session Establishment Rate results obtained with
      SIP Flooding may be degraded.

6.11.  Maximum Registration Rate
skipping to change at page 17, line 16
      zero failures.

   Procedure:
      1.  If the DUT is being benchmarked as a proxy or B2BUA, then
          configure the DUT in the test topology shown in Figure 5 in
          [I-D.sip-bench-term].
      2.  Configure a SUT according to the test topology shown in
          Figure 8 of [I-D.sip-bench-term].
      3.  First, execute the test detailed in Section 6.11 to register
          the endpoints with the registrar.
      4.  At least 5 minutes, but no more than 10 minutes, after Step
          2 has been performed, execute the test detailed in Section
          6.11 again (this will count as a re-registration).
      5.  Execute benchmarking algorithm as defined in Section 4.9 to
          get the maximum re-registration rate.  This rate MUST be
          recorded using any pertinent parameters as shown in the
          reporting format of Section 5.1.

   Expected Results: The rate should be at least equal to, but not
      more than, the result of Section 6.11.
6.13.  Maximum IM Rate

   Objective:

skipping to change at page 19, line 7
      3.  Set the Session Duration to be a value greater than T.
      4.  Execute benchmarking algorithm as defined in Section 4.9 to
          get the baseline session establishment rate.  This rate MUST
          be recorded using any pertinent parameters as shown in the
          reporting format of Section 5.1.
      5.  The Session Capacity is the product of T and the Session
          Establishment Rate.

   Expected Results: Session Capacity results obtained with Associated
      Media with any number of media streams per SIP session will be
      less than the Session Capacity results obtained without media.
6.16.  Session Capacity with Media and a Media Relay/NAT and/or
       Firewall

   Objective:
      To benchmark the Session Establishment Rate of the SUT with
      Associated Media.

   Procedure:
      1.  Configure the SUT as shown in Figure 7 or Figure 10 in
          [I-D.sip-bench-term].
      2.  Set media streams per session to 1.
      3.  Execute benchmarking algorithm as defined in Section 4.9 to
          get the session establishment rate with media.  This rate
          MUST

skipping to change at page 19, line 47
   Security threats and how to counter them in SIP and the media layer
   are discussed in RFC 3261, RFC 3550, RFC 3711, and various other
   drafts.  This document attempts to formalize a common set of
   methodologies for benchmarking performance of SIP devices in a lab
   environment.

9.  Acknowledgments

   The authors would like to thank Keith Drage and Daryl Malas for
   their contributions to this document.  Dale Worley provided an
   extensive review that led to improvements in the document.  We are
   grateful to Barry Constantine for providing valuable comments
   during the document's WGLC.
10.  References

10.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
              for Network Interconnect Devices", RFC 2544, March 1999.

   [I-D.sip-bench-term]
              Davids, C., Gurbani, V., and S. Poretsky, "SIP
              Performance Benchmarking Terminology",
              draft-ietf-bmwg-sip-bench-term-08 (work in progress),
              January 2013.

10.2.  Informative References

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
              A., Peterson, J., Sparks, R., Handley, M., and E.
              Schooler, "SIP: Session Initiation Protocol", RFC 3261,
              June 2002.

Authors' Addresses