Benchmarking Methodology Working Group                         C. Davids
Internet-Draft                          Illinois Institute of Technology
Intended status: Informational                                V. Gurbani
Expires: November 29, 2014                             Bell Laboratories,
                                                           Alcatel-Lucent
                                                              S. Poretsky
                                                      Allot Communications
                                                              May 28, 2014


Methodology for Benchmarking Session Initiation Protocol (SIP) Devices:
                  Basic session setup and registration
                    draft-ietf-bmwg-sip-bench-meth-10
Abstract

   This document provides a methodology for benchmarking the Session
   Initiation Protocol (SIP) performance of devices.  Terminology
   related to benchmarking SIP devices is described in the companion
   terminology document.  Using these two documents, benchmarks can be
   obtained and compared for different types of devices such as SIP
   Proxy Servers, Registrars and Session Border Controllers.  The term
   "performance" in this context means the capacity of the device-
   under-test (DUT) to process SIP messages.  Media streams are used
   only to study how they impact the signaling behavior.  The intent of
   the two documents is to provide a normalized set of tests that will
   enable an objective comparison of the capacity of SIP devices.  Test
   setup parameters and a methodology are necessary because SIP allows
   a wide range of configuration and operational conditions that can
   influence performance benchmark measurements.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   This Internet-Draft will expire on November 29, 2014.
Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
Table of Contents

   1.  Terminology . . . . . . . . . . . . . . . . . . . . . . . . .  4
   2.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .  4
   3.  Benchmarking Topologies . . . . . . . . . . . . . . . . . . .  5
   4.  Test Setup Parameters . . . . . . . . . . . . . . . . . . . .  6
     4.1.  Selection of SIP Transport Protocol . . . . . . . . . . .  6
     4.2.  Signaling Server  . . . . . . . . . . . . . . . . . . . .  6
     4.3.  Associated Media  . . . . . . . . . . . . . . . . . . . .  7
     4.4.  Selection of Associated Media Protocol  . . . . . . . . .  7
     4.5.  Number of Associated Media Streams per SIP Session  . . .  7
     4.6.  Session Duration  . . . . . . . . . . . . . . . . . . . .  7
     4.7.  Attempted Sessions per Second (sps) . . . . . . . . . . .  7
     4.8.  Benchmarking algorithm  . . . . . . . . . . . . . . . . .  7
   5.  Reporting Format  . . . . . . . . . . . . . . . . . . . . . . 10
     5.1.  Test Setup Report . . . . . . . . . . . . . . . . . . . . 10
     5.2.  Device Benchmarks for IS  . . . . . . . . . . . . . . . . 10
     5.3.  Device Benchmarks for NS  . . . . . . . . . . . . . . . . 10
   6.  Test Cases  . . . . . . . . . . . . . . . . . . . . . . . . . 10
     6.1.  Baseline Session Establishment Rate of the test bed . . . 11
     6.2.  Session Establishment Rate without media  . . . . . . . . 11
     6.3.  Session Establishment Rate with Media not on DUT  . . . . 11
     6.4.  Session Establishment Rate with Media on DUT  . . . . . . 12
     6.5.  Session Establishment Rate with TLS Encrypted SIP . . . . 12
     6.6.  Session Establishment Rate with IPsec Encrypted SIP . . . 13
     6.7.  Registration Rate . . . . . . . . . . . . . . . . . . . . 13
     6.8.  Re-Registration Rate  . . . . . . . . . . . . . . . . . . 14
   7.  IANA Considerations . . . . . . . . . . . . . . . . . . . . . 14
   8.  Security Considerations . . . . . . . . . . . . . . . . . . . 14
   9.  Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 14
   10. References  . . . . . . . . . . . . . . . . . . . . . . . . . 15
     10.1. Normative References  . . . . . . . . . . . . . . . . . . 15
     10.2. Informative References  . . . . . . . . . . . . . . . . . 15
   Appendix A.  R code to simulate benchmarking algorithm . . . . . 15
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . . 17
1.  Terminology

   In this document, the key words "MUST", "MUST NOT", "REQUIRED",
   "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT
   RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as
   described in BCP 14, conforming to [RFC2119], and indicate
   requirement levels for compliant implementations.

   RFC 2119 defines the use of these key words to help make the intent
   Initiation Protocol (SIP) performance as described in the
   Terminology document [I-D.sip-bench-term].  The methodology and
   terminology are to be used for benchmarking signaling plane
   performance with varying signaling and media load.  Media streams,
   when used, are used only to study how they impact the signaling
   behavior.  This document concentrates on benchmarking SIP session
   setup and SIP registrations only.
   The device-under-test (DUT) is a SIP server, which may be any SIP-
   conforming [RFC3261] device.  Benchmarks can be obtained and
   compared for different types of devices such as a SIP proxy server,
   Session Border Controllers (SBC), SIP registrars and a SIP proxy
   server paired with a media relay.
   The test cases provide metrics for benchmarking the maximum 'SIP
   Registration Rate' and maximum 'SIP Session Establishment Rate' that
   the DUT can sustain over an extended period of time without failures
   (the extended period of time is defined in the algorithm in
   Section 4.8).  Some cases are included to cover encrypted SIP.  The
   test topologies that can be used are described in the Test Setup
   section.  Topologies in which the DUT handles media as well as those
   in which the DUT does not handle media are both considered.  The
   measurement of the performance characteristics of the media itself
   is outside the scope of these documents.
   SIP permits a wide range of configuration options that are explained
   in Section 4 and Section 2 of [I-D.sip-bench-term].  Benchmark
   values could possibly be impacted by Associated Media.  The selected
   values for Session Duration and Media Streams per Session enable
   benchmark
   the media (Figure 1) and the other in which it does process media
   (Figure 2).  In both cases, the tester or EA sends traffic into the
   DUT and absorbs traffic from the DUT.  The diagrams in Figure 1 and
   Figure 2 represent the logical flow of information and do not
   dictate a particular physical arrangement of the entities.

   Test organizations need to be aware that these tests generate large
   volumes of data and consequently must ensure that networking devices
   like hubs, switches or routers are able to handle the generated
   volume.
   Figure 1 depicts a layout in which the DUT is an intermediary
   between the two interfaces of the EA.  If the test case requires the
   exchange of media, the media does not flow through the DUT but
   rather passes directly between the two endpoints.  Figure 2 shows
   the DUT as an intermediary between the two interfaces of the EA.  If
   the test case requires the exchange of media, the media flows
   through the DUT between the endpoints.
       +--------+   Session   +--------+   Session   +--------+
       |        |   Attempt   |        |   Attempt   |        |
       |        |------------>+        |------------>+        |
   SIP session.  When benchmarking a DUT for voice, a single media
   stream is used.  When benchmarking a DUT for voice and video, two
   media streams are used.  The number of Associated Media Streams MUST
   be reported with benchmarking results.
4.6.  Session Duration

   The value of the DUT's performance benchmarks may vary with the
   duration of SIP sessions.  Session Duration MUST be reported with
   benchmarking results.  Setting this parameter to the value '0'
   indicates that a BYE will be sent by the EA immediately after the EA
   receives a 200 OK to the INVITE.  Setting this parameter to a time
   value greater than the duration of the test indicates that a BYE is
   never sent.
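
   As an informal illustration (not part of the methodology), the
   following R fragment captures the BYE-scheduling semantics of this
   parameter; the function name and its arguments are hypothetical:

       # Hypothetical helper: when does the EA send a BYE for a given
       # Session Duration setting?  (Names and units are illustrative;
       # durations are in seconds.)
       bye_behavior <- function(session_duration, test_duration) {
         if (session_duration == 0)
           return("BYE sent immediately after the 200 OK to the INVITE")
         if (session_duration > test_duration)
           return("BYE never sent during the test")
         sprintf("BYE sent %g seconds after the 200 OK", session_duration)
       }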
4.7.  Attempted Sessions per Second (sps)

   The value of the DUT's performance benchmarks may vary with the
   Session Attempt Rate offered by the tester.  Session Attempt Rate
   MUST be reported with the benchmarking results.
4.8.  Benchmarking algorithm

   In order to benchmark the test cases uniformly in Section 6, the
   algorithm described in this section should be used.  A prosaic
   description of the algorithm and a pseudo-code description are
   provided below, and a simulation written in the R statistical
   language is provided in Appendix A.
   The goal is to find the largest value, R, of the SIP Session Attempt
   Rate, measured in sessions per second (sps), that the DUT can
   process with zero errors over a defined, extended period.  This
   period is defined as the amount of time needed to attempt N SIP
   sessions, where N is a parameter of test, at the attempt rate, R.
   An iterative process is used to find this rate.  The algorithm
   corresponding to this process converges to R.
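
   For example (values are illustrative; N is a parameter of test, and
   458 sps is the converged rate from the simulation in Appendix A),
   the extended period follows directly from N and R:

       N <- 50000          # parameter of test: sessions to attempt
       R <- 458            # converged Session Attempt Rate, in sps
       T <- N / R          # extended period, in seconds
       cat(sprintf("extended period: %.1f seconds\n", T))  # ~109.2 s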
   If the DUT vendor provides a value for R, the tester can use this
   value.  Alternatively, in cases where the DUT vendor does not
   provide a value for R, or in cases where the tester wants to verify
   a vendor-provided value using local media characteristics, the
   algorithm could be run by setting "r = R" and observing the value at
   convergence.
   The algorithm proceeds by initializing "r = 100"; "r" is the session
   attempt rate.  The algorithm dynamically increases and decreases "r"
   as it converges to a maximum sps value for R.  The dynamic increase
   and decrease rates are controlled by the weights "w" and "d",
   respectively.  If the DUT vendor provides a value for R, the tester
   can use that value; however, because the requirements and media
   characteristics are a function of the test environment, it is best
   that the tester reflect these requirements during testing and allow
   the algorithm to converge to R.
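
   A minimal, non-normative sketch of these update rules in R,
   mirroring the pseudo-code below:

       w <- 0.10                  # traffic increase weight
       d <- max(0.10, w / 2)      # traffic decrease weight
       r <- 100                   # initial session attempt rate (sps)

       r <- floor(r + (w * r))    # after an error-free run: r is 110
       r <- floor(r - (d * r))    # after a run with errors: r is 99
       d <- max(0.10, d / 2)      # on failure, both weights are
       w <- max(0.10, w / 2)      # halved, bounded below by 0.10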
   The pseudo-code corresponding to the description above follows.
   ; ---- Parameters of test, adjust as needed
   N := 50000   ; Global maximum; once largest session rate has
                ; been established, send this many requests before
                ; calling the test a success
   m := {...}   ; Other attributes that affect testing, such
                ; as media streams, etc.
   r := 100     ; Initial session attempt rate (in sessions/sec).
                ; Adjust as needed (for example, if DUT can handle
                ; thousands of calls in steady state, set to
                ; appropriate value in the thousands).
   w := 0.10    ; Traffic increase weight (0 < w <= 1.0)
   d := max(0.10, w / 2)   ; Traffic decrease weight
   ; ---- End of parameters of test

   proc find_R
       R := max_sps(r, m, N)   ; Set up r sps, each with m media
           ; characteristics, until N sessions have been set up.

       ; Note that if a DUT vendor provides this number, the tester
       ; can use the number as a Session Attempt Rate, R, instead
       ; of invoking max_sps()
   end proc

   ; Iterative process to figure out the largest number of
   ; sps that we can achieve in order to set up n sessions.
   ; This function converges to R, the Session Attempt Rate.
   proc max_sps(r, m, n)
       s := 0        ; sessions successfully set up in a run
       old_r := 0    ; previous session attempt rate
       h := 0        ; return value, R
       count := 0

       ; Note that if w is small (say, 0.10) and r is small
       ; (say, <= 9), the algorithm will not converge, since it
       ; uses floor() to increment r dynamically.  It is best
       ; to start with the defaults (w = 0.10 and r >= 10).
       while (TRUE) {
           s := send_traffic(r, m, n)   ; Send r sps, with m media
               ; characteristics, until n sessions are established.
           if (s == n) {
               if (r > old_r) {
                   old_r := r
               }
               else {
                   count := count + 1
                   if (count >= 10) {
                       ; We have converged.
                       h := max(r, old_r)
                       break
                   }
               }
               r := floor(r + (w * r))
           }
           else {
               r := floor(r - (d * r))
               d := max(0.10, d / 2)
               w := max(0.10, w / 2)
           }
       }
       return h
   end proc
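
   Informally, the loop treats a successful run that does not improve
   on "old_r" as evidence of convergence: only after ten such non-
   improving successful runs (count >= 10) does it report the larger of
   "r" and "old_r".  A worked trace of this behavior, assuming a DUT
   that saturates at 460 sps, appears in the comments of Appendix A,
   where the algorithm converges to 458 sps.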
5.  Reporting Format

5.1.  Test Setup Report

   SIP Transport Protocol = ___________________________
       (valid values: TCP|UDP|TLS|SCTP|websockets|specify-other)
   Session Attempt Rate = _____________________________
       (session attempts/sec)
   Total Sessions Attempted = _________________________
       2.  Set media streams per session to 1.

       3.  Execute the benchmarking algorithm as defined in Section 4.8
           to get the session establishment rate with media.  This rate
           MUST be recorded using any pertinent parameters as shown in
           the reporting format of Section 5.1.

   Expected Results:  Session Establishment Rate results obtained with
       Associated Media with any number of media streams per SIP
       session are expected to be identical to the Session
       Establishment Rate results obtained without media in the case
       where the DUT is running on a platform separate from the Media
       Relay.
6.4.  Session Establishment Rate with Media on DUT

   Objective:
       To benchmark the Session Establishment Rate of the DUT with zero
       failures when Associated Media is included in the benchmark test
       and the media is running through the DUT.

   Procedure:
       1.  Configure a DUT according to the test topology shown in
   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, March 1999.

   [I-D.sip-bench-term]
              Davids, C., Gurbani, V., and S. Poretsky, "SIP
              Performance Benchmarking Terminology",
              draft-ietf-bmwg-sip-bench-term-10 (work in progress),
              May 2014.
10.2.  Informative References

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
              A., Peterson, J., Sparks, R., Handley, M., and E.
              Schooler, "SIP: Session Initiation Protocol", RFC 3261,
              June 2002.
Appendix A. R code to simulate benchmarking algorithm
   w = 0.10
   d = max(0.10, w / 2)
   DUT_max_sps = 460   # Change as needed to set the max sps value
                       # for a DUT

   # Returns R, given r (initial session attempt rate).
   # E.g., assume that a DUT handles 460 sps in steady state
   # and you have saved this code in a file simulate.r.  Then,
   # start an R session and do the following:
   #
   # > source("simulate.r")
   # > find_R(100)
   # ... debug output omitted ...
   # [1] 458
   #
   # Thus, the max sps that the DUT can handle is 458 sps, which is
   # close to the absolute maximum of 460 sps the DUT is specified to
   # do.
   find_R <- function(r) {
       s = 0
       old_r = 0
       h = 0
       count = 0

       # Note that if w is small (say, 0.10) and r is small
       # (say, <= 9), the algorithm will not converge since it
       # uses floor() to increment r dynamically.  It is best
       # to start with the defaults (w = 0.10 and r >= 10).
       cat("r    old_r    w    d\n")
       while (TRUE) {
           cat(r, ' ', old_r, ' ', w, ' ', d, '\n')
           s = send_traffic(r)
           if (s == TRUE) {   # All sessions succeeded
               if (r > old_r) {
                   old_r = r
               }
               else {
                   count = count + 1
                   if (count >= 10) {
                       # We've converged.
                       h = max(r, old_r)
                       break
                   }
               }
               r = floor(r + (w * r))
           }
           else {
               r = floor(r - (d * r))
               d = max(0.10, d / 2)
               w = max(0.10, w / 2)
           }
       }
       h
   }
   send_traffic <- function(r) {
       n = TRUE
       if (r > DUT_max_sps) {
           n = FALSE
       }
       n
   }
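
   Note that send_traffic() here is a stand-in: it simply reports
   success whenever the requested rate does not exceed DUT_max_sps, so
   the simulation models an ideal DUT with a hard capacity limit.  A
   tester adapting this code would replace send_traffic() with a
   function that drives real traffic at r sps and reports whether all
   attempted sessions were established.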
Authors' Addresses

   Carol Davids
   Illinois Institute of Technology
   201 East Loop Road
   Wheaton, IL  60187
   USA

   Phone: +1 630 682 6024
   Email: davids@iit.edu