draft-ietf-bmwg-sip-bench-meth-11.txt
Benchmarking Methodology Working Group                        C. Davids
Internet-Draft                         Illinois Institute of Technology
Intended status: Informational                                V. Gurbani
Expires: January 3, 2015                              Bell Laboratories,
                                                          Alcatel-Lucent
                                                             S. Poretsky
                                                    Allot Communications
                                                            July 2, 2014

Methodology for Benchmarking Session Initiation Protocol (SIP) Devices:
                 Basic session setup and registration
                  draft-ietf-bmwg-sip-bench-meth-11
Abstract

This document provides a methodology for benchmarking the Session
Initiation Protocol (SIP) performance of devices. Terminology
related to benchmarking SIP devices is described in the companion
terminology document. Using these two documents, benchmarks can be
obtained and compared for different types of devices such as SIP
Proxy Servers, Registrars and Session Border Controllers. The term
"performance" in this context means the capacity of the device-under-

skipping to change at page 1, line 48
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 3, 2015.
Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents

skipping to change at page 3, line 10

to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

1.  Terminology
2.  Introduction
3.  Benchmarking Topologies
4.  Test Setup Parameters
    4.1.  Selection of SIP Transport Protocol
    4.2.  Connection-oriented Transport Management
    4.3.  Signaling Server
    4.4.  Associated Media
    4.5.  Selection of Associated Media Protocol
    4.6.  Number of Associated Media Streams per SIP Session
    4.7.  Codec Type
    4.8.  Session Duration
    4.9.  Attempted Sessions per Second (sps)
    4.10. Benchmarking algorithm
5.  Reporting Format
    5.1.  Test Setup Report
    5.2.  Device Benchmarks for session setup
    5.3.  Device Benchmarks for registrations
6.  Test Cases
    6.1.  Baseline Session Establishment Rate of the test bed
    6.2.  Session Establishment Rate without media
    6.3.  Session Establishment Rate with Media not on DUT
    6.4.  Session Establishment Rate with Media on DUT
    6.5.  Session Establishment Rate with TLS Encrypted SIP
    6.6.  Session Establishment Rate with IPsec Encrypted SIP
    6.7.  Registration Rate
    6.8.  Re-Registration Rate
7.  IANA Considerations
8.  Security Considerations
9.  Acknowledgments
10. References
    10.1. Normative References
    10.2. Informative References
Appendix A.  R Code Component to simulate benchmarking algorithm
Authors' Addresses
1. Terminology

In this document, the key words "MUST", "MUST NOT", "REQUIRED",
"SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT
RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as
described in BCP 14, conforming to [RFC2119] and indicate requirement
levels for compliant implementations.

RFC 2119 defines the use of these key words to help make the intent
of standards track documents as clear as possible. While this
document uses these keywords, this document is not a standards track
document. The term Throughput is defined in [RFC2544].

Terms specific to SIP [RFC3261] performance benchmarking are defined
in [I-D.sip-bench-term].
2. Introduction

This document describes the methodology for benchmarking Session
Initiation Protocol (SIP) performance as described in the Terminology
document [I-D.sip-bench-term]. The methodology and terminology are
to be used for benchmarking signaling plane performance with varying
signaling and media load. Media streams, when used, are used only to
study how they impact the signaling behavior. This document
concentrates on benchmarking SIP session setup and SIP registrations
only.
The device-under-test (DUT) is an RFC3261-capable [RFC3261] network
intermediary that plays the role of a registrar, redirect server,
stateful proxy, a Session Border Controller (SBC) or a Back-to-Back
User Agent (B2BUA). This document does not require the intermediary
to assume the role of a stateless proxy. Benchmarks can be obtained
and compared for different types of devices such as a SIP proxy
server, Session Border Controllers (SBC), SIP registrars and a SIP
proxy server paired with a media relay.
The test cases provide metrics for benchmarking the maximum 'SIP
Registration Rate' and maximum 'SIP Session Establishment Rate' that
the DUT can sustain over an extended period of time without failures
(the extended period of time is defined in the algorithm in
Section 4.10). Some cases are included to cover encrypted SIP. The
test topologies that can be used are described in the Test Setup
section. Topologies in which the DUT handles media as well as those
in which the DUT does not handle media are both considered. The
measurement of the performance characteristics of the media itself is
outside the scope of these documents.
SIP permits a wide range of configuration options that are explained
in Section 4 and Section 2 of [I-D.sip-bench-term]. Benchmark values
could possibly be impacted by Associated Media. The selected values
for Session Duration and Media Streams per Session enable benchmark
metrics to be evaluated without Associated Media. Session
Establishment Rate could possibly be impacted by the selected value
for Maximum Sessions Attempted. The benchmark for Session
Establishment Rate is measured with a fixed value for maximum Session
Attempts.
Finally, the overall value of these tests is to serve as a comparison
function between multiple SIP implementations. One way to use these
tests is to derive benchmarks with SIP devices from Vendor-A, derive
a new set of benchmarks with similar SIP devices from Vendor-B and
perform a comparison on the results of Vendor-A and Vendor-B. This
document does not make any claims on the interpretation of such
results.
3. Benchmarking Topologies

Test organizations need to be aware that these tests generate large
volumes of data and consequently must ensure that networking devices
like hubs, switches or routers are able to handle the generated
volume.

The test cases enumerated in Section 6.1 to Section 6.6 operate on
two test topologies: one in which the DUT does not process the media
(Figure 1) and the other in which it does process media (Figure 2).
In both cases, the tester or EA sends traffic into the DUT and
absorbs traffic from the DUT. The diagrams in Figure 1 and Figure 2
represent the logical flow of information and do not dictate a
particular physical arrangement of the entities.
Figure 1 depicts a layout in which the DUT is an intermediary between
the two interfaces of the EA. If the test case requires the exchange
of media, the media does not flow through the DUT but rather passes
directly between the two endpoints. Figure 2 shows the DUT as an
intermediary between the two interfaces of the EA. If the test case
requires the exchange of media, the media flows through the DUT
between the endpoints.
+--------+  Session    +--------+   Session   +--------+
|        |  Attempt    |        |   Attempt   |        |

skipping to change at page 6, line 33

|        |             |        |             |        |
|        |  Response   |        |  Response   |        |
| Tester +<------------|  DUT   +<------------| Tester |
|  (EA)  |             |        |             |  (EA)  |
|        |<===========>|        |<===========>|        |
+--------+   Media     +--------+    Media    +--------+
           (Optional)              (Optional)

        Figure 2: DUT as an intermediary forwarding media
The test cases enumerated in Section 6.7 and Section 6.8 use the
topology in Figure 3 below.
+--------+  Registration  +--------+
|        |    request     |        |
|        |--------------->+        |
|        |                |        |
|        |    Response    |        |
| Tester +<---------------|  DUT   |
|  (EA)  |                |        |
|        |                |        |
+--------+                +--------+

      Figure 3: Registration and Re-registration tests
During registration or re-registration, the DUT may involve backend
network elements and data stores. These network elements and data
stores are not shown in Figure 3, but it is understood that they will
impact the time required for the DUT to generate a response.
This document explicitly separates a registration test (Section 6.7)
from a re-registration test (Section 6.8) because in certain
networks, the time to re-register may vary from the time to perform
an initial registration due to the backend processing involved. It
is expected that the registration test and the re-registration test
will be performed with the same set of backend network elements in
order to derive a stable metric.
4. Test Setup Parameters

4.1. Selection of SIP Transport Protocol

Test cases may be performed with any transport protocol supported by
SIP. This includes, but is not limited to, TCP, UDP, TLS and
websockets. The protocol used for the SIP transport protocol must be
reported with benchmarking results.
SIP allows a DUT to use different transports for signaling on either
side of the connection to the EAs. Therefore, this document assumes
that the same transport is used on both sides of the connection; if
this is not the case in any of the tests, the transport on each side
of the connection MUST be reported in the test reporting template.
4.2. Connection-oriented Transport Management
SIP allows a device to open one connection and send multiple requests
over the same connection (responses are normally received over the
same connection that the request was sent out on). The protocol also
allows a device to open a new connection for each individual request.
A connection management strategy will have an impact on the results
obtained from the test cases, especially for connection-oriented
transports such as TLS. For such transports, the cryptographic
handshake must occur every time a connection is opened.
The connection management strategy, i.e., use of one connection to
send all requests or closing an existing connection and opening a new
connection to send each request, MUST be reported with the
benchmarking result.
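The difference between the two strategies can be substantial for a
transport such as TLS, where every new connection adds a full
cryptographic handshake to the cost of a request. The short R sketch
below is purely illustrative and is not part of the methodology; the
per-handshake and per-request costs are hypothetical placeholders,
not measured values.

# Illustrative only: per-session signaling cost when a TLS handshake
# is paid once (persistent connection) versus on every request (one
# connection per request). All costs are hypothetical, in ms.
handshake_ms <- 4.0        # assumed cost of one TLS handshake
request_ms   <- 0.5        # assumed cost of one request on an open connection
requests_per_session <- 2  # e.g., an INVITE and a BYE

persistent  <- requests_per_session * request_ms
per_request <- requests_per_session * (request_ms + handshake_ms)

cat("per-session cost, persistent connection :", persistent, "ms\n")
cat("per-session cost, connection per request:", per_request, "ms\n")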
4.3. Signaling Server
The Signaling Server is defined in the companion terminology
document ([I-D.sip-bench-term], Section 3.2.2). The Signaling
Server is a DUT.
4.4. Associated Media
Some tests require Associated Media to be present for each SIP
session. The test topologies to be used when benchmarking DUT
performance for Associated Media are shown in Figure 1 and Figure 2.
4.5. Selection of Associated Media Protocol
The test cases specified in this document provide SIP performance
independent of the protocol used for the media stream. Any media
protocol supported by SIP may be used. This includes, but is not
limited to, RTP and SRTP. The protocol used for Associated Media
MUST be reported with benchmarking results.
4.6. Number of Associated Media Streams per SIP Session
Benchmarking results may vary with the number of media streams per
SIP session. When benchmarking a DUT for voice, a single media
stream is used. When benchmarking a DUT for voice and video, two
media streams are used. The number of Associated Media Streams MUST
be reported with benchmarking results.
4.7. Codec Type
The test cases specified in this document provide SIP performance
independent of the media stream codec. Any codec supported by the
EAs may be used. The codec used for Associated Media MUST be
reported with the benchmarking results.
4.8. Session Duration
The value of the DUT's performance benchmarks may vary with the
duration of SIP sessions. Session Duration MUST be reported with
benchmarking results. A Session Duration of zero seconds indicates
transmission of a BYE immediately following a successful SIP
establishment. Setting this parameter to the value '0' indicates
that a BYE will be sent by the EA immediately after the EA receives a
200 OK to the INVITE. Setting this parameter to a time value greater
than the duration of the test indicates that a BYE is never sent.
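As an illustration only, the following R fragment mirrors the
parameter semantics described above; the function and argument names
are hypothetical and are not part of the methodology.

# Hypothetical helper: seconds after the 200 OK to the INVITE at
# which the EA sends a BYE, given the configured Session Duration
# and the planned test duration. NA means no BYE is sent.
bye_offset <- function(session_duration, test_duration) {
  if (session_duration > test_duration) {
    NA                  # session outlives the test; BYE never sent
  } else {
    session_duration    # 0 means BYE immediately after the 200 OK
  }
}

bye_offset(0, 3600)     # 0  -> BYE immediately after establishment
bye_offset(30, 3600)    # 30 -> BYE 30 seconds after the 200 OK
bye_offset(7200, 3600)  # NA -> no BYE during this test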
4.9. Attempted Sessions per Second (sps)
The value of the DUT's performance benchmarks may vary with the
Session Attempt Rate offered by the tester. Session Attempt Rate
MUST be reported with the benchmarking results.
The test cases enumerated in Section 6.1 to Section 6.6 require that
the EA is configured to send the final 2xx-class response as quickly
as it can. This document does not require the tester to add any
delay between receiving a request and generating a final response.
4.10. Benchmarking algorithm
In order to benchmark the test cases uniformly in Section 6, the
algorithm described in this section should be used. A prosaic
description of the algorithm and a pseudo-code description are
provided below, and a simulation written in the R statistical
language [Rtool] is provided in Appendix A.

The goal is to find the largest value, R, a SIP Session Attempt Rate,
measured in sessions-per-second (sps), which the DUT can process with
zero errors over a defined, extended period. This period is defined
as the amount of time needed to attempt N SIP sessions, where N is a
parameter of the test, at the attempt rate, R. An iterative process
is used to find this rate. The algorithm corresponding to this
process converges to R.
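As a worked example of the relationship between N and R (the numbers
below are illustrative, not recommended values), a single iteration
that attempts N sessions at rate R lasts N/R seconds:

# Illustrative arithmetic only: duration of one iteration that
# offers N session attempts at a rate of R sessions per second.
N <- 50000                  # total session attempts in one iteration
R <- 100                    # session attempt rate, in sps
iteration_seconds <- N / R  # 500 seconds, roughly 8.3 minutes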
If the DUT vendor provides a value for R, the tester can use this

skipping to change at page 12, line 4
        w := max(0.10, w / 2)
      }
    }
    return h
end proc
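The following compressed R sketch illustrates only the convergence
idea described in prose above: push the attempt rate up while all N
attempts succeed, back off with a successively smaller step after a
failure, and stop when the step reaches its floor. It is neither the
normative pseudo-code (largely elided from this diff) nor the Code
Component of Appendix A; the send_traffic() predicate and the
starting values are hypothetical stand-ins for the EA driving the
DUT, and the sketch assumes the DUT has a finite capacity so that a
failure eventually occurs.

# Illustrative sketch of the iterative search for R. send_traffic(r, N)
# is assumed to return TRUE when all N session attempts offered at
# rate r succeed, and FALSE otherwise.
find_max_rate <- function(r, N, send_traffic) {
  w <- 0.50          # current step fraction (illustrative start value)
  floor_w <- 0.10    # smallest step, mirroring the 0.10 floor above
  h <- 0             # highest rate achieved so far with zero failures
  repeat {
    if (send_traffic(r, N)) {
      h <- max(h, r)
      r <- r * (1 + w)              # all attempts succeeded: push higher
    } else {
      if (w == floor_w) break       # already probing at the finest step
      w <- max(floor_w, w / 2)      # shrink the step
      r <- if (h > 0) h * (1 + w) else r * (1 - w)  # retreat and retry
    }
  }
  h
}

# Example against a simulated DUT that sustains at most 460 sps:
dut <- function(rate, n) rate <= 460
find_max_rate(r = 100, N = 50000, send_traffic = dut)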
5. Reporting Format

5.1. Test Setup Report
SIP Transport Protocol = ___________________________
(valid values: TCP|UDP|TLS|SCTP|websockets|specify-other)
(specify if same transport used for connections to the DUT
and connections from the DUT. If different transports
used on each connection, enumerate the transports used)

Connection management strategy for connection oriented
transports

DUT receives requests on one connection = _______
(yes or no. If no, DUT accepts a new connection for
every incoming request, sends a response on that
connection and closes the connection)

DUT sends requests on one connection = __________
(yes or no. If no, DUT initiates a new connection to
send out each request, gets a response on that
connection and closes the connection)

Session Attempt Rate = _____________________________
(session attempts/sec)

Total Sessions Attempted = _________________________
(total sessions to be created over duration of test)

Media Streams Per Session = _______________________
(number of streams per session)

Associated Media Protocol = _______________________
(RTP|SRTP|specify-other)

Codec = ____________________________________________
(Codec type as identified by the organization that
specifies the codec)

Media Packet Size (audio only) = __________________
(Number of bytes in an audio packet)

Establishment Threshold time = ____________________
(seconds)

TLS ciphersuite used
(for tests involving TLS) = ________________________
(e.g., TLS_RSA_WITH_AES_128_CBC_SHA)

IPSec profile used
(for tests involving IPSEC) = _____________________
5.2. Device Benchmarks for session setup

Session Establishment Rate = ______________________
(sessions per second)

Is DUT acting as a media relay (yes/no) = _________

5.3. Device Benchmarks for registrations

Registration Rate = ____________________________
(registrations per second)

Re-registration Rate = ____________________________
(registrations per second)
Notes = ____________________________________________
(List any specific backend processing required or
other parameters that may impact the rate)
6. Test Cases
6.1. Baseline Session Establishment Rate of the test bed

Objective:
  To benchmark the Session Establishment Rate of the Emulated Agent
  (EA) with zero failures.

Procedure:
  1. Configure the DUT in the test topology shown in Figure 1.
  2. Set media streams per session to 0.
  3. Execute benchmarking algorithm as defined in Section 4.10 to
     get the baseline session establishment rate. This rate MUST
     be recorded using any pertinent parameters as shown in the
     reporting format of Section 5.1.

Expected Results: This is the scenario to obtain the maximum Session
  Establishment Rate of the EA and the test bed when no DUT is
  present. The results of this test might be used to normalize test
  results performed on different test beds or simply to better
  understand the impact of the DUT on the test bed in question.
6.2. Session Establishment Rate without media

Objective:
  To benchmark the Session Establishment Rate of the DUT with no
  associated media and zero failures.

Procedure:
  1. Configure a DUT according to the test topology shown in
     Figure 1 or Figure 2.
  2. Set media streams per session to 0.
  3. Execute benchmarking algorithm as defined in Section 4.10 to
     get the session establishment rate. This rate MUST be
     recorded using any pertinent parameters as shown in the
     reporting format of Section 5.1.

Expected Results: Find the Session Establishment Rate of the DUT
  when the EA is not sending media streams.
6.3. Session Establishment Rate with Media not on DUT

Objective:
  To benchmark the Session Establishment Rate of the DUT with zero
  failures when Associated Media is included in the benchmark test
  but the media is not running through the DUT.

Procedure:
  1. Configure a DUT according to the test topology shown in
     Figure 1.
  2. Set media streams per session to 1.
  3. Execute benchmarking algorithm as defined in Section 4.10 to
     get the session establishment rate with media. This rate MUST
     be recorded using any pertinent parameters as shown in the
     reporting format of Section 5.1.

Expected Results: Session Establishment Rate results obtained with
  Associated Media with any number of media streams per SIP session
  are expected to be identical to the Session Establishment Rate
  results obtained without media in the case where the DUT is
  running on a platform separate from the Media Relay.
6.4. Session Establishment Rate with Media on DUT

Objective:
  To benchmark the Session Establishment Rate of the DUT with zero
  failures when Associated Media is included in the benchmark test
  and the media is running through the DUT.

Procedure:
  1. Configure a DUT according to the test topology shown in
     Figure 2.
  2. Set media streams per session to 1.
  3. Execute benchmarking algorithm as defined in Section 4.10 to
     get the session establishment rate with media. This rate MUST
     be recorded using any pertinent parameters as shown in the
     reporting format of Section 5.1.

Expected Results: Session Establishment Rate results obtained with
  Associated Media may be lower than those obtained without media in
  the case where the DUT and the Media Relay are running on the same
  platform.
6.5. Session Establishment Rate with TLS Encrypted SIP

Objective:
  To benchmark the Session Establishment Rate of the DUT with zero
  failures when using TLS encrypted SIP signaling.

Procedure:
  1. If the DUT is being benchmarked as a proxy or B2BUA, then
     configure the DUT in the test topology shown in Figure 1 or
     Figure 2.
  2. Configure the tester to enable TLS over the transport being
     used during benchmarking. Note the ciphersuite being used for
     TLS and record it in Section 5.1.
  3. Set media streams per session to 0 (media is not used in this
     test).
  4. Execute benchmarking algorithm as defined in Section 4.10 to
     get the session establishment rate with TLS encryption.

Expected Results: Session Establishment Rate results obtained with
  TLS Encrypted SIP may be lower than those obtained with plaintext
  SIP.
6.6. Session Establishment Rate with IPsec Encrypted SIP

Objective:
  To benchmark the Session Establishment Rate of the DUT with zero
  failures when using IPsec Encrypted SIP signaling.

Procedure:
  1. Configure a DUT according to the test topology shown in
     Figure 1 or Figure 2.
  2. Set media streams per session to 0 (media is not used in this
     test).
  3. Configure tester for IPSec. Note the IPSec profile being used
     and record it in Section 5.1.
  4. Execute benchmarking algorithm as defined in Section 4.10 to
     get the session establishment rate with encryption.

Expected Results: Session Establishment Rate results obtained with
  IPSec Encrypted SIP may be lower than those obtained with
  plaintext SIP.
6.7. Registration Rate

Objective:
  To benchmark the maximum registration rate the DUT can handle over
  an extended time period with zero failures.

Procedure:
  1. Configure a DUT according to the test topology shown in
     Figure 3.
  2. Set the registration timeout value to at least 3600 seconds.
  3. Each register request MUST be made to a distinct address of
     record (AoR). Execute benchmarking algorithm as defined in
     Section 4.10 to get the maximum registration rate. This rate
     MUST be recorded using any pertinent parameters as shown in
     the reporting format of Section 5.1. For example, the use of
     TLS or IPSec during registration must be noted in the
     reporting format. In the same vein, any specific backend
     processing (use of databases, authentication servers, etc.)
     SHOULD be recorded as well.

Expected Results: Provides a maximum registration rate.
6.8. Re-Registration Rate

Objective:
  To benchmark the re-registration rate of the DUT with zero
  failures using the same backend processing and parameters used
  during Section 6.7.

Procedure:
  1. Configure a DUT according to the test topology shown in
     Figure 3.
  2. First, execute the test detailed in Section 6.7 to register
     the endpoints with the registrar and obtain the registration
     rate.
  3. At least 5 minutes after Step 2, but no more than 10 minutes
     after Step 2 has been performed, re-register the same AoRs
     used in Step 3 of Section 6.7. This will count as a
     re-registration because the SIP AoRs have not yet expired.

Expected Results: Note the rate obtained through this test for
  comparison with the rate obtained in Section 6.7.
7. IANA Considerations

This document does not require any IANA considerations.
8. Security Considerations

Documents of this type do not directly affect the security of the
Internet or corporate networks as long as benchmarking is not
performed on devices or systems connected to production networks.

skipping to change at page 17, line 29

is discussed in RFC3261, RFC3550, and RFC3711 and various other
drafts. This document attempts to formalize a set of common
methodologies for benchmarking the performance of SIP devices in a
lab environment.
9. Acknowledgments

The authors would like to thank Keith Drage and Daryl Malas for their
contributions to this document. Dale Worley provided an extensive
review that led to improvements in the documents. We are grateful
to Barry Constantine, William Cerveny and Robert Sparks for providing
valuable comments during the document's last calls and expert
reviews.
10. References

10.1. Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
           Network Interconnect Devices", RFC 2544, March 1999.
skipping to change at page 18, line 12

           draft-ietf-bmwg-sip-bench-term-10 (work in progress),
           May 2014.
10.2. Informative References

[RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
           A., Peterson, J., Sparks, R., Handley, M., and E.
           Schooler, "SIP: Session Initiation Protocol", RFC 3261,
           June 2002.
[Rtool]    R Development Core Team, "R: A language and environment
           for statistical computing. R Foundation for Statistical
           Computing, Vienna, Austria. ISBN 3-900051-07-0, URL
           http://www.R-project.org/", 2011.
Appendix A. R Code Component to simulate benchmarking algorithm
# Copyright (c) 2014 IETF Trust and Vijay K. Gurbani. All
# rights reserved.
#
# Redistribution and use in source and binary forms, with
# or without modification, are permitted provided that the
# following conditions are met:
#
# * Redistributions of source code must retain the above
# copyright notice, this list of conditions and the following
# disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials
# provided with the distribution.
# * Neither the name of Internet Society, IETF or IETF Trust,
# nor the names of specific contributors, may be used
# to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
# CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
# OF SUCH DAMAGE.
w = 0.10
d = max(0.10, w / 2)
DUT_max_sps = 460  # Change as needed to set the max sps value
                   # for a DUT

# Returns R, given r (initial session attempt rate).
# E.g., assume that a DUT handles 460 sps in steady state
# and you have saved this code in a file simulate.r. Then,
# start an R session and do the following: