BMWG                                                       R. Rosa, Ed.
Internet-Draft                                            C. Rothenberg
Intended status: Informational                                  UNICAMP
Expires: December 26, 2019                                   M. Peuster
                                                                H. Karl
                                                                    UPB
                                                          June 24, 2019

             Methodology for VNF Benchmarking Automation
                     draft-rosa-bmwg-vnfbench-04
Abstract

   This document describes a common methodology for the automated
   benchmarking of Virtualized Network Functions (VNFs) executed on
   general-purpose hardware.  Specific cases of automated benchmarking
   methodologies for particular VNFs can be derived from this document.
   Two open source reference implementations are reported as running
   code embodiments of the proposed, automated benchmarking
   methodology.
Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 26, 2019.
Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1. Introduction
   2. Terminology
   3. Scope
   4. Considerations
      4.1. VNF Testing Methods
      4.2. Benchmarking Procedures
         4.2.1. Phase I: Deployment
         4.2.2. Phase II: Configuration
         4.2.3. Phase III: Execution
         4.2.4. Phase IV: Report
   5. Generic VNF Benchmarking Architectural Framework
      5.1. Deployment Scenarios
   6. Methodology
      6.1. VNF Benchmarking Descriptor (VNF-BD)
         6.1.1. Descriptor Headers
         6.1.2. Target Information
         6.1.3. Experiments
         6.1.4. Environment
         6.1.5. Scenario
         6.1.6. Proceedings
      6.2. VNF Performance Profile (VNF-PP)
         6.2.1. Execution Environment
         6.2.2. Measurement Results
      6.3. Procedures
         6.3.1. Pre-Execution
         6.3.2. Automated Execution
         6.3.3. Post-Execution
      6.4. Particular Cases
         6.4.1. Capacity
         6.4.2. Isolation
         6.4.3. Failure Handling
         6.4.4. Elasticity and Flexibility
         6.4.5. Handling Configurations
         6.4.6. White Box VNF
   7. Open Source Reference Implementations
      7.1. Gym
      7.2. tng-bench
   8. Security Considerations
   9. IANA Considerations
   10. Acknowledgement
   11. References
      11.1. Normative References
      11.2. Informative References
   Authors' Addresses
1. Introduction

The Benchmarking Methodology Working Group (BMWG) already presented
considerations for benchmarking of VNFs and their infrastructure in
[RFC8172].  Similar to the motivation given in [RFC8172], the
following aspects justify the need for VNF benchmarking: (i) pre-
deployment infrastructure dimensioning to realize associated VNF
performance profiles; (ii) comparison with physical network
functions; and (iii) output results for analytical VNF development.

Even though many methodologies already described by the BMWG, e.g.,
self-contained black-box benchmarking, can be applied to VNF
benchmarking scenarios, further considerations have to be made.  On
the one hand, this is because VNFs, which are software components, do
not have strict and clear execution boundaries and depend on
underlying virtualization environment parameters as well as
management and orchestration decisions [ETS14a].  On the other hand,
the flexible, software-based nature of VNFs can and should be
exploited to fully automate the entire benchmarking procedure end-to-
end.  This is an inherent need to align VNF benchmarking with the
agile methods enabled by the concept of Network Functions
Virtualization (NFV) [ETS14e].  More specifically, it allows: (i) the
development of agile performance-focused DevOps methodologies for
Continuous Integration and Delivery (CI/CD) of VNFs; (ii) the
creation of on-demand VNF test descriptors for upcoming execution
environments; (iii) the path for precise analytics of automated
catalogues of VNF performance profiles; and (iv) run-time mechanisms
to assist VNF lifecycle orchestration/management workflows, e.g.,
automated resource dimensioning based on benchmarking insights.

This document describes basic methodologies and guidelines to fully
automate VNF benchmarking procedures, without limiting the automated
process to a specific benchmark or infrastructure.  After presenting
initial considerations, the document first describes a generic
architectural framework to set up automated benchmarking experiments.
Second, the automation methodology is discussed, with a particular
focus on experiment and procedure description approaches to support
reproducibility of the automated benchmarks, a key challenge in VNF
benchmarking.  Finally, two independent, open-source reference
implementations are presented.  The document addresses state-of-the-
art work on VNF benchmarking from scientific publications and current
developments in other standardization bodies (e.g., [ETS14c] and
[RFC8204]) wherever possible.
2. Terminology

Common benchmarking terminology contained in this document is derived
from [RFC1242].  The reader is assumed to be familiar with the
terminology as defined in the European Telecommunications Standards
Institute (ETSI) NFV document [ETS14b].  Some of these terms, and
others commonly used in this document, are defined below.

NFV: Network Function Virtualization - the principle of separating
   network functions from the hardware they run on by using virtual
   hardware abstraction.

VNF: Virtualized Network Function - a software-based network
   function.  A VNF can be either represented by a single entity or
   be composed by a set of smaller, interconnected software
   components, called VNF components (VNFCs) [ETS14d].  Those VNFs
   are also called composed VNFs.

VNFC: Virtualized Network Function Component - a software component
   that implements (parts of) the VNF functionality.  A VNF can
   consist of a single VNFC or multiple, interconnected VNFCs
   [ETS14d].

VNFD: Virtualised Network Function Descriptor - configuration
   template that describes a VNF in terms of its deployment and
   operational behaviour, and is used in the process of VNF on-
   boarding and managing the life cycle of a VNF instance.

NS: Network Service - a collection of interconnected VNFs forming an
   end-to-end service.  The interconnection is often done using
   chaining of functions.
3. Scope

This document assumes VNFs as black boxes when defining their
benchmarking methodologies.  White box approaches are assumed and
analysed as a particular case under the proper considerations of
internal VNF instrumentation, later discussed in this document.

This document outlines a methodology for VNF benchmarking,
specifically addressing its automation.
4. Considerations

4.1. VNF Testing Methods

[...]
Dimensioning: Performance metrics are provided and the corresponding
   parameters obtained.  Note, multiple deployments may be required,
   or if possible, underlying allocated resources need to be
   dynamically altered.

Note: Verification and Dimensioning can be reduced to Benchmarking.
Therefore, we focus on Benchmarking in the rest of the document.
4.2. Benchmarking Procedures

An (automated) benchmarking procedure can be divided into three sub-
procedures:

Trial: A single process or iteration to obtain VNF performance
   metrics from benchmarking measurements.  A Test should always run
   multiple Trials to get statistical confidence about the obtained
   measurements.

Test: Defines unique structural and functional parameters (e.g.,
   configurations, resource assignment) for benchmarked components to
   perform one or multiple Trials.  Each Test must be executed
   following a particular benchmarking scenario defined by a Method.
   Proper measures must be taken to ensure statistical validity
   (e.g., independence across Trials of generated load patterns).

Method: Consists of one or more Tests to benchmark a VNF.  A Method
   can explicitly list ranges of parameter values for the
   configuration of a benchmarking scenario and its components.  Each
   value of such a range is to be realized in a Test, i.e., Methods
   can define parameter studies.

In general, automated VNF benchmarking Tests must capture relevant
causes of performance variability.  To dissect a VNF benchmarking
Test, the sections that follow categorize different benchmarking
phases, defining the generic operations that may be automated.  When
automating a VNF benchmarking methodology, all the aspects
influencing the performance of a VNF must be carefully analyzed and
comprehensively reported in each phase of the overall benchmarking
process.
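
The relation among Method, Tests, and Trials can be illustrated by
the following simplified sketch (Python; names and structure are
illustrative assumptions, not part of any reference implementation):

      import itertools
      import statistics

      def run_trial(parameters):
          # Placeholder: deploy/configure components, generate the
          # stimulus, and collect one set of measurements.
          return {"throughput_mbps": 940.0}

      def run_method(parameter_ranges, trials_per_test=10):
          # A Method lists ranges of parameter values; each
          # combination of values becomes one Test (parameter study).
          report = []
          keys = sorted(parameter_ranges)
          for values in itertools.product(
                  *(parameter_ranges[k] for k in keys)):
              test = dict(zip(keys, values))
              # Each Test runs multiple Trials to gain statistical
              # confidence about the obtained measurements.
              samples = [run_trial(test)["throughput_mbps"]
                         for _ in range(trials_per_test)]
              report.append({"parameters": test,
                             "median": statistics.median(samples),
                             "stdev": statistics.stdev(samples)})
          return report

      results = run_method({"vcpus": [1, 2, 4],
                            "rate_mbps": [100, 1000]})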
4.2.1. Phase I: Deployment

The placement (i.e., assignment and allocation of resources) and the
interconnection, physical and/or virtual, of network function(s) and
benchmarking components can be realized by orchestration platforms
(e.g., OpenStack, Kubernetes, Open Source MANO).  In automated
manners, the realization of a benchmarking testbed/scenario through
those means usually relies on network service templates (e.g.,
TOSCA, Heat, YANG).  Such descriptors have to capture all relevant
details of the execution environment to allow the benchmarking
framework to correctly instantiate the SUT as well as the helper
functions required for a Test.
4.2.2. Phase II: Configuration

The configuration of benchmarking components and VNFs (e.g.,
populating a routing table, loading PCAP source files into the
source of traffic stimulus) to execute the Test settings can be
realized by programming interfaces in an automated way.  In the
scope of NFV, there might exist management interfaces to control a
VNF during a benchmarking Test.  Likewise, infrastructure or
orchestration components can establish the proper configuration of
an execution environment to realize all the capabilities enabling
the description of the benchmarking Test.  Each configuration
registry, with its deployment timestamp and target, must be
contained in the VNF benchmarking report.
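
As an illustration, one configuration registry entry destined for
the benchmarking report could be structured as follows (a minimal
sketch with hypothetical field names):

      import json
      import time

      # Hypothetical record of one configuration action applied in
      # Phase II; every such entry is kept for the final report.
      config_record = {
          "target": "vnf-firewall-01",  # component that was configured
          "interface": "ssh",           # management interface used
          "action": "load-ruleset",     # configuration performed
          "parameters": {"rules_file": "acl-1k.txt"},
          "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ",
                                     time.gmtime()),
      }

      print(json.dumps(config_record, indent=2))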
4.2.3. Phase III: Execution

In the execution of a benchmarking Test, the VNF configuration can be
programmed to be changed by itself or by a VNF management platform.
It means that during a Trial execution, particular behaviors of a VNF
can be automatically triggered, e.g., auto-scaling of its internal
components. Those must be captured in the detailed procedures of the
VNF execution and its performance report. I.e., the execution of a
Trial can determine arrangements of internal states inside a VNF,
which can interfere with observed benchmarking metrics.  For instance,
in a particular benchmarking case where the monitoring measurements
of the VNF and/or execution environment are available for extraction,
Tests should be run to verify if the monitoring of the VNF and/or
execution environment can impact the VNF performance metrics.
4.2.4. Phase IV: Report
The report of a VNF benchmarking Method might contain generic metrics
(e.g., CPU and memory consumption) and VNF-specific traffic
processing metrics (e.g., transactions or throughput), which can be
stored and processed in generic or specific ways (e.g., by statistics
or machine learning algorithms). If automated procedures are applied
over the generation of a benchmarking report, those must be detailed
in the report itself, jointly with their input raw measurements and
output processed data. I.e., any algorithm used in the generation of
processed metrics must be disclosed in the report.
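
For example, if a report ships processed metrics, the applied
algorithm must be disclosed next to the raw input, as in this sketch
(illustrative names and values):

      import statistics

      raw_latency_ms = [0.81, 0.79, 0.85, 1.02, 0.80]  # per-Trial data

      processed = {
          "raw": raw_latency_ms,             # input raw measurements
          "median_ms": statistics.median(raw_latency_ms),
          "p95_ms": sorted(raw_latency_ms)[
              int(0.95 * (len(raw_latency_ms) - 1))],
          # Disclosure of the algorithm used to derive the metrics:
          "algorithm": "median and 95th percentile over Trial "
                       "latencies",
      }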
5. Generic VNF Benchmarking Architectural Framework
A generic VNF benchmarking architectural framework, shown in
Figure 1, establishes the arrangement of essential components and
control interfaces, explained below, that enable the automation of
VNF benchmarking methodologies.
                Figure 1: Generic VNF Benchmarking Setup
Agent -- executes active stimulus using probers, i.e., benchmarking
   tools, to benchmark and collect network and system performance
   metrics.  A single Agent can perform localized benchmarks in
   execution environments (e.g., stress tests on CPU, memory, disk
   I/O) or can generate stimulus traffic whose other end is the VNF
   itself where, for example, one-way latency is evaluated.  The
   interaction among distributed Agents enables the generation and
   collection of end-to-end metrics (e.g., frame loss rate, latency)
   measured from stimulus traffic flowing through a VNF.  An Agent
   can be defined by a physical or virtual network function.
Prober -- defines an abstraction layer for a software or hardware
   tool able to generate stimulus traffic to a VNF or perform stress
   tests on execution environments.  Probers might be specific or
   generic to an execution environment or a VNF.  For an Agent, a
   Prober must provide programmable interfaces for its life cycle
   management, e.g., configuration of operational parameters,
   execution of stimulus, parsing of extracted metrics, and debugging
   options.  Specific Probers might be developed to abstract and to
   realize the description of particular VNF benchmarking
   methodologies.
Monitor -- when possible, is instantiated inside the System Under
   Test, VNF and/or infrastructure (e.g., as a plug-in process in a
   virtualized execution environment), to perform passive monitoring,
   using Listeners, for the extraction of metrics while the Agents'
   stimuli take place.  Monitors observe particular properties
   according to the execution environment and VNF capabilities, i.e.,
   exposed passive monitoring interfaces.  Multiple Listeners can be
   executed at once in synchrony with a Prober's stimulus on a SUT.
   A Monitor can be defined as a virtualized network function.
Listener -- defines one or more software interfaces for the
   extraction of metrics monitored in a target VNF and/or execution
   environment.  A Listener must provide programmable interfaces for
   its life cycle management workflows, e.g., configuration of
   operational parameters, execution of passive monitoring captures,
   parsing of extracted metrics, and debugging options.  Varied
   methods of passive performance monitoring might be implemented as
   a Listener, depending on the interfaces exposed by the VNF and/or
   execution environment.
Manager -- performs (i) the discovery of available Agents/Monitors
   and their respective features (i.e., available Probers/Listeners
   and execution environment capabilities), (ii) the coordination and
   synchronization of activities of Agents and Monitors to perform a
   benchmarking Test, and (iii) the collection, processing and
   aggregation of all VNF benchmarking measurements, correlating the
   VNF stimuli with any monitored SUT metrics.  A Manager executes
   the main configuration, operation, and management actions to
   deliver the VNF benchmarking report.  A Manager can be defined as
   a physical or virtualized network function.
Virtualized Network Function (VNF) -- consists of one or more
software components, so-called VNF components (VNFC), adequate for
performing a network function according to allocated virtual
resources and satisfied requirements in an execution environment.
A VNF can demand particular configurations for benchmarking
specifications, demonstrating variable performance based on
available virtual resources/parameters and configured enhancements
targeting specific technologies (e.g., NUMA, SR-IOV, CPU-Pinning).
Execution Environment -- defines a virtualized and controlled
composition of capabilities necessary for the execution of a VNF.
An execution environment stands as a general purpose level of
virtualization with abstracted resources available for one or more
VNFs.  It can also enable specific technologies, providing viable
settings for enhancing the performance of VNFs.
5.1. Deployment Scenarios

A deployment scenario realizes the actual instantiation of physical
and/or virtual components of the Generic VNF Benchmarking
Architectural Framework needed to enable the execution of an
automated VNF benchmarking methodology.  The following considerations
hold for a deployment scenario:

o  Not all components are mandatory for a Test; they can be disposed
   in varied settings.

o  Components can be composed in a single entity and be defined as
   black or white boxes.  For instance, Manager and Agents could
   jointly define one hardware/software entity to perform a VNF
   benchmarking Test and present measurement results.

o  Monitor can be defined by multiple instances of software
   components, each addressing a VNF or execution environment.

o  Agents can be disposed in varied topology setups, including the
   possibility of multiple input and output ports of a VNF each
   being directly connected to one Agent.

o  All benchmarking components defined in a deployment scenario must
   perform the synchronization of clocks.
6. Methodology

Portability is an intrinsic characteristic of VNFs and allows them to
be deployed in multiple environments.  This enables various
benchmarking setups in varied deployment scenarios.  A VNF
benchmarking methodology must be described in a clear and objective
manner following four basic principles:

o  Comparability: Output of Tests shall be simple to understand and
   process, in a human-readable format, coherent, and easily reusable
   (e.g., as inputs for analytic applications).

o  Repeatability: A Test setup shall be comprehensively defined
   through a flexible design model that can be interpreted and
   executed by the testing platform repeatedly, while supporting
   customization.

o  Configurability: Open interfaces and extensible messaging models
   shall be available between benchmarking components for flexible
   composition of Test descriptors and platform configurations.

o  Interoperability: Tests shall be portable to different
   environments using lightweight components.
   +--------+                ______________
   |        |               |              |
   | VNF-BD |--(defines)--->|  Automated   |
   |        |               | Benchmarking |
   +--------+               |  Methodology |
                            |______________|
                                   |
                              (generates)
                                   |
                                   v
                      +-------------------------+
                      |          VNF-BR         |
                      |  +--------+ +--------+  |
                      |  |        | |        |  |
                      |  | VNF-BD | | VNF-PP |  |
                      |  | {copy} | |        |  |
                      |  +--------+ +--------+  |
                      +-------------------------+

        Figure 2: VNF benchmarking process inputs and outputs
As shown in Figure 2, the outcome of an automated VNF benchmarking
methodology must be captured in a VNF Benchmarking Report (VNF-BR),
consisting of two parts:
VNF Benchmarking Descriptor (VNF-BD) -- contains all required
   definitions and requirements to deploy, configure, execute, and
   reproduce VNF benchmarking tests.  VNF-BDs are defined by the
   developer of a benchmarking methodology and serve as input to the
   benchmarking process, before being included in the generated VNF-
   BR.

VNF Performance Profile (VNF-PP) -- contains all measured metrics
   resulting from the execution of a benchmarking.  Additionally, it
   [...]
   to facilitate comparability of VNF-BRs.
A VNF-BR correlates structural and functional parameters of the VNF-
BD with extracted VNF benchmarking metrics of the obtained VNF-PP.
The content of each part of a VNF-BR is described in the following
sections.
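
A minimal sketch of how a VNF-BR could be serialized (hypothetical
field names; not a normative schema):

      vnf_br = {
          "vnf_bd": {                 # copy of the input descriptor
              "experiments": {"trials": 10, "tests": 6, "method": 1},
              "scenario": {"nodes": [], "links": [], "policies": []},
              "proceedings": {"agents": [], "monitors": []},
          },
          "vnf_pp": {                 # measured output
              "execution_environment": {},
              "measurement_results": [],
          },
      }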
6.1. VNF Benchmarking Descriptor (VNF-BD)

VNF Benchmarking Descriptor (VNF-BD) -- an artifact that specifies a
Method of how to measure a VNF Performance Profile.  The
specification includes structural and functional instructions and
variable parameters at different abstraction levels (e.g., topology
of the deployment scenario, benchmarking target metrics, parameters
of benchmarking components).  A VNF-BD may be specific to a VNF or
applicable to several VNF types.  It can be used to elaborate a VNF
benchmarking deployment scenario aiming at the extraction of
particular VNF performance metrics.

The following items define the VNF-BD contents.
6.1.1. Descriptor Headers

The definition of parameters concerning the descriptor file, e.g.,
its version, identifier, name, author and description.
6.1.2. Target Information

General information addressing the target VNF(s) to which the VNF-BD
is applicable, with references to any specific characteristics, i.e.,
the VNF type, model, version/release, author, vendor, architectural
components, among any other particular features.
6.1.3. Experiments

The specification of the number of executions of Trials, Tests and
the Method.  The execution of a VNF-BD corresponds to the execution
of the specified Method.
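
For instance, the experiments section could be as small as the
following sketch (illustrative field names):

      experiments = {"trials": 10,  # Trials per Test
                     "tests": 3,    # Tests per Method
                     "method": 1}   # executions of the whole Method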
6.1.4. Environment

The details referring to the name, description, and information
associated with the interfaces needed for the orchestration, if
necessary, of the specified VNF-BD scenario.  I.e., it refers to a
specific interface that receives the VNF-BD scenario information and
converts it to the template needed for an orchestration platform.
In this case, the means for the Manager component to interface with
such an orchestration platform must be provided, as well as its
outcome orchestration status information (e.g., management
interfaces of deployed components).
6.1.5. Scenario

This section contains all information needed to describe the
deployment of all involved functional components mandatory for the
execution of the benchmarking Tests addressed by the VNF-BD.
6.1.5.1. Nodes

Information about each component in a benchmarking setup (see
Section 5).  It contains the identification, name, image, role
(i.e., agent, monitor, sut), connection-points and resource
requirements (i.e., allocation of CPU, memory, disk).

The lifecycle specification of a node lists all the workflows that
must be realized on it during a Test.  For instance, main workflows
include: create, start, stop, delete.  Particular workflows can be
specified containing the required parameters and implementation.
Those details must reflect the actions taken on or by a node that
might affect the VNF performance profile.  A sketch of such a node
entry follows.
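
The sketch below shows one possible node entry (illustrative field
names and values; not a normative schema):

      node = {
          "id": "sut-vnf-01",
          "name": "firewall-vnf",
          "image": "vnf-firewall:0.1",  # hypothetical image reference
          "role": "sut",                # one of: agent, monitor, sut
          "connection_points": ["eth0", "eth1"],
          "resources": {"cpu": 2, "memory_mb": 4096, "disk_gb": 10},
          "lifecycle": [
              {"workflow": "create"},
              {"workflow": "start",
               "parameters": {"entrypoint": "/usr/bin/fw --daemon"}},
              {"workflow": "stop"},
              {"workflow": "delete"},
          ],
      }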
6.1.5.2. Links

Links contain information about the data plane links interconnecting
the components of a benchmarking setup.  Links refer to two or more
node connection-points.  A link might be part of a network.
Depending on the link type, the network might be implemented as a
layer 2 mesh, or as directional-oriented traffic forwarding flow
entries.  Links also carry resource requirements, specifying the
minimum bandwidth, latency, and frame loss rate required for the
execution of benchmarking Tests.
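
A link entry could be sketched as follows (illustrative fields):

      link = {
          "id": "ln-agent-sut",
          "type": "E-LINE",            # e.g., point-to-point link
          "connection_points": ["agent-01:eth1", "sut-vnf-01:eth0"],
          "network": "bench-net",      # optional network membership
          "resources": {"bandwidth_mbps": 10000,
                        "latency_ms": 0.1,
                        "frame_loss_rate": 0.0},
      }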
6.1.5.3. Policies

Involves the definition of execution environment policies to run the
Tests.  Policies might specify the (anti-)affinity placement rules
for each component in the topology, min/max allocation of resources,
and specific enabling technologies (e.g., DPDK, SR-IOV, PCIe) needed
for each component.
6.1.6. Proceedings

This information is utilized by the Manager component to execute the
benchmarking Tests.  It consists of Agent(s) and Monitor(s) settings,
detailing their Prober(s)/Listener(s) specification and running
parameters.
Agents: Defines a list containing the Agent(s) needed for the VNF-BD
   Tests.  The information of each Agent contains its host
   environment, making reference to a node specified in the VNF-BD
   scenario (Section 6.1.5).  In addition, each Agent is also defined
   with the configured toolset of Prober(s) and their running
   parameters fulfilled (e.g., stimulus workload, traffic format/
   trace, configurations to enable hardware capabilities, if
   existent).  Each Prober also details the output metrics to be
   extracted from it when running the benchmarking Tests.

Monitors: Defines a list containing the Monitor(s) needed for the
   VNF-BD Tests.  The information of each Monitor contains its host
   environment, making reference to a node specified in the VNF-BD
   scenario (Section 6.1.5) and detailing its placement settings
   (e.g., internal or external with respect to the target VNF and/or
   execution environment).  In addition, each Monitor is also defined
   with the configured toolset of Listener(s) and their running
   parameters fulfilled (e.g., tap interfaces, period of monitoring,
   interval among the measurements).  Each Listener also details the
   output metrics to be extracted from it when running the
   benchmarking Tests.
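
A possible proceedings entry, with illustrative tool names and
parameters (hypothetical; no reference implementation is implied):

      proceedings = {
          "agents": [{
              "host": "agent-01",      # node in the VNF-BD scenario
              "probers": [{
                  "tool": "pktgen",    # example traffic generator
                  "parameters": {"rate_mbps": 1000, "duration_s": 60,
                                 "trace": "flows-64b.pcap"},
                  "metrics": ["throughput_mbps", "latency_ms"],
              }],
          }],
          "monitors": [{
              "host": "monitor-01",
              "placement": "external",  # relative to the target VNF
              "listeners": [{
                  "tool": "host-sampler",  # hypothetical sampler
                  "parameters": {"interval_s": 1, "duration_s": 60},
                  "metrics": ["cpu_percent", "mem_mb"],
              }],
          }],
      }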
6.2. VNF Performance Profile (VNF-PP)

VNF Performance Profile (VNF-PP) -- defines a mapping between
resources allocated to a VNF (e.g., CPU, memory) as well as assigned
configurations (e.g., routing table used by the VNF) and the VNF
performance metrics (e.g., throughput, latency, CPU, memory) obtained
in a benchmarking Test conducted using a VNF-BD.  Logically, packet
processing metrics are presented in a specific format addressing
statistical significance (e.g., median, standard deviation,
percentiles) where a correspondence among VNF parameters and the
delivery of a measured VNF performance exists.

The following items define the VNF-PP contents.
6.2.1. Execution Environment

Execution environment information has to be included in every VNF-PP
and is required to describe the environment on which a benchmarking
Test was actually executed.
Ideally, any person who has a VNF-BD and its complementing VNF-PP
with its execution environment information available must be able to
reproduce the same deployment scenario and VNF benchmarking Tests to
obtain identical VNF-PP measurement results.
If not already defined by the VNF-BD deployment scenario requirements
(Section 6.1.5), for each component in the deployment scenario of the
VNF benchmarking setup, the following topics must be detailed:
Hardware Specs: Contains any information associated with the
   underlying hardware capabilities offered and used by the component
   during the benchmarking Tests.  Examples of such specification
   include allocated CPU architecture, connected NIC specs, allocated
   memory DIMM, etc.  In addition, any information concerning details
   of resource isolation must also be described in this part of the
   VNF-PP.
Software Specs: Contains any information associated with the
   software apparatus offered and used during the benchmarking Tests.
   Examples include versions of operating systems, kernels,
   hypervisors, container image versions, etc.
Optionally, a VNF-PP execution environment might contain references
to an orchestration description document (e.g., a HEAT template) to
clarify technological aspects of the execution environment and any
specific parameters that it might contain for the VNF-PP.
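
An execution environment record might look like the following sketch
(illustrative fields and values):

      execution_environment = {
          "node": "sut-host-01",
          "hardware": {"cpu": "x86_64, 16 cores @ 2.4 GHz",
                       "memory": "64 GB DDR4",
                       "nic": "10GbE, SR-IOV enabled",
                       "isolation": "CPU pinning, cores 4-7"},
          "software": {"os": "Ubuntu 18.04", "kernel": "4.15",
                       "hypervisor": "KVM/QEMU 2.11",
                       "vnf_image": "vnf-firewall:0.1"},
          # Optional reference to an orchestration template:
          "orchestration_template": "scenario-heat.yaml",
      }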
6.2.2. Measurement Results

Measurement results concern the extracted metrics, output of the
benchmarking procedures, classified into:
VNF Processing/Active Metrics: Concerns metrics explicitly defined
   by or extracted from direct interactions of Agents with a VNF.
   Those can be defined as generic metrics related to network packet
   processing (e.g., throughput, latency) or metrics specific to a
   particular VNF (e.g., HTTP confirmed transactions, DNS replies).
VNF Monitored/Passive Metrics: Concerns the Monitors' metrics
   captured from a VNF execution, classified according to the
   virtualization level (e.g., baremetal, VM, container) and
   technology domain (e.g., related to CPU, memory, disk) from where
   they were obtained.
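
Measurement results could then be grouped per Test, for example
(illustrative structure and values):

      measurement_results = [{
          "test": {"vcpus": 2, "rate_mbps": 1000},  # VNF-BD parameters
          "active_metrics": {
              "throughput_mbps": {"median": 938.2, "stdev": 4.1},
              "latency_ms": {"median": 0.82, "p95": 1.05},
          },
          "passive_metrics": {
              "cpu_percent": {"mean": 71.3},
              "mem_mb": {"max": 612},
          },
      }]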
Depending on the configuration of the benchmarking setup and the
planned use cases for the resulting VNF-PPs, measurement results can
be stored as raw data, e.g., time series data about CPU utilization
[...]
6.3. Procedures 6.3. Procedures
The methodology for VNF Benchmarking Automation encompasses the
process defined in Figure 2, i.e., the procedures that translate a
VNF-BD into a VNF-PP, composing a VNF-BR, by the means of the
components specified in Figure 1.  This section details the sequence
of events that realizes such a process.
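Seen from the outside, this process simply pairs inputs and outputs.
A minimal, non-normative Python sketch of that relationship, with a
hypothetical helper name, could look like:

   # Non-normative sketch: a VNF-BR pairs the VNF-BD (inputs) with
   # the VNF-PP (outputs) generated by executing its procedures.
   def compose_vnf_br(vnf_bd: dict, vnf_pp: dict) -> dict:
       return {"vnf_bd": vnf_bd, "vnf_pp": vnf_pp}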
6.3.1.  Pre-Execution
Before the execution of benchmarking Tests, some procedures must be
performed:
1.  A VNF-BD must be defined to be later instantiated into a
deployment scenario and have its Tests executed.  Such a
description must contain all the structural and functional
settings defined in Section 6.1.  At the end of this step, the
complete Method of benchmarking the target VNF is defined.

2.  The VNF target image must be prepared to be benchmarked, having
all its capabilities fully described.  In addition, all the probers
and listeners defined in the VNF-BD must be implemented to realize
the benchmarking Tests (a prober sketch is given after this list).
At the end of this step, the complete set of components of the
benchmarking VNF-BD deployment scenario is defined.

3.  The environment needed for a VNF-BD must be defined to realize
its deployment scenario, in an automated or manual method.  This
step might count on the instantiation of orchestration platforms
and the composition of specific topology descriptors needed by
those platforms to realize the VNF-BD deployment scenario.  At the
end of this step, the whole environment needed to instantiate the
components of a VNF-BD deployment scenario is defined.
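For instance, a prober can be implemented as a thin wrapper around
an existing measurement tool.  The following non-normative Python
sketch, with hypothetical class and field names, wraps the ping
utility to produce a latency metric; probers around other tools
(e.g., iperf) would follow the same pattern.  It assumes a
Linux-style ping summary line.

   # Non-normative prober sketch: wraps "ping" and parses its output
   # into a latency metric suitable for composing a VNF-PP.
   import re
   import subprocess

   class PingProber:
       def __init__(self, target: str, count: int = 10):
           self.target = target
           self.count = count

       def probe(self) -> dict:
           out = subprocess.run(
               ["ping", "-c", str(self.count), self.target],
               capture_output=True, text=True, check=True,
           ).stdout
           # parse the "rtt min/avg/max/mdev = a/b/c/d ms" summary
           match = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+)", out)
           if match is None:
               raise RuntimeError("unexpected ping output format")
           mn, avg, mx, mdev = (float(v) for v in match.groups())
           return {"latency_ms": {"min": mn, "avg": avg,
                                  "max": mx, "mdev": mdev}}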
6.3.2.  Automated Execution
Once all the pre-execution procedures are satisfied, the automated
execution of the Tests specified by the VNF-BD follows:
1.  Upon the parsing of a VNF-BD, the Manager must detect the VNF-BD
variable input fields (e.g., lists of resource values) and compose
all the permutations of parameters (a sketch of this expansion is
given after this list).  For each permutation, the Manager must
elaborate a VNF-BD instance.  Each VNF-BD instance defines a Test,
and it will have its deployment scenario instantiated accordingly.
I.e., the Manager must interface an orchestration platform to
realize the automated instantiation of each deployment scenario
defined by a VNF-BD instance (i.e., a Test).  The Manager must
iterate through all the VNF-BD instances to finish the whole set of
Tests defined by all the permutations of the VNF-BD input fields.

2.  Given a VNF-BD instance, the Manager, using the VNF-BD
environment settings, must interface an orchestration platform,
requesting the deployment of a scenario to realize a Test.  To
perform such a step, the Manager might interface a management
function responsible for properly parsing the deployment scenario
specifications into the orchestration platform interface format.

3.  An orchestration platform must deploy the scenario requested by
the Manager, assuring the requirements and policies specified in
it.  In addition, the orchestration platform must acknowledge the
deployed scenario to the Manager, specifying the management
interfaces of the VNF and the other components in the running
instances for the benchmarking Test.

4.  Agent(s) and Monitor(s) (if existing) and the target VNF must be
configured by the Manager according to the component settings
defined in the VNF-BD instance.  After this step, the whole VNF-BD
Test will be ready to be executed.

5.  The Manager must interface Agent(s) and Monitor(s) (if existing)
via management interfaces to require the execution of the
benchmarking probers (and listeners, if existing), and retrieve the
expected metrics captured during or at the end of each Trial.
I.e., for a single Test, according to the VNF-BD execution
settings, the Manager must guarantee that one or more Trials
realize the required measurements to characterize the performance
behavior of a VNF.

6.  Output measurements from each obtained benchmarking Test, and
its possible Trials, must be collected by the Manager, until all
the Tests are finished.  In the execution settings of the parsed
VNF-BD, the Manager must check the Method repetition, and perform
the whole set of VNF-BD Tests (i.e., from step 1), until all the
specified Methods are finished.

7.  Once all measurements from the VNF-BD execution (Trials, Tests,
and Methods) are collected, the intended metrics, as described in
the VNF-BD, must be parsed, extracted, and combined to create the
corresponding VNF-PP.  The combination of the used VNF-BD and the
generated VNF-PP composes the resulting VNF benchmark report
(VNF-BR).
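To make step 1 concrete, the non-normative Python sketch below shows
one way a Manager could expand the variable input fields of a VNF-BD
into the full set of Test permutations; the function and field names
are hypothetical, not part of any defined VNF-BD schema.

   # Non-normative sketch of step 1: expand VNF-BD variable inputs
   # (e.g., lists of resource values) into per-Test VNF-BD instances.
   import copy
   import itertools

   def expand_vnf_bd(vnf_bd: dict, variable_inputs: dict) -> list:
       """variable_inputs maps a field to the values to permute,
       e.g., {"vcpus": [1, 2, 4], "rate_mbps": [100, 1000]}."""
       fields = sorted(variable_inputs)
       instances = []
       for values in itertools.product(
               *(variable_inputs[f] for f in fields)):
           instance = copy.deepcopy(vnf_bd)
           instance["parameters"] = dict(zip(fields, values))
           instances.append(instance)  # each instance defines one Test
       return instances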
6.3.3.  Post-Execution
After the process of a VNF-BD execution, some automated procedures,
not necessarily mandatory, can be performed to improve the quality
and utility of a VNF-BR:

1.  Archive the raw output contained in the VNF-PP, perform
statistical analysis on it, or train machine learning models with
the collected data (an analysis sketch is given after this list).

2.  Evaluate the analysis output to detect any possible
cause-effect factors and/or intrinsic correlations in the VNF-BR
(e.g., outliers).

3.  Review the input VNF-BD and modify it to realize the proper
extraction of the target VNF metrics based on the performed
analysis.  Iterate over the previous steps until a stable and
representative VNF-BR is composed.
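As an example of item 1, a simple statistical pass over the Trials of
a Test can summarize a metric and flag candidate outliers before the
VNF-BR is considered stable.  The non-normative Python sketch below
uses only the standard library; the three-standard-deviation rule is
merely one possible outlier criterion.

   # Non-normative sketch of a post-execution statistical analysis:
   # summarize a metric across Trials and flag values beyond 3 stdev.
   import statistics

   def summarize(samples: list) -> dict:
       mean = statistics.mean(samples)
       stdev = statistics.stdev(samples) if len(samples) > 1 else 0.0
       outliers = [s for s in samples
                   if stdev and abs(s - mean) > 3 * stdev]
       return {"mean": mean, "stdev": stdev, "outliers": outliers}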
6.4.  Particular Cases
As described in [RFC8172], VNF benchmarking might require changing
and adapting existing benchmarking methodologies.  More
specifically, the following cases need to be considered.
6.4.1.  Capacity
6.4.6.  White Box VNF
A benchmarking setup must be able to define scenarios with and
without monitoring components inside the VNFs and/or the hosting
container or VM.  If no monitoring solution is available from within
the VNFs, the benchmark follows the black-box concept.  If, in
contrast, those additional sources of information from within the
VNF are available, VNF-PPs must be able to handle these additional
VNF performance metrics.
7.  Open Source Reference Implementations

Currently, technical motivating factors in favor of the automation
of VNF benchmarking methodologies comprise: (i) the facility to run
high-fidelity and commodity traffic generators by software; (ii) the
existing means to construct synthetic traffic workloads purely by
software (e.g., handcrafted pcap files); (iii) the increasing
availability of datasets containing actual sources of production
traffic able to be reproduced in benchmarking tests; (iv) the
existence of a myriad of automating tools and open interfaces to
programmatically manage VNFs; (v) the varied set of orchestration
platforms enabling the allocation of resources and instantiation of
VNFs through automated machinery based on well-defined templates;
(vi) the ability to utilize a large tool set of software components
to compose pipelines that mathematically analyze benchmarking
metrics in automated ways.

In simple terms, network softwarization enables automation.  There
are two open source reference implementations that are built to
automate benchmarking of Virtualized Network Functions (VNFs).
7.1.  Gym
The software, named Gym, is a framework for automated benchmarking
of Virtualized Network Functions (VNFs).  It was coded following the
initial ideas presented in a 2015 scientific paper entitled "VBaaS:
VNF Benchmark-as-a-Service" [Rosa-a].  Later, the evolved design and
prototyping ideas were presented at IETF/IRTF meetings, seeking
impact in the NFVRG and BMWG.

Gym was built to receive high-level test descriptors and execute
them to extract VNF profiles, containing measurements of performance
metrics - especially to associate resource allocation (e.g., vCPU)
with packet processing metrics (e.g., throughput) of VNFs.  From the
original research ideas [Rosa-a], such output profiles might be used
by orchestrator functions to perform VNF lifecycle tasks (e.g.,
deployment, maintenance, tear-down).
The guiding principles proposed to design and build Gym, elaborated
in [Rosa-b], can be composed in multiple practical ways for
different VNF testing purposes:

o  Comparability:  Output of tests shall be simple to understand and
   process, in a human-readable format, coherent, and easily
   reusable (e.g., as inputs for analytic applications).

o  Repeatability:  Test setup shall be comprehensively defined
   through a flexible design model that can be interpreted and
   executed by the testing platform repeatedly, while supporting
   customization.

o  Configurability:  Open interfaces and extensible messaging models
   shall be available between components for flexible composition of
   test descriptors and platform configurations.

o  Interoperability:  Tests shall be portable to different
   environments using lightweight components.
In [Rosa-b] Gym was utilized to benchmark a decomposed IP Multimedia
Subsystem VNF.  And in [Rosa-c], a virtual switch (Open vSwitch -
OVS) was the target VNF of Gym for the analysis of VNF benchmarking
automation.  Such articles validated Gym as a prominent open source
reference implementation for VNF benchmarking tests.  Those articles
also set important contributions, such as a discussion of the
lessons learned and of the overall NFV performance testing
landscape, including automation.
Gym stands as one open source reference implementation that realizes
the VNF benchmarking methodologies presented in this document.  Gym
is released as an open source tool under the Apache 2.0 license
[gym].
7.2.  tng-bench
Another software that focuses on implementing a framework to
benchmark VNFs is the "5GTANGO VNF/NS Benchmarking Framework", also
called "tng-bench" (previously "son-profile"), which was developed
as part of the two European Union H2020 projects SONATA NFV and
5GTANGO [tango].  Its initial ideas were presented in [Peu-a] and
the system design of the end-to-end prototype was presented in
[Peu-b].

Tng-bench aims to be a framework for the end-to-end automation of
VNF benchmarking processes.  Its goal is to automate the
benchmarking process.
Experiments were used to test tng-bench for scenarios in which
composed VNFs, consisting of multiple VNF components (VNFCs), have
to be benchmarked.  The presented results highlight the need to
benchmark composed VNFs in end-to-end scenarios, rather than only
benchmarking each individual component in isolation, to produce
meaningful VNF-PPs for the complete VNF.
Tng-bench is actively developed and released as an open source tool
under the Apache 2.0 license [tng-bench].
8.  Security Considerations
Benchmarking tests described in this document are limited to the
performance characterization of VNFs in a lab environment with an
isolated network.
The benchmarking network topology will be an independent test setup
and MUST NOT be connected to devices that may forward the test
traffic into a production network, or misroute traffic to the test
management network.
Special capabilities SHOULD NOT exist in the VNF benchmarking
deployment scenario specifically for benchmarking purposes.  Any
implications for network security arising from the VNF benchmarking
deployment scenario SHOULD be identical in the lab and in production
networks.
9.  IANA Considerations

This document does not require any IANA actions.
10.  Acknowledgement

The authors would like to thank the support of Ericsson Research,
Brazil.  Parts of this work have received funding from the European
Union's Horizon 2020 research and innovation programme under grant
agreement No. H2020-ICT-2016-2 761493 (5GTANGO: https://5gtango.eu).
11.  References

11.1.  Normative References
[ETS14a]  ETSI, "Architectural Framework - ETSI GS NFV 002 V1.2.1",
          Dec 2014, <http://www.etsi.org/deliver/etsi_gs/
          NFV/001_099/002/01.02.01_60/gs_NFV002v010201p.pdf>.

[ETS14b]  ETSI, "Terminology for Main Concepts in NFV - ETSI GS NFV
          003 V1.2.1", Dec 2014,
          <http://www.etsi.org/deliver/etsi_gs/NFV/001_099
          /003/01.02.01_60/gs_NFV003v010201p.pdf>.
[RFC1242]  S. Bradner, "Benchmarking Terminology for Network
           Interconnection Devices", July 1991,
           <https://www.rfc-editor.org/info/rfc1242>.
[RFC8172]  A. Morton, "Considerations for Benchmarking Virtual
           Network Functions and Their Infrastructure", July 2017,
           <https://www.rfc-editor.org/info/rfc8172>.

[RFC8204]  M. Tahhan, B. O'Mahony, A. Morton, "Benchmarking Virtual
           Switches in the Open Platform for NFV (OPNFV)", September
           2017, <https://www.rfc-editor.org/info/rfc8204>.
11.2.  Informative References
[gym]      "Gym Framework Source Code",
           <https://github.com/intrig-unicamp/gym>.
[Peu-a]    M. Peuster, H. Karl, "Understand Your Chains: Towards
           Performance Profile-based Network Service Management",
           Fifth European Workshop on Software Defined Networks
           (EWSDN), 2016,
           <http://ieeexplore.ieee.org/document/7956044/>.

[Peu-b]    M. Peuster, H. Karl, "Profile Your Chains, Not Functions:
           Automated Network Service Profiling in DevOps
           Environments", IEEE Conference on Network Function
           Virtualization and Software Defined Networks (NFV-SDN),
           2017.