Benchmarking Working Group                      M. Konstantynowicz, Ed.
Internet-Draft                                             P. Mikus, Ed.
Intended status: Informational                             Cisco Systems
Expires: January 9, 2020                                   July 08, 2019


                    NFV Service Density Benchmarking
                 draft-mkonstan-nf-service-density-01

Abstract

Network Function Virtualization (NFV) system designers and operators
continuously grapple with the problem of qualifying performance of
network services realised with software Network Functions (NF)
running on Commercial-Off-The-Shelf (COTS) servers.  One of the main
challenges is getting repeatable and portable benchmarking results
and using them to derive a deterministic operating range that is
worthy of production deployment.

skipping to change at page 1, line 46

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 9, 2020.

Copyright Notice

Copyright (c) 2019 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.

Table of Contents

1.  Terminology
2.  Motivation
  2.1.  Problem Description
  2.2.  Proposed Solution
3.  NFV Service
  3.1.  Topology
  3.2.  Configuration
  3.3.  Packet Path(s)
4.  Virtualization Technology
5.  Host Networking
6.  NFV Service Density Matrix
7.  Compute Resource Allocation
8.  NFV Service Data-Plane Benchmarking
9.  Sample NFV Service Density Benchmarks
  9.1.  Interpreting the Sample Results
  9.2.  Benchmarking MRR Throughput
  9.3.  VNF Service Chain
  9.4.  CNF Service Chain
  9.5.  CNF Service Pipeline
  9.6.  Sample Results: FD.io CSIT
  9.7.  Sample Results: CNCF/CNFs
  9.8.  Sample Results: OPNFV NFVbench
10. IANA Considerations
11. Security Considerations
12. Acknowledgements
13. References
  13.1.  Normative References
  13.2.  Informative References
Authors' Addresses

1.  Terminology

o NFV: Network Function Virtualization, a general industry term
describing network functionality implemented in software.
o NFV service: a software based network service realized by a
topology of interconnected constituent software network function
applications.
o NFV service instance: a single instantiation of NFV service.
o Data-plane optimized software: any software with dedicated threads
handling data-plane packet processing e.g. FD.io VPP (Vector Packet
Processor), OVS-DPDK.
o Packet Loss Ratio (PLR): ratio of packets lost to packets
transmitted over the test trial duration, calculated using the
formula: PLR = ( pkts_transmitted - pkts_received ) /
pkts_transmitted.  For bi-directional throughput tests the aggregate
PLR is calculated based on the aggregate number of packets
transmitted and received.  (See the illustrative sketch following
this list.)
o Packet Throughput Rate: maximum offered packet load that the
DUT/SUT forwards within the specified Packet Loss Ratio (PLR).  In many
cases the rate depends on the frame size processed by DUT/SUT.
Hence packet throughput rate MUST be quoted with specific frame
size as received by DUT/SUT during the measurement. For bi-
directional tests, packet throughput rate should be reported as
aggregate for both directions. Measured in packets-per-second
(pps) or frames-per-second (fps), equivalent metrics.
o Non Drop Rate (NDR): maximum packet/bandwidth throughput rate
sustained by DUT/SUT at a PLR equal to zero (zero packet loss) specific
to tested frame size(s). MUST be quoted with specific packet size
as received by DUT/SUT during the measurement. Packet NDR
measured in packets-per-second (or fps), bandwidth NDR expressed
in bits-per-second (bps).
o Partial Drop Rate (PDR): maximum packet/bandwidth throughput rate
sustained by DUT/SUT at a PLR greater than zero (non-zero packet
loss) specific to tested frame size(s). MUST be quoted with
specific packet size as received by DUT/SUT during the
measurement. Packet PDR measured in packets-per-second (or fps),
bandwidth PDR expressed in bits-per-second (bps).
o Maximum Receive Rate (MRR): packet/bandwidth rate regardless of
PLR sustained by DUT/SUT under specified Maximum Transmit Rate
(MTR) packet load offered by traffic generator. MUST be quoted
with both specific packet size and MTR as received by DUT/SUT
during the measurement. Packet MRR measured in packets-per-second
(or fps), bandwidth MRR expressed in bits-per-second (bps).
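
As an informative illustration only (not part of the benchmarking
methodology), the Python sketch below shows how the metrics defined
above relate to the raw counters of a single fixed-rate trial.  The
function and variable names, the example PDR loss threshold and the
example numbers are hypothetical.

   # Illustrative sketch: relate one fixed-rate trial to the metrics
   # defined above.  Names, thresholds and numbers are hypothetical.
   def packet_loss_ratio(pkts_tx, pkts_rx):
       # PLR = ( pkts_transmitted - pkts_received ) / pkts_transmitted
       return (pkts_tx - pkts_rx) / pkts_tx

   def trial_metrics(offered_rate_pps, pkts_tx, pkts_rx,
                     pdr_plr_limit=0.005):
       plr = packet_loss_ratio(pkts_tx, pkts_rx)
       return {
           "plr": plr,
           # MRR: receive rate sustained at the offered load (MTR).
           "mrr_pps": offered_rate_pps * (1.0 - plr),
           # The offered rate is a valid NDR lower bound only at zero
           # loss, and a valid PDR lower bound only within the limit.
           "ndr_lower_bound": plr == 0.0,
           "pdr_lower_bound": plr <= pdr_plr_limit,
       }

   # Example: 30 s bi-directional trial at aggregate MTR of 29.76 Mpps.
   print(trial_metrics(29_760_000, pkts_tx=892_800_000,
                       pkts_rx=892_797_000))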

2.  Motivation

2.1.  Problem Description

Network Function Virtualization (NFV) system designers and operators
continuously grapple with the problem of qualifying performance of
network services realised with software Network Functions (NF)
running on Commercial-Off-The-Shelf (COTS) servers.  One of the main
challenges is getting repeatable and portable benchmarking results

skipping to change at page 4, line 36

2. How to choose the best compute resource allocation scheme to
maximise service yield per node?

3. How do different NF applications compare from the service density
perspective?

4. How do the virtualisation technologies compare e.g. Virtual
Machines, Containers?

Getting answers to these points should allow designers to make
data-based decisions about the NFV technology and service design
best suited to meet the requirements of their use cases.  The
benchmarking data obtained in this way would also aid in selecting
the most appropriate NFV infrastructure design and platform, and
enable more accurate capacity planning, an important element for the
commercial viability of an NFV service.

2.2.  Proposed Solution

The primary goal of the proposed benchmarking methodology is to focus
on NFV technologies used to construct NFV services.  More
specifically to i) measure packet data-plane performance of multiple
NFV service instances while running them at varied service "packing"
densities on a single server and ii) quantify the impact of using
multiple NFs to construct each NFV service instance and introducing
multiple packet processing hops and links on each packet path.

skipping to change at page 5, line 32

|        Host Networking        |
+-------------------------------+

Figure 1. NFV software technology stack.

The proposed methodology is complementary to existing NFV
benchmarking industry efforts focusing on vSwitch benchmarking
[RFC8204], [TST009] and extends the benchmarking scope to NFV
services.

This document does not describe a complete benchmarking methodology;
instead, it focuses on the system under test configuration.  Each of
the compute node configurations identified in this document is to be
evaluated for NFV service data-plane performance using existing
and/or emerging network benchmarking standards.  This may include
methodologies specified in [RFC2544], [TST009],
[draft-vpolak-mkonstan-bmwg-mlrsearch] and/or
[draft-vpolak-bmwg-plrsearch].

3.  NFV Service

It is assumed that each NFV service instance is built of one or more
constituent NFs and is described by: topology, configuration and
resulting packet path(s).

Each set of NFs forms an independent NFV service instance, with
multiple sets present in the host.

skipping to change at page 6, line 33

1. Chain topology: a set of NFs connect to the host data-plane with
a minimum of two virtual interfaces each, enabling the host
data-plane to facilitate NF-to-NF service chain forwarding and
provide connectivity with the external network.

2. Pipeline topology: a set of NFs connect to each other in a line
fashion, with the edge NFs homed to the host data-plane.  The host
data-plane provides connectivity with the external network.

In both cases multiple NFV service instances run in parallel.  Both
topologies are shown in Figures 2 and 3 below.

NF chain topology:

[Figure 2 shows multiple service instances (Service-1 .. Service-m)
hosted in parallel on one compute node.  Each instance consists of
NFs S#NF1..S#NFn; every NF attaches to the shared host data-plane
with a pair of virtual interfaces, adjacent NFs are interconnected
through chain segments (CS) across the host data-plane, and the host
data-plane also connects to the physical NIC interfaces providing
connectivity to the external network.]

Figure 2. NF chain topology forming a service instance.

NF pipeline topology:

[Figure 3 shows multiple service instances (Service-1 .. Service-m)
hosted in parallel on one compute node.  Each instance consists of
NFs S#NF1..S#NFn connected to each other in a line: adjacent NFs are
linked directly through virtual interfaces, only the edge NFs attach
to the host data-plane, and the host data-plane connects to the
physical NIC interfaces providing connectivity to the external
network.]

skipping to change at page 8, line 48

including Layer-2, Layer-3 and/or Layer-4-to-7 processing as
appropriate to specific NF and NFV service design.  L2 sub-interface
encapsulations (e.g. 802.1q, 802.1ad) and IP overlay encapsulation
(e.g. VXLAN, IPSec, GRE) may be represented here too as appropriate,
although in most cases they are used as external encapsulation and
handled by host networking data-plane.

NFV configuration determines logical network connectivity, that is
Layer-2 and/or IPv4/IPv6 switching/routing modes, as well as NFV
service specific aspects.  In the context of the NFV density
benchmarking methodology, the initial focus is on logical network
connectivity between the NFs and not on NFV service specific
configurations.  NF specific functionality is emulated using
IPv4/IPv6 routing.

Building on the two identified NFV topologies, two common NFV
configurations are considered:

1. Chain configuration:

* Relies on chain topology to form NFV service chains.

* NF packet forwarding designs:

skipping to change at page 11, line 8

* Host data-plane is involved in packet forwarding operations
between NIC interfaces and edge NFs only.

Both packet paths are shown in Figures 4 and 5 below.

Snake packet path:

[Figure 4 shows the snake packet path through the NF chain topology:
packets enter from the traffic generator through the physical NIC
interfaces, are switched by the host data-plane to the first NF of a
service instance, return to the host data-plane after each NF to be
forwarded to the next NF in the chain, and finally exit through the
physical NIC interfaces back to the traffic generator.]

Figure 4. Snake packet path thru NF chain topology.

Pipeline packet path:

[Figure 5 shows the pipeline packet path through the NF pipeline
topology: packets enter from the traffic generator through the
physical NIC interfaces, are switched by the host data-plane to one
edge NF of a service instance, traverse all NFs of the pipeline
directly over NF-to-NF virtual interfaces, and exit via the other
edge NF and the host data-plane back to the traffic generator.]

Figure 5. Pipeline packet path thru NF pipeline topology.

In all cases packets enter the NFV system via shared physical NIC
interfaces controlled by the shared host data-plane, are then
associated with a specific NFV service (based on a service
discriminator) and subsequently are cross-connected/switched/routed
by the host data-plane to and through NF topologies per one of the
above listed schemes.
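
As an informative aid (an assumption-based sketch, not part of the
methodology), the short Python fragment below counts the packet
processing hops implied by the two packet paths for a service
instance of n NFs, as can be read off Figures 4 and 5; all names are
illustrative.

   # Illustrative sketch: per-direction packet processing hops for a
   # service instance of n_nfs NFs, as implied by Figures 4 and 5.
   def packet_path_hops(path, n_nfs):
       if path == "snake":
           # Host data-plane forwards NIC->NF1, between every NF pair,
           # and NFn->NIC; every NF is traversed once.
           return {"host_dataplane_hops": n_nfs + 1, "nf_hops": n_nfs}
       if path == "pipeline":
           # Host data-plane is involved only at the two pipeline edges.
           return {"host_dataplane_hops": 2, "nf_hops": n_nfs}
       raise ValueError("unknown packet path: " + path)

   print(packet_path_hops("snake", 10))     # 11 data-plane hops, 10 NF hops
   print(packet_path_hops("pipeline", 10))  #  2 data-plane hops, 10 NF hops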

4.  Virtualization Technology

NFV services are built of composite isolated NFs, with
virtualisation technology providing the workload isolation.  The
following virtualisation technology types are considered for NFV
service density benchmarking:

1. Virtual Machines (VMs)
performance, NFs and host data-plane software require direct access performance, NFs and host data-plane software require direct access
to critical compute resources. Due to a shared nature of all to critical compute resources. Due to a shared nature of all
resources on a compute node, a clearly defined resource allocation resources on a compute node, a clearly defined resource allocation
scheme is defined in the next section to address this. scheme is defined in the next section to address this.
In each tested configuration host data-plane is a gateway between the In each tested configuration host data-plane is a gateway between the
external network and the internal NFV network topologies. Offered external network and the internal NFV network topologies. Offered
packet load is generated and received by an external traffic packet load is generated and received by an external traffic
generator per usual benchmarking practice. generator per usual benchmarking practice.

It is proposed that benchmarks are done with the offered packet load
distributed equally across all configured NFV service instances.
This approach should provide representative benchmarking data for
each tested topology and configuration, and a good first estimate of
the maximum performance required for capacity planning.

The following sections specify compute resource allocation, followed
by examples of applying the NFV service density methodology to VNF
and CNF benchmarking use cases.

7.  Compute Resource Allocation

Performance optimized NF and host data-plane software threads
require timely execution of packet processing instructions and are
very sensitive to any interruptions (or stalls) to this execution
e.g. cpu

skipping to change at page 19, line 20

002     3     6    12    18    24    30
004     6    12    24    36    48    60
006     9    18    36    54    72    90
008    12    24    48    72    96   120
010    15    30    60    90   120   150

RowIndex: Number of NFV Service Instances, 1..10.
ColumnIndex: Number of NFs per NFV Service Instance, 1..10.
Value: Total number of physical processor cores used for NFs.
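
The following Python sketch (illustrative only) reproduces the
matrix rows shown above.  It assumes 1.5 physical cores per NF (one
full core for the NF data-plane worker plus one hyper-thread for the
NF main thread), an assumption that is consistent with the excerpted
values but is not mandated by this document.

   # Illustrative sketch: total physical cores used by NFs, assuming
   # 1.5 physical cores per NF (consistent with the excerpt above).
   CORES_PER_NF = 1.5

   def total_nf_cores(num_instances, nfs_per_instance):
       # Value = RowIndex * ColumnIndex * cores-per-NF
       return int(num_instances * nfs_per_instance * CORES_PER_NF)

   for num_instances in (2, 4, 6, 8, 10):          # RowIndex excerpt
       row = [total_nf_cores(num_instances, nfs)
              for nfs in (1, 2, 4, 6, 8, 10)]      # ColumnIndex excerpt
       print("%03d" % num_instances, *row)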

8.  NFV Service Data-Plane Benchmarking

NF service density scenarios should have their data-plane performance
benchmarked using existing and/or emerging network benchmarking
standards as noted earlier.

The following metrics should be measured (or calculated) and
reported:

o Packet throughput rate (packets-per-second)
* Specific to tested packet size or packet sequence (e.g. some
type of packet size mix sent in recurrent sequence).
* Applicable types of throughput rate: NDR, PDR, MRR.
o (Calculated) Bandwidth throughput rate (bits-per-second)
corresponding to the measured packet throughput rate.
o Packet one-way latency (seconds)
* Measured at different packet throughput load levels, e.g. light,
medium, heavy.

The listed metrics should be itemized per service instance, and
latency additionally per direction (e.g. forward/reverse).
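
As an informative illustration, the sketch below shows one possible
per-service-instance record combining the listed metrics.  The
bandwidth conversion (frame size x 8 bits per byte, with an optional
20B per-frame L1 overhead for preamble, SFD and inter-frame gap) and
all field names are assumptions, not a required reporting format.

   # Illustrative sketch: one possible per-service-instance report
   # record; field names and the bandwidth conversion are assumptions.
   def bandwidth_bps(rate_pps, frame_size_bytes, l1_overhead=False):
       overhead = 20 if l1_overhead else 0   # preamble + SFD + IPG
       return rate_pps * (frame_size_bytes + overhead) * 8

   def instance_record(instance_id, frame_size_bytes, rate_pps,
                       fwd_lat_usec, rev_lat_usec, rate_type="MRR"):
       return {
           "instance": instance_id,
           "frame_size_B": frame_size_bytes,
           "rate_type": rate_type,           # NDR, PDR or MRR
           "rate_pps": rate_pps,
           "rate_l2_bps": bandwidth_bps(rate_pps, frame_size_bytes),
           "latency_usec": {"forward": fwd_lat_usec,
                            "reverse": rev_lat_usec},
       }

   print(instance_record(1, 64, 3_200_000, 30.5, 31.2))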

9.  Sample NFV Service Density Benchmarks

To illustrate the applicability of the defined NFV service density
methodology, the following sections describe three sets of NFV
service topologies and configurations that have been benchmarked in
open-source: i) in [LFN-FDio-CSIT], a continuous testing and
data-plane benchmarking project, ii) as part of the CNCF CNF Testbed
initiative [CNCF-CNF-Testbed] and iii) in the OPNFV NFVbench project
[NFVbench].

In the first two cases each NFV service instance definition is based
on the same set of NF applications, and varies only by network
addressing configuration to emulate a multi-tenant operating
environment.

The OPNFV NFVbench project focuses on benchmarking actual production
deployments that are aligned with OPNFV specifications.

9.1.  Interpreting the Sample Results

TODO: How to interpret and avoid misreading the included results,
and how to avoid the trap of using these results to draw generalized
conclusions about the performance of different virtualization
technologies, e.g. VMs and Containers, irrespective of deployment
scenarios and of which VNFs and CNFs are in actual use.

9.2.  Benchmarking MRR Throughput

Initial NFV density throughput benchmarks have been performed using
the Maximum Receive Rate (MRR) test methodology defined and used in
FD.io CSIT.

MRR tests measure the packet forwarding rate under a specified
Maximum Transmit Rate (MTR) packet load offered by the traffic
generator over a set trial duration, regardless of packet loss ratio
(PLR).  MTR for a specified Ethernet frame size was set to the
bi-directional link rate, 2x 10GbE in the referred results.
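
As a worked example (illustrative only), the sketch below derives
the MTR from the link rate and the Ethernet frame size, using the
standard 20B of per-frame L1 overhead (preamble, SFD and inter-frame
gap):

   # Illustrative sketch: bi-directional MTR from link rate and frame
   # size; 20B L1 overhead = preamble + SFD (8B) + inter-frame gap (12B).
   L1_OVERHEAD_BYTES = 20

   def max_transmit_rate_pps(link_rate_bps, frame_size_bytes):
       return link_rate_bps / ((frame_size_bytes + L1_OVERHEAD_BYTES) * 8)

   # 2x 10GbE with 64B frames: ~14.88 Mpps per direction,
   # ~29.76 Mpps aggregate bi-directional MTR.
   per_direction = max_transmit_rate_pps(10e9, 64)
   print(round(per_direction), round(2 * per_direction))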

Tests were conducted with two traffic profiles: i) continuous stream
of 64B frames, ii) continuous stream of an IMIX sequence of (7x 64B,
4x 570B, 1x 1518B); all sizes are L2 untagged Ethernet.

NFV service topologies tested include: VNF service chains, CNF
service chains and CNF service pipelines.

9.3.  VNF Service Chain

VNF Service Chain (VSC) topology is tested with the KVM hypervisor
(Ubuntu 18.04-LTS), with NFV service instances consisting of NFs
running in VMs (VNFs).  Host data-plane is provided by the FD.io VPP
vswitch.  Virtual interfaces are virtio-vhostuser.  Snake forwarding
packet path is tested using the [TRex] traffic generator, see
Figure 6 below.

[Figure 6 shows the VNF service chain test setup: VNFs running in
VMs connect via virtio-vhostuser virtual interfaces to the FD.io VPP
vswitch on the host compute node, and the vswitch forwards packets
along the snake path between the physical NIC interfaces attached to
the TRex traffic generator.]

Figure 6. VNF service chain test setup.

9.4.  CNF Service Chain

CNF Service Chain (CSC) topology is tested with Docker containers
(Ubuntu 18.04-LTS), with NFV service instances consisting of NFs
running in Containers (CNFs).  Host data-plane is provided by the
FD.io VPP vswitch.  Virtual interfaces are memif.  Snake forwarding
packet path is tested using the [TRex] traffic generator, see
Figure 7 below.

[Figure 7 shows the CNF service chain test setup: CNFs running in
Docker containers connect via memif virtual interfaces to the FD.io
VPP vswitch on the host compute node, and the vswitch forwards
packets along the snake path between the physical NIC interfaces
attached to the TRex traffic generator.]

Figure 7. CNF service chain test setup.

9.5.  CNF Service Pipeline

CNF Service Pipeline (CSP) topology is tested with Docker containers
(Ubuntu 18.04-LTS), with NFV service instances consisting of NFs
running in Containers (CNFs).  Host data-plane is provided by the
FD.io VPP vswitch.  Virtual interfaces are memif.  Pipeline
forwarding packet path is tested using the [TRex] traffic generator,
see Figure 8 below.

[Figure 8 shows the CNF service pipeline test setup: CNFs running in
Docker containers are connected to each other directly via memif
virtual interfaces, the edge CNFs connect to the FD.io VPP vswitch,
and the vswitch forwards packets between the physical NIC interfaces
attached to the TRex traffic generator.]

Figure 8. CNF service pipeline test setup.

9.6.  Sample Results: FD.io CSIT

The FD.io CSIT project introduced NFV density benchmarking in
release CSIT-1904 and published results for the following NFV
service topologies and configurations:

1. VNF Service Chains

* VNF: DPDK-L3FWD v19.02
+ IPv4 forwarding
+ NF-1c
* vSwitch: VPP v19.04-release
+ L2 MAC switching
+ vSwitch-1c, vSwitch-2c
* frame sizes: 64B, IMIX

2. CNF Service Chains

* CNF: VPP v19.04-release
+ IPv4 routing
+ NF-1c
* vSwitch: VPP v19.04-release
+ L2 MAC switching
+ vSwitch-1c, vSwitch-2c
* frame sizes: 64B, IMIX

3. CNF Service Pipelines

* CNF: VPP v19.04-release
+ IPv4 routing
+ NF-1c
* vSwitch: VPP v19.04-release
+ L2 MAC switching
+ vSwitch-1c, vSwitch-2c
* frame sizes: 64B, IMIX

More information is available in the FD.io CSIT-1904 report, with
specific references listed below:

o Testbed: [CSIT-1904-testbed-2n-skx]
o Test environment: [CSIT-1904-test-enviroment]
o Methodology: [CSIT-1904-nfv-density-methodology]
o Results: [CSIT-1904-nfv-density-results]

9.7.  Sample Results: CNCF/CNFs

The CNCF CI team introduced a CNF testbed initiative focusing on
benchmarking NFV density with open-source network applications
running as VNFs and CNFs.  The following NFV service topologies and
configurations have been tested to date:

1. VNF Service Chains

* VNF: VPP v18.10-release

skipping to change at page 26, line 13

+ vSwitch-1c, vSwitch-2c
* frame sizes: 64B, IMIX

More information is available in the CNCF CNF Testbed github, with
summary test results presented in a summary markdown file,
references listed below:

o Results: [CNCF-CNF-Testbed-Results]

9.8.  Sample Results: OPNFV NFVbench

TODO: Add a short NFVbench-based test description and an NFVbench
sweep chart with a single VM per service instance: Y-axis packet
throughput rate or bandwidth throughput rate, X-axis number of
concurrent service instances.

10.  IANA Considerations

No requests of IANA.

11.  Security Considerations

Benchmarking activities as described in this memo are limited to
technology characterization of a DUT/SUT using controlled stimuli in
a laboratory environment, with dedicated address space and the
constraints specified in the sections above.

The benchmarking network topology will be an independent test setup
and MUST NOT be connected to devices that may forward the test
traffic into a production network or misroute traffic to the test
management network.

Further, benchmarking is performed on a "black-box" basis, relying
solely on measurements observable external to the DUT/SUT.

Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
benchmarking purposes.  Any implications for network security arising
from the DUT/SUT SHOULD be identical in the lab and in production
networks.

12.  Acknowledgements

Thanks to Vratko Polak of the FD.io CSIT project and Michael
Pedersen of the CNCF Testbed initiative for their contributions and
useful suggestions.  Extended thanks to Alec Hothan of the OPNFV
NFVbench project for numerous comments, suggestions and references
to his team's work in the OPNFV NFVbench project.

13.  References

13.1.  Normative References

[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
Network Interconnect Devices", RFC 2544, DOI 10.17487/RFC2544,
March 1999, <https://www.rfc-editor.org/info/rfc2544>.

[RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017,
<https://www.rfc-editor.org/info/rfc8174>.

13.2.  Informative References

[BSDP] "Benchmarking Software Data Planes Intel(R) Xeon(R) Skylake
vs. Broadwell", March 2019, <https://fd.io/wp-content/uploads/
sites/34/2019/03/benchmarking_sw_data_planes_skx_bdx_mar07_2019.pdf>.

[CNCF-CNF-Testbed] "Cloud native Network Function (CNF) Testbed",
July 2019, <https://github.com/cncf/cnf-testbed/>.

[CNCF-CNF-Testbed-Results] "CNCF CNF Testbed: NFV Service Density
Benchmarking", December 2018, <https://github.com/cncf/cnf-testbed/
blob/master/comparison/doc/cncf-cnfs-results-summary.md>.

[CSIT-1904-nfv-density-methodology] "FD.io CSIT Test Methodology:
NFV Service Density", June 2019, <https://docs.fd.io/csit/rls1904/
report/introduction/methodology_nfv_service_density.html>.

[CSIT-1904-nfv-density-results] "FD.io CSIT Test Results: NFV
Service Density", June 2019, <https://docs.fd.io/csit/rls1904/
report/vpp_performance_tests/nf_service_density/index.html>.

[CSIT-1904-test-enviroment] "FD.io CSIT Test Environment", June
2019, <https://docs.fd.io/csit/rls1904/report/
vpp_performance_tests/test_environment.html>.

[CSIT-1904-testbed-2n-skx] "FD.io CSIT Test Bed", June 2019,
<https://docs.fd.io/csit/rls1904/report/introduction/
physical_testbeds.html#node-xeon-skylake-2n-skx>.

[draft-vpolak-bmwg-plrsearch] "Probabilistic Loss Ratio Search for
Packet Throughput (PLRsearch)", July 2019,
<https://tools.ietf.org/html/draft-vpolak-bmwg-plrsearch>.

[draft-vpolak-mkonstan-bmwg-mlrsearch] "Multiple Loss Ratio Search
for Packet Throughput (MLRsearch)", July 2019,
<https://tools.ietf.org/html/draft-vpolak-mkonstan-bmwg-mlrsearch>.

[LFN-FDio-CSIT] "Fast Data io, Continuous System Integration and
Testing Project", July 2019, <https://wiki.fd.io/view/CSIT>.

[NFVbench]
"NFVbench Data Plane Performance Measurement Features",
July 2019, <https://opnfv-
nfvbench.readthedocs.io/en/latest/testing/user/userguide/
readme.html>.

[RFC8204] Tahhan, M., O'Mahony, B., and A. Morton, "Benchmarking
Virtual Switches in the Open Platform for NFV (OPNFV)", RFC 8204,
DOI 10.17487/RFC8204, September 2017,
<https://www.rfc-editor.org/info/rfc8204>.

[TRex] "TRex Low-Cost, High-Speed Stateful Traffic Generator", July
2019, <https://github.com/cisco-system-traffic-generator/trex-core>.

[TST009] "ETSI GS NFV-TST 009 V3.1.1 (2018-10), Network Functions
Virtualisation (NFV) Release 3; Testing; Specification of Networking
Benchmarks and Measurement Methods for NFVI", October 2018,
<https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/009/
03.01.01_60/gs_NFV-TST009v030101p.pdf>.

Authors' Addresses