Network Working Group                                          A. Morton
Internet-Draft                                                  AT&T Labs
Intended status: Informational                        September 23, 2015
Expires: March 26, 2016


  Considerations for Benchmarking Virtual Network Functions and Their
                             Infrastructure
                   draft-ietf-bmwg-virtual-net-01
Abstract

The Benchmarking Methodology Working Group has traditionally conducted
laboratory characterization of dedicated physical implementations of
internetworking functions. This memo investigates additional
considerations when network functions are virtualized and performed
in commodity off-the-shelf hardware.
Version NOTES:

Addressed Ramki Krishnan's comments on section 4.5, power, see that
section (7/27 message to the list). Addressed Saurabh
Chattopadhyay's 7/24 comments on VNF resources and other resource
conditions and their effect on benchmarking, see section 3.4.
Addressed Marius Georgescu's 7/17 comments on the list (sections 4.3
and 4.4).

AND, comments from the extended discussion during the IETF-93 BMWG
session:

Section 4.2: VNF footprint and auxiliary metrics (Maryam Tahhan);
Section 4.3: effect of verification on metrics (Ramki Krishnan);
Section 4.4: auxiliary metrics in the Matrix (Maryam Tahhan, Scott
Bradner, others)
Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
Status of This Memo

This Internet-Draft is submitted in full conformance with the

skipping to change at page 2, line 12
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on March 26, 2016.
Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . .  3
2. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . .  4
3. Considerations for Hardware and Testing . . . . . . . . . . .  4
   3.1. Hardware Components . . . . . . . . . . . . . . . . . .  5
   3.2. Configuration Parameters . . . . . . . . . . . . . . . .  5
   3.3. Testing Strategies . . . . . . . . . . . . . . . . . . .  6
   3.4. Attention to Shared Resources . . . . . . . . . . . . .  7
4. Benchmarking Considerations . . . . . . . . . . . . . . . . .  7
   4.1. Comparison with Physical Network Functions . . . . . . .  8
   4.2. Continued Emphasis on Black-Box Benchmarks . . . . . . .  8
   4.3. New Benchmarks and Related Metrics . . . . . . . . . . .  9
   4.4. Assessment of Benchmark Coverage . . . . . . . . . . . .  9
   4.5. Power Consumption . . . . . . . . . . . . . . . . . . . 12
5. Security Considerations . . . . . . . . . . . . . . . . . . . 12
6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 12
7. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 13
8. References . . . . . . . . . . . . . . . . . . . . . . . . . 13
   8.1. Normative References . . . . . . . . . . . . . . . . . . 13
   8.2. Informative References . . . . . . . . . . . . . . . . . 14
Author's Address . . . . . . . . . . . . . . . . . . . . . . . . 15
1. Introduction

The Benchmarking Methodology Working Group (BMWG) has traditionally
conducted laboratory characterization of dedicated physical
implementations of internetworking functions (or physical network
functions, PNFs). The Black-box Benchmarks of Throughput, Latency,
Forwarding Rates and others have served our industry for many years.
[RFC1242] and [RFC2544] are the cornerstones of the work.
skipping to change at page 5, line 29

Labs conducting comparisons of different VNFs may be able to use the
same hardware platform over many studies, until the steady march of
innovations overtakes their capabilities (as happens with the lab's
traffic generation and testing devices today).
3.2. Configuration Parameters

It will be necessary to configure and document the settings for the
entire general-purpose platform to ensure repeatability and foster
future comparisons, including, but not limited to, the following
(one way to record such settings is sketched after the list):
o number of server blades (shelf occupation)

o CPUs

o caches

o storage system

o I/O
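To make that documentation repeatable and machine-checkable, the
platform settings can be captured in a simple structured record. The
following Python sketch is an illustration only; the field names and
values are hypothetical, not a required schema.

   # Illustrative only: a minimal, hypothetical record of platform
   # configuration parameters to accompany benchmark reports.
   platform_config = {
       "server_blades": 4,                 # shelf occupation
       "cpus": {"model": "example-cpu", "sockets": 2, "cores": 24},
       "caches": {"l2_kib": 256, "l3_mib": 30},
       "storage": {"type": "ssd", "capacity_gib": 960},
       "io": {"nics": 4, "nic_speed_gbps": 10},
   }

   def report_config(config):
       """Print the settings in a stable order for test reports."""
       for key in sorted(config):
           print(f"{key}: {config[key]}")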
skipping to change at page 6, line 47

loop) would ideally be isolated and the performance of other VMs
would continue according to their specifications.

3. System errors will likely occur as transients, implying a
   distribution of performance characteristics with a long tail
   (like latency), leading to the need for longer-term tests of each
   set of configuration and test parameters.
4. The desire for elasticity and flexibility among network functions
   will include tests where there is constant flux in the number of
   VM instances, the resources the VMs require, and the set-up/tear-
   down of network paths that support VM connectivity. Requests for
   and instantiation of new VMs, along with Releases for VMs hosting
   VNFs that are no longer needed, would be a normal operational
   condition. In other words, benchmarking should include scenarios
   with production life cycle management of VMs and their VNFs and
   network connectivity in progress, as well as static configurations
   (see the sketch after this list).
5. All physical things can fail, and benchmarking efforts can also
   examine recovery aided by the virtual architecture with different
   approaches to resiliency.
3.4. Attention to Shared Resources

Since many components of the new NFV Infrastructure are virtual, test
set-up design must have prior knowledge of interactions/dependencies
within the various resource domains in the System Under Test (SUT).
skipping to change at page 7, line 29

Otherwise, the results will have unexpected dependencies not
encountered in physical device benchmarking.

Note: The term "tester" has traditionally referred to devices
dedicated to testing in BMWG literature. In this new context,
"tester" additionally refers to functions dedicated to testing, which
may be either virtual or physical. "Tester" has never referred to
the individuals performing the tests.
The shared-resource aspect of test design remains one of the critical
challenges to overcome in a way that produces useful results.
Benchmarking set-ups may designate isolated resources for the DUT and
other critical support components (such as the host/kernel) as the
first baseline step, and then add other loading processes. The added
complexity of each set-up leads to shared-resource testing scenarios,
where the characteristics of the competing load (in terms of memory,
storage, and CPU utilization) will directly affect the benchmarking
results (and the variability of those results), but the results
should reconcile with the baseline.
The physical test device remains a solid foundation to compare with
results using combinations of physical and virtual test functions, or
results using only virtual testers when necessary to assess virtual
interfaces and other virtual functions.
4. Benchmarking Considerations

This section discusses considerations related to Benchmarks
applicable to VNFs and their associated technologies.

4.1. Comparison with Physical Network Functions

In order to compare the performance of VNFs and system
implementations with their physical counterparts, identical
skipping to change at page 8, line 28

function hosting remain as critical factors in performance
assessment.
4.2. Continued Emphasis on Black-Box Benchmarks

When the network functions under test are based on Open Source code,
there may be a tendency to rely on internal measurements to some
extent, especially when the externally-observable phenomena only
support an inference of internal events (such as routing protocol
convergence observed in the dataplane). Examples include CPU/Core
utilization, Network utilization, Storage utilization, and Memory
Committed/used. These "white-box" metrics provide one view of the
resource footprint of a VNF. Note: The resource utilization metrics
do not easily match the 3x4 Matrix.

However, external observations remain essential as the basis for
Benchmarks. Internal observations with fixed specification and
interpretation may be provided in parallel (as auxiliary metrics), to
assist the development of operations procedures when the technology
is deployed, for example. Internal metrics and measurements from Open
Source implementations may be the only direct source of performance
results in a desired dimension, but corroborating external
observations are still required to assure that the integrity of
measurement discipline was maintained for all reported results.
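As an illustration, auxiliary "white-box" readings could be sampled in
parallel with an external measurement using a host utility such as the
psutil Python library; the metric selection and sampling interval
below are assumptions, not requirements.

   import time
   import psutil

   def sample_auxiliary_metrics(duration_s, interval_s=1.0):
       """Record resource-footprint readings while a black-box
       measurement is in progress; report them separately."""
       samples = []
       end = time.monotonic() + duration_s
       while time.monotonic() < end:
           samples.append({
               "cpu_percent": psutil.cpu_percent(percpu=True),
               "mem_used_bytes": psutil.virtual_memory().used,
               "net_bytes_sent": psutil.net_io_counters().bytes_sent,
               "disk_write_bytes": psutil.disk_io_counters().write_bytes,
           })
           time.sleep(interval_s)
       return samples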
A related aspect of benchmark development is where the scope includes
multiple approaches to a common function under the same benchmark.
For example, there are many ways to arrange for activation of a

skipping to change at page 9, line 14
4.3. New Benchmarks and Related Metrics

There will be new classes of benchmarks needed for network design and
assistance when developing operational practices (possibly automated
management and orchestration of deployment scale). Examples follow
in the paragraphs below, many of which are prompted by the goals of
increased elasticity and flexibility of the network functions, along
with accelerated deployment times.
o Time to deploy VNFs: In cases where the general-purpose hardware
   is already deployed and ready for service, it is valuable to know
   the response time when a management system is tasked with
   "standing-up" 100's of virtual machines and the VNFs they will
   host (a measurement sketch follows this list).

o Time to migrate VNFs: In cases where a rack or shelf of hardware
   must be removed from active service, it is valuable to know the
   response time when a management system is tasked with "migrating"
   some number of virtual machines and the VNFs they currently host
   to alternate hardware that will remain in-service.

o Time to create a virtual network in the general-purpose
   infrastructure: This is a somewhat simplified version of existing
   benchmarks for convergence time, in that the process is initiated
   by a request from (centralized or distributed) control, rather
   than inferred from network events (link failure). The successful
   response time would remain dependent on dataplane observations to
   confirm that the network is ready to perform.

o Effect of verification measurements on performance: A complete
   VNF, or something as simple as a new policy for an existing VNF,
   is deployed. The action to verify instantiation of the VNF or
   policy could affect performance during normal operation.
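As an informal sketch of the first benchmark above, "time to deploy"
can be measured as the interval between the deployment request and
the moment all requested VMs report ready; the orchestrator API below
is a hypothetical placeholder.

   import time

   def time_to_deploy(orchestrator, image, count=100, timeout_s=600):
       """Benchmark: elapsed time to stand up `count` VMs/VNFs."""
       start = time.monotonic()
       vms = [orchestrator.create_vm(image) for _ in range(count)]
       while time.monotonic() - start < timeout_s:
           if all(vm.is_ready() for vm in vms):
               return time.monotonic() - start  # result, in seconds
           time.sleep(0.5)
       raise TimeoutError("not all VM/VNF stand-ups completed")

As noted for virtual-network creation, a complete method would also
confirm readiness with dataplane observations, not only management-
system status.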
Also, it appears to be valuable to measure traditional packet
transfer performance metrics during the assessment of traditional and
new benchmarks, including metrics that may be used to support service
engineering such as the Spatial Composition metrics found in
[RFC6049]. Examples include Mean one-way delay in section 4.1 of
[RFC6049], Packet Delay Variation (PDV) in [RFC5481], and Packet
Reordering [RFC4737] [RFC4689].
4.4. Assessment of Benchmark Coverage

It can be useful to organize benchmarks according to their applicable
life cycle stage and the performance criteria they were designed to
assess. The table below provides a way to organize benchmarks such
that there is a clear indication of coverage for the intersection of
life cycle stages and performance criteria.
|----------------------------------------------------------|
|              |           |            |                  |
|              |   SPEED   |  ACCURACY  |   RELIABILITY    |
|              |           |            |                  |
|----------------------------------------------------------|
|              |           |            |                  |
|  Activation  |           |            |                  |
|              |           |            |                  |
|----------------------------------------------------------|
skipping to change at page 10, line 36

would be placed in the intersection of Activation and Speed, making
it clear that there are other potential performance criteria to
benchmark, such as the "percentage of unsuccessful VM/VNF stand-ups"
in a set of 100 attempts. This example emphasizes that the
Activation and De-activation life cycle stages are key areas for NFV
and related infrastructure, and encourages expansion beyond
traditional benchmarks for normal operation. Thus, reviewing the
benchmark coverage using this table (sometimes called the 3x3 matrix)
can be a worthwhile exercise in BMWG.
In one of the first applications of the 3x3 matrix in BMWG
[I-D.bhuvan-bmwg-sdn-controller-benchmark-meth], we discovered that
metrics on measured size, capacity, or scale do not easily match one
of the three columns above. Following discussion, this was resolved
in two ways:

o Add a column, Scale, for use when categorizing and assessing the
   coverage of benchmarks (without measured results). Examples of
   this use are found in
   [I-D.bhuvan-bmwg-sdn-controller-benchmark-meth] and
   [I-D.vsperf-bmwg-vswitch-opnfv]. This is the 3x4 Matrix.

o If using the matrix to report results in an organized way, keep
   size, capacity, and scale metrics separate from the 3x3 matrix and
   incorporate them in the report with other qualifications of the
   results.
Note: The resource utilization (e.g., CPU) metrics do not fit in the
Matrix. They are not benchmarks, and omitting them confirms their
status as auxiliary metrics. Resource assignments are configuration
parameters, and these are reported separately.
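One illustrative way to track coverage in the 3x4 Matrix is a simple
table keyed by (life cycle stage, criterion); the stage names and the
example benchmark entries in this Python sketch are assumptions for
illustration only.

   STAGES = ("Activation", "Operation", "De-activation")
   CRITERIA = ("Speed", "Accuracy", "Reliability", "Scale")

   # Empty coverage matrix: each cell lists applicable benchmarks.
   coverage = {(s, c): [] for s in STAGES for c in CRITERIA}
   coverage[("Activation", "Speed")].append("Time to deploy VNFs")
   coverage[("Activation", "Reliability")].append(
       "Percentage of unsuccessful VM/VNF stand-ups per 100 attempts")

   for (stage, criterion), benchmarks in sorted(coverage.items()):
       cell = ", ".join(benchmarks) if benchmarks else "(no coverage)"
       print(f"{stage:>13} x {criterion:<11}: {cell}")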
This approach encourages use of the 3x3 matrix to organize reports of
results, where the capacity at which the various metrics were
measured could be included in the title of the matrix (and results
for multiple capacities would result in separate 3x3 matrices, if
there were sufficient measurements/results to organize in that way).
For example, results for each VM and VNF could appear in the 3x3
matrix, organized to illustrate resource occupation (CPU Cores) in a
particular physical computing system, as shown below.
skipping to change at page 12, line 16

environment there could be VNFs of multiple types and categories. In
this figure, VNFs #3-#5 are assumed to require small CPU resources,
while VNF#2 requires 4 cores to perform its function.
4.5. Power Consumption

Although there is incomplete work to benchmark physical network
function power consumption in a meaningful way, the desire to measure
the physical infrastructure supporting the virtual functions only
adds to the need. Both maximum power consumption and dynamic power
consumption (with varying load) would be useful. The IPMI standard
[IPMI2.0] has been implemented by many manufacturers, and supports
measurement of instantaneous power consumption.

To assess the instantaneous power consumption of virtual resources,
it may be possible to estimate the value using an overall metric
based on utilization readings, according to
[I-D.krishnan-nfvrg-policy-based-rm-nfviaas].
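For instance, instantaneous platform power might be read through the
ipmitool CLI's DCMI power reading; the sketch below assumes that
command is available on the platform and that the output contains the
pattern shown, which varies by BMC vendor.

   import re
   import subprocess

   def instantaneous_power_watts():
       """Read the platform's instantaneous power over IPMI (DCMI)."""
       out = subprocess.run(
           ["ipmitool", "dcmi", "power", "reading"],
           capture_output=True, text=True, check=True,
       ).stdout
       m = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts",
                     out)
       if m is None:
           raise ValueError("unrecognized ipmitool output")
       return int(m.group(1))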
5. Security Considerations

Benchmarking activities as described in this memo are limited to
technology characterization of a Device Under Test/System Under Test
(DUT/SUT) using controlled stimuli in a laboratory environment, with
dedicated address space and the constraints specified in the sections
above.

The benchmarking network topology will be an independent test setup

skipping to change at page 13, line 30
8. References

8.1. Normative References

[NFV.PER001]
           "Network Function Virtualization: Performance and
           Portability Best Practices", Group Specification ETSI GS
           NFV-PER 001 V1.1.1 (2014-06), June 2014.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <http://www.rfc-editor.org/info/rfc2119>.

[RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
           "Framework for IP Performance Metrics", RFC 2330,
           DOI 10.17487/RFC2330, May 1998,
           <http://www.rfc-editor.org/info/rfc2330>.

[RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
           Network Interconnect Devices", RFC 2544,
           DOI 10.17487/RFC2544, March 1999,
           <http://www.rfc-editor.org/info/rfc2544>.

[RFC2679]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
           Delay Metric for IPPM", RFC 2679, DOI 10.17487/RFC2679,
           September 1999, <http://www.rfc-editor.org/info/rfc2679>.

[RFC2680]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
           Packet Loss Metric for IPPM", RFC 2680,
           DOI 10.17487/RFC2680, September 1999,
           <http://www.rfc-editor.org/info/rfc2680>.

[RFC2681]  Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip
           Delay Metric for IPPM", RFC 2681, DOI 10.17487/RFC2681,
           September 1999, <http://www.rfc-editor.org/info/rfc2681>.

[RFC3393]  Demichelis, C. and P. Chimento, "IP Packet Delay Variation
           Metric for IP Performance Metrics (IPPM)", RFC 3393,
           DOI 10.17487/RFC3393, November 2002,
           <http://www.rfc-editor.org/info/rfc3393>.

[RFC3432]  Raisanen, V., Grotefeld, G., and A. Morton, "Network
           performance measurement with periodic streams", RFC 3432,
           DOI 10.17487/RFC3432, November 2002,
           <http://www.rfc-editor.org/info/rfc3432>.

[RFC4689]  Poretsky, S., Perser, J., Erramilli, S., and S. Khurana,
           "Terminology for Benchmarking Network-layer Traffic
           Control Mechanisms", RFC 4689, DOI 10.17487/RFC4689,
           October 2006, <http://www.rfc-editor.org/info/rfc4689>.

[RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
           S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
           DOI 10.17487/RFC4737, November 2006,
           <http://www.rfc-editor.org/info/rfc4737>.

[RFC5357]  Hedayat, K., Krzanowski, R., Morton, A., Yum, K., and J.
           Babiarz, "A Two-Way Active Measurement Protocol (TWAMP)",
           RFC 5357, DOI 10.17487/RFC5357, October 2008,
           <http://www.rfc-editor.org/info/rfc5357>.

[RFC5905]  Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch,
           "Network Time Protocol Version 4: Protocol and Algorithms
           Specification", RFC 5905, DOI 10.17487/RFC5905, June 2010,
           <http://www.rfc-editor.org/info/rfc5905>.

[RFC7498]  Quinn, P., Ed. and T. Nadeau, Ed., "Problem Statement for
           Service Function Chaining", RFC 7498,
           DOI 10.17487/RFC7498, April 2015,
           <http://www.rfc-editor.org/info/rfc7498>.
8.2. Informative References

[I-D.bhuvan-bmwg-sdn-controller-benchmark-meth]
           Vengainathan, B., Basil, A., Tassinari, M., Manral, V.,
           and S. Banks, "Benchmarking Methodology for SDN Controller
           Performance", draft-bhuvan-bmwg-sdn-controller-benchmark-
           meth-01 (work in progress), July 2015.

[I-D.krishnan-nfvrg-policy-based-rm-nfviaas]
           Krishnan, R., Figueira, N., Krishnaswamy, D., Lopez, D.,
           Wright, S., Hinrichs, T., and R. Krishnaswamy, "NFVIaaS
           Architectural Framework for Policy Based Resource
           Placement and Scheduling", draft-krishnan-nfvrg-policy-
           based-rm-nfviaas-05 (work in progress), September 2015.

[I-D.vsperf-bmwg-vswitch-opnfv]
           Tahhan, M., O'Mahony, B., and A. Morton, "Benchmarking
           Virtual Switches in OPNFV", draft-vsperf-bmwg-vswitch-
           opnfv-00 (work in progress), July 2015.

[IPMI2.0]  "Intelligent Platform Management Interface, v2.0 with
           latest Errata",
           <http://www.intel.com/content/www/us/en/servers/ipmi/ipmi-
           intelligent-platform-mgt-interface-spec-2nd-gen-v2-0-spec-
           update.html>, April 2015.

[RFC1242]  Bradner, S., "Benchmarking Terminology for Network
           Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242,
           July 1991, <http://www.rfc-editor.org/info/rfc1242>.

[RFC5481]  Morton, A. and B. Claise, "Packet Delay Variation
           Applicability Statement", RFC 5481, DOI 10.17487/RFC5481,
           March 2009, <http://www.rfc-editor.org/info/rfc5481>.

[RFC6049]  Morton, A. and E. Stephan, "Spatial Composition of
           Metrics", RFC 6049, DOI 10.17487/RFC6049, January 2011,
           <http://www.rfc-editor.org/info/rfc6049>.

[RFC6248]  Morton, A., "RFC 4148 and the IP Performance Metrics
           (IPPM) Registry of Metrics Are Obsolete", RFC 6248,
           DOI 10.17487/RFC6248, April 2011,
           <http://www.rfc-editor.org/info/rfc6248>.

[RFC6390]  Clark, A. and B. Claise, "Guidelines for Considering New
           Performance Metric Development", BCP 170, RFC 6390,
           DOI 10.17487/RFC6390, October 2011,
           <http://www.rfc-editor.org/info/rfc6390>.
Author's Address

Al Morton
AT&T Labs
200 Laurel Avenue South
Middletown, NJ 07748
USA

Phone: +1 732 420 1571
Fax: +1 732 368 1192
Email: acmorton@att.com
URI: http://home.comcast.net/~acmacm/