Network Working Group                                           A. Clark
Internet-Draft                                     Telchemy Incorporated
Intended status: BCP                                           B. Claise
Expires: April 29, 2010                              Cisco Systems, Inc.
                                                        October 26, 2009


              Framework for Performance Metric Development
                  draft-ietf-pmol-metrics-framework-03

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.  This document may contain material
   from IETF Documents or IETF Contributions published or made publicly
   available before November 10, 2008.  The person(s) controlling the
   copyright in some of this material may not have granted the IETF
   Trust the right to allow modifications of such material outside the
   IETF Standards Process.  Without obtaining an adequate license from
   the person(s) controlling the copyright in such materials, this
   document may not be modified outside the IETF Standards Process, and
   derivative works of it may not be created outside the IETF Standards
   Process, except to format it for publication as an RFC or to
   translate it into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on April 29, 2010.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your rights
   and restrictions with respect to this document.

Abstract

   This document describes a framework and a process for developing
   performance metrics for IP-based applications that operate over
   reliable or datagram transport protocols, and that can be used to
   characterize traffic on live networks and services.  The framework
   refers to a Performance Metrics Entity, or PM Entity, which may in
   the future be a working group, a directorate, or a combination of
   the two.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  TO DO
   2.  Introduction
     2.1.  Background and Motivation
     2.2.  Organization of this document
   3.  Terminology
     3.1.  Quality of Service
     3.2.  Application Performance Metric
     3.3.  Quality of Experience
   4.  Purpose and Scope
   5.  QoS versus Application Performance Metrics versus QoE
   6.  Metrics Development
     6.1.  Identifying and Categorizing the Audience
     6.2.  Definitions of a Metric
     6.3.  Computed Metrics
       6.3.1.  Composed Metrics
       6.3.2.  Index (from compagg)
     6.4.  Metric Specification
       6.4.1.  Outline
       6.4.2.  Normative parts of metric definition
       6.4.3.  Informative parts of metric definition
       6.4.4.  Metric Definition Template
       6.4.5.  Example: Burst Packet Loss Frequency
     6.5.  Dependencies
       6.5.1.  Timing accuracy
       6.5.2.  Dependencies of metric definitions on related events
               or metrics
       6.5.3.  Relationship between application performance and
               lower layer metrics
       6.5.4.  Middlebox presence
     6.6.  Organization of Results
     6.7.  Parameters, the variables of a metric
   7.  Performance Metric Development Process
     7.1.  New Proposals for Metrics
     7.2.  Reviewing Metrics
     7.3.  Proposal Approval
     7.4.  PM Entity Interaction with other WGs
     7.5.  Standards Track Performance Metrics
   8.  IANA Considerations
   9.  Security Considerations
   10. Acknowledgements
   11. References
     11.1. Normative References
     11.2. Informative References
   Authors' Addresses

1.  TO DO

   o  Multiple EDITOR'S NOTES throughout the document

   o  Should we refer to ITU-T G.1010 for application performance
      metrics?

   o  Do we want a definition for Performance Metric Entity?


2.  Introduction

   Many networking technologies, applications, and services are
   distributed in nature, and their performance may be impacted by IP
   impairments, server capacity, congestion, and other factors.  It is
   important to measure the performance of applications and services to
   ensure that quality objectives are being met and to support problem
   diagnosis.  Standardized metrics help to ensure that performance
   measurement is implemented consistently and to facilitate
   interpretation and comparison.

   There are at least three phases in the development of performance
   standards.  They are:

   1.  Definition of a Performance Metric and its units of measure

   2.  Specification of a Method of Measurement

   3.  Specification of the Reporting Format

   During the development of metrics, it is often useful to define
   performance objectives and expected value ranges.  However, these
   are not defined as part of the metric specification.

   This document refers to a Performance Metrics Entity, or PM Entity,
   which may in the future be a working group, a directorate, or a
   combination of the two.

2.1.  Background and Motivation

   Although the IETF has two active Working Groups dedicated to the
   development of performance metrics, they each have strict limitations
   in their charters:

   - The Benchmarking Methodology Working Group (BMWG) has addressed a
   range of networking technologies and protocols in its long history
   (such as IEEE 802.3, ATM, Frame Relay, and routing protocols), but
   its charter strictly limits its performance characterizations to
   the laboratory environment.

   - The IP Performance Metrics (IPPM) Working Group has developed a set
   of standard metrics that can be applied to the quality, performance,
   and reliability of Internet data delivery services.  The IPPM metrics
   development is applicable to live IP networks, but it is specifically
   prohibited from developing metrics that characterize traffic at upper
   layers, such as a VoIP stream.

   A BOF held at IETF-69 introduced the IETF community to the
   possibility of a generalized activity to define standardized
   performance metrics.  The existence of a growing list of Internet-
   Drafts on performance metrics (with community interest in
   development, but in unchartered areas) illustrates the need for
   additional performance work.  The majority of people present at the
   BOF supported the proposition that IETF should be working in these
   areas, and no one objected to any of the proposals.

   The IETF does have current and completed activities related to the
   reporting of application performance metrics: for example the Real-
   time Application Quality-of-Service Monitoring (RAQMON) Framework RFC
   4710 [RFC4710], which extends the remote network monitoring (RMON)
   family of specifications to allow real-time quality-of-service (QoS)
   monitoring of various applications that run on devices such as IP
   phones, pagers, Instant Messaging clients, mobile phones, and various
   other handheld computing devices.

   The IETF is also actively involved in the development of reliable
   transport protocols, which would affect the relationship between IP
   performance and application performance.

   EDITOR'S NOTE: I'm not sure what the previous sentence refers to?

   Thus there is a gap in the currently chartered coverage of IETF WGs:
   development of performance metrics for non-IP-layer protocols that
   can be used to characterize performance on live networks.

   EDITOR'S NOTE: must expand on the "non-IP-layer".  Could be above
   L4, such as voice-specific metrics, but also L2, such as (G)MPLS.

2.2.  Organization of this document

   This document is divided into two major sections beyond the Purpose
   and Scope section.  The first is a definition and description of a
   performance metric and its key aspects.  The second defines a process
   to develop these metrics that is applicable to the IETF environment.


3.  Terminology

3.1.  Quality of Service

   Quality of Service (QoS) is defined similarly to the definition of
   QoS in ITU-T E800 [E800], i.e.: the totality of characteristics of
   a telecommunications service that bear on its ability to satisfy
   stated and implied needs of the user of the service.

   EDITOR'S NOTE: currently searching for a QoS definition in the IETF

3.2.  Application Performance Metric

   EDITOR'S NOTE: to be filled in

3.3.  Quality of Experience

   The Quality of Experience (QoE) is defined similarly to the ITU "QoS
   experienced/perceived by customer/user (QoSE)" E800 [E800], i.e.: a
   statement expressing the level of quality that customers/users
   believe they have experienced.

   NOTE 1 - The level of QoS experienced and/or perceived by the
   customer/user may be expressed by an opinion rating.

   NOTE 2 - QoSE has two main components: quantitative and
   qualitative.  The quantitative component can be influenced by the
   complete end-to-end system effects (network infrastructure).

   NOTE 3 - The qualitative component can be influenced by user
   expectations, ambient conditions, psychological factors, application
   context, etc.

   NOTE 4 - QoSE may also be considered as the QoS delivered (QoSD),
   received and interpreted by a user with the pertinent qualitative
   factors influencing his/her perception of the service.


4.  Purpose and Scope

   The purpose of this document is to define a framework and a process
   for developing performance metrics for IP-based applications that
   operate over reliable or datagram transport protocols, and that can
   be used to characterize traffic on live networks and services.  As
   such, this document will not define any performance metrics.

   The scope of this document includes the support of metric definition
   for any protocol developed by the IETF.  However, this document is
   not intended to supersede existing working methods within Working
   Groups
   that have existing chartered work in this area.

   This process is not intended to govern performance metric development
   in existing IETF WGs that are focused on metrics development, such as
   IPPM and BMWG.  However, the framework and guidelines may be useful
   in these activities, and MAY be applied where appropriate.  A typical
   example is the development of performance metrics to be exported with
   the IPFIX protocol RFC 5101 [RFC5101], with specific IPFIX
   information elements RFC 5102 [RFC5102], which would benefit from the
   framework in this document.

   The framework in this document applies to performance metrics derived
   from both active and passive measurements.


5.  QoS versus Application Performance Metrics versus QoE

   QoS deals with the network and protocol, while QoE deals with the
   notion of a user in the context of a task or a service.  As a
   consequence, QoE leads to the notion of Application Performance
   Metrics.  For example, QoS performance metrics include the one-way
   delay and the delay variation RFC 5481 [RFC5481], while the Mean
   Opinion Score (MOS) P.800 [P.800] can be modelled and calculated as
   an Application Performance Metric for multimedia applications.
   However, the MOS for a particular user in a specific context, such
   as a conference call, an IPTV session, or an emergency call,
   represents a different QoE in each case.  Finally, QoS and
   Application Performance Metrics are quantitative, while QoE is
   qualitative.

   EDITOR'S NOTE: not too happy about the MOS example, as it's
   debatable whether MOS is QoE or an Application Performance Metric?
   If there is a better example...


6.  Metrics Development

   This section provides key definitions and qualifications of
   performance metrics.

6.1.  Identifying and Categorizing the Audience

   Many of the aspects of metric definition and reporting, even the
   selection or determination of the essential metrics, depend on who
   will use the results and for what purpose: for example, to maintain
   service quality, or to identify and quantify problems.  The
   question, "How will the results be used?" usually yields important
   factors to consider when developing performance metrics.

   All documents defining performance metrics SHOULD identify the
   primary audience and its associated requirements.  The audience can
   influence both the definition of metrics and the methods of
   measurement.

   The key areas of variation between different metric users include:

   o  Suitability of passive measurements of live traffic, or active
      measurements using dedicated traffic

   o  Measurement in laboratory environment, or on a network of deployed
      devices

   o  Accuracy of the results

   o  Access to measurement points and configuration information

   o  Measurement topology (point-to-point, point-to-multipoint)

   o  Scale of the measurement system

   o  Measurements conducted on-demand, or continuously

   o  Required reporting formats

6.2.  Definitions of a Metric

   A metric is a measure of an observable behavior of a networking
   technology, an application, or a service.  Most of the time, the
   metric can be directly measured.  Sometimes, however, the metric is
   computed rather than measured: its definition assumes some implicit
   or explicit underlying statistical process.  In such cases, the
   metric is an estimate of a parameter of this process, assuming that
   the statistical process closely models the behavior of the system.

   A metric should serve some defined purpose.  This may include the
   measurement of capacity, quantifying the severity of a problem,
   measurement of service level, problem diagnosis or localization,
   and other such uses.  A metric may also be an input to some other
   process, for example the computation of a composite metric or a
   model or
   simulation of a system.  Tests of the "usefulness" of a metric
   include:

      (i) the degree to which its absence would cause significant loss
      of information on the behavior or performance of the application
      or system being measured

      (ii) the correlation between the performance metric, the QoS
      G1000 [G1000], and the QoE delivered to the user (a person or
      another application)

      (iii) the degree to which the metric is able to support the
      identification and location of problems affecting service quality.

      (iv) the requirement to develop policies (Service Level
      Agreements, and potentially Service Level Contracts) based on
      the metric.

   For example, consider a distributed application operating over a
   network connection that is subject to packet loss.  A Packet Loss
   Rate (PLR) metric is defined as the mean packet loss rate over some
   time period.  If the application performs poorly over network
   connections with a high packet loss rate and always performs well
   when the packet loss rate is zero, then the PLR metric is useful to
   some
   degree.  Some applications are sensitive to short periods of high
   loss (bursty loss) and are relatively insensitive to isolated packet
   loss events; for this type of application there would be very weak
   correlation between PLR and application performance.  A "better"
   metric would consider both the packet loss rate and the distribution
   of loss events.  If application performance is degraded when the PLR
   exceeds some rate then a useful metric may be a measure of the
   duration and frequency of periods during which the PLR exceeds that
   rate.
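
   As a rough illustration of the difference (a sketch only; the
   window size and threshold below are illustrative assumptions, not
   part of any defined metric), the following C fragment contrasts a
   plain PLR computation with a count of high-loss periods:

   #include <stddef.h>

   /* Mean packet loss rate over an array of per-packet loss flags
    * (1 = lost, 0 = received). */
   double plr(const int *lost, size_t n)
   {
       size_t i, nlost = 0;
       for (i = 0; i < n; i++)
           nlost += lost[i];
       return n ? (double)nlost / (double)n : 0.0;
   }

   /* Frequency of high-loss periods: the number of consecutive
    * windows of 'win' packets whose local loss rate exceeds
    * 'threshold'.  This captures bursty loss that a long-term
    * average PLR hides. */
   size_t high_loss_periods(const int *lost, size_t n, size_t win,
                            double threshold)
   {
       size_t i, count = 0;
       for (i = 0; i + win <= n; i += win)
           if (plr(lost + i, win) > threshold)
               count++;
       return count;
   }

   For instance, a trace alternating one second of 20% loss with nine
   seconds of 0% loss yields a modest average PLR but a non-zero
   high-loss-period frequency, matching the burst-sensitive
   applications described above.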

6.3.  Computed Metrics

6.3.1.  Composed Metrics

   Some metrics may not be measured directly, but may be composed from
   base metrics that have been measured.  A composed metric is derived
   from other metrics by applying a deterministic process or function
   (e.g., a composition function).  The process may use metrics that are
   identical to the metric being composed, or metrics that are
   dissimilar, or some combination of both types.  Usually the base
   metrics have a limited scope in time or space, and they can be
   combined to estimate the performance of some larger entities.

   Some examples of composed metrics and composed metric definitions
   are:

   Spatial Composition is defined as the composition of metrics of the
   same type with differing spatial domains
   [I-D.ietf-ippm-framework-compagg]
   [I-D.ietf-ippm-spatial-composition].  For spatially composed metrics
   to be meaningful, the spatial domains should be non-overlapping and
   contiguous, and the composition operation should be mathematically
   appropriate for the type of metric.

   Temporal Composition is defined as the composition of sets of metrics
   of the same type with differing time spans
   [I-D.ietf-ippm-framework-compagg].  For temporally composed metrics
   to be meaningful, the time spans should be non-overlapping and
   contiguous, and the composition operation should be mathematically
   appropriate for the type of metric.

   Temporal Aggregation is a summarization of metrics into a smaller
   number of metrics that relate to the total time span covered by the
   original metrics.  An example would be to compute the minimum,
   maximum and average values of a series of time sampled values of a
   metric.
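
   As a concrete sketch of these operations (hypothetical code, with
   the composition functions chosen here - summation for segment
   delays, min/max/average for time series - as assumed examples of
   "mathematically appropriate" operations):

   #include <float.h>
   #include <stddef.h>

   /* Spatial composition: estimate the one-way delay of a path as
    * the sum of the delays of its non-overlapping, contiguous
    * segments. */
   double compose_path_delay(const double *segment_delay,
                             size_t nsegments)
   {
       double total = 0.0;
       size_t i;
       for (i = 0; i < nsegments; i++)
           total += segment_delay[i];
       return total;
   }

   /* Temporal aggregation: summarize a series of time-sampled
    * values into minimum, maximum, and average over the total time
    * span covered by the samples. */
   void aggregate(const double *sample, size_t n,
                  double *min, double *max, double *avg)
   {
       double sum = 0.0;
       size_t i;
       *min = DBL_MAX;
       *max = -DBL_MAX;
       for (i = 0; i < n; i++) {
           if (sample[i] < *min) *min = sample[i];
           if (sample[i] > *max) *max = sample[i];
           sum += sample[i];
       }
       *avg = n ? sum / (double)n : 0.0;
   }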

   EDITOR'S NOTE: review draft-ietf-ippm-framework-compagg-08.txt and
   determine if something should be added in this section

   EDITOR'S NOTE: should we mention the IPFIX Mediators drafts that
   explains about aggregation? http://www.ietf.org/id/
   draft-ietf-ipfix-mediators-problem-statement-06.txt
   http://www.ietf.org/id/draft-ietf-ipfix-mediators-framework-04.txt

6.3.2.  Index (from compagg)

   An Index is a metric for which the output value range has been
   selected for convenience or clarity, and the behavior of which is
   selected to support ease of understanding (e.g., the G.107 R
   factor).
   The deterministic function for an index is often developed after the
   index range and behavior have been determined.

   EDITOR'S NOTE: the section title was "Index (from compagg)".  I guess
   it refers to
   http://www.ietf.org/id/draft-ietf-ippm-framework-compagg-08.txt
   section 3.5 "composed metrics" now.  Do we want to keep a separate
   sub section, or do we combine this with the previous section?

6.4.  Metric Specification

6.4.1.  Outline

   A metric definition MUST have a normative part that defines what the
   metric is and how it is measured or computed and SHOULD have an
   informative part that describes the metric and its application.

6.4.2.  Normative parts of metric definition

   The normative part of a metric definition MUST define at least the
   following:

   (i) Metric Name

   Metric names MUST be unique within the set of metrics being defined
   and MAY be descriptive.

   (ii) Metric Description

   The description MUST explain what the metric is, what is being
   measured and how this relates to the performance of the system being
   measured.

   (iii) Collection Method

   EDITOR'S NOTE: remove "measurement" in "measurement method", as
   this method can be measured, estimated, or computed.  Looking for a
   generic term -> collection method?  Do we want to change from
   measurement to collection all over?

   This MUST define what is being measured, estimated, or computed and
   the specific algorithm to be used.  Terms such as "average" should
   be qualified (e.g., running average or average over some interval).
   Exception cases SHOULD also be defined, together with the
   appropriate handling method.  For example, there are a number of
   commonly used metrics related to packet loss; these often do not
   define the criteria by which a packet is determined to be lost
   (versus very delayed) or how duplicate packets are handled.  For
   example, if the average packet loss rate during a time interval is
   reported, and a packet's arrival is delayed from one interval to
   the next, should the packet be counted as "lost" during the
   interval in which it should have arrived, or as received in the
   later interval?

   Some parameters linked to the method MAY also be reported, in
   order to fully interpret the performance metric: for example, the
   time interval, the load, the minimum packet loss, and so on.
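
   The sketch below illustrates one possible set of such choices for
   RTP packet loss (the WINDOW threshold and the treatment of
   reordered and late packets are assumptions that a real definition
   would state explicitly; duplicate detection is omitted for
   brevity):

   #include <stdint.h>

   /* One possible (assumed) rule for "lost vs. very delayed": a
    * missing RTP sequence number is provisionally counted as lost,
    * and an arrival more than WINDOW sequence numbers behind the
    * highest one seen is treated as too late to revise that count. */
   #define WINDOW 64   /* assumed late-arrival threshold, packets */

   struct loss_state {
       uint16_t highest;       /* highest sequence number so far */
       unsigned long received;
       unsigned long lost;     /* provisional loss count         */
   };

   void on_rtp_packet(struct loss_state *s, uint16_t seq)
   {
       uint16_t ahead = (uint16_t)(seq - s->highest);

       if (ahead != 0 && ahead < 0x8000) {
           /* New highest: the gap is provisionally lost. */
           s->lost += (unsigned long)(ahead - 1);
           s->highest = seq;
           s->received++;
       } else if (ahead != 0 &&
                  (uint16_t)(s->highest - seq) <= WINDOW) {
           /* Reordered but within the window: reverse one loss.
            * (A full implementation would track seen sequence
            * numbers to distinguish duplicates from reordering.) */
           if (s->lost > 0) s->lost--;
           s->received++;
       }
       /* Arrivals later than WINDOW stay counted as lost; whether
        * and how to revise the count is exactly the kind of
        * exception case the definition must specify. */
   }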

   (iv) Units of measurement

   The units of measurement MUST be clearly stated.

   (v) Measurement Point(s)

   If the measurement is specific to a measurement point, this SHOULD
   be defined.  The measurement domain MAY also be defined.
   Specifically, if measurement points are spread across domains, the
   measurement domain (intra-domain or inter-domain) is another factor
   to consider.

   EDITOR'S NOTE: discuss that the collection is not necessarily scoped
   to a single observation point.

   (vi) Measurement timing

   The acceptable range of timing intervals or sampling intervals for a
   measurement and the timing accuracy required for such intervals MUST
   be specified.  Short sampling intervals or frequent samples provide a
   rich source of information that can help to assess application
   performance but may lead to excessive measurement data.  Long
   measurement or sampling intervals reduce the amount of reported and
   collected data such that it may be insufficient to understand
   application performance or service quality insofar as the measured
   quantity may vary significantly with time.

   EDITOR'S NOTE: explain that, in case of multiple measurement points,
   synchronized clocks might be required.  See RFC5481

6.4.3.  Informative parts of metric definition

   The informative part of a metric specification is intended to support
   the implementation and use of the metric.  This part SHOULD provide
   the following data:

   (i) Implementation

   The implementation description MAY be in the form of text, an
   algorithm, or example software.  The objective of this part of the
   metric definition is to assist implementers in achieving a
   consistent result.

   (ii) Verification

   The metric definition SHOULD provide guidance on verification
   testing.  This may be in the form of test vectors, a formal
   verification test method or informal advice.

   (iii) Use and Applications

   The Use and Applications description is intended to assist the "user"
   to understand how, when, and where the metric can be applied, and
   what significance the value range of the metric may have.  This MAY
   include a definition of the "typical" and "abnormal" ranges of the
   metric, if these are not apparent from the nature of the metric.

   For example:

   (a) it is fairly intuitive that a lower packet loss rate would
   equate to better performance.  However, the user may not know the
   significance of some given packet loss rate,

   (b) the speech level of a telephone signal is commonly expressed in
   dBm0.  If the user is presented with:

   Speech level = -7 dBm0

   this is not intuitively understandable unless the user is a
   telephony expert.  If the metric definition explains that the
   typical range is -18 to -28 dBm0, that a value higher than -18
   means the signal may be too high (loud), and that a value lower
   than -28 means the signal may be too low (quiet), the metric
   becomes much easier to interpret.
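
   Such interpretation guidance can even accompany the metric as a
   trivial classifier; the range endpoints below come directly from
   the example above:

   /* Interpret a speech level measurement against the typical
    * range given in the metric definition (-18 to -28 dBm0). */
   const char *interpret_speech_level(double level_dbm0)
   {
       if (level_dbm0 > -18.0) return "possibly too loud";
       if (level_dbm0 < -28.0) return "possibly too quiet";
       return "typical";
   }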

   (iv) Reporting Model

   The Reporting Model definition is intended to make any relationship
   between the metric and the reporting model clear.  There are often
   implied relationships between the method of reporting metrics and
   the metric itself; however, these are often not made apparent to
   the implementor.  For example, if the metric is a short-term
   running average of packet delay variation (e.g., the interarrival
   jitter defined in RFC 3550 [RFC3550]) and this value is reported at
   intervals of 6-10 seconds, the resulting measurement may have
   limited accuracy when the packet delay variation is non-stationary.
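
   For reference, RFC 3550 [RFC3550] specifies its interarrival jitter
   as an exponentially weighted moving average with a gain of 1/16, so
   a value sampled every 6-10 seconds mostly reflects the last few
   tens of packets; the sketch below restates that estimator:

   /* Interarrival jitter estimator from RFC 3550, Section 6.4.1:
    * J(i) = J(i-1) + (|D(i-1,i)| - J(i-1))/16, where D is the
    * difference between the relative transit times of two packets.
    * Arrival time r and RTP timestamp s are in the same units. */
   struct jitter_state {
       double j, prev_r, prev_s;
       int have_prev;
   };

   void jitter_update(struct jitter_state *st, double r, double s)
   {
       if (st->have_prev) {
           double d = (r - st->prev_r) - (s - st->prev_s);
           if (d < 0.0)
               d = -d;
           st->j += (d - st->j) / 16.0;  /* EWMA, gain 1/16 */
       }
       st->prev_r = r;
       st->prev_s = s;
       st->have_prev = 1;
   }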

6.4.4.  Metric Definition Template

   Normative

   o  Metric Name

   o  Metric Description

   o  Method

   o  Units of measurement

   o  Measurement Timing

   Informative

   o  Implementation Guidelines

   o  Verification

   o  Use and Applications

   o  Reporting Model

6.4.5.  Example: Burst Packet Loss Frequency

   The burst packet loss frequency can be observed at different layers.
   The following example is specific to RTP RFC 3550 [RFC3550].

   Metric Name: BurstPacketLossFrequency

   Metric Description: A burst of packet loss is defined as the
   longest period, starting and ending with a lost packet, during
   which no more than Gmin consecutive packets are received.  The
   BurstPacketLossFrequency is defined as the number of bursts of
   packet loss occurring during a specified time interval (e.g., per
   minute, per hour, per day).  If Gmin is set to 0, then a burst of
   packet loss comprises only consecutive lost packets, whereas a Gmin
   of 16 defines bursts as periods of both lost and received packets
   (sparse bursts) having a loss rate greater than 5.9%.

   Method: Bursts may be detected using the Markov model algorithm
   defined in RFC 3611 [RFC3611].  The BurstPacketLossFrequency is
   calculated by counting the number of burst events within the
   defined measurement interval.  A burst that spans the boundary
   between two time intervals shall be counted within the later of the
   two intervals.

   Units of Measurement: Bursts per time interval (e.g. per second, per
   hour, per day)

   Measurement Timing: This metric can be used over a wide range of time
   intervals.  Using time intervals of longer than one hour may prevent
   the detection of variations in the value of this metric due to time-
   of-day changes in network load.  Timing intervals should not vary in
   duration by more than +/- 2%.

   Implementation Guidelines: See RFC 3611 [RFC3611].

   Verification Testing: See Appendix for C code to generate test
   vectors.

   Use and Applications: This metric is useful to detect IP network
   transients that affect the performance of applications such as Voice
   over IP or IP Video.  The value of Gmin may be selected to ensure
   that bursts correspond to a packet loss rate that would degrade the
   performance of the application of interest (e.g. 16 for VoIP).

   Reporting Model: This metric needs to be associated with a defined
   time interval, which could be defined by fixed intervals or by a
   sliding window.
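
   A simplified sketch of the burst-counting logic follows (per-packet
   loss flags are assumed as input; RFC 3611 [RFC3611] specifies the
   full Markov-model method, and the handling of bursts spanning
   interval boundaries is omitted here).  Note that with Gmin = 16,
   every burst contains at least one loss per 17 packets, i.e., a loss
   rate greater than 1/17, or about 5.9%, consistent with the
   description above:

   #include <stddef.h>

   /* Count bursts of packet loss within one measurement interval,
    * given per-packet loss flags (1 = lost, 0 = received) in
    * arrival order.  A burst begins with a lost packet and ends
    * once more than Gmin consecutive packets have been received. */
   size_t burst_count(const int *lost, size_t n, unsigned gmin)
   {
       size_t i, bursts = 0;
       unsigned run_received = 0;
       int in_burst = 0;

       for (i = 0; i < n; i++) {
           if (lost[i]) {
               if (!in_burst) {
                   in_burst = 1;
                   bursts++;          /* a new burst starts here */
               }
               run_received = 0;
           } else if (in_burst && ++run_received > gmin) {
               in_burst = 0;          /* Gmin exceeded: burst over */
           }
       }
       return bursts;
   }

   Dividing the returned count by the duration of the measurement
   interval yields the BurstPacketLossFrequency in bursts per unit
   time.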

6.5.  Dependencies

6.5.1.  Timing accuracy

   The accuracy of the timing of a measurement may affect the accuracy
   of the metric.  This may not materially affect a sampled-value
   metric; however, it would affect an interval-based metric.  Some
   metrics, such as the number of events per time interval, would be
   directly affected: a 10% variation in the time interval would lead
   directly to a 10% variation in the measured value.  Other metrics,
   such as the average packet loss rate during some time interval,
   would be affected to a lesser extent.

   If it is necessary to correlate sampled values or intervals then it
   is essential that the accuracy of sampling time and interval start/
   stop times is sufficient for the application (for example +/- 2%).
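
   Where the actual start/stop times are recorded, one common
   mitigation (a sketch under that assumption) is to normalize event
   counts by the measured rather than the nominal interval duration:

   /* An event-rate metric computed as events/nominal_duration
    * inherits any interval-duration error directly; normalizing by
    * the measured duration removes that error, provided the
    * timestamps themselves are sufficiently accurate. */
   double event_rate(unsigned long events, double measured_seconds)
   {
       return (measured_seconds > 0.0)
           ? (double)events / measured_seconds
           : 0.0;
   }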

6.5.2.  Dependencies of metric definitions on related events or metrics

   Metric definitions may explicitly or implicitly rely on factors that
   may not be obvious.  For example, the recognition of a packet as
   being "lost" relies on having some method to know the packet was
   actually lost (e.g.  RTP sequence number), and some time threshold
   after which a non-received packet is declared as lost.  It is
   important that any such dependencies are recognized and incorporated
   into the metric definition.

6.5.3.  Relationship between application performance and lower layer
        metrics

   Lower layer metrics may be used to compute or infer the performance
   of higher layer applications, potentially using an application
   performance model.  The accuracy of this will depend on many factors
   including:

   (i) The completeness of the set of metrics - i.e. are there metrics
   for all the input values to the application performance model?

   (ii) Correlation between input variables (being measured) and
   application performance

   (iii) Variability in the measured metrics and how this variability
   affects application performance
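
   As an illustration of such a model for VoIP (a rough sketch only,
   loosely patterned on the ITU-T G.107 E-model; the codec constants
   and the simplified delay term are illustrative assumptions, and
   only the R-to-MOS mapping follows G.107 directly):

   /* Map lower-layer metrics (loss, burstiness, delay) to an
    * estimated user-level MOS via a simplified R-factor model. */
   double r_to_mos(double r)   /* R-to-MOS mapping from G.107 */
   {
       if (r <= 0.0)   return 1.0;
       if (r >= 100.0) return 4.5;
       return 1.0 + 0.035 * r
              + r * (r - 60.0) * (100.0 - r) * 7.0e-6;
   }

   double estimate_mos(double loss_pct, double burst_ratio,
                       double one_way_delay_ms)
   {
       /* Assumed codec values (roughly G.711 with PLC); a real
        * model would take these from ITU-T G.113. */
       double ie = 0.0, bpl = 25.1;
       /* Effective loss impairment; burst_ratio >= 1, where 1
        * means random (non-bursty) loss. */
       double ie_eff = ie + (95.0 - ie) * loss_pct
                       / (loss_pct / burst_ratio + bpl);
       /* Crude linear delay impairment, reasonable below ~170 ms. */
       double id = 0.024 * one_way_delay_ms;
       double r = 93.2 - id - ie_eff;  /* default R0, no Is or A */
       return r_to_mos(r);
   }

   The accuracy caveats above apply directly: if burstiness (factor
   iii) is ignored, the same average loss rate can map to very
   different levels of application performance.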

6.5.4.  Middlebox presence

   The presence of a middlebox RFC 3303 [RFC3303], e.g., a proxy, NAT,
   redirect server, session border controller (SBC), or application
   layer gateway (ALG), may add variability to, or restrict the scope
   of, measurements of a metric.  For example, an SBC that does not
   process RTP loopback packets may block or locally terminate this
   traffic rather than pass it through to its target.

6.6.  Organization of Results

   The IPPM Framework [RFC2330] organizes the results of metrics into
   three related notions:

   o  singleton, an elementary instance, or "atomic" value.

   o  sample, a set of singletons with some common properties and some
      varying properties.

   o  statistic, a value derived from a sample through deterministic
      calculation, such as the mean.

   Many metrics can use this organization for their results, with or
   without the term names used by the IPPM working group.  Section 11
   of RFC 2330 [RFC2330] should be consulted for further details.
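
   In code, the three notions might map onto types like the following
   (a hypothetical layout for illustration, not a structure defined by
   IPPM):

   #include <stddef.h>

   /* singleton: one atomic observation, e.g. a single delay value */
   struct singleton {
       double when;    /* observation time  */
       double value;   /* observed quantity */
   };

   /* sample: a set of singletons sharing common properties (same
    * source/destination, same packet type, ...) while others vary */
   struct sample {
       struct singleton *obs;
       size_t n;
   };

   /* statistic: a value derived from a sample through a
    * deterministic calculation, such as the mean */
   double mean_statistic(const struct sample *s)
   {
       double sum = 0.0;
       size_t i;
       for (i = 0; i < s->n; i++)
           sum += s->obs[i].value;
       return s->n ? sum / (double)s->n : 0.0;
   }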

6.7.  Parameters, the variables of a metric

   Metrics are completely defined when all options and input variables
   have been identified and considered.  These variables are sometimes
   left unspecified in a metric definition, and their general name
   indicates that the user must set them and report them with the
   results.  Such variables are called "parameters" in the IPPM metric
   template.  The scope of the metric, the time at which it was
   conducted, the settings for timers and the thresholds for counters
   are all examples of parameters.

   All documents defining performance metrics SHOULD identify ALL key
   parameters for each metric.

7.  Performance Metric Development Process

7.1.  New Proposals for Metrics

   The following entry criteria will be considered for each proposal.

   Proposals SHOULD be prepared as Internet Drafts, describing the
   metrics and conforming to the qualifications above as much as
   possible.

   Proposals SHOULD be vetted by the corresponding protocol development
   Working Group prior to discussion by the PM Entity.  This aspect of
   the process includes an assessment of the need for the metrics
   proposed and assessment of the support for their development in IETF.

   Proposals SHOULD include an assessment of interaction and/or overlap
   with work in other Standards Development Organizations.

   Proposals SHOULD specify the intended audience and users of the
   metrics.  The development process encourages participation by members
   of the intended audience.

   Proposals SHOULD survey the existing standards work in the area and
   identify additional expertise that might be consulted, as well as
   possible overlap with other standards development organizations.

   Proposals SHOULD identify any security and IANA requirements.
   Security issues could potentially involve revealing of user
   identifying data or the potential misuse of active test tools.  IANA
   considerations may involve the need for a metrics registry.

7.2.  Reviewing Metrics

   Each metric SHOULD be assessed according to the following list of
   qualifications:

   o  Unambiguously defined?

   o  Units of Measure Specified?

   o  Measurement Interval Specified?

   o  Measurement Errors Identified?

   o  Repeatable?

   o  Implementable?

   o  Assumptions concerning underlying process?

   o  Use cases?

   o  Correlation with application performance / user experience?

7.3.  Proposal Approval

   New work item proposals SHALL be approved using the existing IETF
   process.

   The process depends on the form that the PM Entity ultimately
   takes: a directorate or a working group.

   In all cases, the proposal will need to achieve consensus, in the
   corresponding protocol development working group (or alternatively,
   an "Area" working group with broad charter), that there is interest
   and a need for the work.

   IF the PM Entity is a Directorate,

   THEN Approval SHOULD include the following steps

   o  consultation with the PM Directorate, using this framework
      document

   o  consultation with Area Director(s)

   o  and possibly IESG approval of a new or revised charter for the
      working group

   IF the PM Entity is a Working Group, and the protocol development
   working group decides to take up the work under its charter,

   THEN the approval is the same as the PM Directorate steps above, with
   the possible additional assignment of a PM Advisor for the work item.

   IF the PM Entity is a Working Group, and the protocol development
   working group decides it does not have sufficient expertise to take
   up the work, or the proposal falls outside the current charter,

   THEN

   Approval SHOULD include the following steps

   o  identification of protocol expertise to support metric development

   o  consensus in the PM working group that there is interest and a
      need for the work, and that a document conforming to this
      framework can be successfully developed

   o  consultation with Area Director(s)

   o  IESG approval of a revised charter for the PM working group

7.4.  PM Entity Interaction with other WGs

   The PM Entity SHALL work in partnership with the related protocol
   development WG when considering an Internet Draft that specifies
   performance metrics for a protocol.  A sufficient number of
   individuals with expertise must be willing to consult on the draft.
   If the related WG has concluded, comments on the proposal should
   still be sought from key RFC authors and former chairs, or from the
   WG mailing list if it was not closed.

   Existing mailing lists SHOULD be used; however, a dedicated mailing
   list MAY be initiated if necessary to facilitate work on a draft.

   In some cases, it will be appropriate to have the IETF session
   discussion during the related protocol WG session, to maximize
   visibility of the effort to that WG and expand the review.

7.5.  Standards Track Performance Metrics

   The PM Entity will manage the progression of PM RFCs along the
   Standards Track.  See [I-D.bradner-metricstest].  This may include
   the preparation of test plans to examine different implementations of
   the metrics to ensure that the metric definitions are clear and
   unambiguous (depending on the final form of the draft above).


8.  IANA Considerations

   This document makes no request of IANA.

   Note to RFC Editor: this section may be removed on publication as an
   RFC.


9.  Security Considerations

   In general, the existence of a framework for performance metric
   development does not constitute a security issue for the Internet.
   Metric definitions may introduce security issues, and this
   framework recommends that those defining metrics identify any such
   risk
   factors.

   The security considerations that apply to any active measurement of
   live networks are relevant here as well.  See [RFC4656].

   EDITOR'S NOTE: do we want to mention something specific to passive
   measurement?  For example, anonymization.


10.  Acknowledgements

   The authors would like to thank Al Morton, Dan Romascanu, Daryl Malas
   and Loki Jorgenson for their comments and contributions.


11.  References

11.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
              Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, September 2006.

11.2.  Informative References

   [E800]     "ITU-T Recommendation E.800. SERIES E: OVERALL NETWORK
              OPERATION, TELEPHONE SERVICE, SERVICE OPERATION AND HUMAN
              FACTORS".

   [G1000]    "ITU-T Recommendation G.1000. Communications Quality of
              Service: A framework and definitions".

   [I-D.bradner-metricstest]
              Bradner, S. and V. Paxson, "Advancement of metrics
              specifications on the IETF Standards Track",
              draft-bradner-metricstest-03 (work in progress),
              August 2007.

   [I-D.ietf-ippm-framework-compagg]
              Morton, A., "Framework for Metric Composition",
              draft-ietf-ippm-framework-compagg-08 (work in progress),
              June 2009.

   [I-D.ietf-ippm-spatial-composition]
              Morton, A. and E. Stephan, "Spatial Composition of
              Metrics", draft-ietf-ippm-spatial-composition-10 (work in
              progress), October 2009.

   [P.800]    "ITU-T Recommendation P.800: Methods for subjective
              determination of transmission quality".

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              May 1998.

   [RFC3303]  Srisuresh, P., Kuthan, J., Rosenberg, J., Molitor, A., and
              A. Rayhan, "Middlebox communication architecture and
              framework", RFC 3303, August 2002.

   [RFC3550]  Schulzrinne, H., Casner, S., Frederick, R., and V.
              Jacobson, "RTP: A Transport Protocol for Real-Time
              Applications", STD 64, RFC 3550, July 2003.

   [RFC3611]  Friedman, T., Caceres, R., and A. Clark, "RTP Control
              Protocol Extended Reports (RTCP XR)", RFC 3611,
              November 2003.

   [RFC4710]  Siddiqui, A., Romascanu, D., and E. Golovinsky, "Real-time
              Application Quality-of-Service Monitoring (RAQMON)
              Framework", RFC 4710, October 2006.

   [RFC5101]  Claise, B., "Specification of the IP Flow Information
              Export (IPFIX) Protocol for the Exchange of IP Traffic
              Flow Information", RFC 5101, January 2008.

   [RFC5102]  Quittek, J., Bryant, S., Claise, B., Aitken, P., and J.
              Meyer, "Information Model for IP Flow Information Export",
              RFC 5102, January 2008.

   [RFC5481]  Morton, A. and B. Claise, "Packet Delay Variation
              Applicability Statement", RFC 5481, March 2009.


Authors' Addresses

   Alan Clark
   Telchemy Incorporated
   2905 Premiere Parkway, Suite 280
   Duluth, Georgia  30097
   USA

   Phone:
   Fax:
   Email: alan.d.clark@telchemy.com
   URI:

   Benoit Claise
   Cisco Systems, Inc.
   De Kleetlaan 6a b1
   Diegem  1831
   Belgium

   Phone: +32 2 704 5622
   Fax:
   Email: bclaise@cisco.com
   URI: