Network Working Group              G. Almes, Advanced Network & Services
Internet Draft                   W. Cerveny, Advanced Network & Services
                                               P. Krishnaswamy, BellCore
                             J. Mahdavi, Pittsburgh Supercomputer Center
                              M. Mathis, Pittsburgh Supercomputer Center
                                       V. Paxson, Lawrence Berkeley Labs
Expiration Date: May 1997                                  November 1996


                   Framework for IP Provider Metrics
                <draft-ietf-bmwg-ippm-framework-00.txt>


1. Status of this Memo

   This document is an Internet Draft.  Internet Drafts are working doc-
   uments  of the Internet Engineering Task Force (IETF), its areas, and
   its working groups.  Note that other groups may also distribute work-
   ing documents as Internet Drafts.

   Internet  Drafts  are  draft  documents  valid  for  a maximum of six
   months, and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet Drafts as reference
   material or to cite them other than as ``work in progress''.

   To learn the current status of any Internet Draft, please  check  the
   ``1id-abstracts.txt'' listing contained in the Internet Drafts shadow
   directories  on  ftp.is.co.za   (Africa),   nic.nordu.net   (Europe),
   munnari.oz.au  (Pacific  Rim),  ds.internic.net  (US  East Coast), or
   ftp.isi.edu (US West Coast).

   This memo provides information for the Internet community.  This memo
   does  not  specify an Internet standard of any kind.  Distribution of
   this memo is unlimited.


2. Introduction

   The purpose of this memo is to define a general framework for partic-
   ular metrics to be developed by the IP Provider Metrics (IPPM) effort
   within the Benchmarking Methodology Working Group (BMWG) of the Oper-
   ational Requirements Area (OR) of the IETF.

   We  begin  by  laying  out  several  criteria for the metrics that we
   adopt.  These criteria are designed to promote an  IPPM  effort  that
   will  maximize an accurate common understanding by Internet users and
   Internet providers of the performance and reliability both of end-to-
   end paths through the Internet and of specific 'IP clouds' that
   comprise portions of those paths.

   We next define some Internet vocabulary that will allow us  to  speak
   clearly about Internet components such as routers, paths, and clouds.

   We next  define  three  fundamental  concepts,  metrics,  measurement
   methodology,  and  uncertainties/errors,  that will allow us to speak
   clearly about specific metrics.  Given these concepts, we proceed  to
   discuss  how  they  relate to the analytical framework shared by many
   aspects of the Internet engineering discipline.   We  then  introduce
   the  notion  of  empirically defined metrics, and continue to discuss
   two forms of composition.

   Based on experience in applying the (original  Jul-96)  framework  to
   specific  metrics  for delay, we have introduced (in the Nov-96 revi-
   sion) some additional material on measurement technology.  This  con-
   sists  of  guidelines  related  to  clock issues, the notion of 'wire
   time' as distinct from 'host time', and some ideas  for  sampling  of
   singleton metrics.

   In  some  sections of the memo, we will surround some commentary text
   with the brackets {Comment: ... }.  We stress that this commentary is
   only  commentary, and is not itself part of the framework document or
   a proposal of particular metrics.  In some cases this commentary will
   discuss  some  of the properties of metrics that might be envisioned,
   but the reader should assume that any  such  discussion  is  intended
   only  to shed light on points made in the framework document, and not
   to suggest any specific metrics.



3. Criteria for IP Provider Metrics

   The overarching goal of the IP Provider Metrics effort is to  achieve
   a  situation  in which users and providers of Internet transport ser-
   vice have an accurate common understanding  of  the  performance  and
   reliability of the Internet component 'clouds' that they use/provide.

   To achieve  this,  performance  and  reliability  metrics  for  paths
   through  the  Internet must be developed.  In several meetings of the
   BMWG criteria for these metrics have been specified:
 +    The metrics must be concrete and well-defined,
 +    A methodology for a metric should have the property that it is
      repeatable: if the methodology is used multiple times under iden-
      tical conditions, it should result in the same measurements.
 +    The metrics must exhibit no bias for IP clouds implemented with
      identical technology,
 +    The metrics must exhibit understood and fair bias  for  IP  clouds
      implemented with non-identical technology,
 +    The metrics must be useful to users and providers in understanding
      the performance they experience or provide,
 +    The metrics must avoid inducing artificial performance goals.


4. Terminology for Paths and Clouds

   The following list defines terms that  need  to  be  precise  in  the
   development of path metrics.  We begin with the low-level notions of
   host, router, and link, then define the notions of path, IP cloud,
   and exchange that allow us to segment a path into relevant pieces.


host A computer capable of communicating using the  Internet  protocols;
     includes "routers".

link A  single  link-level  connection  between  two  (or  more)  hosts;
     includes leased lines, ethernets, frame relay clouds, etc.

router
     A host which facilitates network-level communication between  hosts
     by forwarding IP packets.

path A  sequence  of the form < h0, l1, h1, ..., ln, hn >, where n >= 0,
     each hi is a host, each li is a link  between  hi-1  and  hi,  each
     h1...hn-1  is  a  router.  In an appropriate operational configura-
     tion, the links and routers in the  path  facilitate  network-layer
     communication of packets from h0 to hn.  Note that path is a unidi-
     rectional concept.

subpath
     Given a path, a subpath is any subsequence of the given path  which
     is  itself  a path.  (Thus, the first and last element of a subpath
     is a host.)

cloud
     An undirected (possibly cyclic) graph whose  vertices  are  routers
     and whose edges are links that connect pairs of routers.  Formally,
     ethernets, frame relay clouds, and other links  that  connect  more
     than  two  routers  are modelled as fully-connected meshes of graph
     edges.  Note that to connect to a  cloud  means  to  connect  to  a
     router  of  the  cloud over a link; this link is not itself part of
     the cloud.

exchange
     A special case of a link, an exchange directly  connects  either  a
     host to a cloud and/or one cloud to another cloud.

cloud subpath
     A  subpath  of  a  given  path, all of whose hosts are routers of a
     given cloud.

path digest
     A sequence of the form < h0, e1, C1, ..., en, hn >, where n  >=  0,
     h0 and hn are hosts, each e1 ... en is an exchange, and each C1 ...
     Cn-1 is a cloud subpath.


5. Three Fundamental Concepts


5.1. Metrics

   In the operational Internet, there are several quantities related  to
   the  performance  and  reliability  of the Internet that we'd like to
   know the value of.  When such a quantity is carefully  specified,  we
   term  the  quantity a metric.  We anticipate that there will be sepa-
   rate RFCs for each metric (or for each closely related group of  met-
   rics).

   In some cases, there might be no obvious means to effectively measure
   the metric; this is allowed, and even understood to be very useful in
   some  cases.   It is required, however, that the specification of the
   metric be as clear as possible about what quantity  is  being  speci-
   fied.    Thus,  difficulty  in  practical  measurement  is  sometimes
   allowed, but ambiguity in meaning is not.

   Each metric will be defined in terms of standard  units  of  measure-
   ment.  The international metric system will be used, with the follow-
   ing points specifically noted:
 +    When a unit is expressed in simple meters (for distance/length) or
      seconds  (for  duration), appropriate related units based on thou-
      sands or thousandths of acceptable units  are  acceptable.   Thus,
      distances  expressed  in  kilometers  (Km), durations expressed in
      milliseconds (msec), or microseconds (usec) are allowed,  but  not
      centimeters  (because  the  prefix is not in terms of thousands or
       thousandths).
 +    When a unit is expressed in a combination of units, appropriate
      related  units  based  on  thousands  or thousandths of acceptable
      units are acceptable, but all such thousands/thousandths  must  be
      grouped  at  the beginning.  Thus, kilo-meters per second (Km/sec)
      is allowed, but meters per millisecond is not.
 +    The unit of information is the bit.
 +    When metric prefixes are  used  with  bits  or  with  combinations
      including  bits,  those  prefixes  will  have their metric meaning
      (related to decimal 1000), and not the meaning  conventional  with
      computer  storage  (related  to  decimal  1024).   In any RFC that
      defines a metric whose units include bits, this convention will be
      followed and will be repeated to ensure clarity for the reader.
 +    When a time is given, it will be taken in UTC.
   Note  that  these  points apply to the specifications for metrics and
   not, for example, to packet formats where octets will likely be  used
   in preference/addition to bits.

   Finally, we note that some metrics may be defined purely in terms of
   other metrics; such metrics are called 'derived metrics'.


5.2. Measurement Methodology

   For a given set of well-defined metrics, a number  of  distinct  mea-
   surement methodologies may exist.  A partial list includes:
 +    Direct  measurement  of  a  performance metric using injected test
      traffic.  Example: measurement of the round-trip delay  of  an  IP
      packet of a given size over a given route at a given time.
 +    Projection  of  a  metric from lower-level measurements.  Example:
      given accurate measurements of propagation delay and bandwidth for
      each  step  along a path, projection of the complete delay for the
      path for an IP packet of a given size.
 +    Estimation of a constituent metric from a set of more aggregated
      measurements.  Example: given accurate measurements of delay for a
      given one-hop path for IP packets of different  sizes,  estimation
      of propagation delay for the link of that one-hop path.
 +    Estimation  of  a  given  metric at one time from a set of related
      metrics at other times.  Example: given an accurate measurement of
      flow  capacity  at  a  past  time, together with a set of accurate
      delay measurements for that past time and the  current  time,  and
      given  a  model  of flow dynamics, estimate the flow capacity that
      would be observed at the current time.
   This list is by no means exhaustive.  The purpose is to point out the
   variety of measurement techniques.

   When  a given metric is specified, a given measurement approach might
   be noted and discussed.  That approach, however, is not formally part
   of the specification.

   A methodology for a metric should have the property that it is
   repeatable: if the methodology is used multiple times under identical
   conditions, it should result in consistent measurements.

   Backing  off a little from the word 'identical' in the previous para-
   graph, we could more accurately use the word 'continuity' to describe
   a  property  of a given methodology: a methodology for a given metric
   exhibits continuity  if,  for  small  variations  in  conditions,  it
   results  in small variations in the resulting measurements.  Slightly
   more precisely, for every positive epsilon, there exists  a  positive
   delta,  such  that if two sets of conditions are within delta of each
   other, then the resulting measurements will be within epsilon of each
   other.   At  this  point, this should be taken as a heuristic driving
   our intuition about one kind of robustness property rather than as  a
   precise notion.

   A  metric  that has at least one methodology that exhibits continuity
   is said itself to exhibit continuity.

   Note that some metrics, such as hop-count along a path, are  integer-
   valued  and  therefore  cannot  exhibit continuity in quite the sense
   given above.

   Note further that, in practice, it may not be possible to know (or
   be  able  to  quantify) the conditions relevant to a measurement at a
   given time.  For example, since the instantaneous load (in packets to
   be  served)  at  a given router in a high-speed wide-area network can
   vary widely over relatively brief periods and will be very  hard  for
   an  external observer to quantify, various statistics of a given met-
   ric may be more repeatable, or may  better  exhibit  continuity.   In
   that  case  those  particular statistics should be specified when the
   metric is specified.

   Finally, some measurement methodologies may be 'conservative' in the
   sense that the act of measurement does not modify, or only slightly
   modifies, the value of the performance metric they attempt to mea-
   sure.  {Comment: for example, in a wide-area high-speed network under
   modest load, a test using several small 'ping' packets to measure
   delay would likely not interfere (much) with the delay properties of
   that network as observed by others.  The corresponding statement
   about tests using a large flow to measure flow capacity would likely
   fail.}


5.3. Measurements, Uncertainties, and Errors

   Even the very best measurement methodologies for the most well-
   behaved metrics will exhibit errors.  Those who develop such measure-
   ment methodologies, however, should strive to:
 +    minimize their uncertainties/errors,
 +    understand and document the sources of uncertainty/error, and
 +    quantify the amounts of uncertainty/error.
   By doing so, the measurement community will work together to improve
   our ability to understand the  performance  and  reliability  of  the
   Internet.

   For example, when developing a method for measuring delay, understand
   how any errors in your clocks introduce errors into your  delay  mea-
   surement,  and  quantify  this  effect  as  well as you can.  In some
   cases, this will result in a requirement that a clock be of at least
   a certain quality if it is to be used to make a certain measurement.

   As a second example, consider the timing  error  due  to  measurement
   overheads  within  the computer making the measurement, as opposed to
   delays due to the Internet component being measured.  The former is a
   measurement  error, while the latter reflects the metric of interest.
   Note that one technique that can help avoid this overhead is the  use
   of  a  packet  filter/sniffer,  running  on  a separate computer that
   records network packets and timestamps them accurately.  The  result-
   ing trace can then be analysed to assess the test traffic, minimising
   the effect of measurement host delays, or  at  least  allowing  those
   delays to be accounted for.

   Finally, we note that derived metrics (defined above) or metrics that
   exhibit spatial or temporal composition (defined below) offer an
   opportunity to relate the uncertainty analyses of their component
   measurements to one another.


6. Metrics and the Analytical Framework

   As the Internet has evolved from the early  packet-switching  studies
   of the 1960s, the Internet engineering community has evolved a common
   analytical framework of concepts.  This analytical framework,  or  A-
   frame,  used  by  designers  and  implementers of protocols, by those
   involved in measurement, and by those who study computer network per-
   formance using the tools of simulation and analysis, is of great
   advantage to our work.  A major objective here is to generate network
   characterizations  that are consistent in both analytical and practi-
   cal settings, since this will maximize the chances that non-empirical
   network study can be better correlated with, and used to further our
   understanding of, real network behavior.

   Whenever possible, therefore, we would like to develop  and  leverage
   the  A-frame.   Thus, whenever a metric to be specified is understood
   to be closely related to concepts (such as  the  Internet  components
   defined  above)  within  the  A-frame, we will attempt to specify the
   metric in the A-frame's terms.   In  such  a  specification  we  will
   develop the A-frame by precisely defining the concepts needed for the
   metric, then leverage the A-frame by defining the metric in terms  of
   those concepts.

   Such a metric will be called an 'analytically specified metric' or,
   more simply, an analytical metric.

   {Comment: Examples of such analytical metrics might include:

propagation time of a link
     The time, in seconds, required by a single bit to travel  from  the
     output  port  on  one Internet host across a single link to another
     Internet host.

bandwidth of a link for packets of size k
     The capacity, in bits/second, at which the link carries IP packets
     of size k bytes, where only the bits of the IP packet are counted.

route
     The path, as defined in Section 4, from A to B at a given time.

hop count of a route
     The value 'n' of the route path.
     }

   Note that we make no a priori list of just what A-frame concepts
   will emerge in these specifications, but we do encourage their use
   and urge that they be carefully specified so that, as our set of
   metrics develops, so will a specified set of A-frame concepts tech-
   nically consistent with each other and consonant with the common
   understanding of those concepts within the general Internet commu-
   nity.

   These A-frame concepts will be intended to abstract from actual
   Internet components in such a way that:
 +    the essential function of the component is retained,
 +    properties of the component relevant to the metrics we aim to cre-
      ate are retained,
 +    a  subset  of these component properties are defined as analytical
      metrics, and
 +    those properties of actual Internet  components  not  relevant  to
      defining the metrics we aim to create are dropped.

   {Comment:  for  example,  when considering a router in the context of
   packet forwarding, we might model the  router  as  a  component  that
   receives packets on an input link, queues them on a FIFO packet queue
   of finite size, employs tail-drop when the packet queue is full,  and
   forwards  them  on  an  output  link.   The  transmission  speed  (in
   bits/second) of the input and output links, the latency in the router
   (in  seconds), and the maximum size of the packet queue (in bits) are
   relevant analytical metrics.}

   In some cases, such analytical metrics used in relation to  a  router
   will  be  very closely related to specific metrics of the performance
   of Internet paths.  For example, an obvious formula (L + P/B) involv-
   ing the latency in the router (L), the packet size (in bits) (P), and
   the transmission speed of the output link (B) might closely  approxi-
   mate  the  increase  in  packet delay due to the insertion of a given
   router along a path.
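
   As an illustration only, the short C program below evaluates this
   approximation directly; the function name and the numbers in the
   example are ours (hypothetical), not part of any proposed metric.

   #include <stdio.h>

   /* Approximate increase in one-way packet delay, in seconds, caused
    * by inserting a router with latency L (seconds) whose output link
    * transmits at B (bits/second), for a packet of P bits. */
   double router_delay_increase(double L, double P, double B)
   {
       return L + P / B;
   }

   int main(void)
   {
       /* Example: 100 usec router latency, 1500-byte (12000-bit)
        * packet, 10 Mbit/sec output link: 0.0001 + 0.0012 = 1.3 msec. */
       printf("%g seconds\n", router_delay_increase(100e-6, 12000.0, 10e6));
       return 0;
   }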

   We stress, however, that well-chosen and well-specified A-frame  con-
   cepts  and  their analytical metrics will support more general metric
   creation efforts in less obvious ways.

   {Comment: for example, when considering the flow capacity of a  path,
   it may be of real value to be able to model each of the routers along
   the path as packet forwarders as above.   Techniques  for  estimating
   the  flow  capacity of a path might use the maximum packet queue size
   as a parameter in decidedly non-obvious ways.  For  example,  as  the
   maximum  queue  size  increases, so will the ability of the router to
   continuously move traffic along an output link  despite  fluctuations
   in  traffic  from  an input link.  Estimating this increase, however,
   remains a research topic.}

   The key role of these concepts is to abstract the properties  of  the
   Internet components relevant to given metrics.  Judgement is required
   to avoid making assumptions that bias the modeling and metric  effort
   toward one kind of design.

   {Comment:  for  example, routers might not use tail-drop, even though
   tail-drop might be easier to model analytically.}

   Note that, when we specify A-frame concepts and  analytical  metrics,
   we will inevitably make simplifying assumptions.  Further, as noted
   above, judgement is required in making these assumptions in order to
   make them best suit our purposes.

   Finally,  note that different elements of the A-frame might well make
   different simplifying assumptions.  For example, the abstraction of a
   router  used  to  further  the  definition  of  delay might treat the
   router's packet queue as a single FIFO queue, but the abstraction  of
   a  router  used to further the definition of the handling of an RSVP-
   enabled packet might treat the router's packet queue as supporting
   bounded delay -- a contradictory assumption.  This is not to say that
   we make contradictory assumptions at the same time, but that two dif-
   ferent parts of our work might refine the simpler base concept in two
   divergent ways for different purposes.


7. Empirically Specified Metrics

   There are useful performance and reliability metrics that do not  fit
   so  neatly  into  the  A-frame, usually because the A-frame lacks the
   complexity or power for dealing with them.  For  example,  "the  best
   flow  capacity  achievable  along  a path using an RFC-1122-compliant
   TCP" would be good to be able to measure, but we have  no  analytical
   framework  of  sufficient  complexity  to  allow us to cast that flow
   capacity as an analytical metric.

   These notions can still be well specified  by  instead  describing  a
   reference methodology for measuring them.

   Such  a  metric  will be called an 'empirically specified metric', or
   more simply, an empirical metric.

   Such empirical metrics should have three properties:
 +    we should have a clear definition for each in terms of  real-world
      Internet components,
 +    we should have at least one effective means to measure them, and
 +    to the extent possible, we should have a (necessarily incomplete)
      understanding of the metric in terms of the A-frame so that we can
      use our measurements to reason about the performance and reliabil-
      ity of A-frame components and of aggregations  of  A-frame  compo-
      nents.



8. Two Forms of Composition


8.1. Spatial Composition of Metrics

   In  some  cases,  it may be realistic and useful to define metrics in
   such a fashion that they exhibit spatial composition.

   By spatial composition, we mean a characteristic of  some  path  met-
   rics, in which the metric as applied to a (complete) path can also be
   defined for various subpaths (cf. definition above), and in which the
   appropriate  A-frame concepts for the metric suggest useful relation-
   ships between the metric applied to these various subpaths (including
   the complete path, the various cloud subpaths of a given path digest,
   and even single routers along the path).  The effectiveness  of  spa-
   tial composition depends:
 +    on the usefulness in analysis of these relationships as applied to
      the relevant A-frame components, and
 +    on the practical use of the corresponding relationships as applied
      to metrics and to measurement methodologies.

   {Comment:  for  example, consider some metric for delay of a 100-byte
   packet across a path P, and consider further a path digest  <h0,  e1,
   C1, ..., en, hn> of P.  The definition of such a metric might include
   a conjecture that the delay across P is very nearly the  sum  of  the
   corresponding metric across the exchanges (ei) and clouds (Ci) of the
   given path digest.  The definition would further include  a  note  on
   how  a corresponding relation applies to relevant A-frame components,
   both for the path P and for the exchanges  and  clouds  of  the  path
   digest.}
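
   In rough notation (ours, intended only to illustrate the kind of
   conjecture meant), writing D(.) for the delay metric and using the
   path digest <h0, e1, C1, ..., en, hn>, such a conjecture might read:

      D(P) \approx \sum_{i=1}^{n} D(e_i) \;+\; \sum_{i=1}^{n-1} D(C_i)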

   When the definition of a metric includes a conjecture that the metric
   across the path is related to the metric across the subpaths  of  the
   path,  that  conjecture  constitutes a claim that the metric exhibits
   spatial composition.  The definition should then include:
 +    the specific conjecture applied to the metric,
 +    a justification of the practical utility  of  the  composition  in
      terms  of  making accurate measurements of the metric on the path,
      and
 +    a justification of the usefulness of the composition in  terms  of
      making analysis of the path using A-frame concepts more effective.


8.2. Temporal Composition of Formal Models and Empirical Metrics

   In some cases, it may be realistic and useful to  define  metrics  in
   such a fashion that they exhibit temporal composition.

   By  temporal  composition, we mean a characteristic of some path met-
   rics, in which the metric as applied to a path at a given time  T  is
   also  defined  for various times t0 < t1 < ... < tn < T, and in which
   the appropriate A-frame concepts for the metric suggest useful rela-
   tionships  between  the  metric  applied at times t0, ..., tn and the
   metric applied at time T.  The effectiveness of temporal  composition
   depends:
 +    on the usefulness in analysis of these relationships as applied to
      the relevant A-frame components, and
 +    on the practical use of the corresponding relationships as applied
      to metrics and to measurement methodologies.

   {Comment:  for  example,  consider some  metric for the expected flow
   capacity across a path P during the  five-minute  period  surrounding
   the time T, and suppose further that we have the corresponding values
   for each of the four previous five-minute periods t0, t1, t2, and t3.
   The  definition  of such a metric might include a conjecture that the
   flow capacity at time T can be  estimated  from  a  certain  kind  of
   extrapolation  from  the values of t0, ..., t3.  The definition would
   further include a note on how a  corresponding  relation  applies  to
   relevant A-frame components.

   Note:  any (spatial or temporal) compositions involving flow capacity
   are likely to be subtle, and temporal compositions are generally more
   subtle  than  spatial  compositions,  so the reader should understand
   that the foregoing example is intentionally naive.}

   When the definition of a metric includes a conjecture that the metric
   across the path at a given time T is related to the metric across the
   path for a set of other times, that conjecture  constitutes  a  claim
   that the metric exhibits temporal composition.  The definition should
   then include:
 +    the specific conjecture applied to the metric,
 +    a justification of the practical utility  of  the  composition  in
      terms  of  making accurate measurements of the metric on the path,
      and
 +    a justification of the usefulness of the composition in  terms  of
      making analysis of the path using A-frame concepts more effective.


9. Two Sets of Issues related to Time


9.1. Clock Issues

   Measurements of time lie at  the  heart  of  many  Internet  metrics.
   Because  of this, it will often be crucial when designing a methodol-
   ogy for measuring a metric  to  understand  the  different  types  of
   errors  and  uncertainties  introduced  by imperfect clocks.  In this
   section we define terminology for discussing the  characteristics  of
   clocks  and  touch  upon  related measurement issues which need to be
   addressed by any sound methodology.

   The Network Time Protocol (NTP; RFC 1305) defines a nomenclature  for
   discussing  clock characteristics, which we will also use when appro-
   priate [Mi92].  The main goal of NTP is to provide accurate timekeep-
   ing  over fairly long time scales, such as minutes to days, while for
   measurement purposes often what is more important is short-term accu-
   racy,  between  the beginning of the measurement and the end, or over
   the course of gathering a body of measurements (a sample).  This dif-
   ference  in  goals sometimes leads to different definitions of termi-
   nology as well, as discussed below.

   To begin, we define a clock's "offset" at a particular moment as  the
   difference between the time reported by the clock and the "true" time
   as defined by international standards.  If the clock reports  a  time
   Tc and the true time is Tt, then the clock's offset is Tc - Tt.

   We  will refer to a clock as "accurate" at a particular moment if the
   clock's offset is zero, and more generally a  clock's  "accuracy"  is
   how  close  the  absolute  value  of the offset is to zero.  For NTP,
   accuracy also includes a notion of the frequency of  the  clock;  for
   our  purposes,  we split out this notion into that of "skew", because
   we define accuracy in terms of a single moment in  time  rather  than
   over an interval of time.

   A  clock's  "skew" at a particular moment is the frequency difference
   (first derivative of its offset with respect to  true  time)  between
   the clock and true time.

   As  noted  in  RFC  1305, real clocks exhibit some variation in skew.
   That is, the second derivative of the clock's offset with respect  to
   true time is generally non-zero.  In keeping with RFC 1305, we define
   this quantity as the clock's "drift".
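
   Summarizing these three definitions in rough mathematical notation
   (ours), with T_c(t) the time reported by the clock when the true
   time is t:

      \mathrm{offset}(t) = T_c(t) - t, \qquad
      \mathrm{skew}(t) = \frac{d}{dt}\,\mathrm{offset}(t), \qquad
      \mathrm{drift}(t) = \frac{d^2}{dt^2}\,\mathrm{offset}(t)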

   A clock's "resolution" is the smallest unit by which the clock's time
   is  updated.   It  gives  a  lower  bound on the clock's uncertainty.
   (Note that clocks can have very fine resolutions and yet be wildly
   inaccurate.)  Resolution is defined in terms of seconds.  However,
   resolution is relative to the clock's reported time and not  to  true
   time,  so  for  example  a  resolution of 10 msec only means that the
   clock updates its notion of time in 0.01 second increments, not  that
   this is the true amount of time between updates.

   {Comment: Systems differ on how an application interface to the clock
   reports the time on subsequent calls during which the clock  has  not
   advanced.   Some  systems  simply  return  the same unchanged time as
   given for previous calls.  Others may add a small  increment  to  the
   reported  time to maintain monotonic increasing timestamps.  For sys-
   tems that do the latter, we do *not* consider these small  increments
   when defining the clock's resolution.  They are instead an impediment
   to assessing the clock's resolution, since a natural method for doing
   so  is  to  repeatedly query the clock to determine the smallest non-
   zero difference in reported times.}

   It is expected that a clock's resolution  changes  only  rarely  (for
   example, due to a hardware upgrade).

   There are a number of interesting metrics for which some natural mea-
   surement methodologies involve comparing times reported by  two  dif-
   ferent  clocks.   An  example  is  one-way packet delay (currently an
   Internet Draft [Al96]).  Here, the time  required  for  a  packet  to
   travel through the network is measured by comparing the time reported
   by a clock at one end of the packet's path, corresponding to when
   the  packet  first  entered  the network, with the time reported by a
   clock at the other end of the path, corresponding to when the  packet
   finished traversing the network.

   We  are  thus  also  interested in terminology for describing how two
   clocks C1 and C2 compare.  To do so, we introduce  terms  related  to
   those  above  in  which  the notion of "true time" is replaced by the
   time as reported by clock C1.  For example, clock C2's  offset  rela-
   tive  to  C1  at  a particular moment is Tc2 - Tc1, the instantaneous
   difference in time reported by C2 and C1.   To  disambiguate  between
   the  use  of  the  terms  to compare two clocks versus the use of the
   terms to compare to true time, we will in the former case use the
   phrase "relative".  So the offset defined earlier in this paragraph
   is the "relative offset" between C2 and C1.  {Comment: Note that  the
   notion  of  "resolution"  does  not  have  an  analog  when comparing
   clocks.}

   If two clocks are "accurate" with respect to one another (their rela-
   tive  offset  is  zero), we will refer to the pair of clocks as "syn-
   chronized".  Note that clocks can be highly  synchronized  yet  arbi-
   trarily  inaccurate  in  terms of how well they tell true time.  This
   point is important because for many Internet measurements,
   synchronization between two clocks is more important than the accu-
   racy of the clocks.  The same is *not* true of skew: it is  generally
   (much) more important that the clocks have minimal absolute skew than
   that they have  minimal  relative  skew.   These  distinctions  arise
   because  for  Internet  measurement  what is often most important are
   differences in time as  computed  by  comparing  the  output  of  two
   clocks.   The  process  of computing the difference removes any error
   due to clock inaccuracies with respect to true time; but it  is  cru-
   cial  that  the differences themselves accurately reflect differences
   in true time.

   Measurement methodologies will often begin with the step of  assuring
   that  two  clocks  are  synchronized and have minimal skew and drift.
   {Comment: An effective way to assure these conditions (and also clock
   accuracy) is by using clocks that derive their notion of time from an
   external source, rather than only the host computer's clock.   (These
   latter  are often subject to large errors.)  It is further preferable
   that the clocks directly derive their time,  for  example  by  having
   immediate access to a GPS (Global Positioning System) unit.}

   Two  important  concerns  arise if the clocks indirectly derive their
   time using a network time synchronization protocol such as NTP:
 +    First, NTP's accuracy depends in part on the properties  (particu-
      larly  delay)  of  the  Internet  paths used by the NTP peers, and
      these might be exactly the properties that we wish to measure,  so
      it would be unsound to use NTP to calibrate such measurements.
 +    Second,  NTP  focuses  on  clock  accuracy,  which can come at the
      expense of short-term clock skew and drift.  For example,  when  a
      host's  clock  is indirectly synchronized to a time source, if the
      synchronization intervals occur infrequently, then the  host  will
      sometimes  be faced with the problem of how to adjust its current,
      incorrect time, Ti, with a considerably different,  more  accurate
      time  it  has just learned, Ta.  Two general ways in which this is
      done are to either immediately set the current time to Ta,  or  to
      adjust  the  local  clock's  update frequency (hence, its skew) so
      that at some point in the future the local  time  Ti'  will  agree
      with  the  more accurate time Ta'.  The first mechanism introduces
      discontinuities and  can  also  violate  common  assumptions  that
      timestamps  are  monotone  increasing.  If the host's clock is set
      backward in time, sometimes this can be easily detected.   If  the
      clock  is  set forward in time, this can be harder to detect.  The
      skew induced by the second  mechanism  can  lead  to  considerable
      inaccuracies  when  computing  differences  in  time, as discussed
      above.

   To illustrate why skew is a crucial concern, consider samples of one-
   way  delays  between two Internet hosts made at one minute intervals.
   The true transmission delay between the hosts might plausibly be on
   the order of 50 msec for a transcontinental path.  If the skew
   between the two clocks is 0.01%, that is,  1  part  in  10,000,  then
   after  10  minutes  of observation the error introduced into the mea-
   surement is 60 msec.  Unless corrected, this error is enough to  com-
   pletely  wipe out any accuracy in the transmission delay measurement.
   Finally, we note that assessing skew  errors  between  unsynchronized
   network  clocks  is an open research area, so we are not aware of any
   further guidance presently available for how to compensate for  these
   errors.   This  shortcoming  makes  use of a solid, independent clock
   source such as GPS especially desirable.
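
   As a purely illustrative check of the arithmetic above, the short C
   program below accumulates the error contributed by a constant rela-
   tive skew over an observation period; the variable names and values
   are ours.

   #include <stdio.h>

   int main(void)
   {
       double skew = 1e-4;          /* relative skew: 1 part in 10,000 */
       double observation = 600.0;  /* 10 minutes of observation, in seconds */

       /* Error accumulated in a time difference computed from the two
        * clocks after 'observation' seconds of true time. */
       double error = skew * observation;

       printf("accumulated error: %g seconds (%g msec)\n", error, error * 1e3);
       /* Prints 0.06 seconds (60 msec), larger than a typical 50 msec
        * transcontinental one-way delay. */
       return 0;
   }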


9.2. The Notion of "Wire Time"

   Internet measurement is often complicated  by  the  use  of  Internet
   hosts  themselves to perform the measurement.  These hosts can intro-
   duce delays, bottlenecks, and the like that are due  to  hardware  or
   operating  system  effects  and  have  nothing to do with the network
   behavior we would like to measure.

   In order to provide a general way of talking about these effects,  we
   introduce two notions of "wire time".  These notions are only defined
   in terms of a particular Internet link L.
 +    For a given packet P, the wire arrival time of P on L is the first
      time  T  at which all the bits of P have begun transmission across
      L.
 +    For a given packet P, the wire exit time of P on L  is  the  first
      time  T  at  which  all  the bits of P have completed transmission
      across L.
   Note that it may well be that some of P's bits have  finished  trans-
   mission  across  L  prior  to other bits beginning transmission -- in
   general, there may never be a time when all of  P  is  simultaneously
   being  transmitted,  which  is  why we need to pick a (somewhat arbi-
   trary) notion like "all the bits" in order  to  designate  a  precise
   time.   Also note that the link L may be comprised of multiple physi-
   cal channels.  For defining wire time, we consider these channels  to
   together  comprise  a  single  logical link, and P's wire time is the
   first time during which all of its bits have been sent  over  any  of
   the channels.

   It is possible, though one hopes uncommon, that a packet P might make
   multiple trips over a particular link L, due to  a  forwarding  loop.
   These  trips  might  even  overlap, depending on the link technology.
   Whenever this occurs, we define a separate wire time associated  with
   each instance of P seen on the link.  This definition is worth making
   because it serves as a reminder that notions like *the* unique time a
   packet passes a point in the Internet are inherently slippery.

   The  term  wire time has historically been used to loosely denote the
   time at which a packet appeared on a link, without exactly specifying
   whether  this  refers  to  the first bit, the last bit, or some other
   consideration.  This informal definition is generally already very
   useful, as it usually serves to mark the point at which a packet's
   delays stop being due to the endpoint hosts and become due to the
   network (and vice versa).

   When  appropriate,  metrics  should be defined in terms of wire times
   rather than host endpoint times,  so  that  the  metric's  definition
   highlights  the issue of separating delays due to the host from those
   due to the network.

   We note that these notions are delicate,  and  hope  to  improve  our
   understanding of them with experience.

   {Comment:  It  can sometimes be difficult to measure wire times.  One
   technique is to use a packet filter to monitor  traffic  on  a  link.
   The  architecture  of  these filters often attempts to associate with
   each packet a timestamp as close to the wire time  as  possible.   We
   note  however  that  one  common source of error is to run the packet
   filter on one of the endpoint hosts.   In  this  case,  it  has  been
   observed that some packet filters receive for some packets timestamps
   corresponding to when the packet was *scheduled* to be injected  into
   the  network,  rather  than  when it actually was *sent* out onto the
   network (wire time).  There can be a substantial  difference  between
   these two times.  A technique for dealing with this problem is to run
   the packet filter on a separate  host  that  passively  monitors  the
   given  link.  This can be problematic however for some link technolo-
   gies.}


10. Singletons, Samples, and Statistics

   In the process of applying early versions of the  Framework  to  spe-
   cific  metrics,  it became clear that a separation was needed between
   three distinct -- yet related -- notions:
 +    By a 'singleton' metric, we refer to metrics that are, in a sense,
      atomic.   For example, a single instance of one-way delay from one
      host to another might be defined as a singleton metric.
 +    By a 'sample' metric, we refer to metrics  derived  from  a  given
      singleton   metric  by  taking  a  number  of  distinct  instances
      together.  For example, a sample of one-way delays from  one  host
      to  another  taken  at  one-second intervals over a given one-hour
       period might be defined as a sample metric.
 +    By a 'statistical' metric, we refer to metrics derived from a
      given sample metric by taking some statistic of the values defined
      by the singleton metric on the sample.  For example, the  mean  of
      all  the  one-way  delay values on the sample given above might be
      defined as a statistical metric.
   By applying these notions of singleton, sample, and  statistic  in  a
   consistent way, we will be able to reuse lessons learned about how to
   define samples and statistics on various metrics.  The  orthogonality
   among  these three notions will thus make all our work more effective
   and more intelligible to the community.

   In the remainder of this section, we will cover some topics  in  sam-
   pling  and  statistics that we believe will be important to a variety
   of metric definitions and measurement efforts.


10.1. Methods of Collecting Samples

   The main reason for collecting samples is to see what sort of  varia-
   tions  and  consistencies  are  present in the metric being measured.
   These variations might be with respect to  different  points  in  the
   Internet,  or different measurement times.  When assessing variations
   based on a sample, one generally makes an assumption that the  sample
   is  "unbiased",  meaning  that the process of collecting the measure-
   ments in the sample did not skew the sample  so  that  it  no  longer
   accurately reflects the metric's variations and consistencies.

   One  common  way  of collecting samples is to make measurements sepa-
   rated by fixed amounts of time: periodic sampling.  Periodic sampling
   is  particularly attractive because of its simplicity, but it suffers
   from two potential problems:
 +    If the metric being measured itself  exhibits  periodic  behavior,
      then  there  is  a possibility that the sampling will observe only
      part of the periodic behavior  if  the  periods  happen  to  agree
      (either  directly, or if one is a multiple of the other).  Related
      to this problem is the notion that  periodic  sampling  is  highly
      predictable.   Predictable sampling is susceptible to manipulation
      if there are mechanisms by which a  network  component's  behavior
      can  be  temporarily  changed such that the sampling only sees the
      modified behavior.
 +    The act of measurement can perturb what  is  being  measured  (for
      example,  injecting  measurement traffic into a network alters the
      congestion level of the network), and repeated periodic  perturba-
      tions  can  drive  a  network into a state of synchronization (cf.
      [FJ94]), greatly  magnifying  what  might  individually  be  minor
      effects.

   A more sound approach is based on "random additive sampling".
   Samples are separated by independent, randomly generated intervals
   that  have  a  common  statistical distribution G(t).  The quality of
   this sampling depends on the distribution G(t).  For example, if G(t)
   generates  a constant value g with probability one, then the sampling
   reduces to periodic sampling with a period of g.


10.1.1. Poisson Sampling

   It can be proved that if G(t) is  an  exponential  distribution  with
   rate lambda, that is
   G(t) = 1 - exp(-lambda * t)
   then  the  arrival of new samples *cannot* be predicted, and the sam-
   pling is unbiased.  Furthermore, the sampling is asymptotically unbi-
   ased  even  if the act of sampling affects the network's state.  Such
   sampling is referred to as "Poisson sampling".  It is  not  prone  to
   inducing  synchronization,  it can be used to accurately collect mea-
   surements of periodic behavior, and it is not prone  to  manipulation
   by anticipating when new samples will occur.

   Because  of  these  valuable properties, samples of Internet measure-
   ments should be gathered using Poisson sampling  unless  there  is  a
   compelling reason to use a different approach.

   In  its  purest form, Poisson sampling is done by generating indepen-
   dent, exponentially distributed intervals and gathering a single mea-
   surement  after  each  interval has elapsed.  It can be shown that if
   starting at time T one performs Poisson sampling over an interval dT,
   during  which a total of N measurements happen to be made, then those
   measurements will be uniformly  distributed  over  the  interval  [T,
   T+dT].   So  another way of conducting Poisson sampling is to pick dT
   and N and generate N random sampling times uniformly over the  inter-
   val [T, T+dT].  The two approaches are equivalent, except if N and dT
   are externally known.  In that case, the property of not  being  able
   to  predict measurement times is weakened (the other properties still
   hold).  The N/dT approach has an advantage that  dealing  with  fixed
   values  of  N  and dT can be simpler than dealing with a fixed lambda
   but variable numbers of measurements over variably-sized intervals.
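
   To make the N/dT alternative concrete, the C sketch below (ours,
   with hypothetical parameter values) generates N sampling times uni-
   formly distributed over [T, T+dT] and sorts them into schedule
   order.

   #include <stdio.h>
   #include <stdlib.h>

   static int cmp_double(const void *a, const void *b)
   {
       double x = *(const double *)a, y = *(const double *)b;
       return (x > y) - (x < y);
   }

   /* Fill times[0..n-1] with n sampling times uniformly distributed
    * over [T, T+dT], sorted into increasing order. */
   static void uniform_schedule(double T, double dT, int n, double *times)
   {
       int i;
       for (i = 0; i < n; i++)
           times[i] = T + dT * drand48();
       qsort(times, n, sizeof(double), cmp_double);
   }

   int main(void)
   {
       double times[5];
       int i;

       srand48(42);                    /* fixed seed, for illustration */
       uniform_schedule(0.0, 300.0, 5, times);
       for (i = 0; i < 5; i++)
           printf("sample %d at t = %.1f seconds\n", i + 1, times[i]);
       return 0;
   }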


10.1.2. Geometric Sampling

   Closely related to Poisson sampling is "geometric sampling", in which
   external  events  are measured with a fixed probability p.  For exam-
   ple, one might capture all the packets over a link  but  only  record
   the  packet  to a trace file if a randomly generated number uniformly
   distributed between 0 and 1 is less than a given p.   Geometric  sam-
   pling has the same properties of being unbiased and not predictable
   in advance as Poisson sampling, so if it fits a particular Internet
   measurement  task, it too is sound.  See [CPB93] for more discussion.
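
   A minimal sketch of geometric sampling in C follows (ours; the
   notion of "event" and the value of p are placeholders).  Each
   observed event is kept independently with probability p, so the
   gaps between kept events are geometrically distributed.

   #include <stdio.h>
   #include <stdlib.h>

   /* Keep an observed event (e.g., a captured packet) with probability
    * p, where 0 < p <= 1. */
   static int keep_event(double p)
   {
       return drand48() < p;
   }

   int main(void)
   {
       int i, kept = 0;
       double p = 0.01;             /* record roughly 1 event in 100 */

       srand48(1);
       for (i = 0; i < 100000; i++)
           if (keep_event(p))
               kept++;              /* here one would write the packet
                                       to the trace file */
       printf("kept %d of 100000 simulated events\n", kept);
       return 0;
   }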


10.1.3. Generating Poisson Sampling Intervals

   To generate Poisson sampling intervals, one first determines the rate
   lambda  at  which  the  samples will on average be made (e.g., for an
   average sampling interval of 30 seconds, we have lambda  =  1/30,  if
   the units of time are seconds).  One then generates a series of expo-
   nentially-distributed (pseudo-)random numbers E1, E2, ...,  En.   The
   first  measurement is made at time E1, the next at time E1+E2, and so
   on.

   One    technique     for     generating     exponentially-distributed
   (pseudo-)random  numbers  is based on the ability to generate U1, U2,
   ..., Un,  (pseudo-)random  numbers  that  are  uniformly  distributed
   between  0 and 1.  Many computers provide libraries that can do this.
   Given such Ui, to generate Ei one uses:
   Ei = -log(Ui) / lambda
   where log(Ui) is the natural logarithm of Ui.
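
   A small C sketch of this computation follows (ours, illustrative
   only).  It uses drand48() as the uniform (pseudo-)random source and
   takes 1 - drand48(), which lies in (0, 1], so that the logarithm is
   always defined.

   #include <math.h>
   #include <stdio.h>
   #include <stdlib.h>

   /* Return one exponentially distributed interval, in seconds, with
    * mean 1/lambda. */
   static double exp_interval(double lambda)
   {
       return -log(1.0 - drand48()) / lambda;
   }

   int main(void)
   {
       double lambda = 1.0 / 30.0;  /* mean sampling interval: 30 seconds */
       int i;

       srand48(7);
       for (i = 0; i < 5; i++)
           printf("E%d = %.1f seconds\n", i + 1, exp_interval(lambda));
       return 0;
   }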

   Implementation details:

   There are at least three different methods for approximating  Poisson
   sampling, which we describe here as Methods 1 through 3.  Method 1 is
   the easiest to implement and has the most error, and method 3 is  the
   most  difficult  to  implement  and  has the least error (potentially
   none).

   Method 1 is to proceed as follows:
   1.  Generate E1 and wait that long.
   2.  Perform a measurement.
   3.  Generate E2 and wait that long.
   4.  Perform a measurement.
   5.  Generate E3 and wait that long.
   6.  Perform a measurement ...

   The problem with this approach is that the  "Perform  a  measurement"
   steps  themselves take time, so the sampling is not done at times E1,
   E1+E2, etc., but rather at E1, E1+M1+E2, etc., where Mi is the amount
   of  time required for the i'th measurement.  If Mi is very small com-
   pared to 1/lambda then the potential error introduced by  this  tech-
   nique  is likewise small.  As Mi becomes a non-negligible fraction of
   1/lambda, the potential error increases.

   Method 2 attempts to correct this error by taking  into  account  the
   amount of time required by the measurements (i.e., the Mi's) and
   adjusting the waiting intervals accordingly:
   1.  Generate E1 and wait that long.
   2.  Perform a measurement and measure M1, the time it took to do so.
   3.  Generate E2 and wait for a time E2-M1.
   4.  Perform a measurement and measure M2 ..

   This approach works fine as long as E{i+1} >= Mi.  But if E{i+1} < Mi
   then  it is impossible to wait the proper amount of time.  (Note that
   this case corresponds to needing to perform two measurements simulta-
   neously.)

   Method 3 is to generate a schedule of measurement times E1, E1+E2,
   etc., and then stick to it:
   1.  Generate E1, E2, ..., En.
   2.  Compute measurement times T1, T2, ..., Tn, as Ti = E1 + ... + Ei.
   3.  Arrange that at times T1, T2, ..., Tn, a measurement is made.

   By allowing simultaneous measurements, Method 3 avoids the  shortcom-
   ings  of  Methods  1  and  2.  If, however, simultaneous measurements
   interfere with one another, then Method 3 does not gain  any  benefit
   and may actually prove worse than Methods 1 or 2.

   For  Internet phenomena, it is not known to what degree the inaccura-
   cies of these methods are significant.  If the  Mi's  are  much  less
   than 1/lambda, then any of the three should suffice.  If the Mi's are
   less than 1/lambda but perhaps not greatly less,  then  Method  2  is
   preferred to Method 1.  If simultaneous measurements do not interfere
   with one another, then Method 3 is preferred, though it can  be  con-
   siderably harder to implement.
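
   As a rough sketch (ours, not a reference implementation) of Method 3
   in C: the schedule T1, ..., Tn is computed first as offsets from the
   start of the run, and each measurement is then dispatched in a child
   process at its scheduled time, so that a long-running measurement
   does not delay the next one.  The perform_measurement() function is
   a hypothetical placeholder.

   #include <math.h>
   #include <signal.h>
   #include <stdio.h>
   #include <stdlib.h>
   #include <time.h>
   #include <unistd.h>

   #define N 10

   /* Hypothetical measurement; in practice this might send and time a
    * test packet. */
   static void perform_measurement(int i)
   {
       printf("measurement %d\n", i);
   }

   int main(void)
   {
       double lambda = 1.0 / 30.0;  /* mean interval: 30 seconds */
       double schedule[N];
       double t = 0.0;
       struct timespec start, now;
       int i;

       signal(SIGCHLD, SIG_IGN);    /* do not leave zombie children behind */
       srand48(11);

       /* Steps 1 and 2: generate E1..En and form Ti = E1 + ... + Ei. */
       for (i = 0; i < N; i++) {
           t += -log(1.0 - drand48()) / lambda;
           schedule[i] = t;
       }

       /* Step 3: stick to the schedule; run each measurement in a
        * child process so that measurements may overlap. */
       clock_gettime(CLOCK_MONOTONIC, &start);
       for (i = 0; i < N; i++) {
           double elapsed, wait;

           clock_gettime(CLOCK_MONOTONIC, &now);
           elapsed = (now.tv_sec - start.tv_sec)
                   + (now.tv_nsec - start.tv_nsec) / 1e9;
           wait = schedule[i] - elapsed;
           if (wait > 0) {
               struct timespec ts;
               ts.tv_sec = (time_t)wait;
               ts.tv_nsec = (long)((wait - ts.tv_sec) * 1e9);
               nanosleep(&ts, NULL);
           }
           if (fork() == 0) {
               perform_measurement(i + 1);
               _exit(0);
           }
       }
       return 0;
   }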


10.2. Self-Consistency

   A fundamental requirement for a sound measurement methodology is that
   measurement be made using as few unconfirmed assumptions as possible.
   Experience  has  painfully  shown  how  easy  it is to make an (often
   implicit) assumption that turns out to be incorrect.  An  example  is
   incorporating  into a measurement the reading of a clock synchronized
   to a highly accurate source.  It is easy to assume that the clock  is
   therefore  accurate; but due to software bugs, a loss of power in the
   source, or a loss of communication between the source and the  clock,
   the clock could actually be quite inaccurate.

   This is not to argue that one must not make any assumptions when mea-
   suring, but rather that, to the extent which  is  practical,  assump-
   tions  should  be  tested.   One  powerful  way for doing so involves
   checking for self-consistency.  Such checking  applies  both  to  the
   observed value(s) of the measurement *and the values used by the
   measurement process itself*.  A simple example of the former is that
   when  computing  a  round trip time, one should check to see if it is
   negative.  Since negative time intervals are non-physical, if it ever
   is negative that finding immediately flags an error.  *These sorts of
   errors should then be investigated!*   It  is  crucial  to  determine
   where  the  error  lies,  because  only by doing so diligently can we
   build up faith in a methodology's fundamental soundness.   For  exam-
   ple,  it could easily be that the round trip time is negative because
   during the measurement the clock was set backward in the  process  of
   synchronizing  it with another source.  But it could also be that the
   measurement program accesses uninitialized memory in one of its  com-
   putations  and,  only very rarely, that leads to a bogus computation.
   This second error is more serious, if the same  program  is  used  by
   others  to perform the same measurement.  Furthermore, once uncovered
   it can be completely fixed.

   A more subtle example of  testing  for  self-consistency  comes  from
   gathering  samples  of  one-way  Internet delays.  If one has a large
   sample of such delays, it may well be highly telling to, for example,
   fit  a line to the pairs of (time of measurement, measured delay), to
   see if the resulting line has a clearly non-zero  slope.   If  so,  a
   possible  interpretation  is  that one of the clocks used in the mea-
   surements is skewed compared to the other.  Another interpretation is
   that the slope is actually due to genuine network effects.  Determin-
   ing which is indeed the case will often be highly illuminating.  Fur-
   thermore,  if  making  this  check is part of the methodology, then a
   finding that the long-term slope is very near zero is  positive  evi-
   dence  that  the measurements are probably not biased by a difference
   in skew.
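
   As a sketch of this check (not a prescribed method), the slope of a
   least-squares line fit to the (time of measurement, measured delay)
   pairs can be computed as follows; the function name and units are of
   our own choosing.

      #include <stddef.h>

      /*
       * Least-squares slope of measured delay versus time of
       * measurement.  A clearly non-zero slope suggests either
       * relative clock skew or a genuine network trend, and in
       * either case warrants investigation.  Assumes n >= 2 and
       * that the times t[] are not all identical.
       */
      double delay_trend_slope(const double *t, const double *d,
                               size_t n)
      {
          double t_mean = 0.0, d_mean = 0.0, num = 0.0, den = 0.0;
          size_t i;

          for (i = 0; i < n; i++) {
              t_mean += t[i];
              d_mean += d[i];
          }
          t_mean /= n;
          d_mean /= n;
          for (i = 0; i < n; i++) {
              num += (t[i] - t_mean) * (d[i] - d_mean);
              den += (t[i] - t_mean) * (t[i] - t_mean);
          }
          return num / den;    /* delay units per time unit */
      }

   The sign and magnitude of the result, expressed in (say) milliseconds
   of delay per hour of measurement time, then guide the investigation
   described above.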

   A final example illustrates checking the measurement  process  itself
   for  self-consistency.  Above we outline Poisson sampling techniques,
   based on generating  exponentially-distributed  intervals.   A  sound
   measurement methodology would include testing the generated intervals
   to see whether they are indeed exponentially distributed (and also to
   see if they suffer from correlation).  In appendix [To Be Written] we
   discuss and give C code for one such  technique,  a  general-purpose,
   well-regarded  goodness-of-fit test called the Anderson-Darling test.

   Finally, we note that what is truly relevant for Poisson sampling  of
   Internet  metrics  is  often  not when the measurements began but the
   wire times corresponding to the  measurement  process.   These  could
   well  be different, due to complications on the hosts used to perform
   the measurement.  Thus, even  those  with  complete  faith  in  their
   pseudo-random  generators and subsequent algorithms are encouraged to
   consider how they might test the assumptions of each measurement pro-
   cedure as much as possible.


10.3. Defining Statistical Distributions

   One way of describing a collection of measurements (a sample) is as a
   statistical distribution -- informally, as percentiles.  There are
   several slightly different ways of doing so.  In this section we give
   a standard definition so that these descriptions are uniform.

   The  "empirical  distribution function" (EDF) of a set of scalar mea-
   surements is a function F(x) which for any  x  gives  the  fractional
   proportion  of  the  total measurements that were <= x.  If x is less
   than the minimum value observed, then F(x) is 0.  If it is greater
   than or equal to the maximum value observed, then F(x) is 1.

   For example, given the 6 measurements:
   -2, 7, 7, 4, 18, -5
   we have F(-8) = 0, F(-5) = 1/6, F(-5.0001) = 0, F(-4.999) = 1/6,
   F(7) = 5/6, F(18) = 1, and F(239) = 1.

   Note that we can recover the different measured values and  how  many
   times  each  occurred from F(x) -- no information regarding the range
   in values is lost.  Summarizing measurements using histograms, on the
   other  hand,  in general loses information about the different values
   observed, so the EDF is preferred.

   Using either the EDF or a histogram, however, we do lose  information
   regarding  the order in which the values were observed.  Whether this
   loss is potentially significant will depend on the metric being  mea-
   sured.

   We will use the term "percentile" to refer to the smallest value of x
   for which F(x) >= a given percentage.  So the 50th percentile of the
   example above is 4, since F(4) = 3/6 = 50%; the 25th percentile is
   -2, since F(-5) = 1/6 < 25% and F(-2) = 2/6 >= 25%; the 100th per-
   centile is 18; the 15th percentile is -5, since F(-5) = 1/6 >= 15%;
   and the 0th percentile is -infinity, since every x satisfies
   F(x) >= 0%.

   Care must be taken when using  percentiles  to  summarize  a  sample,
   because  they  can  lend  an unwarranted appearance of more precision
   than is really available.  Any such summary MUST include  the  sample
   size N, because any percentile difference finer than 1/N is below the
   resolution of the sample.
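
   To make the percentile definition above concrete, the following C
   sketch computes it directly; the function name is of our own
   choosing, and a real implementation might well avoid re-sorting the
   sample on every call.

      #include <stdlib.h>

      static int cmp_double(const void *a, const void *b)
      {
          double x = *(const double *)a, y = *(const double *)b;
          return (x > y) - (x < y);
      }

      /*
       * Smallest measured value x for which F(x) >= frac, where
       * 0 < frac <= 1 (e.g., 0.50 for the 50th percentile).  The
       * sample is sorted in place.  Note that a fraction of 0 has
       * no finite answer under the definition above.
       */
      double empirical_percentile(double *sample, int n, double frac)
      {
          int i;

          qsort(sample, n, sizeof(double), cmp_double);
          for (i = 0; i < n; i++) {
              /* At least (i+1) of the n measurements are <=
               * sample[i], so F(sample[i]) >= (i+1)/n. */
              if ((double)(i + 1) / n >= frac)
                  return sample[i];
          }
          return sample[n - 1];   /* reached only if frac > 1 */
      }

   For the example sample above, this returns 4 for frac = 0.50 and -2
   for frac = 0.25, matching the values worked out by hand.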

   See [DS86] for more details regarding EDF's.

   We close with a note on the common (and important!) notion of median.
   In  statistics,  the  median  of  a distribution is defined to be the
   point X for which the probability of observing a value <= X is  equal
   to  the  probability  of  observing a value > X.  When estimating the
   median of a set of observations, the estimate depends on whether  the
   number of observations, N, is odd or even:
 +    If  N is odd, then the 50th percentile as defined above is used as
      the estimated median.
 +    If N is even, then the estimated median is the average of the cen-
      tral  two observations; that is, if the observations are sorted in
      ascending order and numbered from 1 to N, where N = 2*K, then  the
      estimated  median is the average of the (K)'th and (K+1)'th obser-
      vations.
   Usually the term "estimated" is dropped from  the  phrase  "estimated
   median" and this value is simply referred to as the "median".


10.4. Testing For Goodness-of-Fit

   For  some  forms of measurement calibration we need to test whether a
   set of numbers is consistent with those  numbers  having  been  drawn
   from  a particular distribution.  An example is that to apply a self-
   consistency check to measurements made using a Poisson  process,  one
   test is to see whether the intervals between sampling times do indeed
   reflect an exponential distribution; or, if the dT/N approach dis-
   cussed above was used, whether the times are uniformly distributed
   across [T, dT].

   There  are  a  large number of statistical goodness-of-fit techniques
   for performing such tests.  See [DS86]  for  a  thorough  discussion.
   That  reference  recommends  the Anderson-Darling EDF test as being a
   good all-purpose test, as well as one  that  is  especially  good  at
   detecting deviations from a given distribution in the lower and upper
   tails of the EDF.
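
   As an informal illustration -- and not the appendix code referred to
   elsewhere in this memo -- the A^2 statistic for a sample tested
   against a *fully specified* distribution (for instance, exponential
   with the rate lambda actually used by the sampler) can be computed
   as in the sketch below.  The approximate 5% critical value of 2.492
   applies only to this fully specified case; when parameters are
   estimated from the data, the modified statistics and tables of
   [DS86] must be used instead.

      #include <math.h>
      #include <stdlib.h>

      static int cmp_double(const void *a, const void *b)
      {
          double x = *(const double *)a, y = *(const double *)b;
          return (x > y) - (x < y);
      }

      /*
       * Anderson-Darling A^2 statistic for testing whether sample[]
       * (n values, sorted in place here) is consistent with an
       * exponential distribution of known rate lambda.  Returns 1 if
       * consistent at the 5% level, 0 if the test rejects.  Values
       * of z extremely close to 0 or 1 would need guarding in
       * production code to avoid log() overflow.
       */
      int ad_test_exponential(double *sample, int n, double lambda)
      {
          double a2 = 0.0;
          int i;

          qsort(sample, n, sizeof(double), cmp_double);
          for (i = 0; i < n; i++) {
              /* Transform through the hypothesized CDF; z in (0,1). */
              double z_lo = 1.0 - exp(-lambda * sample[i]);
              double z_hi = 1.0 - exp(-lambda * sample[n - 1 - i]);
              a2 += (2.0 * i + 1.0) * (log(z_lo) + log(1.0 - z_hi));
          }
          a2 = -n - a2 / n;
          return a2 <= 2.492;   /* approximate 5% critical value */
      }

   Applied across many honest samples, roughly one in twenty should
   fail at this significance level, which ties into the interpretation
   of repeated tests discussed below.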

   It is important to understand  that  the  nature  of  goodness-of-fit
   tests  is that one first selects a "significance level", which is the
   probability that the test will erroneously declare that the EDF of  a
   given  set  of  measurements fails to match a particular distribution
   when in fact the measurements do indeed  reflect  that  distribution.
   Unless otherwise stated, IPPM goodness-of-fit tests are done using 5%
   significance.  This means that if the test is applied to 100  samples
   and  5  of those samples are deemed to have failed the test, then the
   samples are all consistent with the distribution  being  tested.   If
   significantly  more of the samples fail the test, then the assumption
   that the samples are consistent with the  distribution  being  tested
   must  be  rejected.   If  significantly fewer of the samples fail the
   test, then the samples have potentially been doctored too well to fit
   the  distribution.   Similarly, some goodness-of-fit tests (including
   Anderson-Darling) can detect whether it is likely that a given sample
   was  doctored.   We also use a significance of 5% for this case; that
   is, the test will report that a given honest sample is "too  good  to
   be true" 5% of the time, so if the test reports this finding signifi-
   cantly more often than one time out of twenty, it  is  an  indication
   that something unusual is occurring.

   Appendix  [To  Be  Written]  gives sample C code for implementing the
   Anderson-Darling test, as well as further discussing its use.

   See [Pa94] for a discussion of goodness-of-fit  and  closeness-of-fit
   tests in the context of network measurement.


11. Avoiding Stochastic Metrics

   When  defining  metrics  applying to a path, subpath, cloud, or other
   network element, we in general do not define them in stochastic terms
   (probabilities).   We instead prefer a deterministic definition.  So,
   for example, rather than defining a metric about a "packet loss prob-
   ability  between  A  and B", we would define a metric about a "packet
   loss rate between A and B".  (A measurement given by the first  defi-
   nition might be "0.73", and by the second "73 packets out of 100".)

   The  reason for this distinction is as follows.  When definitions are
   made in terms of probabilities, there are often hidden assumptions in
   the  definition  about  a stochastic model of the behavior being mea-
   sured.  The fundamental goal in avoiding probabilities in our metric
   definitions is to avoid biasing our definitions by these hidden
   assumptions.

   For example, an easy hidden assumption to make is that packet loss in
   a  network  component  due  to queueing overflows can be described as
   something that happens to any given packet with a  particular  proba-
   bility.   Usually,  however, queueing drops are actually *determinis-
   tic*, and assuming that they should  be  described  probabilistically
   can  obscure  crucial correlations between queueing drops among a set
   of packets.  So it's better to  explicitly  note  stochastic  assump-
   tions, rather than have them sneak into our definitions implicitly.

   This does *not* mean that we abandon stochastic models for under-
   standing network performance, only that when defining IP metrics we
   avoid terms such as "probability" in favor of terms such as "propor-
   tion" or "rate".  We will still use, for example, random sampling to
   estimate  probabilities  used  by stochastic models related to the IP
   metrics.  We also do not rule out the possibility of stochastic  met-
   rics  when  they are truly appropriate (for example, perhaps to model
   transmission errors caused by certain types of line noise).


12. Packets of Type P

   A fundamental property of many Internet metrics is that the value  of
   the  metric depends on the type of IP packet(s) used to make the mea-
   surement.  Consider an IP-connectivity metric: one obtains  different
   results  depending  on  whether one is interested in connectivity for
   packets destined for well-known TCP ports or unreserved UDP ports, or
   those with invalid IP checksums, or those with TTL's of 16, for exam-
   ple.  In some circumstances these distinctions will be highly  inter-
   esting  (for  example, in the presence of firewalls, or RSVP reserva-
   tions).

   Because of this distinction, we introduce the  generic  notion  of  a
   "packet  of  type  P",  where  in  some contexts P will be explicitly
   defined (i.e., exactly  what  type  of  packet  we  mean),  partially
   defined  (e.g., "with a payload of B octets"), or left generic.  Thus
   we may talk about generic IP-type-P-connectivity or more specific IP-
   port-HTTP-connectivity.  Some metrics and methodologies may be fruit-
   fully defined using generic type P definitions which  are  then  made
   specific when performing actual measurements.

   Whenever a metric's value depends on the type of the packets involved
   in the metric, the metric's name will include either a specific  type
   or  a  phrase  such  as  "type-P".   Thus  we will not define an "IP-
   connectivity" metric but instead an  "IP-type-P-connectivity"  metric
   and/or perhaps an "IP-port-HTTP-connectivity" metric.  This serves as
   an important reminder that one must be conscious of the exact type of
   traffic being measured.

   A  closely  related  note: it would be very useful to know if a given
   Internet component treats equally a class C  of  different  types  of
   packets.   If  so, then any one of those types of packets can be used
   for subsequent measurement of the component.  This suggests we devise
   a metric or suite of metrics that attempt to determine C.


13. Internet Addresses vs. Hosts

   When  considering  a metric for some path through the Internet, it is
   often natural to think about it as being for the path  from  Internet
   host  H1  to  host  H2.   A definition in these terms, though, can be
   ambiguous, because Internet hosts can be attached to  more  than  one
   network.  In this case, the result of the metric will depend on which
   of these networks is actually used.

   Because of this ambiguity, such metrics should usually be defined in
   terms of Internet IP addresses instead.  For the common case of
   a unidirectional path through the Internet,  we  will  use  the  term
   "Src"  to  denote  the  IP  address of the beginning of the path, and
   "Dst" to denote the IP address of the end.


14. Well-Formed Packets

   Unless otherwise stated, all metric definitions that concern IP pack-
   ets  include an implicit assumption that the packet is *well formed*.
   A packet is well formed if it meets all of the following criteria:
 +    Its length as given in the IP header corresponds to  the  size  of
      the IP header plus the size of the payload.
 +    It  includes  a valid IP header: the version field is 4 (later, we
      will expand this to include 6); the header length  is  >=  5;  the
      checksum is correct.
 +    It is not an IP fragment.
 +    The  source  and  destination addresses correspond to the hosts in
      question.
 +    Either the packet possesses sufficient  TTL  to  travel  from  the
      source to the destination if the TTL is decremented by one at each
      hop, or it possesses the maximum TTL of 255.
 +    It does not contain IP options unless explicitly noted.
 +    If a transport header is present, it too contains a valid checksum
      and other valid fields.
   We  further require that if a packet is described as having a "length
   of B octets", then 0 <= B <= 65535; and if B is the payload length in
   octets, then B <= (65535-IP header size in octets).

   So, for example, one might imagine defining an IP connectivity metric
   as "IP-type-T-connectivity for well-formed packets with  the  IP  TOS
   field  set  to  0", or, more succinctly, "IP-type-T-connectivity with
   the IP TOS field set to 0", since well-formed is already implied.
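
   As a rough illustration -- and only of those criteria above that can
   be checked from the IP header in isolation -- the following sketch
   tests the version, header length, total length, fragmentation, and
   header checksum requirements.  The address, TTL, option, and
   transport-layer criteria depend on the measurement context and are
   omitted, and the function names are of our own choosing.

      #include <stddef.h>
      #include <stdint.h>

      /* One's-complement sum over the IP header (RFC 1071 style); a
       * correct header yields 0 when the checksum field itself is
       * included in the sum. */
      static uint16_t ip_header_sum(const uint8_t *hdr, size_t len)
      {
          uint32_t sum = 0;
          size_t i;

          for (i = 0; i + 1 < len; i += 2)
              sum += ((uint32_t)hdr[i] << 8) | hdr[i + 1];
          while (sum >> 16)
              sum = (sum & 0xffff) + (sum >> 16);
          return (uint16_t)~sum;
      }

      /*
       * Header-level well-formedness check for an IPv4 packet of
       * 'len' octets starting at 'pkt' (len is the full on-the-wire
       * length).  Returns 1 if the header-level criteria hold,
       * 0 otherwise.
       */
      int ipv4_header_well_formed(const uint8_t *pkt, size_t len)
      {
          size_t ihl, total_len;

          if (len < 20)
              return 0;
          if ((pkt[0] >> 4) != 4)             /* version must be 4  */
              return 0;
          ihl = (size_t)(pkt[0] & 0x0f) * 4;  /* header len, octets */
          if (ihl < 20 || ihl > len)          /* header length >= 5 */
              return 0;
          total_len = ((size_t)pkt[2] << 8) | pkt[3];
          if (total_len != len)               /* length consistent  */
              return 0;
          /* Not a fragment: MF flag clear, fragment offset zero. */
          if ((pkt[6] & 0x3f) != 0 || pkt[7] != 0)
              return 0;
          if (ip_header_sum(pkt, ihl) != 0)   /* checksum correct   */
              return 0;
          return 1;
      }

   A full implementation would also verify the context-dependent
   criteria (addresses, TTL, options, and any transport header) before
   treating a packet as well formed.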

   A particular type of well-formed packet often useful to  consider  is
   the  "minimal  IP packet from A to B" - this is an IP packet with the
   following properties:
   - It is well-formed.
   - Its data payload is 0 octets.
   - It contains no options.
   - Its protocol field is 4 (IP) ???  0 (reserved) ???

   When defining IP metrics we keep in mind that no  packet  smaller  or
   simpler  than  this  can be transmitted over a correctly operating IP
   network.


15. Acknowledgements

   The comments of Brian Carpenter and Jeff Sedayao are appreciated.


16. Security Considerations

   This memo raises no security issues.


17. References

   [Al96] G. Almes and S. Kalidindi, "A One-way Delay Metric for  IPPM",
   Internet Draft <draft-ietf-bmwg-ippm-delay-00.txt>, November 1996.

   [DS86]  R. D'Agostino and M. Stephens, editors, Goodness-of-Fit Tech-
   niques, Marcel Dekker, Inc., 1986.

   [CPB93] K. Claffy, G. Polyzos, and H-W. Braun, ``Application of  Sam-
   pling Methodologies to Network Traffic Characterization,'' Proc. SIG-
   COMM '93, pp. 194-203, San Francisco, September 1993.

   [FJ94] S. Floyd and V. Jacobson, ``The  Synchronization  of  Periodic
   Routing  Messages,''  IEEE/ACM  Transactions on Networking, 2(2), pp.
   122-136, April 1994.

   [Mi92] D. Mills, "Network Time Protocol (Version 3) Specification,
   Implementation and Analysis", RFC 1305, March 1992.

   [Pa94] V. Paxson, ``Empirically-Derived Analytic Models of  Wide-Area
   TCP  Connections,''  IEEE/ACM  Transactions  on Networking, 2(4), pp.
   316-336, August 1994.

   [Pa96]  V.   Paxson,   ftp://ftp.ee.lbl.gov/papers/metrics-framework-
   INET96.ps.Z


18. Authors' Addresses

   Guy Almes <almes@advanced.org>
   Advanced Network & Services, Inc.
   200 Business Park Drive
   Armonk, NY  10504
   USA
   Phone: +1 914/273-7863

   Bill Cerveny <cerveny@advanced.org>
   Advanced Network & Services, Inc.
   200 Business Park Drive
   Armonk, NY  10504
   USA

   Padma Krishnaswamy <kri@bellcore.com>
   Bell Communications Research
   445 South Street
   Morristown, NJ  07960
   USA

   Jamshid Mahdavi <mahdavi@psc.edu>
   Pittsburgh Supercomputing Center
   4400 5th Avenue
   Pittsburgh, PA  15213
   USA

   Matt Mathis <mathis@psc.edu>
   Pittsburgh Supercomputing Center
   4400 5th Avenue
   Pittsburgh, PA  15213
   USA

   Vern Paxson <vern@ee.lbl.gov>
   MS 50B/2239
   Lawrence Berkeley National Laboratory
   University of California
   Berkeley, CA  94720
   USA
   Phone: +1 510/486-7504