--- 1/draft-ietf-rmcat-video-traffic-model-03.txt 2018-01-18 20:13:09.886911324 -0800
+++ 2/draft-ietf-rmcat-video-traffic-model-04.txt 2018-01-18 20:13:09.922912190 -0800
@@ -1,58 +1,60 @@
Network Working Group X. Zhu
Internet-Draft S. Mena
Intended status: Informational Cisco Systems
-Expires: January 18, 2018 Z. Sarker
+Expires: July 22, 2018 Z. Sarker
Ericsson AB
- July 17, 2017
+ January 18, 2018
Modeling Video Traffic Sources for RMCAT Evaluations
- draft-ietf-rmcat-video-traffic-model-03
+ draft-ietf-rmcat-video-traffic-model-04
Abstract
This document describes two reference video traffic source models for
evaluating RMCAT candidate algorithms. The first model statistically
characterizes the behavior of a live video encoder in response to
changing requests on target video rate.  The second model is trace-
- driven, and emulates the encoder output by scaling the pre-encoded
- video frame sizes from a widely used video test sequence.  Both
- models are designed to strike a balance between simplicity,
- repeatability, and authenticity in modeling the interactions between
- a video traffic source and the congestion control module.
+ driven, and emulates the encoder output based on actual encoded video
+ frame sizes from a high-resolution test sequence.  Both models are
+ designed to strike a balance between simplicity, repeatability, and
+ authenticity in modeling the interactions between a live video
+ traffic source and the congestion control module.  Finally, the
+ document describes how both approaches can be combined into a hybrid
+ model.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
- Drafts is at http://datatracker.ietf.org/drafts/current/.
+ Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
- This Internet-Draft will expire on January 18, 2018.
+ This Internet-Draft will expire on July 22, 2018.
Copyright Notice
- Copyright (c) 2017 IETF Trust and the persons identified as the
+ Copyright (c) 2018 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
- (http://trustee.ietf.org/license-info) in effect on the date of
+ (https://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2
@@ -60,27 +62,27 @@
3. Desired Behavior of A Synthetic Video Traffic Model . . . . . 3
4. Interactions Between Synthetic Video Traffic Source and
Other Components at the Sender . . . . . . . . . . . . . . . 4
5. A Statistical Reference Model . . . . . . . . . . . . . . . . 6
5.1. Time-damped response to target rate update . . . . . . . 7
5.2. Temporary burst and oscillation during transient . . . . 8
5.3. Output rate fluctuation at steady state . . . . . . . . . 8
5.4. Rate range limit imposed by video content . . . . . . . . 9
6. A Trace-Driven Model . . . . . . . . . . . . 9
6.1. Choosing the video sequence and generating the traces . . 10
- 6.2. Using the traces in the syntethic codec . . . . . . . . . 11
+ 6.2. Using the traces in the synthetic codec . . . . . . . . . 11
6.2.1. Main algorithm . . . . . . . . . . . . . . . . . . . 11
6.2.2. Notes to the main algorithm . . . . . . . . . . . . . 13
6.3. Varying frame rate and resolution . . . . . . . . . . . . 13
7. Combining The Two Models . . . . . . . . . . . . . . . . . . 14
8. Implementation Status . . . . . . . . . . . . . . . . . . . . 15
- 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . 15
+ 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 16
10. References . . . . . . . . . . . . . . . . . . . . . . . . . 16
10.1. Normative References . . . . . . . . . . . . . . . . . . 16
10.2. Informative References . . . . . . . . . . . . . . . . . 16
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 17
1. Introduction
When evaluating candidate congestion control algorithms designed for
real-time interactive media, it is important to account for the
characteristics of traffic patterns generated from a live video
@@ -96,23 +98,24 @@
On the other hand, evaluation results of a candidate RMCAT algorithm
should mostly reflect performance of the congestion control module,
and somewhat decouple from peculiarities of any specific video codec.
It is also desirable that evaluation tests are repeatable, and be
easily duplicated across different candidate algorithms.
One way to strike a balance between the above considerations is to
evaluate RMCAT algorithms using a synthetic video traffic source
model that captures key characteristics of the behavior of a live
video encoder. To this end, this draft presents two reference
- models.  The first is based on statistical modelling; the second is
+ models. The first is based on statistical modeling; the second is
trace-driven.  The draft also discusses the pros and cons of each
- approach, as well as how both approaches can be combined.
+ approach, as well as how both approaches can be combined into a
+ hybrid model.
2. Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described RFC2119 [RFC2119].
3. Desired Behavior of A Synthetic Video Traffic Model
A live video encoder employs encoder rate control to meet a target
@@ -161,30 +164,31 @@
o Statistical resemblance: The synthetic traffic should match the
outcome of the real video encoder in terms of statistical
characteristics, such as the mean, variance, peak, and
autocorrelation coefficients of the bitrate. It is also important
that the statistical resemblance should hold across different time
scales, ranging from tens of milliseconds to sub-seconds.
o Wide range of coverage: The model should be easily configurable to
cover a wide range of codec behaviors (e.g., with either fast or
slow reaction time in live encoder rate control) and video content
- variations (e.g, ranging from high-motion to low-motion).
+ variations (e.g., ranging from high-motion to low-motion).
These distinct behavior features can be characterized via simple
- statistical models, or a trace-driven approach.  We present an
- example of each in Section 5 and Section 6
+ statistical modelling, or a trace-driven approach.  Section 5 and
+ Section 6 provide an example of each approach, respectively.
+ Section 7 discusses how both models can be combined together.
4. Interactions Between Synthetic Video Traffic Source and Other
Components at the Sender
- Figure 1 depitcs the interactions of the synthetic video encoder with
+ Figure 1 depicts the interactions of the synthetic video encoder with
other components at the sender, such as the application, the
congestion control module, the media packet transport module, etc.
Both reference models, as described later in Section 5 and Section 6,
follow the same set of interactions.
The synthetic video encoder takes in raw video frames captured by the
camera and then dynamically generates a sequence of encoded video
frames with varying size and interval. These encoded frames are
processed by other modules in order to transmit the video stream over
the network. During the lifetime of a video transmission session,
@@ -249,25 +253,25 @@
 
+ +>
interface from interface to
other modules (a) other modules (b)
Figure 1: Interaction between synthetic video encoder and other
modules at the sender
5. A Statistical Reference Model
- In this section, we describe one simple statistical model of the live
- video encoder traffic source.  Figure 2 summarizes the list of
- tunable parameters in this statistical model.  A more comprehensive
- survey of popular methods for modelling video traffic source behavior
- can be found in [Tanwir2013].
+ This section describes one simple statistical model of the live video
+ encoder traffic source. Figure 2 summarizes the list of tunable
+ parameters in this statistical model. A more comprehensive survey of
+ popular methods for modeling video traffic source behavior can be
+ found in [Tanwir2013].
+==============+====================================+================+
 Notation  Parameter Name  Example Value 
+==============+====================================+================+
 R_v  Target rate request to encoder  1 Mbps 
+--------------+------------------------------------+----------------+
 FPS  Target frame rate of encoder output 30 Hz 
+--------------+------------------------------------+----------------+
 tau_v  Encoder reaction latency  0.2 s 
+--------------+------------------------------------+----------------+
@@ -298,297 +302,304 @@
* Example value of K_B for a video stream encoded at 720p and 30 frames
per second, using H.264/AVC encoder.
Figure 2: List of tunable parameters in a statistical video traffic
source model.
5.1. Time-damped response to target rate update
While the congestion control module can update its target rate
- request R_v at any time, our model dictates that the encoder will
- only react to such changes after tau_v seconds from a previous rate
- transition.  In other words, when the encoder has reacted to a rate
- change request at time t, it will simply ignore all subsequent rate
- change requests until time t+tau_v.
+ request R_v at any time, the statistical model dictates that the
+ encoder will only react to such changes tau_v seconds after a
+ previous rate transition. In other words, when the encoder has
+ reacted to a rate change request at time t, it will simply ignore all
+ subsequent rate change requests until time t+tau_v.
5.2. Temporary burst and oscillation during transient
The output rate R_o during the period [t, t+tau_v] is considered to
be in transient. Based on observations from video encoder output
- data, we model the transient behavior of an encoder upon reacting to
- a new target rate request in the form of high variation in output
- frame sizes.  It is assumed that the overall average output rate R_o
- during this period matches the target rate R_v.  Consequently, the
- occasional burst of large frames are followed by smaller-than average
- encoded frames.
+ data, the transient behavior of an encoder upon reacting to a new
+ target rate request is modelled in the form of high variation in
+ output frame sizes. It is assumed that the overall average output
+ rate R_o during this period matches the target rate R_v.
+ Consequently, the occasional burst of large frames are followed by
+ smaller-than-average encoded frames.
This temporary burst is characterized by two parameters:
o burst duration K_d: number of frames in the burst event; and
o burst frame size K_B: size of the initial burst frame which is
typically significantly larger than average frame size at steady
state.
It can be noted that these burst parameters can also be used to mimic
the insertion of a large on-demand I frame in the presence of severe
packet losses. The values of K_d and K_B typically depend on the
type of video codec, spatial and temporal resolution of the encoded
stream, as well as the video content activity level.
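The transient burst characterized by K_d and K_B can be sketched as
follows (illustrative Python; the function name and the equal split of
the remaining byte budget across the K_d-1 post-burst frames are
assumptions for the sketch, consistent with the stated constraint that
the average output rate over the burst window matches R_v):

```python
def transient_frame_sizes(R_v, FPS, K_d, K_B):
    """Sizes in bytes of the K_d transient frames: one initial burst
    frame of size K_B, followed by K_d-1 equal smaller frames sized so
    that the average rate over the K_d-frame window still matches the
    target rate R_v (in bps)."""
    budget = (R_v / 8.0 / FPS) * K_d     # total byte budget at rate R_v
    remainder = max(budget - K_B, 0.0)   # bytes left after the burst frame
    small = remainder / (K_d - 1) if K_d > 1 else 0.0
    return [K_B] + [small] * (K_d - 1)
```

The same sketch can serve for an on-demand I-frame after severe losses,
since the draft notes that K_d and K_B can be reused for that purpose.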
5.3. Output rate fluctuation at steady state
- We model output rate R_o during steady state as randomly fluctuating
- around the target rate R_v.  The output traffic can be characterized
- as the combination of two random processes denoting the frame
- interval t and output frame size B over time.  These two random
+ The output rate R_o during steady state is modelled as randomly
+ fluctuating around the target rate R_v. The output traffic can be
+ characterized as the combination of two random processes denoting the
+ frame interval t and output frame size B over time. These two random
processes capture two sources of variations in the encoder output:
o Fluctuations in frame interval: the intervals between adjacent
frames have been observed to fluctuate around the reference
interval of t0 = 1/FPS. Deviations in normalized frame interval
DELTA_t = (tt0)/t0 can be modelled by a zeromean Laplacian
distribution with scaling parameter SCALE_t. The value of SCALE_t
dictates the "width" of the Laplacian distribution and therefore
the amount of fluctuations in actual frame intervals (t) with
- respect to the reference t0.
+ respect to the reference frame interval t0.
o Fluctuations in frame size: size of the output encoded frames also
tend to fluctuate around the reference frame size B0=R_v/8/FPS.
Likewise, deviations in the normalized frame size DELTA_B =
(B-B0)/B0 can be modelled by a zero-mean Laplacian distribution
with scaling parameter SCALE_B. The value of SCALE_B dictates the
"width" of this second Laplacian distribution and correspondingly
the amount of fluctuations in output frame sizes (B) with respect
to the reference target B0.
Both values of SCALE_t and SCALE_B can be obtained via parameter
fitting from empirical data captured for a given video encoder.
Example values are listed in Figure 2 based on empirical data
presented in [IETFInterim].
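The two zero-mean Laplacian processes described above can be sketched
as follows (illustrative Python; the sampling helper and function
names are assumptions for the sketch, using standard inverse-transform
sampling of a Laplacian distribution):

```python
import math
import random


def sample_laplacian(scale):
    """Zero-mean Laplacian sample via inverse transform sampling."""
    u = random.random() - 0.5
    u = min(max(u, -0.49999999), 0.49999999)   # avoid log(0) at the edge
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def next_frame(R_v, FPS, scale_t, scale_b):
    """Return (frame_interval_s, frame_size_bytes) at steady state,
    with R_v in bps and FPS in Hz."""
    t0 = 1.0 / FPS                           # reference frame interval
    B0 = R_v / 8.0 / FPS                     # reference frame size (bytes)
    delta_t = sample_laplacian(scale_t)      # normalized interval deviation
    delta_b = sample_laplacian(scale_b)      # normalized size deviation
    t = t0 * (1.0 + delta_t)
    B = B0 * (1.0 + delta_b)
    return max(t, 0.0), max(B, 0.0)          # clip to non-negative values
```

Because both deviations have zero mean, the long-run average frame
size converges to B0 and hence the average output rate to R_v.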
5.4. Rate range limit imposed by video content
The output rate R_o is further clipped within the dynamic range
[R_min, R_max], which in reality are dictated by scene and motion
- complexity of the captured video content.  In our model, these
- parameters are specified by the application.
+ complexity of the captured video content. In the proposed
+ statistical model, these parameters are specified by the application.
6. A Trace-Driven Model
- We now present the second approach to model a video traffic source.
- This approach is based on running an actual live video encoder on a
- set of chosen raw video sequences and using the encoder's output
+ The second approach for modelling a video traffic source is trace-
+ driven. This can be achieved by running an actual live video encoder
+ on a set of chosen raw video sequences and using the encoder's output
traces for constructing a synthetic live encoder. With this
approach, the recorded video traces naturally exhibit temporal
fluctuations around a given target rate request R_v from the
congestion control module.
The following list summarizes the main steps of this approach:
- 1) Choose one or more representative raw video sequences.
+ 1. Choose one or more representative raw video sequences.
- 2) Encode the sequence(s) using an actual live video encoder.  Repeat
- the process for a number of bitrates.  Keep only the sequence of
- frame sizes for each bitrate.
+ 2. Encode the sequence(s) using an actual live video encoder.
+ Repeat the process for a number of bitrates. Keep only the
+ sequence of frame sizes for each bitrate.
- 3) Construct a data structure that contains the output of the
+ 3. Construct a data structure that contains the output of the
previous step. The data structure should allow for easy bitrate
lookup.
- 4) Upon a target bitrate request R_v from the controller, look up the
- closest bitrates among those previously stored.  Use the frame size
- sequences stored for those bitrates to approximate the frame sizes to
- output.
+ 4. Upon a target bitrate request R_v from the controller, look up
+ the closest bitrates among those previously stored. Use the
+ frame size sequences stored for those bitrates to approximate the
+ frame sizes to output.
- 5) The output of the synthetic encoder contains "encoded" frames with
- zeros as contents but with realistic sizes.
+ 5. The output of the synthetic encoder contains "encoded" frames
+ with zeros as contents but with realistic sizes.
- Section 6.1 explains steps 1), 2), and 3), Section 6.2 elaborates on
- steps 4) and 5).  Finally, Section 6.3 briefly discusses the
- possibility to extend the model for supporting variable frame rate
- and/or variable frame resolution.
+ In the following, Section 6.1 explains the first three steps (1-3),
+ Section 6.2 elaborates on the remaining two steps (4-5).  Finally,
+ Section 6.3 briefly discusses the possibility to extend the trace-
+ driven model for supporting time-varying frame rate and/or time-
+ varying frame resolution.
6.1. Choosing the video sequence and generating the traces
- The first step we need to perform is a careful choice of a set of
- video sequences that are representative of the use cases we want to
- model.  Our use case here is video conferencing, so we must choose a
- low-motion sequence that resembles a "talking head", for instance a
- news broadcast or a video capture of an actual conference call.
+ The first step is a careful choice of a set of video sequences that
+ are representative of the target use cases for the video traffic
+ model. For the example use case of interactive video conferencing,
+ it is recommended to choose a low-motion sequence that resembles a
+ "talking head", e.g. from a news broadcast or recording of an actual
+ video conferencing call.
The length of the chosen video sequence is a trade-off.  If it is too
long, it will be difficult to manage the data structures containing
the traces. If it is too short, there will be an obvious periodic
pattern in the output frame sizes, leading to biased results when
- evaluating congestion controller performance.  In our experience, a
- sequence whose length is between 2 and 4 minutes is a fair trade-off.
+ evaluating congestion control performance. In our experience, a
+ sequence with a length between 2 and 4 minutes is a fair trade-off.
- Once we have chosen the raw video sequence, denoted S, we use a live
- encoder, e.g.  [H264] or [HEVC] to produce a set of encoded
- sequences.  As discussed in Section 3, a live encoder's output
- bitrate can be tuned by varying three input parameters, namely,
- quantization step size, frame rate, and picture resolution.  In order
- to simplify the choice of these parameters for a given target rate,
- we assume a fixed frame rate (e.g.  30 fps) and a fixed resolution
- (e.g., 720p).  See section 6.3 for a discussion on how to relax these
- assumptions.
+ Given the chosen raw video sequence, denoted S, one can use a live
+ encoder, e.g. some implementation of [H264] or [HEVC], to produce a
+ set of encoded sequences. As discussed in Section 3, the output
+ bitrate of the live encoder can be achieved by tuning three input
+ parameters: quantization step size, frame rate, and picture
+ resolution. In order to simplify the choice of these parameters for
+ a given target rate, one can typically assume a fixed frame rate
+ (e.g. 30 fps) and a fixed resolution (e.g., 720p) when configuring
+ the live encoder. See Section 6.3 for a discussion on how to relax
+ these assumptions.
- Following these simplifications, we run the chosen encoder by setting
- a constant target bitrate at the beginning, then letting the encoder
- vary the quantization step size internally while encoding the input
- video sequence.  Besides, we assume that the first frame is encoded
- as an I-frame and the rest are P-frames.  We further assume that the
- encoder algorithm does not use knowledge of frames in the future when
- encoding a given frame.
+ Following these simplifications, the chosen encoder can be configured
+ to start at a constant target bitrate, then vary the quantization
+ step size (internally via the video encoder rate controller) to meet
+ various externally specified target rates. It can be further assumed
+ the first frame is encoded as an I-frame and the rest are P-frames.
+ For live encoding, the encoder rate control algorithm typically does
+ not use knowledge of frames in the future when encoding a given
+ frame.
- Given R_min and R_max, which are the minimum and maximum bitrates at
- which the synthetic codec is to operate (see Section 4), we divide
- the bitrate range between R_min and R_max in n_s + 1 bitrate steps of
- length l = (R_max - R_min) / n_s.  We then use the following simple
- algorithm to encode the raw video sequence.
+ Given the minimum and maximum bitrates at which the synthetic codec
+ is to operate (denoted as R_min and R_max, see Section 4), the entire
+ range of target bitrates can be divided into n_s + 1 bitrate steps of
+ length l = (R_max - R_min) / n_s.  The following simple algorithm is
+ used to encode the raw video sequence.
r = R_min
while r <= R_max do
Traces[r] = encode_sequence(S, r, e)
r = r + l
- where function encode_sequence takes as parameters, respectively, a
- raw video sequence, a constant target rate, and an encoder algorithm;
- it returns a vector with the sizes of frames in the order they were
- encoded.  The output vector is stored in a map structure called
- Traces, whose keys are bitrates and whose values are vectors of frame
- sizes.
+ The function encode_sequence takes as input parameters, respectively,
+ a raw video sequence (S), a constant target rate (r), and an encoder
+ rate control algorithm (e); it returns a vector with the sizes of
+ frames in the order they were encoded. The output vector is stored
+ in a map structure called Traces, whose keys are bitrates and whose
+ values are vectors of frame sizes.
The choice of a value for n_s is important, as it determines the
- number of vectors of frame sizes stored in map Traces.  The minimum
- value one can choose for n_s is 1, and its maximum value depends on
- the amount of memory available for holding the map Traces.  A
- reasonable value for n_s is one that makes the steps' length l = 200
- kbps.  We will further discuss step length l in the next section.
+ number of vectors of frame sizes stored in the map Traces. The
+ minimum value one can choose for n_s is 1, and the maximum value
+ depends on the amount of memory available for holding the map Traces.
+ A reasonable value for n_s is one that results in steps of length l =
+ 200 kbps. The next section will discuss further the choice of the
+ step length l.
Finally, note that, as mentioned in previous sections, R_min and
R_max may be modified after the initial sequences are encoded.
Hence, the algorithm described in the next section also covers the
cases when the current target bitrate is less than R_min, or greater
than R_max.
-6.2. Using the traces in the syntethic codec
+6.2. Using the traces in the synthetic codec
The main idea behind the tracedriven synthetic codec is that it
- mimics a real live codec's rate adaptation when the congestion
- controller updates the target rate R_v dynamically.  It does so by
- switching to a different frame size vector stored in the map Traces
- when needed.
+ mimics the rate adaptation behavior of a real live codec upon dynamic
+ updates of the target rate R_v by the congestion control module. It
+ does so by switching to a different frame size vector stored in the
+ map Traces when needed.
6.2.1. Main algorithm
- We maintain two variables r_current and t_current:
+ The main algorithm for rate adaptation in the synthetic codec
+ maintains two variables: r_current and t_current.
- * r_current points to one of the keys of map Traces.  Upon a change
- in the value of R_v, typically because the congestion controller
- detects that the network conditions have changed, r_current is
- updated to the greatest key in Traces that is less than or equal to
- the new value of R_v.  For the moment, we assume the value of R_v to
- be clipped in the range [R_min, R_max].
+ o The variable r_current points to one of the keys of map Traces.
+ Upon a change in the value of R_v, typically because the
+ congestion controller detects that the network conditions have
+ changed, r_current is updated to the greatest key in Traces that
+ is less than or equal to the new value of R_v. It is assumed that
+ the value of R_v is clipped within the range [R_min, R_max].
r_current = r
such that
( r in keys(Traces) and
r <= R_v and
(not(exists) r' in keys(Traces) such that r < r' <= R_v) )
- * t_current is an index to the frame size vector stored in
- Traces[r_current].  It is updated every time a new frame is due.  We
- assume all vectors stored in Traces to have the same size, denoted
- size_traces.  The following equation governs the update of t_current:
+ o The variable t_current is an index to the frame size vector stored
+ in Traces[r_current]. It is updated every time a new frame is
+ due.  It is assumed that all vectors stored in Traces have the
+ same size, denoted as size_traces. The following equation governs
+ the update of t_current:
if t_current < SkipFrames then
t_current = t_current + 1
else
t_current = ((t_current+1-SkipFrames) % (size_traces - SkipFrames))
+ SkipFrames
where operator % denotes modulo, and SkipFrames is a predefined
constant that denotes the number of frames to be skipped at the
beginning of frame size vectors after t_current has wrapped around.
The point of constant SkipFrames is avoiding the effect of
- periodically sending a (big) I-frame followed by several smaller-
- than-normal P-frames.  We typically set SkipFrames to 20, although it
- could be set to 0 if we are interested in studying the effect of
+ periodically sending a large I-frame followed by several smaller-
+ than-average P-frames.  A typical value of SkipFrames is 20, although
+ it could be set to 0 if one is interested in studying the effect of
sending I-frames periodically.
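The wrap-around update of t_current can be sketched as follows
(illustrative Python; the function name is an assumption for the
sketch, and the default of 20 skipped frames follows the typical value
mentioned above):

```python
SKIP_FRAMES = 20   # frames skipped on wrap-around, per the typical value


def advance_t_current(t_current, size_traces, skip=SKIP_FRAMES):
    """Advance the frame index into the current trace, wrapping around
    past the initial I-frame prefix so the large I-frame (and the
    smaller-than-average P-frames right after it) are not replayed on
    every loop through the trace."""
    if t_current < skip:
        return t_current + 1
    return ((t_current + 1 - skip) % (size_traces - skip)) + skip
```

After the first pass, the index cycles within [skip, size_traces - 1],
so index 0 (the I-frame) is visited only once unless skip is set to 0.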
- We initialize r_current to R_min, and t_current to 0.
+ The initial value of r_current is set to R_min, and the initial value
+ of t_current is set to 0.
- When a new frame is due, we need to calculate its size.  There are
- three cases:
+ When a new frame is due, its size can be calculated following one of
+ the three cases below:
- a) R_min <= R_v < Rmax: In this case we use linear interpolation of
- the frame sizes appearing in Traces[r_current] and
- Traces[r_current + l].  The interpolation is done as follows:
+ a) R_min <= R_v < R_max: the output frame size is calculated via
+ linear interpolation of the frame sizes appearing in
+ Traces[r_current] and Traces[r_current + l]. The interpolation is
+ done as follows:
size_lo = Traces[r_current][t_current]
size_hi = Traces[r_current + l][t_current]
distance_lo = ( R_v - r_current ) / l
framesize = size_hi * distance_lo + size_lo * (1  distance_lo)
- b) R_v < R_min: In this case, we scale the trace sequence with the
- lowest bitrate, in the following way:
+ b) R_v < R_min: the output frame size is calculated via scaling with
+ respect to the lowest bitrate R_min, as follows:
factor = R_v / R_min
framesize = max(1, factor * Traces[R_min][t_current])
- c) R_v >= R_max: We also use scaling for this case.  We use the
- trace sequence with the greatest bitrate:
+ c) R_v >= R_max: the output frame size is calculated by scaling with
+ respect to the highest bitrate R_max:
factor = R_v / R_max
framesize = factor * Traces[R_max][t_current]
- In case b), we set the minimum to 1 byte, since the value of factor
- can be arbitrarily close to 0.
+ In case b), we set the minimum output size to 1 byte, since the value
+ of factor can be arbitrarily close to 0.
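Taken together, the r_current lookup and the three frame-size cases
a), b), c) can be sketched as follows (illustrative Python; the
function names are assumptions for the sketch, and the traces map is
assumed to hold one frame-size vector per stored bitrate as described
in Section 6.1):

```python
def lookup_r_current(traces, R_v):
    """Greatest stored bitrate that is <= R_v (R_v assumed to be
    clipped within [R_min, R_max])."""
    return max(r for r in traces if r <= R_v)


def frame_size(traces, l, R_min, R_max, R_v, t_current):
    """Output frame size for target rate R_v at trace index t_current,
    following cases a), b) and c) of the main algorithm."""
    if R_min <= R_v < R_max:
        # case a): linear interpolation between adjacent stored traces
        r_current = lookup_r_current(traces, R_v)
        size_lo = traces[r_current][t_current]
        size_hi = traces[r_current + l][t_current]
        distance_lo = (R_v - r_current) / l
        return size_hi * distance_lo + size_lo * (1 - distance_lo)
    if R_v < R_min:
        # case b): scale down the lowest-bitrate trace, 1-byte floor
        return max(1, (R_v / R_min) * traces[R_min][t_current])
    # case c): scale up the highest-bitrate trace
    return (R_v / R_max) * traces[R_max][t_current]
```

With equal step length l, the key r_current + l is guaranteed to exist
in the map whenever R_v < R_max, which is what case a) relies on.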
6.2.2. Notes to the main algorithm
- * Reacting to changes in target bitrate.  Similarly to the
- statistical model presented in Section 5, the trace-driven synthetic
- codec can have a time bound, tau_v, to reacting to target bitrate
- changes.  If the codec has reacted to an update in R_v at time t, it
- will delay any further update to R_v to time t + tau_v.  Note that,
- in any case, the value of tau_v cannot be chosen shorter than the
- time between frames, i.e.  the inverse of the frame rate.
+ Note that the main algorithm as described above can be further
+ to mimic some additional typical behaviors of a live encoder. Two
+ examples are given below:
- * I-frames on demand.  The synthetic codec could be extended to
+ o I-frames on demand: The synthetic codec can be extended to
simulate the sending of Iframes on demand, e.g., as a reaction to
- losses.  To implement this extension, the codec's API is augmented
- with a new function to request a new I-frame.  Upon calling such
- function, t_current is reset to 0.
+ losses. To implement this extension, the codec's incoming
+ interface (see (a) in Figure 1) is augmented with a new function
+ to request a new I-frame.  Upon calling such function, t_current
+ is reset to 0.
- * Variable length l of steps defined between R_min and R_max.  In the
- main algorithm's description, the step length l is fixed.  However,
- if the range [R_min, R_max] is very wide, it is also possible to
- define a set of steps with a non-constant length.  The idea behind
- this modification is that the difference between 400 kbps and 600
- kbps as bitrate is much more important than the difference between
- 4400 kbps and 4600 kbps.  For example, one could define steps of
- length 200 Kbps under 1 Mbps, then steps of length 300 kbps between 1
- Mbps and 2 Mbps; 400 kbps between 2 Mbps and 3 Mbps, and so on.
+ o Variable step length l between R_min and R_max: In the main
+ algorithm, the step length l is fixed for ease of explanation.
+ However, if the range [R_min, R_max] is very wide, it is also
+ possible to define a set of intermediate encoding rates with
+ variable step length. The rationale behind this modification is
+ that the difference between 400 kbps and 600 kbps as target
+ bitrate is much more significant than the difference between 4400
+ kbps and 4600 kbps. For example, one could define steps of length
+ 200 Kbps under 1 Mbps, then steps of length 300 Kbps between 1
+ Mbps and 2 Mbps; 400 Kbps between 2 Mbps and 3 Mbps, and so on.
6.3. Varying frame rate and resolution
The trace-driven synthetic codec model explained in this section is
- relatively simple because we have fixed the frame rate and the frame
- resolution.  The model could be extended to have variable frame rate,
- variable spatial resolution, or both.
+ relatively simple due to fixed frame rate and frame resolution. The
+ model can be extended further to accommodate variable frame rate and/or
+ variable spatial resolution.
When the encoded picture quality at a given bitrate is low, one can
- potentially decrease the frame rate (if the video sequence is
+ potentially decrease either the frame rate (if the video sequence is
currently in low motion) or the spatial resolution in order to
improve quality-of-experience (QoE) in the overall encoded video.  On
the other hand, if target bitrate increases to a point where there is
no longer a perceptible improvement in the picture quality of
individual frames, then one might afford to increase the spatial
resolution or the frame rate (useful if the video is currently in
high motion).
Many techniques have been proposed to choose over time the best
combination of encoder quantization step size, frame rate, and spatial
@@ -596,31 +607,31 @@
[Ozer2011][Hu2010].  Future work may consider extending the trace-
driven codec to accommodate variable frame rate and/or resolution.
From the perspective of congestion control, varying the spatial
resolution typically requires a new intra-coded frame to be
generated, thereby incurring a temporary burst in the output traffic
pattern. The impact of frame rate change tends to be more subtle:
reducing frame rate from high to low leads to sparsely spaced larger
encoded packets instead of many densely spaced smaller packets. Such
difference in traffic profiles may still affect the performance of
- congestion control, especially when outgoing packets are not paced at
- the transport module.  We leave the investigation of varying frame
- rate to future work.
+ congestion control, especially when outgoing packets are not paced by
+ the media transport module. Investigation of varying frame rate and
+ resolution are left for future work.
7. Combining The Two Models
It is worthwhile noting that the statistical and trace-driven models
- each has its own advantages and drawbacks.  While both models are
- fairly simple to implement, it takes significantly greater effort to
- fit the parameters of a statistical model to actual encoder output
- data whereas it is straightforward for a trace-driven model to obtain
+ each has its own advantages and drawbacks. Both models are fairly
+ simple to implement. It takes significantly greater effort to fit
+ the parameters of a statistical model to actual encoder output data
+ whereas it is straightforward for a tracedriven model to obtain
encoded frame size data. On the other hand, once validated, the
statistical model is more flexible in mimicking a wide range of
encoder/content behaviors by simply varying the corresponding
parameters in the model. In this regard, a trace-driven model
relies, by definition, on additional data collection efforts for
accommodating new codecs or video contents.
In general, the trace-driven model is more realistic for mimicking
ongoing, steady-state behavior of a video traffic source whereas the
statistical model is more versatile for simulating transient events
[ASCII art figure not recoverable from this excerpt]
Figure 3: Hybrid approach for modeling video traffic
As shown in Figure 3, the video traffic model operates in transient
state if the requested target rate R_v is substantially higher than
the previous target, or else it operates in steady state. During
transient state, a total of K_d frames are generated by the
statistical model, resulting in one big burst frame with size K_B
followed by K_d-1 smaller frames. When operating at steady state,
the video traffic model simply generates a frame according to the
trace-driven model given the target rate, while modulating the frame
interval according to the distribution specified by the statistical
model. One example criterion for determining whether the traffic
model should operate in transient state is whether the rate increase
exceeds 10% of the previous target rate. Finally, as this model
follows the transient state behavior dictated by the statistical
model, upon a substantial rate change it will follow the time-damping
mechanism defined in Section 5.1, which is governed by the parameter
tau_v.
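The transient/steady-state switching logic above can be sketched in a
few lines of Python. This is a minimal, non-normative illustration:
the values K_d = 8 and the burst factor for K_B are assumptions made
here, the helper name `frames_for_target` is hypothetical, and the
steady-state branch stands in for the trace-driven lookup.

```python
# Hypothetical sketch of the hybrid traffic source described above.
# Assumptions (not from the draft): K_d = 8 transient frames, a burst
# frame K_B of three times the average frame budget, and a constant
# steady-state frame size in place of the trace-driven model.

FRAME_RATE = 30.0           # frames per second (fixed frame rate)
K_D = 8                     # frames generated while in transient state
TRANSIENT_THRESHOLD = 0.10  # enter transient state on a >10% rate increase

def frames_for_target(r_v_bps, r_prev_bps):
    """Return a list of K_D frame sizes (bytes) for target rate r_v_bps."""
    budget = r_v_bps / FRAME_RATE / 8.0   # average bytes per frame
    if r_v_bps > r_prev_bps * (1.0 + TRANSIENT_THRESHOLD):
        # Transient state: one big burst frame of size K_B, followed by
        # K_D - 1 smaller frames sized so the K_D-frame average still
        # matches the per-frame budget at the new target rate.
        k_b = 3.0 * budget
        small = (K_D * budget - k_b) / (K_D - 1)
        return [k_b] + [small] * (K_D - 1)
    # Steady state: the trace-driven model would supply each frame size;
    # here every frame simply receives the average budget.
    return [budget] * K_D

# A 20% rate jump triggers the transient-state burst:
sizes = frames_for_target(1_200_000, 1_000_000)
```

The sketch omits the frame-interval modulation and the tau_v
time-damping mechanism of Section 5.1, both of which the statistical
model governs in the full hybrid scheme.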
8. Implementation Status
The statistical model has been implemented as a traffic generator
module within the [ns2] network simulation platform.
More recently, the statistical, trace-driven, and hybrid models have
been implemented as a stand-alone, platform-independent traffic
source module. This can be easily integrated into network simulation
platforms such as [ns2] and [ns3], as well as testbeds using a real
network. The stand-alone traffic source module is available as an
open source implementation at [Syncodecs].
9. IANA Considerations
There are no IANA impacts in this memo.
10. References
10.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119,
DOI 10.17487/RFC2119, March 1997,
<https://www.rfc-editor.org/info/rfc2119>.
10.2. Informative References
[H264] ITU-T Recommendation H.264, "Advanced video coding for
generic audiovisual services", May 2003,
<https://www.itu.int/rec/T-REC-H.264>.

[HEVC] ITU-T Recommendation H.265, "High efficiency video
coding", April 2013,
<https://www.itu.int/rec/T-REC-H.265>.
[Hu2010] Hu, H., Ma, Z., and Y. Wang, "Optimization of Spatial,
Temporal and Amplitude Resolution for RateConstrained
Video Coding and Scalable Video Adaptation", in Proc. 19th
IEEE International Conference on Image
Processing, (ICIP'12), September 2012.
[IETFInterim]
Zhu, X., Mena, S., and Z. Sarker, "Update on RMCAT Video
Traffic Model: Trace Analysis and Model Update", April
2017.
[ns2] "The Network Simulator - ns-2",
<https://www.isi.edu/nsnam/ns/>.

[ns3] "The Network Simulator - ns-3", <https://www.nsnam.org>.
[Ozer2011]
Ozer, J., "Video Compression for Flash, Apple Devices and
HTML5", ISBN-13: 978-0976259503, 2011.