PPSP                                                               Y. Gu
Internet-Draft                                                   N. Zong
Intended status: Standards Track                                  Huawei
Expires: September 12, 2011                                   Hui. Zhang
                                                        NEC Labs America.
                                                           Yunfei. Zhang
                                                             China Mobile
                                                                   J. Lei
                                                University of Goettingen
                                                       Gonzalo. Camarillo
                                                                 Ericsson
                                                                Yong. Liu
                                                  Polytechnic University
                                                         Delfin. Montuno
                                                                 Lei. Xie
                                                                   Huawei
                                                          March 11, 2011
                  Survey of P2P Streaming Applications
                        draft-ietf-ppsp-survey-01
Abstract
This document presents a survey of popular Peer-to-Peer streaming
applications on the Internet.  We focus on the Architecture and Peer
Protocol/Tracker Signaling Protocol descriptions in the presentation,
and study a selection of well-known P2P streaming systems, including
Joost, PPLive, and other popular existing systems.  Through the
survey, we summarize a common P2P streaming process model and the
corresponding signaling process for P2P Streaming Protocol
[...]
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on September 12, 2011.
Copyright Notice

Copyright (c) 2011 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
   1.  Introduction
   2.  Terminologies and concepts
   3.  Survey of P2P streaming system
       3.1.  Mesh-based P2P streaming systems
             3.1.1.  Joost
             3.1.2.  Octoshape
             3.1.3.  PPLive
             3.1.4.  Zattoo
             3.1.5.  PPStream
             3.1.6.  SopCast
             3.1.7.  TVants
       3.2.  Tree-based P2P streaming systems
             3.2.1.  PeerCast
             3.2.2.  Conviva
       3.3.  Hybrid P2P streaming system
             3.3.1.  New Coolstreaming
   4.  A common P2P Streaming Process Model
   5.  Security Considerations
   6.  Acknowledgments
   7.  Informative References
   Authors' Addresses
1. Introduction
Toward standardizing the signaling protocols used in today's Peer-to-
Peer (P2P) streaming applications, we surveyed several popular P2P
streaming systems regarding their architectures and signaling
protocols between peers, as well as between peers and trackers.  The
studied P2P streaming systems, running worldwide or domestically,
include PPLive, Joost, Cybersky-TV, and Octoshape.  This document
does not intend to cover all design options of P2P streaming
[...]
                       |
                       |
                       |
                       |
         +------------+       +---------+
         | Super Node |-------|  Peer2  |
         +------------+       +---------+

            Figure 1, Architecture of Joost system
The following sections describe Joost QoS-related features, extracted
mostly from [Joost-experiment], [JO2-Moreira] and [JO7-Joost Network
Architecture].
For peer selection, the Host Cache of a peer, which is refreshed
periodically, stores a list of IP addresses and ports of Joost super
nodes.  The selection strategy is influenced by the number of peers
accessing the same content.  Specifically, the number of candidate
peers made available is proportional to the number of active peers.
If there are only a few of them, the Joost content server is made
available to assist in the data delivery.  Although there is no
explicit consideration for peer heterogeneity in peer selection,
low-capacity peers tend to partner with other low-capacity peers.
Peers under the same NAT also tend to serve each other preferentially
[JO2-Moreira].  Joost may consider geographical locality, but it has
no AS-level awareness and does not exploit topological locality,
which may reduce the efficiency of video distribution.
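This proportional candidate selection can be illustrated with a short
sketch.  The following Python fragment is purely illustrative: Joost's
implementation is not public, and the constants and names below are
our assumptions rather than Joost parameters.

   # Illustrative sketch only: candidates are exposed in proportion
   # to the number of active peers on the content, and the content
   # server steps in when the swarm is small.
   import random

   PROPORTION = 0.1   # fraction of active peers offered (assumed)
   MIN_SWARM = 5      # below this, the content server assists (assumed)

   def select_candidates(active_peers, content_server):
       n = max(1, int(len(active_peers) * PROPORTION))
       candidates = random.sample(active_peers, min(n, len(active_peers)))
       if len(active_peers) < MIN_SWARM:
           candidates.append(content_server)
       return candidates

   print(select_candidates(["peerA", "peerB", "peerC"], "content-server"))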
To maintain the overlay networks, super nodes probe clients, clients
probe clients and super nodes, and super nodes communicate with other
super nodes and servers.  To make up for inadequate bandwidth and to
scale, Joost forms groups of Joost Server Islands, each island
consisting of one streaming control server controlling ten streaming
servers.  Moreover, the STUN protocol enables a client to discover
whether it is behind a NAT or firewall, and the type of the NAT or
firewall.
For data delivery, audio and video traffic are streamed separately to
allow for multi-lingual programming.  Content comes mostly from peers
and occasionally from the content server for "long-tail" content.  As
peers are assumed to contribute in a best-effort manner,
infrastructure is needed to make up for insufficient bandwidth,
including in asymmetric scenarios.  However, super nodes are not part
of the bandwidth-supplying infrastructure, as they relay only control
traffic, not data traffic, to clients.  To support the P2P media
distribution services, Joost uses an agent-based peer-to-peer system
called Anthill.  Joost also employs a Local Video Cache for later
viewing and to avoid reloading, but it still requires authorization
from the Joost server when accessing the video file at a later time.
Joost provides large buffering and thus causes a longer start-up
delay for VoD traffic than for live media streaming traffic.  It
affords more FEC for VoD traffic but gives higher delivery priority
to live media streaming traffic.
For Joost, load-balancing and fault-tolerance are shifted directly
into the client, and all of it is done natively in the P2P code.
To enhance the user viewing experience, Joost provides chat
capability between viewers and user program rating mechanisms.
3.1.2. Octoshape
CNN has been working with a P2P plug-in from Octoshape, a Denmark-
based company, to broadcast its live streaming.  Octoshape helps CNN
serve a peak of more than a million simultaneous viewers.  It has
also provided several innovative delivery technologies such as loss-
resilient transport, adaptive bit rate, adaptive path optimization
and adaptive proximity delivery.  Figure 2 depicts the architecture
of the Octoshape system.
[...]
than the playback rate of the live stream.  If not, an artificial
peer may be added to deliver extra bandwidth.

Each single peer has an address book of the other peers that are
watching the same channel.  A standby list is set up based on the
address book.  The peer periodically probes the peers in the standby
list to make sure that they are ready to take over if one of the
current senders stops or gets congested.  [Octoshape]
Peer Protocol: The live stream is first sent to a few peers in the
network and then spread to the rest of the network.  When a peer
joins a channel, it notifies all the other peers about its presence
using the Peer Protocol, which drives the others to add it into their
address books.  Although [Octoshape] declares that each peer records
all the peers joining the channel, we suspect that not all the peers
are recorded, considering that the notification traffic would be
large and peers would be busy with recording when a popular program
starts in a channel and lots of peers switch to this channel.
Possibly only some geographic or topological neighbors are notified,
and the peer gets its address book from these nearby neighbors.
[...]
   *****************************************
                      |
                      |
              +---------------+
              | Content Server|
              +---------------+

        Figure 2, Architecture of Octoshape system
The following sections describe Octoshape QoS-related features,
extracted mostly from [OctoshapeWeb], [OC2-Alstrup] and [OC3-
Alstrup].  As it is a closed system, the details of how the features
are implemented are not available.
To spread the burden of data distribution across several peers and
thus limit the impact of peer loss, Octoshape splits a live stream
into a number of smaller equal-sized sub-streams.  For example, a
400kbit/s live stream is split and coded into 12 distinct 100kbit/s
sub-streams.  Only a subset of these sub-streams needs to reach a
user for it to reconstruct the original live stream.  The number of
distinct sub-streams could be as large as the number of active peers.
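The sub-stream arithmetic can be made concrete with a small sketch.
The Python fragment below uses the 400kbit/s example from above; the
coding scheme itself is proprietary, and all names are illustrative
assumptions.

   # Back-of-the-envelope sketch of the sub-stream arithmetic.
   import math

   def substream_plan(stream_kbps=400, sub_kbps=100,
                      peer_uplinks_kbps=(128, 256, 384)):
       # A viewer needs k distinct sub-streams to rebuild the stream.
       k = math.ceil(stream_kbps / sub_kbps)
       # Each sender contributes as many whole sub-streams as its
       # uplink allows, so peers with uplink < 400 kbit/s still help.
       contributions = [u // sub_kbps for u in peer_uplinks_kbps]
       # Any shortfall is covered by "artificial" peers.
       deficit = max(0, k - sum(contributions))
       return k, contributions, deficit

   print(substream_plan())  # (4, [1, 2, 3], 0)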
Therefore, even if the upload capacity of a peer is smaller than its
download capacity, it is easier for the peer to contribute a sub-
stream than a whole live stream.  An Octoshape peer can then receive
a distinct sub-stream from each neighboring peer.  To make up for
bandwidth asymmetry, artificial end users are used to deliver
additional bandwidth.  Multiple OctoServers are also available to
guarantee that there is no single point of failure [OC3-Alstrup].
Octoshape keeps peers' availability information in an address book.
Each peer keeps a periodically updated standby list and passes it
along with its transmitted sub-stream.  With constant monitoring of
the quality and consistency of each content source, a peer can
switch partners, in case of bottleneck or congestion, to a better
source.
Octoshape provides operators with control over who should and should
not receive certain video signals due to copyright restrictions,
with access control based in part on IP addresses, and with real-
time statistics during any live events.
To optimize bandwidth utilization, Octoshape leverages computers
within a network to minimize external bandwidth usage and to select
the most reliable and "closest" source for each viewer.  It also
chooses the best matching available codecs and players and scales the
bit rate up and down according to the available Internet connection.
Octoshape [OctoshapeWeb] claims to have patented resiliency and
throughput technologies to deliver quality streams to mobile and
wireless edge networks.  This throughput optimization technology also
cleans up latent and lossy network connections between the encoder
and the distribution point, providing a stable, high-quality stream
for distribution.  Octoshape also claims to be able to deliver true
HD, 1280x720 30fps (720p) video over the Internet and to have
advanced DVR functionalities, such as allowing users to move
seamlessly forward and backward through the streams with almost no
waiting time.
3.1.3. PPLive
PPLive is one of the most popular P2P streaming applications in
China.  It has two major communication protocols.  One is the
registration and peer discovery protocol, i.e. the Tracker Protocol,
and the other is the P2P chunk distribution protocol, i.e. the Peer
Protocol.  Figure 3 shows the architecture of PPLive.

Tracker Protocol: First, a peer gets the channel list from the
Channel server, in a way similar to that of Joost.  Then the peer
[...]
          +--------------+
                  |
                  |
                  |
          +---------------+
          | Tracker Server|
          +---------------+

       Figure 3, Architecture of PPlive system
The following sections describe PPLive QoS-related features,
extracted mostly from [PL3-Hei], [PL5-Vu], [PL6-Horvath], and
[PL7-Liu].
After obtaining an initial peer list from the member server, a peer
periodically updates its peer list by querying both the member server
and partner peers.  New peers are aggressively contacted at a fixed
rate.  In selecting partners, a peer considers candidates' upload
bandwidth and, in part, their location information [PL6-Horvath],
accepting on a first-come-first-served basis those that have
responded [PL7-Liu].
For data distribution, PPLive, a data-driven or mesh-pull scheme
[PL3-Hei], divides the media content into small portions called
chunks and uses TCP for video streaming.  Neighbor peers use a
gossip-like protocol to exchange their buffer maps, which indicate
the chunks available for sharing.  Peers obtain their missing chunks
from one or more peers that hold them.  Available chunks may also be
downloaded from the original channel server.
PPLive uses a double-buffering mechanism consisting of the TV Engine
and the Media Player for its stream reassembly and display [PL3-Hei].
The TV Engine is responsible for downloading video chunks from the
PPLive network and streaming the downloaded video to the Media
Player, which in turn displays the content to the user, after each
buffer is filled up to its respective predetermined threshold.
PPLive is observed to give download priority to rare chunks and to
chunks closer to their playout deadline, and to use a sliding window
mechanism to regulate the buffering of chunks.
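The observed policy can be sketched as follows.  The Python fragment
below is an illustration only; the window size and the exact priority
order are assumptions, not PPLive internals.

   # Minimal sketch: within a sliding window, request rare chunks
   # and chunks near their playout deadline first.
   def schedule_requests(missing, neighbor_maps, playhead, window=50):
       """missing: chunk ids we lack; neighbor_maps: peer -> chunk set."""
       candidates = [c for c in missing
                     if playhead <= c < playhead + window
                     and any(c in m for m in neighbor_maps.values())]

       def priority(chunk):
           holders = sum(1 for m in neighbor_maps.values() if chunk in m)
           urgency = chunk - playhead   # smaller = closer to deadline
           return (holders, urgency)    # rarest first, then most urgent

       return sorted(candidates, key=priority)

   maps = {"n1": {10, 11}, "n2": {10}}
   print(schedule_requests({10, 11, 12}, maps, playhead=10))  # [11, 10]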
To utilize available peer resources, peers in one subscribed overlay
may also be harnessed to support peers in other subscribed overlays
[PL5-Vu].
3.1.4. Zattoo
Zattoo is a P2P live streaming system that serves over 3 million
registered users across European countries [Zattoo].  The system
delivers live streaming using a receiver-based, peer-division
multiplexing scheme.  Zattoo reliably streams media among peers using
a mesh structure.

Figure 4 depicts the typical procedure for a single TV channel
carried over the Zattoo network.  First, the Zattoo system broadcasts
live TV, captured from satellites, onto the Internet.  Each TV
channel is delivered through a separate P2P network.
   -------------------------------
   |  ------------------         |           --------
   |  | Broadcast      |         |----------|Peer1  |-----------
   |  | Servers        |         |           --------          |
   |  Administrative Servers     |                  -------------
   |  ------------------------   |                  | Super Node|
[...]
interested TV channel.  In return, the Rendezvous Server sends back a
list of joined peers carrying the channel.

Peer Protocol: Similar to the aforementioned procedures in Joost and
PPLive, a new Zattoo peer requests to join an existing peer from the
peer list.  Based on bandwidth availability, the requested peer
decides how to multiplex a stream onto its set of neighboring peers.
When packets arrive at the peer, sub-streams are stored for
reassembly into the full stream.

Note that Zattoo relies on a Bandwidth Estimation Server to initially
estimate the amount of available uplink bandwidth at a peer.  Once a
peer starts to forward sub-streams to other peers, it receives QoS
feedback from the receivers if the quality of a sub-stream drops
below a threshold.
The following sections describe Zattoo QoS-related features,
extracted mostly from [ZT1-Chang].
For reliable data delivery, each live stream is partitioned into
video segments.  Each video segment is coded for forward error
correction, with a Reed-Solomon error-correcting code, into n sub-
stream packets, such that obtaining k correct packets of a segment is
sufficient to reconstruct the remaining n-k packets of the same video
segment.  To receive a video segment, each peer then specifies the
sub-stream(s) of the video segment it would like to receive from the
neighboring peers.
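The k-of-n property can be illustrated with a minimal sketch of the
receiver-side bookkeeping.  The Reed-Solomon decoding itself is
omitted, and the values n=16 and k=12 are invented for illustration.

   # Sketch of k-of-n bookkeeping: a segment is decodable once any
   # k of its n packets have arrived.
   class Segment:
       def __init__(self, n=16, k=12):
           self.n, self.k = n, k
           self.received = {}              # packet index -> payload

       def add_packet(self, index, payload):
           self.received[index] = payload
           return self.decodable()

       def decodable(self):
           # With an MDS code such as Reed-Solomon, any k correct
           # packets suffice to reconstruct the remaining n-k.
           return len(self.received) >= self.k

   seg = Segment()
   for i in range(12):
       done = seg.add_packet(i, b"data")
   print(done)  # True: the segment is now reconstructable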
Zattoo uses a Peer-Division Multiplexing (PDM) scheme to set up its
data delivery topology.  In this scheme, each new peer independently
executes the Search and Join phases.  In the Search phase, a peer
queries the members of the peer list for sub-stream availability; in
response, it receives additional prospective peers, sub-stream
availability, quality indications, and sub-stream sequence numbers;
it then selects partnering peers from among the responses, or quits
after two failed search attempts.
In the Join phase, the joining peer, having selected the candidate
peers, requests to partner with some of them, spreading the load
among them and preferring topologically close-by peers, if those
peers have less capacity or carry lower-quality sub-streams.  Barring
departure or performance degradation of neighboring peers, the
established connections persist, and the specified sub-stream packet
of every segment continues to be forwarded without further per-packet
handshaking between peers.
To manage streams efficiently for incoming and outgoing destinations,
each peer has a packet buffer called the IOB (Input-Output Buffer).
The IOB is referenced by an input pointer, a repair pointer, and one
or more output pointers, one for each forwarding destination, such as
a player, a file, or another peer.  The input pointer points to the
slot in the IOB where the next incoming packet with a sequence number
higher than the highest sequence number received so far will be
stored.  The repair pointer always points to one slot beyond the last
packet received in order and is used to regulate packet
retransmission and adaptive PDM (described later).  A packet map and
a forwarding discipline are associated with each output pointer to
accommodate the different forwarding rates and regimes required by
the destinations.  Note that retransmission requests are sent to
random peers, not to partnering peers, and they are honored only if
the requested packets are still in the IOB and there is sufficient
left-over capacity to transmit all the requested packets.  To avoid
buffer overrun, a set of two buffers is used in the IOB instead of a
circular buffer.
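The pointer bookkeeping can be sketched as follows.  The fragment
below is a simplified single-buffer illustration; the real IOB uses a
set of two buffers, and all names here are assumptions.

   # Simplified sketch of the IOB pointers.
   class IOB:
       def __init__(self):
           self.slots = {}       # sequence number -> packet
           self.input_ptr = 0    # one past the highest sequence received
           self.repair_ptr = 0   # one past the last packet received in order

       def store(self, seq, pkt):
           self.slots[seq] = pkt
           self.input_ptr = max(self.input_ptr, seq + 1)
           while self.repair_ptr in self.slots:  # advance over in-order data
               self.repair_ptr += 1

       def missing(self):
           # The gap between the repair and input pointers drives
           # retransmission requests and adaptive PDM decisions.
           return [s for s in range(self.repair_ptr, self.input_ptr)
                   if s not in self.slots]

   iob = IOB()
   for seq in (0, 1, 3):
       iob.store(seq, b"pkt")
   print(iob.missing())  # [2]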
Zattoo uses an adaptive Peer-Division Multiplexing scheme to handle
longer-term bandwidth fluctuations.  In this scheme, each peer
determines how many sub-streams to transmit and when to switch
partners.  Specifically, each peer continually estimates the amount
of available uplink bandwidth, based initially on probe packets to
the Zattoo Bandwidth Estimation Server and later on peer QoS
feedback, using different algorithms depending on the underlying
transport protocol.  A peer increases its estimated available uplink
bandwidth, if the current estimate is below some threshold and there
has been no bad-quality feedback from neighboring peers for a period
of time, according to an algorithm similar to how TCP maintains its
congestion window size.  Each peer then admits neighbors based on the
currently estimated available uplink bandwidth.  In case a new
estimate indicates insufficient bandwidth to support the existing
number of peer connections, connections are closed one at a time,
preferably starting with the one requiring the least bandwidth.  On
the other hand, if the loss rate of packets from a peer's neighbor
reaches a certain threshold, the peer will attempt to shift the
degraded neighboring peer's load to other existing peers while
looking for a replacement peer.  When a replacement is found, the
load is shifted to it and the degraded neighbor is dropped.
Similarly, if a peer's neighbor is lost due to departure, the peer
initiates the process to replace the lost peer.  To optimize the PDM
configuration, a peer may occasionally initiate switching existing
partnering peers to topologically closer peers.
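The estimation and admission logic can be sketched as follows.  The
step size, threshold, and quiet period below are invented values,
since the actual algorithms depend on the transport protocol in use.

   # Sketch of the adaptive uplink estimation and neighbor admission.
   SUBSTREAM_KBPS = 100   # uplink cost of one sub-stream (assumed)

   def update_uplink_estimate(estimate_kbps, threshold_kbps,
                              quiet_secs, bad_feedback):
       # Probe upward, TCP-congestion-window style, only while below
       # the threshold and free of bad-quality feedback.
       if (not bad_feedback and estimate_kbps < threshold_kbps
               and quiet_secs >= 30):
           estimate_kbps += SUBSTREAM_KBPS
       return estimate_kbps

   def admit_neighbors(estimate_kbps, neighbor_costs_kbps):
       # If the estimate no longer covers existing connections, close
       # them one at a time, starting with the least expensive one.
       admitted = sorted(neighbor_costs_kbps)
       while admitted and sum(admitted) > estimate_kbps:
           admitted.pop(0)
       return admitted

   print(update_uplink_estimate(300, 500, quiet_secs=45, bad_feedback=False))
   print(admit_neighbors(250, [100, 100, 100]))  # one connection is shed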
3.1.5. PPStream
The system architecture and working flow of PPStream are similar to
those of PPLive.  PPStream transfers data mostly over TCP, only
occasionally over UDP.

Video download policy of PPStream:

1  The top ten peers do not contribute a large part of the download
   traffic.  This suggests that PPStream gets the video from many
   peers simultaneously, and that its peers have long session
   durations;

2  PPStream does not send multiple chunk requests for different
   chunks to one peer at one time;

PPStream maintains a constant peer list with a relatively large
number of peers.  [P2PIPTV-measuring]
The following sections describe PPStream QoS-related features,
extracted mostly from [PS3-Li], [PS4-Jia] and [PS5-Wei].
PPStream is mainly mesh-based, but its data distribution topology is
layered to some extent.  It uses geographic clustering to some
extent, based on the geographic longitude and latitude of the IP
addresses [PS4-Jia].
To ensure data availability, some form of chunk retransmission
request mechanism is used and the buffer map is shared at a high
rate, although concurrent requests for the same data chunk are rare.
Each data chunk, identified by the play time offset encoded by the
program source, is divided into 128 sub-chunks of 8KB each.  The
chunk id is used to ensure sequential ordering of received data
chunks.
The buffer map consists of one or more 128-bit flags denoting the
availability of sub-chunks, together with a corresponding time
offset.  Usually a buffer map contains only one data chunk at a time
and is thus smaller than that of PPLive.  It also conveys the sending
peer's playback status to the other peers, because as soon as a data
chunk is played back, the chunk is deleted or replaced by the next
data chunk [PS5-Wei].
At the initiating stage, a peer can use up to 4 data chunks; at the
stabilized stage, a peer usually uses one data chunk; and in the
transient stage, a peer uses a variable number of chunks.  Although
sub-chunks within each data chunk are fetched nearly at random,
without a rarest-first or greedy policy, the fetching pattern of one
data chunk seems to repeat in the following data chunks [PS3-Li].
Moreover, high-bandwidth PPStream peers tend to receive chunks
earlier and contribute more than lower-bandwidth peers.
3.1.6. SopCast
The system architecture and working flow of SopCast are similar to
those of PPLive.  SopCast transfers data mainly over UDP, only
occasionally over TCP.  The top ten peers contribute about half of
the total download traffic.  SopCast's download policy is similar to
PPLive's in that it switches periodically between provider peers.
However, SopCast seems to always need more than one peer to get the
video, while in PPLive a single peer could be the only video
provider.  SopCast's peer list can be as large as PPStream's, but
SopCast's peer list varies over time.  [P2PIPTV-measuring]
The following sections describe SopCast QoS-related features,
extracted mostly from [SC1-Ali], [SC2-Ciullo], [SC4-Fallica], [SC5-
Sentinelli], [SC6-Silverston], and [SC7-Tang].
SopCast allows for software updates through a centralized web server
(over HTTP) and makes the channel list available through another
centralized server (also over HTTP).
SopCast traffic is encoded, and SopCast TV content is divided into
video chunks or blocks of equal size, 10KB [SC7-Tang].  Sixty percent
of its traffic consists of signaling packets and 40% of actual video
data packets [SC4-Fallica].  SopCast produces more signaling traffic
than PPLive, PPStream, and TVAnts, whereas PPLive produces the least
[SC6-Silverston].  Its traffic is also noted to have long-range
dependency [SC6-Silverston], indicating that mitigating it with QoS
mechanisms may be difficult.  [SC1-Ali] reported that the SopCast
communication mechanism starts with UDP for the exchange of control
messages among peers, using a gossip-like protocol, and then moves to
TCP for the transfer of video segments.  This use of TCP for data
transfer seems to contradict others' findings [SC4-Fallica][SC6-
Silverston].
To discover candidate peers, a peer requests a peer list from the
tracker or from a neighboring peer using a gossip-like protocol.  To
retrieve content [SC4-Fallica], a new peer contacts peers selected
randomly from the peer list it obtained by querying the root servers
(trackers).  The process of contacting peers slows down after the
initial bootstrap phase [SC3-Horvath][SC2-Ciullo].  The number of
peers a node typically connects to for download is about 2 to 5
[SC5-Sentinelli], and there is no observed preference for peers with
shorter paths [SC2-Ciullo].  Partner peers periodically advertise
content availability and exchange sought content.  In forming
multiple parent-children relationships, a peer does not exploit peer
location information [SC3-Horvath].  In general, parents are chosen
solely based on performance; however, lower-capacity nodes seem to
choose parents that are closer, to improve performance and to
compensate for their bandwidth constraints [SC1-Ali].  When needed, a
peer can download video streams directly from the Source Provider, a
node that broadcasts the entire video [SC7-Tang].  In the process of
data exchange, there is no enforcement of tit-for-tat-like mechanisms
[SC2-Ciullo].
Similar to PPLive, SopCast uses a double-buffering mechanism.  The
SopCast buffer downloads video chunks from the network and stores
them; upon exceeding a predetermined number of stored chunks, it
launches the Media Player.  The Media Player buffer then downloads
video content from the local web server's listening port and, upon
receiving a sufficient amount of content, starts video playback.
3.1.7. TVants
The system architecture and working flow of TVAnts are similar to
those of PPLive.  TVAnts is more balanced between TCP and UDP in data
transmission.
[...]
from its peer list to connect and exchange peer information (e.g.
buffer map, peer status, etc.) with connected peers to know where
to get what data;

(5) The new peer decides what data should be requested in which
order/priority, using some scheduling algorithm and the peer
information obtained in Step (4);

(6) The new peer requests the data from some connected peers.
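Steps (4) through (6) can be illustrated end to end with a small,
self-contained sketch.  Every name and structure below is invented
for illustration and is taken from none of the surveyed systems.

   # Sketch of the generic join/exchange/schedule/request flow.
   import random

   def join_and_fetch(tracker, channel, playhead, my_chunks):
       peers = tracker[channel]                 # steps (1)-(3) elided
       partners = random.sample(peers, min(4, len(peers)))
       offers = {}                              # step (4): exchange info
       for p in partners:
           for chunk in p["buffer_map"]:
               offers.setdefault(chunk, p)
       # Step (5): schedule -- here simply earliest deadline first.
       wanted = sorted(c for c in offers
                       if c >= playhead and c not in my_chunks)
       for chunk in wanted:                     # step (6): request data
           my_chunks[chunk] = offers[chunk]["name"]
       return my_chunks

   tracker = {"ch1": [{"name": "peerA", "buffer_map": {7, 8}},
                      {"name": "peerB", "buffer_map": {8, 9}}]}
   print(join_and_fetch(tracker, "ch1", playhead=8, my_chunks={}))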
The following sections describe TVAnts QoS-related features,
extracted mostly from [TV1-Alessandria], [TV2-Ciullo], and [TV3-
Horvath].
The TVAnts peer discovery mechanism is very greedy during the first
part of a peer's life and stabilizes afterwards [TV2-Ciullo].
For data delivery, peers exhibit a mild preference to exchange data
with peers in the same Autonomous System and also with peers in the
same subnet.  A TVAnts peer also exhibits some preference to download
from closer peers.  According to [TV3-Horvath], TVAnts peers exploit
location information and download mostly from high-bandwidth peers.
However, TVAnts does not seem to enforce any tit-for-tat mechanisms
in the data delivery.
TVAnts [TV1-Alessandria] seems to be sensitive to network impairments
such as changes in network capacity, packet loss, and delay.  Under
capacity loss, a peer always seeks more peers to download from.
While trying to avoid bad paths and to select good peers to continue
downloading from, a peer can behave aggressively, potentially harming
both the application and the network, when a bottleneck affects all
potential peers.
When limited access capacity is experienced, a peer reacts by
increasing redundancy (with FEC or ARQ mechanisms) as if reacting to
loss, which results in a higher download rate.  To recover from
packet losses, some kind of ARQ mechanism is also used.  Although
network conditions do impact video stream distribution, such as
network delay affecting the start-up phase, they seem to have little
impact on the network topology discovery and maintenance process.
3.2. Tree-based P2P streaming systems
3.2.1. PeerCast
PeerCast adopts a tree structure.  The architecture of PeerCast is
shown in Figure 6.

Peers in one channel construct the Broadcast Tree, and the Broadcast
server is the root of the tree.  A Tracker can be implemented
independently or merged into the Broadcast server.  The Tracker in a
tree
[...]
       +---------+                 +---------+
          /   \                       /   \
         /     \                     /     \
        /       \                   /       \
   +---------+ +---------+   +---------+ +---------+
   |Receiver1| |Receiver2|   |Receiver3| |Receiver4|
   +---------+ +---------+   +---------+ +---------+

          Figure 6, Architecture of PeerCast system
The following sections describe PeerCast QoS-related features,
extracted mostly from [PC1-Deshpande], [PC2-http] and [PC3-http].
Each PeerCast node has a peering layer, which sits between the
application layer and the transport layer.  The peering layer of each
node coordinates with those of similar nodes to establish and
maintain a multicast tree.  Moreover, the peering layer also supports
a simple, lightweight redirect primitive.  This primitive allows a
peer p to direct another peer c, which either is opening a data-
transfer session with p or already has a session established with p,
to a target peer t, with which c then tries to establish a data-
transfer session.  Peer discovery starts at the root (source) or at
some selected sub-tree root and proceeds recursively down the tree
structure.  When a peer leaves normally, it informs its parent, which
then releases the peer; the leaving peer also redirects all its
immediate children to find new parents, starting at some target t.
The peering layer allows for different policies of topology
maintenance.  In choosing a parent from among the children of a given
peer, a child can be chosen randomly, one at a time in some fixed
order, or based on least access latency with respect to the choosing
peer.  There are also many choices of peers at which to start and
limit the search.  The combinations are:

o  all the descendants of a leaving peer start searching from the
   root [Root-All (RTA)];

o  only the children of a leaving peer start searching from the root
   [Root (RT)];

o  all the descendants of a leaving peer start searching from the
   parent of the leaving peer [Grandfather-All (GFA)]; and

o  only the children of the leaving peer start searching from the
   parent of the leaving peer [Grandfather (GF)].
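The four policies can be summarized in a short sketch; the node
representation below is invented for illustration.

   # Sketch of the four parent re-search policies.
   def research_plan(policy, leaving, root):
       """Return (peers that must search again, node each search starts at)."""
       scope = {"RTA": descendants(leaving), "RT": leaving["children"],
                "GFA": descendants(leaving), "GF": leaving["children"]}[policy]
       start = root if policy in ("RTA", "RT") else leaving["parent"]
       return scope, start

   def descendants(node):
       out = []
       for child in node["children"]:
           out.append(child)
           out.extend(descendants(child))
       return out

   root = {"name": "source", "parent": None, "children": []}
   peer = {"name": "p", "parent": root, "children": []}
   child = {"name": "c", "parent": peer, "children": []}
   root["children"].append(peer)
   peer["children"].append(child)
   scope, start = research_plan("GF", peer, root)
   print([n["name"] for n in scope], start["name"])  # ['c'] source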
A heart-beat mechanism at each peer is available to handle failed
peers.  With this mechanism, a peer sends keep-alive messages to its
parent and children.  If a parent peer detects that a child has
skipped a specified number of heart-beats, it deems the child lost
and tidies up.  Similarly, a child peer starts its search for a new
parent once its current parent is deemed to have left.
PeerCast also proposes, but has not evaluated, a number of algorithms
that use some cost function to optimize the overlay.  Some of them
are described next.  If a parent is already saturated, a newly
arrived peer replaces one of the children costlier than itself, and
the replaced peer tries to reconnect somewhere else [Knock-Down].  A
newly arrived peer replaces the target peer, and the target peer
becomes its child [Join-Flip].  Unstable peers are pushed down to the
bottom of the tree [Leaf-Sink].  An existing child-parent
relationship is flipped [Maintain-Flip].
3.2.2. Conviva
Conviva(TM) [conviva] is a real-time media control platform for
Internet multimedia broadcasting.  For its early prototype, End
System Multicast (ESM) [ESM04] is the underlying networking
technology for organizing and maintaining an overlay broadcasting
topology.  Next, we present an overview of ESM.  ESM adopts a tree
structure.  The architecture of ESM is shown in Figure 7.

ESM has two versions of protocols: one for smaller-scale conferencing
[...]
       +---------+                 +---------+
          /   \                       /   \
         /     \                     /     \
        /       \                   /       \
   +---------+ +---------+   +---------+ +---------+
   |  Peer3  | |  Peer4  |   |  Peer5  | |  Peer6  |
   +---------+ +---------+   +---------+ +---------+

             Figure 7, Architecture of ESM system
The following sections describe ESM QoS-related features, extracted
mostly from [CVV1-Zhang], [CVV4-Chu], [CVV5-Chu], and [CVV6-Chu]; the
details of Conviva itself are not publicly available.
ESM constructs the multicast tree in a two-step process.  It first
constructs a mesh of the participating peers, the mesh having the
following properties:

o  The shortest path delay between any pair of peers in the mesh is
   at most K times the unicast delay between them, where K is a small
   constant.

o  Each peer has a limited number of neighbors in the mesh, which
   does not exceed a given (per-member) bound chosen to reflect the
   bandwidth of the peer's connection to the Internet.
It then constructs (reverse) shortest-path spanning trees of the
mesh, with the root being the source.

A peer therefore participates in two types of topology management: a
control structure, in which peers make sure they are always connected
in a mesh, and a data delivery structure, in which peers make sure
data gets delivered to them in a tree structure.
To keep connected, each peer maintains communication with a small
number of random neighbors and a complete list of members through a
gossip-like algorithm.  When a new node joins, it gets a list of
group members from the source.  To look for a parent, it sends probe
requests to a subset of the group members it obtained; evaluates them
with respect to delay to the source, application throughput, and link
bandwidth; and then chooses from among them a candidate parent that
is not a descendant and is not saturated.  In addition to using RTT
probes, consisting of 1-Kbyte transfers to detect bottleneck
bandwidth, the performance history of previously chosen parents is
also considered.  The peer also avoids probing hosts with low
bottleneck bandwidth.
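The parent-choice rule can be sketched as follows; the ordering of
the criteria below is an assumption, and the real ESM also folds in
parent performance history.

   # Sketch of candidate-parent selection from probe results.
   def choose_parent(candidates, my_descendants, low_bw_kbps=56):
       eligible = [c for c in candidates
                   if c["name"] not in my_descendants    # avoid loops
                   and not c["saturated"]                # spare degree
                   and c["bottleneck_kbps"] > low_bw_kbps]
       # Prefer low delay to source, then throughput and bandwidth.
       return min(eligible,
                  key=lambda c: (c["delay_to_source_ms"],
                                 -c["throughput_kbps"],
                                 -c["bottleneck_kbps"]),
                  default=None)

   probes = [{"name": "a", "saturated": False, "bottleneck_kbps": 800,
              "delay_to_source_ms": 40, "throughput_kbps": 350},
             {"name": "b", "saturated": True, "bottleneck_kbps": 900,
              "delay_to_source_ms": 25, "throughput_kbps": 500}]
   print(choose_parent(probes, my_descendants=set())["name"])  # "a"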
When a peer leaves normally, it notifies its neighboring peers, and
the neighboring peers propagate the departing peer's info.  At the
same time, the departing peer continues to forward packets for some
time to minimize transient packet loss.  When a peer leaves due to
failure, active peers detect its departure through its non-
responsiveness to their probe messages.  Active peers that detect the
loss then propagate the departed peer's info.  A departed-peer list,
flushed after a sufficient amount of time has passed, keeps track of
leaving and failed peers.  The list makes it possible to distinguish
refreshes from an active peer from those of a leaving/failed peer.
Departing peers and failing peers could in some instances partition
the mesh into two or more components.  A mesh repair algorithm
detects such occurrences by noticing a split in the membership list
and tries to repair it by virtually linking active members to one of
the non-active members, trying one non-active member at a time.
To improve the mesh/tree structural and operating quality, each peer
randomly probes others to add new links with a perceived gain in
utility, and each peer continually monitors existing links to drop
those with a perceived drop in utility.  Parent switching occurs if a
peer leaves or fails, if there is persistent congestion or a low-
bandwidth condition, or if there is a better clustering
configuration.  To leave more public hosts available to become
parents of NATed hosts, public hosts preferentially choose NATed
hosts as parents.
The data delivery structure, obtained by running a distance-vector
protocol on top of the mesh with latency between neighbors as the
routing metric, is maintained using various mechanisms.  Each peer
maintains and keeps up to date the routing cost to every other
member, together with the path that leads to that cost.  To ensure
routing table stability, data continues to be forwarded along the old
routes for sufficient time until the routing tables converge.  This
time is set to be larger than the cost of any path with a valid
route, but smaller than infinite cost.  To make better use of the
path bandwidth, streams of different bit rates are forwarded
according to the following priority scheme: audio is prioritized over
video streams, and lower-quality video over higher-quality video.
Moreover, stream bit rates are adapted to the peers' performance
capability.
3.3. Hybrid P2P streaming system
3.3.1. New Coolstreaming
Coolstreaming, first released in summer 2004 with a mesh-based
structure, arguably represented the first successful large-scale P2P
live streaming system.  As analyzed above, it had poor delay
performance and high overhead associated with each video block
transmission.  Later, New Coolstreaming [NewCoolstreaming] adopted a
hybrid mesh and tree structure with a hybrid pull and push mechanism.
All the peers
[...]
       |  Peer1  |                 |  Peer2  |
       +---------+                 +---------+
          /   \                       /   \
         /     \                     /     \
        /       \                   /       \
   +---------+ +---------+   +---------+ +---------+
   |  Peer2  | |  Peer3  |   |  Peer1  | |  Peer3  |
   +---------+ +---------+   +---------+ +---------+

            Figure 8, Content Delivery Architecture
The following sections describe Coolstreaming QoS-related features,
extracted mostly from [CS1-Bo] and [CS2-Xie].
The basic components of Coolstreaming consist of the source, the
bootstrap node, a web server, a log server, media servers, and peers.
Three basic modules in a peer help it maintain a partial view of the
overlay (Membership Manager); establish and maintain partnerships
with other peers, with which Buffer Maps indicating available video
content are exchanged (Partnership Manager); and manage data
delivery, retrieval, and playout (Stream Manager).
In building the overlay topology, a newly arrived peer contacts the
bootstrap node for a list of nodes and stores it in its own mCache.
From the stored list, it selects nodes randomly to form partnerships
and then parent-children relationships, where a partnership between
two nodes exists when only block availability information is
exchanged between them, and a parent-children relationship exists
when, in addition to being partners, video content is also exchanged.
Video content is processed for ease of delivery, retrieval, storage,
and playout.  To manage content delivery, a video stream is divided
into blocks of equal size, each of which is assigned a sequence
number representing its playback order in the stream.  Each block is
further divided into K sub-blocks, and the set of the i-th sub-blocks
of all blocks constitutes the i-th sub-stream of the video stream,
where i ranges from 1 to K.  To retrieve video content, a node
receives at most K distinct sub-streams from its parent nodes.  To
store the retrieved sub-streams, a node uses a double-buffering
scheme with a synchronization buffer and a cache buffer.  The
synchronization buffer stores the received sub-blocks of each sub-
stream according to the associated block sequence number of the video
stream.  The cache buffer then picks up the sub-blocks according to
the associated sub-stream index of each ordered block.
To advertise the availability of the latest blocks of the different
sub-streams in its buffer, a node uses a Buffer Map, which is
represented by two vectors of K elements each.  Each entry of the
first vector indicates the block sequence number of the latest
received block of the corresponding sub-stream, and each bit of the
second vector, if set, indicates the index of a sub-stream that is
being requested.
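The sub-stream partition and the two-vector Buffer Map can be
sketched as follows; K=4 and the container shapes are assumptions
made for illustration.

   # Sketch of the two-vector Buffer Map over K sub-streams.
   K = 4    # number of sub-streams (assumed)

   class BufferMap:
       def __init__(self):
           self.latest = [0] * K     # vector 1: newest block seq per sub-stream
           self.requested = [0] * K  # vector 2: bit set if sub-stream wanted

       def on_subblock(self, block_seq, i):
           # Sub-stream i carries the i-th sub-block of every block
           # (1 <= i <= K).
           self.latest[i - 1] = max(self.latest[i - 1], block_seq)

       def request(self, i):
           self.requested[i - 1] = 1

   bm = BufferMap()
   bm.on_subblock(block_seq=42, i=2)
   bm.request(3)
   print(bm.latest, bm.requested)    # [0, 42, 0, 0] [0, 0, 1, 0]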
For data delivery, a node uses a hybrid push-and-pull scheme with
randomly selected partners.  A node, having requested one or more
distinct sub-streams from a partner as indicated in its first Buffer
Map, will continue to receive the sub-streams of all subsequent
blocks from the same partner until future conditions cause the
partner to do otherwise.  Moreover, users retrieve video indirectly
from the source through a number of strategically located servers.
To keep the parent-children relationship above a certain level of
quality, each node constantly monitors the status of the on-going
sub-stream reception and re-selects parents according to sub-stream
availability patterns.  Specifically, if a node observes that the
block sequence number of the sub-stream of a parent is smaller than
that of any of its other partners by more than a predetermined
amount, the node concludes that the parent is lagging sufficiently
behind and needs to be replaced.  Furthermore, a node also evaluates
the maximum and minimum of the block sequence numbers in its
synchronization buffer to determine whether any parent is lagging
behind the rest of its parents and thus also needs to be replaced.
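The lag test can be sketched as follows; the threshold T stands in
for the "predetermined amount" and is an invented value.

   # Sketch of parent lag detection across sub-streams.
   T = 20   # blocks (assumed)

   def lagging_parents(latest_by_parent):
       """latest_by_parent: parent -> newest block seq on its sub-stream."""
       newest = max(latest_by_parent.values())
       # A parent trailing the best partner by more than T blocks is
       # deemed to lag sufficiently behind and is replaced.
       return [p for p, seq in latest_by_parent.items()
               if newest - seq > T]

   print(lagging_parents({"p1": 120, "p2": 118, "p3": 90}))   # ['p3']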
4. A common P2P Streaming Process Model
As shown in Figure 8, a common P2P streaming process can be
summarized based on Section 3:

1) When a peer wants to receive streaming content:

   1.1) The peer acquires a list of peers/parent nodes from the
        tracker.
[...]
[Challenge] [Challenge]
Li, Bo, et al., "Peer-to-Peer Live Video Streaming on the Li, Bo, et al., "Peer-to-Peer Live Video Streaming on the
Internet: Issues, Existing Approaches, and Challenges", Internet: Issues, Existing Approaches, and Challenges",
June 2007. June 2007.
[NewCoolstreaming]
           Li, Bo, et al., "Inside the New Coolstreaming: Principles,
           Measurements and Performance Implications", Apr. 2008.
[JO2-Moreira]
Moreira, J, et al., "IEEE Network Operations and
Management Symposium", Apr. 2008.
[JO7-Joost Network Architecture]
"Joost Network Architecture,
http://scaryideas.com/content/2362/".
[OC2-Alstrup]
           Alstrup, S., et al., "Octoshape - a new technology for
           large-scale streaming over the Internet", 2005.
[OC3-Alstrup]
Alstrup, S, et al., "Grid live streaming to millions",
2006.
[PL3-Hei] Hei, X, et al., "Insights into PPLive: A measurement study
of a large-scale P2P IPTV system", May 2006.
[PL5-Vu] Vu, L, et al., "Understanding Overlay Characteristics of a
Large-Scale Peer-to-Peer IPTV System", November 2010.
[PL6-Horvath]
           Horvath, A., et al., "Dissecting PPLive, SopCast, TVAnts".

[SC3-Horvath]
           Horvath, A., et al., "Dissecting PPLive, SopCast, TVAnts".

[TV3-Horvath]
           Horvath, A., et al., "Dissecting PPLive, SopCast, TVAnts".
[PL7-Liu] Liu, Y, et al., "A Case Study of Traffic Locality in
Internet P2P Live Streaming Systems".
[PS3-Li] Li, C, et al., "Measurement Based PPStream client behavior
analysis", 2009.
[PS4-Jia] Jia, J, et al., "Characterizing PPStream across Internet",
2007.
[PS5-Wei] Wei, T, et al., "Study of PPStream Based on Measurement",
2008.
[SC1-Ali] Ali, S, et al., "Measurement of Commercial Peer-to-Peer
Live Video Streaming", Aug 2006.
[SC2-Ciullo]
           Ciullo, D., et al., "Network Awareness of P2P Live
           Streaming Applications: A Measurement Study", Aug 2010.

[TV2-Ciullo]
           Ciullo, D., et al., "Network Awareness of P2P Live
           Streaming Applications: A Measurement Study", Aug 2010.
[SC4-Fallica]
Fallica, B, et al., "On the Quality of Experience of
SopCast", Aug 2008.
[SC5-Sentinelli]
Sentinelli, A, et al., "Will IPTV Ride the Peer-to-Peer
Stream?", June 2007.
[SC6-Silverston]
Silverston, T, et al., "Traffic analysis of peer-to-peer
IPTV communities", 2009.
[SC7-Tang]
Tang, S, et al., "Topology dynamics in a P2PTV network",
2009.
[TV1-Alessandria]
Alessandria, E, et al., "P2P-TV Systems under Adverse
Network Conditions: a Measurement Study", 2009.
[ZT1-Chang]
Chang, H, et al., "Live streaming performance of the
Zattoo network", 2009.
[PC1-Deshpande]
Deshpande, H, et al., "Streaming Live Media over a Peer-
to-Peer Network", August 2001.
[PC2-http]
"http://arbor.ee.ntu.edu.tw/archive/p2p/p2p/showDoc2.pdf".
[PC3-http]
"http://ilpubs.stanford.edu:8090/863/".
[CVV1-Zhang]
Zhang, H, et al., "End System Multicast", May 2004.
[CVV4-Chu]
Chu, Y, et al., "A Case for End System Multicast",
June 2000.
[CVV5-Chu]
Chu, Y, et al., "Early Experience with an Internet
Broadcast System Based on Overlay Multicast", June 2004.
[CVV6-Chu]
Chu, Y, et al., "Narada is a self-organizing, overlay-
based protocol for achieving multicast without network
support", Aug 2001.
[CS1-Bo] Li, B, et al., "Inside the New Coolstreaming: Principles,
Measurements and Performance Implications", 2008.
[CS2-Xie] Xie, S, et al., "Coolstreaming: Design, Theory, and
Practice", 2007.
Authors' Addresses
Gu Yingjie
Huawei
Baixia Road No. 91
Nanjing, Jiangsu Province 210001
P.R.China

Phone: +86-25-56624760
Fax: +86-25-56624702
[...]
Gonzalo Camarillo
Ericsson

Email: Gonzalo.Camarillo@ericsson.com

Liu Yong
Polytechnic University

Email: yongliu@poly.edu
Delfin Montuno
Huawei
Email: delfin.montuno@huawei.com
Xie Lei
Huawei
Email: xielei57471@huawei.com