PPSP                                                             Y. Gu
Internet-Draft                                            N. Zong, Ed.
Intended status: Standards Track                                Huawei
Expires: August 29, 2013                                      Y. Zhang
                                                          China Mobile
                                                            F. Piccolo
                                                                 Cisco
                                                               S. Duan
                                                                  CATR
                                                     February 25, 2013

              Survey of P2P Streaming Applications
                  draft-ietf-ppsp-survey-04
Abstract

   This document presents a survey of some of the most popular Peer-
   to-Peer (P2P) streaming applications on the Internet.  The main
   selection criteria were popularity and availability of information
   on operation details at the time of writing.  The selected
   applications are not reviewed as a whole; instead, we focus
   exclusively on the signaling and control protocols used to
   establish and maintain overlay connections among peers and to
   advertise and download streaming content.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 29, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.
Table of Contents

   1.  Introduction
   2.  Terminologies and concepts
   3.  Classification of P2P Streaming Applications Based on Overlay
       Topology
     3.1.  Mesh-based P2P Streaming Applications
       3.1.1.  Octoshape
       3.1.2.  PPLive
       3.1.3.  Zattoo
       3.1.4.  PPStream
       3.1.5.  SopCast
       3.1.6.  Tribler
       3.1.7.  QQLive
     3.2.  Tree-based P2P streaming applications
       3.2.1.  End System Multicast (ESM)
     3.3.  Hybrid P2P streaming applications
       3.3.1.  New Coolstreaming
   4.  Security Considerations
   5.  Author List
   6.  Acknowledgments
   7.  Informative References
   Authors' Addresses
1. Introduction

   An ever increasing number of multimedia streaming systems have been
   adopting the Peer-to-Peer (P2P) paradigm to stream audio and video
   content from a source to a large number of end users.  This is the
   reference scenario of this document, which presents a survey of some
   of the most popular P2P streaming applications available on today's
   Internet.  The survey does not aim at being exhaustive: the reviewed
   applications have been selected mainly based on their popularity and
   on the information publicly available on P2P operation details at
   the time of writing.
   In addition, the selected applications are not reviewed as a whole,
   but with exclusive focus on the signaling and control protocols used
   to construct and maintain the overlay connections among peers and to
   advertise and download multimedia content.  More precisely, we
   assume throughout the document the high-level system model reported
   in Figure 1.

            +--------------------------------+
            |            Tracker             |
            |   Information on multimedia    |
            |     content and peer set       |
            +--------------------------------+
                ^  |                 ^  |
                |  |                 |  |
       Tracker  |  |        Tracker  |  |
       Protocol |  |        Protocol |  |
                |  |                 |  |
                |  V                 |  V
         +-------------+         +------------+
         |    Peer 1   |<------->|   Peer 2   |
         +-------------+   Peer  +------------+
                         Protocol

        Figure 1, High level model of P2P streaming systems
             assumed as reference throughout the document

   As Figure 1 shows, it is possible to identify in every P2P streaming
   system two main types of entity: peers and trackers.  Peers
   represent end users, which dynamically join the system to send and
   receive streamed media content, whereas trackers represent well-
   known nodes, which are stably connected to the system and provide
   peers with metadata about the streamed content and the set of active
   peers.  According to this model, it is possible to distinguish
   between two control and signaling protocols:

   -  the protocol that regulates the interaction between trackers and
      peers, denoted as the "tracker protocol" in this document;

   -  the protocol that regulates the interaction between peers,
      denoted as the "peer protocol" in this document.

   Hence, whenever possible, we will try to identify the tracker and
   peer protocols of each application and provide the corresponding
   details.

   This document is organized as follows.  Section 2 introduces the
   terminology and concepts used throughout the survey.  Since the
   overlay topology built on the connections among peers impacts some
   aspects of the tracker and peer protocols, Section 3 classifies P2P
   streaming applications according to the main overlay topologies:
   mesh-based, tree-based and hybrid.  Section 3.1 then presents some
   of the most popular mesh-based P2P streaming applications:
   Octoshape, PPLive, Zattoo, PPStream, SopCast, Tribler and QQLive.
   Likewise, Section 3.2 presents End System Multicast as an example of
   a tree-based P2P streaming application.  Finally, Section 3.3
   presents New Coolstreaming as an example of a hybrid-topology P2P
   streaming application.
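   The tracker role sketched in Figure 1 can be illustrated with a toy
   directory service.  This is a hedged sketch only: the class and
   method names are invented for the example, and none of the surveyed
   applications uses this wire format.

```python
# Toy sketch of the generic tracker in Figure 1: it only keeps, per
# channel, the set of active peers and answers peer-list queries.
# All names here are hypothetical, not from any surveyed system.

import random

class Tracker:
    """Well-known node storing, per channel, the set of active peers."""

    def __init__(self):
        self.channels = {}  # channel id -> set of peer addresses

    def join(self, channel, peer_addr):
        """Tracker protocol: a peer registers for a channel."""
        self.channels.setdefault(channel, set()).add(peer_addr)

    def leave(self, channel, peer_addr):
        """Tracker protocol: a peer deregisters from a channel."""
        self.channels.get(channel, set()).discard(peer_addr)

    def peer_list(self, channel, requester, limit=50):
        """Tracker protocol: return a random subset of active peers,
        excluding the requester itself."""
        candidates = self.channels.get(channel, set()) - {requester}
        return random.sample(sorted(candidates),
                             min(limit, len(candidates)))

tracker = Tracker()
tracker.join("channel-1", "10.0.0.1:5000")
tracker.join("channel-1", "10.0.0.2:5000")
peers = tracker.peer_list("channel-1", requester="10.0.0.1:5000")
```

   The peer protocol then runs directly between the addresses returned
   by `peer_list`, without further tracker involvement.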
2. Terminologies and concepts

   Chunk: A chunk is a basic unit of data organized in P2P streaming
   for storage, scheduling, advertisement and exchange among peers.

   Live streaming: It refers to a scenario where all the audiences
   receive streaming content for the same ongoing event.  It is desired
   that the lags between the play points of the audiences and the
   streaming source be small.

   Peer: A peer refers to a participant in a P2P streaming system that
   not only receives streaming content, but also caches and streams
   streaming content to other participants.

   Peer protocol: The control and signaling protocol that regulates the
   interaction among peers.

   Pull: Transmission of multimedia content only if requested by the
   receiving peer.

   Push: Transmission of multimedia content without any request from
   the receiving peer.

   Swarm: A swarm refers to a group of peers who exchange data to
   distribute chunks of the same content at a given time.

   Tracker: A tracker refers to a directory service that maintains a
   list of peers participating in a specific audio/video channel or in
   the distribution of a streaming file.

   Tracker protocol: The control and signaling protocol that regulates
   the interaction between peers and trackers.

   Video-on-demand (VoD): It refers to a scenario where different
   audiences may watch different parts of the same recorded streaming
   with downloaded content.
3. Classification of P2P Streaming Applications Based on Overlay
   Topology
   Depending on the topology that can be associated with the overlay
   connections among peers, it is possible to distinguish among the
   following general types of P2P streaming applications:

   -  tree-based: peers are organized to form a tree-shaped overlay
      network rooted at the streaming source, and multimedia content
      delivery is push-based.  Peers that forward data are called
      parent nodes, and peers that receive it are called children
      nodes.  Due to their structured nature, tree-based P2P streaming
      applications present a very low cost of topology maintenance and
      are able to guarantee good performance in terms of scalability
      and delay.  On the other side, they are not very resilient to
      peer churn, which may be very high in a P2P environment;

   -  mesh-based: peers are organized in a randomly connected overlay
      network, and multimedia content delivery is pull-based.  This is
      the reason why these systems are also referred to as "data-
      driven".  Due to their unstructured nature, mesh-based P2P
      streaming applications are very resilient with respect to peer
      churn and are able to achieve network resource utilization higher
      than tree-based applications.  On the other side, the cost of
      maintaining the overlay topology may limit performance in terms
      of scalability and delay, and pull-based data delivery calls for
      a large buffer in which to store chunks;

   -  hybrid: this category includes all the P2P applications that
      cannot be classified as simply mesh-based or tree-based and that
      present characteristics of both categories.
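   The push-based delivery of the tree-based category can be sketched
   as a recursive forward from each parent to its children.  The node
   names and topology below are invented for illustration; no surveyed
   application is implied.

```python
# Toy sketch of push-based delivery over a tree-shaped overlay rooted
# at the streaming source, as in the tree-based category above.

class TreeNode:
    def __init__(self, name):
        self.name = name
        self.children = []   # peers this node forwards data to
        self.received = []   # chunks seen by this node

    def push(self, chunk):
        # Push-based: forward every chunk to all children without any
        # explicit request from them.
        self.received.append(chunk)
        for child in self.children:
            child.push(chunk)

source = TreeNode("source")
a, b, c = TreeNode("a"), TreeNode("b"), TreeNode("c")
source.children = [a, b]   # a and b are children of the source
a.children = [c]           # c is a child of a
source.push("chunk-0")     # the chunk reaches every node in the tree
```

   Losing an interior node (here, `a`) disconnects its whole subtree,
   which is why the category above is described as weak against churn.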
3.1. Mesh-based P2P Streaming Applications
   In mesh-based P2P streaming applications, peers self-organize into a
   randomly connected overlay graph, where each peer interacts with a
   limited subset of peers (neighbors) and explicitly requests the
   chunks it needs (pull-based or data-driven delivery).  This type of
   content delivery may be associated with high overhead, not only
   because peers formulate requests in order to download the chunks
   they need, but also because in some applications peers exchange
   information about the chunks they own (in the form of so-called
   buffer maps, a sort of bit map with a bit set to "1" in
   correspondence of chunks stored in the local buffer).  The main
   advantage of this kind of application lies in the fact that a peer
   does not rely on a single peer for retrieving multimedia content.
   Hence, these applications are very resilient to peer churn.  On the
   other side, overlay connections are not persistent and are highly
   dynamic (being driven by content availability), and this makes
   content distribution efficiency unpredictable.  In fact, different
   chunks may be retrieved via different network paths, and this may
   turn at end users into playback quality degradation ranging from low
   bit rates, to long startup delays, to frequent playback freezes.
   Moreover, peers have to maintain large buffers to increase the
   probability of satisfying chunk requests received from neighbors.
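   The buffer-map advertisement and pull scheduling just described can
   be sketched as follows.  The bit-map layout and the rarest-first
   choice are illustrative assumptions, not details taken from any
   specific application surveyed here.

```python
# Minimal sketch of buffer-map advertisement and pull-based chunk
# scheduling in a mesh overlay.  Data structures are hypothetical.

def buffer_map(owned_chunks, window_start, window_size):
    """Encode locally stored chunks as a bit map over a sliding
    window of chunk ids [window_start, window_start + window_size)."""
    return [1 if (window_start + i) in owned_chunks else 0
            for i in range(window_size)]

def schedule_pulls(needed, neighbor_maps, window_start):
    """For each missing chunk, pick a neighbor advertising it.

    neighbor_maps: dict neighbor id -> bit map (as above).
    Returns a dict chunk id -> neighbor to request it from,
    scheduling the rarest chunks first (a common heuristic)."""
    holders = {}
    for chunk in needed:
        idx = chunk - window_start
        holders[chunk] = [n for n, bm in sorted(neighbor_maps.items())
                          if 0 <= idx < len(bm) and bm[idx]]
    plan = {}
    for chunk in sorted(holders, key=lambda c: len(holders[c])):
        if holders[chunk]:                 # skip chunks no one holds
            plan[chunk] = holders[chunk][0]
    return plan

maps = {"peerA": buffer_map({10, 12}, 10, 4),   # owns chunks 10, 12
        "peerB": buffer_map({11, 12}, 10, 4)}   # owns chunks 11, 12
plan = schedule_pulls({10, 11, 13}, maps, window_start=10)
```

   In a real application each request in `plan` would become a pull
   message to the chosen neighbor, and the maps would be refreshed
   periodically as neighbors advertise new chunks.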
Peer Protocol: The live stream is firstly sent to a few peers in the 3.1.1. Octoshape
network and then spread to the rest of the network. When a peer
joins a channel, it notifies all the other peers about its presence
using Peer Protocol, which will drive the others to add it into their
address books. Although [Octoshape] declares that each peer records
all the peers joining the channel, we suspect that not all the peers
are recorded, considering the notification traffic will be large and
peers will be busy with recording when a popular program starts in a
channel and lots of peers switch to this channel. Maybe some
geographic or topological neighbors are notified and the peer gets
its address book from these nearby neighbors.
   Octoshape [Octoshape] is popular for the realization of the P2P
   plug-in that CNN [CNN] has been using to broadcast its live
   streaming.  Octoshape helps CNN serve a peak of more than a million
   simultaneous viewers.  Octoshape has also provided several
   innovative delivery technologies, such as loss-resilient transport,
   adaptive bit rate, adaptive path optimization and adaptive proximity
   delivery.

   Figure 2 depicts the architecture of the Octoshape system.
       +------------+          +--------+
       |   Peer 1   |----------| Peer 2 |
       +------------+          +--------+
            |   \               /   |
            |    \             /    |
            |     \           /     |
            |      \         /      |
            |       \       /       |
            |        \     /        |
            |         \   /         |
       +--------------+   +-------------+
       |    Peer 4    |---|   Peer 3    |
       +--------------+   +-------------+
       *****************************************
                         |
                         |
                 +---------------+
                 | Content Server|
                 +---------------+

        Figure 2, Architecture of Octoshape system
   As can be seen from the figure, there are no trackers, and
   consequently no tracker protocol is necessary.

   As regards the peer protocol, as soon as a peer joins a channel, it
   notifies all the other peers about its presence, in such a way that
   each peer maintains a sort of address book with the information
   necessary to contact the other peers who are watching the same
   channel.  Although the Octoshape inventors claim in [Octoshape] that
   each peer records all peers joining a channel, we suspect that it is
   very unlikely that all peers are recorded.  In fact, the
   corresponding overhead traffic would be large, especially when a
   popular program starts in a channel and lots of peers switch to it.
   More likely, only some geographic or topological neighbors are
   notified, and the joining peer gets its address book from these
   nearby neighbors.
Regarding the data distribution strategy, in the Octoshape solution
the original stream is split into a number K of smaller equal-sized
data streams, but a number N > K of unique data streams is actually
constructed, in such a way that a peer receiving any K of the N
available data streams is able to play the original stream.  For
instance, if the original live stream is a 400 kbit/s signal, for K=4
and N=12, 12 unique data streams are constructed, and a peer that
downloads any 4 of the 12 data streams is able to play the live
stream.  Each peer thus sends requests for data streams to some
selected peers, and it receives positive/negative answers depending
on the availability of upload capacity at the requested peers.  In
case of negative answers, a peer continues sending requests until it
finds K peers willing to upload the minimum number of data streams
needed to redisplay the original live stream.  Since the number of
peers served by a given peer is limited by its upload capacity, the
upload capacity at each peer should be larger than the playback rate
of the live stream; otherwise, artificial peers may be added to offer
extra bandwidth.
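The any-K-of-N property described above can be illustrated with a
small erasure-coding sketch.  This is not Octoshape's actual
(proprietary) algorithm; the sketch below uses polynomial
interpolation over the prime field GF(257), a standard Reed-Solomon-
style construction, with K=4 and N=12 as in the example.

```python
# Illustrative any-K-of-N coding sketch (NOT Octoshape's real code):
# polynomial interpolation over GF(257), the smallest prime > 255,
# so every byte value is a field element.
P = 257

def _lagrange_eval(points, x):
    """Evaluate the polynomial through `points` at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(block, K, N):
    """Split a block of K symbols into N coded sub-stream symbols."""
    assert len(block) == K
    pts = list(enumerate(block))    # degree K-1 polynomial through K points
    return [_lagrange_eval(pts, x) for x in range(N)]

def decode(received, K):
    """Recover the original K symbols from any K (index, symbol) pairs."""
    assert len(received) >= K
    pts = received[:K]
    return [_lagrange_eval(pts, x) for x in range(K)]

# A 4-symbol block coded into 12 sub-streams (K=4, N=12 as in the text);
# any 4 of the 12 sub-streams suffice to rebuild the original block.
subs = encode([10, 20, 30, 40], K=4, N=12)
recovered = decode([(5, subs[5]), (7, subs[7]), (9, subs[9]), (11, subs[11])], K=4)
```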
In order to mitigate the impact of peer loss, the address book is
also used at each peer to derive the so-called Standby List, which
Octoshape peers use to probe other peers and make sure that they are
ready to take over if one of the current senders leaves or gets
congested.
Finally, in order to optimize bandwidth utilization, Octoshape
leverages peers within a network to minimize external bandwidth usage
and to select the most reliable and "closest" source for each viewer.
It also chooses the best matching available codecs and players, and
it scales the bit rate up and down according to the available
Internet connection.
3.1.2. PPLive
PPLive [PPLive] is one of the most popular P2P streaming applications
in China.  The PPLive system includes six parts:

(1) Video streaming server: provides the source of video content and
encodes the content to suit the network transmission rate and the
client playback.

(2) Peer: also called node or client.  The peers logically compose
the self-organizing network, and each peer can join or leave at any
time.  While a client downloads content, it also serves its own
content to other clients at the same time.

(3) Directory server: the server to which the PPLive client, when
launched or shut down by the user, automatically registers or cancels
the user information.

(4) Tracker server: the server that records the information of all
users watching the same content.  In more detail, when the PPLive
client requests some content, this server checks whether there are
other peers owning the content and sends their information to the
client.

(5) Web server: provides PPLive software updating and downloading.

(6) Channel list server: the server that stores the information of
all the programs which can be watched by end users, including VoD
programs and live broadcasting programs.
PPLive uses two major communication protocols.  The first one is the
Registration and Peer Discovery protocol, the equivalent of a tracker
protocol, and the second one is the P2P Chunk Distribution protocol,
the equivalent of a peer protocol.  Figure 3 shows the architecture
of the PPLive system.
      +------------+    +--------+
      |   Peer 2   |----| Peer 3 |
      +------------+    +--------+
           |                 |
           |                 |
        +--------------+
        |    Peer 1    |
        +--------------+
               |
               |
               |
        +---------------+
        | Tracker Server|
        +---------------+

       Figure 3, Architecture of PPLive system
As regards the tracker protocol, a peer first gets the channel list
from the Channel list server; then it chooses a channel and asks the
Tracker server for a peer-list associated with the selected channel.

As regards the peer protocol, a peer contacts the peers in its peer-
list to get additional peer-lists, which are merged with the original
one received from the Tracker server, with the goal of constructing
and maintaining an overlay mesh for peer management and data
delivery.  According to [P2PIPTVMEA], PPLive maintains a constant
peer-list with a relatively small number of peers.
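The peer-list aggregation just described can be sketched as follows;
the message format and the list bound are assumptions made for
illustration, not PPLive's actual implementation.

```python
# Hypothetical sketch of PPLive-style peer-list aggregation: a peer
# merges the tracker's list with lists learned from contacted peers,
# keeping the resulting list small and bounded (the cap is assumed).

def merge_peer_lists(my_list, learned, cap=50):
    """Merge newly learned peers into the known list, keeping it bounded."""
    merged = list(my_list)
    for p in learned:
        if p not in merged:
            merged.append(p)
    return merged[:cap]
```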
For the video-on-demand (VoD) operation, because different peers
watch different parts of the channel, a peer buffers chunks up to a
few minutes of content within a sliding window.  Some of these chunks
may be chunks that have been recently played; the remaining chunks
are chunks scheduled to be played in the next few minutes.  In order
to upload chunks to each other, peers exchange "buffer-map" messages.
A buffer-map message indicates which chunks a peer currently has
buffered and can share, and it includes the offset (the ID of the
first chunk), the length of the buffer map, and a string of zeroes
and ones indicating which chunks are available (starting with the
chunk designated by the offset).  PPLive transfers data over UDP.
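A buffer-map exchange of this shape can be sketched as below.  The
real PPLive wire format is not public, so the field layout and names
here are assumptions.

```python
# Hypothetical sketch of a PPLive-style buffer-map message: offset of
# the first chunk, map length, and a bitmap of available chunks.

def make_buffer_map(offset, have):
    """Build (offset, length, bitmap) describing buffered chunks.

    `offset` is the ID of the first chunk in the window and `have`
    is the set of chunk IDs currently buffered and sharable.
    """
    length = (max(have) - offset + 1) if have else 0
    bits = "".join("1" if offset + i in have else "0" for i in range(length))
    return {"offset": offset, "length": length, "bitmap": bits}

def chunks_to_request(buffer_map, already_have):
    """Chunk IDs the remote peer can share and we still miss."""
    off = buffer_map["offset"]
    return [off + i
            for i, bit in enumerate(buffer_map["bitmap"])
            if bit == "1" and off + i not in already_have]

msg = make_buffer_map(offset=1000, have={1000, 1001, 1003, 1005})
# msg["bitmap"] == "110101": chunks 1000, 1001, 1003 and 1005 available
```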
The download policy of PPLive may be summarized with the following
three points:

   top-ten peers contribute a major part of the download traffic.
   Meanwhile, a session with a top-ten peer is quite short compared
   with the video session duration.  This would suggest that PPLive
   gets video from only a few peers at any given time, and switches
   periodically from one peer to another;

   PPLive can send multiple chunk requests for different chunks to
   one peer at one time;

   PPLive is observed to have a download scheduling policy that gives
   higher priority to rare chunks and to chunks closer to the
   play-out deadline.
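A scheduler combining the two priorities from the last point can be
sketched as follows.  This is an assumed illustration of the observed
behavior, not PPLive's actual algorithm; the urgency window size is
arbitrary.

```python
# Illustrative chunk scheduler: chunks close to the play-out deadline
# are requested first and in order; the rest are requested rarest-first.

def next_chunk(missing, playback_pos, availability, urgent_window=10):
    """Pick the next chunk ID to request.

    `missing`      - chunk IDs not yet downloaded
    `playback_pos` - ID of the chunk currently being played
    `availability` - chunk ID -> number of neighbors holding it
    """
    def key(c):
        urgent = (c - playback_pos) <= urgent_window
        # urgent chunks first (deadline order), then rarest-first
        return (0, c) if urgent else (1, availability.get(c, 0), c)
    return min(missing, key=key)
```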
3.1.3. Zattoo
Zattoo [Zattoo] is P2P live streaming system which serves over 3
million registered users over European countries.The system delivers
live streaming using a receiver-based, peer-division multiplexing live streaming using a receiver-based, peer-division multiplexing
scheme. Zattoo reliabily streams media among peers using the mesh scheme. Zattoo reliably streams media among peers using the mesh
structure. structure.
Figure 4 depcits a typical procedure of single TV channel carried Figure 4 depicts a typical procedure of single TV channel carried
over Zattoo network. First, Zattoo system broadcasts live TV, over Zattoo network. First, Zattoo system broadcasts a live TV
captured from satellites, onto the Internet. Each TV channel is channel, captured from satellites, onto the Internet. Each TV
delivered through a separate P2P network. channel is delivered through a separate P2P network.
   -------------------------------
   | ------------------          |           --------
   | |   Broadcast    |          |----------|Peer1 |-----------
   | |   Servers      |          |           --------          |
   |  Administrative Servers     |                      -------------
   |  ------------------------   |                      | Super Node|
   |  | Authentication Server |  |                      -------------
   |  | Rendezvous Server     |  |                            |
   |  | Feedback Server       |  |           --------         |
   |  | Other Servers         |  |----------|Peer2 |----------|
   |  ------------------------|  |           --------
   ------------------------------|

       Figure 4, Basic architecture of Zattoo system

In order to receive a TV channel, users are required to be
authenticated through the Zattoo Authentication Server.  Upon
authentication, users obtain a ticket identifying the TV channel of
interest, with a specific lifetime.  Then, users contact the
Rendezvous Server, which plays the role of tracker and, based on the
received ticket, sends back a list of joined peers carrying the
channel.

As regards the peer protocol, a peer establishes overlay connections
with other peers randomly selected from the peer-list received from
the Rendezvous Server.
For reliable data delivery, each live stream is partitioned into
video segments.  Each video segment is coded for forward error
correction with a Reed-Solomon error correcting code into n sub-
stream packets, such that having obtained k correct packets of a
segment is sufficient to reconstruct the remaining n-k packets of the
same video segment.  To receive a video segment, each peer then
specifies the sub-stream(s) of the video segment it would like to
receive from the neighboring peers.
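The receiver-side bookkeeping implied by this k-of-n property can be
sketched as below.  This is assumed illustration, not Zattoo's
implementation: it only tracks when a segment becomes decodable, not
the Reed-Solomon decoding itself.

```python
# Minimal receiver bookkeeping for k-of-n FEC: a segment coded into n
# sub-stream packets is decodable once any k distinct packets arrive.

class SegmentBuffer:
    def __init__(self, n, k):
        self.n, self.k = n, k
        self.received = {}        # segment ID -> set of packet indices

    def on_packet(self, seg_id, pkt_index):
        self.received.setdefault(seg_id, set()).add(pkt_index)

    def decodable(self, seg_id):
        # any k of the n packets suffice to rebuild the whole segment
        return len(self.received.get(seg_id, ())) >= self.k
```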
Peers decide how to multiplex a stream among their neighboring peers
based on the availability of upload bandwidth.  In this respect,
Zattoo peers rely on a Bandwidth Estimation Server to initially
estimate the amount of available uplink bandwidth at a peer.  Once a
peer starts to forward a sub-stream to other peers, it receives QoS
feedback from its receivers if the quality of the sub-stream drops
below a threshold.
transport protocol. A peer increases its estimated available uplink
bandwidth, if the current estimate is below some threshold and if
there has been no bad quality feedback from neighboring peers for a
period of time, according to some algorithm similar to how TCP
maintains its congestion window size. Each peer then admits
neighbors based on the currently estimated available uplink
bandwidth. In case a new estimate indicates insufficient bandwidth
to support the existing number of peer connections, one connection at
a time, preferably starting with the one requiring the least
bandwidth, is closed. On the other hand, if loss rate of packets
from a peer's neighbor reaches a certain threshold, the peer will
attempt to shift the degraded neighboring peer load to other existing
peers, while looking for a replacement peer. When one is found, the
load is shifted to it and the degraded neighbor is dropped. As
expected if a peer's neighbor is lost due to departure, the peer
initiates the process to replace the lost peer. To optimize the PDM
configuration, a peer may occasionally initiate switching existing
partnering peers to topologically closer peers.
3.1.5. PPStream
The system architecture and working flows of PPStream is similar to
PPLive [PPStream]. PPStream transfers data using mostly TCP, only
occasionally UDP.
Video Download Policy of PPStream
1) Top ten peers do not contribute to a large part of the download Zattoo uses Adaptive Peer-Division Multiplexing (PDM) scheme to
traffic. This would suggest that PPStream gets the video from handle longer term bandwidth fluctuations. According to this scheme,
many peers simultaneously, and its peers have long session each peer determines how many sub-streams to transmit and when to
duration; switch partners. Specifically, each peer continuously estimates the
amount of available uplink bandwidth based initially on probe packets
sent to Zattoo Bandwidth Estimation Server and subsequently on peer
QoS feedbacks, by using different algorithms depending on the
underlying transport protocol. A peer increases its estimated
available uplink bandwidth, if the current estimate is below some
threshold and if there has been no bad quality feedback from
neighboring peers for a period of time, according to some algorithm
similar to how TCP maintains its congestion window size. Each peer
then admits neighbors based on the currently estimated available
uplink bandwidth. In case a new estimate indicates insufficient
bandwidth to support the existing number of peer connections, one
connection at a time, preferably starting with the one requiring the
least bandwidth, is closed. On the other hand, if loss rate of
packets from a peer's neighbor reaches a certain threshold, the peer
will attempt to shift the degraded neighboring peer load to other
existing peers, while looking for a replacement peer. When one is
found, the load is shifted to it and the degraded neighbor is
dropped. As expected if a peer's neighbor is lost due to departure,
the peer initiates the process to replace the lost peer. To optimize
the PDM configuration, a peer may occasionally initiate switching
existing partnering peers to topologically closer peers.
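The two PDM reactions described above (growing the uplink estimate
while feedback stays clean, and closing connections cheapest-first
when the estimate shrinks) can be sketched as follows; the constants
and data shapes are assumptions for illustration.

```python
# Hedged sketch of PDM-style uplink management, in the spirit of the
# description above (NOT Zattoo's actual algorithm).

def update_estimate(estimate, had_bad_feedback, threshold=1000, step=50):
    """Additively grow the uplink estimate while feedback is clean."""
    if not had_bad_feedback and estimate < threshold:
        return estimate + step
    return estimate

def enforce_capacity(connections, estimate):
    """Close connections, least-bandwidth first, until they fit.

    `connections` maps neighbor ID -> bandwidth required for it.
    Returns the surviving connections.
    """
    live = dict(connections)
    while live and sum(live.values()) > estimate:
        cheapest = min(live, key=live.get)   # least-bandwidth connection
        del live[cheapest]
    return live
```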
3.1.4. PPStream

The system architecture of PPStream [PPStream] is similar to the one
of PPLive.
To ensure data availability, PPStream uses some form of chunk
retransmission request mechanism and shares buffer maps at a high
rate.  Each data chunk, identified by the play time offset encoded by
the program source, is divided into 128 sub-chunks of 8KB each.  The
chunk ID is used to ensure sequential ordering of received data
chunks.  The buffer map consists of one or more 128-bit flags
denoting the availability of sub-chunks, and it includes information
on the time offset.  Usually, a buffer map contains only one data
chunk at a time, and it also contains the sending peer's playback
status, because as soon as a data chunk is played back, the chunk is
deleted or replaced by the next data chunk.
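The 128-bit availability flag can be illustrated with a few lines of
bit manipulation.  The bit ordering is an assumption; the measurement
papers do not document which end of the word maps to sub-chunk 0.

```python
# Assumed encoding of a PPStream-style buffer map: one 128-bit word
# marks which of a chunk's 128 sub-chunks (8 KB each) are available.

SUBCHUNKS = 128

def pack_flags(have):
    """Fold the set of available sub-chunk indices into a 128-bit int."""
    flags = 0
    for i in have:
        flags |= 1 << (SUBCHUNKS - 1 - i)   # highest bit = sub-chunk 0
    return flags

def missing(flags):
    """Sub-chunk indices still needed, given a peer's 128-bit word."""
    return [i for i in range(SUBCHUNKS)
            if not flags & (1 << (SUBCHUNKS - 1 - i))]
```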
At the initiating stage a peer can use up to four data chunks,
whereas in a stabilized stage a peer usually uses one data chunk.
However, in a transient stage, a peer uses a variable number of
chunks.  Sub-chunks within each data chunk are fetched nearly at
random, without using a rarest-first or greedy policy.  The same
fetching pattern for one data chunk seems to repeat itself in the
subsequent data chunks.  Moreover, higher-bandwidth PPStream peers
tend to receive chunks earlier and thus to contribute more than
lower-bandwidth peers.

Based on the experimental results reported in [P2PIPTVMEA], the
download policy of PPStream may be summarized with the following two
points:
   top-ten peers do not contribute a large part of the download
   traffic.  This would suggest that a PPStream peer gets the video
   from many peers simultaneously, and sessions between peers have
   long duration;

   PPStream does not send multiple chunk requests for different
   chunks to one peer at one time.  PPStream maintains a constant
   peer list with a relatively large number of peers.
3.1.5. SopCast

The system architecture of SopCast [SopCast] is similar to the one of
PPLive.
SopCast allows for software updates via HTTP through a centralized
server, and it makes the list of channels available via HTTP through
another centralized server.
SopCast traffic is encoded, and SopCast TV content is divided into
video chunks or blocks of an equal size of 10KB.  Sixty percent of
its traffic consists of signaling packets and 40% of actual video
data packets.  SopCast produces more signaling traffic than PPLive
and PPStream, with PPLive producing the minimum of signaling traffic.
It has been observed in [P2PIPTVMEA] that SopCast traffic has long-
range dependency, which also means that eventual QoS mitigation
mechanisms may be ineffective.  Moreover, according to [P2PIPTVMEA],
the SopCast communication mechanism starts with UDP for the exchange
of control messages among its peers, using a gossip-like protocol,
and then moves to TCP for the transfer of video segments.  It also
seems that top-ten peers contribute about half of the total download
traffic.  Finally, the SopCast peer-list can be as large as the
PPStream peer-list, but differently from PPStream, the SopCast
peer-list varies over time.
3.1.6. Tribler

Tribler [tribler] is a BitTorrent client that goes well beyond the
BitTorrent model, thanks also to its support for video streaming.
Initially developed by a team of researchers at Delft University of
Technology, Tribler was able to attract attention from other
universities and media companies and to receive European Union
research funding (P2P-Next and QLectives projects).
Differently from BitTorrent, where a tracker server centrally
coordinates uploads/downloads of chunks among peers and peers
directly interact with each other only when they actually upload/
download chunks to/from each other, there is no tracker server in
Tribler and, as a consequence, there is no need for a tracker
protocol.

A peer protocol is instead used to organize peers in an overlay
mesh.  In more detail, the Tribler bootstrap process consists in
preloading well-known super-peer addresses into the peer's local
cache, in such a way that a joining peer randomly selects a super-
peer to retrieve a random list of already active peers to establish
overlay connections with.
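The bootstrap step just described can be sketched as below; the
super-peer names, sample size and data shapes are assumptions made
for illustration.

```python
# Toy sketch of Tribler-style bootstrap: pick a preloaded super-peer
# at random, then obtain a random sample of active peers from it to
# open the first overlay connections.
import random

SUPER_PEERS = ["sp1.example.net", "sp2.example.net", "sp3.example.net"]

def bootstrap(active_peers_by_super, sample_size=10, rng=random):
    """Return the initial neighbor set for a joining peer."""
    sp = rng.choice(SUPER_PEERS)
    candidates = active_peers_by_super[sp]
    return set(rng.sample(candidates, min(sample_size, len(candidates))))
```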
A gossip-like mechanism called BuddyCast allows Tribler peers to
exchange their preference lists, that is, the files they have
downloaded, and to build the so-called Preference Cache.  This cache
is used to calculate similarity levels among peers and to identify
the so-called "taste buddies" as the peers with the highest
similarity.  Thanks to this mechanism, each peer maintains two lists
of peers: i) a list of its top-N taste buddies along with their
current preference lists, and ii) a list of random peers.  A peer
alternately selects a peer from one of the lists and sends it its
preference list, taste-buddy list and a selection of random peers.
The goal behind the propagation of this kind of information is the
support for the remote search function, a completely decentralized
search service that consists in querying the Preference Cache of
taste buddies in order to find the torrent file associated with a
file of interest.  If no torrent is found in this way, Tribler users
may alternatively resort to the web-based torrent collector servers
available to BitTorrent clients.
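Taste-buddy selection can be sketched as below.  The similarity
metric (plain preference-list overlap) is an assumption for
illustration; BuddyCast's actual metric is more elaborate.

```python
# Sketch of taste-buddy ranking over a Preference Cache: peers whose
# preference lists overlap ours the most become the top-N buddies.

def similarity(prefs_a, prefs_b):
    """Number of files two preference lists share."""
    return len(set(prefs_a) & set(prefs_b))

def taste_buddies(my_prefs, preference_cache, top_n=3):
    """Rank cached peers by similarity and keep the top-N buddies."""
    ranked = sorted(preference_cache,
                    key=lambda p: similarity(my_prefs, preference_cache[p]),
                    reverse=True)
    return ranked[:top_n]
```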
As already said, Tribler supports video streaming in two different
forms: video on demand and live streaming.

As regards video on demand, a peer first of all keeps its neighbors
informed about the chunks it has.  On the one side, it applies a
suitable chunk-picking policy in order to establish the order in
which to request the chunks it wants to download.  This policy aims
to assure that chunks come to the media player in order and, at the
same time, that overall chunk availability is maximized.  To this
end, the chunk-picking policy differentiates among high, mid and low
priority chunks depending on their closeness to the playback
position.  High priority chunks are requested first and in strict
order.  When there are no more high priority chunks to request, mid
priority chunks are requested according to a rarest-first policy.
Finally, when there are no more mid priority chunks to request, low
priority chunks are requested according to a rarest-first policy as
well.  On the other side, Tribler peers follow the give-to-get
policy in order to establish which peer neighbors are allowed to
request chunks (in BitTorrent jargon, to be unchoked).  In more
detail, time is subdivided in periods, and after each period Tribler
peers first sort their neighbors according to the decreasing number
of chunks they have forwarded to other peers, counting only the
chunks they originally received from them.  In case of a tie,
Tribler sorts the neighbors according to the decreasing total number
of chunks they have forwarded to other peers.  Since children could
lie about the number of chunks forwarded to others, Tribler peers do
not directly ask their children, but their grandchildren.  A Tribler
peer unchokes the three highest-ranked neighbors and, in order to
saturate its upload bandwidth without decreasing the performance of
individual connections, it further unchokes a limited number of
neighbors.  Moreover, in order to search for better neighbors,
Tribler peers randomly select a new peer among the remaining
neighbors and optimistically unchoke it every two periods.
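The give-to-get ranking described above can be sketched as follows;
the data shapes are assumptions, while the two sort keys follow the
text (chunks of ours they forwarded first, total forwarded as
tie-break, both reported by grandchildren).

```python
# Sketch of give-to-get neighbor ranking for unchoking decisions.

def rank_neighbors(stats):
    """Sort neighbors for unchoking.

    `stats` maps neighbor -> (forwarded_from_us, forwarded_total).
    Primary key: chunks of ours they forwarded on; tie-break: total
    chunks forwarded to others.
    """
    return sorted(stats,
                  key=lambda n: (stats[n][0], stats[n][1]),
                  reverse=True)

def unchoke(stats, regular_slots=3):
    """Unchoke the three highest-ranked neighbors (as in the text)."""
    return rank_neighbors(stats)[:regular_slots]
```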
When a peer experiences limited access capacity, it reacts by
increasing redundancy (with FEC or ARQ mechanisms), as if reacting to
loss, and thus causes a higher download rate.  To recover from packet
losses, it uses some kind of ARQ mechanism.  Although network
conditions do impact video stream distribution, such as network delay
impacting the start-up phase, they seem to have little impact on the
network topology discovery and maintenance process.

As regards live streaming, differently from the video on demand
scenario, the number of chunks cannot be known in advance.  As a
consequence, a sliding window of fixed width is used to identify
chunks of interest: every chunk that falls out of the sliding window
is considered outdated, is locally deleted, and is considered as
deleted by peer neighbors as well.  In this way, when a peer joins
the network, it learns about the chunks its neighbors possess and
identifies the most recent one.  This is assumed as the beginning of
the sliding window at the joining peer, which starts downloading and
uploading chunks according to the description provided for the video
on demand scenario.

Finally, differently from the video on demand scenario, where torrent
files include a hash for each chunk in order to prevent malicious
attackers from corrupting data, torrent files in the live streaming
scenario include the public key of the stream source.  Each chunk is
then assigned an absolute sequence number and a timestamp and is
signed with the source's private key.  Such a mechanism allows
Tribler peers to use the public key included in the torrent file to
verify the integrity of each chunk.
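The fixed-width sliding window described above can be sketched as follows.  This is a minimal illustration (class and method names are hypothetical, not Tribler's actual code): a joining peer starts its window at the most recent chunk its neighbors advertise, and chunks that fall out of the window are treated as deleted.

```python
class SlidingWindow:
    def __init__(self, width):
        self.width = width
        self.start = None      # oldest chunk index still of interest
        self.chunks = {}       # chunk index -> payload

    def join(self, neighbor_chunk_indices):
        # A joining peer begins its window at the most recent chunk
        # advertised by its neighbors.
        self.start = max(neighbor_chunk_indices)

    def in_window(self, index):
        return self.start <= index < self.start + self.width

    def store(self, index, payload):
        # Chunks outside the window are ignored as outdated.
        if self.in_window(index):
            self.chunks[index] = payload

    def advance(self, new_start):
        # Chunks that fall out of the window are locally deleted.
        self.start = max(self.start, new_start)
        for i in [i for i in self.chunks if i < self.start]:
            del self.chunks[i]
```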
3.2. Tree-based P2P streaming systems

Tree-based systems implement a tree distribution graph, rooted at the
source of the content.  In principle, each node receives data from a
parent node, which may be the source or a peer.  If peers do not
change too often, such systems require little overhead, since packets
are forwarded from node to node without the need for extra messages.
However, in high-churn environments (i.e., fast turnover of peers in
the tree), the tree must be continuously destroyed and rebuilt, a
process that requires considerable control message overhead.  As a
side effect, nodes must buffer data for at least the time required to
repair the tree, in order to avoid packet loss.  One major drawback
of tree-based streaming systems is their vulnerability to peer churn:
a peer departure will temporarily disrupt video delivery to all peers
in the sub-tree rooted at the departed peer.

3.1.7. QQLive

QQLive [QQLive] is large-scale video broadcast software including
streaming media encoding, distribution and broadcasting.  Its client
can run as a web application, a desktop program or in other
environments, and provides abundant interactive functions in order to
meet the watching requirements of different kinds of users.

Due to the lack of technical details from the QQLive vendor, we got
some knowledge about QQLive from the paper [QQLivePaper], whose
authors did some measurements and, based on these, identified the
main components and working flow of QQLive.

3.2.1. PeerCast
PeerCast adopts a Tree structure.  The architecture of PeerCast is
shown in Figure 6.

Peers in one channel construct the Broadcast Tree, and the Broadcast
server is the root of the Tree.  A Tracker can be implemented
independently or merged into the Broadcast server.  The Tracker in a
Tree-based P2P streaming application selects the parent nodes for
new peers that join the Tree.  A Transfer node in the Tree receives
and transfers data simultaneously.

Peer Protocol: The peer joins a channel and gets the broadcast server
address.  First of all, the peer sends a request to the server, and
the server answers OK or not according to its idle capability.  If
the broadcast server has enough idle capability, it will include the
peer in its child-list.  Otherwise, the broadcast server will choose
at most eight nodes among its children and answer the peer.  The peer
records these nodes and contacts them one by one, until it finds a
node that can serve it.

Instead of the peer requesting the channel, a Transfer node pushes
the live stream to its children, which can be transfer nodes or
receivers.  A node in the tree notifies its status to its parent
periodically, and the latter updates its child-list according to the
received notifications.

      ------------------------------
      |        +---------+         |
      |        | Tracker |         |
      |        +---------+         |
      |             |              |
      |             |              |
      |  +---------------------+   |
      |  |  Broadcast server   |   |
      |  +---------------------+   |
      ------------------------------
              /          \
             /            \
            /              \
           /                \
     +---------+        +---------+
     |Transfer1|        |Transfer2|
     +---------+        +---------+
      /       \          /       \
     /         \        /         \
    /           \      /           \
+---------+ +---------+ +---------+ +---------+
|Receiver1| |Receiver2| |Receiver3| |Receiver4|
+---------+ +---------+ +---------+ +---------+

Figure 6, Architecture of PeerCast system

Each PeerCast node has a peering layer that sits between the
application layer and the transport layer.  The peering layer of each
node coordinates among similar nodes to establish and maintain a
multicast tree.  Moreover, the peering layer also supports a simple,
lightweight redirect primitive.  This primitive allows a peer p to
direct another peer c, which is either opening a data-transfer
session with p or has a session already established with p, to a
target peer t to try to establish a data-transfer session.  Peer
discovery starts at the root (source) or some selected sub-tree root
and goes recursively down the tree structure.  When a peer leaves
normally, it informs its parent, which then releases the peer, and it
also redirects all its immediate children to find new parents
starting at some target node.

Main components of QQLive include:

login server, storing user login information and channel
information;

authentication server, processing user login authentication;

channel server, storing all information about channels, including
the nodes watching a channel;

program server, storing audio and video data information;

log server, recording the beginning and ending information of
channels;

peer node, watching programs and transporting streaming media.
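The PeerCast join procedure described in the Peer Protocol paragraph above can be sketched as follows.  Names and structures are illustrative only, not PeerCast's actual API: a joining peer walks down the tree, and a contacted node either accepts the peer or answers with at most eight of its children to try next.

```python
class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # how many children it can accept
        self.children = []

    def handle_join(self, peer):
        if self.capacity > len(self.children):
            self.children.append(peer)
            return self            # "OK": this node becomes the parent
        # Otherwise answer with at most eight of its children to try.
        return self.children[:8]

def join(broadcast_server, peer):
    candidates = [broadcast_server]
    while candidates:
        answer = candidates.pop(0).handle_join(peer)
        if isinstance(answer, Node):
            return answer          # found a node that can serve the peer
        candidates.extend(answer)  # record the children and keep trying
    return None
```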
The peering layer allows for different policies of topology
maintenance.  In choosing a parent from among the children of a given
peer, a child can be chosen randomly, one at a time in some fixed
order, or based on the least access latency with respect to the
choosing peer.

The main working flow of QQLive includes a startup stage and a play
stage.

The startup stage includes only interactions between peers and
centralized QQLive servers, so it may be regarded as associated with
the tracker protocol.  This stage begins when a peer launches the
QQLive client.  The peer provides authentication information in an
authentication message, which it sends to the authentication server.
The authentication server verifies the provided credentials and, if
these are valid, the QQLive client starts communicating with the
login server through SSL.  The QQLive client sends a message
including the QQLive account and nickname, and the login server
returns a message including information such as membership points,
total view time, upgrading time and so on.  At this point, the QQLive
client requests an updated channel list from the channel server.  The
QQLive client first loads an old channel list stored locally and then
overwrites the old list with the new channel list received from the
channel server.  The full channel list is not obtained via a single
request: the QQLive client first requests the channel classification
and then requests the channel list within the specific channel
category selected by the user.  This approach gives higher real-time
performance to QQLive.

3.2.2. Conviva
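The two-step channel-list refresh just described can be sketched as follows.  This is a hedged illustration (function names and the callback interfaces are hypothetical): the client first shows a locally cached list, then overwrites it with data fetched from the channel server, asking for the classification first and then only for the category the user selects.

```python
def refresh_channel_list(cache, fetch_categories, fetch_channels, selected):
    # 1. Load the old channel list stored locally.
    channels = dict(cache)
    # 2. Request the channel classification from the channel server.
    categories = fetch_categories()
    # 3. Request (and overwrite) only the selected category's list.
    if selected in categories:
        channels[selected] = fetch_channels(selected)
    return channels
```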
Conviva [conviva] is a real-time media control platform for Internet
multimedia broadcasting.  For its early prototype, End System
Multicast (ESM) [ESM] is the underlying networking technology for
organizing and maintaining an overlay broadcasting topology.  Next we
present an overview of ESM.  ESM adopts a Tree structure.  The
architecture of ESM is shown in Figure 7.

ESM has two versions of its protocols: one for smaller scale
conferencing applications with multiple sources, and the other for
larger scale broadcasting applications with a single source.  We
focus on the latter version in this survey.

ESM maintains a single tree for its overlay topology.  Its basic
functional components include two parts: a bootstrap protocol, a
parent selection algorithm and a light-weight probing protocol for
tree topology construction and maintenance; and a separate control
structure decoupled from the tree, where a gossip-like algorithm is
used for each member to know a small random subset of group members;
members also maintain paths from the source.

Upon joining, a node gets a subset of the group membership from the
source (the root node); it then finds a parent using a parent
selection algorithm.  The node uses light-weight probing heuristics
on a subset of the members it knows, evaluates the remote nodes and
chooses a candidate parent.  It also uses the parent selection
algorithm to deal with performance degradation due to node and
network churn.

ESM supports NATs.  It allows NATs to be parents of public hosts,
and public hosts can be parents of all hosts, including NATs, as
children.

The play stage includes interactions between peers and centralized
QQLive servers as well as among QQLive peers, so it may be regarded
as associated with both the tracker protocol and the peer protocol.
In more detail, the play stage is structured in the following phases:

Open channel.  The QQLive client sends a message to the login
server with the ID of the chosen channel through UDP, whereas the
login server replies with a message including channel ID, channel
name and program name.  Afterwards, the QQLive client communicates
with the program server through SSL to access program information.
Finally, the QQLive client communicates with the channel server
through UDP to obtain initial peer information.

View channel.  The QQLive client establishes connections with peers
and sends packets with a fixed length of 118 bytes, which contain
the channel ID.  The QQLive client maintains communication with the
channel server by reporting its own information and obtaining
updated information.  Peer nodes transport stream packet data
through UDP with a fixed port between 13000 and 14000.

Stop channel.  The QQLive client continuously sends five identical
UDP packets to the channel server, each data packet with a fixed
length of 93 bytes.

Close client.  The QQLive client sends a UDP message to notify the
log server and an SSL message to the login server, then it
continuously sends five identical UDP packets to the channel server,
each data packet with a fixed length of 45 bytes.
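The fixed-length UDP control messages reported by the measurement study can be sketched as follows.  The packet layouts are not public; the only measured facts reused here are the fixed lengths and repetition counts, and all names are our own illustration.

```python
import socket

def make_datagram(payload, length):
    # Pad (or truncate) the payload to the fixed on-wire length
    # observed in the measurements (e.g. 118, 93 or 45 bytes).
    return payload[:length].ljust(length, b"\x00")

def send_fixed(sock, addr, payload, length, repeat=1):
    # e.g. "stop channel": five identical 93-byte datagrams to the
    # channel server; "close client": five identical 45-byte datagrams.
    datagram = make_datagram(payload, length)
    for _ in range(repeat):
        sock.sendto(datagram, addr)
```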
      ------------------------------
      |        +---------+         |
      |        | Tracker |         |
      |        +---------+         |
      |             |              |
      |             |              |
      |  +---------------------+   |
      |  |  Broadcast server   |   |
      |  +---------------------+   |
      ------------------------------
              /          \
             /            \
            /              \
           /                \
     +---------+        +---------+
     |  Peer1  |        |  Peer2  |
     +---------+        +---------+
      /       \          /       \
     /         \        /         \
    /           \      /           \
+---------+ +---------+ +---------+ +---------+
|  Peer3  | |  Peer4  | |  Peer5  | |  Peer6  |
+---------+ +---------+ +---------+ +---------+

Figure 7, Architecture of ESM system

3.2. Tree-based P2P streaming applications

In tree-based P2P streaming applications, peers self-organize into a
tree-shaped overlay network, where peers do not ask for a specific
content chunk but simply receive it from their so-called "parent"
node.  Such a content delivery model is denoted as push-based.
Receiving peers are denoted as children, whereas sending nodes are
denoted as parents.  The overhead to maintain the overlay topology is
usually lower for tree-based streaming applications than for
mesh-based streaming applications, whereas performance in terms of
scalability and delay is usually higher.  On the other side, the
greatest drawback of this type of application lies in that each node
depends on one single node, its parent in the overlay tree, to
receive the streamed content.  Thus, tree-based streaming
applications suffer from the peer churn phenomenon more than
mesh-based ones.
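The push-based delivery model described above can be sketched as follows.  This is a minimal illustration with hypothetical names: each parent forwards every chunk it receives to its children, without any per-chunk request from them.

```python
class TreePeer:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.received = []

    def push(self, chunk):
        self.received.append(chunk)   # buffer / play out locally
        for child in self.children:   # then push down the tree
            child.push(chunk)
```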
ESM constructs the multicast tree in a two-step process.  It first
constructs a mesh of the participating peers, the mesh having the
following properties:

1) The shortest path delay between any pair of peers in the mesh
is at most K times the unicast delay between them, where K is a
small constant.

2) Each peer has a limited number of neighbors in the mesh, which
does not exceed a given (per-member) bound chosen to reflect the
bandwidth of the peer's connection to the Internet.

It then constructs (reverse) shortest path spanning trees of the
mesh with the root being the source.

3.2.1. End System Multicast (ESM)

Even though the End System Multicast (ESM) project has ended by now
and the ESM infrastructure is not currently deployed anywhere, we
decided to include it in this survey for a twofold reason.  First of
all, it was probably the first and most significant research work
proposing the possibility of implementing multicast functionality at
end hosts in a P2P way.  Secondly, the ESM research group at Carnegie
Mellon University developed the world's first P2P live streaming
system, and some members later founded the Conviva [conviva] live
platform.

The main property of ESM is that it constructs the multicast tree in
a two-step process.  The first step aims at the construction of a
mesh among participating peers, whereas the second step aims at the
construction of data delivery trees rooted at the stream source.
Therefore a peer participates in two types of topology management
structures: a control structure that guarantees peers are always
connected in a mesh, and a data delivery structure that guarantees
data gets delivered in an overlay multicast tree.

There exist two versions of ESM.
Therefore a peer participates in two types of topology management: a
control structure in which peers make sure they are always connected
in a mesh, and a data delivery structure in which peers make sure
data gets delivered to them in a tree structure.

The first version of the ESM architecture [ESM1] was conceived for
small scale multi-source conferencing applications.  Regarding the
mesh construction phase, when a new member wants to join the group,
an out-of-band bootstrap mechanism provides the new member with a
list of some group members.  The new member randomly selects a few
group members as peer neighbors.  The number of selected neighbors
does not exceed a given bound, which reflects the bandwidth of the
peer's connection to the Internet.  Each peer periodically emits a
refresh message with a monotonically increasing sequence number,
which is propagated across the mesh in such a way that each peer can
maintain a list of all the other peers in the system.  When a peer
leaves, either it notifies its neighbors and the information is
propagated across the mesh to all participating peers, or the peer's
neighbors detect the abrupt departure and propagate it through the
mesh.  To improve mesh/tree quality, on the one side peers constantly
and randomly probe each other to add new links; on the other side,
peers continually monitor existing links to drop the ones that are
not perceived as good-quality links.  This is done thanks to the
evaluation of a utility function and a cost function, which are
conceived to guarantee that the shortest overlay delay between any
pair of peers is comparable to the unicast delay between them.
Regarding the multicast tree construction phase, peers run a
distance-vector protocol on top of the mesh and use latency as the
routing metric.  In this way, data delivery trees may be constructed
from the reverse shortest path between source and recipients.
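The membership refresh mechanism described above can be sketched as follows.  This is an illustrative flood over mesh links (names are hypothetical, not ESM's code): each peer forwards a <member, sequence number> refresh to its neighbors, and a higher sequence number supersedes an older entry, which also stops the flooding.

```python
class MeshPeer:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.members = {}   # member name -> last sequence number seen

    def receive_refresh(self, member, seq):
        if self.members.get(member, -1) >= seq:
            return          # stale refresh: already seen, stop flooding
        self.members[member] = seq
        for n in self.neighbors:
            n.receive_refresh(member, seq)   # propagate across the mesh
```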
To improve mesh/tree structural and operating quality, peers randomly
probe one another to add new links that provide a perceived gain in
utility, and each peer continually monitors existing links to drop
those links that show a perceived drop in utility.  Parent switching
occurs if a peer leaves or fails, if there is a persistent congestion
or low-bandwidth condition, or if there is a better clustering
configuration.  To allow for more public hosts to be available for
becoming parents of NATs, public hosts preferentially choose NATs as
parents.

The data delivery structure, obtained from running a distance-vector
protocol on top of the mesh using latency between neighbors as the
routing metric, is maintained using various mechanisms.  Each peer
maintains and keeps up to date the routing cost to every other
member, together with the path that leads to such cost.  To ensure
routing table stability, data continues to be forwarded along the old
routes for sufficient time until the routing tables converge.  This
time is set to be larger than the cost of any path with a valid
route, but smaller than infinite cost.  To make better use of the
path bandwidth, streams of different bit-rates are forwarded
according to the following priority scheme: audio is prioritized over
video streams, and lower quality video over higher quality video.
Moreover, stream bit-rates are adapted to the peer's performance
capability.

The second and subsequent version of the ESM architecture [ESM2] was
conceived for an operational large scale single-source Internet
broadcast system.  As regards the mesh construction phase, a node
joins the system by contacting the source and retrieving a random
list of already connected nodes.  Information on active participating
peers is maintained thanks to a gossip protocol: each peer
periodically advertises to a randomly selected neighbor a subset of
the nodes it knows and the last timestamp it has heard for each known
node.

The main difference with the first version is that the second version
constructs and maintains the data delivery tree in a completely
distributed manner according to the following criteria: i) each node
maintains a degree bound on the maximum number of children it can
accept depending on its uplink bandwidth, ii) the tree is optimized
mainly for bandwidth and secondarily for delay.  To this end, a
parent selection algorithm allows identifying among the neighbors the
one that guarantees the best performance in terms of throughput and
delay.  The same algorithm is also applied either if a parent leaves
the system or if a node is experiencing poor performance (in terms of
both bandwidth and packet loss).  As a loop prevention mechanism,
each node also keeps the information about the hosts on the path
between the source and its parent node.

This second ESM prototype is also able to cope with receiver
heterogeneity and the presence of NATs/firewalls.  In more detail,
the audio stream is kept separated from the video stream, and
multiple bit-rate video streams are encoded at the source and
broadcast in parallel through the overlay tree.  Audio is always
prioritized over video streams, and lower quality video is always
prioritized over high quality video.  In this way, the system can
dynamically select the most suitable video stream according to the
receiver bandwidth and network congestion level.  Moreover, in order
to take the presence of hosts behind NATs/firewalls into account, the
tree is structured in such a way that public hosts use hosts behind
NATs/firewalls as parents.

3.3. Hybrid P2P streaming system
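The distributed parent selection criteria summarized above can be sketched as follows.  This is a hedged illustration (field names are our own, not ESM's data structures): candidates must have spare degree, must not create a loop (the selecting node must not appear on the candidate's path from the source), and are ranked by throughput first and delay second.

```python
def choose_parent(me, candidates):
    eligible = [c for c in candidates
                if c["children"] < c["degree_bound"]     # spare capacity
                and me not in c["path_from_source"]]     # loop prevention
    if not eligible:
        return None
    # Optimize mainly for bandwidth, secondarily for (low) delay.
    return max(eligible, key=lambda c: (c["throughput"], -c["delay"]))
```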
The object of a hybrid P2P streaming system is to use the combined
advantages of a tree-mesh topology and a pull-push mode, in order to
achieve a balance among system robustness, scalability and
application real-time performance.

3.3. Hybrid P2P streaming applications

This type of application aims at integrating the main advantages of
the mesh-based and tree-based approaches.  To this end, the overlay
topology is a mixed mesh-tree, and the content delivery model is
push-pull.
3.3.1. New Coolstreaming

Coolstreaming, first released in summer 2004 with a mesh-based
structure, arguably represented the first successful large-scale P2P
live streaming system.  As the above analysis shows, it has poor
delay performance and high overhead associated with each video block
transmission.  After that, New Coolstreaming [NEWCOOLStreaming]
adopted a hybrid mesh and tree structure with a hybrid pull and push
mechanism.  All the peers are organized into a mesh-based topology,
in a similar way to PPLive, to ensure high reliability.

Besides, the content delivery mechanism is the most important part of
New Coolstreaming.  Figure 8 shows the content delivery architecture.
The video stream is divided into blocks of equal size, in which each
block is assigned a sequence number to represent its playback order
in the stream.  Each video stream is divided into multiple
sub-streams without any coding, in which each node can retrieve any
sub-stream independently from different parent nodes.  This
subsequently reduces the impact on content delivery due to a parent
departure or failure.  The details of the hybrid push and pull
content delivery scheme are as follows:

(1) A node first subscribes to a sub-stream by connecting to one of
its partners via a single request (pull) in the Buffer Map; the
requested partner is the parent node.  (The node can subscribe to
more sub-streams from its partners in this way to obtain higher play
quality.)

(2) The selected parent node will continue pushing all blocks of the
sub-stream to the requesting node.

This not only reduces the overhead associated with each video block
transfer but, more importantly, significantly reduces the time
involved in retrieving video content.

Coolstreaming, first released in summer 2004 with a mesh-based
structure, arguably represented the first successful large-scale P2P
live streaming system.  Nevertheless, it suffers from poor delay
performance and high overhead associated with each video block
transmission.  In the attempt to overcome such limitations, New
Coolstreaming [NEWCOOLStreaming] adopts a hybrid mesh-tree overlay
structure and a hybrid pull-push content delivery mechanism.

Figure 5 illustrates the New Coolstreaming architecture.
      ------------------------------
      |        +---------+         |
      |        | Tracker |         |
      |        +---------+         |
      |             |              |
      |             |              |
      |  +---------------------+   |
      |  |   Content server    |   |
      |  +---------------------+   |
      ------------------------------

skipping to change at page 22, line 42 / page 19, line 39

              /          \
     +---------+        +---------+
     |  Peer1  |        |  Peer2  |
     +---------+        +---------+
      /       \          /       \
     /         \        /         \
    /           \      /           \
+---------+ +---------+ +---------+ +---------+
|  Peer2  | |  Peer3  | |  Peer1  | |  Peer3  |
+---------+ +---------+ +---------+ +---------+

Figure 8, Content Delivery Architecture
Figure 5, New Coolstreaming Architecture
The video stream is divided into equal-size blocks or chunks, which
are assigned a sequence number to implicitly define the playback
order in the stream.  The video stream is subdivided into multiple
sub-streams without any coding, so that each node can retrieve any
sub-stream independently from different parent nodes.  This
consequently reduces the impact on content delivery due to a parent
departure or failure.  The details of the hybrid push-pull content
delivery scheme are as follows:

a node first subscribes to a sub-stream by connecting to one of
its partners via a single request (pull) in the buffer map; the
requested partner is the parent node.  The node can subscribe to
more sub-streams from its partners in this way to obtain higher
play quality;

the selected parent node will then continue pushing all blocks of
the sub-stream to the requesting node.

This not only reduces the overhead associated with each video block
transfer but, more importantly, it significantly reduces the delay in
retrieving video content.
Video content is processed for ease of delivery, retrieval, storage
and play out.  To manage content delivery, a video stream is divided
into blocks of equal size, each of which is assigned a sequence
number to represent its playback order in the stream.  Each block is
further divided into K sub-blocks, and the set of i-th sub-blocks of
all blocks constitutes the i-th sub-stream of the video stream, where
i is a value bigger than 0 and less than K+1.  To retrieve video
content, a node receives at most K distinct sub-streams from its
parent nodes.  To store retrieved sub-streams, a node uses a double
buffering scheme having a synchronization buffer and a cache buffer.
The synchronization buffer stores the received sub-blocks of each
sub-stream according to the associated block sequence number of the
video stream.  The cache buffer then picks up the sub-blocks
according to the associated sub-stream index of each ordered block.
To advertise the availability of the latest block of different sub-
streams in its buffer, a node uses a Buffer Map which is represented
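The sub-stream partition described above can be sketched as follows.  This is an illustrative sketch (helper names are our own): each block is split into K sub-blocks, and the i-th sub-stream is the sequence of i-th sub-blocks across all blocks, with 1 <= i <= K as in the text.

```python
def split_block(block, k):
    # Split one block into k equal sub-blocks (block size divisible by k).
    size = len(block) // k
    return [block[i * size:(i + 1) * size] for i in range(k)]

def substream(blocks, i, k):
    # 1-based sub-stream index i, with 0 < i < K+1 as in the text:
    # the i-th sub-blocks of all blocks form the i-th sub-stream.
    return [split_block(b, k)[i - 1] for b in blocks]
```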
skipping to change at page 23, line 37 skipping to change at page 21, line 8
sub-stream reception and re-selects parents according to sub-stream
availability patterns.  Specifically, if a node observes that the
block sequence number of the sub-stream of a parent is smaller than
that of any of its other partners by a predetermined amount, the node
then concludes that the parent is lagging sufficiently behind and
needs to be replaced.  Furthermore, a node also evaluates the maximum
and minimum of the block sequence numbers in its synchronization
buffer to determine if any parent is lagging behind the rest of its
parents and thus also needs to be replaced.
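The parent re-selection rule just described can be sketched as follows.  This is a hedged illustration (names and the threshold parameter are hypothetical): a parent whose latest sub-stream sequence number trails the best partner by more than a predetermined amount is flagged for replacement.

```python
def lagging_parents(latest_seq_by_parent, threshold):
    # latest_seq_by_parent: parent -> latest block sequence number of
    # the sub-stream pushed by that parent.
    best = max(latest_seq_by_parent.values())
    return [p for p, seq in latest_seq_by_parent.items()
            if best - seq > threshold]
```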
4. A common P2P Streaming Process Model

As shown in Figure 8, a common P2P streaming process can be
summarized based on Section 3:

1) When a peer wants to receive streaming content:

1.1) The peer acquires a list of peers/parent nodes from the
tracker.

1.2) The peer exchanges its content availability with the peers on
the obtained peer list, or requests to be adopted by the parent
nodes.

1.3) The peer identifies the peers with the desired content, or the
available parent node.

1.4) The peer requests the content from the identified peers, or
receives the content from its parent node.

2) When a peer wants to share streaming content with others:

2.1) The peer sends information to the tracker about the swarms it
belongs to, plus streaming status and/or content availability.

+---------------------------------------------------------+
|              +--------------------------------+         |
|              |            Tracker             |         |
|              +--------------------------------+         |
|                ^        |            ^                  |
|                |        |            |                  |
|          query |        | peer list/ | streaming status/|
|                |        | parent     | content          |
|                |        | nodes      | availability/    |
|                |        |            | node capability  |
|                |        v            |                  |
|       +-------------+          +------------+           |
|       |    Peer1    |<-------->|   Peer 2   |           |
|       +-------------+ content/ +------------+           |
|                     join requests                       |
+---------------------------------------------------------+

Figure 8, A common P2P streaming process model

The functionality of the tracker and of data transfer differs a
little between mesh-based and tree-based applications.  In mesh-based
applications, such as Joost and PPLive, the tracker maintains the
lists of peers storing chunks for a specific channel or streaming
file.  It provides a peer list for peers to download from, as well as
upload to, each other.  In tree-based applications, such as PeerCast
and Conviva, the tracker directs new peers to find parent nodes, and
the data flows from parent to child only.

4. Security Considerations

This document does not raise security issues.
5. Author List

The authors of this document are listed as below.

Hui Zhang, NEC Labs America.

Jun Lei, University of Goettingen.

Gonzalo Camarillo, Ericsson.

Yong Liu, Polytechnic University.

Delfin Montuno, Huawei.

Lei Xie, Huawei.

6. Acknowledgments

We would like to acknowledge Jiang Xingfeng for providing good ideas
for this document.
7. Informative References

[Challenge] Li, Bo, et al., "Peer-to-Peer Live Video Streaming on
the Internet: Issues, Existing Approaches, and Challenges", June
2007.

[CNN] CNN web site, www.cnn.com

[conviva] Conviva web site, http://www.conviva.com

[ESM] Zhang, Hui, "End System Multicast",
http://www.cs.cmu.edu/~hzhang/Talks/ESMPrinceton.pdf

[ESM1] Chu, Yang-hua, et al., "A Case for End System Multicast",
June 2000. (http://esm.cs.cmu.edu/technology/papers/
Sigmetrics.CaseForESM.2000.pdf)

[ESM2] Chu, Yang-hua, et al., "Early Experience with an Internet
Broadcast System Based on Overlay Multicast", June 2004. (http://
static.usenix.org/events/usenix04/tech/general/full_papers/chu/
chu.pdf)

[JOOSTEXP] Lei, Jun, et al., "An Experimental Analysis of Joost
Peer-to-Peer VoD Service".

[NEWCOOLStreaming] Li, Bo, et al., "Inside the New Coolstreaming:
Principles, Measurements and Performance Implications", April 2008.

[Octoshape] Alstrup, Stephen, et al., "Introducing Octoshape - a new
technology for large-scale streaming over the Internet".

[P2PIPTVMEA] Silverston, Thomas, et al., "Measuring P2P IPTV
Systems", June 2007.

[P2PVOD] Huang, Yan, et al., "Challenges, Design and Analysis of a
Large-scale P2P-VoD System", 2008.

[PPLive] PPLive web site, www.pplive.com

[PPStream] PPStream web site, www.ppstream.com

[QQLive] QQLive web site, http://v.qq.com

[QQLivePaper] Liju Feng, et al., "Research on active monitoring
based QQLive real-time information Acquisition System", 2009.

[SopCast] SopCast web site, http://www.sopcast.com/

[Survey] Liu, Yong, et al., "A survey on peer-to-peer video
streaming systems", 2008.

[tribler] Tribler Protocol Specification, January 2009, available
online at http://svn.tribler.org/bt2-design/proto-spec-unified/
trunk/proto-spec-current.pdf

[Zattoo] Zattoo web site, http://zattoo.com/
Authors' Addresses

Gu Yingjie
Huawei
No.101 Software Avenue
Nanjing 210012
P.R.China

Phone: +86-25-56624760
Fax: +86-25-56624702
Email: guyingjie@huawei.com

Zong Ning (editor)
Huawei
No.101 Software Avenue
Nanjing 210012
P.R.China

Phone: +86-25-56624760
Fax: +86-25-56624702
Email: zongning@huawei.com

Zhang Yunfei
China Mobile

Email: zhangyunfei@chinamobile.com

Francesca Lo Piccolo
Cisco

Email: flopicco@cisco.com

Duan Shihui
CATR
No.52 HuaYuan BeiLu
Beijing 100191
P.R.China

Phone: +86-10-62300068
Email: duanshihui@catr.cn
 End of changes. 129 change blocks. 
835 lines changed or deleted 688 lines changed or added

This html diff was produced by rfcdiff 1.41. The latest version is available from http://tools.ietf.org/tools/rfcdiff/