PPSP                                                             Y. Gu
Internet-Draft                                            Unaffiliated
Intended status: Standards Track                          N. Zong, Ed.
Expires: January 13, 2014                                       Huawei
                                                              Y. Zhang
                                                               Coolpad
                                                         F. Lo Piccolo
                                                                 Cisco
                                                               S. Duan
                                                                  CATR
                                                         July 12, 2013

                  Survey of P2P Streaming Applications
                       draft-ietf-ppsp-survey-05
Abstract

This document presents a survey of some of the most popular Peer-to-
Peer (P2P) streaming applications on the Internet.  Main selection
criteria have been popularity and availability of information on
operation details at writing time.  In doing this, selected
applications are not reviewed as a whole, but they are reviewed with
main focus on the signaling and control protocol used to establish
and maintain overlay connections among peers and to advertise and
download streaming content.
Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 13, 2014.
Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . .   2
   2.  Terminologies and concepts . . . . . . . . . . . . . . . . .   4
   3.  Classification of P2P Streaming Applications Based on Overlay
       Topology . . . . . . . . . . . . . . . . . . . . . . . . . .   5
     3.1.  Mesh-based P2P Streaming Applications  . . . . . . . . .   5
       3.1.1.  Octoshape  . . . . . . . . . . . . . . . . . . . . .   6
       3.1.2.  PPLive . . . . . . . . . . . . . . . . . . . . . . .   7
       3.1.3.  Zattoo . . . . . . . . . . . . . . . . . . . . . . .   9
       3.1.4.  PPStream . . . . . . . . . . . . . . . . . . . . . .  11
       3.1.5.  Tribler  . . . . . . . . . . . . . . . . . . . . . .  12
       3.1.6.  QQLive . . . . . . . . . . . . . . . . . . . . . . .  14
     3.2.  Tree-based P2P streaming applications  . . . . . . . . .  15
       3.2.1.  End System Multicast (ESM) . . . . . . . . . . . . .  16
     3.3.  Hybrid P2P streaming applications  . . . . . . . . . . .  18
       3.3.1.  New Coolstreaming  . . . . . . . . . . . . . . . . .  18
   4.  Security Considerations  . . . . . . . . . . . . . . . . . .  19
   5.  Author List  . . . . . . . . . . . . . . . . . . . . . . . .  19
   6.  Acknowledgments  . . . . . . . . . . . . . . . . . . . . . .  19
   7.  Informative References . . . . . . . . . . . . . . . . . . .  19
   8.  References . . . . . . . . . . . . . . . . . . . . . . . . .  20
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . .  20
1. Introduction

An ever-increasing number of multimedia streaming systems have been
adopting the Peer-to-Peer (P2P) paradigm to stream multimedia audio
and video contents from a source to a large number of end users.
This is the reference scenario of this document, which presents a
survey of some of the most popular P2P streaming applications
available on the Internet today.

The presented survey does not aim at being exhaustive.  Reviewed
applications have indeed been selected mainly based on their
popularity and on the information publicly available on P2P operation
details at writing time.

In addition, the selected applications are not reviewed as a whole,
but they are reviewed with main focus on the signaling and control
protocols used to construct and maintain the overlay connections
among peers and to advertise and download multimedia content.  More
precisely, we assume throughout the document the high level system
model reported in Figure 1.

        +--------------------------------+
        |            Tracker             |
        |   Information on multimedia    |
        |     content and peer set       |
        +--------------------------------+
            ^  |                 ^  |
            |  |                 |  |
    Tracker |  |         Tracker |  |
   Protocol |  |        Protocol |  |
            |  |                 |  |
            |  V                 |  V
      +-------------+       +------------+
      |   Peer 1    |<------|   Peer 2   |
      |             |------>|            |
      +-------------+       +------------+
                 Peer Protocol

   Figure 1, High level model of P2P streaming systems assumed
             as reference throughout the document
As Figure 1 shows, it is possible to identify in every P2P streaming
system two main types of entity: peers and trackers.  Peers represent
end users, which dynamically join the system to send and receive
streamed media content, whereas trackers represent well-known nodes,
which are stably connected to the system and provide peers with
metadata information about the streamed content and the set of active
peers.  According to this model, it is possible to distinguish
between two different control/signaling protocols:

1) the "tracker protocol" that regulates the interaction between
   trackers and peers;

2) the "peer protocol" that regulates the interaction between peers.

Hence, whenever possible, we always try to identify the tracker and
peer protocols and we provide the corresponding details.
This document is organized as follows.  Section 2 introduces
terminology and concepts used throughout the current survey.  Since
the overlay topology built on connections among peers impacts some
aspects of the tracker and peer protocols, Section 3 classifies P2P
streaming applications according to the overlay topology: mesh-based,
tree-based and hybrid.  Then, Section 3.1 presents some of the most
popular mesh-based P2P streaming applications: Octoshape, PPLive,
Zattoo, PPStream, Tribler, QQLive.  Likewise, Section 3.2 presents
End System Multicast as an example of tree-based P2P streaming
applications.  Finally, Section 3.3 presents New Coolstreaming as an
example of a hybrid-topology P2P streaming application.
2. Terminologies and concepts

Channel: TV channel from which live streaming content is transmitted
in a P2P streaming application.

Chunk: Basic unit that a streaming media is partitioned into for the
purposes of storage, scheduling, advertisement and exchange among
peers.

Live streaming: Application that allows users to receive almost in
real time multimedia content related to an ongoing event and streamed
from a source.  The lag between the play points at the receivers and
the ones at the streaming source has to be small.

Peer: P2P node that dynamically participates in a P2P streaming
system not only to receive streaming content but also to store and
upload streaming content to other participants.

Peer protocol: Control and signaling protocol that regulates
interaction among peers.

Pull: Transmission of multimedia content only if requested by the
receiving peers.

Push: Transmission of multimedia content without any request from the
receiving peers.

Swarm: A group of peers sharing the same streaming content at a given
time.

Tracker: P2P node that stably participates in a P2P streaming system
to provide a directory service by maintaining information both on the
peer set and on the chunks each peer stores.

Tracker protocol: Control and signaling protocol that regulates
interaction among peers and trackers.

Video-on-demand (VoD): Application that allows users to select and
watch video content on demand.
3. Classification of P2P Streaming Applications Based on Overlay
   Topology

Depending on the topology that can be associated with overlay
connections among peers, it is possible to distinguish among the
following general types of P2P streaming applications:

1) tree-based: peers are organized to form a tree-shaped overlay
   network rooted at the streaming source, and multimedia content
   delivery is push-based.  Peers that forward data are called parent
   nodes, and peers that receive it are called children nodes.  Due
   to their structured nature, tree-based P2P streaming applications
   guarantee both topology maintenance at very low cost and good
   performance in terms of scalability and delay.  On the other side,
   they are not very resilient to peer churn, which may be very high
   in a P2P environment;

2) mesh-based: peers are organized in a randomly connected overlay
   network, and multimedia content delivery is pull-based.  This is
   the reason why these systems are also referred to as "data-
   driven".  Due to their unstructured nature, mesh-based P2P
   streaming applications are very resilient with respect to peer
   churn and guarantee higher network resource utilization than that
   associated with tree-based applications.  On the other side, the
   cost to maintain the overlay topology may limit performance in
   terms of delay, and pull-based data delivery calls for large
   buffers in which to store chunks;

3) hybrid: this category includes all the P2P applications that
   cannot be classified as simply mesh-based or tree-based and
   present characteristics of both the mesh-based and tree-based
   categories.
3.1. Mesh-based P2P Streaming Applications

In mesh-based P2P streaming applications, peers self-organize in a
randomly connected overlay graph where each peer interacts with a
limited subset of other peers (neighbors) and explicitly requests the
chunks it needs (pull-based or data-driven delivery).  This type of
content delivery may be associated with high overhead, not only
because peers formulate requests in order to download the chunks they
need, but also because in some applications peers exchange
information about the chunks they own (in the form of so-called
buffer-maps, a sort of bit map with a bit "1" in correspondence of
chunks stored in the local buffer).  On the one side, the main
advantage of this kind of application lies in the fact that a peer
does not rely on a single peer for retrieving multimedia content.
Hence, these applications are very resilient to peer churn.  On the
other side, overlay connections are highly dynamic and not persistent
(being driven by content availability), and this makes content
distribution efficiency unpredictable.  In fact, different chunks may
be retrieved via different network paths, and this may result at end
users in playback quality degradation ranging from low bit rates to
long startup delays, to frequent playback freezes.  Moreover, peers
have to maintain large buffers to increase the probability of
satisfying chunk requests received from neighbors.
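The buffer-map exchange described above can be sketched as follows.
The field layout used here (an offset, a length, and a string of
zeroes and ones) is a common convention rather than any specific
application's wire format, so the function names and encoding are
illustrative only.

```python
# Illustrative buffer-map handling for mesh-based (data-driven) P2P
# streaming: a peer advertises which chunks of its sliding window it
# holds, and a neighbor derives which chunks to request.

def encode_buffer_map(offset, window, cached):
    """offset: ID of the first chunk in the window; window: number of
    chunks covered; cached: set of chunk IDs stored in the local buffer."""
    bits = "".join("1" if offset + i in cached else "0" for i in range(window))
    return {"offset": offset, "length": window, "bits": bits}

def decode_buffer_map(bm):
    """Return the set of chunk IDs advertised by a buffer-map."""
    return {bm["offset"] + i for i, b in enumerate(bm["bits"]) if b == "1"}

def chunks_to_request(my_map, neighbor_map):
    """Chunk IDs the neighbor advertises that we are still missing."""
    return sorted(decode_buffer_map(neighbor_map) - decode_buffer_map(my_map))

# A peer holding chunks 100-104 learns that a neighbor holds 103-108
# and schedules pull requests for the four chunks it lacks.
mine = encode_buffer_map(100, 10, {100, 101, 102, 103, 104})
theirs = encode_buffer_map(100, 10, {103, 104, 105, 106, 107, 108})
assert chunks_to_request(mine, theirs) == [105, 106, 107, 108]
```

The same comparison runs in the opposite direction at the neighbor,
which is how pull-based delivery stays symmetric without any central
scheduler.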
3.1.1. Octoshape

Octoshape [Octoshape] is a P2P plug-in that has been realized by the
Danish company of the same name and has become popular for being used
by CNN [CNN] to broadcast live streaming content.  Octoshape indeed
helps CNN serve a peak of more than a million simultaneous viewers
thanks not only to the P2P content distribution paradigm, but also to
several innovative delivery technologies such as loss resilient
transport, adaptive bit rate, adaptive path optimization and adaptive
proximity delivery.
Figure 2 depicts the architecture of the Octoshape system.

      +------------+     +------------+
      |   Peer 1   |-----|   Peer 2   |
      +------------+     +------------+
         |    \             /    |
         |     \           /     |
         |      \         /      |
         |       \       /       |
         |        \     /        |
         |         \   /         |
         |          \ /          |
         |           X           |
         |          / \          |
      +--------------+  +-------------+
      |    Peer 3    |  |   Peer 4    |
      +--------------+  +-------------+
               \            /
                \          /
             +---------------+
             | Content Server|
             +---------------+

   Figure 2, Architecture of Octoshape system

As can be seen from the picture, there are no trackers and
consequently no tracker protocol is necessary.
As regards the peer protocol, information on peers that have already
joined the channel is transmitted in the form of metadata when
streaming the live content.  In this way each peer maintains a sort
of Address Book with the information necessary to contact other peers
who are watching the same channel.
Regarding the data distribution strategy, in the Octoshape solution
the original stream is split into a number K of smaller equal-sized
data streams, but a number N > K of unique data streams is actually
constructed, in such a way that a peer receiving any K of the N
available data streams is able to play the original stream.  For
instance, if the original live stream is a 400 kbit/sec signal, for
K=4 and N=12, 12 unique data streams are constructed, and a peer that
downloads any 4 of the 12 data streams is able to play the live
stream.  In this way, each peer sends requests for data streams to
some selected peers, and it receives positive/negative answers
depending on the availability of upload capacity at the requested
peers.  In case of negative answers, a peer continues sending
requests until it finds K peers willing to upload the minimum number
of data streams needed to display the original live stream.  This
allows a flexible use of bandwidth at end users.  In fact, since the
original stream is split into smaller data streams, a peer that does
not have enough upload capacity to transmit the whole original stream
can transmit a number of smaller data streams that fits its actual
upload capacity.
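The K-of-N property described above can be illustrated with a toy
Reed-Solomon-style erasure code over a prime field.  Octoshape's
actual codec is proprietary and not publicly documented, so this
sketch only demonstrates the principle that any K of the N streams
suffice to reconstruct the source.

```python
# Toy K-of-N stream construction: K source symbols define the unique
# degree-(K-1) polynomial through the points (1, s1)..(K, sK) over
# GF(P), and stream j carries its evaluation at x = j for j = 1..N.
# Any K of the N evaluations recover the source symbols.

P = 2_147_483_647  # a prime large enough for the symbol values used here

def lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial through the given (xi, yi)
    points, working modulo the prime P (Lagrange interpolation)."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def make_streams(block, n):
    """Encode len(block) source symbols into n coded symbols.  The code
    is systematic: the first len(block) streams equal the source."""
    src = list(enumerate(block, start=1))      # source symbols at x = 1..K
    return [(x, lagrange_eval(src, x)) for x in range(1, n + 1)]

# K = 4 source symbols, N = 12 streams, as in the 400 kbit/sec example:
# downloading any 4 of the 12 streams reproduces the original block.
block = [10, 20, 30, 40]
streams = make_streams(block, 12)
subset = [streams[i] for i in (1, 4, 6, 10)]   # any 4 of the 12
recovered = [lagrange_eval(subset, x) for x in range(1, 5)]
assert recovered == block
```

The bandwidth flexibility noted in the text follows directly: a peer
whose upload capacity covers only a fraction of the 400 kbit/sec
stream can still contribute one or more 100 kbit/sec substreams.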
In order to mitigate the impact of peer loss, the address book is
also used at each peer to derive the so-called Standby List, which
Octoshape peers use to probe other peers and make sure that they are
ready to take over if one of the current senders leaves or gets
congested.
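A minimal sketch of this standby-list behavior is given below.  The
class and method names, the probe callback, and the failover policy
are all assumptions for illustration; Octoshape does not publicly
document its protocol.

```python
# Sketch of the standby-list idea: keep a few probed backup peers per
# channel and promote one when a current sender leaves or congests.
import random

class StandbyList:
    def __init__(self, address_book, size=3):
        self.address_book = list(address_book)  # peers watching the channel
        self.standby = []
        self.size = size

    def refresh(self, probe):
        """Probe candidates from the address book until `size` of them
        confirm (probe returns True) that they could take over."""
        candidates = random.sample(self.address_book, len(self.address_book))
        self.standby = [p for p in candidates if probe(p)][: self.size]

    def take_over(self, failed_sender, senders):
        """Replace a lost or congested sender with a confirmed standby peer."""
        senders.discard(failed_sender)
        if self.standby:
            senders.add(self.standby.pop(0))
        return senders
```

Probing ahead of time is the point: when a sender disappears, the
receiver switches to a peer it already knows is willing and able,
instead of discovering a replacement from scratch.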
Finally, in order to optimize bandwidth utilization, Octoshape
leverages peers within a network to minimize external bandwidth usage
and to select the most reliable and "closest" source for each viewer.
It also chooses the best matching available codecs and players, and
it scales the bit rate up and down according to the available
Internet connection.
3.1.2. PPLive

PPLive [PPLive] was first developed at Huazhong University of Science
and Technology in 2004, and it is one of the earliest and most
popular P2P streaming applications in China.  To give an idea, the
PPLive website reached 50 million visitors for the opening ceremony
of the Beijing 2008 Olympics, and the dedicated Olympics channel
attracted 221 million views in two weeks.

Even though PPLive was renamed to PPTV in 2010, we continue using the
old name PPLive throughout this document.
The PPLive system includes the following main components:

1) video streaming server, that plays the role of source of video
   content and copes with content coding issues;

2) peer, also called node or client, that is the PPLive entity
   downloading video content from other peers and uploading video
   content to other peers;

3) channel server, that provides the list of available channels
   (live TV or VoD content) to a PPLive peer, as soon as the peer
   joins the system;

4) tracker server, that provides a PPLive peer with the list of
   online peers that are watching the same channel as the one the
   joining peer is interested in.

Figure 3 illustrates the high level diagram of the PPLive system.

       +------------+    +------------+
       |   Peer 2   |----|   Peer 3   |
       +------------+    +------------+
         |      |           |      |
         |   +--------------+      |
         |   |    Peer 1    |      |
         |   +--------------+      |
         |          |              |
       +------------------------------+
       |                              |
       |   +----------------------+   |
       |   |Video Streaming Server|   |
       |   +----------------------+   |
       |   |    Channel Server    |   |
       |   +----------------------+   |
       |   |    Tracker Server    |   |
       |   +----------------------+   |
       |                              |
       +------------------------------+

   Figure 3, High level overview of PPLive system architecture
As regards the tracker protocol, as soon as a PPLive peer joins the
system and selects the channel to watch, it retrieves from the
tracker server a list of peers that are watching the same channel.
   As regards the peer protocol, it controls both the peer discovery and
   the chunk distribution process.  More specifically, peer discovery is
   regulated by a kind of gossip-like mechanism.  After retrieving the
   list of active peers watching a specific channel from the tracker
   server, a PPLive peer sends out probes to establish active peer
   connections, and some of those peers may also return their own list
   of active peers to help the new peer discover more peers in the
   initial phase.  The chunk distribution process is mainly based on
   buffer map exchange to advertise the availability of cached chunks.
   In more detail, the PPLive software client exploits two local buffers
   to cache chunks: the PPLive TV engine buffer and the media player
   buffer.  The main reason behind the double buffer structure is to
   absorb download rate variations when downloading chunks from the
   PPLive network.  In fact, received chunks are first buffered and
   reassembled in the PPLive TV engine buffer; as soon as the number of
   consecutive chunks in the PPLive TV engine buffer exceeds a
   predefined threshold, the media player buffer downloads chunks from
   the PPLive TV engine buffer; finally, when the media player buffer
   fills up to the required level, the actual video playback starts.
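   The double-buffer gating described above can be sketched as follows.
   This is an illustrative Python sketch, not PPLive code: since the
   client is proprietary, the class name and both threshold values are
   assumptions.

```python
# Sketch of PPLive-style double buffering (illustrative; names and
# thresholds are assumptions, not PPLive's actual values).

class DoubleBuffer:
    def __init__(self, engine_threshold=16, player_threshold=8):
        self.engine = {}       # chunk_id -> data, reassembled here first
        self.player = []       # in-order chunks handed to the media player
        self.next_id = 0       # next chunk id the player buffer needs
        self.engine_threshold = engine_threshold
        self.player_threshold = player_threshold
        self.playing = False

    def _consecutive(self):
        """Number of consecutive chunks available starting at next_id."""
        n = 0
        while self.next_id + n in self.engine:
            n += 1
        return n

    def receive(self, chunk_id, data):
        self.engine[chunk_id] = data
        # Chunks move to the player buffer only once enough consecutive
        # chunks have accumulated in the TV engine buffer.
        if self._consecutive() >= self.engine_threshold:
            while self.next_id in self.engine:
                self.player.append(self.engine.pop(self.next_id))
                self.next_id += 1
        # Playback starts only when the player buffer is filled enough.
        if len(self.player) >= self.player_threshold:
            self.playing = True
```

   With the defaults above, playback starts as soon as 16 consecutive
   chunks have been reassembled and handed to the player buffer.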
   Since the PPLive protocols and algorithms are proprietary, most of
   the known details have been derived from measurement studies.
   Specifically, it seems that:

   1) the number of peers from which a PPLive node downloads live TV
      chunks is constant and relatively low, and the top-ten peers
      contribute a major part of the download traffic, as shown in
      [P2PIPTVMEA];

   2) PPLive can provide satisfactory performance for popular live TV
      and VoD channels.  For unpopular live TV channels, performance may
      severely degrade, whereas for unpopular VoD channels this problem
      rarely happens, as shown in [CNSR].  The authors of [CNSR] also
      demonstrate that the workload in most VoD channels is well
      balanced, whereas for live TV channels the workload distribution
      is unbalanced, and a small number of peers provide most video
      data.
3.1.3.  Zattoo
   Zattoo [Zattoo] is a P2P live streaming system that was launched in
   Switzerland in 2006, coinciding with the UEFA European Football
   Championship, and in a few years was able to attract almost 10
   million registered users in several European countries.
   Figure 4 depicts the high level architecture of the Zattoo system.
   The main reference for the information provided in this document is
   [IMC09].
   +-------------------------------------+
   |   -------------------------------   |    +------+
   |   |      Broadcast Server       |   |----|Peer 1|----+
   |   -------------------------------   |    +------+    |
   |   |    Authentication Server    |   |  +--------------+
   |   -------------------------------   |  | Repeater node|
   |   |      Rendezvous Server      |   |  +--------------+
   |   -------------------------------   |    +------+    |
   |   | Bandwidth Estimation Server |   |----|Peer 2|----+
   |   -------------------------------   |    +------+
   |   |        Other Servers        |   |
   |   -------------------------------   |
   +-------------------------------------+

     Figure 4, High level overview of Zattoo system architecture
   The Broadcast server is in charge of capturing, encoding, encrypting
   and sending the TV channel to the Zattoo network.  A number N of
   logical sub-streams is derived from the original stream, and packets
   of the same order in the sub-streams are grouped together into the
   so-called segments.  Each segment is then coded via a Reed-Solomon
   error correcting code in such a way that any k < N received packets
   of a segment are enough to reconstruct the whole segment.
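   The any-k-of-N property of the segment coding can be illustrated with
   the textbook view of a Reed-Solomon erasure code: k data symbols
   define a polynomial of degree less than k over a finite field, and
   the N coded packets carry its values at N distinct points, so any k
   surviving packets determine the polynomial and hence the segment.
   The sketch below is illustrative only; Zattoo's actual codec, field
   size and packet layout are not public.

```python
# Toy Reed-Solomon-style erasure code over a prime field.  Any k of the
# n coded packets reconstruct the segment.  Parameters are illustrative
# assumptions, not Zattoo's real codec.

P = 257  # prime > 256, so each byte value fits in one field symbol

def _lagrange_eval(points, x):
    """Evaluate, at x, the unique degree<k polynomial through `points`."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # Modular inverse via Fermat's little theorem (P is prime).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode_segment(data, n):
    """data: k byte values.  Packet i carries the polynomial's value at
    x=i; packets 0..k-1 are systematic copies of the data itself."""
    k = len(data)
    pts = list(enumerate(data))          # (x, y) for x = 0..k-1
    return [(x, data[x] if x < k else _lagrange_eval(pts, x))
            for x in range(n)]

def decode_segment(packets, k):
    """Reconstruct the k data symbols from any k received (x, y) packets."""
    pts = packets[:k]
    return [_lagrange_eval(pts, x) for x in range(k)]
```

   For example, a segment of k=3 symbols coded into n=6 packets survives
   the loss of any 3 of them.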
   The Authentication server is the first point of contact for a peer
   that joins the system.  It authenticates Zattoo users and assigns
   them a limited-lifetime ticket.  Then, a user contacts the Rendezvous
   server and specifies the TV channel of interest.  It also presents
   the ticket received from the authentication server.  Provided that
   the presented ticket is valid, the rendezvous server returns a list
   of Zattoo peers that have already joined the requested channel and a
   signed channel ticket.  Hence, the rendezvous server plays the role
   of tracker.  At this point the direct interaction between peers
   starts, and it is regulated by the peer protocol.
   A new Zattoo user contacts the peers returned by the rendezvous
   server in order to identify a set of neighboring peers covering the
   full set of sub-streams in the TV channel.  This process is denoted
   in Zattoo jargon as Peer Division Multiplexing (PDM).  To ease the
   identification of neighboring peers, each contacted peer also
   provides the list of its own known peers, in such a way that a new
   Zattoo user, if needed, can contact more peers besides the ones
   indicated by the rendezvous server.  In selecting which peers to
   establish connections with, a peer adopts the criterion of
   topological closeness.  The topological location of a peer is defined
   in Zattoo as (in order of preference) its subnet number, its
   autonomous system number and its country code, and it is provided to
   each peer by the authentication server.
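   The neighbor selection underlying PDM can be sketched as a greedy
   procedure: for each sub-stream, pick the topologically closest
   candidate able to serve it.  The closeness ordering (subnet, then AS,
   then country) follows the text; the data structures and the greedy
   strategy itself are assumptions for illustration.

```python
# Illustrative Peer Division Multiplexing neighbor selection.  The
# closeness metric follows the Zattoo preference order; everything else
# is an assumption, not Zattoo's actual algorithm.

def closeness(me, peer):
    """Lower is closer: 0 same subnet, 1 same AS, 2 same country, 3 other."""
    if peer["subnet"] == me["subnet"]:
        return 0
    if peer["asn"] == me["asn"]:
        return 1
    if peer["country"] == me["country"]:
        return 2
    return 3

def select_neighbors(me, candidates, n_substreams):
    """Return {substream_index: peer_name} covering all sub-streams,
    or None if the candidate set cannot cover them."""
    chosen = {}
    for s in range(n_substreams):
        able = [p for p in candidates if s in p["substreams"]]
        if not able:
            return None   # PDM fails; the peer must fetch more candidates
        chosen[s] = min(able, key=lambda p: closeness(me, p))["name"]
    return chosen
```

   When no set of known peers covers every sub-stream, the joining peer
   asks its contacts for more candidates, as described above.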
   The Zattoo peer protocol also provides a mechanism to make the PDM
   process adaptive with respect to bandwidth fluctuations.  First of
   all, a peer controls the admission of new connections based on the
   available uplink bandwidth.  This is estimated i) at the beginning,
   with each peer sending probe messages to the Bandwidth Estimation
   server, and ii) while forwarding sub-streams to other peers, based on
   the quality-of-service feedback received from those peers.  A
   quality-of-service feedback is sent from the receiver to the sender
   only when the quality of the received sub-stream is below a given
   threshold.  So if a quality-of-service feedback is received, a Zattoo
   peer decrements its estimate of available uplink bandwidth, and if
   this drops below the amount needed to support the current
   connections, a proper number of connections is closed.  On the other
   side, if no quality-of-service feedback is received for a given time
   interval, a Zattoo peer increments its estimate of available uplink
   bandwidth according to a mechanism very similar to the one of the TCP
   congestion window (doubling or linear increase depending on whether
   the estimate is below or above a given threshold).
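   The adaptation loop above can be sketched as follows.  All constants
   and names are illustrative assumptions; only the shape of the policy
   (back off on feedback, double below a threshold, grow linearly above
   it, close connections the estimate can no longer support) comes from
   the text.

```python
# Sketch of Zattoo-style adaptive uplink-bandwidth estimation
# (illustrative; constants are assumptions, not Zattoo's values).

class UplinkEstimator:
    def __init__(self, initial_kbps, threshold_kbps):
        self.estimate = initial_kbps      # seeded by probes to the
        self.threshold = threshold_kbps   # Bandwidth Estimation server

    def on_interval(self, qos_feedback_received,
                    decrement_kbps=50, linear_step_kbps=25):
        if qos_feedback_received:
            # A receiver reported poor sub-stream quality: back off.
            self.estimate = max(0, self.estimate - decrement_kbps)
        elif self.estimate < self.threshold:
            self.estimate *= 2            # fast growth below threshold
        else:
            self.estimate += linear_step_kbps

    def connections_to_close(self, per_connection_kbps, active):
        """How many of `active` connections exceed the current estimate."""
        supported = int(self.estimate // per_connection_kbps)
        return max(0, active - supported)
```

   A peer would call on_interval() once per feedback interval and close
   connections_to_close() connections, preferably the cheapest first.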
   As can also be seen in Figure 4, there exist two classes of Zattoo
   nodes: simple peers, whose behavior has already been presented, and
   Repeater nodes, which serve as bandwidth multipliers, are able to
   forward any sub-stream, and implement the same peer protocol as
   simple peers.
3.1.4.  PPStream
   PPStream [PPStream] is a very popular P2P streaming application in
   China and in many other countries of East Asia.

   The system architecture of PPStream is very similar to the one of
   PPLive.  When a PPStream peer joins the system, it retrieves the list
   of channels from the channel list server.  After selecting the
   channel to watch, a PPStream peer retrieves from the peer list server
   the identifiers of peers that are watching the selected channel, and
   it establishes connections that are used first of all to exchange
   buffer-maps.  In more detail, a PPStream chunk is identified by the
   play time offset, which is encoded by the streaming source, and it is
   subdivided into sub-chunks.  So buffer-maps in PPStream carry the
   play time offset information and are strings of bits that indicate
   the availability of sub-chunks.  After receiving the buffer-maps from
   the connected peers, a PPStream peer selects the peers to download
   sub-chunks from according to a rate-based algorithm, which maximizes
   the utility of uplink and downlink bandwidth.
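   A PPStream-style buffer-map, as described above, pairs the play time
   offset of the current chunk with a bit string marking which sub-
   chunks are held.  The sketch below is illustrative; the actual wire
   format and field sizes of PPStream are proprietary.

```python
# Illustrative PPStream-style buffer-map: play time offset plus a bit
# string of sub-chunk availability (format is an assumption).

def encode_buffer_map(play_offset, available, n_subchunks):
    """available: set of sub-chunk indices held.  Returns (offset, bits)."""
    bits = "".join("1" if i in available else "0"
                   for i in range(n_subchunks))
    return play_offset, bits

def decode_buffer_map(play_offset, bits):
    """Return the play offset and the sub-chunk indices a neighbor holds."""
    return play_offset, {i for i, b in enumerate(bits) if b == "1"}
```

   A peer advertises encode_buffer_map(...) to its neighbors and uses the
   decoded sets to decide which sub-chunks each neighbor can serve.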
3.1.5.  Tribler

   Tribler [tribler] is a BitTorrent client that was able to go very
   much beyond the BitTorrent model, also thanks to its support for
   video streaming.  Initially developed by a team of researchers at
   Delft University of Technology, Tribler was able both i) to attract
   attention from other universities and media companies and ii) to
   receive European Union research funding (P2P-Next and QLectives
   projects).

   Differently from BitTorrent, where a tracker server centrally
   coordinates peers in uploads/downloads of chunks and peers directly
   interact with each other only when they actually upload/download
   chunks to/from each other, there is no tracker server in Tribler and,
   as a consequence, there is no need for a tracker protocol.
   This is illustrated also in Figure 5, which depicts the high level
   architecture of Tribler.

                     +------------+
                     | Superpeer  |
                     +------------+
                      /          \
                     /            \
           +------------+    +------------+
           |   Peer 2   |----|   Peer 3   |
           +------------+    +------------+
              /      |    \
             /       |     \
            /  +--------------+
           /   |    Peer 1    |
          /    +--------------+
         /       /         \
      +------------+    +--------------+
      |   Peer 4   |    |    Peer 5    |
      +------------+    +--------------+
           \                   /
            \                 /
       +------------+   +------------+
       | Superpeer  |   | Superpeer  |
       +------------+   +------------+

     Figure 5, High level overview of Tribler system architecture
   Regarding the peer protocol and the organization of the overlay mesh,
   the Tribler bootstrap process consists in preloading well known
   superpeer addresses into the peer local cache, in such a way that a
   joining peer randomly selects a superpeer to retrieve a random list
   of already active peers to establish overlay connections with.  A
   gossip-like mechanism called BuddyCast allows Tribler peers to
   exchange their preference lists, that is, their downloaded files, and
   to build the so called Preference Cache.  This cache is used to
   calculate similarity levels among peers and to identify the so called
   "taste buddies" as the peers with highest similarity.  Thanks to this
   mechanism each peer maintains two lists of peers: i) a list of its
   top-N taste buddies along with their current preference lists, and
   ii) a list of random peers.  So a peer alternatively selects a peer
   from one of the lists and sends it its preference list, taste-buddy
   list and a selection of random peers.  The goal behind the
   propagation of this kind of information is the support for the remote
   search function, a completely decentralized search service that
   consists in querying the Preference Cache of taste buddies in order
   to find the torrent file associated with a file of interest.  If no
   torrent is found in this way, Tribler users may alternatively resort
   to the web-based torrent collector servers available for BitTorrent
   clients.
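   The taste-buddy selection behind BuddyCast can be sketched as
   ranking peers by the overlap of their preference lists.  The Jaccard
   overlap used below is an assumption for illustration; Tribler's
   actual similarity function may differ.

```python
# Illustrative taste-buddy ranking from preference-list overlap
# (Jaccard similarity is an assumption, not necessarily Tribler's metric).

def similarity(prefs_a, prefs_b):
    """Jaccard similarity between two preference lists."""
    a, b = set(prefs_a), set(prefs_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def taste_buddies(my_prefs, peer_prefs, top_n):
    """peer_prefs: {peer_id: preference list}.  Top-N most similar peers."""
    ranked = sorted(peer_prefs,
                    key=lambda p: similarity(my_prefs, peer_prefs[p]),
                    reverse=True)
    return ranked[:top_n]
```

   A peer would recompute this ranking as BuddyCast messages update its
   Preference Cache, keeping the top-N peers as its taste buddies.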
   As already said, Tribler supports video streaming in two different
   forms: video on demand and live streaming.

   As regards video on demand, a peer first of all keeps its neighbors
   informed about the chunks it has.  Then, on the one side, it applies
   a suitable chunk-picking policy in order to establish the order
   according to which to request the chunks it wants to download.  This
   policy aims to assure that chunks arrive at the media player in order
   and at the same time that overall chunk availability is maximized.
   In more detail, chunks are partitioned into priority sets based on
   their closeness to the playback position: high priority chunks are
   requested first and in order; when there are no more high priority
   chunks to request, mid priority chunks are requested according to a
   rarest-first policy.  Finally, when there are no more mid priority
   chunks to request, low priority chunks are requested according to a
   rarest-first policy as well.  On the other side, Tribler peers follow
   the give-to-get policy in order to establish which peer neighbors are
   allowed to request chunks (in BitTorrent jargon, to be unchoked).  In
   more detail, time is subdivided into periods, and after each period
   Tribler peers first sort their neighbors according to the decreasing
   number of chunks they have forwarded to other peers, counting only
   the chunks they originally received from them.  In case of a tie,
   Tribler peers sort their neighbors according to the decreasing total
   number of chunks they have forwarded to other peers.  Since children
   could lie about the number of chunks forwarded to others, Tribler
   peers do not directly ask their children, but their grandchildren.
   In this way, a Tribler peer unchokes the three highest-ranked
   neighbors and, in order to saturate upload bandwidth and at the same
   time not decrease the performance of individual connections, it
   further unchokes a limited number of neighbors.  Moreover, in order
   to search for better neighbors, Tribler peers randomly select a new
   peer among the rest of the neighbors and optimistically unchoke it
   every two periods.
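   The give-to-get ranking step described above can be sketched as
   follows.  The report format is an assumption for illustration; only
   the sort keys (chunks forwarded that came from us, then total chunks
   forwarded, both as reported by grandchildren) come from the text.

```python
# Sketch of give-to-get neighbor ranking: sort children by how many of
# our chunks they forwarded onward (as reported by grandchildren), break
# ties by total chunks forwarded, and unchoke the top three.

def rank_children(reports):
    """reports: {child: (forwarded_from_us, forwarded_total)}, gathered
    from grandchildren rather than from the children themselves."""
    return sorted(reports,
                  key=lambda c: (reports[c][0], reports[c][1]),
                  reverse=True)

def unchoke(reports, n=3):
    """The n highest-ranked children are allowed to request chunks."""
    return rank_children(reports)[:n]
```

   Asking grandchildren instead of children makes over-reporting by a
   child ineffective, which is the point of the indirection.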
   As regards live streaming, differently from the video on demand
   scenario, the number of chunks cannot be known in advance.  As a
   consequence, a sliding window of fixed width is used to identify the
   chunks of interest: every chunk that falls out of the sliding window
   is considered outdated, is locally deleted and is considered as
   deleted by peer neighbors as well.  In this way, when a peer joins
   the network, it learns about the chunks its neighbors possess and
   identifies the most recent one.  This is assumed as the beginning of
   the sliding window at the joining peer, which starts downloading and
   uploading chunks according to the description provided for the video
   on demand scenario.  Finally, differently from what happens in the
   video on demand scenario, where torrent files include a hash for each
   chunk in order to prevent malicious attackers from corrupting data,
   torrent files in the live streaming scenario include the public key
   of the stream source.  Each chunk is then assigned an absolute
   sequence number and a timestamp, and is signed with the source
   private key.  Such a mechanism allows Tribler peers to use the public
   key included in the torrent file to verify the integrity of each
   chunk.
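   The live-streaming sliding window can be sketched as follows.  The
   window width and data structures are illustrative assumptions; the
   behavior (chunks behind the window are dropped locally and treated
   as deleted at neighbors too) follows the text.

```python
# Sketch of a Tribler-style live-streaming sliding window
# (illustrative; width and structures are assumptions).

class SlidingWindow:
    def __init__(self, width, newest_seen):
        # A joining peer anchors the window at the most recent chunk
        # advertised by its neighbors.
        self.width = width
        self.newest = newest_seen
        self.chunks = {}

    def add(self, seq, data):
        if seq > self.newest:
            self.newest = seq
        if seq > self.newest - self.width:
            self.chunks[seq] = data
        # Drop everything that fell out of the window; neighbors are
        # assumed to have dropped the same chunks.
        low = self.newest - self.width
        self.chunks = {s: d for s, d in self.chunks.items() if s > low}
```

   Advancing the newest sequence number automatically expires old
   chunks, so no global chunk count is ever needed.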
3.1.6.  QQLive
   QQLive [QQLive] is a large-scale video broadcast software system
   including streaming media encoding, distribution and broadcasting.
   Its client can run as a web application, a desktop program or in
   other environments, and provides abundant interactive functions in
   order to meet the watching requirements of different kinds of users.
   QQLive adopts a combined CDN and P2P architecture for video
   distribution, which makes it different from other popular P2P
   streaming applications.  QQLive provides the video source via source
   servers and a CDN, and the video content can be pushed to every
   region throughout China by the CDN.  In each region, QQLive adopts
   P2P technology for video content distribution.

   One of the main aims of QQLive is to use the simplest architecture to
   provide the best user experience.  So QQLive uses a few servers to
   implement P2P file distribution.  There are two kinds of servers in
   QQLive: the Stun server and the Tracker server.  The Stun server is
   responsible for NAT traversal.  The Tracker server is responsible for
   providing content address information.  A group of these two kinds of
   servers provides the services.  There is no Super Peer in QQLive.

   The working flow of QQLive includes a startup stage and a play stage.
   1) The startup stage includes only interactions between peers and
      Tracker servers.  There is a built-in URL in the QQLive client
      software.  When the client starts up and connects to the network,
      the client gets the Tracker's address through DNS and tells the
      Tracker which video contents it owns.

   2) The play stage includes interactions between peers, or between
      peers and the CDN.  Generally, the client will download the video
      content from the CDN during the first 30 seconds and then get
      contents from other peers.  If unfortunately there is no peer that
      owns the content, the client will get the content from the CDN
      again.
   As the client watches the video, the client stores the video on the
   hard disk.  The default storage space is one Gbyte.  If the storage
   space is full, the client deletes the oldest content.  When the
   client performs a VCR operation, if the video content is stored on
   the hard disk, the client does not interact with other peers or the
   CDN.
Stop channel. QQLive client continuously sends five identical UDP There are two main protocols in QQLive: tracker protocol and peer
packets to channel server with each data packet fixed length of 93 protocol. These two protocols are all full private and encrypt the
bytes. whole message. The tracker protocol uses UDP and the port for the
tracker server is fixed. For the video streaming, if the client gets
the streaming from CDN, the client use the HTTP with port 80 and no
encryption; if the client gets the streaming from other peers, the
client use UDP to transfer the encrypted media streaming and not RTP/
RTCP.
Close client. QQLive client sends a UDP message to notify log If there are messages or video content missing, the client will take
server and an SSL message to login server, then it continuously retransmission and the retransmission interval is decided by the
sends five identical UDP packets to channel server with each data network condition. The QQLive doesn't care the strategy of
packet fixed length of 45 bytes. transmission and chunk selection which is simple and not similar with
BT because of the CDN support.
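The QQLive behavior described above (CDN for the first 30 seconds,
peers afterwards with CDN fallback, and oldest-first eviction from a
one-gigabyte disk cache) can be sketched as follows. This is a
hypothetical illustration under stated assumptions; all names,
structures, and constants are ours, not QQLive's actual (proprietary)
implementation.

```python
# Illustrative sketch of the QQLive client behavior described above:
# start from the CDN for the first ~30 seconds, prefer peers
# afterwards, fall back to the CDN when no peer owns the chunk, and
# evict the oldest cached content once the default 1-GB disk cache is
# full. All names and sizes are assumptions, not QQLive APIs.
from collections import OrderedDict

CACHE_LIMIT = 1 * 1024 ** 3  # default storage space: one gigabyte
STARTUP_SECONDS = 30


class DiskCache:
    """Oldest-first eviction, as the survey describes."""

    def __init__(self, limit=CACHE_LIMIT):
        self.limit = limit
        self.used = 0
        self.items = OrderedDict()  # insertion order == age

    def store(self, chunk_id, size):
        # Delete the oldest content until the new chunk fits.
        while self.used + size > self.limit and self.items:
            _, old_size = self.items.popitem(last=False)
            self.used -= old_size
        self.items[chunk_id] = size
        self.used += size


def choose_source(elapsed_seconds, peers_with_chunk):
    """Pick where to fetch the next chunk from."""
    if elapsed_seconds < STARTUP_SECONDS:
        return "CDN"   # HTTP on port 80, unencrypted
    if peers_with_chunk:
        return "peer"  # encrypted stream over UDP
    return "CDN"       # no peer owns the content: fall back
```

For example, `choose_source(60, [])` falls back to the CDN because no
peer owns the requested content.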
3.2. Tree-based P2P streaming applications

In tree-based P2P streaming applications peers self-organize in a
tree-shaped overlay network, where peers do not ask for a specific
chunk, but simply receive it from their so-called "parent" node.
Such a content delivery model is denoted as push-based. Receiving
peers are denoted as children, whereas sending nodes are denoted as
parents. The overhead to maintain the overlay topology is usually
lower for tree-based streaming applications than for mesh-based
streaming applications, whereas performance in terms of delay is
usually higher. On the other hand, the greatest drawback of this
type of application lies in that each node depends on one single
node, its parent in the overlay tree, to receive the streamed
content. Thus, tree-based streaming applications suffer from the
peer churn phenomenon more than mesh-based ones.
3.2.1. End System Multicast (ESM)

Even though the End System Multicast (ESM) project has ended by now
and the ESM infrastructure is not currently deployed anywhere, we
decided to include it in this survey for a twofold reason. First of
all, it was probably the first and most significant research work
proposing the possibility of implementing multicast functionality at
end hosts in a P2P way. Secondly, the ESM research group at Carnegie
Mellon University developed the world's first P2P live streaming
system, and some members later founded the Conviva [conviva] live
platform.
The main property of ESM is that it constructs the multicast tree in
a two-step process. The first step aims at the construction of a
mesh among participating peers, whereas the second step aims at the
construction of data delivery trees rooted at the stream source.
Therefore a peer participates in two types of topology management
structures: a control structure that guarantees peers are always
connected in a mesh, and a data delivery structure that guarantees
data gets delivered in an overlay multicast tree.

There exist two versions of ESM.
The first version of the ESM architecture [ESM1] was conceived for
small scale multi-source conferencing applications. Regarding the
mesh construction phase, when a new member wants to join the group,
an out-of-band bootstrap mechanism provides the new member with a
list of some group members. The new member randomly selects a few
group members as peer neighbors. The number of selected neighbors
never exceeds a given bound, which reflects the bandwidth of the
peer's connection to the Internet. Each peer periodically emits a
refresh message with a monotonically increasing sequence number,
which is propagated across the mesh in such a way that each peer can
maintain a list of all the other peers in the system. When a peer
leaves, either it notifies its neighbors and the information is
propagated across the mesh to all the participating peers, or the
peer's neighbors detect the abrupt departure and propagate it
through the mesh. To improve mesh/tree quality, on the one side
peers constantly and randomly probe each other to add new links; on
the other side, peers continually monitor existing links to drop the
ones that are not perceived as good-quality links. This is done
thanks to the evaluation of a utility function and a cost function,
which are conceived to guarantee that the shortest overlay delay
between any pair of peers is comparable to the unicast delay between
them. Regarding the multicast tree construction phase, peers run a
distance-vector protocol on top of the mesh and use latency as the
routing metric. In this way, data delivery trees may be constructed
from the reverse shortest path between source and recipients.
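The refresh mechanism above can be sketched as a bounded flood over
the mesh: each member stamps its refreshes with a monotonically
increasing sequence number, and peers forward a refresh only when the
sequence number is newer than any previously seen, which is what stops
the flood from looping. This is an illustrative sketch with assumed
names, not ESM's actual code.

```python
# Sketch (not ESM's implementation) of refresh-message propagation:
# every peer keeps the highest sequence number seen per member, so
# stale or duplicate announcements are dropped instead of re-flooded.
class MeshMember:
    def __init__(self, name):
        self.name = name
        self.seq = 0
        self.membership = {}  # member name -> highest sequence seen
        self.neighbors = []   # direct mesh links

    def emit_refresh(self):
        # Monotonically increasing sequence number, per the text above.
        self.seq += 1
        self.receive_refresh(self.name, self.seq)

    def receive_refresh(self, origin, seq):
        # Forward only if the sequence number is new: this bounds the
        # flood and lets every peer learn about every member.
        if self.membership.get(origin, 0) >= seq:
            return
        self.membership[origin] = seq
        for n in self.neighbors:
            n.receive_refresh(origin, seq)


# Build a small line mesh A - B - C and refresh from A.
a, b, c = MeshMember("A"), MeshMember("B"), MeshMember("C")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.emit_refresh()
```

After the refresh, even node C, which is not a direct neighbor of A,
knows about A and its latest sequence number.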
The second and subsequent version of the ESM architecture [ESM2] was
conceived for an operational large scale single-source Internet
broadcast system. As regards the mesh construction phase, a node
joins the system by contacting the source and retrieving a random
list of already connected nodes. Information on active participating
peers is maintained thanks to a gossip protocol: each peer
periodically advertises to a randomly selected neighbor a subset of
the nodes it knows and the last timestamp it has heard for each known
node. The main difference with the first version is that the second
version constructs and maintains the data delivery tree in a
completely distributed manner according to the following criteria: i)
each node maintains a degree bound on the maximum number of children
it can accept depending on its uplink bandwidth, ii) the tree is
optimized mainly for bandwidth and secondarily for delay. To this
end, a parent selection algorithm allows identifying among the
neighbors the one that guarantees the best performance in terms of
throughput and delay. The same algorithm is also applied either if a
parent leaves the system or if a node is experiencing poor
performance (in terms of both bandwidth and packet loss). As a loop
prevention mechanism, each node also keeps the information about the
hosts in the path between the source and its parent node.
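The parent selection criteria above (free child slot under the degree
bound, no loop through the candidate's root path, then bandwidth
first and delay second) can be sketched as follows. Field names and
the candidate representation are our assumptions for illustration,
not part of the ESM protocol.

```python
# Hypothetical sketch of ESM2-style parent selection: filter out
# candidates that are full or would create a loop, then rank mainly
# by throughput and secondarily by delay.
def select_parent(node_name, candidates):
    """candidates: list of dicts with keys name, children,
    degree_bound, root_path, throughput, delay."""
    eligible = [
        c for c in candidates
        if c["children"] < c["degree_bound"]   # uplink degree bound
        and node_name not in c["root_path"]    # loop prevention
    ]
    if not eligible:
        return None
    # Bandwidth first; lower delay breaks throughput ties.
    best = max(eligible, key=lambda c: (c["throughput"], -c["delay"]))
    return best["name"]
```

The same routine would be re-run when the current parent leaves or
when measured throughput and loss degrade, as the text describes.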
This second ESM prototype is also able to cope with receiver
heterogeneity and the presence of NATs/firewalls. In more detail,
the audio stream is kept separate from the video stream, and multiple
bit-rate video streams are encoded at the source and broadcast in
parallel through the overlay tree. Audio is always prioritized over
video streams, and lower quality video is always prioritized over
higher quality video. In this way, the system can dynamically select
the most suitable video stream according to the receiver bandwidth
and the network congestion level. Moreover, in order to take into
account the presence of hosts behind NATs/firewalls, the tree is
structured in such a way that public hosts use hosts behind NATs/
firewalls as parents.
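The prioritization just described, audio first, then video layers
from lowest to highest quality within the receiver's bandwidth
budget, can be sketched as a simple greedy selection. The stream
names and bit-rates below are illustrative assumptions, not values
from the ESM deployment.

```python
# Sketch of priority-based stream selection: audio always ships
# first, then video layers from lowest to highest quality, as long
# as the assumed bandwidth budget allows.
def select_streams(bandwidth_kbps, streams):
    """streams: list of (name, priority, rate_kbps); a lower
    priority number means more important (audio=0, low video=1...)."""
    chosen, budget = [], bandwidth_kbps
    for name, _prio, rate in sorted(streams, key=lambda s: s[1]):
        if rate <= budget:
            chosen.append(name)
            budget -= rate
    return chosen


# Illustrative stream set: one audio stream and two video bit-rates.
STREAMS = [("audio", 0, 64), ("video-low", 1, 300),
           ("video-high", 2, 900)]
```

A congested receiver with 400 kbps of spare bandwidth would thus
keep the audio and the low-quality video, dropping the high-quality
layer.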
3.3. Hybrid P2P streaming applications

This type of application aims at integrating the main advantages of
the mesh-based and tree-based approaches. To this end, the overlay
topology is a mixed mesh-tree, and the content delivery model is
push-pull.

3.3.1. New Coolstreaming

Coolstreaming, first released in summer 2004 with a mesh-based
structure, arguably represented the first successful large-scale P2P
live streaming system. Nevertheless, it suffered from poor delay
performance and high overhead associated with each video block
transmission. In the attempt of overcoming such limitations, New
Coolstreaming [NEWCOOLStreaming] adopts a hybrid mesh-tree overlay
structure and a hybrid pull-push content delivery mechanism.
Like in the old Coolstreaming, a newly joined node contacts a
special bootstrap node and retrieves a partial list of active nodes
in the system.

The interaction with the bootstrap node is the only one related to
the tracker protocol. The rest of the New Coolstreaming interactions
are related to the peer protocol.
The newly joined node then establishes a partnership with a few
active nodes by periodically exchanging information on content
availability. In New Coolstreaming, the streaming content is divided
into equal-size blocks or chunks, which are unambiguously associated
with sequence numbers that represent the playback order. Chunks are
then grouped, without any coding, to form multiple sub-streams, so
that each node can retrieve each sub-stream independently from a
different parent node.
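The grouping of chunks into sub-streams can be sketched as a
round-robin assignment by sequence number, so that the i-th
sub-stream carries chunks i, i+K, i+2K, and so on. The value of K
and the helper names below are illustrative assumptions consistent
with the description, not the literal New Coolstreaming encoding.

```python
# Minimal sketch of splitting a chunk sequence into K sub-streams
# without any coding, assuming round-robin assignment by sequence
# number: sub-stream i carries chunks i, i+K, i+2K, ...
K = 4  # illustrative number of sub-streams


def substream_of(seq_num, k=K):
    """Sub-stream index of the chunk with this sequence number."""
    return seq_num % k


def split_into_substreams(chunk_seq_numbers, k=K):
    """Group chunk sequence numbers into k ordered sub-streams."""
    subs = [[] for _ in range(k)]
    for n in chunk_seq_numbers:
        subs[substream_of(n, k)].append(n)
    return subs
```

With this assignment a node subscribing to sub-stream 2 from one
parent would receive chunks 2, 6, 10, ... from that parent alone,
which is why a single parent failure affects only one sub-stream.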
As in most P2P streaming applications, information on content
availability is exchanged in the form of buffer-maps. However, New
Coolstreaming buffer-maps differ from the usual format of strings of
bits where each bit represents the availability of a chunk. Buffer-
maps are indeed represented in New Coolstreaming by two vectors. The
first vector reports the sequence number of the latest chunk received
for each sub-stream. The second vector is used to explicitly request
chunks from partner peers. In more detail, the second vector has as
many bits as sub-streams, and a peer receiving a bit "1" in
correspondence of a given sub-stream is requested by the sending peer
to upload the chunks belonging to that sub-stream. Since chunks are
explicitly requested, data delivery may be regarded as pull-based.
However, data delivery is push-based as well, since every time a node
is requested to upload chunks, it uploads all the chunks of that
sub-stream starting from the one indicated in the first vector of the
received buffer-map.
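The two-vector buffer-map and the resulting push behavior can be
sketched as follows. For simplicity the sketch treats the sequence
numbers within a sub-stream as consecutive; the function names and
message layout are our assumptions, not the New Coolstreaming wire
format.

```python
# Sketch of the two-vector buffer-map: the first vector holds, per
# sub-stream, the sequence number of the latest chunk received; the
# second is a bit vector where "1" asks the receiving peer to start
# pushing that sub-stream.
def make_buffer_map(latest_seq, requests):
    """latest_seq: list of K ints (latest chunk per sub-stream);
    requests: set of sub-stream indices being subscribed to."""
    k = len(latest_seq)
    request_bits = [1 if i in requests else 0 for i in range(k)]
    return latest_seq, request_bits


def chunks_to_push(buffer_map, sub_idx, newest_available):
    """On receiving a buffer-map, push every chunk of the requested
    sub-stream after the last one the requester already holds
    (pull for the subscription, push for everything that follows)."""
    latest_seq, request_bits = buffer_map
    if not request_bits[sub_idx]:
        return []
    return list(range(latest_seq[sub_idx] + 1, newest_available + 1))
```

A single "1" bit thus triggers a continuous push of the whole
sub-stream, which is what keeps the per-chunk signaling overhead low.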
In order to improve the quality of the mesh-tree overlay, each node
continuously monitors the quality of its active connections in terms
of mutual delay between sub-streams. If such quality drops below a
predefined threshold, a New Coolstreaming node selects a new parent
among its partners. Parent re-selection is also applied when the
previous parent leaves.
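The monitoring rule above can be sketched as lag detection across
parents: a parent is replaced when the newest chunk it has delivered
for its sub-stream falls too far behind the best progress observed
from the other parents. The threshold value and the representation
below are assumptions for illustration only.

```python
# Illustrative sketch of parent lag detection: compare each parent's
# delivery progress against the fastest parent and flag the ones that
# lag by more than an assumed threshold, measured in chunks.
LAG_THRESHOLD = 20  # assumed maximum tolerated gap, in chunks


def parents_to_replace(latest_seq_by_parent, threshold=LAG_THRESHOLD):
    """latest_seq_by_parent: parent name -> newest chunk sequence
    number delivered for that parent's sub-stream."""
    best = max(latest_seq_by_parent.values())
    return sorted(
        name for name, seq in latest_seq_by_parent.items()
        if best - seq > threshold
    )
```

Each flagged parent would then be replaced via the normal partner
selection procedure, just as when a parent leaves the overlay.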
4. Security Considerations

This document does not raise security issues.

5. Author List

Other authors of this document are listed below.

Hui Zhang, NEC Labs America.
Jun Lei, University of Goettingen.
Gonzalo Camarillo, Ericsson.
Yong Liu, Polytechnic University.
Delfin Montuno, Huawei.
6. Acknowledgments

We would like to acknowledge Jiang Xingfeng for providing good ideas
for this document.
7. Informative References

[Octoshape] Alstrup, Stephen, et al., "Introducing Octoshape - a new
technology for large-scale streaming over the Internet".

[CNN] CNN web site, http://www.cnn.com

[PPLive] PPLive web site, http://www.pplive.com

[P2PIPTVMEA] Silverston, Thomas, et al., "Measuring P2P IPTV
Systems", June 2007.

[CNSR] Li, Ruixuan, et al., "Measurement Study on PPLive Based on
Channel Popularity", May 2011.

[Zattoo] Zattoo web site, http://www.zattoo.com

[IMC09] Chang, Hyunseok, et al., "Live streaming performance of the
Zattoo network", November 2009.

[PPStream] PPStream web site, http://www.ppstream.com

[tribler] Tribler Protocol Specification, January 2009, available
online at http://svn.tribler.org/bt2-design/proto-spec-unified/
trunk/proto-spec-current.pdf

[QQLive] QQLive web site, http://v.qq.com

[QQLivePaper] Liju Feng, et al., "Research on active monitoring based
QQLive real-time information Acquisition System", 2009.

[conviva] Conviva web site, http://www.conviva.com

[ESM1] Chu, Yang-hua, et al., "A Case for End System Multicast", June
2000. (http://esm.cs.cmu.edu/technology/papers/
Sigmetrics.CaseForESM.2000.pdf)

[ESM2] Chu, Yang-hua, et al., "Early Experience with an Internet
Broadcast System Based on Overlay Multicast", June 2004. (http://
static.usenix.org/events/usenix04/tech/general/full_papers/chu/
chu.pdf)

[NEWCOOLStreaming] Li, Bo, et al., "Inside the New Coolstreaming:
Principles, Measurements and Performance Implications", April 2008.
Authors' Addresses

Gu Yingjie
Unaffiliated

Email: guyingjie@gmail.com

Zong Ning (editor)
Huawei
No.101 Software Avenue
Nanjing 210012
P.R.China

Phone: +86-25-56624760
Fax: +86-25-56624702
Email: zongning@huawei.com

Zhang Yunfei
Coolpad

Email: hishigh@gmail.com

Francesca Lo Piccolo
Cisco

Email: flopicco@cisco.com

Duan Shihui
CATR
No.52 HuaYuan BeiLu
Beijing 100191
P.R.China

Phone: +86-10-62300068
Email: duanshihui@catr.cn