PPSP                                                         Y. Gu, Ed.
Internet-Draft                                             N. Zong, Ed.
Intended status: Standards Track                                 Huawei
Expires: April 19, 2013                                   Yunfei. Zhang
                                                           China Mobile
                                                        October 16, 2012
                  Survey of P2P Streaming Applications
                       draft-ietf-ppsp-survey-03
Abstract

This document presents a survey of popular Peer-to-Peer streaming
applications on the Internet.  We focus on the architecture and Peer
Protocol/Tracker Signaling Protocol description in the presentation,
and study a selection of well-known P2P streaming systems, including
Joost, PPLive, and other popular existing systems.  Through the
survey, we summarize a common P2P streaming process model and the
corresponding signaling process for P2P Streaming Protocol
standardization.
Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on April 19, 2013.
Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

1.  Introduction
2.  Terminologies and concepts
3.  Survey of P2P streaming system
    3.1.  Mesh-based P2P streaming systems
          3.1.1.  Joost
          3.1.2.  Octoshape
          3.1.3.  PPLive
          3.1.4.  Zattoo
          3.1.5.  PPStream
          3.1.6.  SopCast
          3.1.7.  TVants
    3.2.  Tree-based P2P streaming systems
          3.2.1.  PeerCast
          3.2.2.  Conviva
    3.3.  Hybrid P2P streaming system
          3.3.1.  New Coolstreaming
4.  A common P2P Streaming Process Model
5.  Security Considerations
6.  Author List
7.  Acknowledgments
8.  Informative References
Authors' Addresses
1.  Introduction

Toward standardizing the signaling protocols used in today's Peer-to-
Peer (P2P) streaming applications, we surveyed several popular P2P
streaming systems, running worldwide or domestically, regarding their
architectures and signaling protocols between peers, as well as
between peers and trackers.  The studied systems include PPLive,
Joost, and Octoshape.  This document does not intend to cover all
design options of P2P streaming applications.  Instead, we choose a
representative set of applications and focus on the respective
signaling characteristics of each kind.  Through the survey, we
generalize a common streaming process model from those P2P streaming
systems, and summarize the companion signaling process as the basis
for P2P Streaming Protocol (PPSP) standardization.
2.  Terminologies and concepts

Chunk: A chunk is a basic unit of partitioned streaming media, which
is used by a peer for the purpose of storage, advertisement and
exchange among peers [P2PVOD].

Content Distribution Network (CDN) node: A CDN node refers to a
network entity that is usually deployed at the network edge to store
content provided by the original servers, and serves content to the
clients located topologically nearby.

Live streaming: The scenario where all clients receive streaming
content for the same ongoing event.  The lags between the play points
of the clients and that of the streaming source are small.
service which maintains the lists of peers/PPSP peers storing chunks
for a specific channel or streaming file, and answers queries from
peers/PPSP peers.

Video-on-demand (VoD): A kind of application that allows users to
select and watch video content on demand.
3.  Survey of P2P streaming system

In this section, we summarize some existing P2P streaming systems.
The construction techniques used in these systems can be largely
classified into three categories: tree-based, mesh-based, and hybrid
structures.

Tree-based structure: Group members self-organize into a tree
structure, based on which group management and data delivery are
performed.  Such a structure with push-based content delivery has low
maintenance cost, good scalability and low delay in retrieving the
content (associated with startup delay), and can be easily
implemented.  However, it may result in low bandwidth utilization and
lower reliability.

Mesh-based structure: In contrast to the tree-based structure, a mesh
uses multiple links between any two nodes.  Thus, the reliability of
data transmission is relatively high, and the multiple links result
in high bandwidth utilization.  Nevertheless, the cost of maintaining
such a mesh is much larger than that of a tree, and the pull-based
content delivery leads to high overhead associated with each video
block transmission, in particular the delay in retrieving the
content.

Hybrid structure: Combines the tree-based and mesh-based structures,
and uses both pull-based and push-based content delivery to exploit
the advantages of the two.  It offers reliability as high as a mesh-
based structure, with lower delay and lower overhead per video block
transmission, but its topology maintenance cost is as high as that of
a mesh-based structure.
3.1.  Mesh-based P2P streaming systems
Mesh-based systems implement a mesh distribution graph, where each
node contacts a subset of peers to obtain a number of chunks.  Every
node needs to know which chunks are owned by its peers and explicitly
"pulls" the chunks it needs.  This type of scheme involves overhead,
due in part to the exchange of buffer maps between nodes (i.e. nodes
advertise the set of chunks they own) and in part to the "pull"
process (i.e. each node sends a request in order to receive the
chunks).  Since each node relies on multiple peers to retrieve
content, mesh-based systems offer good resilience to node failures.
On the negative side, they require large buffers to support the chunk
pull (clearly, large buffers are needed to increase the chances of
finding a chunk).

In a mesh-based P2P streaming system, peers are not confined to a
static topology.  Instead, peering relationships are established and
terminated based on the content availability and bandwidth
availability on peers.  A peer dynamically connects to a subset of
random peers in the system.  Peers periodically exchange information
about their data availability.  The content is pulled by a peer from
its neighbors who have already obtained it.  Since multiple
neighbors are maintained at any given moment, mesh-based streaming
systems are highly robust to peer churn.  However, the dynamic
peering relationships make the content distribution efficiency
unpredictable.  Different data packets may traverse different routes
to users.  Consequently, users may suffer from content playback
quality degradation ranging from low bit rates and long startup
delays to frequent playback freezes.
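The buffer-map advertisement and chunk-pull scheduling described
above can be sketched as follows.  This is a simplified illustration
only: the chunk IDs, peer names, and the rarest-first selection
heuristic are assumptions made for the sketch, not details of any
particular surveyed system.

```python
# Each peer advertises a buffer map (the set of chunk IDs it holds).
# A peer compares its own map against its neighbors' maps and "pulls"
# the chunks it is missing, rarest chunk first, spreading requests
# across the neighbors that hold them.

def missing_chunks(my_map, neighbor_maps):
    """Return the chunks some neighbor has that we do not."""
    available = set().union(*neighbor_maps.values()) if neighbor_maps else set()
    return available - my_map

def schedule_pulls(my_map, neighbor_maps):
    """Assign each missing chunk to a holder, rarest chunks first."""
    wanted = missing_chunks(my_map, neighbor_maps)
    holders = {c: [p for p, m in neighbor_maps.items() if c in m]
               for c in wanted}
    requests = {}                       # chunk id -> peer to pull it from
    for chunk in sorted(wanted, key=lambda c: (len(holders[c]), c)):
        # Prefer the holder with the fewest requests assigned so far.
        requests[chunk] = min(
            holders[chunk],
            key=lambda p: sum(1 for q in requests.values() if q == p))
    return requests

my_map = {1, 2, 3}
neighbor_maps = {"peerA": {1, 2, 3, 4}, "peerB": {3, 4, 5}}
requests = schedule_pulls(my_map, neighbor_maps)
# Chunk 5 is rarest (only peerB holds it); chunk 4 then goes to peerA.
```

The same comparison of buffer maps also tells a peer when a neighbor
has nothing useful left, which is one trigger for re-peering.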
3.1.1.  Joost
Joost announced giving up P2P technology in its desktop version last
year, though it introduced a flash version for browsers and an iPhone
application.  The key reason why Joost shut down its desktop version
is probably the legal issues around the provided media content.
However, as one of the most popular P2P VoD applications in the past
years, it is worthwhile to understand how Joost works.  The peer
management and data transmission in Joost mainly rely on a mesh-based
structure.  The three key components of Joost are servers, super
nodes and peers.  There are five types of servers: Tracker server,
Version server, Backend server, Content server and Graphics server.
Super nodes manage the P2P control of Joost nodes, and Joost nodes
are all the running clients in the Joost network.  The architecture
of the Joost system is shown in Figure 1.
First, we introduce the functionalities of Joost's key components
through three basic phases.  Then we will discuss the Peer protocol
and Tracker protocol of Joost.
Installation: The Backend server is involved in the installation
phase.  It provides the peer with an initial channel list in a SQLite
file.  No other parameters, such as local cache, node ID, or
listening port, are configured in this file.

Bootstrapping: In the case of a newcomer, the Tracker server provides
several super node addresses and possibly some content server
addresses.  Then the peer connects to the Version server for the
latest software version.  Later, the peer starts to connect to some
super nodes to obtain the list of other available peers and begins
streaming video content.  Super nodes in Joost only deal with control
and peer management traffic.  They do not relay/forward any media
data.
When Joost is first launched, a login mechanism is initiated using
HTTPS and TLSv1.  After a TCP synchronization, the client
authenticates with a certificate to the login server.  Once the login
process is done, the client first contacts a super node, whose
address is hard-coded in the Joost binary, to get a list of peers and
a Joost Seeder to contact.  Of course, this depends on the channel
chosen by the user.  Once launched, the Joost client checks whether a
more recent version is available by sending an HTTP request.

Once authenticated to the video service, the Joost node uses the same
authentication mechanism (TCP synchronization, certificate validation
and shared key verification) to log in to the Backend server.  This
server validates the access to all HTTPS services like channel chat,
channel list, and video content search.

Joost uses TCP port 80 for HTTP, port 443 for HTTPS transfers and UDP
port 4166 for video packet exchange, mainly from long-tail servers,
and each Joost peer chooses its own UDP port to exchange with other
peers.
Channel switching: Super nodes are responsible for redirecting
clients to content servers or peers.

Peers communicate with servers over HTTP/HTTPS and with super nodes/
other peers over UDP.
Tracker Protocol: Because super nodes here are responsible only for
providing the peer list/content servers to peers, the protocol used
between the tracker server and peers is rather simple.  Peers get the
addresses of super nodes and content servers from the Tracker server
over HTTP.  After that, the Tracker server does not appear in any
other stage, e.g. channel switching or VoD interaction.  In fact, the
protocol spoken between peers and super nodes is more like what we
would normally call a "Tracker Protocol".  It enables super nodes to
check peer status and maintain peer lists for several, if not all,
channels, and it provides peer lists/content servers to peers.  Thus,
in the rest of this section, when we mention Tracker Protocol, we
mean the one used between peers and super nodes.
Joost uses super nodes only to control the traffic but never as
relays for video content.  The main streams are sent from the Joost
Seeders, and all the traffic is encrypted to secure the shared video
content from piracy.  Joost peers cache the received content to
re-stream it when needed by other peers and to recover from missed
video blocks.
Although Joost is a peer-to-peer video distribution technology, it
relies heavily on a few centralized servers to provide the licensed
video content and uses the peer-to-peer overlay to serve content at
a faster rate.  The centralized nature of Joost is the main factor
behind its lack of locality awareness and low fairness ratio.  Since
Joost directly provides at least two thirds of the video content to
its clients, only one third has to be supplied by independent nodes.
This approach does not scale well, and is sustainable today only
because of the relatively low user population.

From a network usage perspective, Joost consumes approximately 700
kbps downstream and 120 kbps upstream, regardless of the total
capacity of the network, assuming the network upstream capacity is
larger than 1 Mbps.
There may be some type of RTT-savvy selection algorithm at work,
which gives priority to peers with RTT less than or equal to the RTT
of a Joost content-providing super node.
Peers will communicate with super nodes in some scenarios using the
Tracker Protocol:

1. When a peer starts the Joost software, after installation and
bootstrapping, the peer communicates with one or several super nodes
to get a list of available peers/content servers.

2. For on-demand video functions, super nodes periodically exchange
small UDP packets for peer management purposes.

3. When switching between channels, peers contact super nodes, which
help them find available peers to fetch the requested media data.
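The three interactions above can be modeled in a few lines of code.
The message fields, channel names, and addresses below are purely
illustrative assumptions; Joost's actual wire format is proprietary
and encrypted.

```python
# Toy model of the peer <-> super node "Tracker Protocol" role: the
# super node tracks which peers watch which channel and answers
# peer-list queries, as in steps 1 and 3 above.

class SuperNode:
    def __init__(self):
        self.channels = {}              # channel name -> set of peer addresses

    def handle(self, msg):
        members = self.channels.setdefault(msg["channel"], set())
        if msg["type"] == "join":       # peer tunes in to a channel
            members.add(msg["peer"])
        elif msg["type"] == "leave":    # peer switches away from a channel
            members.discard(msg["peer"])
        elif msg["type"] == "get_peers":  # return the other available peers
            return sorted(members - {msg["peer"]})
        return None

sn = SuperNode()
sn.handle({"type": "join", "channel": "ch1", "peer": "10.0.0.1"})
sn.handle({"type": "join", "channel": "ch1", "peer": "10.0.0.2"})
peers = sn.handle({"type": "get_peers", "channel": "ch1", "peer": "10.0.0.2"})
# peers now lists the other members of "ch1" for 10.0.0.2 to contact.
```

Note that in the real system this state lives on several super nodes,
each tracking several, if not all, channels.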
Peer Protocol: The following investigations are mainly motivated by
[JOOSTEXP], in which data-driven reverse-engineering experiments are
performed.  We omit the analysis process and directly show the
conclusions.  Media data in Joost is split into chunks and then
encrypted.  Each chunk is packetized with about 5-10 seconds of video
data.  After receiving the peer list from super nodes, a peer
negotiates with some or, if necessary, all of the peers in the list
to find out which chunks they have.  Then the peer decides which
peers to get the chunks from.  No peer capability information is
exchanged in the Peer Protocol.
+---------------+          +-------------------+
| Version Server|          |  Tracker Server   |
+---------------+          +-------------------+
        \                        |
         \                       |
          \                      |     +---------------+
           \                     |     |Graphics Server|
            \                    |     +---------------+
             \                   |             |
+--------------+     +-------------+     +--------------+
                           |
                           |
                           |
                           |
+------------+          +---------+
| Super Node |----------|  Peer2  |
+------------+          +---------+

Figure 1, Architecture of Joost system
The following sections describe Joost QoS-related features, extracted
mostly from [JOOSTEXP], [Moreira] and [Joost Network Architecture].
For peer selection, the Host Cache of a peer, which is refreshed
periodically, stores a list of Joost super node IP addresses and
ports.  The selection strategy is influenced by the number of peers
accessing the same content.  Specifically, the number of candidate
peers made available is proportional to the number of active peers.
If there are only a few of them, then a Joost content server is made
available to assist in the data delivery.  Although there is no
explicit consideration for peer heterogeneity in peer selection,
low-capacity peers tend to partner with low-capacity peers.  Peers
under the same NAT also tend to serve each other preferentially
[Moreira].  Joost may consider geographical locality but may not have
AS-level awareness or exploit topological locality, which may impact
the efficiency of video distribution.
To maintain the overlay networks, super nodes probe clients, clients
probe clients and super nodes, and super nodes communicate with super
nodes and servers.  To make up for inadequate bandwidth and to be
scalable, Joost forms groups of Joost Server Islands, each island
consisting of one streaming control server controlling up to ten
streaming servers.  Moreover, the STUN protocol enables a client to
discover whether it is behind a NAT or firewall and the type of the
NAT or firewall.

For data delivery, Joost streams audio and video traffic separately
to allow for multi-lingual programming.  Content comes mostly from
peers and occasionally from content servers for "long-tail" content.
As peers are assumed to contribute in a best-effort manner,
infrastructure is needed to make up for insufficient bandwidth,
including in the asymmetric scenario.  However, super nodes are not
part of the bandwidth-supplying infrastructure, as they only relay
control traffic, not data traffic, to clients.  To support the P2P
media distribution services, Joost uses an agent-based peer-to-peer
system called Anthill.  Joost also employs a Local Video Cache for
later viewing and to avoid reloading, but it still requires
authorization from the Joost server when accessing the locally cached
video file at a later time.
Joost provides large buffering and thus causes longer start-up delay
for VoD traffic than for live media streaming traffic.  It affords
more FEC for VoD traffic but gives higher priority in delivery to
live media streaming traffic.

For Joost, load-balancing and fault-tolerance are shifted directly
into the client, and all is done natively in the P2P code.

To enhance the user viewing experience, Joost provides chat
capability between viewers and user program rating mechanisms.
3.1.2.  Octoshape
CNN [CNN] has been working with a P2P plug-in from a Denmark-based
company, Octoshape, to broadcast its live streaming.  Octoshape helps
CNN serve a peak of more than a million simultaneous viewers.  It has
also provided several innovative delivery technologies such as loss-
resilient transport, adaptive bit rate, adaptive path optimization
and adaptive proximity delivery.  Figure 2 depicts the architecture
of the Octoshape system.
Octoshape maintains a mesh overlay topology. Its overlay topology Octoshape maintains a mesh overlay topology. Its overlay topology
maintenance scheme is similar to that of P2P file-sharing maintenance scheme is similar to that of P2P file-sharing
applications, such as BitTorrent. There is no Tracker server in applications, such as BitTorrent. There is no Tracker server in
Octoshape, thus no Tracker Protocol is required. Peers obtain live Octoshape, thus no Tracker Protocol is required. Peers obtain live
streaming from content servers and peers over Octoshape Protocol. streaming from content servers and peers over Octoshape Protocol.
Several data streams are constructed from the live stream. No two
data streams are identical, and any K of them are sufficient to
reconstruct the original live stream. The number K is determined by
the original media playback rate and the playback rate of each data
stream. For example, a 400Kbit/s media stream split into four
100Kbit/s data streams gives K = 4. Data streams are constructed at
the peers instead of at the Broadcast server, which relieves the
server of a large burden. The number of data streams constructed at
a particular peer equals the number of peers downloading data from
that peer, which is constrained by that peer's upload capacity. For
the best performance, the upload capacity of a peer should be larger
than the playback rate of the live stream; if it is not, an
artificial peer may be added to deliver extra bandwidth.
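The K-of-N property described above can be sketched with a toy
erasure code. This is an illustrative simplification, not
Octoshape's proprietary coding; the XOR-parity scheme and function
names are our own:

```python
from functools import reduce

# Toy sketch: stripe a live stream into k data streams plus one XOR
# parity stream, so that any k of the k+1 streams reconstruct the
# original. Octoshape's real coding is proprietary; this only
# illustrates the k-of-n idea described in the text.

def split(stream, k):
    subs = [stream[i::k] for i in range(k)]       # round-robin striping
    n = len(subs[0])                              # longest sub-stream
    padded = [s.ljust(n, b"\x00") for s in subs]
    parity = bytes(reduce(lambda a, b: a ^ b, col)
                   for col in zip(*padded))
    return padded + [parity]                      # k+1 distinct streams

def reconstruct(streams, k, total_len):
    # At most one stream may be missing (None); rebuild it by XOR.
    missing = [i for i, s in enumerate(streams) if s is None]
    if missing:
        others = [s for s in streams if s is not None]
        streams[missing[0]] = bytes(
            reduce(lambda a, b: a ^ b, col) for col in zip(*others))
    # Interleave the k data streams back into the original byte order.
    out = bytearray()
    for j in range(len(streams[0])):
        for s in streams[:k]:
            out.append(s[j])
    return bytes(out[:total_len])
```

For a 400Kbit/s stream split into 100Kbit/s data streams, k = 4 and
any four of the five streams suffice.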
Each peer has an address book of other peers that are watching the
same channel. A standby list is set up based on the address book.
The peer periodically probes the peers on the standby list to make
sure that they are ready to take over if one of the current senders
stops or gets congested. [Octoshape]
Peer Protocol: The live stream is initially sent to a few peers in
the network and then spread to the rest of the network. When a peer
joins a channel, it notifies all the other peers about its presence
using the Peer Protocol, which drives the others to add it to their
address books. Although [Octoshape] states that each peer records
all the peers joining the channel, we suspect that not all peers are
recorded, considering that the notification traffic would be large
and peers would be busy recording when a popular program starts in a
channel and many peers switch to that channel. Possibly only some
geographic or topological neighbors are notified, and the peer
obtains its address book from these nearby neighbors.
The peer sends requests to some selected peers for the live stream,
and the receivers answer OK or not according to their upload
capacity. The peer continues sending requests to peers until it
finds enough peers to provide the data streams needed to reconstruct
the original live stream.
+------------+     +--------+
|   Peer 1   |-----| Peer 2 |
+------------+     +--------+
   |   \       /   |
   |    \     /    |
   |     \   /     |
   |     /   \     |
   |    /     \    |
   |   /       \   |
*****************************************
                |
                |
        +---------------+
        | Content Server|
        +---------------+

Figure 2, Architecture of Octoshape system
The following sections describe Octoshape QoS related features,
extracted mostly from [OctoshapeWeb], [Alstrup1] and [Alstrup2]. As
it is a closed system, the details of how the features are
implemented are not available.
To spread the burden of data distribution across several peers and
thus limit the impact of peer loss, Octoshape splits a live stream
into a number of smaller equal-sized sub-streams. For example, a
400kbit/s live stream is split and coded into 12 distinct 100kbit/s
sub-streams. Only a subset of these sub-streams needs to reach a
user for it to reconstruct the "original" live stream. The number
of distinct sub-streams could be as many as the number of active
peers.
Therefore, even if the upload capacity of a peer is smaller than its
download capacity, it is easier for the peer to contribute a
sub-stream than a whole live stream. An Octoshape peer can then
receive at least one distinct sub-stream from each neighboring peer.
To make up for the bandwidth asymmetry, artificial end users are
used to deliver additional bandwidth. Multiple OctoServers are also
available to guarantee no single point of failure [Alstrup2].
Octoshape keeps peer availability information in an address book.
Each peer keeps a periodically updated stand-by list and passes it
along with its transmitted sub-stream. With constant monitoring of
the quality and consistency of each content source, the peer can
switch partners to a better source in case of bottleneck or
congestion.
Octoshape allows an operator to control who should and should not
receive certain video signals due to copyright restrictions, to
control access based in part on IP addresses, and to obtain real-
time statistics during live events.
To optimize bandwidth utilization, Octoshape leverages computers
within a network to minimize external bandwidth usage and to select
the most reliable and "closest" source for each viewer. It also
chooses the best matching available codecs and players and scales
the bit rate up and down according to the available Internet
connection.
Octoshape [OctoshapeWeb] claims to have patented resiliency and
throughput technologies to deliver quality streams to the mobile and
wireless edge networks. This throughput optimization technology also
cleans up latent and lossy network connections between the encoder
and the distribution point, providing a stable, high quality, stream
for distribution. Octoshape also claims to be able to deliver true
HD, 1280x720 30fps (720p) video over the Internet and to have
advanced DVR functionalities such as allowing users to move
seamlessly forward and back through the streams with almost no
waiting time.
3.1.3. PPLive
PPLive [PPLive] is one of the most popular P2P streaming
applications in China. The PPLive system includes six parts:

(1) Video streaming server: provides the source of video content and
codes the content to suit the network transmission rate and the
client player.

(2) Peer: also called node or client. The nodes logically compose a
self-organizing network, and each node can join or leave at any
time. While a client downloads content, it simultaneously serves
its own content to other clients.

(3) Directory server: when the user starts the PPLive client, the
client automatically registers the user information with this
server; when the client exits, it deregisters the peer.

(4) Tracker server: records the information of all the users viewing
the same content. When a client requests some content, this server
checks whether other peers own the content and, if so, sends the
information of these peers to the client; otherwise it tells the
client to request the content from the video streaming server.

(5) Web server: provides PPLive software updates and downloads.

(6) Channel list server: stores the information of all the programs
available to users, including VoD programs and broadcast programs,
such as program name, file size and attribution.
PPLive has two major communication protocols. One is the
registration and peer discovery protocol, i.e., the Tracker
Protocol, and the other is the P2P chunk distribution protocol,
i.e., the Peer Protocol. Figure 3 shows the architecture of PPLive.
Tracker Protocol: First, a peer gets the channel list from the
Channel list server, in a way similar to that of Joost. Then the
peer chooses a channel and asks the Tracker server for the peerlist
of this channel.
Peer Protocol: The peer contacts the peers on its peerlist to get
additional peerlists, which are aggregated with its existing list.
Through this list, peers maintain a mesh for peer management and
data delivery.
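The peerlist aggregation step can be sketched as follows. This is a
minimal illustration; the "ip:port" entry format and the list size
limit are our assumptions, not PPLive's wire protocol:

```python
# Merge peerlists returned by contacted peers into our existing
# list, dropping duplicates while keeping insertion order. The
# entry format and 'limit' are hypothetical.

def aggregate_peerlists(own, responses, limit=50):
    merged = dict.fromkeys(own)        # dict preserves insertion order
    for peerlist in responses:
        for peer in peerlist:
            merged.setdefault(peer, None)
    return list(merged)[:limit]
```

Each round, the aggregated list both widens the mesh and supplies
fresh candidates to contact for further peerlists.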
For the video-on-demand (VoD) operation, because different peers
watch different parts of the channel, a peer buffers up to a few
minutes' worth of chunks within a sliding window to share with each
other. Some of these chunks may be chunks that have been recently
played; the remaining chunks are chunks scheduled to be played in
the next few minutes. Peers upload chunks to each other. To this
end, peers send each other "buffer-map" messages; a buffer-map
message indicates which chunks a peer currently has buffered and can
share. The buffer-map message includes the offset (the ID of the
first chunk), the length of the buffer map, and a string of zeroes
and ones indicating which chunks are available (starting with the
chunk designated by the offset). PPLive transfers data over UDP.
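A buffer-map message with the three fields described above (offset,
length, availability bits) can be sketched as follows. The field
widths and byte order are our assumptions, since PPLive's wire
format is not public:

```python
import struct

# Toy buffer-map codec: 4-byte chunk offset, 2-byte length, then one
# ASCII '0'/'1' per chunk. Real PPLive packs this differently; only
# the structure (offset, length, bit string) follows the text.

def encode_buffer_map(offset, flags):
    bits = "".join("1" if f else "0" for f in flags)
    return struct.pack("!IH", offset, len(flags)) + bits.encode()

def decode_buffer_map(msg):
    offset, length = struct.unpack("!IH", msg[:6])
    bits = msg[6:6 + length].decode()
    return offset, [b == "1" for b in bits]
```

A receiver can then request any chunk whose flag is set and whose ID
it is missing from its own window.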
Video Download Policy of PPLive:
1) Top ten peers contribute a major part of the download traffic.
   Meanwhile, the top peer session is quite short compared with the
   video session duration. This suggests that a PPLive peer gets
   video from only a few peers at any given time, and switches
   periodically from one peer to another;

2) PPLive can send multiple chunk requests for different chunks to
   one peer at one time;

3) PPLive is observed to have a download scheduling policy that
   gives higher priority to rare chunks and to chunks closer to the
   playout deadline, and to use a sliding window mechanism to
   regulate the buffering of chunks.
PPLive maintains a constant peer list with a relatively small number
of peers. [P2PIPTVMEA]
+------------+     +--------+
|   Peer 2   |-----| Peer 3 |
+------------+     +--------+
      |                |
      |                |
   +--------------+
   |    Peer 1    |
   +--------------+
          |
          |
          |
  +---------------+
  | Tracker Server|
  +---------------+
Figure 3, Architecture of PPlive system
The following sections describe PPLive QoS related features,
extracted mostly from [Hei], [Vu], [Horvath], and [Liu].
After obtaining an initial peer list from the member server, a peer
periodically updates its peer list by querying both the member
server and partner peers. New peers are aggressively contacted at a
fixed rate. In selecting peers as partners, a peer considers their
upload bandwidth and, in part, their location information [Horvath],
selecting on a FCFS basis those that have responded [Liu].
For data distribution, PPLive, using a data-driven or mesh-pull scheme
[Hei], divides the media content into small portions called
chunks and uses TCP for video streaming. Neighbor peers use a
gossip-like protocol to exchange their buffer maps that indicate
chunks available for sharing. Peers obtain one or more of their
missing chunks from one or more peers having them. Available chunks
may also be downloaded from the original channel server.
PPLive uses a double-buffering mechanism consisting of the TV Engine
and the Media Player for its stream reassembly and display [Hei].
The TV Engine is responsible for downloading video chunks from the
PPLive network and streaming the downloaded video to the Media
Player, which in turn displays the content to the user, after each
buffer is filled up to its respective predetermined threshold.
To utilize available peer resources, peers in one subscribed overlay
may also be harnessed to support peers in other subscribed overlays
[Vu].
3.1.4. Zattoo
Zattoo is a P2P live streaming system which serves over 3 million
registered users in European countries [Zattoo]. The system
delivers live streaming using a receiver-based, peer-division
multiplexing scheme. Zattoo reliably streams media among peers
using a mesh structure.
Figure 4 depicts a typical procedure for a single TV channel carried
over the Zattoo network. First, the Zattoo system broadcasts live
TV, captured from satellites, onto the Internet. Each TV channel is
delivered through a separate P2P network.
-------------------------------
|  ------------------         |          --------
|  |   Broadcast    |         |---------| Peer1 |-----------
|  |    Servers     |         |          --------          |
|  Administrative Servers     |                      -------------
|  ------------------------   |                      | Super Node|
|  | Authentication Server |  |                      -------------
|  | Rendezvous Server     |  |                            |
|  | Feedback Server       |  |          --------          |
|  | Other Servers         |  |---------| Peer2 |----------|
|  ------------------------   |          --------
-------------------------------
Figure 4, Basic architecture of Zattoo system
Tracker (Rendezvous Server) Protocol: In order to receive the signal
of the requested channel, all registered users are required to be
authenticated through the Zattoo Authentication Server. Upon
authentication, each user obtains a ticket with a specific lifetime.
The user then contacts the Rendezvous Server with the ticket and
identifies the TV channel of interest. In return, the Rendezvous
Server sends back a list of active peers carrying the channel.
Peer Protocol: Similar to the aforementioned procedures in Joost and
PPLive, a newly joined Zattoo peer requests to partner with peers
from among the obtained peer list. Based on the availability of
bandwidth, a requested peer decides how to multiplex a stream onto
its set of neighboring peers. When the requesting peer receives
packets, it stores them to form sub-streams for reassembling the
full stream.

Note that Zattoo relies on the Bandwidth Estimation Server to
initially estimate the amount of available uplink bandwidth at a
peer. Once a peer starts to forward sub-streams to other peers, it
receives QoS feedback from the receivers of the sub-streams whenever
received sub-stream quality drops below a threshold.

The following sections describe Zattoo QoS related features,
extracted mostly from [Chang].
For reliable data delivery, each live stream is partitioned into
video segments. Each video segment is coded for forward error
correction with a Reed-Solomon error correcting code into n
sub-stream packets, such that obtaining k correct packets of a
segment is sufficient to reconstruct the remaining n-k packets of
the same video segment. To receive a video segment, each peer then
specifies the sub-stream(s) of the video segment it would like to
receive from the neighboring peers.
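The k-of-n recovery property can be illustrated with a small
polynomial code over the prime field GF(257), a stand-in for the
Reed-Solomon code named above; the parameters and field choice are
ours, for illustration only:

```python
P = 257  # prime field; k segment symbols define a degree-(k-1) polynomial

def poly_mul(a, b):
    # Multiply two polynomials (coefficient lists) modulo P.
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def encode(segment, n):
    # Packet i carries the segment polynomial evaluated at x = i + 1.
    return [(i + 1,
             sum(c * pow(i + 1, j, P) for j, c in enumerate(segment)) % P)
            for i in range(n)]

def decode(packets, k):
    # Lagrange interpolation from any k packets recovers the k symbols.
    pts = packets[:k]
    seg = [0] * k
    for i, (xi, yi) in enumerate(pts):
        num, den = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                num = poly_mul(num, [-xj % P, 1])
                den = den * (xi - xj) % P
        coef = yi * pow(den, P - 2, P) % P   # modular inverse via Fermat
        for d, c in enumerate(num):
            seg[d] = (seg[d] + coef * c) % P
    return seg
```

Any k of the n packets, in any order, reconstruct the segment, which
is what lets a peer choose sub-streams freely among its neighbors.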
Zattoo uses the Peer-Division Multiplexing (PDM) scheme for its data
delivery topology setup. In this scheme, each new peer
independently executes the Search and Join phases. In the Search
phase, a peer queries the members of the peer list for sub-stream
availability; in response, it receives additional prospective peers,
sub-stream availability, quality indications, and sub-stream
sequence numbers; it then selects partnering peers from among the
responses, or quits after two failed search attempts.

In the Join phase, a joining peer, having selected the candidate
peers, requests to partner with some of them, spreading the load
among them and preferring topologically close-by peers, if these
peers have less capacity or carry lower quality sub-streams.
Barring departure or performance degradation of neighboring peers,
the established connections persist, and the specified sub-stream
packet of every segment continues to be forwarded without further
per-packet handshaking between peers.

To manage streams efficiently for incoming and outgoing
destinations, each peer has a packet buffer, called the IOB
(Input-Output Buffer). The IOB is referenced by an input pointer, a
repair pointer and one or more output pointers, one for each
forwarding destination such as player, file, or other peer. The
input pointer points to the slot in the IOB where the next incoming
packet with sequence number higher than the highest sequence number
received so far will be stored, and the repair pointer always points
to one slot beyond the last packet received in order and is used to
regulate packet retransmission and adaptive PDM (to be described
later). A packet map and forwarding discipline are associated with
each output pointer to accommodate the different forwarding rates
and regimes required by the destinations.

Note that retransmission requests are sent to random peers, not to
partnering peers. Furthermore, they are honored only if the
requested packets are still in the IOB and there is sufficient
left-over capacity to transmit all the requested packets. To avoid
buffer overrun, a set of two buffers is used in the IOB instead of a
circular buffer.

Zattoo uses the Adaptive Peer-Division Multiplexing scheme to handle
longer-term bandwidth fluctuations. In this scheme, each peer
determines how many sub-streams to transmit and when to switch
partners. Specifically, each peer continually estimates the amount
of available uplink bandwidth, based initially on probe packets to
the Zattoo Bandwidth Estimation Server and later on peer QoS
feedback, using different algorithms depending on the underlying
transport protocol. A peer increases its estimated available uplink
bandwidth, if the current estimate is below some threshold and there
has been no bad quality feedback from neighboring peers for a period
of time, according to an algorithm similar to how TCP maintains its
congestion window size. Each peer then admits neighbors based on
the currently estimated available uplink bandwidth. In case a new
estimate indicates insufficient bandwidth to support the existing
number of peer connections, connections are closed one at a time,
preferably starting with the one requiring the least bandwidth. On
the other hand, if the loss rate of packets from a peer's neighbor
reaches a certain threshold, the peer will attempt to shift the
degraded neighboring peer's load to other existing peers, while
looking for a replacement peer. When one is found, the load is
shifted to it and the degraded neighbor is dropped. As expected, if
a peer's neighbor is lost due to departure, the peer initiates the
process to replace the lost peer. To optimize the PDM
configuration, a peer may occasionally initiate switching of
existing partnering peers to topologically closer peers.
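The uplink estimation loop described above can be sketched as
follows; the constants and the exact update rule are our
assumptions, not Zattoo's algorithm:

```python
# TCP-like additive increase of the uplink estimate (kbit/s) while
# QoS feedback is clean, with a multiplicative cut on bad feedback.
# 'ceiling' plays the role of the threshold mentioned in the text.

def update_uplink_estimate(estimate, bad_feedback, ceiling=4000, step=100):
    if bad_feedback:
        return max(estimate // 2, step)    # back off on quality complaints
    if estimate < ceiling:
        return estimate + step             # probe upward below the threshold
    return estimate

def admissible_neighbors(estimate, substream_rate=100):
    # Number of sub-stream forwarding slots the estimate supports;
    # neighbors beyond this count are closed one at a time.
    return estimate // substream_rate
```

When a new, lower estimate supports fewer slots than the current
neighbor count, the peer drops connections starting with the one
requiring the least bandwidth, as described above.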
3.1.5. PPStream
The system architecture and working flows of PPStream are similar to
those of PPLive [PPStream]. PPStream transfers data mostly using
TCP, only occasionally UDP.
Video Download Policy of PPStream:
1) Top ten peers do not contribute a large part of the download
   traffic. This suggests that PPStream gets the video from many
   peers simultaneously, and its peers have long session durations;

2) PPStream does not send multiple chunk requests for different
   chunks to one peer at one time.
PPStream maintains a constant peer list with a relatively large
number of peers. [P2PIPTVMEA]
The following sections describe PPStream QoS related features,
extracted mostly from [Li], [Jia] and [Wei].

PPStream is mainly mesh-based but to some extent has a layered data
distribution topology. It also clusters peers geographically, based
on the longitude and latitude associated with their IP addresses
[Jia].
To ensure data availability, PPStream uses some form of chunk
retransmission request mechanism and shares buffer maps at a high
rate, although it rarely requests the same data chunk concurrently.
Each data chunk, identified by the play time offset encoded by the
program source, is divided into 128 sub-chunks of 8KB each. The
chunk id is used to ensure sequential ordering of received data
chunks.
The buffer map consists of one or more 128-bit flags denoting the
availability of sub-chunks, with a corresponding time offset.
Usually a buffer map contains only one data chunk at a time and is
thus smaller than that of PPLive. It also conveys the sending
peer's playback status to the other peers, because as soon as a data
chunk is played back, the chunk is deleted or replaced by the next
data chunk [Wei].
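A 128-bit sub-chunk flag of the kind described above can be modeled
as a plain integer; the representation is our own illustration, not
PPStream's encoding:

```python
# One bit per 8KB sub-chunk of the current data chunk: bit i set
# means sub-chunk i is available. All 128 sub-chunks fit one flag.

SUBCHUNKS = 128

def set_subchunk(flags, i):
    return flags | (1 << i)

def missing_subchunks(flags):
    return [i for i in range(SUBCHUNKS) if not flags >> i & 1]

def chunk_complete(flags):
    return flags == (1 << SUBCHUNKS) - 1
```

Once the flag is all ones the data chunk is complete, and the buffer
map rolls over to the next chunk's time offset.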
At the initiating stage, a peer can use up to 4 data chunks, and at
a stabilized stage, a peer usually uses one data chunk. However, in
a transient stage, a peer uses a variable number of chunks.
Although sub-chunks within each data chunk are fetched nearly at
random, without using a rarest-first or greedy policy, the same
fetching pattern for one data chunk seems to repeat in the following
data chunks [Li]. Moreover, high-bandwidth PPStream peers tend to
receive chunks earlier and thus to contribute more than
lower-bandwidth peers.
3.1.6. SopCast
The system architecture and working flows of SopCast are similar to
those of PPLive. SopCast transfers data mainly using UDP,
occasionally TCP. Top ten peers contribute about half of the total
download traffic. SopCast's download policy is similar to PPLive's
policy in that it switches periodically between provider peers.
However, SopCast seems to always need more than one peer to get the
video, while in PPLive a single peer could be the only video
provider.

SopCast's peer list can be as large as PPStream's peer list, but
SopCast's peer list varies over time. [P2PIPTVMEA]
The following sections describe SopCast QoS related features,
extracted mostly from [Ali], [Ciullo], [Fallica], [Sentinelli],
[Silverston], and [Tang].
SopCast allows for software update through a centralized web server
(HTTP) and makes a channel list available through another
centralized server (HTTP).
SopCast traffic is encoded, and SopCast TV content is divided into
video chunks or blocks with equal sizes of 10KB [Tang]. Sixty
percent of its traffic is signaling packets and 40% is actual video
data packets [Fallica]. SopCast produces more signaling traffic
than PPLive, PPStream, and TVAnts, whereas PPLive produces the least
[Silverston]. Its traffic is also noted to have long-range
dependency [Silverston], indicating that mitigating it with QoS
mechanisms may be difficult. [Ali] reported that the SopCast
communication mechanism starts with UDP for the exchange of control
messages among its peers using a gossip-like protocol and then moves
to TCP for the transfer of video segments. This use of TCP for data
transfer seems to contradict other findings [Fallica][Silverston].
To discover candidate peers, a peer requests a peer list from the
Tracker, or from neighboring peers using a gossip-like protocol. To
retrieve content [Fallica], a new peer contacts peers selected
randomly from the peer list it obtained from having queried the root
servers (trackers). The process of contacting peers slows down after
the initial bootstrap phase [Horvath, Ciullo]. The number of peers a
node typically connects to for download is about 2 to 5 [Sentinelli],
and there is no observed preference for peers with shorter paths
[Ciullo]. Partner peers periodically advertise content availability
and exchange sought content. In forming multiple parent and children
relationships, a peer does not exploit peer location information
[Horvath]. In general, parents are chosen solely based on
performance; however, lower capacity nodes seem to choose parents
that are closer, to improve performance and to compensate for their
bandwidth constraints [Ali]. When needed, a peer can download video
streams directly from the Source Provider, a node that broadcasts the
entire video [Tang]. In the process of data exchange, there is no
enforcement of tit-for-tat like mechanisms [Ciullo].
Similar to PPLive, SopCast uses a double-buffering mechanism. The
SopCast buffer downloads video chunks from the network, stores them,
and upon exceeding a predetermined number of stored chunks, launches
the Media player. The Media player buffer then downloads video
content from the local web server listening port and, upon receiving
a sufficient amount of content, starts video playback.
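The double-buffering behavior above can be sketched as follows. This
is an illustrative model only, not SopCast's actual implementation;
the class name and the start-up chunk threshold are our assumptions.

```python
# Illustrative sketch of double buffering: a network-facing buffer
# accumulates chunks and the player only starts once a predetermined
# number of chunks is stored.
class DoubleBuffer:
    def __init__(self, start_threshold=5):
        self.start_threshold = start_threshold  # chunks needed before playback
        self.network_buffer = []                # chunks fetched from peers
        self.player_started = False

    def receive_chunk(self, chunk):
        self.network_buffer.append(chunk)
        if (not self.player_started
                and len(self.network_buffer) >= self.start_threshold):
            self.player_started = True          # launch the media player

    def play_next(self):
        # The player drains stored chunks in arrival order once started.
        if self.player_started and self.network_buffer:
            return self.network_buffer.pop(0)
        return None
```

The same pattern applies, with different thresholds, to the Media
player buffer that feeds actual playback.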
3.1.7. TVants
The system architecture and working flows of TVants are similar to
PPLive's. TVAnts is more balanced between TCP and UDP in data
transmission.
TVAnts' peer list is also large and varies over time.
[P2PIPTV-measuring]
We illustrate in Figure 5 the main components and steps common to
PPLive, PPStream, SopCast and TVants.
+------------+
| Tracker |
/+------------+
/
/ +------+
1,2/ /|Peer 1|
/ / +------+
/ /3,4,6
+---------+/ +------+
|New Peer |---------------|Peer 2|
+---------+\ 4,6 +------+
|5 | \
|---| \ +------+
3,4,6 \|Peer 3|
+------+
Figure 5, Main components and steps of PPLive, PPStream, SopCast and TVants
The main steps are:
(1) A new peer registers with tracker / distributed hash table
(DHT) to join the peer group sharing the same channel / media
content;
(2) Tracker / DHT returns an initial peer list to the new peer;
(3) The new peer harvests peer lists by gossiping (i.e., exchanging
peer lists) with the peers on the initial peer list to aggregate
more peers sharing the channel / media content;
(4) The new peer randomly (or with some guidance) selects some peers
from its peer list to connect and exchanges peer information (e.g.
buffer map, peer status, etc.) with connected peers to learn where
to get what data;
(5) The new peer decides what data should be requested in which
order / priority using some scheduling algorithm and the peer
information obtained in Step (4);
(6) The new peer requests the data from some connected peers.
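The steps above can be sketched in code. This toy walk-through is our
own illustration, not any deployed client: the class and function
names are invented, and rarest-first is only one plausible choice for
the scheduling algorithm of step (5).

```python
# Toy model of steps (1)-(6): register with a tracker, gossip for more
# peers, exchange buffer maps, then request missing chunks rarest-first.
class Tracker:
    def __init__(self):
        self.channels = {}                    # channel -> set of peer ids

    def register(self, channel, peer_id):
        # Steps (1) and (2): join the channel, get the initial peer list.
        members = self.channels.setdefault(channel, set())
        initial_list = sorted(members)
        members.add(peer_id)
        return initial_list

def gossip(initial_list, lists_from_neighbors):
    # Step (3): aggregate more peers by exchanging peer lists.
    harvested = set(initial_list)
    for other in lists_from_neighbors:
        harvested.update(other)
    return harvested

def schedule_requests(buffer_maps, have):
    # Steps (4) and (5): from the collected buffer maps, order the
    # chunks we still miss, rarest first (ties broken by chunk number).
    counts = {}
    for bm in buffer_maps.values():
        for chunk in bm:
            if chunk not in have:
                counts[chunk] = counts.get(chunk, 0) + 1
    return sorted(counts, key=lambda c: (counts[c], c))
```

Step (6) would then issue requests to connected peers in the returned
order.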
The following sections describe TVAnts QoS related features,
extracted mostly from [Alessandria], [Ciullo], and [Horvath].
TVAnts peer discovery mechanism is very greedy during the first part
of a peer's life and stabilizes afterwards [Ciullo].
For data delivery, peers exhibit mild preference to exchange data
among themselves in the same Autonomous System and also among peers
in the same subnet. A TVAnts peer also exhibits some preference to
download from closer peers. According to [Horvath], a TVAnts peer
exploits location information and downloads mostly from high-
bandwidth peers. However, it does not seem to enforce any tit-for-
tat mechanisms in the data delivery.
TVAnts [Alessandria] seems to be sensitive to network impairments
such as changes in network capacity, packet loss, and delay. Upon a
capacity loss, a peer will always seek more peers to download from.
In the process of trying to avoid bad paths and selecting good peers
to continue downloading data, aggressive and potentially harmful
behavior for both the application and the network results when a
bottleneck affects all potential peers.
When a peer experiences limited access capacity, it reacts by
increasing redundancy (with FEC or ARQ mechanism) as if reacting to
loss, and thus causes a higher download rate. To recover from packet
losses, it uses some kind of ARQ mechanism. Although network
conditions do impact video stream distribution, such as the network
delay impacting the start-up phase, they seem to have little impact
on the network topology discovery and maintenance process.
3.2. Tree-based P2P streaming systems
Tree-based systems implement a tree distribution graph, rooted at the
source of content. In principle, each node receives data from a
parent node, which may be the source or a peer. If peers do not
change too often, such systems require little overhead, since packets
are forwarded from node to node without the need for extra messages.
However, in high churn environments (i.e. fast turnover of peers in
the tree), the tree must be continuously destroyed and rebuilt, a
process that requires considerable control message overhead. As a
side effect, nodes must buffer data for at least the time required to
repair the tree, in order to avoid packet loss. One major drawback
of tree-based streaming systems is their vulnerability to peer churn.
A peer departure will temporarily disrupt video delivery to all peers
in the sub-tree rooted at the departed peer.
3.2.1. PeerCast
PeerCast adopts a Tree structure. The architecture of PeerCast is
shown in Figure 6.
Peers in one channel construct the Broadcast Tree and the Broadcast
server is the root of the Tree. A Tracker can be implemented
independently or merged in the Broadcast server. The Tracker in a
Tree-based P2P streaming application selects the parent nodes for
new peers that join the Tree. A Transfer node in the Tree receives
[...]
address. First of all, the peer sends a request to the server, and
the server answers OK or not according to its idle capability. If
the broadcast server has enough idle capability, it will include the
peer in its child-list. Otherwise, the broadcast server will choose
at most eight nodes of its children and answer the peer. The peer
records the nodes and contacts one of them, until it finds a node
that can serve it.
Instead of the peer requesting the channel, a Transfer node pushes
the live stream to its children, each of which can be a transfer node
or a receiver. A node in the tree will notify its parent about its
status periodically, and the parent will update its child-list
according to the received notifications.
 ------------------------------
|         +---------+          |
|         | Tracker |          |
|         +---------+          |
|              |               |
|              |               |
|   +---------------------+    |
|   |  Broadcast server   |    |
|   +---------------------+    |
|------------------------------
[...]
      +---------+              +---------+
      /         \              /         \
     /           \            /           \
    /             \          /             \
+---------+  +---------+  +---------+  +---------+
|Receiver1|  |Receiver2|  |Receiver3|  |Receiver4|
+---------+  +---------+  +---------+  +---------+
Figure 6, Architecture of PeerCast system
The following sections describe PeerCast QoS related features,
extracted mostly from [Deshpande] and [Deshpande1].
Each PeerCast node has a peering layer that is between the
application layer and the transport layer. The peering layer of each
node coordinates among similar nodes to establish and maintain a
multicast tree. Moreover, the peering layer also supports a simple,
lightweight redirect primitive. This primitive allows a peer p to
direct another peer c, which is either opening a data-transfer
session with p or has a session already established with p, to a
target peer t to try to establish a data-transfer session. Peer
discovery starts at the root (source) or some selected sub-tree root
and goes recursively down the tree structure. When a peer leaves
normally, it informs its parent, which then releases the peer, and it
also redirects all its immediate children to find new parents
starting at some target node.
The peering layer allows for different policies of topology
maintenance. In choosing a parent from among the children of a given
peer, a child can be chosen randomly, one at a time in some fixed
order, or based on least access latency with respect to the choosing
peer. There are also many choices of peers to start and limit the
search. The different combinations are: all the descendants of a
leaving peer have to start searching from the root [Root-All (RTA)];
only the children of a leaving peer have to start searching from the
root [Root (RT)]; all the descendants of a leaving peer have to start
searching from the parent of the leaving peer [Grandfather-All
(GFA)]; and only the children of the leaving peer have to start
searching from the parent of the leaving peer [Grandfather (GF)].
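The four policies differ only in who must search (children versus all
descendants) and where the search starts (root versus the departed
peer's parent). A rough sketch follows; the dict-based tree
representation and function name are our own assumptions, not
PeerCast's.

```python
# Hypothetical sketch of the four re-join policies (RTA, RT, GFA, GF).
# The tree is a dict with "root", "parent" and "children" maps.
def rejoin_plan(tree, leaving, policy):
    """Return (peers that must search, node each search starts from)."""
    def descendants(node):
        out = []
        for child in tree["children"].get(node, []):
            out.append(child)
            out.extend(descendants(child))
        return out

    children = tree["children"].get(leaving, [])
    # RTA/GFA: all descendants search; RT/GF: only the children search.
    searchers = descendants(leaving) if policy in ("RTA", "GFA") else list(children)
    # RTA/RT: search starts at the root; GFA/GF: at the grandfather.
    start = tree["root"] if policy in ("RTA", "RT") else tree["parent"][leaving]
    return searchers, start
```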
A heart-beat mechanism at the peer is available to handle failed
peers. With this mechanism, a peer sends keep-alive messages to its
parent and children. If a parent peer detects that a child has
skipped a specified number of heart-beats, it deems the child as lost
and tidies up. Similarly, a child peer starts its search for a new
parent once its current parent is deemed to have left.
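The parent-side bookkeeping can be sketched as below. This is an
illustrative model under our own assumptions (class name, per-child
missed-beat counters, and the tick-per-interval driver are not from
PeerCast).

```python
# Sketch of heart-beat monitoring: a parent counts missed beats per
# child and deems a child lost after a configurable limit.
class HeartbeatMonitor:
    def __init__(self, max_missed=3):
        self.max_missed = max_missed
        self.missed = {}               # child id -> consecutive missed beats

    def add_child(self, child):
        self.missed[child] = 0

    def beat(self, child):
        # A received keep-alive resets the child's missed-beat counter.
        if child in self.missed:
            self.missed[child] = 0

    def tick(self):
        # Called once per heart-beat interval; returns children deemed
        # lost, which the parent then removes from its child-list.
        lost = []
        for child in list(self.missed):
            self.missed[child] += 1
            if self.missed[child] > self.max_missed:
                lost.append(child)
                del self.missed[child]   # tidy up
        return lost
```

A child would run the symmetric logic against its parent before
starting its search for a new one.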
PeerCast also proposes, but has not evaluated, a number of algorithms
that use some cost function to optimize the overlay. Some of them
are described next. If a parent is already saturated, a newly
arrived peer replaces one of the children that is costlier than the
newly arrived peer, and the replaced peer tries to reconnect
somewhere else [Knock-Down]. A newly arrived peer replaces the
target peer and the target peer becomes its child [Join-Flip].
Unstable peers are pushed down to the bottom of the tree [Leaf-Sink].
An existing child and parent relationship is flipped [Maintain-Flip].
3.2.2. Conviva
Conviva[TM] [conviva] is a real-time media control platform for
Internet multimedia broadcasting. For its early prototype, End
System Multicast (ESM) [ESM04] is the underlying networking
technology for organizing and maintaining an overlay broadcasting
topology. Next we present an overview of ESM. ESM adopts a Tree
structure. The architecture of ESM is shown in Figure 7.
ESM has two versions of protocols: one for smaller scale conferencing
apps with multiple sources, and the other for larger scale
broadcasting apps with a single source. We focus on the latter
version in this survey.
ESM maintains a single tree for its overlay topology. Its basic
functional components include two parts: a bootstrap protocol, a
parent selection algorithm, and a light-weight probing protocol for
tree topology construction and maintenance; and a separate control
structure decoupled from the tree, where a gossip-like algorithm is
used for each member to know a small random subset of group members;
members also maintain paths from the source.
Upon joining, a node gets a subset of group membership from the
source (the root node); it then finds a parent using a parent
selection algorithm. The node uses light-weight probing heuristics
on a subset of members it knows, evaluates remote nodes, and chooses
a candidate parent. It also uses the parent selection algorithm to
deal with performance degradation due to node and network churns.
ESM supports NATs. It allows NATs to be parents of public hosts,
and public hosts can be parents of all hosts including NATs as
children.
 ------------------------------
|         +---------+          |
|         | Tracker |          |
|         +---------+          |
|              |               |
|              |               |
|   +---------------------+    |
|   |  Broadcast server   |    |
|   +---------------------+    |
|------------------------------
[...]
      +---------+              +---------+
      /         \              /         \
     /           \            /           \
    /             \          /             \
+---------+  +---------+  +---------+  +---------+
|  Peer3  |  |  Peer4  |  |  Peer5  |  |  Peer6  |
+---------+  +---------+  +---------+  +---------+
Figure 7, Architecture of ESM system
The following sections describe only ESM QoS related features,
extracted mostly from [ESM04], [Chu1], [Chu2], and [Chu3], as those
of Conviva are not publicly available.
ESM constructs the multicast tree in a two-step process. It first
constructs a mesh of the participating peers, the mesh having the
following properties:
o The shortest path delay between any pair of peers in the mesh is
at most K times the unicast delay between them, where K is a small
constant.
o Each peer has a limited number of neighbors in the mesh which does
not exceed a given (per-member) bound chosen to reflect the
bandwidth of the peer's connection to the Internet.
It then constructs a (reverse) shortest path spanning tree of the
mesh with the root being the source.
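The second step can be sketched with a standard shortest-path
computation over the mesh delays. This illustrative code uses
Dijkstra's algorithm and a dict-of-dicts adjacency representation,
both our own assumptions rather than ESM's actual data structures.

```python
import heapq

# Given a mesh as {node: {neighbor: delay, ...}, ...}, compute each
# peer's parent on the shortest-path tree rooted at the source.
def shortest_path_tree(mesh, source):
    dist = {source: 0.0}
    parent = {source: None}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for neigh, delay in mesh.get(node, {}).items():
            nd = d + delay
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                parent[neigh] = node      # best-known upstream hop
                heapq.heappush(heap, (nd, neigh))
    return parent
```

Data then flows from the source down the parent pointers (hence
"reverse" from each peer's point of view).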
Therefore a peer participates in two types of topology management: a
control structure in which peers make sure they are always connected
in a mesh and a data delivery structure in which peers make sure data
gets delivered to them in a tree structure.
To keep connected, each peer maintains communication with a small
number of random neighbors and a complete list of members through a
gossip-like algorithm. When a new node joins, it gets a list of
group members from the source. To look for a parent, it sends probe
requests to a subset of the group members it obtained; evaluates them
with respect to delay to the source, application throughput, and link
bandwidth; and then chooses from among them a candidate parent that
is not a descendant and is not saturated. In addition to using RTT
probes, consisting of 1-Kbyte transfers to detect bottleneck
bandwidth, a node also considers the performance history of
previously chosen parents. The peer also avoids probing hosts that
have low bandwidth or are bottlenecked.
When a peer leaves normally, it notifies its neighboring peers and
the neighboring peers propagate the departing peer info. At the same
time, the departing peer continues to forward packets for some time
to minimize transient packet loss. When a peer leaves due to
failure, active peers detect the departure of the peer through its
non-responsiveness to their probe messages. Active peers that detect
the loss then propagate the departed peer info. A departed-peer
list, which is flushed after a sufficient amount of time has passed,
keeps track of leaving and failed peers. The list enables refreshes
from an active peer and a leaving/failed peer to be distinguished.
Departing peers and failing peers could in some instances partition a
mesh into two or more components. A mesh repair algorithm detects
such occurrences by noticing a split in the membership list and tries
to repair it by adding virtual links from active members to one of
the non-active members, trying one non-active member at a time.
To improve mesh/tree structural and operating quality, each peer
randomly probes one another to add new links that have perceived gain
in utility; and each peer continually monitors existing links to drop
those links that have perceived drop in utility. Switching parent
occurs if a peer leaves or fails; if there is a persistent congestion
or low bandwidth condition; or if there is a better clustering
configuration. To allow for more public hosts to be available for
becoming parents of NATs, public hosts preferentially choose NATs as
parents.
[...]
time is set to be larger than the cost of any path with a valid
route, but smaller than infinite cost. To make better use of the
path bandwidth, streams of different bit-rates are forwarded
according to the following priority scheme: audio is higher than
video streams, and lower quality video is higher than higher quality
video. Moreover, bit-rates of streams are adapted to the peer's
performance capability.
3.3. Hybrid P2P streaming system
The objective of the hybrid P2P streaming system is to combine the
advantages of tree-mesh topologies and pull-push modes, in order to
achieve a balance among system robustness, scalability and real-time
application performance.
3.3.1. New Coolstreaming
Coolstreaming, first released in summer 2004 with a mesh-based
structure, arguably represented the first successful large-scale P2P
live streaming system. As in the above analysis, it has poor delay
performance and high overhead associated with each video block
transmission. To improve the situation, New Coolstreaming [New
CoolStreaming] adopts a hybrid mesh and tree structure with a hybrid
pull and push mechanism. All the peers are organized into a
mesh-based topology similar to PPLive to ensure high reliability.
Besides, the content delivery mechanism is the most important part of
New Coolstreaming. Figure 8 shows the content delivery architecture.
The video stream is divided into blocks of equal size, in which each
block is assigned a sequence number to represent its playback order
in the stream. Each video stream is further divided into multiple
sub-streams without any coding, allowing each node to retrieve any
sub-stream independently from different parent nodes. This
consequently reduces the impact to content delivery due to a parent
departure or failure. The details of the hybrid push and pull
content delivery scheme are as follows:
(1) A node first subscribes to a sub-stream by connecting to one of
its partners via a single request (pull) in BM to the requested
partner, i.e., the parent node. (The node can subscribe to more than
one sub-stream from its partners to obtain higher playback quality.)
(2) The selected parent node will continue pushing all blocks of the
requested sub-stream to the requesting node.
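The two steps can be modeled as below. This is a toy illustration
under our own assumptions: the class names are invented, and mapping
a block's sequence number to a sub-stream via seq % K is an
illustrative choice, not a confirmed New Coolstreaming detail.

```python
# Toy model of the pull-once, push-thereafter exchange: one pull
# request subscribes a child to a sub-stream, after which the parent
# pushes every matching block with no further requests.
class ParentNode:
    def __init__(self, k):
        self.k = k                     # number of sub-streams
        self.subscribers = {}          # sub-stream index -> child's queue

    def subscribe(self, substream, child_queue):
        # Step (1): the single pull request for a sub-stream.
        self.subscribers[substream] = child_queue

    def push_block(self, seq):
        # Step (2): each block goes to whoever subscribed to its
        # sub-stream (assumed here to be seq % k).
        child_queue = self.subscribers.get(seq % self.k)
        if child_queue is not None:
            child_queue.append(seq)
```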
This not only reduces the overhead associated with each video block
transfer, but more importantly, significantly reduces the delay
incurred in retrieving video content.
 ------------------------------
|         +---------+          |
|         | Tracker |          |
|         +---------+          |
|              |               |
|              |               |
|   +---------------------+    |
|   |   Content server    |    |
|   +---------------------+    |
|------------------------------
[...]
      |  Peer1  |              |  Peer2  |
      +---------+              +---------+
      /         \              /         \
     /           \            /           \
    /             \          /             \
+---------+  +---------+  +---------+  +---------+
|  Peer2  |  |  Peer3  |  |  Peer1  |  |  Peer3  |
+---------+  +---------+  +---------+  +---------+
Figure 8 Content Delivery Architecture
The following sections describe Coolstreaming QoS related features,
extracted mostly from [Bo] and [Xie].
The basic components of Coolstreaming consist of the source,
bootstrap node, web server, log server, media servers, and peers.
Three basic modules in a peer help it maintain a partial view of the
overlay (Membership Manager); establish and maintain partnership with
other peers using Buffer Maps to indicate available video
content for exchange (Partnership Manager); and manage data
delivery, retrieval and play out (Stream Manager).
In building the overlay topology, a newly arrived peer contacts the
bootstrap node for a list of nodes and stores it in its own mCache.
From the stored list, it selects nodes randomly to form partnerships
and then parent-children relationships, where a partnership between
two nodes exists when only block availability information is
exchanged between them, and a parent-children relationship exists
when, in addition to being partners, video content is also exchanged.
Video content is processed for ease of delivery, retrieval, storage
and play out. To manage content delivery, a video stream is divided
into blocks with equal size, each of which is assigned a sequence
number to represent its playback order in the stream. Each block is
further divided into K sub-blocks and the set of ith sub-blocks of
all blocks constitutes the ith sub-stream of the video stream, where
i is a value bigger than 0 and less than K+1. To retrieve video
content, a node receives at most K distinct sub-streams from its
parent nodes. To store retrieved sub-streams, a node uses a double
buffering scheme having a synchronization buffer and a cache buffer.
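The sub-stream decomposition above can be sketched as follows. The
function names and the even-split padding are our own illustrative
assumptions.

```python
# Sketch of the sub-stream decomposition: each block is split into K
# sub-blocks, and the i-th sub-blocks across all blocks form
# sub-stream i (1 <= i <= K).
def split_block(block, k):
    # Sub-block size rounded up so the block divides into k pieces
    # (an assumption; the draft does not specify padding).
    size = -(-len(block) // k)
    return [block[j * size:(j + 1) * size] for j in range(k)]

def substream(blocks, i, k):
    """Sub-blocks at position i (1-based) taken from every block."""
    return [split_block(b, k)[i - 1] for b in blocks]
```

A node receiving sub-streams 1..K from different parents can then
reassemble each block from its K sub-blocks in sequence-number order.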
[...]
Map will continue to receive the sub-streams of all subsequent blocks
from the same partner until future conditions cause the partner to do
otherwise. Moreover, users retrieve video indirectly from the source
through a number of strategically located servers.
To keep the parent-children relationship above a certain level of
quality, each node constantly monitors the status of the on-going
sub-stream reception and re-selects parents according to sub-stream
availability patterns.  Specifically, if a node observes that the
block sequence number of the sub-stream of a parent is smaller than
that of any of its other partners by a predetermined amount, the node
concludes that the parent is lagging sufficiently behind and needs to
be replaced.  Furthermore, a node also evaluates the maximum and
minimum of the block sequence numbers in its synchronization buffer
to determine whether any parent is lagging behind the rest of its
parents and thus also needs to be replaced.
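The parent re-selection rule above amounts to a simple lag check.
The following sketch is illustrative only; the threshold value and
data shapes are assumptions, not values taken from the surveyed
system.

```python
# Hedged sketch of lag-based parent re-selection: a parent whose
# latest block sequence number trails the most advanced partner by
# more than a predetermined amount is marked for replacement.

LAG_THRESHOLD = 20  # blocks; hypothetical "predetermined amount"

def lagging_parents(latest_seq_by_partner: dict, parents: set,
                    threshold: int = LAG_THRESHOLD) -> set:
    """Return parents whose latest observed block sequence number is
    more than `threshold` behind the most advanced partner."""
    newest = max(latest_seq_by_partner.values())
    return {p for p in parents
            if newest - latest_seq_by_partner[p] > threshold}
```

A node would run such a check periodically against the Buffer Maps it
exchanges with partners, replacing any parent the check flags.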
4.  A common P2P Streaming Process Model

As shown in Figure 8, a common P2P streaming process can be
summarized based on Section 3:
application and Tree-based application is a little different.  In the
Mesh-based applications, such as Joost and PPLive, the Tracker
maintains the lists of peers storing chunks for a specific channel or
streaming file.  It provides a peer list for peers to download from,
as well as upload to, each other.  In the Tree-based applications,
such as PeerCast and Conviva, the Tracker directs new peers to find
parent nodes, and the data flows from parent to child only.
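The mesh-versus-tree contrast above can be sketched as two tracker
behaviours.  This is an illustrative sketch under assumed names; the
classes, methods, and peer-list size are invented for illustration
and do not describe any surveyed system's actual protocol.

```python
# Hypothetical sketch of the two tracker roles: a mesh tracker hands
# out a flat peer list per channel (peers download from and upload to
# each other), while a tree tracker suggests candidate parents for a
# new peer to attach to (data flows parent -> child only).

import random

class MeshTracker:
    def __init__(self):
        self.peers_by_channel = {}  # channel id -> set of peer ids

    def join(self, channel, peer):
        self.peers_by_channel.setdefault(channel, set()).add(peer)

    def peer_list(self, channel, n=50):
        """Random subset of peers holding chunks of this channel."""
        peers = list(self.peers_by_channel.get(channel, ()))
        return random.sample(peers, min(n, len(peers)))

class TreeTracker(MeshTracker):
    def candidate_parents(self, channel, n=3):
        """Point a new peer at possible parents in the tree; the
        tracker itself does not relay any video data."""
        return self.peer_list(channel, n)
```

In both cases the tracker only handles signaling; the media itself is
exchanged peer to peer.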
5.  Security Considerations
This document does not raise security issues.
6.  Author List

The authors of this document are listed below.
Hui Zhang, NEC Labs America.
Jun Lei, University of Goettingen.
Gonzalo Camarillo, Ericsson.
Yong Liu, Polytechnic University.
Delfin Montuno, Huawei.
Lei Xie, Huawei.
Shihui Duan, CATR.
7.  Acknowledgments
We would like to acknowledge Jiang Xingfeng for providing good ideas
for this document.
8.  Informative References
[PPLive]   "www.pplive.com".

[PPStream]
           "www.ppstream.com".

[CNN]      "www.cnn.com".
[JOOSTEXP]
           Lei, Jun, et al., "An Experimental Analysis of Joost
           Peer-to-Peer VoD Service".

[P2PVOD]   Huang, Yan, et al., "Challenges, Design and Analysis of a
           Large-scale P2P-VoD System", 2008.
[Octoshape]
           Alstrup, Stephen, et al., "Introducing Octoshape-a new
           technology for large-scale streaming over the Internet".

[Zattoo]   "http://zattoo.com/".

[Conviva]  "http://www.rinera.com/".
[ESM]      Zhang, Hui., "End System Multicast,
           http://www.cs.cmu.edu/~hzhang/Talks/ESMPrinceton.pdf",
           May 2004.
[Survey]   Liu, Yong, et al., "A survey on peer-to-peer video
           streaming systems", 2008.
[P2PIPTVMEA]
           Silverston, Thomas, et al., "Measuring P2P IPTV Systems".
[Challenge]
           Li, Bo, et al., "Peer-to-Peer Live Video Streaming on the
           Internet: Issues, Existing Approaches, and Challenges",
           June 2007.
[NEWCOOLStreaming]
           Li, Bo, et al., "Inside the New Coolstreaming: Principles,
           Measurements and Performance Implications", Apr. 2008.
Authors' Addresses
Gu Yingjie (editor)
Huawei
No.101 Software Avenue
Nanjing, Jiangsu Province 210012
P.R.China

Phone: +86-25-56624760
Fax: +86-25-56624702
Email: guyingjie@huawei.com
Zong Ning (editor)
Huawei
No.101 Software Avenue
Nanjing, Jiangsu Province 210012
P.R.China

Phone: +86-25-56624760
Fax: +86-25-56624702
Email: zongning@huawei.com
Zhang Yunfei
China Mobile

Email: zhangyunfei@chinamobile.com