PPSP                                                          Y. Gu, Ed.
Internet-Draft                                              N. Zong, Ed.
Intended status: Standards Track                                  Huawei
Expires: April 19, 2013                                        Hui Zhang
                                                        NEC Labs America
                                                            Yunfei Zhang
                                                            China Mobile
                                                                  J. Lei
                                                University of Goettingen
                                                       Gonzalo Camarillo
                                                                Ericsson
                                                                Yong Liu
                                                  Polytechnic University
                                                          Delfin Montuno
                                                                 Lei Xie
                                                                  Huawei
                                                        October 16, 2012

                  Survey of P2P Streaming Applications
                       draft-ietf-ppsp-survey-03

Abstract

   This document presents a survey of popular Peer-to-Peer streaming
   applications on the Internet.  We focus on the Architecture and Peer
   Protocol/Tracker Signaling Protocol in the presentation, and study a
   selection of well-known P2P streaming systems, including Joost,
   PPLive, and other popular existing systems.  Through the survey, we
   summarize a common P2P streaming process model and the correspondent
   signaling process for use in the P2P Streaming Protocol (PPSP)
   standardization.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 19, 2013.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  3
   2.  Terminologies and concepts . . . . . . . . . . . . . . . . . .  3
   3.  Survey of P2P streaming system . . . . . . . . . . . . . . . .  4
     3.1.  Mesh-based P2P streaming systems . . . . . . . . . . . . .  4
       3.1.1.  Joost  . . . . . . . . . . . . . . . . . . . . . . . .  5
       3.1.2.  Octoshape  . . . . . . . . . . . . . . . . . . . . . .  8
       3.1.3.  PPLive . . . . . . . . . . . . . . . . . . . . . . . . 10
       3.1.4.  Zattoo . . . . . . . . . . . . . . . . . . . . . . . . 12
       3.1.5.  PPStream . . . . . . . . . . . . . . . . . . . . . . . 14
       3.1.6.  SopCast  . . . . . . . . . . . . . . . . . . . . . . . 15
       3.1.7.  TVants . . . . . . . . . . . . . . . . . . . . . . . . 16
     3.2.  Tree-based P2P streaming systems . . . . . . . . . . . . . 16
       3.2.1.  PeerCast . . . . . . . . . . . . . . . . . . . . . . . 17
       3.2.2.  Conviva  . . . . . . . . . . . . . . . . . . . . . . . 19
     3.3.  Hybrid P2P streaming system  . . . . . . . . . . . . . . . 21
       3.3.1.  New Coolstreaming  . . . . . . . . . . . . . . . . . . 21
   4.  A common P2P Streaming Process Model . . . . . . . . . . . . . 23
   5.  Security Considerations  . . . . . . . . . . . . . . . . . . . 24
   6.  Author List  . . . . . . . . . . . . . . . . . . . . . . . . . 24
   7.  Acknowledgments  . . . . . . . . . . . . . . . . . . . . . . . 25
   8.  Informative References . . . . . . . . . . . . . . . . . . . . 25
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 26

1.  Introduction

   Toward standardizing the signaling protocols used in today's Peer-
   to-Peer (P2P) streaming applications, we surveyed several popular
   P2P streaming systems regarding their architectures and signaling
   protocols between peers, as well as between peers and trackers.  The
   studied P2P streaming systems, running worldwide or domestically,
   include PPLive, Joost, and Octoshape.  This document does not intend
   to cover all design options of P2P streaming applications.  Instead,
   we choose a representative set of applications and focus on the
   respective signaling characteristics of each kind.  Through the
   survey, we generalize a common streaming process model from those
   P2P streaming systems, and summarize the companion signaling process
   as the base for P2P Streaming Protocol (PPSP) standardization.

2.  Terminologies and concepts

   Chunk: A chunk is a basic unit of partitioned streaming media, which
   is used by a peer for the purpose of storage, advertisement and
   exchange among peers [P2PVOD].

   Content Distribution Network (CDN) node: A CDN node refers to a
   network entity that is usually deployed at the network edge to store
   content provided by the original servers, and serves content to
   clients located topologically nearby.

   Live streaming: The scenario where all clients receive streaming
   content for the same ongoing event.  The lags between the play
   points of the clients and that of the streaming source are small.

   P2P cache: A P2P cache refers to a network entity that caches P2P
   traffic in the network, and either transparently or explicitly
   distributes content to other peers.

   P2P streaming protocols: P2P streaming protocols refer to multiple
   protocols, such as streaming control, resource discovery, and
   streaming data transport, which are needed to build a P2P streaming
   system.

   Peer/PPSP peer: A peer/PPSP peer refers to a participant in a P2P
   streaming system.  The participant not only receives streaming
   content, but also stores and uploads streaming content to other
   participants.

   PPSP protocols: PPSP protocols refer to the key signaling protocols
   among various P2P streaming system components, including the tracker
   and peers.

   Swarm: A swarm refers to a group of clients (i.e. peers) sharing the
   same content (e.g. video/audio program, digital file, etc.) at a
   given time.

   Tracker/PPSP tracker: A tracker/PPSP tracker refers to a directory
   service which maintains the lists of peers/PPSP peers storing chunks
   for a specific channel or streaming file, and answers queries from
   peers/PPSP peers.

   Video-on-demand (VoD): A kind of application that allows users to
   select and watch video content on demand.

3.  Survey of P2P streaming system

   In this section, we summarize some existing P2P streaming systems.
   The construction techniques used in these systems can be classified
   mainly into two categories, tree-based and mesh-based structures;
   some systems use a hybrid of the two.

   Tree-based structure: Group members self-organize into a tree
   structure, based on which group management and data delivery are
   performed.  Such a structure with push-based content delivery has
   small maintenance cost, good scalability, and low delay in
   retrieving the content (associated with startup delay), and it can
   be easily implemented.  However, it may result in low bandwidth
   usage and less reliability.

   Mesh-based structure: In contrast to the tree-based structure, a
   mesh uses multiple links between any two nodes.  Thus, the
   reliability of data transmission is relatively high, and the
   multiple links result in high bandwidth usage.  Nevertheless, the
   cost of maintaining such a mesh is much larger than that of a tree,
   and the pull-based content delivery leads to high overhead
   associated with each video block transmission, in particular the
   delay in retrieving the content.

   Hybrid structure: Combines the tree-based and mesh-based structures
   and uses both pull-based and push-based content delivery to exploit
   the advantages of the two.  Its reliability and topology maintenance
   cost are as high as those of a mesh-based structure, while its delay
   and the overhead associated with each video block transmission are
   lower.

3.1.  Mesh-based P2P streaming systems
   Mesh-based systems implement a mesh distribution graph, where each
   node contacts a subset of peers to obtain a number of chunks.  Every
   node needs to know which chunks are owned by its peers and
   explicitly "pulls" the chunks it needs.  This type of scheme
   involves overhead, due in part to the exchange of buffer maps
   between nodes (i.e. nodes advertise the set of chunks they own) and
   in part to the "pull" process (i.e. each node sends a request in
   order to receive the chunks).  Since each node relies on multiple
   peers to retrieve content, mesh-based systems offer good resilience
   to node failures.  On the negative side, they require large buffers
   to support the chunk pull (clearly, large buffers are needed to
   increase the chances of finding a chunk).
   In a mesh-based P2P streaming system, peers are not confined to a
   static topology.  Instead, the peering relationships are established/
   terminated based on the content availability and bandwidth
   availability on peers.  A peer dynamically connects to a subset of
   random peers in the system.  Peers periodically exchange information
   about their data availability.  The content is pulled by a peer from
   its neighbors who have already obtained the content.  Since multiple
   neighbors are maintained at any given moment, mesh-based streaming
   systems are highly robust to peer churns.  However, the dynamic
   peering relationships make the content distribution efficiency
   unpredictable.  Different data packets may traverse different routes
   to users.  Consequently, users may suffer from content playback
   quality degradation ranging from low bit rates, long startup delays,
   to frequent playback freezes.
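
   The chunk-pull cycle that such mesh-based systems share can be
   sketched in a few lines of Python.  This is an illustration only:
   the class and method names are our own simplifications and do not
   correspond to any particular system's implementation.

      import random

      class MeshPeer:
          def __init__(self, peer_id):
              self.peer_id = peer_id
              self.chunks = set()    # chunk sequence numbers held
              self.neighbors = []    # other MeshPeer instances

          def buffer_map(self):
              # Advertise the set of chunks this peer can share.
              return set(self.chunks)

          def serve(self, seq):
              # Answer an explicit "pull" request for one chunk.
              assert seq in self.chunks
              return seq

          def pull_missing(self, wanted):
              # Learn neighbors' buffer maps, then pull each missing
              # chunk from one randomly chosen peer that owns it.
              for seq in sorted(wanted - self.chunks):
                  owners = [n for n in self.neighbors
                            if seq in n.buffer_map()]
                  if owners:
                      self.chunks.add(random.choice(owners).serve(seq))

   Because every chunk can usually be pulled from several owners, the
   failure of a single neighbor only removes one candidate provider,
   which is the resilience property noted above.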

3.1.1.  Joost

   Joost announced that it was giving up P2P technology in its desktop
   version last year, though it introduced a Flash version for browsers
   and an iPhone application.  The key reason why Joost shut down its
   desktop version is probably the legal issues around the provided
   media content.  However, as one of the most popular P2P VoD
   applications in the past years, it is worthwhile to understand how
   Joost works.  Peer management and data transmission in Joost mainly
   rely on a mesh-based structure.

   The three key components of Joost are servers, super nodes and
   peers.  There are five types of servers: Tracker server, Version
   server, Backend server, Content server and Graphics server.  Super
   nodes manage the P2P control of Joost nodes, and Joost nodes are all
   the running clients in the Joost network.  The architecture of the
   Joost system is shown in Figure 1.

   First, we introduce the functionalities of Joost's key components
   through three basic phases.  Then we will discuss the Peer protocol
   and Tracker protocol of Joost.

   Installation: The Backend server is involved in the installation
   phase.  It provides the peer with an initial channel list in a
   SQLite file.  No other parameters, such as local cache, node ID, or
   listening port, are configured in this file.

   Bootstrapping: In the case of a newcomer, the Tracker server
   provides several super node addresses and possibly some content
   server addresses.  Then the peer connects to the Version server for
   the latest software version.  Later, the peer starts to connect to
   some super nodes and begins streaming video contents.  Super nodes
   in Joost only deal with control and peer management traffic; they do
   not relay/forward any media data.

   When Joost is first launched, a login mechanism is initiated using
   HTTPS and TLSv1.  After a TCP synchronization, the client
   authenticates with a certificate to the login server.  Once the
   login process is done, the client first contacts a super node, whose
   address is hard-coded in the Joost binary, to get a list of peers
   and a Joost Seeder to contact.  Of course, this depends on the
   channel chosen by the user.  Once launched, the Joost client checks
   whether a more recent version is available by sending an HTTP
   request.

   Once authenticated to the video service, the Joost node uses the
   same authentication mechanism (TCP synchronization, certificate
   validation and shared key verification) to log in to the Backend
   server.  This server validates the access to all HTTPS services like
   channel chat, channel list, and video content search.

   Joost uses TCP port 80 for HTTP and port 443 for HTTPS transfers,
   and UDP port 4166 for video packet exchange, mainly from long-tail
   servers; each Joost peer chooses its own UDP port to exchange with
   other peers.

   Channel switching: Super nodes are responsible for redirecting
   clients to content servers or peers.

   Peers communicate with servers over HTTP/HTTPS and with super nodes/
   other peers over UDP.

   Tracker Protocol: Because super nodes here are responsible only for
   providing the peerlist/content servers to peers, the protocol used
   between the Tracker server and peers is rather simple.  Peers get
   the addresses of super nodes and content servers from the Tracker
   server over HTTP.  After that, the Tracker server does not appear in
   any other stage, e.g. channel switching or VoD interaction.  In
   fact, the protocol spoken between peers and super nodes is more like
   what we would normally call a "Tracker Protocol".  It enables super
   nodes to check peer status and maintain peer lists for several, if
   not all, channels, and it provides the peer list/content servers to
   peers.  Thus, in the rest of this section, when we mention the
   Tracker Protocol, we mean the one used between peers and super
   nodes.

   Joost uses super nodes only to control traffic, but never as relays
   for video content.  The main streams are sent from Joost Seeders,
   and all the traffic is encrypted secure shared content to avoid
   piracy.  Joost peers cache the received content and re-stream it
   when needed by other peers, to recover missed video blocks.

   Although Joost is a peer-to-peer video distribution technology, it
   relies heavily on a few centralized servers to provide the licensed
   video content, and it uses the peer-to-peer overlay to serve content
   at a faster rate.  The following sections describe Joost QoS related
   features, extracted mostly from [JOOSTEXP], [Moreira] and [Joost
   Network Architecture].

   The centralized nature of Joost is the main factor that influences
   its lack of locality awareness and low fairness ratio.  Since Joost
   directly provides at least two thirds of the video content to its
   clients, only one third has to be supplied by independent nodes.
   This approach does not scale well, and is sustainable today only
   because of the relatively low user population.

   From a network usage perspective, Joost consumes approximately 700
   kbps downstream and 120 kbps upstream, regardless of the total
   capacity of the network, assuming the upstream capacity is larger
   than 1 Mbps.

   There may be some type of RTT-savvy selection algorithm at work,
   which gives priority to partner peers with an RTT less than or equal
   to the RTT of a Joost content-providing super node.

   Peers communicate with super nodes using the Tracker Protocol in the
   following scenarios:

   1.  When a peer starts the Joost software, after the installation
   and bootstrapping, the peer communicates with one or several super
   nodes to get a list of available peers/content servers.

   2.  For on-demand video functions, super nodes periodically exchange
   small UDP packets for peer management purposes.

   3.  When switching between channels, peers contact super nodes, and
   the latter help the peers find available peers to fetch the
   requested media data.

   Peer Protocol: The following investigations are mainly motivated
   from [JOOSTEXP], in which data-driven reverse-engineering
   experiments are performed.  We omit the analysis process and
   directly show the conclusions.  Media data in Joost is split into
   chunks and then encrypted.  Each chunk is packetized with about 5-10
   seconds of video data.  After receiving a peer list from super
   nodes, a peer negotiates with some or, if necessary, all of the
   peers in the list to find out what chunks they have.  Then the peer
   makes a decision about which peers to get the chunks from.  No peer
   capability information is exchanged in the Peer Protocol.

                   +---------------+       +-------------------+
                   | Version Server|       |   Tracker Server  |
                   +---------------+       +-------------------+
                             \                       |
                              \                      |
                               \                     | +---------------+
                                \                    | |Graphics Server|
                                 \                   | +---------------+
                                  \                  |     |
   +--------------+        +-------------+        +--------------+
   |Content Server|--------|    Peer1    |--------|Backend Server|
   +--------------+        +-------------+        +--------------+
                                     |
                                     |
                                     |
                                     |
                              +------------+       +---------+
                              | Super Node |-------|  Peer2  |
                              +------------+       +---------+

   Figure 1, Architecture of Joost system

   Joost provides large buffering and thus causes longer start-up delay
   for VoD traffic than for live media streaming traffic.  It affords
   more FEC for VoD traffic but gives higher priority in delivery to
   live media streaming traffic.

   For Joost, load-balancing and fault-tolerance are shifted directly
   into the client, and all is done natively in the P2P code.

   To enhance user viewing experience, Joost provides chat capability
   between viewers and user program rating mechanisms.

3.1.2.  Octoshape

   CNN [CNN] has been working with a P2P plug-in from the Denmark-based
   company Octoshape to broadcast its live streaming.  Octoshape helps
   CNN serve a peak of more than a million simultaneous viewers.  It
   has also provided several innovative delivery technologies such as
   loss resilient transport, adaptive bit rate, adaptive path
   optimization and adaptive proximity delivery.  Figure 2 depicts the
   architecture of the Octoshape system.

   Octoshape maintains a mesh overlay topology.  Its overlay topology
   maintenance scheme is similar to that of P2P file-sharing
   applications such as BitTorrent.  There is no Tracker server in
   Octoshape, thus no Tracker Protocol is required.  Peers obtain live
   streaming from content servers and peers over the Octoshape
   Protocol.  Several data streams are constructed from the live
   stream.  No data streams are identical, and any number K of data
   streams can reconstruct the original live stream.  The number K is
   based on the original media playback rate and the playback rate of
   each data stream.  For example, a 400Kbit/s media stream may be
   split into four 100Kbit/s data streams, and then K = 4.  Data
   streams are constructed in peers, instead of the Broadcast server,
   which relieves the server of a large burden.  The number of data
   streams constructed in a particular peer equals the number of peers
   downloading data from that peer, which is constrained by the upload
   capacity of that peer.  To get the best performance, the upload
   capacity of a peer should be larger than the playback rate of the
   live stream.  If not, an artificial peer may be added to deliver
   extra bandwidth.
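
   The K-streams idea can be made concrete with a small sketch.
   Octoshape's actual coding is proprietary; below we simply stripe the
   packet sequence round-robin into K = 4 data streams, matching the
   400Kbit/s example, so that exactly those four streams rebuild the
   original.  In the real system the streams are coded such that any K
   streams out of a larger set suffice.

      K = 4   # data streams needed to reconstruct the live stream

      def split_stream(packets, k=K):
          # Stripe the original packet sequence into k distinct,
          # equal-rate data streams.
          streams = [[] for _ in range(k)]
          for i, pkt in enumerate(packets):
              streams[i % k].append(pkt)
          return streams

      def reconstruct(streams):
          # Interleave the k data streams back into original order.
          out = []
          for group in zip(*streams):
              out.extend(group)
          return out

      original = list(range(12))          # twelve media packets
      assert reconstruct(split_stream(original)) == original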

   Each single peer has an address book of other peers who are watching
   the same channel.  A standby list is set up based on the address
   book.  The peer periodically probes/asks the peers in the standby
   list to make sure that they are ready to take over if one of the
   current senders stops or gets congested.  [Octoshape]

   Peer Protocol: The live stream is first sent to a few peers in the
   network and then spread to the rest of the network.  When a peer
   joins a channel, it notifies all the other peers about its presence
   using the Peer Protocol, which drives the others to add it into
   their address books.  Although [Octoshape] declares that each peer
   records all the peers joining the channel, we suspect that not all
   the peers are recorded, considering that the notification traffic
   will be large and peers will be busy with recording when a popular
   program starts in a channel and lots of peers switch to this
   channel.  Maybe some geographic or topological neighbors are
   notified, and the peer gets its address book from these nearby
   neighbors.

   The peer sends requests to some selected peers for the live stream,
   and the receivers answer OK or not according to their upload
   capacity.  The peer continues sending requests to peers until it
   finds enough peers to provide the needed data streams to reconstruct
   the original live stream.  The details of Octoshape are not publicly
   disclosed; we hope someone else can provide more specific
   information.

            +------------+   +--------+
            |   Peer 1   |---| Peer 2 |
            +------------+   +--------+
                 |    \    /      |
                 |     \  /       |
                 |      \         |
                 |     / \        |
                 |    /   \       |
                 |  /      \      |
      +--------------+    +-------------+
      |     Peer 4   |----|    Peer3    |
      +--------------+    +-------------+

      *****************************************
                         |
                         |
                 +---------------+
                 | Content Server|
                 +---------------+

      Figure 2, Architecture of Octoshape system

   The following sections describe Octoshape QoS related features,
   extracted mostly from [OctoshapeWeb], [Alstrup1] and [Alstrup2].  As
   it is a closed system, the details of how the features are
   implemented are not available.

   To spread the burden of data distribution across several peers and
   thus limiting the impact of peer loss, Octoshape splits a live stream
   into a number of smaller equal-sized sub-streams.  For example, a
   400kbit/s live stream is split and coded into 12 distinct 100kbit/s
   sub-streams.  Only a subset of these sub-streams needs to reach a
   user for it to reconstruct the "original" live stream.  The number of
   distinct sub-streams could be as many as the number of active peers.

   Therefore, even if the upload capacity of a peer is smaller than its
   download capacity, it would now be easier to contribute a sub-stream
   than a whole live stream.  An Octoshape peer can then receive from
   each neighboring peer at least a distinct sub-stream.  To make up
   for the bandwidth asymmetry, artificial end users are used to
   deliver additional bandwidth.  Multiple OctoServers are also
   available to guarantee no single point of failure [Alstrup2].

   Octoshape keeps peer availability information in an address book.
   Each peer keeps a periodically updated standby list and passes it
   along with its transmitted sub-stream.  With constant monitoring of
   the quality and consistency of each content source, a peer can
   switch to a better source in case of a bottleneck or congestion.

   Octoshape provides operators with the ability to control who should
   and should not receive a certain video signal due to copyright
   restrictions, to control access based in part on IP numbers, and to
   obtain real time statistics during any live events.

   To optimize bandwidth utilization, Octoshape leverages computers
   within a network to minimize external bandwidth usage and to select
   the most reliable and "closest" source for each viewer.  It also
   chooses the best matching available codecs and players, and scales
   the bit rate up and down according to the available Internet
   connection.

   Octoshape [OctoshapeWeb] claims to have patented resiliency and
   throughput technologies to deliver quality streams to the mobile and
   wireless edge networks.  This throughput optimization technology also
   cleans up latent and lossy network connections between the encoder
   and the distribution point, providing a stable, high quality, stream
   for distribution.  Octoshape also claims to be able to deliver true
   HD, 1280x720 30fps (720p) video over the Internet and to have
   advanced DVR functionalities such as allowing users to move
   seamlessly forward and back through the streams with almost no
   waiting time.

3.1.3.  PPLive

   PPLive [PPLive] is one of the most popular P2P streaming
   applications in China.  The PPLive system includes six parts.

   (1) Video streaming server: provides the source of video content and
   codes the content to adapt to the network transmission rate and to
   the client playback.

   (2) Peer: also called node or client.  The nodes logically compose a
   self-organizing network, and each node can join or withdraw at any
   time.  While a client downloads content, it also provides its own
   content to other clients at the same time.

   (3) Directory server: when the user starts up the PPLive client, the
   client automatically registers the user information with this
   server; when the client exits, the client deregisters its peer.

   (4) Tracker server: this server records the information of all the
   users that watch the same content.  When a client requests some
   content, this server checks whether there are other peers owning the
   content and sends the information of these peers to the client; if
   not, it tells the client to request the content from the video
   streaming server.

   (5) Web server: provides PPLive software updating and downloading.

   (6) Channel list server: this server stores the information of all
   the programs that can be seen by the users, including VoD programs
   and broadcasting programs, with attributes such as program name and
   file size.

   PPLive has two major communication protocols.  One is the
   registration and peer discovery protocol, i.e. the Tracker Protocol,
   and the other is the P2P chunk distribution protocol, i.e. the Peer
   Protocol.  Figure 3 shows the architecture of PPLive.

   Tracker Protocol: First, a peer gets the channel list from the
   Channel server, in a way similar to that of Joost.  Then the peer
   chooses a channel and asks the Tracker server for the peerlist of
   this channel.

   Peer Protocol: The peer contacts the peers in its peerlist to get
   additional peerlists, which are aggregated with its existing list.
   Through this list, peers can maintain a mesh for peer management and
   data delivery.

   For the video-on-demand (VoD) operation, because different peers
   watch different parts of the channel, a peer buffers up to a few
   minutes worth of chunks within a sliding window to share with each
   other.  Some of these chunks may be chunks that have been recently
   played; the remaining chunks are chunks scheduled to be played in
   the next few minutes.  Peers upload chunks to each other.  To this
   end, peers send each other "buffer-map" messages; a buffer-map
   message indicates which chunks a peer currently has buffered and can
   share.  The buffer-map message includes the offset (the ID of the
   first chunk), the length of the buffer map, and a string of zeroes
   and ones indicating which chunks are available (starting with the
   chunk designated by the offset).  PPLive transfers data over UDP.
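
   The buffer-map message described above has a natural encoding,
   sketched below.  PPLive's exact wire format is not public, so this
   field layout is only a plausible illustration of the offset, length,
   and availability bits.

      import struct

      def encode_buffer_map(offset, available):
          # offset: ID of the first chunk in the sliding window
          # available: one boolean per chunk in the window
          bits = ''.join('1' if a else '0' for a in available)
          return struct.pack('!II', offset, len(bits)) + bits.encode()

      def decode_buffer_map(msg):
          offset, length = struct.unpack('!II', msg[:8])
          bits = msg[8:8 + length].decode()
          return offset, [b == '1' for b in bits]

      window = [True, True, False, True, False, False, True, False]
      msg = encode_buffer_map(1000, window)
      assert decode_buffer_map(msg) == (1000, window)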

   Video Download Policy of PPLive:

      1) The top ten peers contribute most of the download traffic.
      Meanwhile, the top peer session is quite short compared with the
      video session duration.  This would suggest that a PPLive peer
      gets video from only a few peers at any given time, and switches
      periodically from one peer to another;

      2) PPLive can send multiple chunk requests for different chunks
      to one peer at one time;

      3) PPLive is observed to have the download scheduling policy of
      giving higher priority to rare chunks and to chunks closer to
      their playout deadline, and to use a sliding window mechanism to
      regulate the buffering of chunks.

   PPLive maintains a constant peer list with a relatively small number
   of peers.  [P2PIPTVMEA]

            +------------+    +--------+
            |   Peer 2   |----| Peer 3 |
            +------------+    +--------+
                     |          |
                     |          |
                    +--------------+
                    |    Peer 1    |
                    +--------------+
                            |
                            |
                            |
                    +---------------+
                    | Tracker Server|
                    +---------------+

      Figure 3, Architecture of PPLive system
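
   The download scheduling policy observed in PPLive (see item 3 of the
   download policy above) amounts to a simple priority rule.  The
   sketch below is our own rendering of that rule; the real scheduler
   is not documented.

      def schedule_requests(missing, owners_count, playout_deadline):
          # missing: chunk IDs not yet downloaded
          # owners_count[c]: neighbors holding chunk c (rarity)
          # playout_deadline[c]: seconds until chunk c is played
          def priority(c):
              # Fewer owners and a nearer deadline both raise the
              # priority, so sort ascending on (owners, deadline).
              return (owners_count[c], playout_deadline[c])
          return sorted(missing, key=priority)

      order = schedule_requests(
          missing=[7, 8, 9],
          owners_count={7: 5, 8: 1, 9: 5},
          playout_deadline={7: 2.0, 8: 9.0, 9: 30.0})
      assert order == [8, 7, 9]  # rarest first, then nearest deadline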

   The following sections describe PPLive QoS related features,
   extracted mostly from [Hei], [Vu], [Horvath], and [Liu].

   After obtaining an initial peer list from the member server, a peer
   periodically updates its peer list by querying both the member
   server and partner peers.  New peers are aggressively contacted at a
   fixed rate.  In selecting peers as partners, a peer considers their
   upload bandwidth and, in part, their location information [Horvath],
   by selecting on a FCFS basis those that have responded [Liu].

   For data distribution, PPLive uses a data-driven or mesh-pull scheme
   [Hei].  It divides the media content into small portions called
   chunks and uses TCP for video streaming.  Neighbor peers use a
   gossip-like protocol to exchange their buffer maps that indicate
   chunks available for sharing.  Peers obtain one or more of their
   missing chunks from one or more peers having them.  Available chunks
   may also be downloaded from the original channel server.

   PPLive uses a double buffering mechanism consisting of the TV Engine
   and the Media Player for its stream reassembly and display [Hei].
   The TV Engine is responsible for downloading video chunks from the
   PPLive network and streaming the downloaded video to the Media
   Player, which in turn displays the content to the user, after each
   buffer is filled up to its respective predetermined threshold.

   To utilize available peer resources, peers in one subscribed overlay
   may also be harnessed to support peers in other subscribed overlays
   [Vu].

3.1.4.  Zattoo

   Zattoo is a P2P live streaming system which serves over 3 million
   registered users in European countries [Zattoo].  The system
   delivers live streaming using a receiver-based, peer-division
   multiplexing scheme.  Zattoo reliably streams media among peers
   using a mesh structure.

   Figure 4 depicts the basic architecture of the Zattoo system.
   First, the Zattoo system broadcasts live TV, captured from
   satellites, onto the Internet.  Each TV channel is delivered through
   a separate P2P network.

      -------------------------------
      |   ------------------        |         --------
      |   |  Broadcast     |        |---------|Peer1 |-----------
      |   |  Servers       |        |         --------          |
      |   Administrative Servers    |                      -------------
      |   ------------------------  |                      | Super Node|
      |   | Authentication Server | |                      -------------
      |   | Rendezvous Server     | |                           |
      |   | Feedback Server       | |         --------          |
      |   | Other Servers         | |---------|Peer2 |----------|
      |   ------------------------| |         --------
      ------------------------------|
 Figure 4, Basic architecture of Zattoo system

   Tracker (Rendezvous Server) Protocol: In order to receive the signal
   of the requested channel, all registered users are required to be
   authenticated through the Zattoo Authentication Server.  Upon
   authentication, each user obtains a ticket with a specific lifetime.
   Then, the user contacts the Rendezvous Server with the ticket and
   the identity of the interested TV channel.  In return, the
   Rendezvous Server sends back a list of active peers carrying the
   channel.

   Peer Protocol: Similar to the aforementioned procedures in Joost and
   PPLive, a newly joined Zattoo peer requests to partner with peers
   from the obtained peer list.  Based on the availability of
   bandwidth, a requested peer decides how to multiplex a stream onto
   its set of neighboring peers.  When the requesting peer receives
   packets, sub-streams are stored for reassembly into the full stream.

   Note that Zattoo relies on the Bandwidth Estimation Server to
   initially estimate the amount of available uplink bandwidth at a
   peer.  Once a peer starts to forward sub-streams to other peers, it
   receives QoS feedback from the receivers if the quality of any
   received sub-stream drops below a threshold.
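
   In outline, the ticket-based join sequence might look as follows.
   All class and method names here are invented for illustration; only
   the flow (authenticate, obtain a time-limited ticket, present it to
   the Rendezvous Server, receive a peer list) follows the description
   above.

      import time

      class AuthenticationServer:
          TICKET_LIFETIME = 300.0   # seconds; illustrative value

          def authenticate(self, user, password):
              # Real Zattoo validates registered credentials; this
              # stub simply issues a ticket to anyone.
              return {'user': user,
                      'expires': time.time() + self.TICKET_LIFETIME}

      class RendezvousServer:
          def __init__(self, peers_by_channel):
              self.peers_by_channel = peers_by_channel

          def join(self, ticket, channel):
              if time.time() >= ticket['expires']:
                  raise PermissionError('ticket expired')
              # Return the active peers carrying this channel.
              return list(self.peers_by_channel.get(channel, []))

      auth = AuthenticationServer()
      rendezvous = RendezvousServer({'tv1': ['10.0.0.5', '10.0.0.9']})
      ticket = auth.authenticate('alice', 'secret')
      peers = rendezvous.join(ticket, 'tv1')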

   The following sections describe Zattoo QoS related features,
   extracted mostly from [Chang].

   For reliable data delivery, each live stream is partitioned into
   video segments.  Each video segment is coded for forward error
   correction with Reed-Solomon error correcting code into n sub-stream
   packets such that having obtained k correct packets of a segment is
   sufficient to reconstruct the remaining n-k packets of the same video
   segment.  To receive a video segment, each peer then specifies the
   sub-stream(s) of the video segment it would like to receive from the
   neighboring peers.
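
   The n/k property can be illustrated with the simplest erasure code:
   one XOR parity packet over k data packets, i.e. n = k + 1, so that
   any k of the n packets rebuild the segment.  Zattoo itself uses
   Reed-Solomon codes, which generalize this to arbitrary n-k
   redundancy; the sketch below is only the degenerate case.

      from functools import reduce

      def xor(a, b):
          return bytes(x ^ y for x, y in zip(a, b))

      def encode_segment(data_packets):
          # n = k + 1: append one parity packet over k data packets.
          return data_packets + [reduce(xor, data_packets)]

      def recover(packets):
          # packets: n entries, at most one of which is None (lost);
          # any k correct packets reconstruct the missing one.
          lost = [i for i, p in enumerate(packets) if p is None]
          if lost:
              rest = [p for p in packets if p is not None]
              packets[lost[0]] = reduce(xor, rest)
          return packets[:-1]       # return the k data packets

      segment = [b'AAAA', b'BBBB', b'CCCC']   # k = 3
      coded = encode_segment(segment)         # n = 4 packets
      coded[1] = None                         # one packet lost
      assert recover(coded) == segment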

   Zattoo uses the Adaptive Peer-Division Multiplexing (PDM) scheme for
   its data delivery topology setup and to handle longer term bandwidth
   fluctuations.  In this scheme, each new peer independently executes
   the Search and Join phases.  In the Search Phase, a peer queries the
   members on the peer list for sub-stream availability; in response,
   it receives additional prospective peers, sub-stream availability,
   quality indications, and sub-stream sequence numbers; it then
   selects, among the responses, partnering peers, or quits after
   failing two search attempts.

   In the Join Phase, a joining peer, having selected the candidate
   peers, requests to partner with some of them, spreading the load
   among them and preferring topologically close-by peers, if these
   peers have less capacity or carry lower quality sub-streams.
   Barring departure or performance degradation of neighboring peers,
   the established connections persist, and the specified sub-stream
   packet of every segment continues to be forwarded without further
   per-packet handshaking between peers.

   To manage streams efficiently for incoming and outgoing
   destinations, each peer has a packet buffer, called the IOB (Input-
   Output Buffer).  The IOB is referenced by an input pointer, a repair
   pointer and one or more output pointers, one for each forwarding
   destination such as player, file, and other peer.  The input pointer
   points to the slot in the IOB where the next incoming packet with a
   sequence number higher than the highest sequence number received so
   far will be stored, and the repair pointer always points one slot
   beyond the last packet received in order and is used to regulate
   packet retransmission and adaptive PDM (described below).  A packet
   map and forwarding discipline are associated with each output
   pointer to accommodate the different forwarding rates and regimes
   required by the destinations.  Note that retransmission requests are
   sent to random peers and not to partnering peers.  Furthermore, they
   are honoured only if the requested packets are still in the IOB and
   there is sufficient left-over capacity to transmit all the requested
   packets.  To avoid buffer overrun, a set of two buffers is used in
   the IOB instead of a circular buffer.

   To handle longer term bandwidth fluctuations with adaptive PDM, each
   peer determines how many sub-streams to transmit and when to switch
   partners.  Specifically, each peer continually estimates the amount
   of available uplink bandwidth, based initially on probe packets to
   the Zattoo Bandwidth Estimation Server and later on peer QoS
   feedback, using different algorithms depending on the underlying
   transport protocol.  A peer increases its estimated available uplink
   bandwidth if the current estimate is below some threshold and there
   has been no bad quality feedback from neighboring peers for a period
   of time, according to an algorithm similar to how TCP maintains its
   congestion window size.  Each peer then admits neighbors based on
   the currently estimated available uplink bandwidth.  In case a new
   estimate indicates insufficient bandwidth to support the existing
   number of peer connections, one connection at a time, preferably
   starting with the one requiring the least bandwidth, is closed.  On
   the other hand, if the loss rate of packets from a peer's neighbor
   reaches a certain threshold, the peer attempts to shift the degraded
   neighboring peer's load to other existing peers, while looking for a
   replacement peer.  When one is found, the load is shifted to it and
   the degraded neighbor is dropped.  As expected, if a peer's neighbor
   is lost due to departure, the peer initiates the process to replace
   the lost peer.  To optimize the PDM configuration, a peer may
   occasionally initiate switching existing partnering peers to
   topologically closer peers.

3.1.5.  PPStream

   The system architecture and working flows of PPStream are similar to
   those of PPLive [PPStream].  PPStream transfers data using mostly
   TCP, only occasionally UDP.

   Video Download Policy of PPStream:

      1) The top ten peers do not contribute a large part of the
      download traffic.  This would suggest that PPStream gets the
      video from many peers simultaneously, and its peers have long
      session durations;

      2) PPStream does not send multiple chunk requests for different
      chunks to one peer at one time.

   PPStream maintains a constant peer list with a relatively large
   number of peers.  [P2PIPTVMEA]

   The following sections describe PPStream QoS related features,
   extracted mostly from [Li], [Jia] and [Wei].

   PPStream is mainly mesh-based but to some extent has a layered data
   distribution topology.  It uses geographic clustering to some
   extent, based on the geographic longitude and latitude of the IP
   addresses [Jia].

   To ensure data availability, PPStream uses some form of chunk
   retransmission request mechanism and shares buffer maps at a high
   rate, although it rarely requests the same data chunk concurrently.
   Each data chunk, identified by the play time offset encoded by the
   program source, is divided into 128 sub-chunks of 8KB size each.
   The chunk id is used to ensure sequential ordering of received data
   chunks.

   The buffer map consists of one or more 128-bit flags denoting the
   availability of sub-chunks, together with a corresponding time
   offset.  Usually a buffer map contains only one data chunk at a time
   and is thus smaller than that of PPLive.  It also contains the
   sending peer's playback status, because as soon as a data chunk is
   played back, the chunk is deleted or replaced by the next data chunk
   [Wei].
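
   Since a data chunk has exactly 128 sub-chunks, one 128-bit integer
   per chunk suffices for the availability flags.  The encoding below,
   with the time offset carried alongside, is our own illustration
   rather than PPStream's actual wire format.

      SUB_CHUNKS = 128          # sub-chunks per data chunk, 8KB each

      def make_flags(received):
          # Set bit i when sub-chunk i of the chunk is held.
          flags = 0
          for i in received:
              flags |= 1 << i
          return flags

      def have_sub_chunk(flags, i):
          return bool(flags >> i & 1)

      def to_message(time_offset, flags):
          # 4-byte time offset + 16-byte (128-bit) flag field.
          return (time_offset.to_bytes(4, 'big') +
                  flags.to_bytes(16, 'big'))

      flags = make_flags([0, 1, 127])
      assert have_sub_chunk(flags, 127) and not have_sub_chunk(flags, 2)
      assert len(to_message(3600, flags)) == 20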

   At the initiating stage, a peer can use up to 4 data chunks, and in
   a stabilized stage, a peer usually uses one data chunk.  However, in
   a transient stage, a peer uses a variable number of chunks.
   Although sub-chunks within each data chunk are fetched nearly at
   random, without using a rarest-first or greedy policy, the same
   fetching pattern for one data chunk seems to repeat in the following
   data chunks [Li].  Moreover, high bandwidth PPStream peers tend to
   receive chunks earlier and thus contribute more than lower bandwidth
   peers.

3.1.6.  SopCast

   The system architecture and working flows of SopCast is similar to
   PPLive.  SOPCast transfers data mainly using UDP, occasionally TCP;

   Top ten peers contribute to about half of the total download traffic.
   SOPCast's download policy is similar to PPLive's policy in that it
   switches periodically between provider peers.  However, SOPCast seems
   to always need more than one peer to get the video, while in PPLive a
   single peer could be the only video provider;

   SOPCast's peer list can be as large as PPStream's peer list.  But
   SOPCast's peer list varies over time.  [P2PIPTV-measuring]

   The following sections describe SopCast QoS related features,
   extracted mostly from [Ali], [Ciullo], [Fallica], [Sentinelli],
   [SC6-Silverston], and [SC7-Tang].

   SopCast allows for software update through (HTTP) a centralized web
   server and makes available channel list through (HTTP) another
   centralized server.

   SopCast traffic is encoded and SopCast TV content is divided into
   video chunks or blocks with equal sizes of 10KB [Tang].  Sixty
   percent of its traffic is signaling packets and 40% is actual video
   data packets [Fallica].  SopCast produces more signaling traffic
   compared to PPLive, PPStream, and TVAnts, whereas PPLive produces the
   least [Silverston].  Its traffic is also noted to have long-range
   dependency [Silverston], indicating that mitigating it with QoS
   mechanisms may be difficult.  [Ali] reported that SopCast
   communication mechanism starts with UDP for the exchange of control
   messages among its peers using a gossip-like protocol and then moves
   to TCP for the transfer of video segments.  This use of TCP for data
   transfer seems to contradict others findings [Fallica, Silverston].

   To discover candidate peers, a peer requests peer list from Tracker,
   or from neighboring peer using a gossip-like protocol.  To retrieve
   content [Fallica], a new peer contacts peers selected randomly
   from the peer list it obtained from having queried the root servers
   (trackers).  The process of contacting peers slows down after the
   initial bootstrap phase [Horvath, Ciullo].  The number of
   peers a node typically connects to for download is about 2 to 5 [SC5-
   Sentinelli] and there is no observed preference for peers with
   shorter paths [Ciullo].  Partner peers periodically advertise
   content availability and exchange sought content.  In forming
   multiple parent and children relationships, a peer does not exploit
   peer location information [Horvath].  In general, parents are
   chosen solely based on performance; however, lower capacity nodes
   seem to be choosing parents that are closer to improve performance
   and to compensate for its bandwidth constraints [Ali].  When
   needed, a peer can download video streams directly from the Source
   Provide, a node that broadcasts the entire video [Tang].  In the
   process of data exchange, there is no enforcement of tit-for-tat like
   mechanisms [Ciullo].

   Similar to PPLive, SopCast uses a double-buffering mechanism.  The
   SopCast buffer downloads video chunks from the network, storing them,
   and upon exceeding a predetermined number of stored chunks, launches
   the Media player.  The Media player buffer then downloads video
   content from the local web server listening port and upon receiving
   sufficient amount of content, starts video playback.

   A higher bandwidth peer contributes more than lower bandwidth
   peers; the top ten peers contribute about half of the total
   download traffic.

   As noted, the system architecture and working flows of SopCast are
   similar to PPLive's; SopCast transfers data mainly using UDP, and
   occasionally TCP.  SopCast's download policy is similar to
   PPLive's policy in that it switches periodically between provider
   peers.  However, SopCast seems to always need more than one peer
   to get the video, whereas in PPLive a single peer could be the
   only video provider.  SopCast's peer list can be as large as
   PPStream's peer list, but SopCast's peer list varies over time.
   [P2PIPTVMEA]

   SopCast allows for software updates through (HTTP) a centralized
   web server and makes the channel list available through (HTTP)
   another centralized server.

   SopCast traffic is encoded, and SopCast TV content is divided into
   video chunks or blocks with equal sizes of 10KB.  Sixty percent of
   its traffic is signaling packets and 40% is actual video data
   packets.  SopCast produces more signaling traffic than PPLive,
   PPStream, and TVAnts, whereas PPLive produces the least.  Its
   traffic is also noted to have long-range dependency, indicating
   that mitigating it with QoS mechanisms may be difficult.  It is
   reported that the SopCast communication mechanism starts with UDP
   for the exchange of control messages among its peers, using a
   gossip-like protocol, and then moves to TCP for the transfer of
   video segments.  This use of TCP for data transfer seems to
   contradict the findings of other studies.

3.1.7.  TVants

   The system architecture and working flows of TVants are similar to
   PPLive's.  TVAnts is more balanced between TCP and UDP in data
   transmission.

   TVAnts' peer list is also large and varies over time.  [P2PIPTVMEA]

   The following sections describe TVAnts QoS related features,
   extracted mostly from [Alessandria], [Ciullo], and [Horvath].

   TVAnts peer discovery mechanism is very greedy during the first
   part of a peer's life and stabilizes afterwards [Ciullo].

   For data delivery, peers exhibit a mild preference to exchange data
   among themselves in the same Autonomous System and also among peers
   in the same subnet.  A TVAnts peer also exhibits some preference to
   download from closer peers.  According to [Horvath], a TVAnts peer
   exploits location information and downloads mostly from high-
   bandwidth peers.  However, it does not seem to enforce any tit-for-
   tat mechanisms in the data delivery.
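
   As an illustration only, the following Python sketch ranks candidate
   peers according to the reported preferences for locality and high
   bandwidth.  The attribute names and weights are hypothetical and are
   not taken from TVAnts.

   # Illustrative peer ranking (hypothetical weights).
   def rank_candidates(local, candidates):
       def score(peer):
           s = peer["bandwidth"]          # prefer high-bandwidth peers
           if peer["subnet"] == local["subnet"]:
               s *= 1.5                   # mild same-subnet preference
           elif peer["asn"] == local["asn"]:
               s *= 1.2                   # mild same-AS preference
           return s
       return sorted(candidates, key=score, reverse=True)

   local = {"subnet": "10.0.1", "asn": 64500}
   peers = [{"subnet": "10.0.2", "asn": 64500, "bandwidth": 4.0},
            {"subnet": "10.0.1", "asn": 64500, "bandwidth": 2.0},
            {"subnet": "10.9.9", "asn": 64999, "bandwidth": 8.0}]
   best_first = rank_candidates(local, peers)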

   According to [Alessandria], TVAnts seems to be sensitive to network
   impairments such as changes in network capacity, packet loss, and
   delay.  When capacity is reduced, a peer will always seek more
   peers to download from.  In the process of trying to avoid bad
   paths and selecting good peers to continue downloading data,
   aggressive and potentially harmful behavior for both the
   application and the network results when a bottleneck affects all
   potential peers.

   When a peer experiences limited access capacity, it reacts by
   increasing redundancy (with FEC or ARQ mechanisms) as if reacting
   to loss, and thus causes a higher download rate.  To recover from
   packet losses, it uses some kind of ARQ mechanism.  Although
   network conditions do impact video stream distribution, such as
   network delay impacting the start-up phase, they seem to have
   little impact on the network topology discovery and maintenance
   process.

   We illustrate in Figure 5 the main components and steps common to
   PPLive, PPStream, SopCast and TVAnts.

                        +------------+
                        |   Tracker  |
                       /+------------+
                      /
                     /    +------+
                1,2/     /|Peer 1|
                  /     / +------+
                 /     /3,4,6
           +---------+/              +------+
           |New Peer |---------------|Peer 2|
           +---------+\     4,6      +------+
           |5  |       \
           |---|        \ +------+
                   3,4,6 \|Peer 3|
                          +------+

   Figure 5, Main components and steps of PPLive, PPStream, SopCast
   and TVAnts

   The main steps are:

      (1) A new peer registers with the tracker / distributed hash
      table (DHT) to join the peer group sharing the same channel /
      media content;

      (2) The tracker / DHT returns an initial peer list to the new
      peer;

      (3) The new peer harvests peer lists by gossiping (i.e.,
      exchanging peer lists) with the peers on the initial peer list
      to aggregate more peers sharing the channel / media content;

      (4) The new peer randomly (or with some guidance) selects some
      peers from its peer list to connect to, and exchanges peer
      information (e.g., buffer map, peer status) with the connected
      peers to know where to get what data;

      (5) The new peer decides what data should be requested in which
      order / priority, using some scheduling algorithm and the peer
      information obtained in Step (4);

      (6) The new peer requests the data from some connected peers.
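
   As an illustration only, the following Python sketch walks through
   Steps (1) to (6) for a joining peer.  The class and method names
   are hypothetical, and the rarest-first rule in Step (5) is only one
   possible scheduling policy, not taken from any surveyed system.

   # Illustrative walk-through of the common mesh process (Steps 1-6).
   import random

   class Tracker:
       def __init__(self):
           self.peers = {}                  # channel -> list of peers

       def register(self, channel, peer):   # serves Steps (1) and (2)
           members = self.peers.setdefault(channel, [])
           initial_list = [p for p in members if p is not peer]
           members.append(peer)
           return initial_list

   class Peer:
       def __init__(self, name, chunks=()):
           self.name = name
           self.peer_list = []
           self.buffer_map = set(chunks)    # chunks this peer holds

       def join(self, tracker, channel):
           # (1)+(2) register and receive an initial peer list.
           self.peer_list = tracker.register(channel, self)
           # (3) gossip: merge the peer lists of known peers.
           for p in list(self.peer_list):
               for q in p.peer_list:
                   if q is not self and q not in self.peer_list:
                       self.peer_list.append(q)
           # (4) connect to a few peers and exchange buffer maps.
           partners = random.sample(self.peer_list,
                                    min(3, len(self.peer_list)))
           maps = {p: set(p.buffer_map) for p in partners}
           wanted = set().union(*maps.values()) - self.buffer_map
           # (5) schedule requests, here rarest chunks first.
           order = sorted(wanted,
                          key=lambda c: sum(c in m
                                            for m in maps.values()))
           # (6) request each chunk from a partner holding it
           # (download is modeled as instantaneous).
           for chunk in order:
               self.buffer_map.add(chunk)

   tracker = Tracker()
   seed = Peer("seed", chunks=range(10))
   tracker.register("channel-1", seed)
   newcomer = Peer("new")
   newcomer.join(tracker, "channel-1")   # newcomer now holds 0..9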

3.2.  Tree-based P2P streaming systems

   Tree-based systems implement a tree distribution graph, rooted at the
   source of content.  In principle, each node receives data from a
   parent node, which may be the source or a peer.  If peers do not
   change too often, such systems require little overhead, since packets
   are forwarded from node to node without the need for extra messages.
   However, in high churn environments (i.e. fast turnover of peers in
   the tree), the tree must be continuously destroyed and rebuilt, a
   process that requires considerable control message overhead.  As a
   side effect, nodes must buffer data for at least the time required to
   repair the tree, in order to avoid packet loss.  One major drawback
   of tree-based streaming systems is their vulnerability to peer churn.
   A peer departure will temporarily disrupt video delivery to all peers
   in the sub-tree rooted at the departed peer.

3.2.1.  PeerCast

   PeerCast adopts a Tree structure.  The architecture of PeerCast is
   shown in Figure 6.

   Peers in one channel construct the Broadcast Tree, and the Broadcast
   server is the root of the Tree.  A Tracker can be implemented
   independently or merged into the Broadcast server.  The Tracker in a
   Tree-based P2P streaming application selects the parent nodes for
   new peers that join the Tree.  A Transfer node in the Tree receives
   and transfers data simultaneously.

   Peer Protocol: The peer joins a channel and gets the broadcast
   server address.  First of all, the peer sends a request to the
   server, and the server answers OK or not according to its idle
   capacity.  If the broadcast server has enough idle capacity, it
   will include the peer in its child-list.  Otherwise, the broadcast
   server will choose at most eight nodes from among its children and
   return them to the peer.  The peer records the nodes and contacts
   one of them, repeating the process until it finds a node that can
   serve it.

   Instead of the peer requesting the channel, a Transfer node pushes
   the live stream to its children, each of which can be a Transfer
   node or a Receiver.  A node in the tree notifies its parent about
   its status periodically, and the parent updates its child-list
   according to the received notifications.
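
   As an illustration only, the following Python sketch outlines this
   join-and-redirect loop.  The constants and method names are
   hypothetical and are not taken from PeerCast.

   # Illustrative join loop with redirect answers (sketch only).
   MAX_CHILDREN = 8    # hypothetical idle-capacity limit per node
   MAX_REDIRECTS = 8   # "at most eight nodes of its children"

   class Node:
       def __init__(self, name):
           self.name = name
           self.children = []

       def handle_join(self, peer):
           # Accept if there is idle capacity; otherwise answer with
           # up to eight of this node's children to try instead.
           if len(self.children) < MAX_CHILDREN:
               self.children.append(peer)
               return ("OK", [])
           return ("REDIRECT", self.children[:MAX_REDIRECTS])

   def join(broadcast_server, peer):
       candidates = [broadcast_server]
       while candidates:
           node = candidates.pop(0)
           status, redirect = node.handle_join(peer)
           if status == "OK":
               return node          # a serving parent was found
           candidates.extend(redirect)
       return None                  # no node could serve the peer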

               ------------------------------
               |            +---------+      |
               |            | Tracker |      |
               |            +---------+      |
               |                  |          |
               |                  |          |
               |   +---------------------+   |
               |   |   Broadcast server  |   |
               |   +---------------------+   |
               |------------------------------
                     /                     \
                    /                       \
                   /                         \
                  /                           \
            +---------+                  +---------+
            |Transfer1|                  |Transfer2|
            +---------+                  +---------+
             /      \                       /      \
            /        \                     /        \
           /          \                   /          \
      +---------+  +---------+     +---------+  +---------+
      |Receiver1|  |Receiver2|     |Receiver3|  |Receiver4|
      +---------+  +---------+     +---------+  +---------+

      Figure 6, Architecture of PeerCast system

   The following sections describe PeerCast QoS related features,
   extracted mostly from [Deshpande] and [Desphande].

   Each PeerCast node has a peering layer that is between the
   application layer and the transport layer.  The peering layer of each
   node coordinates among similar nodes to establish and maintain a
   multicast tree.  Moreover, the peering layer also supports a simple,
   lightweight redirect primitive.  This primitive allows a peer p to
   direct another peer c, which is either opening a data-transfer
   session with p or already has a session established with p, to a
   target peer t, with which c then tries to establish a data-transfer
   session.  Peer discovery starts at the root (source) or some
   selected sub-tree root and goes recursively down the tree
   structure.  When a peer leaves normally, it informs its parent,
   which then releases the peer; the leaving peer also redirects all
   its immediate children to find new parents starting at some target
   node.

   The peering layer allows for different policies of topology
   maintenance.  In choosing a parent from among the children of a given
   peer, a child can be chosen randomly, one at a time in some fixed
   order, or based on least access latency with respect to the choosing
   peer.  There are also many choices of peers at which to start and
   limit the search.  The different combinations are as follows: all
   the descendants of a leaving peer have to start searching from the
   root [Root-All (RTA)]; only the children of a leaving peer have to
   start searching from the root [Root (RT)]; all the descendants of a
   leaving peer have to start searching from the parent of the leaving
   peer [Grandfather-All (GFA)]; and only the children of the leaving
   peer have to start searching from the parent of the leaving peer
   [Grandfather (GF)].
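
   As an illustration only, the four policies can be summarized by who
   must re-search and where the search starts.  The following Python
   sketch (with hypothetical names) captures the combinations.

   # Repair policies: (which peers re-search, where they start).
   REPAIR_POLICIES = {
       "RTA": ("all-descendants", "root"),
       "RT":  ("children-only",   "root"),
       "GFA": ("all-descendants", "grandfather"),
       "GF":  ("children-only",   "grandfather"),
   }

   def descendants(node):
       found = list(node["children"])
       for child in node["children"]:
           found.extend(descendants(child))
       return found

   def repair_set(policy, leaving):
       scope, start = REPAIR_POLICIES[policy]
       peers = (descendants(leaving) if scope == "all-descendants"
                else list(leaving["children"]))
       return peers, start   # each affected peer re-joins from 'start'

   leaf = {"children": []}
   middle = {"children": [leaf]}
   root = {"children": [middle]}
   affected, start = repair_set("GFA", middle)  # ([leaf], 'grandfather')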

   A heart-beat mechanism at the peer is available to handle failed
   peers.  With this mechanism, a peer sends keep-alive messages to its
   parent and children.  If a parent peer detects that a child has
   skipped a specified number of heart-beats, it deems the child as
   lost and tidies up.  Similarly, a child peer starts its search for a
   new parent once its current parent is deemed to have left.
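
   As an illustration only, the following Python sketch shows such a
   heart-beat check.  The interval and the number of tolerated missed
   beats are hypothetical.

   # Illustrative heart-beat failure detector.
   import time

   HEARTBEAT_INTERVAL = 5.0   # seconds between keep-alives (assumed)
   MAX_MISSED_BEATS = 3       # missed beats before a peer is lost

   last_seen = {}             # peer id -> time of last keep-alive

   def on_keepalive(peer_id):
       last_seen[peer_id] = time.monotonic()

   def lost_peers(now=None):
       now = time.monotonic() if now is None else now
       deadline = MAX_MISSED_BEATS * HEARTBEAT_INTERVAL
       return [p for p, t in last_seen.items() if now - t > deadline]

   # A parent drops lost children from its child-list; a child whose
   # parent appears in lost_peers() starts searching for a new parent.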

   PeerCast also proposes, but has not evaluated, a number of
   algorithms that use some cost function to optimize the overlay.
   Some of them are described next.  If a parent is already saturated,
   a newly arrived peer replaces a child that is costlier than itself,
   and the replaced peer tries to reconnect somewhere else
   [Knock-Down].  A newly arrived peer replaces the target peer, and
   the target peer becomes its child [Join-Flip].  Unstable peers are
   pushed down to the bottom of the tree [Leaf-Sink].  An existing
   child and parent relationship is flipped [Maintain-Flip].

3.2.2.  Conviva

   Conviva [Conviva] is a real-time media control platform for
   Internet multimedia broadcasting.  For its early prototype, End
   System Multicast (ESM) [ESM] is the underlying networking
   technology for organizing and maintaining an overlay broadcasting
   topology.  Next we present an overview of ESM.  ESM adopts a Tree
   structure.  The architecture of ESM is shown in Figure 7.

   ESM has two versions of protocols: one for smaller scale
   conferencing apps with multiple sources, and the other for larger
   scale broadcasting apps with a single source.  We focus on the
   latter version in this survey.

   ESM maintains a single tree for its overlay topology.  Its basic
   functional components include two parts: a bootstrap protocol, a
   parent selection algorithm, and a light-weight probing protocol for
   tree topology construction and maintenance; and a separate control
   structure decoupled from the tree, where a gossip-like algorithm is
   used for each member to know a small random subset of the group
   members; members also maintain paths from the source.

   Upon joining, a node gets a subset of the group membership from the
   source (the root node); it then finds a parent using a parent
   selection algorithm.  The node uses light-weight probing heuristics
   on a subset of the members it knows, evaluates the remote nodes,
   and chooses a candidate parent.  It also uses the parent selection
   algorithm to deal with performance degradation due to node and
   network churn.

   ESM supports NATs.  It allows NATs to be parents of public hosts,
   and public hosts can be parents of all hosts, including NATs, as
   children.

               ------------------------------
               |            +---------+      |
               |            | Tracker |      |
               |            +---------+      |
               |                  |          |
               |                  |          |
               |   +---------------------+   |
               |   |    Broadcast server |   |
               |   +---------------------+   |
               |------------------------------
                     /                     \
                    /                       \
                   /                         \
                  /                           \
            +---------+                   +---------+
            |  Peer1   |                  |  Peer2  |
            +---------+                   +---------+
             /      \                       /      \
            /        \                     /        \
           /          \                   /          \
      +---------+  +---------+     +---------+  +---------+
      |  Peer3  |  |  Peer4  |     |  Peer5  |  |  Peer6  |
      +---------+  +---------+     +---------+  +---------+

      Figure 7, Architecture of ESM system

   The following sections describe only ESM QoS related features,
   extracted mostly from [ESM], [Chu1], [Chu2], and [Chu3], as those
   of Conviva are not publicly available.

   ESM constructs the multicast tree in a two-step process.  It first
   constructs a mesh of the participating peers, the mesh having the
   following properties:

   o  The shortest path delay between any pair of peers in the mesh is
      at most K times the unicast delay between them, where K is a
      small constant.

   o  Each peer has a limited number of neighbors in the mesh which
      does not exceed a given (per-member) bound chosen to reflect the
      bandwidth of the peer's connection to the Internet.

   It then constructs a (reverse) shortest path spanning tree of the
   mesh with the root being the source.
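
   As an illustration only, the following Python sketch derives such a
   shortest path tree from a latency mesh.  The latency values are
   hypothetical; no ESM code is implied.

   # Build a shortest-path tree (parent pointers) over a latency mesh.
   import heapq

   def shortest_path_tree(mesh, source):
       # mesh: {node: {neighbor: latency}}; returns {node: parent}.
       dist = {source: 0.0}
       parent = {source: None}
       heap = [(0.0, source)]
       while heap:
           d, u = heapq.heappop(heap)
           if d > dist.get(u, float("inf")):
               continue           # stale heap entry
           for v, latency in mesh[u].items():
               nd = d + latency
               if nd < dist.get(v, float("inf")):
                   dist[v], parent[v] = nd, u
                   heapq.heappush(heap, (nd, v))
       return parent

   mesh = {"S": {"A": 10, "B": 30},
           "A": {"S": 10, "B": 15},
           "B": {"S": 30, "A": 15}}
   tree = shortest_path_tree(mesh, "S")  # {'S': None, 'A': 'S', 'B': 'A'}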

   Therefore a peer participates in two types of topology management: a
   control structure in which peers make sure they are always connected
   in a mesh and a data delivery structure in which peers make sure data
   gets delivered to them in a tree structure.

   To keep connected, each peer maintains communication with a small
   number of random neighbors and a complete list of members through a
   gossip-like algorithm.  When a new node joins, it gets a list of
   group members from the source.  To look for a parent, it sends
   probe requests to a subset of the group members it obtained;
   evaluates them with respect to delay to the source, application
   throughput, and link bandwidth; and then chooses from among them a
   candidate parent that is not a descendant and is not saturated.  In
   addition to using RTT probes, consisting of 1-Kbyte transfers to
   detect bottleneck bandwidth, a node also considers the performance
   history of previously chosen parents.  The peer also avoids probing
   hosts that have low bandwidth or are bottlenecked.
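
   As an illustration only, the following Python sketch expresses one
   way to make this parent choice.  The probe fields and the scoring
   rule are hypothetical and are not taken from ESM.

   # Illustrative candidate-parent selection from probe results.
   def choose_parent(probes, my_descendants, history):
       # probes: {host: {"delay": s, "throughput": bps,
       #                 "saturated": bool}}
       candidates = [(h, p) for h, p in probes.items()
                     if h not in my_descendants and not p["saturated"]]
       if not candidates:
           return None

       def score(item):
           host, p = item
           # Prefer low delay and high throughput; penalize hosts
           # with a poor performance history (hypothetical weighting).
           return (p["delay"] / max(p["throughput"], 1)
                   + history.get(host, 0))

       return min(candidates, key=score)[0]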

   When a peer leaves normally, it notifies its neighboring peers and
   the neighboring peers propagate the departing peer info.  At the same
   time, the departing peer continues to forward packets for some time
   to minimize transient packet loss.  When a peer leaves due to
   failure, active peers detect the departure of the peer through its
   non-responsiveness to their probe messages.  Active peers that
   detected the loss then propagate the departed peer info.  A
   departed-peer list, which is flushed after a sufficient amount of
   time has passed, keeps track of leaving and failed peers.  The
   list enables refreshes from an active peer and from a
   leaving/failed peer to be distinguished.

   Departing peers and failing peers could in some instances partition
   a mesh into two or more components.  The mesh repair algorithm
   detects such occurrences by noticing a split in the membership list
   and tries to repair the mesh by virtually linking active members to
   one of the non-active members, trying one non-active member at a
   time.

   To improve mesh/tree structural and operating quality, peers
   randomly probe one another to add new links that have a perceived
   gain in utility, and each peer continually monitors existing links
   to drop those that have a perceived drop in utility.  Switching
   parents occurs if a peer leaves or fails, if there is a persistent
   congestion or low bandwidth condition, or if there is a better
   clustering configuration.  To allow for more public hosts to be
   available for becoming parents of NATs, public hosts preferentially
   choose NATs as parents.

   The data delivery structure, obtained from running a distance vector
   protocol on top of the mesh using latency between neighbors as the
   routing metric, is maintained using various mechanisms.  Each peer
   maintains and keeps up to date the routing cost to every other
   member, together with the path that leads to such cost.  To ensure
   routing table stability, data continues to be forwarded along the old
   routes for sufficient time until the routing tables converge.  The
   time is set to be larger than the cost of any path with a valid
   route, but smaller than the infinite cost.  To make better use of
   the path bandwidth, streams of different bit-rates are forwarded
   according to the following priority scheme: audio is higher than
   video streams, and lower quality video is higher than higher
   quality video.  Moreover, the bit-rates of streams are adapted to
   the peer performance capability.
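
   As an illustration only, the following Python sketch orders queued
   packets under that priority scheme.  The stream labels are
   hypothetical.

   # Illustrative forwarding priority: audio first, then video from
   # lower to higher quality.
   PRIORITY = {"audio": 0, "video-low": 1, "video-high": 2}

   def forwarding_order(packets):
       # packets: list of (stream_label, payload) tuples.
       return sorted(packets, key=lambda pkt: PRIORITY[pkt[0]])

   queue = [("video-high", b"hq"), ("audio", b"au"),
            ("video-low", b"lq")]
   ordered = forwarding_order(queue)  # audio, video-low, video-high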

3.3.  Hybrid P2P streaming system

   The objective of a hybrid P2P streaming system is to combine the
   advantages of tree-mesh topologies and of pull-push modes, in order
   to achieve a balance among system robustness, scalability, and
   real-time application performance.

3.3.1.  New Coolstreaming

   Coolstreaming, first released in summer 2004 with a mesh-based
   structure, arguably represented the first successful large-scale
   P2P live streaming system.  As in the above analysis, it has poor
   delay performance and high overhead associated with each video
   block transmission.  To improve the situation, New Coolstreaming
   [NEWCOOLStreaming] adopts a hybrid mesh and tree structure with a
   hybrid pull and push mechanism.  All the peers are organized into
   a mesh-based topology, similar to PPLive, to ensure high
   reliability.

   Besides, the content delivery mechanism is the most important part
   of New Coolstreaming.  Figure 8 shows the content delivery
   architecture.  The video stream is divided into blocks of equal
   size, each of which is assigned a sequence number to represent its
   playback order in the stream.  Each video stream is further
   divided into multiple sub-streams without any coding, allowing
   each node to retrieve any sub-stream independently from different
   parent nodes.  This consequently reduces the impact to content
   delivery due to a parent departure or failure.  The details of the
   hybrid push and pull content delivery scheme are as follows:

   (1) A node first subscribes to a sub-stream by connecting to one
   of its partners via a single request (pull) in its Buffer Map to
   the requested partner, i.e., the parent node.  (The node can
   subscribe to more than one sub-stream from its partners in this
   way to obtain higher play quality.)

   (2) The selected parent node then continues pushing all blocks of
   the requested sub-stream to the requesting node.

   This not only reduces the overhead associated with each video
   block transfer but, more importantly, significantly reduces the
   delay incurred in retrieving video content.

                   ------------------------------
                  |            +---------+      |
                  |            | Tracker |      |
                  |            +---------+      |
                  |                  |          |
                  |                  |          |
                  |   +---------------------+   |
                  |   |    Content server   |   |
                  |   +---------------------+   |
                  |------------------------------
                        /                     \
                       /                       \
                      /                         \
                     /                           \
               +---------+                   +---------+
               |  Peer1  |                   |  Peer2  |
               +---------+                   +---------+
                /      \                       /      \
               /        \                     /        \
              /          \                   /          \
         +---------+  +---------+     +---------+  +---------+
         |  Peer2  |  |  Peer3  |     |  Peer1  |  |  Peer3  |
         +---------+  +---------+     +---------+  +---------+
                Figure 8, Content Delivery Architecture

   The following sections describe Coolstreaming QoS related features,
   extracted mostly from [Bo] and [Xie].

   The basic components of Coolstreaming consist of the source,
   bootstrap node, web server, log server, media servers, and peers.
   Three basic modules in a peer help it maintain a partial view of the
   overlay (Membership Manager); establish and maintain partnership with
   other peers using Buffer Maps to indicate available video
   content for exchange (Partnership Manager); and manage data
   delivery, retrieval and play out (Stream Manager).

   In building the overlay topology, a newly arrived peer contacts
   the bootstrap node for a list of nodes and stores it in its own
   mCache.  From the stored list, it selects nodes randomly to form
   partnerships and then parent-children relationships, where a
   partnership between two nodes exists when only block availability
   information is exchanged between them, and a parent-children
   relationship exists when, in addition to being partners, video
   content is also exchanged.

   Video content is processed for ease of delivery, retrieval, storage
   and play out.  To manage content delivery, a video stream is divided
   into blocks with equal size, each of which is assigned a sequence
   number to represent its playback order in the stream.  Each block is
   further divided into K sub-blocks, and the set of i-th sub-blocks
   of all blocks constitutes the i-th sub-stream of the video stream,
   where i ranges from 1 to K.  To retrieve video content, a node
   receives at most K distinct sub-streams from its parent nodes.  To
   store retrieved sub-streams, a node uses a double buffering scheme
   having a synchronization buffer and a cache buffer.  The
   synchronization buffer stores the received sub-blocks of each
   sub-stream according to the associated block sequence number of
   the video stream.  The cache buffer then picks up the sub-blocks
   according to the associated sub-stream index of each ordered
   block.  To advertise the availability of the latest block of
   different sub-streams in its buffer, a node uses a Buffer Map,
   which is represented by two vectors of K elements each.  Each
   entry of the first vector indicates the block sequence number of
   the latest received block of the corresponding sub-stream, and
   each bit entry of the second vector, if set, indicates that the
   corresponding sub-stream is being requested.
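
   As an illustration only, the following Python sketch models such a
   two-vector Buffer Map.  The value of K and the field names are
   hypothetical.

   # Illustrative two-vector Buffer Map for K sub-streams.
   K = 4

   class BufferMap:
       def __init__(self):
           self.latest = [0] * K      # newest block seq per sub-stream
           self.subscribed = [0] * K  # 1 = sub-stream requested here

       def on_sub_block(self, substream, block_seq):
           self.latest[substream] = max(self.latest[substream],
                                        block_seq)

       def subscribe(self, substream):
           self.subscribed[substream] = 1

   bm = BufferMap()
   bm.on_sub_block(0, 42)
   bm.subscribe(2)
   # bm.latest == [42, 0, 0, 0]; bm.subscribed == [0, 0, 1, 0]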

   For data delivery, a node uses a hybrid push and pull scheme with
   randomly selected partners.  A node having requested one or more
   distinct sub-streams from a partner as indicated in its first Buffer
   Map will continue to receive the sub-streams of all subsequent blocks
   from the same partner until future conditions cause the partner to do
   otherwise.  Moreover, users retrieve video indirectly from the source
   through a number of strategically located servers.

   To keep the parent-children relationship above a certain level of
   quality, each node constantly monitors the status of the on-going
   sub-stream reception and re-selects parents according to sub-stream
   availability patterns.  Specifically, if a node observes that the
   block sequence number of the sub-stream of a parent is smaller
   than that of any of its other partners by a predetermined amount,
   the node concludes that the parent is lagging sufficiently behind
   and needs to be replaced.  Furthermore, a node also evaluates the
   maximum and minimum of the block sequence numbers in its
   synchronization buffer to determine whether any parent is lagging
   behind the rest of its parents and thus also needs to be replaced.
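
   As an illustration only, the following Python sketch expresses this
   re-selection test.  The threshold value is hypothetical.

   # Illustrative parent lag check for sub-stream reception.
   LAG_THRESHOLD = 20   # blocks; the "predetermined amount" (assumed)

   def lagging_parents(parent_latest, partner_latest):
       # parent_latest: {parent: latest block seq of its sub-stream}
       # partner_latest: latest block seqs observed at other partners
       best = max(partner_latest)
       return [p for p, seq in parent_latest.items()
               if best - seq > LAG_THRESHOLD]

   parents = {"p1": 100, "p2": 130}
   partners = [128, 131, 127]
   to_replace = lagging_parents(parents, partners)   # ['p1']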

4.  A common P2P Streaming Process Model

   As shown in Figure 9, a common P2P streaming process can be
   summarized based on Section 3:

      1) When a peer wants to receive streaming content:

         1.1) Peer acquires a list of peers/parent nodes from the
         tracker.

         1.2) Peer exchanges its content availability with the peers on
         the obtained peer list, or requests to be adopted by the parent
         nodes.

         1.3) Peer identifies the peers with desired content, or the
         available parent node.

         1.4) Peer requests for the content from the identified peers,
         or receives the content from its parent node.

      2) When a peer wants to share streaming content with others:

         2.1) Peer sends information to the tracker about the swarms it
         belongs to, plus streaming status and/or content availability.

                  +---------------------------------------------------------+
                  |   +--------------------------------+                    |
                  |   |              Tracker           |                    |
                  |   +--------------------------------+                    |
                  |        ^  |                    ^                        |
                  |        |  |                    |                        |
                  |  query |  | peer list/         |streaming Status/       |
                  |        |  | Parent nodes       |Content availability/   |
                  |        |  |                    |node capability         |
                  |        |  |                    |                        |
                  |        |  V                    |                        |
                  |   +-------------+         +------------+                |
                  |   |    Peer1    |<------->|  Peer 2    |                |
                  |   +-------------+ content/+------------+                |
                  |                   join requests                         |
                  +---------------------------------------------------------+
   Figure 9, A common P2P streaming process model
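
   As an illustration only, the following Python sketch expresses the
   tracker side of this model.  The method names are hypothetical and
   no particular protocol message format is implied.

   # Illustrative tracker for the common process model (Figure 9).
   class CommonTracker:
       def __init__(self):
           self.swarms = {}   # swarm id -> {peer id: reported status}

       def query(self, swarm):
           # Step 1.1: return the current peer / parent-node list.
           return list(self.swarms.get(swarm, {}))

       def report(self, swarm, peer, status):
           # Step 2.1: record swarm membership, streaming status,
           # and/or content availability for a sharing peer.
           self.swarms.setdefault(swarm, {})[peer] = status

   t = CommonTracker()
   t.report("channel-1", "peerA", {"chunks": [1, 2, 3]})
   peer_list = t.query("channel-1")   # ['peerA']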

   The functionality of the Tracker and of data transfer differs
   slightly between Mesh-based and Tree-based applications.  In
   Mesh-based applications, such as Joost and PPLive, the Tracker
   maintains the lists of peers storing chunks for a specific channel
   or streaming file.  It provides a peer list for peers to download
   from, as well as upload to, each other.  In Tree-based
   applications, such as PeerCast and Conviva, the Tracker directs new
   peers to find parent nodes, and the data flows from parent to child
   only.

5.  Security Considerations

   This document does not raise security issues.  It follows the
   security considerations in [draft-zhang-ppsp-problem-statement].

6.  Author List

   The authors of this document are listed below.

      Hui Zhang, NEC Labs America.

      Jun Lei, University of Goettingen.

      Gonzalo Camarillo, Ericsson.

      Yong Liu, Polytechnic University.

      Delfin Montuno, Huawei.

      Lei Xie, Huawei.

      Shihui Duan, CATR.

7.  Acknowledgments

   We would like to acknowledge Jiang XingFeng for providing good
   ideas for this document.

8.  Informative References

   [PPLive]   "www.pplive.com".

   [PPStream]
              "www.ppstream.com".

   [CNN]      "www.cnn.com".

   [OctoshapeWeb]
              "www.octoshape.com".

   [JOOSTEXP]
              Lei, Jun, et al., "An Experimental Analysis of Joost Peer-
              to-Peer VoD Service". Dec. 2009.

   [P2PVOD]   Huang, Yan, et al., "Challenges, Design and Analysis of a
              Large-scale P2P-VoD System", 2008.

   [Octoshape]
              Alstrup, Stephen, et al., "Introducing Octoshape-a new
              technology for large-scale streaming over the Internet".

   [Zattoo]   "http://zattoo.com/".

   [Conviva]  "http://www.rinera.com/".

   [ESM]      Zhang, Hui., "End System Multicast,
              http://www.cs.cmu.edu/~hzhang/Talks/ESMPrinceton.pdf",
              May 2004.

   [Survey]   Liu, Yong, et al., "A survey on peer-to-peer video
              streaming systems", 2008.

   [draft-zhang-alto-traceroute-00]
              "www.ietf.org/internet-draft/
              draft-zhang-alto-traceroute-00.txt".

   [P2PStreamingSurvey]
              Zong, Ning, et al., "Survey of P2P Streaming", Nov. 2008.

   [P2PIPTVMEA]
              Silverston, Thomas, et al., "Measuring P2P IPTV Systems".

   [Challenge]
              Li, Bo, et al., "Peer-to-Peer Live Video Streaming on the
              Internet: Issues, Existing Approaches, and Challenges",
              June 2007.

   [NEWCOOLStreaming]
              Li, Bo, et al., "Inside the New Coolstreaming:
              Principles,Measurements and Performance Implications",
              Apr. 2008.

   [Moreira]
              Moreira, J, et al., "A step towards understanding Joost
              IPTV", Apr. 2008.

   [Joost Network Architecture]
              "Joost Network Architecture,
              http://scaryideas.com/content/2362/".

   [Alstrup]
              Alstrup, S, et al., "Octoshape - a new technology for
              large-scale streaming over the Internet", 2005.

   [Alstrup2]
              Alstrup, S, et al., "Grid live streaming to millions",
              2006.

   [Hei]
              Hei, X, et al., "Insights into PPLive: A measurement study
              of a large-scale P2P IPTV system", May 2006.

   [Vu]
              Vu, L, et al., "Understanding Overlay Characteristics of a
              Large-Scale Peer-to-Peer IPTV System", November 2010.

   [Horvath]
              Horvath, A, et al., "Dissecting PPLive, SopCast, TVAnts",
              2008.

   [Liu]
              Liu, Y, et al., "A Case Study of Traffic Locality in
              Internet P2P Live Streaming Systems", June 2009.

   [Li]
              Li, C, et al., "Measurement Based PPStream client behavior
              analysis", 2009.

   [Jia]
              Jia, J, et al., "Characterizing PPStream across Internet",
              2007.

   [Wei]
              Wei, T, et al., "Study of PPStream Based on Measurement",
              2008.

   [Ali]
              Ali, S, et al., "Measurement of Commercial Peer-to-Peer
              Live Video Streaming", Aug 2006.

   [Ciullo]
              Ciullo, D, et al., "Network Awareness of P2P Live
              Streaming Applications: A Measurement Study", Aug 2010.

   [Fallica]
              Fallica, B, et al., "On the Quality of Experience of
              SopCast", Aug 2008.

   [Sentinelli]
              Sentinelli, A, et al., "Will IPTV Ride the Peer-to-Peer
              Stream?", June 2007.

   [Silverston]
              Silverston, T, et al., "Traffic analysis of peer-to-peer
              IPTV communities", 2009.

   [Tang]
              Tang, S, et al., "Topology dynamics in a P2PTV network",
              2009.

   [Alessandria]
              Alessandria, E, et al., "P2P-TV Systems under Adverse
              Network Conditions: a Measurement Study", 2009.

   [Chang]
              Chang, H, et al., "Live streaming performance of the
              Zattoo network", 2009.

   [Deshpande]
              Deshpande, H, et al., "Streaming Live Media over a Peer-
              to-Peer Network", August 2001.

   [Desphande]
              Desphande, H, et al., "Streaming Live Media over Peers,
              http://ilpubs.stanford.edu:8090/863/", December 2008.

   [Chu1]
              Chu, Y, et al., "A Case for End System Multicast",
              June 2000.

   [Chu2]
              Chu, Y, et al., "Early Experience with an Internet
              Broadcast System Based on Overlay Multicast", June 2004.

   [Chu3]
              Chu, Y, et al., "Narada is a self-organizing, overlay-
              based protocol for achieving multicast without network
              support", Aug 2001.

   [Bo]
              Li, B, et al., "Inside the New Coolstreaming: Principles,
              Measurements and Performance Implications", 2008.

   [Xie]
              Xie, S, et al., "Coolstreaming: Design, Theory, and
              Practice", 2007.

Authors' Addresses

   Gu Yingjie (editor)
   Huawei
   No.101 Software Avenue
   Nanjing, Jiangsu Province  210012
   P.R.China

   Phone: +86-25-56624760
   Fax:   +86-25-56624702
   Email: guyingjie@huawei.com

   Zong Ning (editor)
   Huawei
   No.101 Software Avenue
   Nanjing, Jiangsu Province  210012
   P.R.China

   Phone: +86-25-56624760
   Fax:   +86-25-56624702
   Email: zongning@huawei.com

   Hui Zhang
   NEC Labs America.

   Email: huizhang@nec-labs.com

   Zhang Yunfei
   China Mobile

   Email: zhangyunfei@chinamobile.com

   Lei Jun
   University of Goettingen

   Phone: +49 (551) 39172032
   Email: lei@cs.uni-goettingen.de

   Gonzalo Camarillo
   Ericsson

   Email: Gonzalo.Camarillo@ericsson.com

   Liu Yong
   Polytechnic University

   Email: yongliu@poly.edu

   Delfin Montuno
   Huawei

   Email: delfin.montuno@huawei.com

   Xie Lei
   Huawei

   Email: xielei57471@huawei.com