PPSP                                                               Y. Gu
Internet-Draft                                              Unaffiliated
Intended status: Standards Track                            N. Zong, Ed.
Expires: January 13, 2014                                         Huawei
                                                                Y. Zhang
                                                            China Mobile
                                                           F. Lo Piccolo
                                                                 S. Duan
                                                           July 12, 2013

                  Survey of P2P Streaming Applications


Abstract

   This document presents a survey of some of the most popular Peer-to-
   Peer (P2P) streaming applications on the Internet.  The main
   selection criteria have been popularity and availability of
   information on operation details at writing time.  The selected
   applications are not reviewed as a whole; rather, they are reviewed
   with main focus on the signaling and control protocols used to
   establish and maintain the overlay connections among peers and to
   advertise and download streaming content.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 13, 2014.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   2
   2.  Terminologies and concepts  . . . . . . . . . . . . . . . . .   4
   3.  Classification of P2P Streaming Applications Based on Overlay
       Topology  . . . . . . . . . . . . . . . . . . . . . . . . . .   5
     3.1.  Mesh-based P2P Streaming Applications . . . . . . . . . .   5
       3.1.1.  Octoshape . . . . . . . . . . . . . . . . . . . . . .   6
       3.1.2.  PPLive  . . . . . . . . . . . . . . . . . . . . . . .   7
       3.1.3.  Zattoo  . . . . . . . . . . . . . . . . . . . . . . .   9
       3.1.4.  PPStream  . . . . . . . . . . . . . . . . . . . . . .  11
       3.1.5.  Tribler . . . . . . . . . . . . . . . . . . . . . . .  12
       3.1.6.  QQLive  . . . . . . . . . . . . . . . . . . . . . . .  14
     3.2.  Tree-based P2P streaming applications . . . . . . . . . .  15
       3.2.1.  End System Multicast (ESM)  . . . . . . . . . . . . .  16
     3.3.  Hybrid P2P streaming applications . . . . . . . . . . . .  18
       3.3.1.  New Coolstreaming . . . . . . . . . . . . . . . . . .  18
   4.  Security Considerations . . . . . . . . . . . . . . . . . . .  19
   5.  Author List . . . . . . . . . . . . . . . . . . . . . . . . .  19
   6.  Acknowledgments . . . . . . . . . . . . . . . . . . . . . . .  19
   7.  Informative References  . . . . . . . . . . . . . . . . . . .  19
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  20

1.  Introduction

   An ever-increasing number of multimedia streaming systems have been
   adopting the Peer-to-Peer (P2P) paradigm to stream audio and video
   content from a source to a large number of end users.  This is the
   reference scenario of this document, which presents a survey of some
   of the most popular P2P streaming applications available on the
   Internet today.

   The presented survey does not aim at being exhaustive.  Reviewed
   applications have indeed been selected mainly based on their
   popularity and on the information publicly available on P2P operation
   details at writing time.

   In addition, the selected applications are not reviewed as a whole;
   rather, they are reviewed with main focus on the signaling and
   control protocols used to construct and maintain the overlay
   connections among peers and to advertise and download multimedia
   content.  More precisely, we assume throughout the document the
   high-level system model reported in Figure 1.

                  +--------------------------------+
                  |            Tracker             |
                  |  Information on multimedia     |
                  |     content and peer set       |
                  +--------------------------------+
                     ^  |                    ^  |
                     |  |                    |  |
                     |  | Tracker    Tracker |  |
                     |  | Protocol  Protocol |  |
                     |  |                    |  |
                     |  V                    |  V
                +------------+    Peer    +------------+
                |   Peer 1   |<---------->|   Peer 2   |
                +------------+  Protocol  +------------+

     Figure 1, High level architecture of P2P streaming systems assumed
                 as reference model throughout the document

   As Figure 1 shows, it is possible to identify in every P2P streaming
   system two main types of entities: peers and trackers.  Peers
   represent end users, which dynamically join the system to send and
   receive streamed media content, whereas trackers represent well-
   known nodes, which are stably connected to the system and provide
   peers with metadata information about the streamed content and the
   set of active peers.  According to this model, it is possible to
   distinguish between two different control and signaling protocols:

      1) the "tracker protocol", which regulates the interaction
      between trackers and peers;

      2) the "peer protocol", which regulates the interaction between
      peers.

   Hence, whenever possible, we will try to identify tracker and peer
   protocols and provide the corresponding details.
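   As an illustration only, the two-protocol model of Figure 1 can be
   sketched with hypothetical tracker and peer objects; all names and
   messages below are ours, not taken from any surveyed system:

```python
# Minimal sketch of the Figure 1 model: a tracker that maps channels to
# active peers, and peers that learn about each other through it.
# Everything here is illustrative, not any surveyed wire format.

class Tracker:
    def __init__(self):
        self.channels = {}  # channel id -> set of peer addresses

    def register(self, channel, peer_addr):
        """Tracker protocol: a joining peer announces itself."""
        self.channels.setdefault(channel, set()).add(peer_addr)

    def peer_list(self, channel):
        """Tracker protocol: return the currently known peer set."""
        return sorted(self.channels.get(channel, set()))

class Peer:
    def __init__(self, addr):
        self.addr = addr
        self.neighbors = set()

    def join(self, tracker, channel):
        tracker.register(channel, self.addr)
        # The peer protocol then starts from this tracker-provided set.
        self.neighbors = {p for p in tracker.peer_list(channel)
                          if p != self.addr}

tracker = Tracker()
p1, p2 = Peer("10.0.0.1"), Peer("10.0.0.2")
p1.join(tracker, "channel-A")
p2.join(tracker, "channel-A")
print(p2.neighbors)  # {'10.0.0.1'}
```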

   This document is organized as follows.  Section 2 introduces
   terminology and concepts used throughout the survey.  Since the
   overlay topology built on connections among peers impacts some
   aspects of tracker and peer protocols, Section 3 classifies P2P
   streaming applications according to the main overlay topologies:
   mesh-based, tree-based and hybrid.  Then, Section 3.1 presents some
   of the most popular mesh-based P2P streaming applications:
   Octoshape, PPLive, Zattoo, PPStream, Tribler and QQLive.  Likewise,
   Section 3.2 presents End System Multicast as an example of tree-
   based P2P streaming applications.  Finally, Section 3.3 presents New
   Coolstreaming as an example of a hybrid-topology P2P streaming
   application.

2.  Terminologies and concepts

   Channel: TV channel from which live streaming content is
   transmitted in a P2P streaming application.

   Chunk: Basic unit that a streaming media is partitioned into for the
   purposes of storage, scheduling, advertisement and exchange among
   peers.

   Live streaming: Application that allows users to receive, almost in
   real-time, multimedia content related to an ongoing event and
   streamed from a source.  The lag between the play points at the
   receivers and the one at the streaming source has to be small.

   Peer: P2P node that dynamically participates in a P2P streaming
   system to receive streaming content, but also to store and upload
   streaming content to other participants.

   Peer protocol: Control and signaling protocol that regulates the
   interaction among peers.

   Pull: Transmission of multimedia content only if requested by the
   receiving peers.

   Push: Transmission of multimedia content without any request from
   the receiving peers.

   Swarm: Group of peers sharing the same streaming content at a given
   time.

   Tracker: P2P node that stably participates in a P2P streaming system
   to provide a directory service, by maintaining information both on
   the peer set and on the chunks each peer stores.

   Tracker protocol: Control and signaling protocol that regulates the
   interaction among peers and trackers.

   Video-on-demand (VoD): Application that allows users to select and
   watch video content on demand.

3.  Classification of P2P Streaming Applications Based on Overlay
    Topology

   Depending on the topology that can be associated with overlay
   connections among peers, it is possible to distinguish among the
   following general types of P2P streaming applications:


      1) tree-based: peers are organized to form a tree-shaped overlay
      network rooted at the streaming source, and multimedia content
      delivery is push-based.  Peers that forward data are called
      parent nodes, and peers that receive it are called children
      nodes.  Due to their structured nature, tree-based P2P streaming
      applications guarantee both topology maintenance at very low cost
      and good performance in terms of scalability and delay.  On the
      other side, they are not very resilient to peer churn, which may
      be very high in a P2P environment;


      2) mesh-based: peers are organized in a randomly connected
      overlay network, and multimedia content delivery is pull-based.
      This is the reason why these systems are also referred to as
      "data-driven".  Due to their unstructured nature, mesh-based P2P
      streaming applications are very resilient with respect to peer
      churn and are able to guarantee a network resource utilization
      higher than the one associated with tree-based applications.  On
      the other side, the cost to maintain the overlay topology may
      limit performance in terms of scalability and delay, and pull-
      based data delivery calls for large buffers where to store
      chunks;


      3) hybrid: this category includes all the P2P applications that
      cannot be classified as simply mesh-based or tree-based and that
      present characteristics of both mesh-based and tree-based
      applications.

3.1.  Mesh-based P2P Streaming Applications

   In mesh-based P2P streaming applications peers self-organize in a
   randomly connected overlay graph where each peer interacts with a
   limited subset of other peers (neighbors) and explicitly requests
   the chunks it needs (pull-based or data-driven delivery).  This type
   of content delivery may be associated with high overhead, not only
   because peers formulate requests in order to download the chunks
   they need, but also because in some applications peers exchange
   information about the chunks they own (in the form of so-called
   buffer maps, a sort of bit maps with a bit "1" in correspondence of
   the chunks stored in the local buffer).  On the one side, the main
   advantage of this kind of applications lies in the fact that a peer
   does not rely on a single peer to retrieve multimedia content.
   Hence, these applications are very resilient to peer churn.  On the
   other side, overlay connections are highly dynamic and not
   persistent (being driven by content availability), and this makes
   content distribution efficiency unpredictable.  In fact, different
   chunks may be retrieved via different network paths, and this may
   result at end users in playback quality degradation ranging from low
   bit rates to long startup delays, to frequent playback freezes.
   Moreover, peers have to maintain large buffers to increase the
   probability of satisfying chunk requests received from neighbors.
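   The buffer-map mechanism described above can be sketched as follows;
   the encoding and the window size are illustrative and do not
   correspond to the proprietary wire format of any surveyed
   application:

```python
# Sketch of a buffer map: an offset (ID of the first chunk in the
# window) plus a bit string marking locally cached chunks.

def encode_buffer_map(offset, cached_ids, window):
    """Return (offset, bits) where bits[i] == '1' iff chunk
    offset + i is stored in the local buffer."""
    bits = "".join("1" if offset + i in cached_ids else "0"
                   for i in range(window))
    return offset, bits

def chunks_to_request(my_cached, neighbor_map):
    """Pull-based delivery: request the chunks a neighbor advertises
    that we do not yet hold."""
    offset, bits = neighbor_map
    return [offset + i for i, b in enumerate(bits)
            if b == "1" and offset + i not in my_cached]

neighbor = encode_buffer_map(100, {100, 101, 103}, window=5)
print(neighbor)                            # (100, '11010')
print(chunks_to_request({100}, neighbor))  # [101, 103]
```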

3.1.1.  Octoshape

   Octoshape [Octoshape] is a P2P plug-in realized by the homonymous
   Danish company, and it has become popular for being used by CNN
   [CNN] to broadcast its live streaming content.  Octoshape helps CNN
   serve a peak of more than a million simultaneous viewers thanks not
   only to the P2P content distribution paradigm, but also to several
   innovative delivery technologies such as loss resilient transport,
   adaptive bit rate, adaptive path optimization and adaptive proximity
   delivery.

   Figure 2 depicts the architecture of the Octoshape system.

            +------------+   +------------+
            |   Peer 1   |---|   Peer 2   |
            +------------+   +------------+
                 |    \    /      |
                 |     \  /       |
                 |      \         |
                 |     / \        |
                 |    /   \       |
                 |  /      \      |
            +------------+   +------------+
            |   Peer 4   |---|   Peer 3   |
            +------------+   +------------+
                        \      /
                   +---------------+
                   | Content Server|
                   +---------------+

      Figure 2, Architecture of Octoshape system

   As can be seen from the figure, there are no trackers, and
   consequently no tracker protocol is necessary.

   As regards the peer protocol, as soon as a peer joins a channel,
   information on the peers that have already joined the channel is
   transmitted to it in the form of metadata when streaming the live
   content.  In such a way, each peer maintains a sort of Address Book
   with the information necessary to contact other peers who are
   watching the same channel.

   Although Octoshape inventors claim in [Octoshape] that each peer
   records all the peers joining a channel, we suspect that it is very
   unlikely that all peers are recorded.  In fact, the corresponding
   overhead traffic would be large, especially when a popular program
   starts in a channel and lots of peers switch to it.  Maybe only some
   geographic or topological neighbors are notified, and the joining
   peer gets the address book from these nearby neighbors.

   Regarding the data distribution strategy, in the Octoshape solution
   the original stream is split into a number K of smaller equal-sized
   data streams, but a number N > K of unique data streams is actually
   constructed, in such a way that a peer receiving any K of the N
   available data streams is able to play the original stream.  For
   instance, if the original live stream is a 400 kbit/s signal, for
   K=4 and N=12, 12 unique data streams are constructed, and a peer
   that downloads any 4 of the 12 data streams is able to play the live
   stream.  In this way, each peer sends requests for data streams to
   some selected peers, and it receives positive/negative answers
   depending on the availability of upload capacity at the requested
   peers.  In case of negative answers, a peer continues sending
   requests until it finds K peers willing to upload the minimum number
   of data streams needed to display the original live stream.  This
   allows a flexible use of bandwidth at end users.  In fact, since the
   original stream is split into smaller data streams, a peer that does
   not have enough upload capacity to transmit the whole original
   stream can transmit a number of smaller data streams that fits its
   actual upload capacity.
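   The K-of-N stream negotiation described above can be sketched as a
   toy model; the peer names, the offered stream sets, and the one-
   stream-per-sender policy are invented for illustration:

```python
# Toy sketch of Octoshape-style negotiation: the original stream is
# split so that ANY K of the N constructed data streams suffice for
# playback.  A peer keeps asking candidates until K distinct streams
# have been granted.

K, N = 4, 12  # e.g. a 400 kbit/s stream served as 12 streams, any 4 play

def negotiate_streams(candidates, k=K):
    """candidates: list of (peer, set of stream ids it can upload).
    Returns a mapping stream_id -> peer once k distinct streams are
    granted, or None if the candidate set is exhausted first."""
    granted = {}
    for peer, offered in candidates:
        for stream_id in sorted(offered):
            if stream_id not in granted:
                granted[stream_id] = peer
                break  # one stream per sender in this simple sketch
        if len(granted) >= k:
            return granted
    return None

candidates = [("p1", {0, 1}), ("p2", {1}), ("p3", {2}),
              ("p4", {7}), ("p5", {1, 9})]
plan = negotiate_streams(candidates)
print(plan)  # {0: 'p1', 1: 'p2', 2: 'p3', 7: 'p4'}
```

   Note that playback starts as soon as any K distinct streams are
   secured; which particular K streams they are does not matter.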

   In order to mitigate the impact of peer loss, the address book is
   also used at each peer to derive the so-called Standby List, which
   Octoshape peers use to probe other peers and be sure that they are
   ready to take over if one of the current senders leaves or gets
   congested.

   Finally, in order to optimize bandwidth utilization, Octoshape
   leverages peers within a network to minimize external bandwidth
   usage and to select the most reliable and "closest" source for each
   viewer.  It also chooses the best matching available codecs and
   players, and it scales the bit rate up and down according to the
   available Internet connection.

3.1.2.  PPLive

   PPLive [PPLive] was first developed at the Huazhong University of
   Science and Technology in 2004, and it is one of the earliest and
   most popular P2P streaming applications in China.  To give an idea,
   the PPLive website reached 50 million visitors during the opening
   ceremony of the Beijing 2008 Olympics, and the dedicated Olympics
   channel attracted 221 million views in two weeks.

   Even though PPLive was renamed PPTV in 2010, we continue using the
   old name PPLive throughout this document.

   The PPLive system includes the following main components:

      1) video streaming server, that plays the role of source of the
      video content and copes with content coding issues;

      2) peer, also called node or client, that is the PPLive entity
      downloading video content from other peers and uploading video
      content to other peers;

      3) channel server, that provides the list of available channels
      (live TV or VoD content) to a PPLive peer as soon as the peer
      joins the system;

      4) tracker server, that provides a PPLive peer with the list of
      online peers that are watching the same channel as the one the
      joining peer is interested in.

   Figure 3 illustrates the high-level diagram of the PPLive system.

               +------------+    +------------+
               |   Peer 2   |----|   Peer 3   |
               +------------+    +------------+
                  |      |          |       |
                  |    +--------------+     |
                  |    |    Peer 1    |     |
                  |    +--------------+     |
                  |            |            |
               +------------------------------+
               |   +----------------------+   |
               |   |Video Streaming Server|   |
               |   +----------------------+   |
               |   +----------------------+   |
               |   |    Channel Server    |   |
               |   +----------------------+   |
               |   +----------------------+   |
               |   |    Tracker Server    |   |
               |   +----------------------+   |
               +------------------------------+

      Figure 3, High level overview of PPLive system architecture

   As regards the tracker protocol, as soon as a PPLive peer joins the
   system and selects a channel to watch, it retrieves from the tracker
   server a list of peers that are watching the same channel.

   As regards the peer protocol, it controls both the peer discovery
   and the chunk distribution process.  More specifically, peer
   discovery is regulated by a kind of gossip-like mechanism.  After
   retrieving the list of active peers watching a specific channel from
   the tracker server, a PPLive peer sends out probes to establish
   active peer connections, and those peers may return also their own
   list of active peers to help the new peer discover more peers in the
   initial phase.  The chunk distribution process is mainly based on
   buffer map exchange to advertise the availability of cached chunks.
   In more detail, the PPLive software client exploits two local
   buffers to cache chunks: the PPLive TV engine buffer and the media
   player buffer.  The main reason behind the double buffer structure
   is to address the download rate variations experienced when
   downloading chunks from the PPLive network.  In fact, received
   chunks are first buffered and reassembled into the PPLive TV engine
   buffer; as soon as the number of chunks in the PPLive TV engine
   buffer overcomes a predefined threshold, the media player buffer
   downloads chunks from the PPLive TV engine buffer; finally, when the
   media player buffer fills up to the required level, the actual video
   playback starts.
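   A minimal sketch of this double-buffer behaviour follows; the
   threshold values are invented, since PPLive's actual parameters are
   not public:

```python
# Sketch of the double buffer described above: chunks arrive into a
# TV-engine buffer; once that buffer passes a threshold, the media-
# player buffer starts draining it; playback starts when the player
# buffer reaches its required level.  Constants are illustrative.

ENGINE_THRESHOLD = 8   # chunks needed before the player starts pulling
PLAYER_START = 4       # chunks needed before playback starts

class DoubleBuffer:
    def __init__(self):
        self.engine = []      # chunks reassembled from the network
        self.player = []      # chunks handed to the media player
        self.playing = False

    def on_chunk(self, chunk):
        self.engine.append(chunk)
        # The player pulls from the engine only past the threshold.
        while len(self.engine) > ENGINE_THRESHOLD:
            self.player.append(self.engine.pop(0))
        if not self.playing and len(self.player) >= PLAYER_START:
            self.playing = True  # actual video playback starts

buf = DoubleBuffer()
for c in range(20):
    buf.on_chunk(c)
print(buf.playing, len(buf.player), len(buf.engine))  # True 12 8
```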

   Being the nature of PPLive may be summarized with protocols and algorithm proprietary, most
   of known details have been derived from measurement studies.
   Specifically, it seems that:

      1) number of peers from which a PPLive node downloads live TV
      chunks from is constant and relatively low, and the following
   three points: top-ten peers
      contribute to a major part of the download traffic.
      Meanwhile, session with top-ten peers is quite short, if compared
      with the video session duration.  This would suggest that PPLive
      gets video from only a few peers at any given time, and switches
      periodically from one peer to another; traffic, as shown in

      2) PPLive can send multiple chunk requests provide satisfactory performance for different chunks to
      one peer at one time;

      PPLive popular live TV
      and VoD channels.  For unpopular live TV channels, performance may
      severely degrade, whereas for unpopular VoD channels this problem
      rarely happens, as it shown in [CNSR].  Authors of [CNSR] also
      demonstrate that the workload in most VoD channels is observed to have well
      balanced, whereas for live TV channels the download scheduling policy of
      giving higher priority to rare chunks workload distribution
      is unbalanced, and to chunks closer to play
      out deadline. a small number of peers provide most video

3.1.3.  Zattoo

   Zattoo [Zattoo] is a P2P live streaming system that was launched in
   Switzerland in 2006, in coincidence with the UEFA European Football
   Championship, and in a few years was able to attract almost 10
   million registered users in several European countries.

   Figure 4 depicts the high-level architecture of the Zattoo system.
   Each TV channel is delivered through a separate P2P network.

      +-----------------------------+
      |    Administrative Servers   |
      |  -------------------------  |
      |  |  Broadcast Servers    |  |
      |  -------------------------  |      +--------------+
      |  | Authentication Server |  |      | Repeater node|
      |  -------------------------  |      +--------------+
      |  |  Rendezvous Server    |  |        |          |
      |  -------------------------  |   +------+    +------+
      |  | Bandwidth Estimation  |  |---|Peer 1|----|Peer 2|
      |  |       Server          |  |   +------+    +------+
      |  -------------------------  |
      |  |    Other Servers      |  |
      |  -------------------------  |
      +-----------------------------+

      Figure 4, High level overview of Zattoo system architecture

   Broadcast server is in charge of capturing, encoding, encrypting and
   sending the TV channel to the Zattoo network.  A number N of logical
   sub-streams is derived from the original stream, and packets of the
   same order in the sub-streams are grouped together into the so-
   called video segments.  Each video segment is then coded via a Reed-
   Solomon error correcting code, in such a way that having obtained
   any number k < N of packets of a received segment is enough to
   reconstruct the whole segment.
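   The segment grouping and the k-of-N recovery property can be
   illustrated with a toy model; a real deployment uses Reed-Solomon
   coding, while here we only track which segments would be
   recoverable, with illustrative values of N and k:

```python
# Toy model of Zattoo-style sub-streams: packet i of every sub-stream
# belongs to video segment i.  With an (N, k) erasure code, a segment
# is recoverable as soon as any k of its N packets arrive.

N, K = 4, 3  # 4 sub-streams; any 3 packets rebuild a segment

def recoverable_segments(received, num_segments, k=K):
    """received: set of (substream, segment) pairs that arrived.
    Returns the segment indices that can be reconstructed."""
    return [s for s in range(num_segments)
            if sum((i, s) in received for i in range(N)) >= k]

# Sub-stream 2 is lost entirely; segment 1 also loses sub-stream 0.
arrived = {(i, s) for i in (0, 1, 3) for s in range(3)} - {(0, 1)}
print(recoverable_segments(arrived, 3))  # [0, 2]
```

   Losing one whole sub-stream is thus harmless; a segment becomes
   unrecoverable only when more than N - k of its packets are missing.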

   Authentication server is the first point of contact for a user that
   joins the system.  It authenticates Zattoo users and assigns them a
   limited lifetime ticket.  Then, a user contacts the Rendezvous
   server and specifies the TV channel of interest.  It also presents
   the ticket received by the authentication server.  Provided that the
   presented ticket is valid, the rendezvous server returns a list of
   peers that have already joined the requested channel and a signed
   channel ticket.  Hence, the rendezvous server plays the role of
   tracker.  At this point the direct interaction between peers starts,
   and it is regulated by the peer protocol.

   A new Zattoo user contacts the peers returned by the rendezvous
   server in order to identify a set of neighboring peers covering the
   full set of sub-streams in the TV channel.  This process is denoted
   in Zattoo jargon as Peer Division Multiplexing (PDM).  To ease the
   identification of peers, each contacted peer provides also the list
   of its own known peers, in such a way that a new Zattoo user, if
   needed, can contact more peers besides the ones indicated by the
   rendezvous server.  In selecting which peers to establish
   connections with, a peer adopts the criterion of topological
   closeness.  The topological location of a peer is defined in Zattoo
   as (in order of preference) its subnet number, its autonomous system
   number and its country code, and it is provided to each peer by the
   authentication server.

   Zattoo peer load protocol provides also a mechanism to other
   existing peers, while looking for make PDM process
   adaptive with respect to bandwidth fluctuations.  First of all, a replacement peer.  When one is
   peer controls the admission of new connections based on the load available
   uplink bandwidth.  This is shifted estimated i) at beginning with each peer
   sending probe messages to it the Bandwidth Estimation server, and ii)
   while forwarding sub-streams to other peers based on the degraded neighbor quality-of-
   service feedback received by those peers.  A quality-of-service
   feedback is
   dropped.  As expected sent from the receiver to the sender only when the
   quality of the received sub-stream is below a given threshold.  So if
   a peer's neighbor quality-of-service feedback is lost due to departure,
   the received, a Zattoo peer initiates decrements
   the process estimation of available uplink bandwidth, and if this drops below
   the amount needed to replace supports the lost peer.  To optimize current connections, a proper
   number of connections is closed.  On the PDM configuration, other side, if no quality-
   of-service feedback is received for a given time interval, a Zattoo
   peer may occasionally initiate switching
   existing partnering peers to topologically closer peers.

3.1.4.  PPStream

   The system architecture increments the estimation of PPStream [PPStream] is available uplink bandwidth
   according to a mechanism very similar to the one of PPLive.

   To ensure data availability, PPStream uses some form of chunk
   retransmission request mechanism and shares buffer map at high rate.
   Each data chunk, identified by the play time offset encoded by TCP congestion
   window (double increase or linear increase depending on whether the
   program source,
   estimate is divided into 128 sub-chunks below or a given threshold).

   As it can be seen also in Figure 4, there exist two classes of 8KB size each.  The
   chunk id is used Zattoo
   nodes: simple peers, whose behavior has already been presented, and
   Repeater nodes, that serve as bandwidth multiplier, are able to ensure sequential ordering
   forward any sub-stream and implement the same peer protocol as simple
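The adaptive estimation loop described above can be sketched as follows.  This is a minimal illustration, not Zattoo's actual algorithm: the class name, the halving step and the 100 kbps linear increment are invented assumptions.

```python
# Sketch of Zattoo-like adaptive uplink bandwidth estimation
# (illustrative only: names and constants are assumptions).

class UplinkEstimator:
    def __init__(self, initial_kbps, threshold_kbps):
        self.estimate = initial_kbps     # current uplink estimate
        self.threshold = threshold_kbps  # switch point: doubling vs. linear growth

    def on_quality_feedback(self, needed_kbps, connections):
        """A receiver complained: decrement the estimate and, if it no
        longer covers current connections, close the cheapest ones first."""
        self.estimate *= 0.5
        closed = []
        # connections: list of (peer_id, kbps); close least demanding first
        for peer, kbps in sorted(connections, key=lambda c: c[1]):
            if self.estimate >= needed_kbps:
                break
            closed.append(peer)
            needed_kbps -= kbps
        return closed

    def on_quiet_interval(self):
        """No bad-quality feedback for a while: grow the estimate like a
        TCP congestion window (double below threshold, linear above)."""
        if self.estimate < self.threshold:
            self.estimate *= 2
        else:
            self.estimate += 100  # linear step, assumed 100 kbps
```

A peer would call on_quality_feedback whenever a receiver reports degraded sub-stream quality, and on_quiet_interval after each feedback-free interval.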

3.1.4.  PPStream

   PPStream [PPStream] is a very popular P2P streaming software in
   China and in many other countries of East Asia.

   The system architecture of PPStream is very similar to the one of
   PPLive.  When a PPStream peer joins the system, it retrieves the
   list of channels from the channel list server.  After selecting the
   channel to watch, a PPStream peer retrieves from the peer list
   server the identifiers of the peers that are watching the selected
   channel, and it establishes connections that are used first of all
   to exchange buffer-maps.  In more detail, a data chunk is
   identified by the play time offset, which is encoded by the
   streaming source, and it is divided into 128 sub-chunks of 8KB size
   each.  So buffer-maps in PPStream carry the play time offset
   information and are strings of bits that indicate the availability
   of sub-chunks.  After receiving the buffer-maps from the connected
   peers, a PPStream peer selects the peers to download sub-chunks
   from according to a rate-based algorithm, which maximizes the
   utility of uplink and downlink bandwidth.

   Usually, a buffer-map contains only one data chunk at a time, and
   it also contains the sending peer's playback status, because as
   soon as a data chunk is played back, the chunk is deleted or
   replaced by the next data chunk.  At the initiating stage a peer
   can use up to four data chunks, whereas at a stabilized stage a
   PPStream peer usually uses one data chunk.  However, in the
   transient stage, a peer uses a variable number of chunks.
   Sub-chunks within each data chunk are fetched nearly at random,
   without using rarest-first or greedy policies, and the same
   fetching pattern for one data chunk seems to repeat itself in the
   subsequent data chunks.  Moreover, higher bandwidth PPStream peers
   tend to receive chunks earlier and thus contribute more than lower
   bandwidth peers.

   Based on the experimental results reported in [P2PIPTVMEA], the
   download policy of PPStream may be summarized with the following
   two points:

      top-ten peers do not contribute a large part of the download
      traffic.  This would suggest that a PPStream peer gets the video
      from many peers simultaneously, and that sessions between peers
      have long duration;

      PPStream does not send multiple requests for different chunks to
      one peer at one time, and it maintains a constant peer list with
      a relatively large number of peers.
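The buffer-map exchange described above can be sketched as follows.  The field names and the exact encoding are illustrative assumptions; only the play time offset, the 128 sub-chunks per chunk and the availability bit string come from the text.

```python
# Sketch of a PPStream-style buffer-map: a play time offset identifying
# the current data chunk plus a bit string advertising which of its
# 128 sub-chunks (8KB each) are locally available.

SUB_CHUNKS_PER_CHUNK = 128

def make_buffer_map(play_time_offset, owned_sub_chunks):
    """Build a buffer-map advertising the locally available sub-chunks
    of the chunk identified by play_time_offset."""
    bits = 0
    for i in owned_sub_chunks:
        bits |= 1 << i
    return {"offset": play_time_offset, "bits": bits}

def missing_sub_chunks(local_map, remote_map):
    """Sub-chunks the remote peer has and we still lack (same chunk)."""
    if local_map["offset"] != remote_map["offset"]:
        return []
    wanted = remote_map["bits"] & ~local_map["bits"]
    return [i for i in range(SUB_CHUNKS_PER_CHUNK) if wanted >> i & 1]
```

A peer would compare its own map against each received map to decide which sub-chunks to request from which neighbor.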

3.1.5.  SopCast

   The system architecture of SopCast [SopCast] is similar to the one
   of PPLive.  SopCast allows for software updates via HTTP through a
   centralized web server, and it makes the list of channels available
   via HTTP through another centralized server.

   SopCast traffic is encoded, and SopCast TV content is subdivided
   into video chunks or blocks with equal sizes of 10KB.  Sixty
   percent of its traffic consists of signaling packets and 40% of
   actual video data packets.  SopCast produces more signaling traffic
   than PPLive and PPStream, with PPLive producing the minimum amount
   of signaling traffic.  It has been observed in [P2PIPTVMEA] that
   SopCast traffic has long-range dependency, which also means that
   eventual QoS mitigation mechanisms may be ineffective.  Moreover,
   according to [P2PIPTVMEA], the SopCast communication mechanism
   starts with UDP for the exchange of control messages among peers by
   using a gossip-like protocol and then moves to TCP for the transfer
   of video segments.  It also seems that the top-ten peers contribute
   about half of the total download traffic.  Finally, the SopCast
   peer-list can be as large as the PPStream peer-list, but
   differently from PPStream, the SopCast peer-list varies over time.

3.1.6.  Tribler

   Tribler [tribler] is a BitTorrent client that was able to go well
   beyond the BitTorrent model, also thanks to its support for video
   streaming.  Initially developed by a team of researchers at Delft
   University of Technology, Tribler was able both to attract
   attention from other universities and media companies and to
   receive European Union research funding (P2P-Next and QLectives
   projects).

   Differently from BitTorrent, where a tracker server centrally
   coordinates the uploads/downloads of chunks among peers and peers
   directly interact with each other only when they actually
   upload/download chunks to/from each other, there is no tracker
   server in Tribler and, as a consequence, there is no need for a
   tracker protocol.

   The peer protocol is instead used to organize peers in an overlay
   mesh, illustrated also in Figure 5, which depicts the high level
   architecture of Tribler.

                        | Superpeer  |
                         /         \
                        /           \
               +------------+    +------------+
               |   Peer 2   |----|   Peer 3   |
               +------------+    +------------+
                     /   |                \
                    /    |                 \
                   /   +--------------+     \
                  /    |    Peer 1    |      \
                 /     +--------------+       \
                /            /        \        \
       +------------+       /        +--------------+
       |   Peer 4   |      /         |    Peer 5    |
       +------------+     /          +--------------+
              \          /                   /
               \        /                   /
                \      /             +------------+
               +------------+        | Superpeer  |
               | Superpeer  |        +------------+

   Figure 5, High level overview of Tribler system architecture

   Regarding peer protocol and the organization of overlay mesh, Tribler
   bootstrap process consists in preloading well known superpeer
   addresses into peer local cache, in such a way that a joining peer
   randomly selects a superpeer to retrieve a random list of already
   active peers to establish overlay connections with.  A gossip-like
   mechanism called BuddyCast allows Tribler peers to exchange their
   preference lists, that is their downloaded files, and to build the so
   called Preference Cache.  This cache is used to calculate similarity
   levels among peers and to identify the so called "taste buddies" as
   the peers with highest similarity.  Thanks to this mechanism each
   peer maintains two lists of peers: i) a list of its top-N taste
   buddies along with their current preference lists, and ii) a list of
   random peers.  So a peer alternatively selects a peer from one of the
   lists and sends it its preference list, taste-buddy list and a
   selection of random peers.  The goal behind the propagation of this
   kind of information is the support for the remote search function, a
   completely decentralized search service that consists in querying
   Preference Cache of taste buddies in order to find the torrent file
   associated with a file of interest.  If no torrent is found in this
   way, Tribler users may alternatively resort to web-based torrent
   collector servers available for BitTorrent clients.
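The taste-buddy identification described above can be sketched as follows.  The similarity metric used here (overlap of downloaded files) is an illustrative assumption, not necessarily the metric BuddyCast actually computes.

```python
# Sketch of taste-buddy identification from exchanged preference
# lists, in the spirit of BuddyCast (similarity metric is assumed).

def similarity(prefs_a, prefs_b):
    """Number of files both peers have downloaded."""
    return len(set(prefs_a) & set(prefs_b))

def taste_buddies(my_prefs, preference_cache, top_n):
    """Rank the peers in the Preference Cache by similarity with our
    own preference list and keep the top-N as taste buddies."""
    ranked = sorted(preference_cache.items(),
                    key=lambda item: similarity(my_prefs, item[1]),
                    reverse=True)
    return [peer for peer, _ in ranked[:top_n]]
```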

   As already said, Tribler supports video streaming in two different
   forms: video on demand and live streaming.

   As regards video on demand, a peer first of all keeps its neighbors
   informed about the chunks it has.  Then, on the one side, it
   applies a suitable chunk-picking policy in order to establish the
   order in which to request the chunks it wants to download.  This
   policy aims to ensure that chunks arrive at the media player in
   order and at the same time that overall chunk availability is
   maximized.
   To this end, the chunk-picking policy differentiates among high, mid
   and low priority chunks depending on their closeness with the
   playback position.  High priority chunks are requested first and in
   strict order.  When there are no more high priority chunks to
   request, mid priority chunks are requested according to a rarest-
   first policy.  Finally, when there are no more mid priority chunks to
   request, low priority chunks are requested according to a rarest-
   first policy as well.  On the other side, Tribler peers follow the
   give-to-get policy in order to establish which peer neighbors are
   allowed to request chunks (in BitTorrent jargon, to be
   unchoked).  In more detail, time is subdivided in periods and after
   each period Tribler peers first sort their neighbors according to the
   decreasing numbers of chunks they have forwarded to other peers,
   counting only the chunks they originally received from them.  In case
   of tie, Tribler sorts their neighbors according to the decreasing
   total number of chunks they have forwarded to other peers.  Since
   children could lie regarding the number of chunks forwarded to
   others, Tribler peers do not directly ask their children, but their
   grandchildren.  In this way, Tribler peer unchokes the three highest-
   ranked neighbours and, in order to saturate upload bandwidth and in
   the same time not decrease the performance of individual connections,
   it further unchokes a limited number of neighbors.  Moreover, in
   order to search for better neighbors, Tribler peers randomly select a
   new peer in the rest of the neighbours and optimistically unchoke it
   every two periods.
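The priority-based chunk picking described above can be sketched as follows.  The window sizes and the function shape are illustrative assumptions; only the three priority classes and their policies (in-order for high, rarest-first for mid and low) come from the text.

```python
# Sketch of Tribler-style chunk picking for video on demand: chunks
# close to the playback position are high priority and fetched in
# strict order; later ones are fetched rarest-first.
# (Window sizes high/mid are assumed, not Tribler's real values.)

def pick_next_chunk(playback_pos, missing, rarity, high=10, mid=40):
    """missing: set of chunk ids not yet downloaded;
    rarity: dict mapping chunk id -> number of neighbors owning it."""
    high_set = [c for c in missing if playback_pos <= c < playback_pos + high]
    if high_set:
        return min(high_set)                          # strict playback order
    mid_set = [c for c in missing
               if playback_pos + high <= c < playback_pos + high + mid]
    if mid_set:
        return min(mid_set, key=lambda c: rarity[c])  # rarest first
    low_set = [c for c in missing if c >= playback_pos + high + mid]
    if low_set:
        return min(low_set, key=lambda c: rarity[c])  # rarest first
    return None
```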

   As regards live streaming, differently from video on demand scenario,
   the number of chunks cannot be known in advance.  As a consequence a
   sliding window of fixed width is used to identify chunks of interest:
   every chunk that falls out of the sliding window is considered
   outdated, is locally deleted and is considered as deleted by peer
   neighbors as
   well.  In this way, when a peer joins the network, it learns about
   chunks its neighbors possess and identifies the most recent one.  This
   is assumed as beginning of the sliding window at the joining peer,
   which starts downloading and uploading chunks according to the
   description provided for video on demand scenario.  Finally,
   differently from what happens for video on demand scenario, where
   torrent files include a hash for each chunk in order to prevent
   malicious attackers from corrupting data, torrent files in live
   streaming scenario include the public key of the stream source.  Each
   chunk is then assigned an absolute sequence number and a timestamp
   and signed by the source.  Such a mechanism allows Tribler peers to
   use the public key included in the torrent file to verify the
   integrity of each chunk.
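The sliding window bookkeeping described above can be sketched as follows.  The window width and the set-based representation are illustrative assumptions.

```python
# Sketch of the Tribler live streaming sliding window: chunks outside
# a fixed-width window ending at the most recent known chunk are
# treated as deleted (WINDOW width is an assumed value).

WINDOW = 100  # chunks of interest: (newest - WINDOW, newest]

def window_start(neighbor_maps):
    """A joining peer starts its window at the most recent chunk any
    neighbor possesses."""
    return max(max(chunks) for chunks in neighbor_maps if chunks)

def in_window(chunk, newest):
    return newest - WINDOW < chunk <= newest

def prune(local_chunks, newest):
    """Locally delete every chunk that fell out of the window."""
    return {c for c in local_chunks if in_window(c, newest)}
```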


3.1.7.  QQLive

   QQLive [QQLive] is a large-scale video broadcast software including
   streaming media encoding, distribution and broadcasting.  Its
   client can run as a web application, a desktop program or in other
   environments, and it provides abundant interactive functions in
   order to meet the watching requirements of different kinds of
   users.

   QQLive adopts a combined CDN and P2P architecture for video
   distribution, and in this it differs from other popular P2P
   streaming applications.  QQLive provides the video source by means
   of source servers and a CDN, and the video content can be pushed to
   every region by the CDN throughout China.  In each region, QQLive
   adopts P2P technology for video content distribution.

   One of the main aims of QQLive is to use the simplest architecture
   to provide the best user experience.  So QQLive uses a few servers
   to implement P2P file distribution.  There are two types of servers
   in QQLive: Stun Servers and Tracker Servers.  A Stun Server is
   responsible for NAT traversal, whereas a Tracker Server is
   responsible for providing content address information.  A group of
   each of these two types of servers provides the respective service.
   There is no Super Peer in QQLive.

   The working flow of QQLive includes a startup stage and a play
   stage.

      1) The startup stage includes only interactions between peers
      and Tracker servers.  There is a built-in URL in the QQLive
      software.  When the client starts up and connects to the
      network, it gets the Tracker's address through DNS and then
      tells the Tracker the information on the video contents it
      owns.

      2) The play stage includes interactions between peers, or
      between peers and the CDN.  Generally, the client downloads the
      video content from the CDN during the first 30 seconds and then
      gets contents from other peers.  If unfortunately no peer owns
      the content, the client will get the content from the CDN.

   As the client watches the video, it stores the video to the hard
   disk.  The default storage space is one Gbyte.  If the storage
   space is full, the client deletes the oldest content.  When the
   client performs a VCR operation, if the video content is stored on
   the hard disk, the client does not interact with other peers or
   with the CDN.

   There are two main protocols in QQLive: tracker protocol and peer
   protocol.  These two protocols are both fully private and encrypt
   the whole message.  The tracker protocol uses UDP, and the port for
   the tracker server is fixed.  For the video streaming, if the
   client gets the streaming from the CDN, it uses HTTP on port 80
   with no encryption; if the client gets the streaming from other
   peers, it uses UDP to transfer the encrypted media streaming, and
   not RTP.

   If messages or video content are missing, the client performs
   retransmission, and the retransmission interval is decided by the
   network condition.  QQLive does not need a sophisticated
   transmission and chunk selection strategy similar to the one of
   BitTorrent, because of the CDN support.
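The play-stage source selection described above can be sketched as follows.  The function shape and names are illustrative assumptions; only the decision order (disk cache for VCR operations, CDN for the first 30 seconds, then peers with CDN fallback) comes from the text.

```python
# Sketch of QQLive-style play-stage source selection
# (structure and names are assumptions, not QQLive's actual code).

def select_source(elapsed_s, in_disk_cache, peers_with_content):
    if in_disk_cache:
        return "disk"   # VCR ops on cached content: no network needed
    if elapsed_s < 30:
        return "cdn"    # bootstrap phase: HTTP on port 80, unencrypted
    if peers_with_content:
        return "peers"  # encrypted UDP transfer from other peers
    return "cdn"        # fallback when no peer owns the content
```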

3.2.  Tree-based P2P streaming applications

   In tree-based P2P streaming applications peers self-organize in a
   tree-shaped overlay network, where peers do not ask for a specific
   chunk, but simply receive it from their so called "parent" node.
   Such a content delivery model is denoted as push-based.  Receiving
   peers are denoted as children, whereas sending nodes are denoted as
   parents.  The overhead to maintain the overlay topology is usually
   lower for tree-based streaming applications than for mesh-based
   streaming applications, whereas performance in terms of scalability
   and delay is usually higher.  On the other side, the greatest
   drawback of this type of application lies in the fact that each
   node depends on one single node, its parent in the overlay tree, to
   receive the streamed content.  Thus, tree-based streaming
   applications suffer from the peer churn phenomenon more than
   mesh-based ones.

3.2.1.  End System Multicast (ESM)

   Even though End System Multicast (ESM) project is ended by now and
   ESM infrastructure is not being currently implemented anywhere, we
   decided to include it in this survey for a twofold reason.  First of
   all, it was probably the first and most significant research work
   proposing the possibility of implementing multicast functionality at
   end hosts in a P2P way.  Secondly, ESM research group at Carnegie
   Mellon University developed the world's first P2P live streaming
   system, system of
   the world, and some members founded later Conviva [conviva] live

   The main property of ESM is that it constructs the multicast tree in
   a two-step process.  The first step aims at the construction of a
   mesh among participating peers, whereas the second step aims at the
   construction of data delivery trees rooted at the stream source.
   Therefore a peer participates in two types of topology management
   structures: a control structure that guarantees peers are always
   connected in a mesh, and a data delivery structure that guarantees
   data gets delivered in an overlay multicast tree.

   There exist two versions of ESM.

   The first version of ESM architecture [ESM1] was conceived for small
   scale multi-source conferencing applications.  Regarding the mesh
   construction phase, when a new member wants to join the group, an
   out-of-bandwidth bootstrap mechanism provides the new member with a
   list of some group members.  The new member randomly selects a few
   group members as peer neighbors.  The number of selected neighbors
   never exceeds a given bound, which reflects the bandwidth of the
   peer's connection to the Internet.  Each peer periodically emits a
   refresh message with monotonically increasing sequence number, which
   is propagated across the mesh in such a way that each peer can
   maintain a list of all the other peers in the system.  When a peer
   leaves, either it notifies its neighbors and the information is
   propagated across the mesh to all the participating peers, or peer
   neighbors detect the condition of abrupt departure and propagate it
   through the mesh.  To improve mesh/tree quality, on the one side
   peers constantly and randomly probe each other to add new links; on
   the other side, peers continually monitor existing links to drop the
   ones that are not perceived as good-quality links.  This is done
   thanks to the evaluation of a utility function and a cost function,
   which are conceived to guarantee that the shortest overlay delay
   between any pair of peers is comparable to the unicast delay among
   them.  Regarding multicast tree construction phase, peers run a
   distance-vector protocol on top of the tree and use latency as
   routing metric.  In this way, data delivery trees may be constructed
   from the reverse shortest path between source and recipients.
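The membership maintenance described above can be sketched as follows.  The data layout is an illustrative assumption; only the monotonically increasing sequence numbers and their propagation across the mesh come from the text.

```python
# Sketch of ESM's mesh membership maintenance: each peer floods
# refresh messages carrying a monotonically increasing sequence
# number, and every peer keeps the highest number heard per member.

def apply_refresh(membership, peer, seq):
    """Accept a refresh only if its sequence number is newer than the
    one already recorded; return True when it must be propagated
    further across the mesh (duplicates are suppressed)."""
    if seq > membership.get(peer, -1):
        membership[peer] = seq
        return True
    return False
```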

   The second and subsequent version of ESM architecture [ESM2] was
   conceived for an operational large scale single-source Internet
   broadcast system.  As regards the mesh construction phase, a node
   joins the system by contacting the source and retrieving a random
   list of already connected nodes.  Information on active participating
   peers is maintained thanks to a gossip protocol: each peer
   periodically advertises to a randomly selected neighbor a subset of
   nodes he knows and the last timestamps it has heard for each known
   node.  The main difference with the first version is that the second
   version constructs and maintains the data delivery tree in a
   completely distributed manner according to the following criteria: i)
   each node maintains a degree bound on the maximum number of children
   it can accept depending on its uplink bandwidth, ii) tree is
   optimized mainly for bandwidth and secondarily for delay.  To this
   end, a parent selection algorithm allows identifying among the
   neighbors the one that guarantees the best performance in terms of
   throughput and delay.  The same algorithm is also applied either if a
   parent leaves the system or if a node is experiencing poor
   performance (in terms of both bandwidth and packet loss).  As a
   loop prevention mechanism, each node also keeps the information
   about the hosts in the path between the source and its parent node.
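The distributed parent selection described above can be sketched as follows.  The candidate record layout is an illustrative assumption; the degree bound, the loop check against the root path, and the bandwidth-then-delay preference come from the text.

```python
# Sketch of ESM's parent selection: skip candidates whose path to the
# source already contains us (loop prevention) or that have no free
# child slot (degree bound); prefer higher bandwidth, then lower delay.

def choose_parent(me, candidates):
    """candidates: list of dicts with keys
    'id', 'root_path', 'children', 'degree_bound', 'bw', 'delay'."""
    eligible = [c for c in candidates
                if me not in c["root_path"]             # no loops
                and c["children"] < c["degree_bound"]]  # free slot
    if not eligible:
        return None
    best = max(eligible, key=lambda c: (c["bw"], -c["delay"]))
    return best["id"]
```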

   This second ESM prototype is also able to cope with receiver
   heterogeneity and presence of NAT/firewalls.  In more detail, audio
   stream is kept separated from video stream and multiple bit-rate
   video streams are encoded at source and broadcast in parallel though
   the overlay tree.  Audio is always prioritized over video streams,
   and lower quality video is always prioritized over high quality
   video.  In this way, system can dynamically select the most suitable
   video stream according to receiver bandwidth and network congestion
   level.  Moreover, in order to take into account the presence of
   hosts behind NAT/firewalls, the tree is structured in such a way
   that public hosts use hosts behind NAT/firewalls as parents.

3.3.  Hybrid P2P streaming applications

   This type of applications aims at integrating the main advantages of
   mesh-based and tree-based approaches.  To this end, overlay topology
   is mixed mesh-tree, and content delivery model is push-pull.

3.3.1.  New Coolstreaming

   Coolstreaming, first released in summer 2004 with a mesh-based
   structure, arguably represented the first successful large-scale
   P2P live streaming system.  Nevertheless, it suffered from poor
   delay performance and from the high overhead associated with each
   video block transmission.  In the attempt to overcome such
   limitations, New Coolstreaming [NEWCOOLStreaming] adopts a hybrid
   mesh-tree overlay structure and a hybrid pull-push content delivery
   mechanism.

   Figure 6 illustrates the New Coolstreaming architecture.
                  |            +---------+      |
                  |            | Tracker |      |
                  |            +---------+      |
                  |                  |          |
                  |                  |          |
                  |   +---------------------+   |
                  |   |    Content server   |   |
                  |   +---------------------+   |
                        /                     \
                       /                       \
                      /                         \
                     /                           \
               +---------+                   +---------+
               |  Peer1  |                   |  Peer2  |
               +---------+                   +---------+
                /      \                       /      \
               /        \                     /        \
              /          \                   /          \
         +---------+  +---------+     +---------+  +---------+
         |  Peer2  |  |  Peer3  |     |  Peer1  |  |  Peer3  |
         +---------+  +---------+     +---------+  +---------+

                Figure 6, New Coolstreaming Architecture

   The video stream is divided into equal-size blocks or chunks, which
   are assigned with a sequence number to implicitly define the playback
   order in the stream.  Video stream is subdivided into multiple sub-
   streams without any coding, so that each node can retrieve any sub-
   stream independently from different parent nodes.  This consequently
   reduces the impact on content delivery due to a parent departure or
   failure.  The details of hybrid push-pull content delivery scheme are
   as follows:

      a node first subscribes to a sub-stream by sending a single
      request (pull) in its buffer map to one of its partners, which
      thus becomes the requested partner, i.e., the parent node.  The
      node can subscribe to more sub-streams from its partners in this
      way to obtain higher playback quality;

      the selected parent node will continue pushing all blocks of the
      sub-stream to the requesting node.

   This not only reduces the overhead associated with each video block
   transfer, but more importantly it significantly reduces the delay in
   retrieving video content.

   Video content is processed for ease of delivery, retrieval, storage
   and play out.  To manage content delivery, a video stream is divided
   into blocks with equal size, each of which is assigned a sequence
   number to represent its playback order in the stream.  Each block is
   further divided into K sub-blocks and arguably represented the set of i-th sub-blocks of
   all blocks constitutes first successful large-scale P2P
   live streaming.  Nevertheless, it suffers poor delay performance and
   high overhead associated with each video block transmission.  In the i-th sub-stream
   attempt of the video stream, where
   i is overcoming such a value bigger than 0 limitation, New Coolstreaming
   [NEWCOOLStreaming] adopts a hybrid mesh-tree overlay structure and less than K+1.  To retrieve video
   content, a node receives at most K distinct sub-streams from its
   parent nodes.  To store retrieved sub-streams,
   hybrid pull-push content delivery mechanism.

   Like in the old Coolstreaming, a newly joined node uses a double
   buffering scheme having contacts a synchronization buffer special
   bootstrap node and retrieves a cache buffer. partial list of active nodes in the

   The synchronization buffer stores interaction with bootstrap node is the received sub-blocks of each
   sub-stream according only one related to the associated block sequence number
   tracker protocol.  The rest of the
   video stream. New Coolstreaming interactions are
   related to peer protocol.

   The cache buffer newly joined node then picks up establishes a partnership with few active
   nodes by periodically exchanging information on content availability.
   Streaming content is divided in New Coolstreaming in equal-size
   blocks or chunks, which are unambiguously associated with sequence
   numbers that represent the sub-blocks
   according playback order.  Chunks are then grouped
   to form multiple sub-streams.

   Like in most P2P streaming applications, information on content
   availability is exchanged in the form of buffer-maps.  However, New
   Coolstreaming buffer-maps differ from the usual format of strings of
   bits where each bit represents the availability of a chunk.  A New
   Coolstreaming buffer-map consists instead of two vectors.  The first
   reports, for each sub-stream, the sequence number of the last chunk
   received.  The second vector is used to explicitly request chunks
   from partner peers.  In more detail, the second vector has as many
   bits as sub-streams, and a peer receiving a bit set to "1" in
   correspondence of a given sub-stream is requested to upload chunks
   belonging to that sub-stream.  Since chunks are explicitly requested,
   data delivery may be regarded as pull-based.  However, data delivery
   is push-based as well, since every time a node is requested one or
   more chunks, it uploads all subsequent chunks for that sub-stream,
   starting from the one indicated in the first vector of the received
   buffer-map.
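
   A minimal sketch of this two-vector exchange follows.  The field
   names, the value of K, and the assumption that chunks are assigned to
   sub-streams round-robin (so the next chunk of a sub-stream is K
   sequence numbers ahead) are illustrative only and not part of the
   protocol description.

```python
# Illustrative sketch of a New Coolstreaming-style two-vector
# buffer-map.  Field names, K = 4, and the round-robin chunk layout
# are assumptions made for illustration.

K = 4  # assumed number of sub-streams


def make_buffer_map(last_received, requested):
    """last_received: per-sub-stream sequence number of the last chunk
    received (first vector); requested: per-sub-stream request bits,
    1 meaning "please push this sub-stream" (second vector)."""
    assert len(last_received) == K and len(requested) == K
    return {"last_received": list(last_received),
            "requested": list(requested)}


def chunks_to_push(buffer_map, highest_available):
    """On receiving a buffer-map, the sender pushes, for every
    sub-stream whose request bit is set, all subsequent chunks of that
    sub-stream starting after the one indicated in the first vector."""
    pushes = {}
    for s in range(K):
        if buffer_map["requested"][s]:
            start = buffer_map["last_received"][s] + K  # next chunk of s
            pushes[s] = list(range(start, highest_available + 1, K))
    return pushes
```

   For example, a peer that requested sub-streams 0 and 3 and last
   received chunks 0 and 3 on them would, under these assumptions, be
   pushed chunks 4, 8 and 7, 11 respectively when the sender holds
   chunks up to sequence number 11.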

   In order to improve the quality of the mesh-tree overlay, each node
   continuously monitors the quality of the active connections in terms
   of mutual delay between sub-streams.  If such quality drops below a
   predefined threshold, the New Coolstreaming node selects a new parent
   among its partners.  Parent re-selection is also applied in case the
   previous parent leaves the overlay.
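
   The monitoring rule above can be sketched as follows.  The concrete
   metric (lag of each parent's sub-stream behind the most advanced one)
   and the threshold value are assumptions; the survey only states that
   quality is evaluated as mutual delay between sub-streams against a
   predefined threshold.

```python
# Illustrative sketch of parent quality monitoring in New
# Coolstreaming.  The lag metric and the threshold value are
# assumptions made for illustration.

MAX_MUTUAL_DELAY = 5  # assumed threshold, in chunk sequence numbers


def parents_to_replace(last_received_per_parent):
    """last_received_per_parent maps a parent id to the sequence number
    of the last chunk received on the sub-stream served by that parent.
    A parent lagging more than MAX_MUTUAL_DELAY behind the most
    advanced sub-stream is marked for replacement."""
    newest = max(last_received_per_parent.values())
    return [p for p, seq in last_received_per_parent.items()
            if newest - seq > MAX_MUTUAL_DELAY]


# parents_to_replace({"A": 100, "B": 98, "C": 90}) -> ["C"]
```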

4.  Security Considerations

   This document does not raise security issues.

5.  Author List


   The other authors of this document are listed below.

      Hui Zhang, NEC Labs America.

      Jun Lei, University of Goettingen.

      Gonzalo Camarillo, Ericsson.

      Yong Liu, Polytechnic University.

      Delfin Montuno, Huawei.

      Lei Xie, Huawei.

6.  Acknowledgments

   We would like to acknowledge Jiang Xingfeng for providing good ideas
   for this document.

7.  Informative References

   [Octoshape] Alstrup, Stephen, et al., "Introducing Octoshape-a new
   technology for large-scale streaming over the Internet".

   [CNN] CNN web site, http://www.cnn.com

   [PPLive] PPLive web site, http://www.pplive.com

   [P2PIPTVMEA] Silverston, Thomas, et al., "Measuring P2P IPTV
   Systems", June 2007.

   [CNSR] Li, Ruixuan, et al., "Measurement Study on PPLive Based on
   Channel Popularity", May 2011.

   [Zattoo] Zattoo web site, http://www.zattoo.com

   [IMC09] Chang, Hyunseok, et al., "Live streaming performance of the
   Zattoo network", November 2009.

   [PPStream] PPStream web site, http://www.ppstream.com

   [SopCast] SopCast web site, http://www.sopcast.com/

   [tribler] Tribler Protocol Specification, January 2009, available
   online at http://svn.tribler.org/bt2-design/proto-spec-unified/

   [QQLive] QQLive web site, http://v.qq.com

   [QQLivePaper] Liju Feng, et al., "Research on active monitoring based
   QQLive real-time information Acquisition System", 2009.

   [conviva] Conviva web site, http://www.conviva.com

   [ESM1] Chu, Yang-hua, et al., "A Case for End System Multicast", June
   2000.  (http://esm.cs.cmu.edu/technology/papers/)

   [ESM2] Chu, Yang-hua, et al., "Early Experience with an Internet
   Broadcast System Based on Overlay Multicast", June 2004.

   [NEWCOOLStreaming] Li, Bo, et al., "Inside the New Coolstreaming:
   Principles, Measurements and Performance Implications", April 2008.

Authors' Addresses

   Gu Yingjie
   No.101 Software Avenue
   Nanjing  210012

   Phone: +86-25-56624760
   Fax:   +86-25-56624702

   Email: guyingjie@gmail.com

   Zong Ning (editor)
   No.101 Software Avenue
   Nanjing  210012

   Phone: +86-25-56624760
   Fax:   +86-25-56624702
   Email: zongning@huawei.com

   Zhang Yunfei
   China Mobile

   Email: hishigh@gmail.com

   Francesca Lo Piccolo
   Via del Serafico 200
   Rome  00142

   Phone: +39-06-51645136
   Email: flopicco@cisco.com

   Duan Shihui
   No.52 HuaYuan BeiLu
   Beijing  100191

   Phone: +86-10-62300068
   Email: duanshihui@catr.cn