PPSP                                                          Y. Gu, Ed.
Internet-Draft                                              N. Zong, Ed.
Intended status: Standards Track                                  Huawei
Expires: August 29, 2013                                        Y. Zhang
                                                            China Mobile
                                                              F. Piccolo
                                                                   Cisco
                                                                 S. Duan
                                                                    CATR
                                                       February 25, 2013

                  Survey of P2P Streaming Applications
                       draft-ietf-ppsp-survey-04

Abstract

   This document presents a survey of some of the most popular
   Peer-to-Peer (P2P) streaming applications on the Internet.  Main
   selection criteria were popularity and availability of information
   on operation details at writing time.  In doing this, selected
   applications will not be reviewed as a whole, but we will focus
   exclusively on the signaling and control protocols used to establish
   and maintain overlay connections among peers and to advertise and
   download streaming content.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 29, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminologies and concepts
   3.  Classification of P2P Streaming Applications Based on Overlay
       Topology
     3.1.  Mesh-based P2P Streaming Applications
       3.1.1.  Octoshape
       3.1.2.  PPLive
       3.1.3.  Zattoo
       3.1.4.  PPStream
       3.1.5.  SopCast
       3.1.6.  Tribler
       3.1.7.  QQLive
     3.2.  Tree-based P2P streaming applications
       3.2.1.  End System Multicast (ESM)
     3.3.  Hybrid P2P streaming applications
       3.3.1.  New Coolstreaming
   4.  Security Considerations
   5.  Author List
   6.  Acknowledgments
   7.  Informative References
   Authors' Addresses

1.  Introduction

   An ever increasing number of multimedia streaming systems have been
   adopting the Peer-to-Peer (P2P) paradigm to stream multimedia audio
   and video content from a source to a large number of end users.
   This is the reference scenario of this document, which presents a
   survey of some of the most popular P2P streaming applications
   available on today's Internet.  The presented survey does not aim at
   being exhaustive.  Reviewed applications have been selected mainly
   based on their popularity and on the information publicly available
   on P2P operation details at writing time.

   In addition, selected applications are not reviewed as a whole, but
   with exclusive focus on the signaling and control protocols used to
   construct and maintain the overlay connections among peers and to
   advertise and download multimedia content.  More precisely, we
   assume throughout the document the high level system model reported
   in Figure 1.
                          +--------------------------------+
                          |              Tracker           |
                          |   Information on multimedia    |
                          |   content and peer set         |
                          +--------------------------------+
                             ^  |                    ^  |
                             |  |                    |  |
                      Tracker |  |            Tracker |  |
                    Protocol |  |           Protocol |  |
                             |  |                    |  |
                             |  |                    |  |
                             |  V                    |  V
                       +-------------+          +------------+
                       |    Peer1    |<-------->|  Peer 2    |
                       +-------------+   Peer   +------------+
                                       Protocol

           Figure 1, High level model of P2P streaming systems assumed
                       as reference throughout the document

   As Figure 1 shows, it is possible to identify in every P2P streaming
   system two main types of entity: peers and trackers.  Peers
   represent end users, which dynamically join the system to send and
   receive streamed media content, whereas trackers represent
   well-known nodes, which are stably connected to the system and
   provide peers with metadata information about the streamed content
   and the set of active peers.  According to this model, it is
   possible to distinguish between two different control and signaling
   protocols:

      the protocol that regulates the interaction between trackers and
      peers, denoted as "tracker protocol" in this document;

      the protocol that regulates the interaction between peers,
      denoted as "peer protocol" in this document.

   Hence, whenever possible, we will identify tracker and peer
   protocols and provide the corresponding details.
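
   As an illustration of this model, the following Python fragment is a
   minimal, hypothetical sketch of a tracker that stores the peer set
   per channel and of a peer that queries it before speaking the peer
   protocol.  All class and method names are illustrative assumptions
   and do not belong to any of the surveyed applications.

      class Tracker:
          """Well-known node holding metadata: channel -> active peers."""
          def __init__(self):
              self.channels = {}

          def join(self, channel, peer_addr):
              # Tracker protocol: register a peer, return current peer set.
              peers = self.channels.setdefault(channel, set())
              others = set(peers)
              peers.add(peer_addr)
              return others

          def leave(self, channel, peer_addr):
              self.channels.get(channel, set()).discard(peer_addr)

      class Peer:
          def __init__(self, addr, tracker):
              self.addr = addr
              self.tracker = tracker
              self.neighbors = set()

          def join_channel(self, channel):
              # Tracker protocol: learn about the other active peers.
              self.neighbors = self.tracker.join(channel, self.addr)
              # Peer protocol (not sketched): contact neighbors to
              # advertise and download chunks of the streamed content.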

   This document is organized as follows.  Section 2 introduces the
   terminology and concepts used throughout the survey.  Since the
   overlay topology built on connections among peers impacts some
   aspects of tracker and peer protocols, Section 3 classifies P2P
   streaming applications according to the main overlay topologies:
   mesh-based, tree-based and hybrid.  Section 3.1 then presents some
   of the most popular mesh-based P2P streaming applications:
   Octoshape, PPLive, Zattoo, PPStream, SopCast, Tribler and QQLive.
   Likewise, Section 3.2 presents End System Multicast as an example of
   a tree-based P2P streaming application.  Finally, Section 3.3
   presents New Coolstreaming as an example of a hybrid-topology P2P
   streaming application.

2.  Terminologies and concepts

   Chunk: A chunk is a basic unit of data organized in P2P streaming
   for storage, scheduling, advertisement and exchange among peers.

   Live streaming: It refers to a scenario where all the audiences
   receive streaming content for the same ongoing event.  It is desired
   that the lags between the play points of the audiences and the
   streaming source be small.

   Peer: A peer refers to a participant in a P2P streaming system that
   not only receives streaming content, but also caches and streams
   streaming content to other participants.

   Peer protocol: Control and signaling protocol that regulates the
   interaction among peers.

   Pull: Transmission of multimedia content only if requested by the
   receiving peer.

   Push: Transmission of multimedia content without any request from
   the receiving peer.

   Swarm: A swarm refers to a group of peers who exchange data to
   distribute chunks of the same content at a given time.

   Tracker: A tracker refers to a directory service that maintains a
   list of peers participating in a specific audio/video channel or in
   the distribution of a streaming file.

   Tracker protocol: Control and signaling protocol that regulates the
   interaction between peers and trackers.

   Video-on-demand (VoD): It refers to a scenario where different
   audiences may watch different parts of the same recorded streaming
   with downloaded content.

3.  Classification of P2P Streaming Applications Based on Overlay
    Topology

   Depending on the topology of the overlay connections among peers, it
   is possible to distinguish among the following general types of P2P
   streaming applications (a brief sketch contrasting push-based and
   pull-based delivery follows the list):

      - tree-based: peers are organized to form a tree-shaped overlay
      network rooted at the streaming source, and multimedia content
      delivery is push-based.  Peers that forward data are called
      parent nodes, and peers that receive it are called children
      nodes.  Due to their structured nature, tree-based P2P streaming
      applications present a very low cost of topology maintenance and
      are able to guarantee good performance in terms of scalability
      and delay.  On the other hand, they are not very resilient to
      peer churn, which may be very high in a P2P environment;

      - mesh-based: peers are organized in a randomly connected overlay
      network, and multimedia content delivery is pull-based.  This is
      the reason why these systems are also referred to as
      "data-driven".  Due to their unstructured nature, mesh-based P2P
      streaming applications are very resilient to peer churn and are
      able to guarantee higher network resource utilization than
      tree-based applications.  On the other hand, the cost of
      maintaining the overlay topology may limit performance in terms
      of scalability and delay, and pull-based data delivery calls for
      large buffers in which to store chunks;

      - hybrid: this category includes all the P2P applications that
      cannot be classified as simply mesh-based or tree-based and that
      present characteristics of both the mesh-based and tree-based
      categories.
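
   The following Python fragment is a minimal, hypothetical sketch
   contrasting the two delivery modes; the Node structure and the
   function names are illustrative assumptions only.

      from dataclasses import dataclass, field

      @dataclass
      class Node:
          name: str
          chunks: set = field(default_factory=set)       # chunk ids held
          children: list = field(default_factory=list)   # tree-based role
          neighbors: list = field(default_factory=list)  # mesh-based role

      def push(parent: Node, chunk_id: int) -> None:
          """Tree-based, push delivery: forward each chunk to all
          children without any request, cascading down the tree."""
          for child in parent.children:
              child.chunks.add(chunk_id)
              push(child, chunk_id)

      def pull(peer: Node, wanted: set) -> None:
          """Mesh-based, pull delivery: request only missing chunks
          from neighbors that already hold them (as advertised, for
          example, via buffer-maps)."""
          for chunk_id in wanted - peer.chunks:
              for neighbor in peer.neighbors:
                  if chunk_id in neighbor.chunks:
                      peer.chunks.add(chunk_id)
                      break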

3.1.  Mesh-based P2P Streaming Applications

   In mesh-based P2P streaming applications peers self-organize in a
   randomly connected overlay graph where each peer interacts with a
   limited subset of peers (neighbors) and explicitly requests the
   chunks it needs (pull-based or data-driven delivery).  This type of
   content delivery may be associated with high overhead, not only
   because peers formulate requests in order to download the chunks
   they need, but also because in some applications peers exchange
   information about the chunks they own (in the form of so-called
   buffer-maps, a sort of bit map with a bit set to "1" for each chunk
   stored in the local buffer).  The main advantage of this kind of
   application lies in the fact that a peer does not rely on a single
   peer for retrieving multimedia content.  Hence, these applications
   are very resilient to peer churn.  On the other hand, overlay
   connections are not persistent and highly dynamic (being driven by
   content availability), and this makes content distribution
   efficiency unpredictable.  In fact, different chunks may be
   retrieved via different network paths, and this may translate, at
   end users, into playback quality degradation ranging from low bit
   rates, to long startup delays, to frequent playback freezes.
   Moreover, peers have to maintain large buffers to increase the
   probability of satisfying chunk requests received from neighbors.

3.1.1.  Octoshape

   Octoshape [Octoshape] is best known as the P2P plug-in used by CNN
   [CNN] to broadcast its live streaming.  Octoshape helps CNN serve a
   peak of more than a million simultaneous viewers.  Octoshape has
   also provided several innovative delivery technologies such as loss
   resilient transport, adaptive bit rate, adaptive path optimization
   and adaptive proximity delivery.

   Figure 2 depicts the architecture of the Octoshape system.

            +------------+   +--------+
            |   Peer 1   |---| Peer 2 |
            +------------+   +--------+
                 |    \    /      |
                 |     \  /       |
                 |      \         |
                 |     / \        |
                 |    /   \       |
                 |  /      \      |
      +--------------+    +-------------+
      |     Peer 4   |----|    Peer3    |
      +--------------+    +-------------+
      *****************************************
                         |
                         |
                 +---------------+
                 | Content Server|
                 +---------------+

      Figure 2, Architecture of Octoshape system

   As it can be seen from the Peer protocol picture, there are no trackers and Tracker
   consequently no tracker protocol of Joost.

   Installation: Backend server is involved in necessary.

   As regards the installation phase.
   Backend server provides peer with an initial channel list in protocol, as soon as a SQLite
   file.  No peer joins a channel, it
   notifies all the other parameters, such as local cache, node ID, or
   listening port, are configured peers about its presence, in this file.

   Bootstrapping: In case of such a newcomer, Tracker server provides several
   super node addresses and possibly some content server addresses.
   Then the way that
   each peer connects Version server for the latest software
   version.  Later, maintains a sort of address book with the peer starts to connect some super nodes information
   necessary to
   obtain the list of contact other available peers and begins streaming video
   contents.  Super nodes who are watching the same channel.
   Although Octoshape inventors claim in Joost only deal with control and [Octoshape] that each peer
   management traffic.  They do not relay/forward any media data.

   When Joost is first launched,
   records all peers joining a login mechanism channel, we suspect that it is initiated using
   HTTPS and TLSv1.  After, a TCP synchronization, the client
   authenticates with a certificate to the login server.  Once the login
   process is done, very
   unlikely that all peers are recorded.  In fact, the client first contacts corresponding
   overhead traffic would be large, especially when a supernode, which address
   is hard coded popular program
   starts in Joost binary to get a list channel and lots of peers and a Joost
   Seeder switch to contact.  Of course, this depends on channel.  Maybe
   only some geographic or topological neighbors are notified and the channel chosen by
   joining peer gets the user.  Once launched, address book from these nearby neighbors.

   Regarding the data distribution strategy, in the Octoshape solution
   the original stream is split into a number K of smaller equal-sized
   data streams, but a number N > K of unique data streams is actually
   constructed, in such a way that a peer receiving any K of the N
   available data streams is able to play the original stream.  For
   instance, if the original live stream is a 400 kbit/sec signal, for
   K=4 and N=12, 12 unique data streams are constructed, and a peer
   that downloads any 4 of the 12 data streams is able to play the live
   stream.  In this way, each peer sends requests for data streams to
   some selected peers, and it receives positive/negative answers
   depending on the availability of upload capacity at the requested
   peers.  In case of negative answers, a peer continues sending
   requests until it finds K peers willing to upload the minimum number
   of data streams needed to redisplay the original live stream.  Since
   the number of peers served by a given peer is limited by its upload
   capacity, the upload capacity at each peer should be larger than the
   playback rate of the live stream.  Otherwise, artificial peers may
   be added to offer extra bandwidth.
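
   A hypothetical sketch of this K-of-N acquisition loop is given
   below: a peer keeps asking candidate senders for distinct data
   streams until it holds any K of the N constructed streams.  The
   request() call and the overall structure are assumptions made for
   illustration, not Octoshape's actual API.

      import random

      def acquire_streams(candidates, k, n):
          """Ask peers for data streams until any k of the n available
          streams are granted; return the stream -> sender mapping, or
          None if the swarm cannot supply enough streams."""
          granted = {}
          for peer in candidates:
              if len(granted) == k:
                  break
              missing = [s for s in range(n) if s not in granted]
              stream_id = random.choice(missing)
              if peer.request(stream_id):    # positive/negative answer,
                  granted[stream_id] = peer  # based on upload capacity
          return granted if len(granted) == k else None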

   In order to mitigate the impact of peer loss, the address book is
   also used at each peer to derive the so called Standby List, which
   Octoshape peers use to probe other peers and be sure that they are
   ready to take over if one of the current senders leaves or gets
   congested.

   Finally, in order to optimize bandwidth utilization, Octoshape
   leverages peers within a network to minimize external bandwidth
   usage and to select the most reliable and "closest" source for each
   viewer.  It also chooses the best matching available codecs and
   players, and it scales the bit rate up and down according to the
   available internet connection.

3.1.2.  PPLive

   PPLive [PPLive] is one of the most popular P2P streaming
   applications in China.  The PPLive system includes six parts.

   (1) Video streaming server: provides the source of video content and
   encodes the content to adapt to the network transmission rate and
   the client playback.

   (2) Peer: also called node or client.  The peers logically compose
   the self-organizing network, and each peer can join or leave at any
   time.  While a client downloads content, it also provides its own
   content to other clients at the same time.

   (3) Directory server: server with which the PPLive client, when
   launched or shut down by the user, automatically registers or
   cancels user information.

   (4) Tracker server: server that records the information of all users
   watching the same content.  In more detail, when the PPLive client
   requests some content, this server checks whether there are other
   peers owning the content and sends that information to the client.

   (5) Web server: provides PPLive software updating and downloading.

   (6) Channel list server: server that stores the information of all
   the programs that can be watched by end users, including VoD
   programs and live broadcasting programs.

   PPLive uses two major communication protocols.  The first one is the
   Registration and Peer Discovery protocol, the equivalent of the
   tracker protocol, and the second one is the P2P Chunk Distribution
   protocol, the equivalent of the peer protocol.  Figure 3 shows the
   architecture of the PPLive system.

             +------------+    +--------+
             |   Peer 2   |----| Peer 3 |
             +------------+    +--------+
                      |          |
                      |          |
                     +--------------+
                     |    Peer 1    |
                     +--------------+
                             |
                             |
                     +---------------+
                     | Tracker Server|
                     +---------------+

       Figure 3, Architecture of PPlive system

   As regards the tracker protocol, a peer first gets the channel list
   from the Channel list server; it then chooses a channel and asks the
   Tracker server for a peer-list associated with the selected channel.

   As regards the peer protocol, a peer contacts the peers in its
   peer-list to get additional peer-lists, to be merged with the
   original one received by the Tracker server, with the goal of
   constructing and maintaining an overlay mesh for peer management and
   data delivery.  According to [P2PIPTVMEA], PPLive peers maintain a
   constant peer-list with a relatively small number of peers.

   For the upload capacity video-on-demand (VoD) operation, because different peers
   watch different parts of the particular peer.  To get
   the best performance, the upload capacity of channel, a peer should be larger
   than the playback rate of the live stream.  If not, an artificial
   peer may be added buffers chunks up to deliver extra bandwidth.

   Each single peer has an address book a
   few minutes of other peers who is watching
   the same channel.  A Standby list is set up based on the address
   book.  The peer periodically probes/asks the peers in the standby
   list to content within a sliding window.  Some of these chunks
   may be sure chunks that they are ready to take over if one of have been recently played; the
   current senders stops or gets congested.  [Octoshape]

   Peer Protocol: The live stream is firstly sent remaining chunks
   are chunks scheduled to a few peers be played in the
   network and then spread next few minutes.  In order
   to the rest of the network.  When upload chunks to each other, peers exchange "buffer-map" messages.
   A buffer-map message indicates which chunks a peer
   joins a channel, currently has
   buffered and can share, and it notifies all the other peers about its presence
   using Peer Protocol, which will drive includes the others to add it into their
   address books.  Although [Octoshape] declares that each peer records
   all offset (the ID of the peers joining
   first chunk), the channel, we suspect that not all length of the peers buffer map, and a string of zeroes
   and ones indicating which chunks are recorded, considering available (starting with the notification traffic will be large and
   peers will
   chunk designated by the offset).  PPlive transfer Data over UDP.
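
   The following Python fragment sketches one plausible encoding of
   such a buffer-map message (offset, length, bit string).  The actual
   PPLive wire format is not public, so the field sizes below are
   illustrative assumptions.

      import struct

      def encode_buffer_map(offset, availability):
          """offset: ID of the first chunk in the sliding window;
          availability: list of booleans, one per chunk."""
          bits = ''.join('1' if a else '0' for a in availability)
          padded = bits + '0' * (-len(bits) % 8)   # pad to whole bytes
          payload = bytes(int(padded[i:i + 8], 2)
                          for i in range(0, len(padded), 8))
          # Assumed header: 4-byte offset, 2-byte map length in bits.
          return struct.pack('!IH', offset, len(bits)) + payload

      def decode_buffer_map(msg):
          offset, nbits = struct.unpack('!IH', msg[:6])
          bits = ''.join(f'{b:08b}' for b in msg[6:])[:nbits]
          return offset, [bit == '1' for bit in bits]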

   The download policy of PPLive may be summarized with the following
   three points:

      top-ten peers contribute to a major part of the download traffic.
      Meanwhile, the session with a top-ten peer is quite short, if
      compared with the video session duration.  This would suggest
      that PPLive gets video from only a few peers at any given time,
      and switches periodically from one peer to another;

      PPLive can send multiple chunk requests for different chunks to
      one peer at one time;

      PPLive is observed to have the download scheduling policy of
      giving higher priority to rare chunks and to chunks closer to the
      play out deadline.

3.1.3.  Zattoo

   Zattoo [Zattoo] is a P2P live streaming system which serves over 3
   million registered users in European countries.  The system delivers
   live streaming using a receiver-based, peer-division multiplexing
   scheme.  Zattoo reliably streams media among peers using the mesh
   structure.

   Figure 4 depicts a typical procedure of a single TV channel carried
   over the Zattoo network.  First, the Zattoo system broadcasts a live
   TV channel, captured from satellites, onto the Internet.  Each TV
   channel is delivered through a separate P2P network.

      -------------------------------
      |   ------------------        |         --------
      |   |  Broadcast     |        |---------|Peer1 |-----------
      |   |  Servers       |        |         --------          |
      |   Administrative Servers    |                      -------------
      |   ------------------------  |                      | Super Node|
      |   | Authentication Server | |                      -------------
      |   | Rendezvous Server     | |                           |
      |   | Feedback Server       | |         --------          |
      |   | Other Servers         | |---------|Peer2 |----------|
      |   ------------------------| |         --------
      ------------------------------|

      Figure 4, Basic architecture of Zattoo system

   In order to receive a TV channel, users are required to be
   authenticated through the Zattoo Authentication Server.  Upon
   authentication, users obtain a ticket identifying the TV channel of
   interest, with a specific lifetime.  Then, users contact the
   Rendezvous Server, which plays the role of tracker and, based on the
   received ticket, sends back a list of peers carrying the channel.

   As regards the peer protocol, a peer establishes overlay connections
   with other peers randomly selected in the peer-list received by the
   Rendezvous Server.

   For reliable data delivery, each live stream is partitioned into
   video segments.  Each video segment is coded for forward error
   correction with a Reed-Solomon error correcting code into n
   sub-stream packets, such that having obtained k correct packets of a
   segment is sufficient to reconstruct the remaining n-k packets of
   the same video segment.  To receive a video segment, each peer then
   specifies the sub-stream(s) of the video segment it would like to
   receive from the neighboring peers.

   Peers decide how to multiplex a stream among their neighboring peers
   based on the availability of upload bandwidth.  With reference to
   this aspect, Zattoo peers rely on the Bandwidth Estimation Server to
   initially estimate the amount of available uplink bandwidth at a
   peer.  Once a peer starts to forward a sub-stream to other peers, it
   receives QoS feedback from its receivers if the quality of the
   sub-stream drops below a threshold.

   Zattoo uses an Adaptive Peer-Division Multiplexing (PDM) scheme to
   handle longer term bandwidth fluctuations.  According to this
   scheme, each peer determines how many sub-streams to transmit and
   when to switch partners.  Specifically, each peer continuously
   estimates the amount of available uplink bandwidth, based initially
   on probe packets sent to the Zattoo Bandwidth Estimation Server and
   subsequently on peer QoS feedbacks, by using different algorithms
   depending on the underlying transport protocol.  A peer increases
   its estimated available uplink bandwidth, if the current estimate is
   below some threshold and if there has been no bad quality feedback
   from neighboring peers for a period of time, according to an
   algorithm similar to how TCP maintains its congestion window size.
   Each peer then admits neighbors based on the currently estimated
   available uplink bandwidth.  In case a new estimate indicates
   insufficient bandwidth to support the existing number of peer
   connections, one connection at a time, preferably starting with the
   one requiring the least bandwidth, is closed.  On the other hand, if
   the loss rate of packets from a peer's neighbor reaches a certain
   threshold, the peer will attempt to shift the load of the degraded
   neighboring peer to other existing peers, while looking for a
   replacement peer.  When one is found, the load is shifted to it and
   the degraded neighbor is dropped.  As expected, if a peer's neighbor
   is lost due to departure, the peer initiates the process to replace
   the lost peer.  To optimize the PDM configuration, a peer may
   occasionally initiate switching existing partnering peers to
   topologically closer peers.
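
   The following Python fragment is a simplified, hypothetical sketch
   of the adaptive PDM logic described above: the uplink estimate grows
   (in a TCP-window-like fashion) while feedback is good, and
   connections are shed, least-demanding first, when a new estimate
   cannot sustain them.  All names and constants are illustrative
   assumptions, not Zattoo internals.

      GROWTH_STEP = 32      # kbit/s added after a good-feedback period
      ESTIMATE_CAP = 1024   # threshold above which the estimate stops growing

      def update_uplink_estimate(estimate, bad_feedback):
          """Periodically revise the available-uplink-bandwidth estimate."""
          if not bad_feedback and estimate < ESTIMATE_CAP:
              estimate += GROWTH_STEP       # additive increase, TCP-like
          return estimate

      def shed_connections(connections, estimate):
          """connections: dict neighbor -> sub-stream bandwidth (kbit/s).
          Close one connection at a time, starting with the one
          requiring the least bandwidth, until the estimate suffices."""
          while connections and sum(connections.values()) > estimate:
              weakest = min(connections, key=connections.get)
              del connections[weakest]
          return connections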

3.1.4.  PPStream

   The system architecture of PPStream [PPStream] is similar to the one
   of PPLive.

   To ensure data availability, PPStream uses some form of chunk
   retransmission request mechanism and shares the buffer map at a high
   rate.  Each data chunk, identified by the play time offset encoded
   by the program source, is divided into 128 sub-chunks of 8KB size
   each.  The chunk id is used to ensure sequential ordering of
   received data chunks.  The buffer map consists of one or more
   128-bit flags denoting the availability of sub-chunks, and it
   includes information on the time offset.  Usually, a buffer map
   contains only one data chunk at a time, and it also contains the
   sending peer's playback status, because as soon as a data chunk is
   played back, the chunk is deleted or replaced by the next data
   chunk.
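
   The 128-bit flag can be illustrated with the following hypothetical
   Python fragment, which tracks sub-chunk availability for one data
   chunk; representing the flag word as a Python integer is an
   assumption made purely for illustration.

      SUB_CHUNKS = 128           # sub-chunks per data chunk
      SUB_CHUNK_SIZE = 8 * 1024  # 8KB each

      def mark_received(flags, sub_chunk_id):
          """Set the availability bit for one received sub-chunk."""
          return flags | (1 << sub_chunk_id)

      def chunk_complete(flags):
          """True once all 128 sub-chunks of the data chunk arrived."""
          return flags == (1 << SUB_CHUNKS) - 1

      def missing_sub_chunks(flags):
          return [i for i in range(SUB_CHUNKS) if not flags & (1 << i)]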

   At the initiating stage a peer can use up to four data chunks,
   whereas at a stabilized stage a peer usually uses one data chunk.
   However, in a transient stage, a peer uses a variable number of
   chunks.  Sub-chunks within each data chunk are fetched nearly at
   random, without using a rarest or greedy policy.  The same fetching
   pattern for one data chunk seems to repeat itself in the subsequent
   data chunks.  Moreover, higher bandwidth PPStream peers tend to
   receive chunks earlier and thus to contribute more than lower
   bandwidth peers.

   Based on the experimental results reported in [P2PIPTVMEA], the
   download policy of PPStream may be summarized with the following two
   points:

      top-ten peers do not contribute to a large part of the download
      traffic.  This would suggest that a PPStream peer gets the video
      from many peers simultaneously, and that sessions between peers
      have a long duration;

      PPStream does not send multiple chunk requests for different
      chunks to one peer at one time; PPStream maintains a constant
      peer list with a relatively large number of peers.  [P2PIPTVMEA]

3.1.5.  SopCast

   The system architecture of SopCast [SopCast] is similar to the one
   of PPLive.

   SopCast allows for software updates via HTTP through a centralized
   web server, and it makes the list of channels available via HTTP
   through another centralized server.

   SopCast traffic is encoded, and SopCast TV channel content is
   divided into video chunks or blocks with equal sizes of 10KB.  Sixty
   percent of its traffic is signaling packets and 40% is actual video
   data packets.  SopCast produces more signaling traffic compared to
   PPLive and PPStream, with PPLive producing the minimum of signaling
   traffic.  It has been observed in [P2PIPTVMEA] that SopCast traffic
   has long-range dependency, which also means that eventual QoS
   mitigation mechanisms may be ineffective.  Moreover, according to
   [P2PIPTVMEA], the SopCast communication mechanism starts with UDP
   for the exchange of control messages among peers, by using a
   gossip-like protocol, and then moves to TCP for the transfer of
   video segments.  It also seems that top-ten peers contribute to
   about half of the total download traffic.  Finally, the SopCast
   peer-list can be as large as the PPStream peer-list, but differently
   from PPStream the SopCast peer-list varies over time.

3.1.6.  Tribler

   Tribler [tribler] is a BitTorrent client that is able to go very
   much beyond the BitTorrent model, also thanks to the support for
   video streaming.  Initially developed by a team of researchers at
   Delft University of Technology, Tribler was able to attract
   attention from other universities and media companies and to receive
   European Union research funding (P2P-Next and QLectives projects).

   Differently from BitTorrent, where a tracker server centrally
   coordinates uploads/downloads of chunks among peers and peers
   directly interact with each other only when they actually
   upload/download chunks to/from each other, there is no tracker
   server in Tribler and, as a consequence, there is no need of a
   tracker protocol.

   The peer protocol is instead used to organize peers in an overlay
   mesh.  In more detail, the Tribler bootstrap process consists in
   preloading well known super-peer addresses into the peer local
   cache, in such a way that a joining peer randomly selects a
   super-peer to retrieve a random list of already active peers to
   establish overlay connections with.  A gossip-like mechanism called
   BuddyCast allows Tribler peers to exchange their preference lists,
   that is their downloaded files, and to build the so called
   Preference Cache.  This cache is used to calculate similarity levels
   among peers and to identify the so called "taste buddies" as the
   peers with highest similarity.  Thanks to this mechanism each peer
   maintains two lists of peers: i) a list of its top-N taste buddies
   along with their current preference lists, and ii) a list of random
   peers.  So a peer alternatively selects a peer from one of the lists
   and sends it its preference list, taste-buddy list and a selection
   of random peers.  The goal behind the propagation of this kind of
   information is the support for the remote search function, a
   completely decentralized search service that consists in querying
   the Preference Cache of taste buddies in order to find the torrent
   file associated with a file of interest.  If no torrent is found in
   this way, Tribler users may alternatively resort to web-based
   torrent collector servers available for BitTorrent clients.
   clients.

   As already said, Tribler supports video streaming in two different
   forms: video on demand and working flows live streaming.

   As regards video on demand, a peer first of PPStream is similar all keeps informed its
   neighbors about the chunks it has.  Then, on the one side it applies
   suitable chunk-picking policy in order to
   PPLive [PPStream].  PPStream transfers data using mostly TCP, only
   occasionally UDP.

   Video Download Policy of PPStream

      1) Top ten peers do not contribute establish the order
   according to a large part of which to request the download
      traffic. chunks he wants to download.  This would suggest
   policy aims to assure that PPStream gets the video from
      many peers simultaneously, and its peers have long session
      duration;

      2) PPStream does not send multiple chunk requests for different chunks come to one peer at one time;

   PPStream maintains a constant peer list with relatively large number
   of peers.  [P2PIPTVMEA]

   To ensure data availability, PPStream uses some form of chunk
   retransmission request mechanism the media player in order
   and shares buffer map at high rate,
   although it rarely requests concurrently for in the same data chunk.
   Each data chunk, identified by the play time offset encoded by the
   program source, is divided into 128 sub-chunks of 8KB size each.  The that overall chunk id availability is used to ensure sequential ordering of received data
   chunk.

   The buffer map consists of one or more 128-bit flags denoting maximized.
   To this end, the
   availability of sub-chunks chunk-picking policy differentiates among high, mid
   and having a corresponding time offset.
   Usually a buffer map contains only one data chunk at a time and is
   thus smaller than that of PPLive.  It also contains sending peer's
   playback status to the other peers because as soon as a data chunk is
   played back, the chunk is deleted or replaced by the next data chunk.

   At low priority chunks depending on their closeness with the initiating stage, a peer can use up to 4 data
   playback position.  High priority chunks are requested first and on a
   stabilized stage, a peer uses usually one data chunk.  However, in
   transient stage,
   strict order.  When there are no more high priority chunks to
   request, mid priority chunks are requested according to a peer uses variable number of chunks.  Although,
   sub-chunks within each data rarest-
   first policy.  Finally, when there are no more mid priority chunks to
   request, low priority chunks are fetched nearly in random
   without using rarest or greedy policy, the same fetching pattern for
   one data chunk seems requested according to repeat in a rarest-
   first policy as well.  On the following data chunks.
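
   The priority-based picking just described can be sketched in
   Python as follows.  The window sizes and all names are invented
   for illustration; the actual Tribler values and code may differ.

      # Sketch of high/mid/low priority chunk picking.  HIGH and MID
      # delimit the priority sets relative to the playback position;
      # both window sizes are invented for illustration.

      HIGH = 10
      MID = 40

      def pick_next_chunk(playback_pos, missing, availability):
          # missing: chunk ids still to download; availability maps
          # a chunk id to the number of neighbors holding it, which
          # approximates a rarest-first choice.
          high = [c for c in missing
                  if playback_pos <= c < playback_pos + HIGH]
          if high:
              return min(high)        # strict in-order request
          mid = [c for c in missing
                 if playback_pos + HIGH <= c
                 < playback_pos + HIGH + MID]
          if mid:
              return min(mid, key=lambda c: availability.get(c, 0))
          low = [c for c in missing
                 if c >= playback_pos + HIGH + MID]
          if low:
              return min(low, key=lambda c: availability.get(c, 0))
          return None                 # nothing left to request
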
   On the other side, Tribler peers follow the give-to-get policy in
   order to establish which peer neighbors are allowed to request
   chunks (according to BitTorrent jargon, to be unchoked).  In more
   detail, time is subdivided in periods and after each period
   Tribler peers first sort their neighbors according to the
   decreasing numbers of chunks they have forwarded to other peers,
   counting only the chunks they originally received from them.  In
   case of tie, Tribler sorts the neighbors according to the
   decreasing total number of chunks they have forwarded to other
   peers.  Since children could lie regarding the number of chunks
   forwarded to others, Tribler peers do not directly ask their
   children, but their grandchildren.  In this way, a Tribler peer
   unchokes the three highest-ranked neighbors and, in order to
   saturate upload bandwidth and at the same time not decrease the
   performance of individual connections, it further unchokes a
   limited number of neighbors.  Moreover, in order to search for
   better neighbors, Tribler peers randomly select a single new peer
   among the rest of the neighbors and optimistically unchoke it
   every two periods.
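
   The ranking step of give-to-get might look like the following
   sketch; the two counters are assumed to have been collected by
   asking grandchildren, as described above, and every name is
   invented for illustration.

      # Sketch of give-to-get neighbor ranking and unchoking.
      # forwarded_from_me[n]: chunks n forwarded that it originally
      # received from us; forwarded_total[n]: all chunks n forwarded.
      # Both are assumed to be reported by n's children.

      def rank_neighbors(neighbors, forwarded_from_me,
                         forwarded_total):
          return sorted(neighbors,
                        key=lambda n: (forwarded_from_me.get(n, 0),
                                       forwarded_total.get(n, 0)),
                        reverse=True)

      def unchoke_set(neighbors, forwarded_from_me, forwarded_total):
          ranked = rank_neighbors(neighbors, forwarded_from_me,
                                  forwarded_total)
          return ranked[:3]   # the three highest-ranked neighbors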

   As regards live streaming, differently from the video on demand
   scenario, the number of chunks cannot be known in advance.  As a
   consequence a sliding window of fixed width is used to identify
   chunks of interest: every chunk that falls out of the sliding
   window is considered outdated, is locally deleted and is
   considered as deleted by peer neighbors as well.  In this way,
   when a peer joins the network, it learns about chunks its
   neighbors possess and identifies the most recent one.  This is
   assumed as the beginning of the sliding window at the joining
   peer, which starts downloading and uploading chunks according to
   the description provided for the video on demand scenario.
   Finally, differently from what happens in the video on demand
   scenario, where torrent files include a hash for each chunk in
   order to prevent malicious attackers from corrupting data, torrent
   files in the live streaming scenario include the public key of the
   stream source.  Each chunk is then assigned an absolute sequence
   number and a timestamp, and is signed with the source private key.
   Such a mechanism allows Tribler peers to use the public key
   included in the torrent file to verify the integrity of each
   chunk.
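
   The sliding-window bookkeeping can be sketched as follows; the
   window width and all names are assumptions made for illustration.

      # Sketch of the live-streaming sliding window.  WINDOW is the
      # fixed window width; the value is invented for illustration.

      WINDOW = 100

      def prune_cache(cache, newest_seen):
          # Delete every stored chunk that fell out of the window;
          # neighbors are assumed to regard such chunks as deleted.
          oldest_valid = newest_seen - WINDOW + 1
          for seq in list(cache):
              if seq < oldest_valid:
                  del cache[seq]

      def window_start_at_join(neighbor_buffer_maps):
          # A joining peer places the window start at the most
          # recent chunk advertised by any neighbor (assumes at
          # least one non-empty buffer map).
          return max(max(bm) for bm in neighbor_buffer_maps if bm)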

3.1.7.  QQLive

   QQLive [QQLive] is large-scale video broadcast software including
   streaming media encoding, distribution and broadcasting.  Its
   client can run in web, desktop or other environments and provides
   abundant interactive functions in order to meet the watching
   requirements of different kinds of users.

   Due to the lack of technical details from the QQLive vendor, we
   got some knowledge about QQLive from paper [QQLivePaper], whose
   authors did some measurements and, based on these, identified the
   main components and working flow of QQLive.

   Main components of QQLive include:

      login server, storing user login information and channel
      information;

      authentication server, processing user login authentication;

      channel server, storing all information about channels,
      including the connection nodes watching a channel;

      program server, storing audio and video data information;

      log server, recording the beginning and ending information of
      channels;

      peer, watching programs and transporting streaming media.

   Main working flow of QQLive includes startup stage and play stage.

   Startup stage includes only interactions between peers and
   centralized QQLive servers, so it may be regarded as associated
   with tracker protocol.  This stage begins when a peer launches
   QQLive client.  Peer provides authentication information in an
   authentication message, which it sends to the authentication
   server.  Authentication server verifies the provided credentials
   and, if these are valid, QQLive client starts communicating with
   login server through SSL.  QQLive client sends a message including
   QQLive account and nickname, and login server returns a message
   including information such as membership point, total view time,
   upgrading time and so on.  At this point, QQLive client requests
   channel server for updating channel list.  QQLive client firstly
   loads an old channel list stored locally and then it overwrites
   the old list with the new channel list received from channel
   server.  The full channel list is not obtained via a single
   request: QQLive client firstly requests the channel classification
   and then requests the channel list within a specific channel
   category selected by the user.  This approach gives higher
   real-time performance to QQLive.

   Play stage includes interactions between peers and centralized
   QQLive servers and between QQLive peers, so it may be regarded as
   associated to both tracker protocol and peer protocol.  In more
   detail, play stage is structured in the following phases:

      Open channel.  QQLive client sends a request message to login
      server with the ID of the chosen channel through UDP, whereas
      login server replies with a message including channel ID,
      channel name and program name.  Afterwards, QQLive client
      communicates with program server through SSL to access program
      information.  Finally QQLive client communicates with channel
      server through UDP to obtain initial peer information.

      View channel.  QQLive client establishes connections with peers
      and sends packets with fixed length of 118 bytes, which contain
      the channel ID.  QQLive client maintains communication with
      channel server by reporting its own information and obtaining
      updated information.  Peer nodes transport stream packet data
      through UDP with fixed port between 13000 and 14000.

      Stop channel.  QQLive client continuously sends five identical
      UDP packets to channel server, with each data packet fixed
      length of 93 bytes.

      Close client.  QQLive client sends a UDP message to notify log
      server and an SSL message to login server, then it continuously
      sends five identical UDP packets to channel server, with each
      data packet fixed length of 45 bytes.
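
   As a rough illustration of the View channel keep-alive described
   above, the sketch below pads a message carrying the channel ID to
   the reported fixed length of 118 bytes and sends it over UDP to a
   port in the 13000-14000 range.  Only the lengths and the port
   range come from the measurements in [QQLivePaper]; the internal
   layout chosen here (a 4-byte channel ID followed by zero padding)
   and the example address are pure assumptions.

      # Sketch of a 118-byte View-channel packet carrying the
      # channel ID.  The 4-byte-ID-plus-padding layout is an
      # assumption; only the length and port range come from the
      # text above.

      import socket
      import struct

      PACKET_LEN = 118

      def send_view_packet(sock, peer_addr, channel_id):
          payload = struct.pack("!I", channel_id)
          packet = payload + b"\x00" * (PACKET_LEN - len(payload))
          sock.sendto(packet, peer_addr)

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      send_view_packet(sock, ("192.0.2.1", 13500), channel_id=42)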

3.2.  Tree-based P2P streaming applications

   In tree-based P2P streaming applications peers self-organize in a
   tree-shape overlay network, where peers do not ask for a specific
   content chunk, but simply receive it from their so called "parent"
   node.  Such content delivery model is denoted as push-based.
   Receiving peers are denoted as children, whereas sending nodes are
   denoted as parents.  Overhead to maintain the overlay topology is
   usually lower for tree-based streaming applications than for
   mesh-based streaming applications, whereas performance in terms of
   scalability and delay is usually higher.  On the other side, the
   greatest drawback of this type of application lies in the fact
   that each node depends on one single node, its father in the
   overlay tree, to receive the streamed content.  Thus, tree-based
   streaming applications suffer from the peer churn phenomenon more
   than mesh-based ones.

3.2.1.  End System Multicast (ESM)

   Even though the End System Multicast (ESM) project has ended by
   now and the ESM infrastructure is not being currently implemented
   anywhere, we decided to include it in this survey for a twofold
   reason.  First of all, it was probably the first and most
   significant research work proposing the possibility of
   implementing multicast functionality at end hosts in a P2P way.
   Secondly, the ESM research group at Carnegie Mellon University
   developed the world's first P2P live streaming system, and some
   members later founded the Conviva [conviva] live platform.

   The main property of ESM is that it constructs the multicast tree
   in a two-step process.  The first step aims at the construction of
   a mesh among participating peers, whereas the second step aims at
   the construction of data delivery trees rooted at the stream
   source.  Therefore a peer participates in two types of topology
   management structures: a control structure that guarantees peers
   are always connected in a mesh, and a data delivery structure that
   guarantees data gets delivered in an overlay multicast tree.

   There exist two versions of ESM.

   The first version of ESM architecture [ESM1] was conceived for
   small scale multi-source conferencing applications.  Regarding the
   mesh construction phase, when a member wants to join the group, an
   out-of-band bootstrap mechanism provides the new member with a
   list of some group members.  The new member randomly selects a few
   group members as neighbors.  The number of selected neighbors does
   not exceed a given bound, which reflects the bandwidth of the
   peer's connection to the Internet.  Each peer periodically emits a
   refresh message with monotonically increasing sequence number,
   which is propagated across the mesh in such a way that each peer
   can maintain a list of all the other peers in the system.  When a
   peer leaves, either it notifies its neighbors and the information
   is propagated across the mesh to all participating peers, or peer
   neighbors detect the condition of abrupt departure and propagate
   it through the mesh.  To improve mesh/tree quality, on the one
   side peers constantly and randomly probe each other to add new
   links; on the other side, peers continually monitor existing links
   to drop the ones that are not perceived as good-quality links.
   This is done thanks to the evaluation of a utility function and a
   cost function, which are conceived to guarantee that the shortest
   overlay delay between any pair of peers is comparable to the
   unicast delay among them.  Regarding the multicast tree
   construction phase, peers run a distance-vector protocol on top of
   the mesh and use latency as routing metric.  In this way, data
   delivery trees may be constructed from the reverse shortest path
   between source and recipients.
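
   The mesh membership maintenance of this first version can be
   sketched as follows; the data structures are assumptions made for
   illustration, and the actual propagation of messages across the
   mesh is not shown.

      # Sketch of ESM v1 membership bookkeeping.  members maps a
      # peer id to the highest refresh sequence number heard for it;
      # names and structures are illustrative assumptions.

      def on_refresh(members, peer_id, seq):
          # Accept a refresh only if newer than what we recorded,
          # then keep propagating it across the mesh (not shown).
          if seq > members.get(peer_id, -1):
              members[peer_id] = seq
              return True    # forward to mesh neighbors
          return False       # stale refresh, drop it

      def on_leave(members, peer_id):
          # Explicit leave: forget the peer and propagate the news.
          members.pop(peer_id, None)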

   The second and subsequent version of ESM architecture [ESM2] was
   conceived for an operational large scale single-source Internet
   broadcast system.  As regards the mesh construction phase, a node
   joins the system by contacting the source and retrieving a random
   list of already connected nodes.  Information on active
   participating peers is maintained thanks to a gossip protocol:
   each peer periodically advertises to a randomly selected neighbor
   a subset of nodes it knows and the last timestamps it has heard
   for each known node.

   The main difference with the first version is that the second
   version constructs and maintains the data delivery tree in a
   completely distributed manner according to the following criteria:
   i) each node maintains a degree bound on the maximum number of
   children it can accept depending on its uplink bandwidth, ii) the
   tree is optimized mainly for bandwidth and secondarily for delay.
   To this end, a parent selection algorithm allows identifying among
   the neighbors the one that guarantees the best performance in
   terms of throughput and delay.  The same algorithm is also applied
   either if a parent leaves the system or if a node is experiencing
   poor performance (in terms of both bandwidth and packet loss).  As
   loop prevention mechanism, each node keeps also the information
   about the hosts in the path between the source and its parent
   node.
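
   The parent selection algorithm might be approximated by the
   following sketch, which scores candidates mainly by throughput and
   secondarily by delay, skips saturated candidates, and applies the
   loop check through the candidate's path to the source.  All field
   names are invented for illustration.

      # Sketch of ESM v2 parent selection: optimize mainly for
      # bandwidth, secondarily for delay; respect degree bounds and
      # avoid loops.  Every name here is an assumption.

      def choose_parent(my_id, candidates):
          # candidates: dicts with keys 'id', 'throughput', 'delay',
          # 'children', 'degree_bound' and 'source_path'.
          eligible = [c for c in candidates
                      if c["children"] < c["degree_bound"]
                      and my_id not in c["source_path"]]
          if not eligible:
              return None
          best = max(eligible,
                     key=lambda c: (c["throughput"], -c["delay"]))
          return best["id"]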

   This second ESM prototype is also able to cope with receiver
   heterogeneity and presence of NAT/firewalls.  In more detail, the
   audio stream is kept separated from the video stream, and multiple
   bit-rate video streams are encoded at the source and broadcast in
   parallel through the overlay tree.  Audio is always prioritized
   over video streams, and lower quality video is always prioritized
   over high quality video.  In this way, the system can dynamically
   select the most suitable video stream according to receiver
   bandwidth and network congestion level.  Moreover, in order to
   cope with the presence of hosts behind NAT/firewalls, the tree is
   structured in such a way that public hosts use hosts behind
   NAT/firewalls as parents.

3.3.  Hybrid P2P streaming applications

   This type of applications aims at integrating the main advantages
   of mesh-based and tree-based approaches.  To this end, the overlay
   topology is a mixed mesh-tree, and the content delivery model is
   push-pull.

3.3.1.  New Coolstreaming

   Coolstreaming, first released in summer 2004 with a mesh-based
   structure, arguably represented the first successful large-scale
   P2P live streaming system.  Nevertheless, it suffers from poor
   delay performance and high overhead associated with each video
   block transmission.  In the attempt of overcoming such
   limitations, New Coolstreaming [NEWCOOLStreaming] adopts a hybrid
   mesh-tree overlay structure and a hybrid pull-push content
   delivery mechanism.

   Figure 5 illustrates the New Coolstreaming architecture.
                   ------------------------------
                  |            +---------+      |
                  |            | Tracker |      |
                  |            +---------+      |
                  |                  |          |
                  |                  |          |
                  |   +---------------------+   |
                  |   |    Content server   |   |
                  |   +---------------------+   |
                  |------------------------------
                        /                     \
                       /                       \
                      /                         \
                     /                           \
               +---------+                   +---------+
               |  Peer1  |                   |  Peer2  |
               +---------+                   +---------+
                /      \                       /      \
               /        \                     /        \
              /          \                   /          \
         +---------+  +---------+     +---------+  +---------+
         |  Peer2  |  |  Peer3  |     |  Peer1  |  |  Peer3  |
         +---------+  +---------+     +---------+  +---------+

              Figure 5, New Coolstreaming Architecture

   The video stream is divided into equal-size blocks or chunks, which
   are assigned with a sequence number to implicitly define the playback
   order in the stream.  Video stream is subdivided into multiple sub-
   streams without any coding, so that each node can retrieve any sub-
   stream independently from different parent nodes.  This consequently
   reduces the impact on content delivery due to a parent departure or
   failure.  The details of hybrid push-pull content delivery scheme are
   as follows:

      a node first subscribes to a sub-stream by connecting to one of
      its partners via a single (pull) request in the buffer map; the
      requested partner becomes the parent node.  The node can
      subscribe to more sub-streams from its partners in this way to
      obtain higher play quality;

      the selected parent node will continue pushing all blocks of the
      sub-stream to the requesting node.

   This not only reduces the overhead associated with each video block
   transfer, but more importantly it significantly reduces the delay in
   retrieving video content.

   Video content is processed for ease of delivery, retrieval, storage
   and play out.  To manage content delivery, a video stream is divided
   into blocks with equal size, each of which is assigned a sequence
   number to represent its playback order in the stream.  Each block is
   further divided into K sub-blocks and the set of i-th sub-blocks
   of all blocks constitutes the i-th sub-stream of the video stream,
   where i ranges from 1 to K.  To retrieve video
   content, a node receives at most K distinct sub-streams from its
   parent nodes.  To store retrieved sub-streams, a node uses a double
   buffering scheme having a synchronization buffer and a cache buffer.
   The synchronization buffer stores the received sub-blocks of each
   sub-stream according to the associated block sequence number of the
   video stream.  The cache buffer then picks up the sub-blocks
   according to the associated sub-stream index of each ordered block.
   To advertise the availability of the latest block of different sub-
   streams in its buffer, a node uses a Buffer Map which is represented
   by two vectors of K elements each.  Each entry of the first vector
   indicates the block sequence number of the latest received sub-
   stream, and each bit entry of the second vector if set indicates the
   block sequence index of the sub-stream that is being requested.
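
   The two-vector Buffer Map can be sketched as follows; the class
   and method names are invented, and only the two K-element vectors
   and their meaning are taken from the description above.

      # Sketch of the New Coolstreaming Buffer Map: one vector with
      # the latest block sequence number received on each of the K
      # sub-streams, and one bit vector marking the sub-streams
      # requested from a partner.  Names are assumptions.

      class BufferMap:
          def __init__(self, k):
              self.latest_seq = [0] * k  # newest block, per sub-stream
              self.requested = [0] * k   # 1 = sub-stream subscribed

          def on_sub_block(self, i, block_seq):
              # Record the sub-block of block block_seq belonging to
              # sub-stream i (1 <= i <= K, stored 0-based).
              if block_seq > self.latest_seq[i - 1]:
                  self.latest_seq[i - 1] = block_seq

          def subscribe(self, i):
              self.requested[i - 1] = 1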

   For data delivery, a node uses a hybrid push and pull scheme with
   randomly selected partners.  A node having requested one or more
   distinct sub-streams from a partner as indicated in its first Buffer
   Map will continue to receive the sub-streams of all subsequent blocks
   from the same partner until future conditions cause the partner to do
   otherwise.  Moreover, users retrieve video indirectly from the source
   through a number of strategically located servers.

   To keep the parent-children relationship above a certain level of
   quality, each node constantly monitors the status of the on-going
   sub-stream reception and re-selects parents according to sub-stream
   availability patterns.  Specifically, if a node observes that the
   block sequence number of the sub-stream of a parent is much smaller
   than any of its other partners by a predetermined amount, the node
   then concludes that the parent is lagging sufficiently behind and
   needs to be replaced.  Furthermore, a node also evaluates the
   maximum and minimum of the block sequence numbers in its
   synchronization buffer to determine if any parent is lagging
   behind the rest of its parents and thus needs also to be replaced.
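
   The two re-selection triggers just described can be sketched as
   follows; the threshold value and all names are assumptions made
   for illustration.

      # Sketch of parent lag detection.  LAG is the predetermined
      # amount by which a parent may trail its best peer before
      # being replaced; the value is invented for illustration.

      LAG = 20

      def lagging_parents(latest_seq_per_parent):
          # latest_seq_per_parent: parent id -> newest block sequence
          # number received on the sub-stream served by that parent,
          # as seen in the synchronization buffer.
          newest = max(latest_seq_per_parent.values())
          return [p for p, seq in latest_seq_per_parent.items()
                  if newest - seq > LAG]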

4.  Security Considerations

   This document does not raise security issues.

5.  Author List

   The authors of this document are listed as below.

      Hui Zhang, NEC Labs America.

      Jun Lei, University of Goettingen.

      Gonzalo Camarillo, Ericsson.

      Yong Liu, Polytechnic University.

      Delfin Montuno, Huawei.

      Lei Xie, Huawei.

      Shihui Duan, CATR.

6.  Acknowledgments

   We would like to acknowledge Jiang Xingfeng for providing good
   ideas for this document.

7.  Informative References

   [CNN] CNN web site, www.cnn.com

   [PPLive] PPLive web site, www.pplive.com

   [P2PIPTVMEA] Silverston, Thomas, et al., "Measuring P2P IPTV
   Systems", June 2007.

   [Zattoo] Zattoo web site, http://zattoo.com/

   [PPStream] PPStream web site, www.ppstream.com

   [SopCast] SopCast web site, http://www.sopcast.com/

   [tribler] Tribler Protocol Specification, January 2009, on line
   available at http://svn.tribler.org/bt2-design/proto-spec-unified/
   trunk/proto-spec-current.pdf

   [QQLive] QQLive web site, http://v.qq.com

   [QQLivePaper] Liju Feng, et al., "Research on active monitoring
   based QQLive real-time information Acquisition System", 2009.

   [conviva] Conviva web site, http://www.conviva.com

   [ESM1] Chu, Yang-hua, et al., "A Case for End System Multicast",
   June 2000. (http://esm.cs.cmu.edu/technology/papers/
   Sigmetrics.CaseForESM.2000.pdf)

   [ESM2] Chu, Yang-hua, et al., "Early Experience with an Internet
   Broadcast System Based on Overlay Multicast", June 2004. (http://
   static.usenix.org/events/usenix04/tech/general/full_papers/chu/
   chu.pdf)

   [NEWCOOLStreaming] Li, Bo, et al., "Inside the New Coolstreaming:
   Principles, Measurements and Performance Implications", April
   2008.

Authors' Addresses

   Gu Yingjie (editor)
   Huawei
   No.101 Software Avenue
   Nanjing, Jiangsu Province
   Nanjing  210012
   P.R.China

   Phone: +86-25-56624760
   Fax:   +86-25-56624702
   Email: guyingjie@huawei.com

   Zong Ning (editor)
   Huawei
   No.101 Software Avenue
   Nanjing, Jiangsu Province
   Nanjing  210012
   P.R.China

   Phone: +86-25-56624760
   Fax:   +86-25-56624702
   Email: zongning@huawei.com

   Zhang Yunfei
   China Mobile

   Email: zhangyunfei@chinamobile.com

   Francesca Lo Piccolo
   Cisco

   Email: flopicco@cisco.com

   Duan Shihui
   CATR
   No.52 HuaYuan BeiLu
   Beijing  100191
   P.R.China

   Phone: +86-10-62300068
   Email: duanshihui@catr.cn