Network Working Group                                          M. Tuexen
INTERNET DRAFT                                                Siemens AG
                                                                  Q. Xie
                                                                Motorola
                                                              R. Stewart
                                                                 E. Lear
                                                                M. Shore
                                                                   Cisco
                                                                  L. Ong
                                                    Point Reyes Networks
                                                             J. Loughney
                                                             M. Stillman
                                                                   Nokia
Expires August 27, 2001                                February 27, 2001

                Requirements for Reliable Server Pooling
                   <draft-ietf-rserpool-reqts-01.txt>

Status of this Memo

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of [RFC2026].

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups. Note that other groups
may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Abstract

The goal is to develop an architecture and protocols for the management
and operation of server pools supporting highly reliable applications,
and for client access mechanisms to a server pool.

This document defines a basic set of requirements for the management of
and access to reliable server pools, including requirements from a
variety of applications, building blocks and interfaces, different
styles of pooling, security requirements, and performance requirements
such as failover times and coping with heterogeneous latencies.
Important requirements of this architecture are

     -    network fault tolerance,

     -    highly available services,

     -    resistance against malicious attacks,

     -    and scalability.

A comparison is made to existing protocols and solutions to the problem
space. A proposed architecture for fulfilling these requirements is
presented and finally illustrated by examples.

Table of Contents

1. Introduction  . . . . . . . . . . . . . . . . . . . . . . . . . .   3
1.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . .   3
1.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . .   3
1.3. Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . .   4
2. Requirements  . . . . . . . . . . . . . . . . . . . . . . . . . .   4
2.1. Communication Model . . . . . . . . . . . . . . . . . . . . . .   4
2.2. Processing Power  . . . . . . . . . . . . . . . . . . . . . . .   5
2.3. Transport Protocol  . . . . . . . . . . . . . . . . . . . . . .   5
2.4. Support of RSerPool Unaware Clients  . . . . . . . . . . . . . .   5
2.5. Registering and Deregistering  . . . . . . . . . . . . . . . . .   5
2.6. Server Selection . . . . . . . . . . . . . . . . . . . . . . . .   5
2.7. Timing Requirements  . . . . . . . . . . . . . . . . . . . . . .   6
2.8. Failover Support  . . . . . . . . . . . . . . . . . . . . . . .   6
2.9. Robustness  . . . . . . . . . . . . . . . . . . . . . . . . . .   6
2.10. Naming . . . . . . . . . . . . . . . . . . . . . . . . . . . .   6
2.11. Scalability  . . . . . . . . . . . . . . . . . . . . . . . . .   7
2.12. Security Requirements  . . . . . . . . . . . . . . . . . . . .   7
2.12.1. General  . . . . . . . . . . . . . . . . . . . . . . . . . .   7
2.12.2. Name Space Services  . . . . . . . . . . . . . . . . . . . .   7
2.12.3. Security State . . . . . . . . . . . . . . . . . . . . . . .   7
3. Relation to Other Solutions . . . . . . . . . . . . . . . . . . .   8
3.1. CORBA . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   8
3.2. DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   8
3.2.1. Name/Address Resolution . . . . . . . . . . . . . . . . . . .   9
3.2.2. Administration of DNS . . . . . . . . . . . . . . . . . . . .   9
3.2.3. Dynamic Monitoring of Server Status . . . . . . . . . . . . .  10
3.2.4. Proxy Hiding  . . . . . . . . . . . . . . . . . . . . . . . .  10
3.2.5. Scalability . . . . . . . . . . . . . . . . . . . . . . . . .  10
3.3. Service Location Protocol . . . . . . . . . . . . . . . . . . .  10
4. Reliable Server Pooling Architecture  . . . . . . . . . . . . . .  11
4.1. Common RSerPool Functional Areas  . . . . . . . . . . . . . . .  12
4.2. RSerPool Protocol Overview  . . . . . . . . . . . . . . . . . .  12
4.3. Typical Interactions between RSerPool Components  . . . . . . .  13
5. Examples  . . . . . . . . . . . . . . . . . . . . . . . . . . . .  15
5.1. Two File Transfer Examples  . . . . . . . . . . . . . . . . . .  15
5.1.1. The RSerPool Aware Client . . . . . . . . . . . . . . . . . .  16
5.1.2. The RSerPool Unaware Client . . . . . . . . . . . . . . . . .  17
5.2.1. Decomposed GWC and GK Scenario  . . . . . . . . . . . . . . .  18
5.2.2. Collocated GWC and GK Scenario  . . . . . . . . . . . . . . .  20
6. Acknowledgements  . . . . . . . . . . . . . . . . . . . . . . . .  21
7. References  . . . . . . . . . . . . . . . . . . . . . . . . . . .  21
8. Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  22

1.  Introduction

1.1.  Overview

The Internet is always on. Many users expect services to be always
available; many businesses depend on connectivity 24 hours a day, 7 days
a week, 365 days a year. To fulfill this, many proprietary and operating
system dependent solutions have been developed to provide highly
reliable and highly available servers.

This document defines requirements for reliable server pooling and a
proposed architecture, which can be used to provide highly available
services. The way this is achieved is by using servers grouped into
pools.  A client accessing a server pool can therefore use any of the
servers in the pool, taking into account the server pool policy.

Highly available services also put the same high reliability
requirements upon the transport layer protocol beneath RSerPool - it
must provide strong survivability in the face of network component
failures.

Supporting real time applications is another main focus of RSerPool
which leads to requirements on the processing time needed.

Scalability is another important requirement.

RSerPool introduces new security vulnerabilities into existing
applications, both in the pool formation and pool member selection
process and in the failover process.  Therefore, during the protocol
development process it will be necessary to catalogue the threats to
RSerPool and identify appropriate responses to those threats.

1.2.  Terminology

This document uses the following terms:

     Operation scope:
          The part of the network visible to pool users by a specific
          instance of the reliable server pooling protocols.

     Pool (or server pool):
          A collection of servers providing the same application
          functionality.

     Pool handle (or pool name):
          A logical pointer to a pool. Each server pool will be
          identifiable in the operation scope of the system by a unique
          pool handle or "name".

     Pool element:
          A server entity having registered to a pool.

     Pool user:
          A client which uses the service provided by a server pool.

     Pool element handle (or endpoint handle):
          A logical pointer to a particular pool element in a pool,
          consisting of the name of the pool and a destination transport
          address of the pool element.

     Name space:
          A cohesive structure of pool names and relations that may be
          queried by an internal or external agent.

     Name server:
          The entity which is responsible for managing and maintaining
          the name space within the RSerPool operation scope.

1.3.  Abbreviations

     ASAP: Aggregate Server Access Protocol

     ENRP: Endpoint Name Resolution Protocol

     DPE:  Distributed Processing Environment

     PE:   Pool element

     PU:   Pool user

     SCTP: Stream Control Transmission Protocol

     SLP:  Service Location Protocol

     TCP:  Transmission Control Protocol

2.  Requirements

2.1.  Communication Model

The general architecture should be based on a peer-to-peer model.
However, the binding should be based on a client-server model.

2.2.  Processing Power

It should be possible to use the protocol stack in small devices, like
handheld wireless devices. The solution must scale to devices with
widely differing processing power.

2.3.  Transport Protocol

The protocols used for the pool handling should not cause network
congestion. This means that they should not generate heavy traffic, even
in case of failures, and have to use flow control and congestion
avoidance algorithms which are interoperable with currently deployed
techniques, especially the flow control of TCP [RFC793] and SCTP
[RFC2960]. Therefore, for large pools, only a subset of all possible IP-
addresses are returned by the name servers.

The architecture should not rely on multicast capabilities of the
underlying layer. Nevertheless, it can make use of it if multicast
capabilities are available.

Network failures have to be handled and concealed from the application
layer as much as possible by the transport protocol. This means that the
underlying transport protocol must provide a strong network failure
handling capability on top of an acknowledged error-free non-duplicated
data delivery service. Therefore SCTP is the required transport protocol
for RSerPool.

2.4.  Support of RSerPool Unaware Clients

It is expected that there will be a transition phase with some systems
supporting the RSerPool architecture and some not. To make this
transition as seamless as possible, it should be possible for hosts not
supporting this architecture to also use the new pooling services via
some mechanism.

2.5.  Registering and Deregistering

Another important requirement is that servers should be able to register
to (become PEs) and deregister from a server pool transparently without
an interruption in service.

Servers should be able to register in multiple server pools which may
belong to different namespaces.
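The dynamic registration behaviour required above can be sketched as
follows. This is a purely illustrative Python sketch: the NameServer
class, its methods, and the pool handles are invented for this example
and are not part of the ASAP or ENRP protocol definitions.

```python
# Illustrative sketch of dynamic pool registration; all names are
# invented for this example.

class NameServer:
    def __init__(self):
        # pool handle -> set of pool element transport addresses
        self.pools = {}

    def register(self, pool_handle, address):
        # A server becomes a PE by registering under a pool handle;
        # the pool is created on first registration, so no existing
        # service is interrupted.
        self.pools.setdefault(pool_handle, set()).add(address)

    def deregister(self, pool_handle, address):
        # A PE leaves the pool while the remaining PEs keep serving;
        # an empty pool disappears from the name space.
        pool = self.pools.get(pool_handle)
        if pool is not None:
            pool.discard(address)
            if not pool:
                del self.pools[pool_handle]

ns = NameServer()
# A server may register in multiple pools.
ns.register("echo-pool", ("10.0.0.1", 7))
ns.register("file-pool", ("10.0.0.1", 21))
ns.register("echo-pool", ("10.0.0.2", 7))
ns.deregister("echo-pool", ("10.0.0.1", 7))
```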

2.6.  Server Selection

The RSerPool mechanisms must be able to support different server
selection mechanisms. These are called server pool policies.

Examples of server pool policies are:

     -    Round Robin

     -    Least used

     -    Most used

The set of supported policies must be extensible in the sense that new
policies can be added as required.

There must be a way for the client to provide information to the name
server about the pool elements.

The name servers should be extensible using a plug-in architecture.
These plug-ins would provide a more refined server selection by the name
servers using additional information provided by clients as hints.

For some applications it is important that a client repeatedly connects
to the same server in a pool if possible, i.e., if that server is still
alive. This feature should be supported through the use of pool
element handles.
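The extensibility requirement can be illustrated with a small plug-in
sketch in which new selection policies register against a common
interface. The registry, the decorator, and the load field (standing in
for a client-provided hint) are all hypothetical; round robin and least
used are the example policies named above.

```python
import itertools

# Hypothetical pluggable server selection policies; the registry-based
# plug-in style mirrors the extensibility requirement above.

POLICIES = {}

def policy(name):
    # Register a selection function under a policy name.
    def register(fn):
        POLICIES[name] = fn
        return fn
    return register

@policy("round-robin")
def round_robin(pool, state):
    # Cycle through the PEs of the pool in turn.
    counter = state.setdefault("rr", itertools.count())
    return pool[next(counter) % len(pool)]

@policy("least-used")
def least_used(pool, state):
    # Pick the PE reporting the lowest load.
    return min(pool, key=lambda pe: pe["load"])

pool = [{"addr": "10.0.0.1", "load": 0.7},
        {"addr": "10.0.0.2", "load": 0.2}]
state = {}
first = POLICIES["round-robin"](pool, state)
second = POLICIES["round-robin"](pool, state)
lightest = POLICIES["least-used"](pool, state)
```

Adding a new policy then only requires registering one more function,
without changing the selection machinery.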

2.7.  Timing Requirements

A server pool can consist of a large number (up to 500) of pool
elements.  This upper limit is important since the system will be used
for real time applications. So handling of name resolution has to be
fast.

Another consequence of the real time requirement is the supervision of
the pool elements. The name resolution should not result in a pool
element which is not operational.

2.8.  Failover Support

The RSerPool architecture must be able to detect server failure quickly
and be able to perform failover without service interruption.

2.9.  Robustness

The solution must allow itself to be implemented and deployed in such a
way that there is no single point of failure in the system.

2.10.  Naming

Server pools are identified by pool handles. These pool handles are only
valid inside the operation scope. Interoperability between different
namespaces has to be provided by other mechanisms.

2.11.  Scalability

The RSerPool architecture should not require a limitation on the number
of server pools or on the number of pool users.

2.12.  Security Requirements

2.12.1.  General

     -    The scaling characteristics of the security architecture
          should be compatible with those given previously.

     -    The security architecture should support hosts having a wide
          range of processing powers.

2.12.2.  Name Space Services

     -    It must not be possible for an attacker to falsely register as
          a pool element with the name server either by masquerading as
          another pool element or by registering in violation of local
          authorization policy.

     -    It must not be possible for an attacker to deregister a server
          which has successfully registered with the name server.

     -    It must not be possible for an attacker to spoof the response
          to a query to the name server.

     -    It must be possible to prohibit unauthorized queries to the
          name server.

     -    It must be possible to protect the privacy of queries to the
          name server and responses to those queries from the name
          server.

     -    Communication among name servers must be afforded the same
          protections as communication between clients and name servers.

2.12.3.  Security State

The security context of an application is a subset of the overall
context, and context or state sharing is explicitly out-of-scope for
RSerPool.  Because RSerPool does introduce new security vulnerabilities
to existing applications, application designers employing RSerPool
should be aware of problems inherent in failing over secured
connections.  Security services necessarily retain some state and this
state may have to be moved or re-established. Examples of this state
include authentication state or retained ciphertext for ciphers
operating in cipher block chaining (CBC) or cipher feedback (CFB) mode.
These problems must be addressed by the application or by future work on
RSerPool.

3.  Relation to Other Solutions

This section is intended to cover some existing solutions which overlap
somewhat with the problem space of RSerPool.

3.1.  CORBA

Often referred to as a Distributed Processing Environment (DPE), CORBA
was mainly designed to provide location transparency for distributed
applications. However, the following limitations may exist when applying
CORBA to the design of real-time fault-tolerant systems:

     1.   CORBA has not been focused on high availability. The recent
          development of a high availability version of CORBA by OMG may
          be a step in the right direction towards improving this
          situation. Nevertheless, the maturity, implementability, and
          real-time performance of the design are yet to be proven.

     2.   CORBA's distribution model encourages an object-based view,
          i.e., each communication endpoint is normally an object. This
          level of granularity is likely to be somewhat inefficient for
          designing real-time fault-tolerant applications.

     3.   CORBA, in general, has a large signature that makes the use
          of it a challenge in real-time environments. Small devices
          with limited memory and CPU resources (e.g., H.323 or SIP
          terminals) will find CORBA hard to fit in.

     4.   CORBA has lacked easily usable support for the asynchronous
          communication model, and this may be an issue in many
          applications. An apparently improved API for asynchronous
          communication has been added to the CORBA standards recently,
          but many, if not most, CORBA implementations do not yet
          support it. There is as yet insufficient user experience with
          it to draw conclusions regarding this feature's usability.

3.2.  DNS

This section will answer the question why we decided DNS is not
appropriate as the sole solution for RSerPool. In addition, it
highlights specific technical differences between RSerPool and DNS.

During the 49th IETF plenary meeting on December 13, 2000, Randy Bush
presented a talk entitled "The DNS Today: Are we Overloading the
Saddlebags on an Old Horse?" This talk underlined the issue that DNS is
currently overloaded with extraneous tasks and has the potential to
break down entirely due to a growing number of feature enhancements.

One requirement on any solution proposed by RSerPool would be to avoid
any additional burdens upon DNS. The solution should provide a mechanism
separate from DNS so that those applications that need the added
reliability can implement the RSerPool protocols and they alone will
sustain the cost/benefit. Interworking between DNS and RSerPool will be
considered so that additional burdens to DNS will not be added.

3.2.1.  Name/Address Resolution

The technical requirement for DNS name/address resolution is that given
a name, find a host associated with this name and return its IP
address(es). In other words, in DNS we have the following mapping:

     -    name -> a host machine

     -    address(es) -> IP address(es) to reach that (hardware) host
          machine

The technical requirement for RSerPool name/address resolution is that
given a name (or pool handle), find a server pool associated with this
name and return a list of transport addresses (i.e., IP addresses plus
port numbers) for reaching a set of currently operational servers inside
the pool. In other words, in RSerPool we have the following mapping:

     -    name -> a handle to a server pool which is often distributed
          across multiple host machines

     -    address -> IP addresses and port numbers to reach a set of
          functionally identical (software) server entities.
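The difference in granularity between the two mappings can be contrasted
in a short sketch. Both tables and the resolve_pool function below are
hypothetical illustrations of the mapping described above, not protocol
data structures from ASAP or ENRP.

```python
# DNS-style mapping: a name resolves to the IP address(es) of one host.
dns = {
    "server.example.com": ["192.0.2.1"],
}

# RSerPool-style mapping: a pool handle resolves to transport addresses
# (IP address plus port number) of functionally identical server
# entities, which may live on different hosts.
name_space = {
    "file-transfer-pool": [("192.0.2.1", 9000),
                           ("198.51.100.7", 9000),
                           ("203.0.113.5", 9001)],
}

def resolve_pool(handle):
    # Return transport addresses of currently operational servers; for
    # large pools, only a subset of all possible addresses would be
    # returned (compare the congestion requirement in Section 2.3).
    return name_space.get(handle, [])[:2]
```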

3.2.2.  Administration of DNS

DNS is designed such that the system administrator makes administrative
changes to DNS records when a host is added to or deleted from the
network. This is a static approach that is controlled through a person
whose responsibility is to keep the DNS zone (specific piece of the
database) up to date.

RSerPool is designed for software server entities to register themselves
with a name server dynamically. They can also de-register themselves for
purposes of preventative maintenance or can be de-registered by a name
server that believes the server entity is no longer operational. This is
a dynamic approach, which is coordinated through and among RSerPool name
servers.

3.2.3.  Dynamic Monitoring of Server Status

DNS does no monitoring of host status. It is passive in the sense that
it does not monitor or store information on the state of the host such
as whether the host is up or down or what kind of load it is currently
experiencing.

RSerPool monitors the state of each server entity on various hosts on a
continual basis and will collect several state variables including
up/down state and current load. If a server is no longer operational,
eventually it will be dropped from the list of available servers
maintained by the name server, so that subsequent application name
queries will not resolve to this server address.
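A minimal sketch of such monitoring, assuming a simple heartbeat-and-
timeout rule; the Monitor class and the 30 second timeout are invented
for illustration and are not taken from the RSerPool protocol drafts.

```python
import time

# Hypothetical liveness tracking by a name server: PEs refresh their
# registration periodically, and entries that miss the deadline are
# dropped so that name queries never resolve to a dead server.

TIMEOUT = 30.0  # seconds without a heartbeat before a PE is dropped

class Monitor:
    def __init__(self):
        self.last_seen = {}   # address -> timestamp of last heartbeat
        self.load = {}        # address -> last reported load

    def heartbeat(self, address, load, now=None):
        # Record a heartbeat together with the PE's current load.
        now = time.monotonic() if now is None else now
        self.last_seen[address] = now
        self.load[address] = load

    def operational(self, now=None):
        # Only PEs heard from within TIMEOUT are considered up.
        now = time.monotonic() if now is None else now
        return [a for a, t in self.last_seen.items() if now - t < TIMEOUT]

m = Monitor()
m.heartbeat(("10.0.0.1", 9000), load=0.4, now=0.0)
m.heartbeat(("10.0.0.2", 9000), load=0.1, now=20.0)
alive = m.operational(now=40.0)   # the first PE has timed out by now
```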

3.2.4.  Proxy Hiding

Proxy hiding is a huge issue for DNS global load balancing because a DNS
host stub resolver typically does not support recursion and therefore
the DNS server sees a query from the host's DNS server, and not the
host.

This is a huge problem because ISPs typically don't distribute their DNS
servers near the POPs. So you can end up load balancing a Seattle user
to the Virginia Web server because that's where the DNS server is
located. That means that DNS is pretty worthless as a global load
balancing mechanism.

3.2.5.  Scalability

The other issue is scaling and specifically, DNS TTL. This needs to be
of the order of the time scale to which you can respond with a load
change. Since stability requires smoothing of load metrics anyway, this
doesn't imply a tiny or zero TTL, particularly if IP failover is
available. But it does imply a smaller TTL than would otherwise be the
case (minutes instead of hours).

3.3.  Service Location Protocol

SLP is comprised of three components: user agents, service agents and
directory agents. User agents work on the user's behalf to contact a
service. The UA retrieves service information from service agents or
directory agents. A service agent works on behalf of one or more
services to advertise services. A directory agent collects service
advertisements.

The directory agent of SLP functions simply as a cache and is passive in
this regard. Also, the directory agent is optional and SLP can function
without it. It is incumbent upon the servers to update the cache as
necessary by reregistering. The directory server is not required in
small networks as the user agents can contact service agents directly
using multicast. User agents are encouraged to locate a directory at
regular intervals if they can't find one initially.

The most fundamental difference between SLP and RSerPool is that SLP is
service-oriented while RSerPool is communication-oriented. More
specifically, what SLP provides to its user is a mapping function from a
name of a service to the location of the service provider, in the form
of a URL string. Whether the service provider is reachable and how to
access it by the URL are out of the scope of SLP. Therefore, the
granularity of SLP operation is at application service level.

In contrast, what RSerPool provides to its user is a mapping function
from a communication destination name to a set of routable and reachable
transport addresses that leads to a group of distributed software server
entities registered under that name that collectively represent the
named communication destination. RSerPool also takes the responsibility
of reliably delivering a login request user message to one of these server entities.
What service(s) this group of servers are providing at the
          Pool name "File Transfer Pool".

     2)   PU(X)'s ASAP layer would not find application
level or whether the name in its local cache,
          I.e. it encounters group is just a "cache miss".

     3)   PU(X)'s ASAP layer would send component of an ASAP request to its "local"
          ENRP server to request application service
provider is out of the list scope of Pool Elements (PE's). The
          ASAP layer queues RSerPool. In other words, the data
granularity of RSerPool operation is at communication server entity
level.

Moreover, RSerPool is designed to be sent in local buffers until
          the ENRP server responds.

     4)   The ENRP server would return a list distributed fault-tolerant and
real time translation service. SLP does not state either of the three PE's A, B these as
design requirements and
          C thus does not attempt to the ASAP layer fulfill them. In
addition, SLP defines optional security features which support
authentication and integrity. These are mandatory to implement but
optional in PU(X).

     5)   PU(X)'s ASAP layer would now populate its local cache with terms of use. RSerPool has stronger security requirements.

The SLP directory agent does not support fault tolerance or robustness
in contrast to the the name mapping "File Transfer Pool" to servers which do support it.  The name
servers also monitor the returned list and,
          using state of the servers which are registered in
the pool but the SLP directory agents do not perform this function. SLP
uses multicast and in RSerPool multicast is optional.
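The difference in granularity can be sketched in code. The following
Python fragment is purely illustrative and not part of any
specification; the service names, pool name, and addresses are
invented for the example.

```python
# Illustrative sketch only: SLP maps a service name to ONE location
# (a URL string); RSerPool maps a name to the SET of transport
# addresses of all server entities registered under that name.

def slp_lookup(service_name: str) -> str:
    """SLP-style mapping: name -> one service location (URL).
    Whether that location is reachable is out of SLP's scope."""
    directory = {"service:ftp": "service:ftp://ftp.example.com:21"}
    return directory[service_name]

def rserpool_resolve(pool_name: str) -> list:
    """RSerPool-style mapping: name -> routable, reachable transport
    addresses of every pool element registered under the name."""
    name_space = {
        "File Transfer Pool": [("10.0.0.1", 9000),
                               ("10.0.0.2", 9000),
                               ("10.0.0.3", 9000)],
    }
    return name_space[pool_name]
```

The first function can only say where a service nominally lives; the
second returns enough alternatives for the caller to deliver a message
reliably to some member of the named pool.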

4.  Reliable Server Pooling Architecture

In this section, we discuss what a typical reliable server pool
architecture may look like.

4.1.  Common RSerPool Functional Areas

The following functional areas or components are likely to be present
in a typical RSerPool system architecture:

     -    A number of logical "Server Pools" to provide distinct
          application services.

          Each of those server pools will likely be composed of some
          number of "Pool Elements (PEs)" - which are application
          programs running on distributed host machines, collectively
          providing the desired application services via, for example,
          data sharing and/or load sharing.

          Each server pool will be identifiable in the operation scope
          of the system by a unique "name".

     -    Some "Pool Users (PUs)" which are the users of the
          application services provided by the various server pools.

     -    PEs may or may not be PUs, depending on whether or not they
          wish to access other pools in the operation scope of the
          system.

     -    A "Name Space" which contains all the defined names within
          the operation scope of the system.

     -    One or more "Name Servers" which carry out various
          maintenance functions (e.g., registration and
          de-registration, integrity checking) for the "Name Space".
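The relationship between these components can be modeled roughly as
follows. This is a hypothetical sketch only; the class and field names
are invented and carry no protocol meaning.

```python
# Illustrative data model: a name space maps unique pool names to the
# pool elements (PEs) registered under them.
from dataclasses import dataclass, field

@dataclass
class PoolElement:
    pe_id: str
    addresses: list          # transport addresses of this PE

@dataclass
class NameSpace:
    pools: dict = field(default_factory=dict)   # pool name -> [PEs]

    def register(self, pool_name, pe):
        """A name server adds a PE to the pool with that unique name."""
        self.pools.setdefault(pool_name, []).append(pe)

    def deregister(self, pool_name, pe_id):
        """Remove one PE; the pool keeps serving via the remaining PEs."""
        self.pools[pool_name] = [p for p in self.pools[pool_name]
                                 if p.pe_id != pe_id]

ns = NameSpace()
ns.register("File Transfer Pool",
            PoolElement("PE(1,A)", [("10.0.0.1", 9000)]))
ns.register("File Transfer Pool",
            PoolElement("PE(1,B)", [("10.0.0.2", 9000)]))
ns.deregister("File Transfer Pool", "PE(1,A)")
```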

4.2.  RSerPool Protocol Overview

The requested RSerPool features can be obtained with the help of two
protocols: ENRP (Endpoint Name Resolution Protocol) and ASAP (Aggregate
Server Access Protocol).

ENRP is designed to provide a fully distributed fault-tolerant
real-time translation service that maps a name to a set of transport
addresses pointing to a specific group of networked communication
endpoints registered under that name. ENRP employs a client-server
model with which an ENRP server will respond to the name translation
service requests from endpoint clients running on the same host or on
different hosts.

ASAP in conjunction with ENRP provides a fault tolerant data transfer
mechanism over IP networks. ASAP uses a name-based addressing model
which isolates a logical communication endpoint from its IP
address(es), thus effectively eliminating the binding between the
communication endpoint and its physical IP address(es), which normally
constitutes a single point of failure.

In addition, ASAP defines each logical communication destination as a
server pool, providing full transparent support for server-pooling and
load sharing. It also allows dynamic system scalability - members of a
server pool can be added or removed at any time without interrupting
the service.

The fault tolerant server pooling is gained by combining two parts,
namely ASAP and ENRP. ASAP provides the user interface for name to
address translation, load sharing management, and fault management.
ENRP defines the fault tolerant name translation service. The protocol
stack used is shown in figure 1.

               *********        ***********
               * PE/PU *        *ENRP Srvr*
               *********        ***********

               +-------+        +----+----+
  To other <-->| ASAP  |<------>|ASAP|ENRP| <---To Peer ENRP
  PE/PU        +-------+        +----+----+       Name Servers
               | SCTP  |        |  SCTP   |
               +-------+        +---------+
               |  IP   |        |   IP    |
               +-------+        +---------+
                    Figure 1: Typical protocol stack
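The name-based addressing model described above can be illustrated
with a short, hypothetical sketch: the sender addresses a logical pool
name rather than one IP address, so the failure of a single physical
address does not break delivery. Here "resolve" stands in for an ENRP
lookup and the "transports" table stands in for real SCTP
associations; none of these names come from the protocols themselves.

```python
# Illustrative only: deliver a message to a named destination by trying
# each transport address registered under the name, in order.

def send_to_name(pool_name, message, resolve, transports):
    """Try each PE address returned for the name until delivery
    succeeds, so no single IP address is a single point of failure."""
    for addr in resolve(pool_name):
        try:
            return transports[addr](message)   # deliver over this link
        except ConnectionError:
            continue                           # PE failed; try the next
    raise ConnectionError("no pool element reachable for " + pool_name)

# Two candidate PEs for the pool; the first one is down.
addresses = [("10.0.0.1", 9000), ("10.0.0.2", 9000)]

def down(message):
    raise ConnectionError("association lost")

def up(message):
    return "delivered: " + message

transports = {addresses[0]: down, addresses[1]: up}
result = send_to_name("File Transfer Pool", "login",
                      lambda name: addresses, transports)
```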

4.3.  Typical Interactions between RSerPool Components

The following drawing shows the typical RSerPool components and their
possible interactions with each other:

  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ~                                                  operation scope ~
  ~  .........................          .........................    ~
  ~  .    Server Pool 1      .          .    Server Pool 2      .    ~
  ~  .  +-------+ +-------+  .   (d)    .  +-------+ +-------+  .    ~
  ~  .  |PE(1,A)| |PE(1,C)|<-------------->|PE(2,B)| |PE(2,A)|<---+  ~
  ~  .  +-------+ +-------+  .          .  +-------+ +-------+  . |  ~
  ~  .      ^            ^   .          .      ^         ^      . |  ~
  ~  .      |      (a)   |   .          .      |         |      . |  ~
  ~  .      +----------+ |   .          .      |         |      . |  ~
  ~  .  +-------+      | |   .          .      |         |      . |  ~
  ~  .  |PE(1,B)|<---+ | |   .          .      |         |      . |  ~
  ~  .  +-------+    | | |   .          .      |         |      . |  ~
  ~  .      ^        | | |   .          .      |         |      . |  ~
  ~  .......|........|.|.|....          .......|.........|....... |  ~
  ~         |        | | |                     |         |        |  ~
  ~      (c)|     (a)| | |(a)               (a)|      (a)|     (c)|  ~
  ~         |        | | |                     |         |        |  ~
  ~         |        v v v                     v         v        |  ~
  ~         |     +++++++++++++++    (e)     +++++++++++++++      |  ~
  ~         |     + ENRP-Server +<---------->+ ENRP-Server +      |  ~
  ~         |     +++++++++++++++            +++++++++++++++      |  ~
  ~         |            ^                          ^             |  ~
  ~     *********        |                          |             |  ~
  ~     * PU(A) *<-------+                       (b)|             |  ~
  ~     *********   (b)                             |             |  ~
  ~                                                 v             |  ~
  ~         :::::::::::::::::      (f)      *****************     |  ~
  ~         : Other Clients :<------------->* Proxy/Gateway * <---+  ~
  ~         :::::::::::::::::               *****************        ~
  ~                                                                  ~
  ~                                                                  ~
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     Figure 2: RSerPool components and their possible interactions.

In figure 2 we can identify the following possible interactions:

     (a)  Server Pool Elements <-> ENRP Server: (ASAP)

          Each PE in a pool uses ASAP to register or de-register itself
          as well as to exchange other auxiliary information with the
          ENRP Server. The ENRP Server also uses ASAP to monitor the
          operational status of each PE in a pool.

     (b)  PU <-> ENRP Server: (ASAP)

          A PU normally uses ASAP to request the ENRP Server for a
          name-to-address translation service before the PU can send
          user messages addressed to a server pool by the pool's name.

     (c)  PU <-> PE: (ASAP)

          ASAP can be used to exchange some auxiliary information of the
          two parties before they engage in user data transfer.

     (d)  Server Pool <-> Server Pool: (ASAP)

          A PE in a server pool can become a PU to another pool when
          the PE tries to initiate communication with the other pool.
          In such a case, the interactions described in (b) and (c)
          above will apply.

     (e)  ENRP Server <-> ENRP Server: (ENRP)

          ENRP can be used to fulfill various Name Space operation,
          administration, and maintenance (OAM) functions.

     (f)  Other Clients <-> Proxy/Gateway: standard protocols

          The proxy/gateway enables clients ("other clients"), which are
          not RSerPool aware, to access services provided by an RSerPool
          based server pool. It should be noted that these
          proxies/gateways may become a single point of failure.
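The supervision part of interaction (a) can be pictured with a
minimal, purely illustrative sketch; the function names below are
invented and do not correspond to any protocol message.

```python
# Illustrative only: an ENRP server audits the operational status of
# the PEs in a pool and de-registers those that stop responding.

def audit_pool(pool, is_responsive):
    """Return (PEs that stay registered, PEs removed by the audit)."""
    alive = [pe for pe in pool if is_responsive(pe)]
    removed = [pe for pe in pool if pe not in alive]
    return alive, removed

# Example: PE(1,B) has stopped answering the monitoring probes.
alive, removed = audit_pool(["PE(1,A)", "PE(1,B)", "PE(1,C)"],
                            lambda pe: pe != "PE(1,B)")
```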

5.  Examples

In this section the basic concepts of ENRP and ASAP will be described.
First an RSerPool aware FTP client is considered. Then the interaction
of an RSerPool unaware client with the pool is given. Finally, a
telephony example is considered.

5.1.  Two File Transfer Examples

In this section we present two separate file transfer examples using
ENRP and ASAP. The examples demonstrate an ENRP/ASAP aware client and
a client that is using a Proxy or Gateway to perform the file
transfer. In both examples we will use an FTP [RFC959] model with some
modifications. The first example (the RSerPool aware one) will modify
FTP concepts so that the file transfer takes place over SCTP. In the
second example we will use TCP between the unaware client and the
Proxy. The Proxy itself will use the modified FTP with SCTP as
illustrated in the first example.

Please note that in these examples we do NOT follow FTP [RFC959]
precisely; we use FTP like concepts and attempt to adhere to the basic
FTP model. These examples use FTP for illustrative purposes. FTP was
chosen since many of the basic concepts are well known and should be
familiar to readers.

5.1.1.  The RSerPool Aware Client

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~                                                  operation scope ~
~  .........................                                       ~
~  . "File Transfer Pool"  .                                       ~
~  .  +-------+ +-------+  .                                       ~
~ +-> |PE(1,A)| |PE(1,C)|  .                                       ~
~ |.  +-------+ +-------+  .                                       ~
~ |.      ^            ^   .                                       ~
~ |.      +----------+ |   .                                       ~
~ |.  +-------+      | |   .                                       ~
~ |.  |PE(1,B)|<---+ | |   .                                       ~
~ |.  +-------+    | | |   .                                       ~
~ |.      ^        | | |   .                                       ~
~ |.......|........|.|.|....                                       ~
~ |  ASAP |    ASAP| | |ASAP                                       ~
~ |(d)    |(c)     | | |                                           ~
~ |       v        v v v                                           ~
~ |   *********   +++++++++++++++                                  ~
~ +---* PU(X) *   + ENRP-Server +                                  ~
~     *********   +++++++++++++++                                  ~
~         ^     ASAP     ^                                         ~
~         |     <-(b)    |                                         ~
~         +--------------+                                         ~
~               (a)->                                              ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           Figure 3: Architecture for RSerPool aware client.

To effect a file transfer the following steps would take place:

     (1)  The application in PU(X) would send a login request. PU(X)'s
          ASAP layer would send an ASAP request to its "local" ENRP
          server to request the list of pool elements (PE's) (using
          (a)). The pool handle to identify the pool would be "File
          Transfer Pool". The ASAP layer queues the data to be sent in
          local buffers until the ENRP server responds.

     (2)  The ENRP server would return a list of the three PE's
          PE(1,A), PE(1,B) and PE(1,C) to the ASAP layer in PU(X)
          (using (b)).

     (3)  The ASAP layer selects one of the PE's, for example PE(1,B).
          It transmits the login request and the other FTP control
          data, and finally starts the transmission of the requested
          files (using (c)). For this the multiple stream feature of
          SCTP could be used.

     (4)  If during the file transfer conversation PE(1,B) fails,
          assuming the PE's were sharing the state of the file
          transfer, a fail-over to PE(1,A) could be initiated. PE(1,A)
          would continue the transfer until complete (see (d)). In
          parallel a request would be made to ENRP to request a cache
          update for the server pool "File Transfer Pool" and a report
          would also be made that PE(1,B) is non-responsive (this would
          cause appropriate audits that may remove PE(1,B) from the
          pool if the ENRP servers had not already detected the
          failure) (using (a)).
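The steps above can be sketched as follows. This is an illustrative
model only: the ENRP query, the transfer function, and the failure
report are stand-ins for the real protocol exchanges, and all names
are invented.

```python
# Illustrative sketch of the client-side ASAP behavior in steps (1)-(4):
# cache miss -> ENRP query, select a PE, fail over and report on error.

class AsapLayer:
    def __init__(self, enrp_query):
        self.cache = {}              # pool handle -> list of PEs
        self.enrp_query = enrp_query

    def resolve(self, handle):
        if handle not in self.cache:             # cache miss: ask ENRP
            self.cache[handle] = self.enrp_query(handle)
        return self.cache[handle]

    def transfer(self, handle, start_transfer, report_failure):
        candidates = list(self.resolve(handle))
        while candidates:
            pe = candidates.pop(0)               # step (3): select a PE
            try:
                return start_transfer(pe)
            except ConnectionError:              # step (4): fail over
                report_failure(handle, pe)       # audit may drop the PE
        raise ConnectionError("all pool elements failed")

reports = []
layer = AsapLayer(lambda handle: ["PE(1,B)", "PE(1,A)", "PE(1,C)"])

def start_transfer(pe):
    if pe == "PE(1,B)":
        raise ConnectionError("PE failed mid-transfer")
    return "transfer complete on " + pe

result = layer.transfer("File Transfer Pool", start_transfer,
                        lambda handle, pe: reports.append((handle, pe)))
```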

5.1.2.  The RSerPool Unaware Client

In this example we investigate the use of a Proxy server, assuming the
same scenario as illustrated above.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~                                                  operation scope ~
~  .........................                                       ~
~  . "File Transfer Pool"  .                                       ~
~  .  +-------+ +-------+  .                                       ~
~  .  |PE(1,A)| |PE(1,C)|  .                                       ~
~  .  +-------+ +-------+  .                                       ~
~  .      ^            ^   .                                       ~
~  .      +----------+ |   .                                       ~
~  .  +-------+      | |   .                                       ~
~  .  |PE(1,B)|<---+ | |   .                                       ~
~  .  +-------+    | | |   .                                       ~
~  .      ^        | | |   .                                       ~
~  .......|........|.|.|....                                       ~
~         |        | | |                                           ~
~      (c)|    ASAP| | |ASAP                                       ~
~         |        | | |                                           ~
~         |        v v v                                           ~
~         |       +++++++++++++++          +++++++++++++++         ~
~         |       + ENRP-Server +<--ENRP-->+ ENRP-Server +         ~
~         |       +++++++++++++++          +++++++++++++++         ~
~         |                                ASAP   ^                ~
~         |          ASAP                      (b)|                ~
~         +---------------------------------+     |                ~
~                                           |     |                ~
~                                           v     v                ~
~         :::::::::::::::::     (e)->     *****************        ~
~         :   FTP Client  :<------------->* Proxy/Gateway *        ~
~         :::::::::::::::::     (f)       *****************        ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
          Figure 4: Architecture for RSerPool unaware client.

In this example the following steps will occur:

     (1)  The FTP client and the Proxy/Gateway are using the TCP-based
          ftp protocol. The client sends the login request to the
          proxy (using (e)).

     (2)  The proxy behaves like a client and performs the actions
          described under (1), (2) and (3) of the above description
          (using (a), (b) and (c)).

     (3)  The ftp communication continues and will be translated by
          the proxy into the RSerPool aware dialect. This interworking
          uses (f) and (c).

Note that in this example high availability is maintained between the
Proxy and the server pool, but a single point of failure exists
between the FTP client and the Proxy, i.e. the command TCP connection
and the one IP address it is using for commands.

3.  General Requirements

The general architecture should be based on a peer to peer model.
However, the binding should be based on a client server model.

It should be possible to use the protocol stack in small devices, like
cellular phones. Therefore it must be possible to implement the
protocols on clients with a large range of processing power.

Furthermore, it is expected that there will be a transition phase with
some systems supporting the rserpool architecture and some not. To
make this transition as seamless as possible it should be possible for
hosts not supporting this architecture to also use the new pooling
services via some mechanism.

Another important requirement is that servers should be able to enter
(become PEs) and leave server pools transparently without an
interruption in service. It must also be possible for ENRP servers to
integrate themselves into the name space system.

The protocols used for the pool handling should not cause network
congestion. This means that they should not generate heavy traffic,
even in case of failures, and have to use flow control and congestion
avoidance algorithms which are interoperable with currently deployed
techniques, especially the flow control of TCP [RFC793] and SCTP
[RFC2960]. Therefore, for large pools, only a subset of all possible
IP-addresses is returned by the system. There must be a way for the
client to provide information to the ENRP server about the pool
elements.

The architecture should not rely on multicast capabilities of the
underlying layer. Nevertheless, it can make use of them if multicast
capabilities are available.

4.  Namespaces and Pools

Services are provided to the clients through a pool of servers.
Clients will access these servers by name. The namespaces are flat.
Selection of a server in the pool will be performed on behalf of the
client. If more than one server registers under a name, this name
becomes the name of a server pool. The name resolution results in
access to one specific server out of a pool of servers. The selection
of the server is transparent to the client and is governed by the
server pool policy.

Servers are registered by name in a namespace to join a server pool.
There will be no globally unique namespace available, so multiple
independent namespaces must be supported.
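These naming requirements can be illustrated with a small sketch; the
namespace names, pool names, and host names below are invented, and
the flat dictionaries are only a stand-in for real name servers.

```python
# Illustrative only: one host registered in two independent namespaces,
# and under two different pool names within one of them.

namespaces = {
    "operator-A": {"Gateway Pool": ["host-1"],
                   "Backup Pool": ["host-1"]},
    "operator-B": {"Gateway Pool": ["host-1", "host-2"]},
}

def pools_of(host):
    """List the (namespace, pool) pairs in which a host is registered."""
    return [(ns, pool)
            for ns, pools in namespaces.items()
            for pool, members in pools.items()
            if host in members]
```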

Since it is necessary to support multiple namespaces, it should also
be possible for a host to refer to entities in namespaces the host
does not belong to.

It must also be possible for a host to be registered in more than one
namespace or using multiple names in one namespace.

A namespace can consist of a large number (up to 500) of pools. This
upper limit is important since the system will be used for real time
applications. So the handling of name resolution has to be fast.

5.2.  Telephony Signaling Example

This example shows the use of ASAP/RSerPool to support server pooling
for high availability of a telephony application such as a Voice over
IP Gateway Controller (GWC) and Gatekeeper (GK) services.

In this example, we show two different scenarios of deploying these
services using RSerPool in order to illustrate the flexibility of the
RSerPool architecture.

5.2.1.  Decomposed GWC and GK Scenario

In this scenario, both GWC and GK services are deployed as separate
pools with some number of PEs, as shown in the following diagram.
Each of the pools will register its unique pool handle (i.e. name)
with the ENRP Server. We also assume that there are a Signaling
Gateway (SG) and a Media Gateway (MG) present and both are RSerPool
aware.

                           ...................
                           .    Gateway      .
                           . Controller Pool .
    .................      .   +-------+     .
    .   Gatekeeper  .      .   |PE(2,A)|     .
    .     Pool      .      .   +-------+     .
    .   +-------+   .      .   +-------+     .
    .   |PE(1,A)|   .      .   |PE(2,B)|     .
    .   +-------+   .      .   +-------+     .
    .   +-------+   . (d)  .   +-------+     .
    .   |PE(1,B)|<------------>|PE(2,C)|<-------------+
    .   +-------+   .      .   +-------+     .        |
    .................      ........^..........        |
                                   |                  |
                                (c)|               (e)|
                                   |                  v
        +++++++++++++++        *********       *****************
        + ENRP-Server +        * SG(X) *       * Media Gateway *
        +++++++++++++++        *********       *****************
               ^                   ^
               |                   |
               |     <-(a)         |
               +-------------------+
                      (b)->

             Figure 5: Deployment of Decomposed GWC and GK.

Another consequence of the real time requirement is the supervision of
the pool entities. The name resolution should not result in a pool
element which is not able to provide the required service.

The registration and deregistration process is a dynamic one. It must
be possible for hosts to register in a pool without affecting the
others. This will be the case, for example, if a pool is under high
load and more servers are installed to provide the service of the
pool. It must also be possible to remove hosts from a pool without
affecting the rest.

5.  Server selection

Services are provided by a pool of servers. If a client wants to
connect to a server, one of the servers of the pool has to be chosen.
This functionality is called server selection.

Server selection is driven by the server pool policy. Some examples of
selection policies are load balancing and round robin. The set of
supported policies should be extensible in the sense that new policies
can be added as required.

The ENRP servers should be extensible using a plug-in architecture.
Then clients can provide some hints for the ENRP servers. Combining
this information with the plug-ins will result in a more refined
server selection by the ENRP servers.

The server selection should not be based on internal features of the
underlying transport protocol. This means, for example, in the case of
SCTP that only the state of the associations will be taken into
account and not the state of the paths of the associations.

For some applications it is important that a client repeatedly
connects to the same server in a pool. This feature should be
supported if it is possible, i.e. if that server is still alive.

As shown in figure 5, the following sequence takes place:

     (1)  the Signaling Gateway (SG) receives an incoming signaling
          message to be forwarded to the GWC. SG(X)'s ASAP layer would
          send an ASAP request to its "local" ENRP server to request
          the list of pool elements (PE's) of GWC (using (a)). The key
          used for this query is the pool handle of GWC. The ASAP
          layer queues the data to be sent in local buffers until the
          ENRP server responds.

     (2)  the ENRP server would return a list of the three PE's A, B
          and C to the ASAP layer in SG(X) together with information
          to be used for load-sharing traffic across the gateway
          controller pool (using (b)).

     (3)  the ASAP layer in SG(X) will select one PE (e.g., PE(2,C))
          and send the signaling message to it (using (c)). The
          selection is based on the load sharing information of the
          gateway controller pool.

     (4)  to progress the call, PE(2,C) finds that it needs to talk to
          the Gatekeeper. Assuming it

6.  Reliability aspects

Host failures are handled by has already had gatekeeper pool's
          information in its local cache (e.g., obtained and stored from
          recent query to ENRP Server), PE(2,C) selects PE(1,B) and
          sends the pool concept. If one pool element fails call control message to it (using (d)).

          We assume PE(1,B) responds back to PE(2,C) and there are other pool elements which are able authorizes the
          call to proceed.

     (5)  PE(2,C) issues media control commands to provide the Media Gateway
          (using (e)).
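
The name resolution and selection of steps (1)-(3) can be sketched in a
few lines. This is an illustrative sketch only, assuming a weighted
load-sharing policy; the names (EnrpServer, PoolElement, select_pe) are
invented for the example and do not come from any RSerPool
specification.

```python
import random

# Illustrative sketch of steps (1)-(3): an ASAP layer asks an ENRP
# server for the elements of a pool (keyed by the pool handle) and
# picks one according to load-sharing information. All names here are
# invented for the example.

class PoolElement:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight  # load-sharing information from step (2)

class EnrpServer:
    def __init__(self):
        self.pools = {}       # pool handle -> list of registered PE's

    def register(self, handle, element):
        self.pools.setdefault(handle, []).append(element)

    def resolve(self, handle):
        # step (2): return the PE list together with its load info
        return list(self.pools.get(handle, []))

def select_pe(elements):
    # step (3): weighted random choice driven by load-sharing weights
    total = sum(pe.weight for pe in elements)
    pick = random.uniform(0, total)
    for pe in elements:
        pick -= pe.weight
        if pick <= 0:
            return pe
    return elements[-1]

enrp = EnrpServer()
enrp.register("gwc-pool", PoolElement("PE(2,A)", 1))
enrp.register("gwc-pool", PoolElement("PE(2,B)", 1))
enrp.register("gwc-pool", PoolElement("PE(2,C)", 2))

candidates = enrp.resolve("gwc-pool")  # step (1): query by pool handle
chosen = select_pe(candidates)         # step (3): target for the message
```

The queueing of outgoing data until the ENRP server responds (end of
step (1)) is omitted here; only the resolution and selection are shown.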

RSerPool will provide robustness to the system if some failure occurs
in the system.

For instance, if PE(1, B) in the Gatekeeper Pool crashed after
receiving the call control message from PE(2, C) in step (d) above,
what most likely will happen is that, due to the absence of a reply
from the Gatekeeper, a timer expiration event will trigger the call
state machine within PE(2, C) to resend the control message. The
A-SAP layer in PE(2, C) will then notice the failure of PE(1, B)
through (likely) the endpoint unreachability detection by the
transport layer protocol used beneath A-SAP and automatically deliver
the re-sent call control message to the alternate GK pool member
PE(1, A). With appropriate intra-pool call state sharing support,
PE(1, A) will be able to correctly handle the call and reply to
PE(2, C) and hence progress the call.

5.2.2.  Collocated GWC and GK Scenario

In this scenario, the GWC and GK services are collocated (e.g., they
are implemented as a single process). In such a case, one can form a
pool that provides both GWC and GK services as shown in figure 6
below.

     ........................................
     .  Gateway Controller/Gatekeeper Pool  .
     .                  +-------+           .
     .                  |PE(3,A)|           .
     .                  +-------+           .
     .           +-------+                  .
     .           |PE(3,C)|<---------------------------+
     .           +-------+                  .         |
     .    +-------+  ^                      .         |
     .    |PE(3,B)|  |                      .         |
     .    +-------+  |                      .         |
     ................|.......................         |
                     |                                |
                     +-------------+                  |
                                   |                  |
                                (c)|               (e)|
                                   v                  v
        +++++++++++++++        *********       *****************
        + ENRP-Server +        * SG(X) *       * Media Gateway *
        +++++++++++++++        *********       *****************
               ^                   ^
               |                   |
               |     <-(a)         |
               +-------------------+
                      (b)->

             Figure 6: Deployment of Collocated GWC and GK.

The same sequence as described in 5.2.1 takes place, except that step
(4) now becomes internal to PE(3,C) (again, we assume Server C is
selected by SG).
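
A minimal sketch of the difference, with invented names: when both
services live in one PE, the gatekeeper check of step (4) becomes a
plain function call instead of a message exchange with a second pool.

```python
# Illustrative only: with GWC and GK collocated in one PE, the
# gatekeeper authorization of step (4) happens inside the same
# process; no (d) message leaves the host.

class CollocatedPE:
    def gatekeeper_authorize(self, call):
        # step (4): internal check, always granted in this sketch
        return True

    def handle_signaling(self, call):
        if self.gatekeeper_authorize(call):
            return f"media-control({call})"  # step (5): towards the MG
        return None

result = CollocatedPE().handle_signaling("call-1")
```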

6.  Acknowledgements

The authors would like to thank Bernard Aboba, Matt Holdrege,
Christopher Ross, Werner Vogels and many others for their invaluable
comments and suggestions.

7.  References

[RFC793]    J. B. Postel, "Transmission Control Protocol", RFC 793,
            September 1981.

[RFC959]    J. B. Postel, J. Reynolds, "File Transfer Protocol (FTP)",
            RFC 959, October 1985.

[RFC2026]   S. Bradner, "The Internet Standards Process -- Revision 3",
            RFC 2026, October 1996.

[RFC2608]   E. Guttman et al., "Service Location Protocol, Version 2",
            RFC 2608, June 1999.

[RFC2719]   L. Ong et al., "Framework Architecture for Signaling
            Transport", RFC 2719, October 1999.

[RFC2960]   R. R. Stewart et al., "Stream Control Transmission
            Protocol", RFC 2960, November 2000.

8.  Authors' Addresses

Michael Tuexen                Tel.:   +49 89 722 47210
Siemens AG                    e-mail: Michael.Tuexen@icn.siemens.de
ICN WN CS SE 51
D-81359 Munich
Germany

Qiaobing Xie                  Tel.:   +1 847 632 3028
Motorola, Inc.                e-mail: qxie1@email.mot.com
1501 W. Shure Drive, #2309
Arlington Heights, Il 60004
USA

Randall Stewart               Tel.:   +1 815 477 2127
Cisco Systems, Inc.           e-mail: rrs@cisco.com
24 Burning Bush Trail
Crystal Lake, Il 60012
USA

Eliot Lear                    Tel.:   +1 408 527 4020
Cisco Systems, Inc.           e-mail: lear@cisco.com
170 W. Tasman Dr.
San Jose, CA 95134
USA

Melinda Shore                 Tel.:   +1 607 272 7512
Cisco Systems, Inc.           e-mail: mshore@cisco.com
809 Hayts Rd
Ithaca, NY 14850
USA

Lyndon Ong                    Tel.:   +1 408 321 8237
Point Reyes Networks          e-mail: long@pointreyesnet.com
1991 Concourse Drive
San Jose, CA 95054
USA

John Loughney                 Tel.:
Nokia Research Center         e-mail: john.loughney@nokia.com
PO Box 407
FIN-00045 Nokia Group
Finland

Maureen Stillman              Tel.:   +1 607 273 0724 62
Nokia                         e-mail: maureen.stillman@nokia.com
127 W. State Street
Ithaca, NY 14850
USA

              This Internet Draft expires August 27, 2001.