
Network Working Group                                          M. Tuexen
INTERNET DRAFT                                                Siemens AG
                                                                  Q. Xie
                                                                Motorola
                                                              R. Stewart
                                                                 E. Lear
                                                                M. Shore
                                                                   Cisco
                                                                  L. Ong
                                                    Point Reyes Networks
                                                             J. Loughney
                                                             M. Stillman
                                                                   Nokia
Expires August 27, 2001                                February 27, 2001

                Requirements for Reliable Server Pooling
                   <draft-ietf-rserpool-reqts-01.txt>

Status of this Memo

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of [RFC2026].

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups. Note that other groups
may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet Drafts as reference material
or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Abstract

The goal is to develop an architecture and protocols for the management
and operation of server pools supporting highly reliable applications,
and for client access mechanisms to a server pool.

This document defines a basic set of requirements for reliable server
pooling. A comparison is made to existing protocols and solutions to the
problem space. A proposed architecture for fulfilling these requirements
is presented and finally illustrated by examples.



Tuexen et al.                                                   [Page 1]


Internet Draft  Requirements for Reliable Server Pooling   February 2001


Table of Contents

1. Introduction  . . . . . . . . . . . . . . . . . . . . . . . . . .   3
1.1. Overview  . . . . . . . . . . . . . . . . . . . . . . . . . . .   3
1.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . .   3
1.3. Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . .   4
2. Requirements  . . . . . . . . . . . . . . . . . . . . . . . . . .   4
2.1. Communication Model . . . . . . . . . . . . . . . . . . . . . .   4
2.2. Processing Power  . . . . . . . . . . . . . . . . . . . . . . .   5
2.3. Transport Protocol  . . . . . . . . . . . . . . . . . . . . . .   5
2.4. Support of RSerPool Unaware Clients . . . . . . . . . . . . . .   5
2.5. Registering and Deregistering . . . . . . . . . . . . . . . . .   5
2.6. Server Selection  . . . . . . . . . . . . . . . . . . . . . . .   5
2.7. Timing Requirements . . . . . . . . . . . . . . . . . . . . . .   6
2.8. Failover Support  . . . . . . . . . . . . . . . . . . . . . . .   6
2.9. Robustness  . . . . . . . . . . . . . . . . . . . . . . . . . .   6
2.10. Naming . . . . . . . . . . . . . . . . . . . . . . . . . . . .   6
2.11. Scalability  . . . . . . . . . . . . . . . . . . . . . . . . .   7
2.12. Security Requirements  . . . . . . . . . . . . . . . . . . . .   7
2.12.1. General  . . . . . . . . . . . . . . . . . . . . . . . . . .   7
2.12.2. Name Space Services  . . . . . . . . . . . . . . . . . . . .   7
2.12.3. Security State . . . . . . . . . . . . . . . . . . . . . . .   7
3. Relation to Other Solutions . . . . . . . . . . . . . . . . . . .   8
3.1. CORBA . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   8
3.2. DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   8
3.2.1. Name/Address Resolution . . . . . . . . . . . . . . . . . . .   9
3.2.2. Administration of DNS . . . . . . . . . . . . . . . . . . . .   9
3.2.3. Dynamic Monitoring of Server Status . . . . . . . . . . . . .  10
3.2.4. Proxy Hiding  . . . . . . . . . . . . . . . . . . . . . . . .  10
3.2.5. Scalability . . . . . . . . . . . . . . . . . . . . . . . . .  10
3.3. Service Location Protocol . . . . . . . . . . . . . . . . . . .  10
4. Reliable Server Pooling Architecture  . . . . . . . . . . . . . .  11
4.1. Common RSerPool Functional Areas  . . . . . . . . . . . . . . .  12
4.2. RSerPool Protocol Overview  . . . . . . . . . . . . . . . . . .  12
4.3. Typical Interactions between RSerPool Components  . . . . . . .  13
5. Examples  . . . . . . . . . . . . . . . . . . . . . . . . . . . .  15
5.1. Two File Transfer Examples  . . . . . . . . . . . . . . . . . .  15
5.1.1. The RSerPool Aware Client . . . . . . . . . . . . . . . . . .  16
5.1.2. The RSerPool Unaware Client . . . . . . . . . . . . . . . . .  17
5.2.1. Decomposed GWC and GK Scenario  . . . . . . . . . . . . . . .  18
5.2.2. Collocated GWC and GK Scenario  . . . . . . . . . . . . . . .  20
6. Acknowledgements  . . . . . . . . . . . . . . . . . . . . . . . .  21
7. References  . . . . . . . . . . . . . . . . . . . . . . . . . . .  21
8. Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  22









1.  Introduction

1.1.  Overview

The Internet is always on. Many users expect services to be always
available; many businesses depend upon connectivity 24 hours a day, 7
days a week, 365 days a year. To meet these expectations, many
proprietary and operating system dependent solutions have been developed
to provide highly reliable and highly available servers.

This document defines requirements for reliable server pooling and a
proposed architecture which can be used to provide highly available
services. This is achieved by grouping servers into pools: a client that
wants to access a server pool can use any of the servers in the pool,
taking into account the server pool policy.

Highly available services also put the same high reliability
requirements upon the transport layer protocol beneath RSerPool - it
must provide strong survivability in the face of network component
failures.

Supporting real time applications is another main focus of RSerPool
which leads to requirements on the processing time needed.

Scalability is another important requirement.

RSerPool introduces new security vulnerabilities into existing
applications, both in the pool formation and pool member selection
process and in the failover process.  Therefore, during the protocol
development process it will be necessary to catalogue the threats to
RSerPool and identify appropriate responses to those threats.

1.2.  Terminology

This document uses the following terms:

     Operation scope:
          The part of the network made visible to pool users by a
          specific instance of the reliable server pooling protocols.

     Pool (or server pool):
          A collection of servers providing the same application
          functionality.

     Pool handle (or pool name):
          A logical pointer to a pool. Each server pool will be
          identifiable in the operation scope of the system by a unique
          pool handle or "name".

     Pool element:
          A server entity having registered to a pool.

     Pool user:
          A server pool user.

     Pool element handle (or endpoint handle):
          A logical pointer to a particular pool element in a pool,
          consisting of the name of the pool and a destination transport
          address of the pool element.

     Name space:
          A cohesive structure of pool names and relations that may be
          queried by an internal or external agent.

     Name server:
          Entity which is responsible for managing and maintaining the
          name space within the RSerPool operation scope.
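The relationships among these terms can be illustrated with a few simple
data structures. This is only an illustrative sketch; the class and
field names are invented here and are not defined by this document:

```python
# Illustrative sketch of the terminology above. All names and fields
# are hypothetical, invented only to show how the concepts relate.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class PoolElement:
    # A server entity registered to a pool, reachable at one or more
    # transport addresses (IP address, port).
    transport_addresses: List[Tuple[str, int]]


@dataclass
class Pool:
    # A collection of servers providing the same application
    # functionality, identified by a pool handle ("name").
    handle: str
    elements: List[PoolElement] = field(default_factory=list)


@dataclass
class NameSpace:
    # All pool handles defined within one operation scope,
    # maintained by the name servers.
    pools: Dict[str, Pool] = field(default_factory=dict)

    def resolve(self, handle: str) -> Pool:
        # A pool handle is a logical pointer to a pool: here simply
        # a key into the name space.
        return self.pools[handle]
```

Note that a pool element handle would combine the pool name with one
destination transport address of a particular pool element.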

1.3.  Abbreviations

     ASAP: Aggregate Server Access Protocol

     ENRP: Endpoint Name Resolution Protocol

     DPE:  Distributed Processing Environment

     PE:   Pool element

     PU:   Pool user

     SCTP: Stream Control Transmission Protocol

     SLP:  Service Location Protocol

     TCP:  Transmission Control Protocol

2.  Requirements

2.1.  Communication Model

The general architecture should be based on a peer-to-peer model.
However, the binding should be based on a client-server model.








2.2.  Processing Power

It should be possible to use the protocol stack in small devices, like
handheld wireless devices. The solution must scale to devices with a
wide range of processing power.

2.3.  Transport Protocol

The protocols used for the pool handling should not cause network
congestion. This means that they must not generate heavy traffic, even
in case of failures, and must use flow control and congestion avoidance
algorithms that are interoperable with currently deployed techniques,
especially the flow control of TCP [RFC793] and SCTP [RFC2960].
Therefore, for large pools, only a subset of all possible IP addresses
are returned by the name servers.

The architecture should not rely on multicast capabilities of the
underlying layer. Nevertheless, it can make use of them if multicast
capabilities are available.

Network failures have to be handled and concealed from the application
layer as much as possible by the transport protocol. This means that the
underlying transport protocol must provide a strong network failure
handling capability on top of an acknowledged error-free non-duplicated
data delivery service. Therefore SCTP is the required transport protocol
for RSerPool.

2.4.  Support of RSerPool Unaware Clients

It is expected that there will be a transition phase with some systems
supporting the RSerPool architecture and others not. To make this
transition as seamless as possible, it should be possible for hosts not
supporting this architecture to access the new pooling services via some
mechanism.

2.5.  Registering and Deregistering

Another important requirement is that servers should be able to register
to (become PEs) and deregister from a server pool transparently without
an interruption in service.

Servers should be able to register in multiple server pools which may
belong to different namespaces.

2.6.  Server Selection

The RSerPool mechanisms must be able to support different server
selection mechanisms. These are called server pool policies.





Examples of server pool policies are:

     -    Round Robin

     -    Least used

     -    Most used

The set of supported policies must be extensible in the sense that new
policies can be added as required.

There must be a way for the client to provide information to the name
server about the pool elements.

The name servers should be extensible using a plug-in architecture.
These plug-ins would provide a more refined server selection by the name
servers using additional information provided by clients as hints.

For some applications it is important that a client repeatedly connects
to the same server in a pool if possible, i.e., if that server is still
alive. This feature should be supported through the use of pool element
handles.
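As an illustration, the three example policies above could be sketched
as follows. The selection interface is hypothetical; the document only
requires that policies be pluggable and extensible:

```python
# Sketch of the three example server pool policies named above:
# round robin, least used, and most used. Each factory returns a
# selector function; the interface is invented for illustration.
import itertools


def round_robin(elements):
    # Cycle through the pool elements in registration order.
    cycler = itertools.cycle(elements)
    return lambda: next(cycler)


def least_used(elements, load):
    # Select the element reporting the lowest load value.
    return lambda: min(elements, key=lambda e: load[e])


def most_used(elements, load):
    # Select the element reporting the highest load value.
    return lambda: max(elements, key=lambda e: load[e])
```

A plug-in architecture on the name servers could register further
selector factories of this kind, refining selection with hints provided
by clients.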

2.7.  Timing Requirements

A server pool can consist of a large number (up to 500) of pool
elements.  This upper limit is important since the system will be used
for real time applications. So handling of name resolution has to be
fast.

Another consequence of the real time requirement is the supervision of
the pool elements. Name resolution must not return a pool element which
is not operational.

2.8.  Failover Support

The RSerPool architecture must be able to detect server failure quickly
and be able to perform failover without service interruption.

2.9.  Robustness

The solution must allow itself to be implemented and deployed in such a
way that there is no single point of failure in the system.

2.10.  Naming

Server pools are identified by pool handles. These pool handles are only
valid inside the operation scope. Interoperability between different
namespaces has to be provided by other mechanisms.

2.11.  Scalability

The RSerPool architecture should not require a limitation on the number
of server pools or on the number of pool users.

2.12.  Security Requirements

2.12.1.  General

     -    The scaling characteristics of the security architecture
          should be compatible with those given previously.

     -    The security architecture should support hosts having a wide
          range of processing powers.

2.12.2.  Name Space Services

     -    It must not be possible for an attacker to falsely register as
          a pool element with the name server either by masquerading as
          another pool element or by registering in violation of local
          authorization policy.

     -    It must not be possible for an attacker to deregister a server
          which has successfully registered with the name server.

     -    It must not be possible for an attacker to spoof the response
          to a query to the name server.

     -    It must be possible to prohibit unauthorized queries to the
          name server.

     -    It must be possible to protect the privacy of queries to the
          name server and responses to those queries from the name
          server.

     -    Communication among name servers must be afforded the same
          protections as communication between clients and name servers.

2.12.3.  Security State

The security context of an application is a subset of the overall
context, and context or state sharing is explicitly out of scope for
RSerPool.  Because RSerPool does introduce new security vulnerabilities
to existing applications, application designers employing RSerPool
should be aware of problems inherent in failing over secured
connections.
Security services necessarily retain some state and this state may have
to be moved or re-established. Examples of this state include
authentication or retained ciphertext for ciphers operating in cipher
block chaining (CBC) or cipher feedback (CFB) mode.  These problems must
be addressed by the application or by future work on RSerPool.

3.  Relation to Other Solutions

This section covers some existing solutions which overlap somewhat with
the problem space of RSerPool.

3.1.  CORBA

Often referred to as a Distributed Processing Environment (DPE), CORBA
was mainly designed to provide location transparency for distributed
applications. However, the following limitations may exist when applying
CORBA to the design of real-time fault-tolerant systems:

     1.   CORBA has not been focused on high availability. The recent
          development of a high availability version of CORBA by OMG may
          be a step in the right direction towards improving this
          situation. Nevertheless, the maturity, implementability, and
          real-time performance of the design is yet to be proven.

     2.   CORBA's distribution model encourages an object-based view,
          i.e., each communication endpoint is normally an object. This
          level of granularity is likely to be somewhat inefficient for
          designing real-time fault-tolerant applications.

     3.   CORBA, in general, has a large footprint that makes its use a
          challenge in real-time environments. Small devices with
          limited memory and CPU resources (e.g., H.323 or SIP
          terminals) will find CORBA hard to fit in.

     4.   CORBA has lacked easily usable support for the asynchronous
          communication model, and this may be an issue in many
          applications. An apparently improved API for asynchronous
          communication has been added to the CORBA standards recently,
          but many, if not most, CORBA implementations do not yet
          support it. There is as yet insufficient user experience with
          it to make conclusions regarding this feature's usability.

3.2.  DNS

This section explains why DNS is not appropriate as the sole solution
for RSerPool. In addition, it highlights specific technical differences
between RSerPool and DNS.







During the plenary of the 49th IETF meeting on December 13, 2000, Randy
Bush presented a talk entitled "The DNS Today: Are we Overloading the
Saddlebags on an Old Horse?" This talk underlined the concern that DNS
is currently overloaded with extraneous tasks and has the potential to
break down entirely due to a growing number of feature enhancements.

One requirement on any solution proposed by RSerPool is to avoid placing
any additional burden upon DNS. The solution should provide a mechanism
separate from DNS, so that only those applications that need the added
reliability implement the RSerPool protocols and bear their cost.
Interworking between DNS and RSerPool will be considered so that
additional burdens to DNS will not be added.

3.2.1.  Name/Address Resolution

The technical requirement for DNS name/address resolution is that given
a name, find a host associated with this name and return its IP
address(es). In other words, in DNS we have the following mapping:

     -    name -> a host machine

     -    address(es) -> IP address(es) to reach that (hardware) host
          machine

The technical requirement for RSerPool name/address resolution is that
given a name (or pool handle), find a server pool associated with this
name and return a list of transport addresses (i.e., IP addresses plus
port numbers) for reaching a set of currently operational servers inside
the pool. In other words, in RSerPool we have the following mapping:

     -    name -> a handle to a server pool which is often distributed
          across multiple host machines

     -    address -> IP addresses and port numbers to reach a set of
          functionally identical (software) server entities.
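The difference between the two mappings can be sketched as follows. The
dictionary layouts are invented purely for this illustration:

```python
# Sketch of the two mappings described above. DNS maps a name to the
# IP address(es) of a single host machine; an RSerPool name server
# maps a pool handle to transport addresses (IP address plus port)
# of a set of currently operational pool elements, possibly spread
# over many hosts. Both data layouts here are hypothetical.

def dns_resolve(zone, host_name):
    # DNS: name -> IP address(es) of one (hardware) host machine.
    return zone[host_name]


def rserpool_resolve(name_space, pool_handle):
    # RSerPool: name -> transport addresses of currently operational
    # server entities registered under that pool handle.
    return [addr
            for element in name_space[pool_handle]
            if element["operational"]
            for addr in element["transport_addresses"]]
```

Note that the RSerPool mapping filters out non-operational elements, in
line with the monitoring requirement of Section 3.2.3.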

3.2.2.  Administration of DNS

DNS is designed such that the system administrator makes administrative
changes to DNS records when a host is added to or deleted from the
network. This is a static approach that is controlled through a person
whose responsibility is to keep the DNS zone (specific piece of the
database) up to date.

RSerPool is designed for software server entities to register themselves
with a name server dynamically. They can also de-register themselves for
purposes of preventative maintenance or can be de-registered by a name
server that believes the server entity is no longer operational. This is
a dynamic approach, which is coordinated through servers in the pool and
among RSerPool name servers.

3.2.3.  Dynamic Monitoring of Server Status

DNS does no monitoring of host status. It is passive in the sense that
it does not monitor or store information on the state of the host such
as whether the host is up or down or what kind of load it is currently
experiencing.

RSerPool monitors the state of each server entity on various hosts on a
continual basis and can collect several state variables including
up/down state and current load. If a server is no longer operational,
eventually it will be dropped from the list of available servers
maintained by the name server, so that subsequent application name
queries will not resolve to this server address.

3.2.4.  Proxy Hiding

Proxy hiding is a serious issue for DNS global load balancing because a
DNS host stub resolver typically does not support recursion, and
therefore the DNS server sees a query from the host's DNS server, and
not from the host itself.

This is a problem because ISPs typically do not deploy their DNS servers
near the POPs, so a user in Seattle can end up load balanced to a Web
server in Virginia because that is where the user's DNS server is
located. This makes DNS of little value as a global load balancing
mechanism.

3.2.5.  Scalability

The other issue is scaling, and specifically the DNS TTL. The TTL needs
to be on the order of the time scale on which the system can respond to
a load change. Since stability requires smoothing of load metrics
anyway, this does not imply a tiny or zero TTL, particularly if IP
failover is available. But it does imply a smaller TTL than would
otherwise be the case (minutes instead of hours).

3.3.  Service Location Protocol

SLP comprises three components: user agents, service agents, and
directory agents. User agents work on the user's behalf to contact a
service. The UA retrieves service information from service agents or
directory agents. A service agent works on behalf of one or more
services to advertise those services. A directory agent collects service
advertisements.






The directory agent of SLP functions simply as a cache and is passive in
this regard. Also, the directory agent is optional, and SLP can function
without it. It is incumbent upon the servers to update the cache as
necessary by reregistering. The directory agent is not required in small
networks, as the user agents can contact service agents directly using
multicast. User agents are encouraged to keep trying to locate a
directory agent at regular intervals if they cannot find one initially.

The most fundamental difference between SLP and RSerPool is that SLP is
service-oriented while RSerPool is communication-oriented. More
specifically, what SLP provides to its user is a mapping function from a
name of a service to the location of the service provider, in the form
of a URL string. Whether the service provider is reachable and how to
access it by the URL are out of the scope of SLP. Therefore, the
granularity of SLP operation is at application service level.

In contrast, what RSerPool provides to its user is a mapping function
from a communication destination name to a set of routable and reachable
transport addresses that lead to a group of distributed software server
entities registered under that name that collectively represent the
named communication destination. RSerPool also takes the responsibility
of reliably delivering a user message to one of these server entities.
What service(s) this group of servers are providing at the application
level or whether the group is just a component of an application service
provider is out of the scope of RSerPool. In other words, the
granularity of RSerPool operation is at communication server entity
level.

Moreover, RSerPool is designed to be a distributed fault-tolerant and
real time translation service. SLP does not state either of these as
design requirements and thus does not attempt to fulfill them. In
addition, SLP defines optional security features which support
authentication and integrity. These are mandatory to implement but
optional in terms of use. RSerPool has stronger security requirements.

The SLP directory agent does not support fault tolerance or robustness,
in contrast to the name servers, which do. The name servers also monitor
the state of the servers which are registered in the pool, a function
the SLP directory agents do not perform. SLP uses multicast, whereas in
RSerPool multicast is optional.

4.  Reliable Server Pooling Architecture

In this section, we discuss what a typical reliable server pool
architecture may look like.








4.1.  Common RSerPool Functional Areas

The following functional areas or components are likely to be present in
a typical RSerPool system architecture:

     -    A number of logical "Server Pools" to provide distinct
          application services.

          Each of those server pools will likely be composed of some
          number of "Pool Elements (PEs)" - which are application
          programs running on distributed host machines, collectively
          providing the desired application services via, for example,
          data sharing and/or load sharing.

          Each server pool will be identifiable in the operation scope
          of the system by a unique "name".

     -    Some "Pool Users (PUs)" which are the users of the application
          services provided by the various server pools.

     -    PEs may or may not be PUs, depending on whether or not they
          wish to access other pools in the operation scope of the
          system.

     -    A "Name Space" which contains all the defined names within the
          operation scope of the system.

     -    One or more "Name Servers" which carry out various maintenance
          functions (e.g., registration and de-registration, integrity
          checking) for the "Name Space".

4.2.  RSerPool Protocol Overview

The features required of RSerPool can be provided with the help of two
protocols: ENRP (Endpoint Name Resolution Protocol) and ASAP (Aggregate
Server Access Protocol).

ENRP is designed to provide a fully distributed fault-tolerant real-time
translation service that maps a name to a set of transport addresses
pointing to a specific group of networked communication endpoints
registered under that name. ENRP employs a client-server model in which
an ENRP server responds to name translation service requests from
endpoint clients running on the same host or on different hosts.

ASAP in conjunction with ENRP provides a fault tolerant data transfer
mechanism over IP networks. ASAP uses a name-based addressing model
which isolates a logical communication endpoint from its IP address(es),
thus effectively eliminating the binding between the communication
endpoint and its physical IP address(es) which normally constitutes a
single point of failure.

In addition, ASAP defines each logical communication destination as a
server pool, providing full transparent support for server-pooling and
load sharing. It also allows dynamic system scalability - members of a
server pool can be added or removed at any time without interrupting the
service.

Fault-tolerant server pooling is achieved by combining the two parts,
ASAP and ENRP. ASAP provides the user interface for name-to-address
translation, load sharing management, and fault management. ENRP defines
the fault-tolerant name translation service. The protocol stack used is
shown in Figure 1.

               *********        ***********
               * PE/PU *        *ENRP Srvr*
               *********        ***********

               +-------+        +----+----+
  To other <-->| ASAP  |<------>|ASAP|ENRP| <---To Peer ENRP
  PE/PU        +-------+        +----+----+       Name Servers
               | SCTP  |        |  SCTP   |
               +-------+        +---------+
               |  IP   |        |   IP    |
               +-------+        +---------+
                    Figure 1: Typical protocol stack
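The division of labour in Figure 1 can be sketched roughly as follows.
All function names and interfaces here are hypothetical; the actual
message exchanges are defined by the ENRP and ASAP protocols themselves:

```python
# Rough sketch of the ENRP/ASAP split shown in Figure 1: ENRP
# performs the fault-tolerant name translation, while ASAP on the
# PU side addresses user messages by pool name and delivers them to
# one reachable pool element. All interfaces are hypothetical.

def enrp_translate(name_space, pool_name):
    # ENRP side: name -> set of transport addresses registered
    # under that name.
    return list(name_space.get(pool_name, []))


def asap_send(name_space, pool_name, message, transport_send):
    # ASAP side: the caller addresses the message by pool name, not
    # by IP address; delivery is attempted against each translated
    # transport address until one succeeds.
    for address in enrp_translate(name_space, pool_name):
        if transport_send(address, message):
            return address
    raise ConnectionError("no reachable pool element for %r" % pool_name)
```

The point of the sketch is the name-based addressing model: the sender
never binds to a single IP address, so no single address constitutes a
single point of failure.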

4.3.  Typical Interactions between RSerPool Components

The following drawing shows the typical RSerPool components and their
possible interactions with each other:




















  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ~                                                  operation scope ~
  ~  .........................          .........................    ~
  ~  .        Server Pool 1  .          .        Server Pool 2  .    ~
  ~  .  +-------+ +-------+  .    (d)   .  +-------+ +-------+  .    ~
  ~  .  |PE(1,A)| |PE(1,C)|<-------------->|PE(2,B)| |PE(2,A)|<---+  ~
  ~  .  +-------+ +-------+  .          .  +-------+ +-------+  . |  ~
  ~  .      ^            ^   .          .      ^         ^      . |  ~
  ~  .      |      (a)   |   .          .      |         |      . |  ~
  ~  .      +----------+ |   .          .      |         |      . |  ~
  ~  .  +-------+      | |   .          .      |         |      . |  ~
  ~  .  |PE(1,B)|<---+ | |   .          .      |         |      . |  ~
  ~  .  +-------+    | | |   .          .      |         |      . |  ~
  ~  .      ^        | | |   .          .      |         |      . |  ~
  ~  .......|........|.|.|....          .......|.........|....... |  ~
  ~         |        | | |                     |         |        |  ~
  ~      (c)|     (a)| | |(a)               (a)|      (a)|     (c)|  ~
  ~         |        | | |                     |         |        |  ~
  ~         |        v v v                     v         v        |  ~
  ~         |     +++++++++++++++    (e)     +++++++++++++++      |  ~
  ~         |     + ENRP-Server +<---------->+ ENRP-Server +      |  ~
  ~         |     +++++++++++++++            +++++++++++++++      |  ~
  ~         v            ^                          ^             |  ~
  ~     *********        |                          |             |  ~
  ~     * PU(A) *<-------+                       (b)|             |  ~
  ~     *********   (b)                             |             |  ~
  ~                                                 v             |  ~
  ~         :::::::::::::::::      (f)      *****************     |  ~
  ~         : Other Clients :<------------->* Proxy/Gateway * <---+  ~
  ~         :::::::::::::::::               *****************        ~
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     Figure 2: RSerPool components and their possible interactions.

In figure 2 we can identify the following possible interactions:

     (a)  Server Pool Elements <-> ENRP Server: (ASAP)

          Each PE in a pool uses ASAP to register or de-register itself
          as well as to exchange other auxiliary information with the
          ENRP Server. The ENRP Server also uses ASAP to monitor the
          operational status of each PE in a pool.

     (b)  PU <-> ENRP Server: (ASAP)

          A PU normally uses ASAP to request a name-to-address
          translation service from the ENRP Server before the PU can
          send user messages addressed to a server pool by the pool's
          name.




Tuexen et al.                                                  [Page 14]


Internet Draft  Requirements for Reliable Server Pooling   February 2001


     (c)  PU <-> PE: (ASAP)

          ASAP can be used to exchange auxiliary information between
          the two parties before they engage in user data transfer.

     (d)  Server Pool <-> Server Pool: (ASAP)

          A PE in a server pool can become a PU to another pool when
          the PE tries to initiate communication with that pool. In
          such a case, the interactions described in (b) and (c)
          above apply.

     (e)  ENRP Server <-> ENRP Server: (ENRP)

          ENRP can be used to fulfill various Name Space operation,
          administration, and maintenance (OAM) functions.

     (f)  Other Clients <-> Proxy/Gateway: standard protocols

          The proxy/gateway enables clients ("other clients") that
          are not RSerPool aware to access services provided by an
          RSerPool based server pool. It should be noted that these
          proxies/gateways may become a single point of failure.
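
As an illustration of interactions (a) and (b), the following
Python sketch models the name space an ENRP server maintains: a
mapping from pool handles to registered PE addresses. The class and
method names are invented for this sketch; neither ENRP nor ASAP
defines such an API.

```python
# Illustrative sketch only: EnrpServer, register, deregister and
# resolve are invented names, not part of ENRP or ASAP.

class EnrpServer:
    """Minimal model of the name space an ENRP server maintains."""

    def __init__(self):
        # pool handle (i.e. name) -> list of registered PE addresses
        self.pools = {}

    def register(self, handle, pe):
        """A PE registers itself in a pool (interaction (a))."""
        pes = self.pools.setdefault(handle, [])
        if pe not in pes:
            pes.append(pe)

    def deregister(self, handle, pe):
        """A PE de-registers itself from a pool (interaction (a))."""
        pes = self.pools.get(handle, [])
        if pe in pes:
            pes.remove(pe)

    def resolve(self, handle):
        """Name-to-address translation for a PU (interaction (b))."""
        return list(self.pools.get(handle, []))
```

A PU would resolve a pool handle once, cache the result, and then
address individual PEs directly; PE monitoring and failure
reporting are omitted from this sketch.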

5.  Examples

In this section the basic concepts of ENRP and ASAP are described.
First, an RSerPool aware FTP server is considered, and its
interaction with both an RSerPool aware and a non-aware client is
given. Finally, a telephony example is considered.

5.1.  Two File Transfer Examples

In this section we present two file transfer examples using ENRP and
ASAP: one with an ENRP/ASAP aware client, and one with a client that
uses a Proxy or Gateway to perform the file transfer. Both examples
use an FTP [RFC959] model with some modifications. The first example
(the RSerPool aware one) modifies FTP concepts so that the file
transfer takes place over SCTP. In the second example TCP is used
between the unaware client and the Proxy. The Proxy itself uses the
modified FTP with RSerPool as illustrated in the first example.

Please note that in these examples we do NOT follow FTP [RFC959]
precisely but use FTP-like concepts and attempt to adhere to the
basic FTP model. FTP was chosen for illustrative purposes since many
of its basic concepts are well known and should be familiar to
readers.





5.1.1.  The RSerPool Aware Client

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~                                                  operation scope ~
~  .........................                                       ~
~  . "File Transfer Pool"  .                                       ~
~  .  +-------+ +-------+  .                                       ~
~ +-> |PE(1,A)| |PE(1,C)|  .                                       ~
~ |.  +-------+ +-------+  .                                       ~
~ |.      ^            ^   .                                       ~
~ |.      +----------+ |   .                                       ~
~ |.  +-------+      | |   .                                       ~
~ |.  |PE(1,B)|<---+ | |   .                                       ~
~ |.  +-------+    | | |   .                                       ~
~ |.      ^        | | |   .                                       ~
~ |.......|........|.|.|....                                       ~
~ |  ASAP |    ASAP| | |ASAP                                       ~
~ |(d)    |(c)     | | |                                           ~
~ |       v        v v v                                           ~
~ |   *********   +++++++++++++++                                  ~
~ + ->* PU(X) *   + ENRP-Server +                                  ~
~     *********   +++++++++++++++                                  ~
~         ^     ASAP     ^                                         ~
~         |     <-(b)    |                                         ~
~         +--------------+                                         ~
~               (a)->                                              ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           Figure 3: Architecture for RSerPool aware client.

To effect a file transfer, the following steps would take place:

     (1)  The application in PU(X) would send a login request.
          PU(X)'s ASAP layer would send an ASAP request to its ENRP
          server to request the list of pool elements (using (a)).
          The pool handle identifying the pool would be "File
          Transfer Pool". The ASAP layer queues the login request.

     (2)  The ENRP server would return a list of the three PE's PE(1,A),
          PE(1,B) and PE(1,C) to the ASAP layer in PU(X) (using (b)).

     (3)  The ASAP layer selects one of the PE's, for example
          PE(1,B). It transmits the login request and the other FTP
          control data, and finally starts the transmission of the
          requested files (using (c)). For this, the multiple-stream
          feature of SCTP could be used.

     (4)  If PE(1,B) fails during the file transfer conversation,
          and assuming the PE's were sharing file transfer state, a
          fail-over to PE(1,A) could be initiated. PE(1,A) would
          continue the transfer until complete (using (d)). In
          parallel, a request would be made to the ENRP server to
          update the cache for the server pool "File Transfer Pool",
          and a report would also be made that PE(1,B) is
          non-responsive (this would trigger appropriate audits that
          may remove PE(1,B) from the pool if the ENRP servers had
          not already detected the failure) (using (a)).
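
The four steps above can be condensed into the following Python
sketch. All names are invented for this illustration, and PEs are
modeled as plain callables; ASAP defines no such API. The client
resolves the pool handle, sends the request to one PE, and on
failure reports the unresponsive PE and fails over to an alternate.

```python
# Illustrative sketch of steps (1)-(4); all names are invented.

def transfer(resolve, report_failure, request):
    """Resolve the pool, then try PEs in turn, failing over on error."""
    # Steps (1) and (2): name resolution via the ENRP server ((a)/(b)).
    pes = resolve("File Transfer Pool")
    for pe in pes:
        try:
            # Step (3): send the queued login request and control
            # data to the selected PE (using (c)).
            return pe(request)
        except ConnectionError:
            # Step (4): report the non-responsive PE to ENRP ((a))
            # and fail over to the next PE in the list.
            report_failure(pe)
    raise RuntimeError("no responsive PE in pool")
```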

5.1.2.  The RSerPool Unaware Client

In this example we investigate the use of a Proxy server, assuming
the same scenario as illustrated above.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~                                                  operation scope ~
~  .........................                                       ~
~  . "File Transfer Pool"  .                                       ~
~  .  +-------+ +-------+  .                                       ~
~  .  |PE(1,A)| |PE(1,C)|  .                                       ~
~  .  +-------+ +-------+  .                                       ~
~  .      ^            ^   .                                       ~
~  .      +----------+ |   .                                       ~
~  .  +-------+      | |   .                                       ~
~  .  |PE(1,B)|<---+ | |   .                                       ~
~  .  +-------+    | | |   .                                       ~
~  .......^........|.|.|....                                       ~
~         |        | | |                                           ~
~         |    ASAP| | |ASAP                                       ~
~         |        | | |                                           ~
~         |        v v v                                           ~
~         |       +++++++++++++++          +++++++++++++++         ~
~         |       + ENRP-Server +<--ENRP-->+ ENRP-Server +         ~
~         |       +++++++++++++++          +++++++++++++++         ~
~         |                                ASAP   ^                ~
~         |     ASAP       (c)                (b) |  ^             ~
~         +---------------------------------+  |  |  |             ~
~                                           |  v  | (a)            ~
~                                           v     v                ~
~         :::::::::::::::::     (e)->     *****************        ~
~         :   FTP Client  :<------------->* Proxy/Gateway *        ~
~         :::::::::::::::::     (f)       *****************        ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
          Figure 4: Architecture for RSerPool unaware client.

In this example the following steps occur:

     (1)  The FTP client and the Proxy/Gateway communicate using the
          TCP-based FTP protocol. The client sends the login request
          to the proxy (using (e)).

     (2)  The proxy behaves like a client and performs the actions
          described under (1), (2) and (3) of the above description
          (using (a), (b) and (c)).

     (3)  The FTP communication continues and is translated by the
          proxy into the RSerPool aware dialect. This interworking
          uses (f) and (c).

Note that in this example high availability is maintained between
the Proxy and the server pool, but a single point of failure exists
between the FTP client and the Proxy, i.e. the TCP control
connection and the single IP address it uses.
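
The proxy's role can be sketched as a simple bridge loop: it reads
FTP commands from the TCP control connection and relays each one
into the server pool. The callback structure and all names here are
invented for this illustration; no RSerPool protocol defines them.

```python
# Illustrative sketch only; all names are invented.

def proxy_loop(read_cmd, pool_send, write_reply):
    """Bridge an unaware FTP client ((e)/(f)) to the pool side ((c))."""
    while True:
        cmd = read_cmd()          # from the TCP control connection (e)
        if cmd is None:           # client closed the connection
            break
        # Translate the command into the RSerPool aware dialect,
        # forward it to a PE, and return the reply to the client (f).
        write_reply(pool_send(cmd))
```

The pool_send callback would internally apply the fail-over logic of
Section 5.1.1, so only the client-to-proxy leg remains a single
point of failure.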

5.2.  Telephony Signaling Example

This example shows the use of ASAP/RSerPool to support server pooling
for high availability of a telephony application such as a Voice over IP
Gateway Controller (GWC) and Gatekeeper services (GK).

In this example, we show two different scenarios of deploying these
services using RSerPool in order to illustrate the flexibility of the
RSerPool architecture.

5.2.1.  Decomposed GWC and GK Scenario

In this scenario, both GWC and GK services are deployed as separate
pools with some number of PEs, as shown in the following diagram.
Each of the pools registers its unique pool handle (i.e. name) with
the ENRP Server. We also assume that a Signaling Gateway (SG) and a
Media Gateway (MG) are present and both are RSerPool aware.




                           ...................
                           .    Gateway      .
                           . Controller Pool .
    .................      .   +-------+     .
    .   Gatekeeper  .      .   |PE(2,A)|     .
    .     Pool      .      .   +-------+     .
    .   +-------+   .      .   +-------+     .
    .   |PE(1,A)|   .      .   |PE(2,B)|     .
    .   +-------+   .      .   +-------+     .
    .   +-------+   . (d)  .   +-------+     .
    .   |PE(1,B)|<------------>|PE(2,C)|<-------------+
    .   +-------+   .      .   +-------+     .        |
    .................      ........^..........        |
                                   |                  |
                                (c)|               (e)|
                                   |                  v
        +++++++++++++++        *********       *****************
        + ENRP-Server +        * SG(X) *       * Media Gateway *
        +++++++++++++++        *********       *****************
               ^                   ^
               |                   |
               |     <-(a)         |
               +-------------------+
                      (b)->

             Figure 5: Deployment of Decomposed GWC and GK.

As shown in Figure 5, the following sequence takes place:

     (1)  The Signaling Gateway (SG) receives an incoming signaling
          message to be forwarded to the GWC. SG(X)'s ASAP layer would
          send an ASAP request to its "local" ENRP server to request the
          list of pool elements (PE's) of GWC (using (a)). The key used
          for this query is the pool handle of the GWC. The ASAP layer
          queues the data to be sent in local buffers until the ENRP
          server responds.

     (2)  The ENRP server would return a list of the three PE's A, B and
          C to the ASAP layer in SG(X) together with information to be
          used for load-sharing traffic across the gateway controller
          pool (using (b)).

     (3)  The ASAP layer in SG(X) will select one PE (e.g., PE(2,C)) and
          send the signaling message to it (using (c)). The selection is
          based on the load sharing information of the gateway
          controller pool.







     (4)  To progress the call, PE(2,C) finds that it needs to talk
          to the Gatekeeper. Assuming it already has the gatekeeper
          pool's information in its local cache (e.g., obtained and
          stored from a recent query to the ENRP Server), PE(2,C)
          selects PE(1,B) and sends the call control message to it
          (using (d)).

          We assume PE(1,B) responds to PE(2,C) and authorizes the
          call to proceed.

     (5)  PE(2,C) issues media control commands to the Media Gateway
          (using (e)).
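
Steps (2) and (3) above can be illustrated with a simple weighted
selection. Representing the load-sharing information as one weight
per PE is an assumption of this sketch; ASAP does not mandate this
form.

```python
import random

# Illustrative sketch: load-sharing info modeled as per-PE weights.
def select_pe(pes, weights, rng=random.random):
    """Pick one PE with probability proportional to its weight."""
    r = rng() * sum(weights)
    for pe, weight in zip(pes, weights):
        r -= weight
        if r < 0:
            return pe
    return pes[-1]  # guard against floating-point rounding
```

For example, with weights [1, 1, 2] the third PE would on average
receive half of the new signaling associations.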

RSerPool will provide service robustness to the system should a
failure occur.

For instance, suppose PE(1,B) in the Gatekeeper Pool crashed after
receiving the call control message from PE(2,C) in step (4) above.
Most likely, due to the absence of a reply from the Gatekeeper, a
timer expiration event will trigger the call state machine within
PE(2,C) to resend the control message. The ASAP layer at PE(2,C)
will then notice the failure of PE(1,B), likely through the endpoint
unreachability detection of the transport protocol beneath ASAP, and
will automatically deliver the re-sent call control message to the
alternate GK pool member PE(1,A). With appropriate intra-pool call
state sharing support, PE(1,A) will be able to correctly handle the
call, reply to PE(2,C), and hence progress the call.

5.2.2.  Collocated GWC and GK Scenario

In this scenario, the GWC and GK services are collocated (e.g., they are
implemented as a single process). In such a case, one can form a pool
that provides both GWC and GK services as shown in figure 6 below.




     ........................................
     .  Gateway Controller/Gatekeeper Pool  .
     .                  +-------+           .
     .                  |PE(3,A)|           .
     .                  +-------+           .
     .           +-------+                  .
     .           |PE(3,C)|<---------------------------+
     .           +-------+                  .         |
     .    +-------+  ^                      .         |
     .    |PE(3,B)|  |                      .         |
     .    +-------+  |                      .         |
     ................|.......................         |
                     |                                |
                     +-------------+                  |
                                   |                  |
                                (c)|               (e)|
                                   v                  v
        +++++++++++++++        *********       *****************
        + ENRP-Server +        * SG(X) *       * Media Gateway *
        +++++++++++++++        *********       *****************
               ^                   ^
               |                   |
               |     <-(a)         |
               +-------------------+
                      (b)->

             Figure 6: Deployment of Collocated GWC and GK.

The same sequence as described in Section 5.2.1 takes place, except
that step (4) now becomes internal to PE(3,C) (again, we assume
PE(3,C) is selected by the SG).

6.  Acknowledgements

The authors would like to thank Bernard Aboba, Matt Holdrege,
Christopher Ross, Werner Vogels and many others for their invaluable
comments and suggestions.

7.  References

[RFC793]    J. B. Postel, "Transmission Control Protocol", RFC 793,
            September 1981.

[RFC959]    J. B. Postel, J. Reynolds, "File Transfer Protocol (FTP)",
            RFC 959, October 1985.

[RFC2026]   S. Bradner, "The Internet Standards Process -- Revision 3",
            RFC 2026, October 1996.





[RFC2608]   E. Guttman et al., "Service Location Protocol, Version 2",
            RFC 2608, June 1999.

[RFC2719]   L. Ong et al., "Framework Architecture for Signaling
            Transport", RFC 2719, October 1999.

[RFC2960]   R. R. Stewart et al., "Stream Control Transmission
            Protocol", RFC 2960, November 2000.

8.  Authors' Addresses

Michael Tuexen                Tel.:   +49 89 722 47210
Siemens AG                    e-mail: Michael.Tuexen@icn.siemens.de
ICN WN CS SE 51
D-81359 Munich
Germany


Qiaobing Xie                  Tel.:   +1 847 632 3028
Motorola, Inc.                e-mail: qxie1@email.mot.com
1501 W. Shure Drive, #2309
Arlington Heights, Il 60004
USA


Randall Stewart               Tel.:   +1 815 477 2127
Cisco Systems, Inc.           e-mail: rrs@cisco.com
24 Burning Bush Trail
Crystal Lake, Il 60012
USA


Eliot Lear                    Tel.:   +1 408 527 4020
Cisco Systems, Inc.           e-mail: lear@cisco.com
170 W. Tasman Dr.
San Jose, CA 95134
USA


Melinda Shore                 Tel.:   +1 607 272 7512
Cisco Systems, Inc.           e-mail: mshore@cisco.com
809 Hayts Rd
Ithaca, NY 14850
USA









Lyndon Ong                    Tel.:   +1 408 321 8237
Point Reyes Networks          e-mail: long@pointreyesnet.com
1991 Concourse Drive
San Jose, CA
USA


John Loughney                 Tel.:
Nokia Research Center         e-mail: john.loughney@nokia.com
PO Box 407
FIN-00045 Nokia Group
Finland


Maureen Stillman              Tel.:   +1 607 273 0724 62
Nokia                         e-mail: maureen.stillman@nokia.com
127 W. State Street
Ithaca, NY 14850
USA





              This Internet Draft expires August 27, 2001.
