
 Internet Engineering Task Force                    Saravanan Shanmugham
 Internet-Draft                                       Cisco Systems Inc.
 draft-ietf-speechsc-mrcpv2-05                          October 18, 2004
 Expires: April 18, 2005
 
 
 
 
 
 
              Media Resource Control Protocol Version 2 (MRCPv2)
 
 
 Status of this Memo
 
    By submitting this Internet-Draft, we certify that any applicable
    patent or other IPR claims of which we are aware have been
    disclosed, and any of which we become aware will be disclosed, in
    accordance with RFC 3668.
 
    Internet-Drafts are working documents of the Internet Engineering
    Task Force (IETF), its areas, and its working groups.  Note that
    other groups may also distribute working documents as Internet-
    Drafts.
 
    Internet-Drafts are draft documents valid for a maximum of six
    months and may be updated, replaced, or obsoleted by other documents
    at any time.  It is inappropriate to use Internet-Drafts as
    reference material or to cite them other than as "work in progress".
 
    The list of current Internet-Drafts can be accessed at
    http://www.ietf.org/ietf/1id-abstracts.txt .
 
    The list of Internet-Draft Shadow Directories can be accessed at
    http://www.ietf.org/shadow.html .
 
    This Internet-Draft will expire on April 18, 2005.
 
 
 Copyright Notice
 
    Copyright (C) The Internet Society (2004).  All Rights Reserved.
 
 
 Abstract
 
    This document describes a proposal for a Media Resource Control
    Protocol Version 2 (MRCPv2) and aims to meet the requirements
    specified in the SPEECHSC working group requirements document. It is
    based on the Media Resource Control Protocol (MRCP), also called
 
 
 S. Shanmugham, et. al.                                          Page 1
 
                            MRCPv2 Protocol              October, 2004
 
    MRCPv1, developed jointly by Cisco Systems, Inc., Nuance
    Communications, and Speechworks Inc.
 
    The MRCPv2 protocol controls media service resources such as speech
    synthesizers, recognizers, signal generators, signal detectors, and
    fax servers over a network. The protocol depends on a session
    management protocol such as the Session Initiation Protocol (SIP) to
    establish a separate MRCPv2 control session between the client and
    the server. It also depends on SIP to establish the media pipe and
    associated parameters between the media source or sink and the media
    server. Once this is done, the MRCPv2 protocol exchange can take
    place over the control session established above, allowing the
    client to command and control the media processing resources on the
    media server.
 
 
 Table of Contents
 
      Status of this Memo..............................................1
      Copyright Notice.................................................1
      Abstract.........................................................1
      Table of Contents................................................2
      1.   Introduction:...............................................4
      2.   Notational Convention.......................................5
      3.   Architecture:...............................................5
      3.1.  MRCPv2 Media Resources:....................................7
      3.2.  Server and Resource Addressing.............................8
      4.   MRCPv2 Protocol Basics......................................8
      4.1.  Connecting to the Server...................................8
      4.2.  Managing Resource Control Channels.........................8
      4.3.  Media Streams and RTP Ports...............................15
      4.4.  MRCPv2 Message Transport..................................16
      4.5.  Resource Types............................................17
      5.   MRCPv2 Specification.......................................17
      5.1.  Request...................................................18
      5.2.  Response..................................................19
      5.3.  Event.....................................................20
      6.   MRCP Generic Features......................................21
      6.1.  Generic Message Headers...................................21
      6.2.  SET-PARAMS................................................30
      6.3.  GET-PARAMS................................................30
      7.   Resource Discovery.........................................31
      8.   Speech Synthesizer Resource................................32
      8.1.  Synthesizer State Machine.................................33
      8.2.  Synthesizer Methods.......................................33
      8.3.  Synthesizer Events........................................34
      8.4.  Synthesizer Header Fields.................................34
      8.5.  Synthesizer Message Body..................................40
      8.6.  SPEAK.....................................................43
      8.7.  STOP......................................................44
     8.8.  BARGE-IN-OCCURRED.........................................45
      8.9.  PAUSE.....................................................47
      8.10. RESUME....................................................48
      8.11. CONTROL...................................................49
      8.12. SPEAK-COMPLETE............................................50
      8.13. SPEECH-MARKER.............................................51
      8.14. DEFINE-LEXICON............................................52
      9.   Speech Recognizer Resource.................................53
      9.1.  Recognizer State Machine..................................54
      9.2.  Recognizer Methods........................................54
      9.3.  Recognizer Events.........................................55
      9.4.  Recognizer Header Fields..................................55
      9.5.  Recognizer Message Body...................................69
      9.6.  DEFINE-GRAMMAR............................................83
      9.7.  RECOGNIZE.................................................87
      9.8.  STOP......................................................89
      9.9.  GET-RESULT................................................90
      9.10. START-OF-SPEECH...........................................91
      9.11. START-INPUT-TIMERS........................................92
      9.12. RECOGNITION-COMPLETE......................................92
      9.13. START-PHRASE-ENROLLMENT...................................94
      9.14. ENROLLMENT-ROLLBACK.......................................95
      9.15. END-PHRASE-ENROLLMENT.....................................96
      9.16. MODIFY-PHRASE.............................................96
      9.17. DELETE-PHRASE.............................................97
      9.18. INTERPRET.................................................97
      9.19. INTERPRETATION-COMPLETE...................................98
      9.20. DTMF Detection...........................................100
      10.  Recorder Resource.........................................100
      10.1. Recorder State Machine...................................100
      10.2. Recorder Methods.........................................100
      10.3. Recorder Events..........................................100
      10.4. Recorder Header Fields...................................101
      10.5. Recorder Message Body....................................105
      10.6. RECORD...................................................105
      10.7. STOP.....................................................106
      10.8. RECORD-COMPLETE..........................................107
      10.9. START-INPUT-TIMERS.......................................107
      11.  Speaker Verification and Identification...................109
      11.1. Speaker Verification State Machine.......................110
      11.2. Speaker Verification Methods.............................110
      11.3. Verification Events......................................111
      11.4. Verification Header Fields...............................111
      11.5. Verification Result Elements.............................119
      11.6. START-SESSION............................................123
      11.7. END-SESSION..............................................124
      11.8. QUERY-VOICEPRINT.........................................124
      11.9. DELETE-VOICEPRINT........................................125
      11.10. VERIFY..................................................126
      11.11. VERIFY-FROM-BUFFER......................................126
      11.12. VERIFY-ROLLBACK.........................................129
     11.13. STOP....................................................130
      11.14. START-INPUT-TIMERS......................................131
      11.15. VERIFICATION-COMPLETE...................................131
      11.16. START-OF-SPEECH.........................................132
      11.17. CLEAR-BUFFER............................................132
      11.18. GET-INTERMEDIATE-RESULT.................................132
      12.  Security Considerations...................................133
      13.  Examples:.................................................133
      14.  Reference Documents.......................................145
      15.  Appendix..................................................146
      15.1. ABNF Message Definitions.................................146
      15.2. XML Schema and DTD.......................................161
      Full Copyright Statement.......................................168
      Intellectual Property..........................................169
      Contributors...................................................169
      Acknowledgements...............................................170
      Editors' Addresses.............................................170
 
 
 1.   Introduction
 
    The MRCPv2 protocol is designed to allow a client device to control
    media processing resources on the network that operate on audio and
    video streams. Examples of such media processing resources are
    speech recognition engines, speech synthesis engines, and speaker
    verification or speaker identification engines. This allows a
    vendor to implement distributed Interactive Voice Response
    platforms such as VoiceXML [7] browsers.
 
       The protocol requirements of SPEECHSC require that the protocol
    be capable of reaching a media processing server and setting up
    communication channels to the media resources, to send/receive
    control messages and media streams to/from the server. The Session
    Initiation Protocol (SIP) described in [4] meets these requirements
    and is used to set up and tear down media and control pipes to the
    server. In addition, the SIP re-INVITE can be used to change the
    characteristics of these media and control pipes mid-session. The
    MRCPv2 protocol is hence designed to leverage and build upon a
    session management protocol such as SIP, together with the Session
    Description Protocol (SDP). SDP is used to describe the parameters
    of the media pipe associated with the session. Support for SIP as
    the session-level protocol is mandatory, to ensure
    interoperability. Other protocols can be used at the session level
    by prior agreement.
 
       The MRCPv2 protocol depends on SIP and SDP to create the session
    and to set up the media channels to the server. It also depends on
    SIP and SDP to establish an MRCPv2 control channel between the
    client and the server for each media processing resource required
    for that session. The MRCPv2 protocol exchange between the client
    and the media resource can then take place on that control channel.
    The MRCPv2 protocol exchange on this control channel does not
    change the state of the SIP session, the media, or other parameters
    of the SIP-initiated session. It merely controls and affects the
    state of the media processing resource associated with that MRCPv2
    channel.
 
       The MRCPv2 protocol defines the messages to control the different
    media processing resources and the state machines required to guide
    their operation. It also describes how these messages are carried
    over a transport layer such as TCP, SCTP or TLS.
 
 
 2.   Notational Convention
 
    The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
    "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY" and "OPTIONAL" in this
    document are to be interpreted as described in RFC 2119[9].
 
    Since many of the definitions and syntax are identical to HTTP/1.1,
    this specification only points to the section where they are defined
    rather than copying it. For brevity, [HX.Y] is to be taken to refer
    to Section X.Y of the current HTTP/1.1 specification (RFC 2616 [1]).
 
    All the mechanisms specified in this document are described in both
    prose and an augmented Backus-Naur Form (ABNF), which is described
    in detail in RFC 2234 [3].

    The complete message format in ABNF form is provided in Appendix
    Section 15.1 and is the normative format definition.
 
    Media Resource
         An entity on the MRCP Server that can be controlled through
         the MRCP protocol.

    MRCP Server
         Aggregate of one or more "Media Resource" entities on a
         Server, exposed through the MRCP protocol. ("Server" for
         short)
 
    MRCP Client
         An entity controlling one or more Media Resources through the
         MRCP protocol. ("Client" for short)
 
 
 
 3.   Architecture
 
    The system consists of a client that requires the generation or
    processing of media streams, and a media resource server that has
    the resources or engines to generate or process these streams. The
    client establishes a session with the server, using SIP and SDP, in
    order to use the server's media processing resources. A SIP URI
    refers to the MRCPv2 server.
 
    The session management protocol (SIP) uses SDP with the
    offer/answer model described in RFC 3264 to describe and set up the
    MRCPv2 control channels. Separate MRCPv2 control channels are
    needed to control the different media processing resources
    associated with the session. Within a SIP session, the individual
    resource control channels for the different resources are added or
    removed through the SDP offer/answer model and the SIP re-INVITE
    dialog.
 
    The server, through the SDP exchange, provides the client with a
    unique channel identifier and a port number (TCP or SCTP). The
    client MAY then open a new TCP connection with the server on this
    port. Multiple MRCPv2 channels can share a TCP connection between
    the client and the server. All MRCPv2 messages exchanged between
    the client and the server carry the specified channel identifier,
    which MUST be unique among all MRCPv2 control channels that are
    active on that server. The client uses this channel to control the
    media processing resource associated with it.
 
    The session management protocol (SIP) also establishes media pipes
    between the client (or the source/sink of media) and the MRCP
    server, using SDP m-lines. A media pipe may be shared by multiple
    media processing resources under that SIP session, or each media
    processing resource may have its own media pipe.
 
         MRCPv2 client                  MRCPv2 Media Resource Server
      |--------------------|             |-----------------------------|
      ||------------------||             ||---------------------------||
      || Application Layer||             || TTS  | ASR  | SV   | SI   ||
      ||------------------||             ||Engine|Engine|Engine|Engine||
      ||Media Resource API||             ||---------------------------||
      ||------------------||             || Media Resource Management ||
      || SIP  |  MRCPv2   ||             ||---------------------------||
      ||Stack |           ||             ||   SIP  |    MRCPv2        ||
      ||      |           ||             ||  Stack |                  ||
      ||------------------||             ||---------------------------||
      ||   TCP/IP Stack   ||----MRCPv2---||       TCP/IP Stack        ||
      ||                  ||             ||                           ||
      ||------------------||-----SIP-----||---------------------------||
      |--------------------|             |-----------------------------|
               |                             /
              SIP                           /
               |                           /
      |-------------------|              RTP
      |                   |              /
      | Media Source/Sink |-------------/
      |                   |
      |-------------------|
 
                     Fig 1: Architectural Diagram
 
 
 3.1. MRCPv2 Media Resources
 
    The MRCP server may offer one or more of the following media
    processing resources to its clients.
 
    Basic Synthesizer
 
    A speech synthesizer resource with very limited capabilities that
    can be implemented by playing out concatenated audio clips. The
    speech data is described as SSML, but with limited support for its
    elements. It MUST support the <speak>, <audio>, <say-as> and <mark>
    tags in SSML.
 
 
    Speech Synthesizer
 
    A full-capability speech synthesizer for rendering regular speech;
    it SHOULD have full SSML support.
 
 
    Recorder
 
    A resource capable of recording audio and saving it to a URI. It
    also has some end-pointing capabilities for detecting the beginning
    of speech and silence at the end of the recording.
 
 
    DTMF Recognizer
 
    A recognizer limited to DTMF digits, which it recognizes in the
    input stream and matches against a supplied digit grammar. It may
    also perform semantic interpretation based on semantic tags in the
    grammar.
 
 
    Speech Recognizer
 
    A full speech recognizer capable of receiving audio and
    interpreting it into recognition results. It also has a natural-
    language semantic interpreter to post-process the recognized data
    according to the semantic data in the grammar and to provide
    semantic results along with the recognized input. The recognizer
    may also support enrolled grammars, with which the client can
    enroll and create new personal grammars for use in future
    recognition operations.
 
 
    Speaker Verification
 
    A resource capable of verifying the authenticity of a person by
    matching their voice against a saved voice-print. This may also
    involve matching the caller's voice against more than one voice-
    print, which is also called multi-verification or speaker
    identification.
 
 3.2. Server and Resource Addressing
 
    The MRCPv2 server as a whole is a generic SIP server and is
    addressed by a specific SIP URL registered by the server.
 
    Example:
 
      sip:mrcpv2@mediaserver.com
 
 
 4.   MRCPv2 Protocol Basics
 
    MRCPv2 requires a connection-oriented transport-layer protocol such
    as TCP or SCTP to guarantee reliable sequencing and delivery of
    MRCPv2 control messages between the client and the server. If
    security is needed, a TLS connection is used to carry the MRCPv2
    messages. One or more TCP, SCTP or TLS connections between the
    client and the server can be shared between different MRCPv2
    channels to the server; the individual messages carry a channel
    identifier to differentiate messages on different channels. The
    message format for MRCPv2 is text based, with mechanisms to carry
    embedded binary data. This allows data such as recognition
    grammars, recognition results and synthesizer speech markup to be
    carried in MRCPv2 messages between the client and the server
    resource. The protocol does not address session and media
    establishment and management; it relies on SIP and SDP for this.
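
    Because several MRCPv2 channels can share one transport connection,
    a receiver has to route each incoming message by the channel
    identifier it carries. The following is an illustrative sketch
    only, not part of the specification; the header layout shown and
    the function name are hypothetical.

```python
# Sketch: dispatch MRCPv2 messages arriving on a shared transport
# connection to per-channel handlers, keyed by the channel identifier
# every message carries.  (Message layout here is an assumption.)

def route_message(message: str, handlers: dict) -> str:
    """Find the Channel-Identifier header and dispatch the message."""
    for line in message.splitlines():
        name, sep, value = line.partition(":")
        if sep and name.strip().lower() == "channel-identifier":
            channel_id = value.strip()
            handlers[channel_id](message)   # per-channel callback
            return channel_id
    raise ValueError("MRCPv2 message carries no Channel-Identifier")
```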
 
 4.1. Connecting to the Server
 
    The MRCPv2 protocol depends on a session establishment and
    management protocol such as SIP, in conjunction with SDP. The
    client finds and reaches an MRCPv2 server across the SIP network
    using the INVITE and other SIP dialog exchanges. The SDP
    offer/answer exchange model over SIP is used to establish a
    resource control channel for each resource. The SDP offer/answer
    exchange is also used to establish media pipes between the source
    or sink of audio and the server.
 
 
 4.2. Managing Resource Control Channels
 
    The client needs a separate MRCPv2 resource control channel to
    control each media processing resource under the SIP session. These
    resource control channels are identified by a unique channel
    identifier string, which consists of a hexadecimal number
    specifying the channel ID, followed by a string token specifying
    the type of resource, separated by an "@". The server generates the
    hexadecimal channel ID and MUST make sure it does not clash with
    any other MRCP channel allocated on that server. MRCPv2 defines the
    following types of media processing resources. Additional resource
    types, their associated methods/events and state machines can be
    added by future specifications proposing to extend the capabilities
    of MRCPv2.
 
           Resource Type       Resource Description
            speechrecog         Speech Recognition
            dtmfrecog           DTMF Recognition
            speechsynth         Speech Synthesis
            basicsynth          Basic Speech Synthesizer
            speakverify         Speaker Verification
            recorder            Speech Recording
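
    The channel identifier allocation described above can be sketched
    as follows. This is illustrative only and not normative; the
    function name and the 6-byte length of the hexadecimal ID are
    arbitrary choices for the example.

```python
import secrets

# Sketch: server-side allocation of a channel identifier of the form
# <hex-channel-id>@<resource-type>, avoiding clashes with identifiers
# already active on this server.

RESOURCE_TYPES = {"speechrecog", "dtmfrecog", "speechsynth",
                  "basicsynth", "speakverify", "recorder"}

def allocate_channel_id(resource_type: str, active: set) -> str:
    if resource_type not in RESOURCE_TYPES:
        raise ValueError("unknown resource type: " + resource_type)
    while True:
        candidate = secrets.token_hex(6).upper() + "@" + resource_type
        if candidate not in active:     # MUST NOT clash on this server
            active.add(candidate)
            return candidate
```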
 
    The SIP INVITE or re-INVITE dialog exchange, and the SDP
    offer/answer exchange it carries, contains m-lines describing the
    resource control channels the client wants to allocate. There MUST
    be one SDP m-line for each MRCPv2 resource that needs to be
    controlled. This m-line has a media type field of "control" and a
    transport type field of "TCP", "SCTP" or "TCP/TLS". The port number
    field of the m-line MUST contain the discard port of the transport
    protocol (port 9 for TCP) in the SDP offer from the client, and
    MUST contain the TCP listen port on the server in the SDP answer.
    The client may then set up a TCP or TLS connection to that server
    port or share an already established connection to that port. The
    format field of the m-line MUST contain "application/mrcpv2". The
    client MUST specify the resource type identifier in the "resource"
    attribute associated with the control m-line of the SDP offer. The
    server MUST respond with the full Channel-Identifier (which
    includes the resource type identifier and a unique hexadecimal
    identifier) in the "channel" attribute associated with the control
    m-line of the SDP answer.
 
    All servers MUST support TLS, SHOULD support TCP, and MAY support
    SCTP; it is up to the client to choose which mode of transport it
    wants to use for an MRCPv2 session. When using TCP, SCTP or TLS,
    the m-lines MUST conform to IETF draft [20], which describes the
    usage of SDP for connection-oriented transport. When using TLS, the
    SDP m-line for the control pipe MUST additionally conform to IETF
    draft [21], which specifies the usage of SDP for establishing a
    secure connection-oriented transport over TLS.
 
    When the client wants to add a media processing resource to the
    session, it MUST initiate a re-INVITE dialog. The SDP offer/answer
    exchange contained in this SIP dialog carries an additional control
    m-line for the new resource to be allocated. The server, on seeing
    the new m-line, allocates the resource and responds with a
    corresponding control m-line in the SDP answer.
 
    The a=setup attribute, as described in [20], MUST be "active" for
    the offer from the client and MUST be "passive" for the answer from
    the MRCP server. The a=connection attribute MUST have a value of
    "new" on the very first control m-line offer from the client to an
    MRCP server. Subsequent control m-line offers from the client to
    the MRCP server MAY contain "new" or "existing", depending on
    whether the client wants to set up a new connection-oriented pipe
    or share an existing one. The value of "existing" tells the server
    that the client wants to reuse an existing transport connection
    between the client and the server. The server can respond with a
    value of "existing" if it wants to allow sharing of existing pipes,
    or can reply with a value of "new", in which case the client MUST
    initiate a new connection-oriented pipe.
 
    Note: Only SDP m-lines having the common SDP format field of
    "application/mrcpv2" can share connection-oriented pipes between
    them. Such a pipe is reserved exclusively for MRCPv2 communication
    and cannot be shared with any other protocol.
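
    The server's choice of a=connection value in the answer can be
    summarized as a small decision function. This is an illustrative
    sketch under the rules above, not normative text; the function name
    and the "allow_sharing" policy flag are hypothetical, and answering
    "new" to a "new" offer is an assumption of this example.

```python
# Sketch: server-side choice of the a=connection attribute value for
# the SDP answer, given the value offered by the client.

def answer_connection(offer: str, allow_sharing: bool) -> str:
    if offer == "new":
        # The client is asking for a fresh pipe; set one up.
        return "new"
    if offer == "existing":
        # The client asks to reuse a pipe; the server may accept, or
        # force a new connection-oriented pipe by answering "new".
        return "existing" if allow_sharing else "new"
    raise ValueError("unexpected a=connection value: " + offer)
```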
 
    When the client wants to de-allocate a resource from the session,
    it MUST initiate a SIP re-INVITE dialog with the server and MUST
    offer the control m-line with a port of 0. The server MUST then
    answer the control m-line with a port of 0 in its response. This
    de-allocates the associated MRCP channel identifier and resource,
    but may not close the TCP, SCTP or TLS connection if it is
    currently shared among multiple MRCP channels. When all MRCP
    channels sharing the connection have been released and the
    associated SIP sessions are closed, the client or server
    disconnects the shared connection-oriented pipe.
 
    Example 1:
    This exchange adds a resource control channel for a synthesizer.
    Since a synthesizer generates an audio stream, this interaction
    also creates a receive-only audio stream on which the server sends
    audio to the client.
 
    C->S:
           INVITE sip:mresources@mediaserver.com SIP/2.0
           Via: SIP/2.0/TCP client.atlanta.example.com:5060;
                branch=z9hG4bK74bf9
           Max-Forwards: 6
           To: MediaServer <sip:mresources@mediaserver.com>
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314161 INVITE
           Contact: <sip:sarvi@cisco.com>
           Content-Type: application/sdp
           Content-Length: ...
 
           v=0
           o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4
           s=-
           c=IN IP4 224.2.17.12
           m=control 9 TCP application/mrcpv2
           a=setup:active
           a=connection:new
           a=resource:speechsynth
           a=cmid:1
           m=audio 49170 RTP/AVP 0 96
           a=rtpmap:0 pcmu/8000
           a=recvonly
           a=mid:1
 
    S->C:
           SIP/2.0 200 OK
           Via: SIP/2.0/TCP client.atlanta.example.com:5060;
                branch=z9hG4bK74bf9
           To: MediaServer <sip:mresources@mediaserver.com>
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314161 INVITE
           Contact: <sip:sarvi@cisco.com>
           Content-Type: application/sdp
           Content-Length: ...
 
           v=0
           o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4
           s=-
           c=IN IP4 224.2.17.12
           m=control 32416 TCP application/mrcpv2
           a=setup:passive
           a=connection:new
           a=channel:32AECB234338@speechsynth
           a=cmid:1
           m=audio 48260 RTP/AVP 0 96
           a=rtpmap:0 pcmu/8000
           a=sendonly
           a=mid:1
 
    C->S:
           ACK sip:mresources@mediaserver.com SIP/2.0
           Via: SIP/2.0/TCP client.atlanta.example.com:5060;
                branch=z9hG4bK74bf9
           Max-Forwards: 6
           To: MediaServer <sip:mresources@mediaserver.com>;tag=a6c85cf
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314162 ACK
           Content-Length: 0
 
    Example 2:
    This exchange continues from Example 1 and allocates an additional
    resource control channel for a recognizer. Since a recognizer needs
    to receive an audio stream for recognition, this interaction also
    updates the audio stream to sendrecv, making it a two-way audio
    stream.
 
    C->S:
           INVITE sip:mresources@mediaserver.com SIP/2.0
           Via: SIP/2.0/TCP client.atlanta.example.com:5060;
                branch=z9hG4bK74bf9
           Max-Forwards: 6
           To: MediaServer <sip:mresources@mediaserver.com>
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314163 INVITE
           Contact: <sip:sarvi@cisco.com>
           Content-Type: application/sdp
           Content-Length: ...
 
           v=0
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
           s=-
           c=IN IP4 224.2.17.12
           m=control 9 TCP application/mrcpv2
           a=setup:active
           a=connection:existing
           a=resource:speechrecog
           a=cmid:1
           m=control 9 TCP application/mrcpv2
           a=setup:active
           a=connection:existing
           a=resource:speechsynth
           a=cmid:1
           m=audio 49170 RTP/AVP 0 96
           a=rtpmap:0 pcmu/8000
           a=rtpmap:96 telephone-event/8000
           a=fmtp:96 0-15
           a=sendrecv
           a=mid:1
 
    S->C:
           SIP/2.0 200 OK
           Via: SIP/2.0/TCP client.atlanta.example.com:5060;
                branch=z9hG4bK74bf9
           To: MediaServer <sip:mresources@mediaserver.com>
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314163 INVITE
           Contact: <sip:sarvi@cisco.com>
           Content-Type: application/sdp
           Content-Length: 131
 
           v=0
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
           s=-
           c=IN IP4 224.2.17.12
           m=control 32416 TCP application/mrcpv2
           a=setup:passive
           a=connection:existing
           a=channel:32AECB234338@speechrecog
           a=cmid:1
           m=control 32416 TCP application/mrcpv2
           a=setup:passive
           a=connection:existing
           a=channel:32AECB234339@speechsynth
           a=cmid:1
           m=audio 48260 RTP/AVP 0 96
           a=rtpmap:0 pcmu/8000
           a=rtpmap:96 telephone-event/8000
           a=fmtp:96 0-15
           a=sendrecv
           a=mid:1
 
    C->S:
           ACK sip:mresources@mediaserver.com SIP/2.0
           Via: SIP/2.0/TCP client.atlanta.example.com:5060;
                branch=z9hG4bK74bf9
           Max-Forwards: 6
           To: MediaServer <sip:mresources@mediaserver.com>;tag=a6c85cf
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314164 ACK
           Content-Length: 0
 
 
    Example 3:
    This exchange continues from Example 2 and de-allocates the
    recognizer channel. Since the recognizer no longer needs to receive
    an audio stream, this interaction also updates the audio stream to
    recvonly.
 
    C->S:
           INVITE sip:mresources@mediaserver.com SIP/2.0
           Via: SIP/2.0/TCP client.atlanta.example.com:5060;
                branch=z9hG4bK74bf9
           Max-Forwards: 6
           To: MediaServer <sip:mresources@mediaserver.com>
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314163 INVITE
           Contact: <sip:sarvi@cisco.com>
           Content-Type: application/sdp
 
           Content-Length: ...
 
           v=0
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
           s=-
           c=IN IP4 224.2.17.12
           m=control 0 TCP application/mrcpv2
           a=resource:speechrecog
           a=cmid:1
           m=control 9 TCP application/mrcpv2
           a=resource:speechsynth
           a=cmid:1
           m=audio 49170 RTP/AVP 0 96
           a=rtpmap:0 pcmu/8000
           a=recvonly
           a=mid:1
 
    S->C:
           SIP/2.0 200 OK
           Via: SIP/2.0/TCP client.atlanta.example.com:5060;
                branch=z9hG4bK74bf9
           To: MediaServer <sip:mresources@mediaserver.com>
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314163 INVITE
           Contact: <sip:sarvi@cisco.com>
           Content-Type: application/sdp
           Content-Length: 131
 
           v=0
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
           s=-
           c=IN IP4 224.2.17.12
           m=control 0 TCP application/mrcpv2
           a=channel:32AECB234338@speechrecog
           a=cmid:1
           m=control 32416 TCP application/mrcpv2
           a=channel:32AECB234339@speechsynth
           a=cmid:1
           m=audio 48260 RTP/AVP 0 96
           a=rtpmap:0 pcmu/8000
           a=sendonly
           a=mid:1
 
    C->S:
           ACK sip:mresources@mediaserver.com SIP/2.0
           Via: SIP/2.0/TCP client.atlanta.example.com:5060;
                branch=z9hG4bK74bf9
           Max-Forwards: 6
           To: MediaServer <sip:mresources@mediaserver.com>;tag=a6c85cf
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774
 
           Call-ID: a84b4c76e66710
           CSeq: 314164 ACK
           Content-Length: 0
 
 4.3. Media Streams and RTP Ports
 
    The client or the server would need to add audio (or other media)
    pipes between the client and the server and associate them with the
    resource that will process or generate the media. One or more
    resources can be associated with a single media channel, or each
    resource can be assigned a separate media channel. For example, a
    synthesizer and a recognizer could be associated with the same media
    pipe (m=audio line) if it is opened in "sendrecv" mode.
    Alternatively, the recognizer could have its own "sendonly" audio
    pipe and the synthesizer could have its own "recvonly" audio pipe.
 
    The association between control channels and their corresponding
    media channels is established through the "mid" attribute defined in
    RFC 3388 [20]. If there is more than one audio m-line, each audio
    m-line MUST have a "mid" attribute. Each control m-line MUST have a
    "cmid" attribute that matches the "mid" attribute of the audio
    m-line it is associated with.
 
      cmid-attribute      =    "a=cmid:" identification-tag
 
      identification-tag = token
 
    A single audio m-line can be associated with multiple resources or
    each resource can have its own audio m-line. For example, if the
    client wants to allocate a recognizer and a synthesizer and
    associate them to a single 2-way audio pipe, the SDP offer should
    contain two control m-lines and a single audio m-line with an
    attribute of "sendrecv". Each of the control m-lines should have a
    "cmid" attribute whose value matches the "mid" of the audio m-line.
    If the client wants to allocate a recognizer and a synthesizer each
    with its own separate audio pipe, the SDP offer would carry two
    control m-lines (one for the recognizer and another for the
    synthesizer) and two audio m-lines (one with the attribute
    "sendonly" and another with attribute "recvonly"). The "cmid"
    attribute of the recognizer control m-line would match the "mid"
    value of the "sendonly" audio m-line and the "cmid" attribute of the
    synthesizer control m-line would match the "mid" attribute of the
    "recvonly" m-line.
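
    The cmid/mid pairing above can be sketched in code. The following
    is an illustrative sketch only (the helper names and the minimal
    SDP handling are invented here, not taken from any real SDP or MRCP
    library): it splits an SDP body into per-m-line sections and maps
    each control m-line's resource to the audio m-line whose "mid"
    matches its "cmid".

```python
# Sketch: associate MRCPv2 control m-lines with audio m-lines via
# cmid/mid. Deliberately minimal; real SDP parsing is more involved.

def parse_media_sections(sdp: str):
    """Split an SDP body into per-m-line dicts of attributes."""
    sections = []
    for line in sdp.splitlines():
        if line.startswith("m="):
            sections.append({"m": line, "attrs": {}})
        elif line.startswith("a=") and sections:
            name, _, value = line[2:].partition(":")
            sections[-1]["attrs"][name] = value
    return sections

def associate_control_to_audio(sdp: str):
    """Map each control m-line's resource to its associated audio m-line."""
    sections = parse_media_sections(sdp)
    audio_by_mid = {s["attrs"]["mid"]: s for s in sections
                    if s["m"].startswith("m=audio") and "mid" in s["attrs"]}
    pairs = {}
    for s in sections:
        if s["m"].startswith("m=control") and "cmid" in s["attrs"]:
            resource = s["attrs"].get("resource") or s["attrs"].get("channel", "")
            pairs[resource] = audio_by_mid.get(s["attrs"]["cmid"])
    return pairs
```

    Applied to the offer in Example 2, both the speechrecog and
    speechsynth control m-lines would resolve to the single sendrecv
    audio m-line with mid 1.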
 
    When a server receives media (say, audio) on a media pipe that is
    associated with more than one media processing resource, it is the
    responsibility of the server to receive it and fork it to the
    resources that need it. If multiple resources in a session are
    generating audio (or other media) that needs to be sent on a single
    associated media pipe, it is the responsibility of the server to mix
    the streams before sending them on the media pipe. The media stream
    in either
    direction may contain more than one synchronization source (SSRC)
    identifier, due to multiple sources contributing to the media on the
    pipe, and the client or server SHOULD be able to deal with this.
 
    If a server does not have the capability to mix or fork media in the
    above cases, then the server SHOULD disallow the client from
    associating multiple such resources with a single audio pipe by
    rejecting the SIP INVITE with a SIP 501 "Not Implemented" error.
 
 4.4. MRCPv2 Message Transport
 
    The MRCPv2 resource messages defined in this document are
    transported over a TCP, SCTP, or TLS pipe between the client and the
    server. The setup of this transport pipe and the resource control
    channel is discussed in Section 3.2. Multiple resource control
    channels between a client and a server that belong to different SIP
    sessions can share one or more TLS, TCP, or SCTP pipes, and the
    server and client MUST support this operation. Individual MRCPv2
    messages carry the MRCPv2 channel identifier in their Channel-
    Identifier header field, which MUST be used to differentiate MRCPv2
    messages from different resource channels. All MRCPv2 servers MUST
    support TLS, SHOULD support TCP, and MAY support SCTP; it is up to
    the client to choose which mode of transport it wants to use for an
    MRCPv2 session.
 
    Example 1:
 
    C->S:  MRCP/2.0 483 SPEAK 543257
           Channel-Identifier: 32AECB23433802@speechsynth
           Voice-gender: neutral
           Voice-category: teenager
           Prosody-volume: medium
           Content-Type: application/synthesis+ssml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <speak>
            <paragraph>
              <sentence>You have 4 new messages.</sentence>
              <sentence>The first is from <say-as
              type="name">Stephanie Williams</say-as>
              and arrived at <break/>
              <say-as type="time">3:45pm</say-as>.</sentence>
 
              <sentence>The subject is <prosody
              rate="-20%">ski trip</prosody></sentence>
            </paragraph>
           </speak>
 
    S->C:  MRCP/2.0 81 543257 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
 
    S->C:  MRCP/2.0 89 SPEAK-COMPLETE 543257 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
 
    Most examples from here on show only the MRCPv2 messages and do not
    show the SIP messages and headers that may have been used to
    establish the MRCPv2 control channel.
 
 
 5.   MRCPv2 Specification
 
    The MRCPv2 PDU is textual, using the ISO 10646 character set in the
    UTF-8 encoding (RFC 2044) to allow many different languages to be
    represented. However, to assist in compact representations, MRCPv2
    also allows other character sets, such as ISO 8859-1, to be used
    when desired. The MRCPv2 protocol headers (the first line of an MRCP
    message) and field names use only the US-ASCII subset of UTF-8.
    Internationalization only applies to certain fields like grammars,
    results, and speech markup, and not to MRCPv2 as a whole.
 
    Lines are terminated by CRLF. Also, some parameters in the PDU may
    contain binary data or a record spanning multiple lines. Such fields
    have a length value associated with the parameter, which indicates
    the number of octets immediately following the parameter.
 
    All MRCPv2 messages, responses, and events MUST carry the Channel-
    Identifier header field so that the server or client can
    differentiate messages from different control channels that may
    share the same transport connection.
 
    The MRCPv2 message set consists of requests from the client to the
    server, responses from the server to the client and asynchronous
    events from the server to the client. All these messages consist of
    a start-line, one or more header fields (also known as "headers"),
    an empty line (i.e. a line with nothing preceding the CRLF)
    indicating the end of the header fields, and an optional message
    body.
 
      generic-message  =    start-line
                            message-header
                            CRLF
                            [ message-body ]
 
      start-line       =    request-line / response-line / event-line
 
      message-header   =   1*(generic-header / resource-header)
 
      resource-header  =    recognizer-header
                       /    synthesizer-header
                       /    recorder-header
                       /    verifier-header
                       /    extension-header

      extension-header =    1*(ALPHANUM / "-") CRLF
 
    The message-body contains resource-specific and message-specific
    data that needs to be carried between the client and server as a
    MIME entity. The information contained here and the actual MIME-
    types used to carry the data are specified later when addressing the
    specific messages.
 
    If a message contains data in the message body, the header fields
    will contain content-headers indicating the MIME-type and encoding
    of the data in the message body.
 
 5.1. Request
 
    A MRCPv2 request consists of a Request line followed by zero or more
    message headers and an optional message body containing data
    specific to the request message.
 
    The Request message from a client to the server includes within the
    first line, the method to be applied, a method tag for that request
    and the version of protocol in use.
 
      request-line   =    mrcp-version SP message-length SP method-name
                          SP request-id CRLF
 
    The mrcp-version field is the MRCPv2 protocol version that is being
    used by the client. Request, response and event messages include the
    version of MRCP in use, and follow [H3.1] (with HTTP replaced by
    MRCP, and HTTP/1.1 replaced by MRCP/2.0) regarding version ordering,
    compliance requirements, and upgrading of version numbers. To be
    compliant with this specification, applications sending MRCP
    messages MUST include a mrcp-version of "MRCP/2.0".
 
 
      mrcp-version   =    "MRCP" "/" 1*DIGIT "." 1*DIGIT
 
    The message-length field specifies the length of the message in
    octets and MUST be the second token from the beginning of the
    message. This makes the framing and parsing of the message simpler.
 
      message-length =    1*DIGIT
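
    Because the length is always the second token of the start-line, a
    receiver can frame messages off a byte stream before parsing them.
    The sketch below is illustrative only (the helper name and buffer
    handling are invented here, not from a real MRCP implementation):

```python
# Sketch: frame MRCPv2 messages out of a TCP byte stream using the
# message-length token (2nd token of the start-line, in octets).

def frame_messages(buffer: bytes):
    """Return (complete messages, leftover bytes for the next read)."""
    messages = []
    while True:
        # Need at least the start-line to read the length token.
        eol = buffer.find(b"\r\n")
        if eol == -1:
            break
        tokens = buffer[:eol].split(b" ")
        if len(tokens) < 2 or not tokens[1].isdigit():
            raise ValueError("malformed start-line: %r" % buffer[:eol])
        length = int(tokens[1])   # total message length in octets
        if len(buffer) < length:
            break                 # wait for more data
        messages.append(buffer[:length])
        buffer = buffer[length:]
    return messages, buffer
```

    Note that no scanning for a body terminator is needed; the length
    token alone delimits each message, including its body.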
 
    The request-id field is a unique identifier, representable as an
    unsigned 32-bit integer, created by the client and sent to the
    server. The initial value of the request-id is arbitrary.
    Consecutive requests within an MRCP session MUST contain strictly
    monotonically increasing and contiguous request-ids. The server
    resource MUST use this identifier in its response to the request.
    If the request does not complete with the response, future
    asynchronous events associated with the request MUST carry the
    request-id.
 
      request-id    =    1*DIGIT
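
    Since request-ids must be strictly increasing and contiguous, a
    simple per-session counter satisfies the requirement. A minimal
    client-side sketch (the class name is invented here for
    illustration):

```python
# Sketch: client-side request-id allocation. Consecutive requests in a
# session must use strictly increasing, contiguous ids, so a counter
# starting at an arbitrary value suffices.
import itertools

class RequestIdAllocator:
    def __init__(self, start: int = 1):
        self._counter = itertools.count(start)

    def next_id(self) -> int:
        rid = next(self._counter)
        if rid > 0xFFFFFFFF:      # must fit an unsigned 32-bit integer
            raise OverflowError("request-id space exhausted")
        return rid
```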
 
    The method-name field identifies the specific request that the
    client is making to the server. Each resource supports a certain
    list of requests or methods that can be issued to it, and will be
    addressed in later sections.
 
      method-name    =    generic-method      ; Section 6
                     /    synthesizer-method
                     /    recorder-method
                     /    recognizer-method
                     /    verifier-method
                     /    extension-methods
 
      extension-methods = 1*(ALPHA / "-")
 
 5.2. Response
 
    After receiving and interpreting the request message, the server
    resource responds with an MRCPv2 response message. It consists of a
    status line optionally followed by a message body.
 
      response-line  =    mrcp-version SP message-length SP request-id
                     SP status-code SP request-state CRLF
 
    The mrcp-version field used here MUST be the same as the one used in
    the Request Line and specifies the version of MRCPv2 protocol
    running on the server.
 
    The request-id used in the response MUST match the one sent in the
    corresponding request message.
 
    The status-code field is a 3-digit code representing the success or
    failure or other status of the request.
 
    The request-state field indicates whether the job initiated by the
    request is PENDING, IN-PROGRESS, or COMPLETE. The COMPLETE status
    means that the request was processed to completion and that there
    will be no more events from that resource to the client with that
    request-id. The PENDING status means that the job has been placed on
    a queue and will be processed in first-in-first-out order. The
    IN-PROGRESS status means that the request is being processed and is
    not yet complete. A PENDING or IN-PROGRESS status indicates that
    further event messages will be delivered with that request-id.
 
      request-state    =  "COMPLETE"
                       /  "IN-PROGRESS"
                       /  "PENDING"
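
    The response-line grammar above can be exercised with a small
    parser. This is an illustrative sketch (the function name is
    invented here); a real parser would also validate each token
    against its full ABNF:

```python
# Sketch: split an MRCPv2 response-line into its five fields.

VALID_STATES = {"COMPLETE", "IN-PROGRESS", "PENDING"}

def parse_response_line(line: str) -> dict:
    version, length, request_id, status, state = line.strip().split(" ")
    if state not in VALID_STATES:
        raise ValueError("unknown request-state: " + state)
    return {
        "version": version,
        "length": int(length),
        "request_id": int(request_id),
        "status_code": int(status),
        "request_state": state,
    }
```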
 
 Status Codes
 
    The status codes are classified as Success (2xx) codes, Client
    Failure (4xx) codes, and Server Failure (5xx) codes.
 
 Success 2xx
 
       200       Success
       201       Success with some optional headers ignored.
 
 Client Failure 4xx
 
       401       Method not allowed
       402       Method not valid in this state
       403       Unsupported Header
       404       Illegal Value for Header
       405       Not found (e.g. Resource URI not initialized
                 or doesn't exist)
       406       Mandatory Header Missing
       407       Method or Operation Failed(e.g. Grammar compilation
                 failed in the recognizer. Detailed cause codes MAY BE
                 available through a resource specific header field.)
       408       Unrecognized or unsupported message entity
       409       Unsupported Header Value
       421-499   Resource specific Failure codes
 
 Server Failure 5xx
 
       501       Server Internal Error
       502       Protocol Version not supported
       503       Proxy Timeout. The MRCP Proxy did not receive a
                 response from the MRCP server.
       504       Message too large.
 
 
 5.3. Event
 
    The server resource may need to communicate a change in state or the
    occurrence of a certain event to the client. These messages are used
    when a request does not complete immediately and the response
    returns a status of PENDING or IN-PROGRESS. The intermediate results
    and events of the request are indicated to the client through event
    messages from the server. Events carry the request-id of the request
    that is in progress and generating these events, as well as a
    request-state value. The request-state is COMPLETE if the request is
    done and this was the last event; otherwise, it is IN-PROGRESS.
 
      event-line       =  mrcp-version SP message-length SP event-name
                          SP request-id SP request-state CRLF
 
 
 
    The mrcp-version used here is identical to the one used in the
    Request/Response Line and indicates the version of MRCPv2 protocol
    running on the server.
 
    The request-id used in the event MUST match the one sent in the
    request that caused this event.
 
    The request-state indicates whether the request causing this event
    is complete or still in progress; its values are the same as those
    defined in Section 5.2. The final event will contain a COMPLETE
    request-state, indicating the completion of the request.
 
    The event-name identifies the nature of the event generated by the
    media resource. The set of valid event names are dependent on the
    resource generating it, and will be addressed in later sections.
 
      event-name       =  synthesizer-event
                       /  recognizer-event
                       /  recorder-event
                       /  verifier-event
                       /  extension-event
 
      extension-event  =  1*(ALPHA /"-")
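
    One consequence of the three start-line grammars (Sections 5.1
    through 5.3) is that a receiver can classify a start-line by shape
    alone: only a response-line carries the numeric request-id as its
    third token, requests have four tokens, and events have five. This
    heuristic is my inference from the grammar, not a rule stated by
    the specification, and the sketch below is illustrative only:

```python
# Sketch: classify an MRCPv2 start-line by token shape. A response-line's
# third token is the numeric request-id; requests and events carry a
# method/event name there, with four and five tokens respectively.

def classify_start_line(line: str) -> str:
    tokens = line.strip().split(" ")
    if tokens[2].isdigit():
        return "response"
    return "event" if len(tokens) == 5 else "request"
```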
 
 6.   MRCP Generic Features

    The protocol supports a set of methods and header fields that are
    common to all resources; these are discussed in this section.
 
      generic-method      =    "SET-PARAMS"
                          /    "GET-PARAMS"
 
 6.1. Generic Message Headers
 
    MRCPv2 header fields, which include general-header (section 5.5) and
    resource-specific-header (section 7.4 and section 8.4), follow the
    same generic format as that given in Section 3.1 of RFC 822 [8].
    Each header field consists of a name followed by a colon (":") and
    the field value. Field names are case-insensitive. The field value
    MAY be preceded by any amount of LWS, though a single SP is
    preferred. Header fields can be extended over multiple lines by
    preceding each extra line with at least one SP or HT.
 
      message-header = field-name ":" [ field-value ]
      field-name     = token
      field-value    = *LWS field-content *( CRLF 1*LWS field-content)
      field-content  = <the OCTETs making up the field-value
                        and consisting of either *TEXT or combinations
                        of token, separators, and quoted-string>
 
    The field-content does not include any leading or trailing LWS:
    linear white space occurring before the first non-whitespace
    character of the field-value or after the last non-whitespace
    character of the field-value. Such leading or trailing LWS MAY be
    removed without changing the semantics of the field value. Any LWS
    that occurs between field-content MAY be replaced with a single SP
    before interpreting the field value or forwarding the message
    downstream.
 
    The order in which header fields with differing field names are
    received is not significant. However, it is "good practice" to send
    general-header fields first, followed by request-header or response-
    header fields, and ending with the entity-header fields.
 
    Multiple message-header fields with the same field-name MAY be
    present in a message if and only if the entire field-value for that
    header field is defined as a comma-separated list [i.e., #(values)].
 
    It MUST be possible to combine the multiple header fields into one
    "field-name: field-value" pair, without changing the semantics of
    the message, by appending each subsequent field-value to the first,
    each separated by a comma. The order in which header fields with the
    same field-name are received is therefore significant to the
    interpretation of the combined field value, and thus a proxy MUST
    NOT change the order of these field values when a message is
    forwarded.
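
    The combining rule above can be sketched as follows. This is an
    illustrative sketch (the function name is invented here), assuming
    the caller has already verified that each repeated field is defined
    as a comma-separated list:

```python
# Sketch: fold repeated header fields whose values are comma-separated
# lists into a single "name: v1,v2" pair, preserving the received order.

def combine_headers(pairs):
    """pairs: iterable of (field-name, field-value) in received order."""
    combined = {}
    order = []
    for name, value in pairs:
        key = name.lower()        # field names are case-insensitive
        if key in combined:
            combined[key] = combined[key] + "," + value
        else:
            combined[key] = value
            order.append(key)
    return [(k, combined[k]) for k in order]
```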
 
      generic-header      =    channel-identifier
                          /    active-request-id-list
                          /    proxy-sync-id
                          /    content-id
                          /    content-type
                          /    content-length
                          /    content-base
                          /    content-location
                          /    content-encoding
                          /    cache-control
                          /    logging-tag
                          /    set-cookie
                          /    set-cookie2
                          /    vendor-specific
 
    Header field          where     s  g  A
    __________________________________________________________
    Channel-Identifier      R       m  m  m
    Channel-Identifier      r       m  m  m
    Active-Request-Id-List  R       -  -  O
    Active-Request-Id-List  r       -  -  O
    Proxy-Sync-Id           R       -  -  O
    Content-Id              R       o  o  o
    Content-Type            R       o  o  o
    Content-Length          R       o  o  o
    Content-Base            R       o  o  o
    Content-Location        R       o  o  o
    Content-Encoding        R       o  o  o
    Cache-Control           R       o  o  o
    Logging-Tag             R       o  o  -
    Set-Cookie              R       o  o  o
    Set-Cookie2             R       o  o  o
    Vendor-Specific         R       o  o  o
 
    Legend:   (s) - SET-PARAMS, (g) - GET-PARAMS, (A) - Generic MRCP
    message, (B) - BARGE-IN-OCCURED, (C) - START-OF-SPEECH, (o) -
    Optional(Refer text for further constraints), (R) - Request, (r) -
    Response
 
    All header fields in MRCPv2 are case-insensitive, consistent with
    the HTTP and SIP protocol header definitions.
 
 Channel-Identifier
 
    All MRCPv2 methods, responses, and events MUST contain the Channel-
    Identifier header field. The value of this field is allocated by the
    server when a control channel is added to the session through an SDP
    offer/answer exchange. It consists of two parts separated by the '@'
    symbol. The first part is a 32-bit positive integer, represented as
    a hexadecimal string, identifying the MRCP session. The second part
    is a string token that specifies one of the media processing
    resource types listed in Section 3.2. The hexadecimal digit string
    MUST be unique within the server and is common to all resource
    channels established through a single SIP session.
 
      channel-identifier  = "Channel-Identifier" ":" channel-id CRLF
 
      Channel-id          = 1*HEXDIG "@" 1*VCHAR
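
    A receiver can validate and split this value with a pattern derived
    directly from the ABNF above. The sketch below is illustrative (the
    names are invented here); it only checks the two-part shape, not
    whether the resource type is one registered in Section 3.2:

```python
# Sketch: validate and split a Channel-Identifier value of the form
# <hex-session-id>@<resource-type>.
import re

CHANNEL_ID = re.compile(r"^(?P<session>[0-9A-Fa-f]+)@(?P<resource>\S+)$")

def split_channel_id(value: str):
    m = CHANNEL_ID.match(value.strip())
    if m is None:
        raise ValueError("bad Channel-Identifier: " + value)
    return m.group("session"), m.group("resource")
```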
 
 Active-Request-Id-List
 
    In a request, this field indicates the list of request-ids that the
    request should apply to. This is useful when there are multiple
    requests that are PENDING or IN-PROGRESS and the client wants this
    request to apply to one or more of them specifically.
 
    In a response, this field returns the list of request-ids that the
    operation modified or affected. There could be one or more requests
    that returned a request-state of PENDING or IN-PROGRESS. When a
    method affecting one or more PENDING or IN-PROGRESS requests is sent
    from the client to the server, the response MUST contain the list of
    request-ids that were affected or modified by this command in its
    header field.
 
    The active-request-id-list is only used in requests and responses,
    not in events.
 
    For example, if a STOP request with no active-request-id-list is
    sent to a synthesizer resource (a wildcard STOP) that has one or
    more SPEAK requests in the PENDING or IN-PROGRESS state, all SPEAK
    requests MUST be cancelled, including the one IN-PROGRESS, and the
    response to the STOP request would contain, in its active-request-
    id-list, the request-ids of all the SPEAK requests that were
    terminated. In this case, no SPEAK-COMPLETE or RECOGNITION-COMPLETE
    events will be sent for the terminated requests.
 
      active-request-id-list  =  "Active-Request-Id-List" ":"
                                  request-id *("," request-id) CRLF
 
 Proxy-Sync-Id
 
    When any server resource generates a barge-in-able event, it
    generates a unique tag and sends it as a header field in an event to
    the client. The client then acts as a proxy to the server resource
    and sends a BARGE-IN-OCCURRED method to the synthesizer server
    resource with the Proxy-Sync-Id it received from the server
    resource. When the recognizer and synthesizer resources are part of
    the same session, they may choose to work together to achieve
    quicker interaction and response. Here, the Proxy-Sync-Id helps the
    resource receiving the event, proxied by the client, decide whether
    this event has already been processed through a direct interaction
    of the resources.
 
      proxy-sync-id    =  "Proxy-Sync-Id" ":" 1*VCHAR CRLF
 
 Accept-Charset
 
    See [H14.2]. This specifies the acceptable character set for
    entities returned in the response or events associated with this
    request. This is useful in specifying the character set to use in
    the NLSML results of a RECOGNITION-COMPLETE event.
 
 Content-Type
 
    See [H14.17]. Note that the content types suitable for MRCPv2 are
    restricted to speech markup, grammars, recognition results, etc.,
    and are specified later in this document. The multi-part content
    type "multipart/mixed" is supported to communicate multiple of the
    above-mentioned contents, in which case the body parts cannot
    contain any MRCPv2-specific headers.
 
 Content-Id
 
    This field contains an ID or name for the content, by which it can
    be referred to. The definition of this field is in full compliance
    with RFC 2111 [15] and is needed in multi-part messages. In MRCPv2,
    whenever the content needs to be stored by either the client or the
    server, it is stored associated with this ID. Such content can be
    referenced during the session in URI form using the session: URI
    scheme described in a later section.
 
 Content-Base
 
    The content-base entity-header field may be used to specify the base
    URI for resolving relative URLs within the entity.
 
      content-base      = "Content-Base" ":" absoluteURI CRLF
 
    Note, however, that the base URI of the contents within the entity-
    body may be redefined within that entity-body. An example of this
    would be a multi-part MIME entity, which in turn can have multiple
    entities within it.
 
 Content-Encoding
 
    The content-encoding entity-header field is used as a modifier to
    the media-type. When present, its value indicates what additional
    content codings have been applied to the entity-body, and thus what
    decoding mechanisms must be applied in order to obtain the media
    type referenced by the content-type header field. Content-encoding
    is primarily used to allow a document to be compressed without
    losing the identity of its underlying media type.
 
      content-encoding  = "Content-Encoding" ":"
                               *WSP content-coding
                               *(*WSP "," *WSP content-coding *WSP )
                               CRLF
 
    Content coding is defined in [H3.5]. An example of its use is
 
      Content-Encoding: gzip
 
    If multiple encodings have been applied to an entity, the content
    codings MUST be listed in the order in which they were applied.
 
 Content-Location
 
    The content-location entity-header field MAY be used to supply the
    resource location for the entity enclosed in the message when that
    entity is accessible from a location separate from the requested
    resource's URI. Refer to [H14.14].
 
      content-location =  "Content-Location" ":"
                          ( absoluteURI / relativeURI ) CRLF
 
    The content-location value is a statement of the location of the
    resource corresponding to this particular entity at the time of the
    request. The server MAY use this header field to optimize certain
    operations. When providing this header field, the entity being sent
    should not have been modified from what was retrieved from the
    content-location URI.
 
    For example, if the client provided a grammar markup inline, and it
    had previously retrieved it from a certain URI, that URI can be
    provided as part of the entity using the content-location header
    field. This allows a resource like the recognizer to check its cache
    to see if this grammar was previously retrieved, compiled, and
    cached, in which case it might optimize by using the previously
    compiled grammar object.
 
    If the content-location is a relative URI, the relative URI is
    interpreted relative to the content-base URI.
 
 
 Content-Length
 
    This field contains the length of the content of the message body
    (i.e. after the double CRLF following the last header field). Unlike
    HTTP, it MUST be included in all messages that carry content beyond
    the header portion of the message. If it is missing, a default value
    of zero is assumed. It is interpreted according to [H14.13].
 
 Cache-Control
 
    If the server implements caching, it MUST adhere to the cache
    correctness rules of HTTP/1.1 (RFC 2616) when accessing and caching
    HTTP URIs. In particular, the expires and cache-control headers of
    the cached URI or document must be honored and will always take
    precedence over the Cache-Control defaults set by this header
    field. The cache-control directives are used to define the default
    caching algorithms on the server for the session or request. The
    scope of the directive is based on the method on which it is sent.
    If the directives are sent on a SET-PARAMS method, they MUST apply
    to all requests for external documents the server makes during that
    session. If the directives are sent on any other message, they MUST
    apply only to the external document requests the server makes for
    that method. An empty cache-control header on the GET-PARAMS method
    is a request for the server to return the current cache-control
    directive settings on the server.
 
      cache-control       = "Cache-Control" ":" cache-directive
                                       *("," *LWS cache-directive) CRLF
 
      cache-directive     = "max-age" "=" delta-seconds
                          / "max-stale" [ "=" delta-seconds ]
                          / "min-fresh" "=" delta-seconds
 
      delta-seconds       = 1*DIGIT
 
 
    Here, delta-seconds is a decimal time value specifying the number
    of seconds since the message response or data was received by the
    server.
 
    These directives allow the client to override the basic expiration
    mechanism on the server.
 
    max-age
 
    Indicates that the client can accept the server using a response
    whose age is no greater than the specified time in seconds. Unless
    a max-stale directive is also included, the client is not willing
    to accept the media server using a stale response.
 
    min-fresh
 
    Indicates that the client is willing to accept the server using a
    response whose freshness lifetime is no less than its current age
    plus the specified time in seconds. That is, the client wants the
    server to use a response that will still be fresh for at least the
    specified number of seconds.
 
    max-stale
 
    Indicates that the client is willing to accept the server using a
    response or data that has exceeded its expiration time. If max-stale
    is assigned a value, then the client is willing to accept the server
    using a response that has exceeded its expiration time by no more
    than the specified number of seconds. If no value is assigned to
    max-stale, then the client is willing to accept the server using a
    stale response of any age.
 
 
    The server cache MAY be requested to use a stale response/data
    without validation, but only if this does not conflict with any
    "MUST"-level requirements concerning cache validation (e.g., a
    "must-revalidate" cache-control directive) in the HTTP/1.1
    specification pertaining to the URI.
 
    If both the MRCPv2 cache-control directive and the cached entry on
    the server include "max-age" directives, then the lesser of the two
    values is used for determining the freshness of the cached entry for
    that request.
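 
    For example, a client could direct the server to reuse cached
    documents up to a day old, tolerating up to a minute of staleness
    (the values are illustrative):
 
      Cache-Control: max-age=86400, max-stale=60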
 
 Logging-Tag
 
    This header field MAY be sent as part of a SET-PARAMS/GET-PARAMS
    method to set the logging tag for logs generated by the server. Once
    set, the value persists until a new value is set or the session is
    ended.  The MRCPv2 server SHOULD provide a mechanism to subset its
    output logs so that system administrators can examine or extract
    only the portion of the log file during which the logging tag was
    set to a certain value.
 
    MRCPv2 clients using this feature SHOULD take care to ensure that
    no two clients specify the same logging tag.  In the event that two
    clients specify the same logging tag, the effect on the MRCPv2
    server's output logs is undefined.
 
           logging-tag    =    "Logging-Tag" ":" 1*VCHAR CRLF
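 
    For example, a client could tag the server logs for the current
    session as follows (the tag value is illustrative):
 
      Logging-Tag: AppServer01-Session1234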
 
 Set-Cookie and Set-Cookie2:
 
    Since the HTTP client on the MRCP server fetches documents for
    processing on behalf of the MRCP client, the cookie store in the
    HTTP client of the MRCP server is considered to be an extension of
    the cookie store in the HTTP client of the MRCP client. This
    requires that the MRCP client and server be able to synchronize
    their cookie stores as needed. The MRCP client should be able to
    push its stored cookies to the MRCP server and get new cookies that
    the MRCPv2 server stored back to the MRCP client. The set-cookie
    and set-cookie2 entity-header fields MAY be included in MRCPv2
    requests to update the cookie store on a server and be returned in
    final
    MRCPv2 responses or events to subsequently update the client's own
    cookie store. The stored cookies on the server persist for the
    duration of the MRCPv2 session and MUST be destroyed at the end of
    the session. Since the type of cookie header is dictated by the HTTP
    origin server, MRCPv2 clients and servers SHOULD support both the
    set-cookie and set-cookie2 entity header fields.
 
          set-cookie      =       "Set-Cookie:" cookies CRLF
          cookies         =       cookie *("," *LWS cookie)
          cookie          =       NAME "=" VALUE *(";" cookie-av)
          NAME            =       attribute
          VALUE           =       value
          cookie-av       =       "Comment" "=" value
                          /       "Domain" "=" value
                          /       "Max-Age" "=" value
                          /       "Path" "=" value
                          /       "Secure"
                          /       "Version" "=" 1*DIGIT
                          /       "Age" "=" delta-seconds
 
          set-cookie2     =       "Set-Cookie2:" cookies2 CRLF
          cookies2        =       cookie2 *("," *LWS cookie2)
          cookie2         =       NAME "=" VALUE *(";" cookie-av2)
          NAME            =       attribute
          VALUE           =       value
          cookie-av2      =       "Comment" "=" value
                          /       "CommentURL" "=" <"> http_URL <">
                          /       "Discard"
                          /       "Domain" "=" value
                          /       "Max-Age" "=" value
                          /       "Path" "=" value
                          /       "Port" [ "=" <"> portlist <"> ]
                          /       "Secure"
                          /       "Version" "=" 1*DIGIT
                          /       "Age" "=" delta-seconds
          portlist        =       portnum *("," *LWS portnum)
          portnum         =       1*DIGIT
 
    The set-cookie and set-cookie2 header fields are specified in RFC
    2109 and RFC 2965, respectively. The "Age" attribute is introduced in
    this specification to indicate the age of the cookie and is
    OPTIONAL. An MRCPv2 client or server SHOULD calculate the age of the
    cookie according to the age calculation rules in the HTTP/1.1
    specification (RFC 2616) and append the "Age" attribute accordingly.
 
    The media client or server MUST supply defaults for the Domain and
    Path attributes if omitted by the HTTP origin server as specified in
    RFC 2109 (set-cookie) and RFC 2965 (set-cookie2). Note that there
    will be no leading dot present in the Domain attribute value in this
    case. Although an explicitly specified Domain value received via the
    HTTP protocol may be modified to include a leading dot, a media
    client or server MUST NOT modify the Domain value when received via
    the MRCPv2 protocol.
 
    A media client or server MAY combine multiple cookie header fields
    of the same type into a single "field-name: field-value" pair as
    described in Section 6.1.
 
    The set-cookie and set-cookie2 headers MAY be specified in any
    request that subsequently results in the server performing an HTTP
    access. When a server receives new cookie information from an HTTP
    origin server, and assuming the cookie store is modified according
    to RFC 2109 or RFC 2965, the server MUST return the new cookie
    information in the MRCPv2 COMPLETE response or event, as
    appropriate, to allow the client to update its own cookie store.
 
    The SET-PARAMS request MAY specify the set-cookie and set-cookie2
    headers to update the cookie store on a server. The GET-PARAMS
    request MAY be used to return the entire cookie store of "Set-
    Cookie" or "Set-Cookie2" type cookies to the client.
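 
    For example, a client could push a stored cookie to the server's
    cookie store with a SET-PARAMS request (the message-length,
    request-id, and cookie values are illustrative):
 
      C->S:MRCP/2.0 152 SET-PARAMS 543257
           Channel-Identifier: 32AECB23433802@speechrecog
           Set-Cookie2: session_id="1234"; Version=1;
                        Domain="example.com"; Path="/"; Max-Age=3600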
 
 Vendor Specific Parameters
 
    This set of headers allows the client to set vendor-specific
    parameters.
 
      vendor-specific     =    "Vendor-Specific-Parameters" ":"
                               vendor-specific-av-pair
                               *[";" vendor-specific-av-pair] CRLF
      vendor-specific-av-pair = vendor-av-pair-name "="
                               vendor-av-pair-value
 
    This header MAY be sent in the SET-PARAMS/GET-PARAMS methods and is
    used to set vendor-specific parameters on the server side. The
    vendor-av-pair-name can be any vendor-specific field name and
    conforms to the XML vendor-specific attribute naming convention.
    The vendor-av-pair-value is the value to set the attribute to and
    needs to be quoted.
 
    When asking the server for the current value of these parameters,
    this header can be sent in the GET-PARAMS method with the list of
    vendor-specific attribute names to get, separated by semicolons.
 
 6.2. SET-PARAMS
 
    The SET-PARAMS method, from the client to the server, tells the
    MRCPv2 resource to define session parameters, such as voice
    characteristics and prosody on synthesizers, recognition timers on
    recognizers, etc. If the server accepted and set all parameters, it
    MUST return a Response-Status of 200. If it chose to ignore some
    optional headers that can be safely ignored without affecting
    operation of the server, it MUST return 201.
 
    If some of the headers being set are unsupported for the resource
    or have illegal values, the server MUST reject the request with a
    403 Bad Parameter response and MUST include in the response the
    header fields that could not be set. The headers specified in
    SET-PARAMS affect session-level values; they do not apply at
    request-level scope or to requests that are IN-PROGRESS.
 
    Example:
      C->S:MRCP/2.0 124 SET-PARAMS 543256
           Channel-Identifier: 32AECB23433802@speechsynth
           Voice-gender: female
           Voice-category: adult
           Voice-variant: 3
 
 
      S->C:MRCP/2.0 47 543256 200 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
 
 6.3. GET-PARAMS
 
    The GET-PARAMS method, from the client to the server, asks the
    MRCPv2 resource for its current session parameters, such as voice
    characteristics and prosody on synthesizers, recognition timers on
    recognizers, etc. The client SHOULD send the list of parameters it
    wants to read from the server by listing a set of empty header
    fields. If a specific list is not specified, the server SHOULD
    return all the settable headers, including vendor-specific
    parameters, and their current values. This wildcard usage can be
    quite intensive, since the number of settable parameters can be
    large depending on the vendor.  Hence, it is RECOMMENDED that the
    client not use the wildcard GET-PARAMS operation very often. Note
    that GET-PARAMS returns header values that have been set for the
    whole session and does not return values that have request-level
    scope.
 
    Example:
      C->S:MRCP/2.0 136 GET-PARAMS 543256
           Channel-Identifier: 32AECB23433802@speechsynth
           Voice-gender:
           Voice-category:
           Voice-variant:
           Vendor-Specific-Parameters:com.mycorp.param1;
                       com.mycorp.param2
 
      S->C:MRCP/2.0 163 543256 200 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Voice-gender:female
           Voice-category: adult
           Voice-variant: 3
           Vendor-Specific-Parameters:com.mycorp.param1="Company Name";
                          com.mycorp.param2="124324234@mycorp.com"
 
 
 7.   Resource Discovery
 
    The list and capabilities of media resources on a server can be
    discovered by using the SIP OPTIONS method to request the
    capabilities of the server. The server SHOULD respond to such a
    request with an SDP description of its capabilities according to
    RFC 3264. The MRCPv2 capabilities are described by a single m-line
    containing the media type "control", transport type "TLS", "TCP" or
    "SCTP", and a format of "application/mrcpv2". There should be one
    "resource" attribute for each media resource that the server
    supports, with the resource type identifier as its value.
 
    The SDP description MUST also contain m-lines describing the audio
    capabilities and the codecs the server supports.
 
 
    Example 4:
    The client uses the SIP OPTIONS method to query the capabilities of
    the MRCPv2 server.
 
    C->S:
           OPTIONS sip:mrcp@mediaserver.com SIP/2.0
           Max-Forwards: 6
           To: <sip:mrcp@mediaserver.com>
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 63104 OPTIONS
           Contact: <sip:sarvi@cisco.com>
           Accept: application/sdp
           Content-Length: 0
 
 
    S->C:
           SIP/2.0 200 OK
           To: <sip:mrcp@mediaserver.com>;tag=93810874
           From: Sarvi <sip:sarvi@Cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 63104 OPTIONS
           Contact: <sip:mrcp@mediaserver.com>
           Allow: INVITE, ACK, CANCEL, OPTIONS, BYE
           Accept: application/sdp
           Accept-Encoding: gzip
           Accept-Language: en
           Supported: foo
           Content-Type: application/sdp
           Content-Length: 274
 
           v=0
           o=sarvi 2890844526 2890842807 IN IP4 126.16.64.4
           s=SDP Seminar
           i=A session for processing media
           c=IN IP4 224.2.17.12/127
           m=control 9 TCP application/mrcpv2
           a=resource:speechsynth
           a=resource:speechrecog
           a=resource:speakverify
           m=audio 0 RTP/AVP 0 1 3
           a=rtpmap:0 PCMU/8000
           a=rtpmap:1 1016/8000
           a=rtpmap:3 GSM/8000
 
 
 
 8.   Speech Synthesizer Resource
 
    This resource is capable of converting text provided by the client
    into a speech stream generated in real time.  Depending on the
    implementation and capability of this resource, the client can
    control parameters like voice characteristics, speaker speed, etc.
 
    The synthesizer resource is controlled by MRCPv2 requests from the
    client. Similarly, the resource can respond to these requests or
    generate asynchronous events to the client to indicate certain
    conditions during the processing of the stream.
 
    This section applies to the following resource types.
 
           1. speechsynth
           2. basicsynth
 
    The capabilities of these resources are addressed in Section 4.5.
 
 8.1. Synthesizer State Machine
 
    The synthesizer maintains states to correlate MRCPv2 requests from
    the client. The state transitions shown below describe the states of
    the synthesizer and reflect the request at the head of the queue. A
    SPEAK request in the PENDING state can be deleted or stopped by a
    STOP request and does not affect the state of the resource.
 
         Idle                   Speaking                  Paused
         State                  State                     State
          |                       |                          |
          |----------SPEAK------->|                 |--------|
          |<------STOP------------|             CONTROL      |
          |<----SPEAK-COMPLETE----|                 |------->|
          |<----BARGE-IN-OCCURRED-|                          |
          |              |--------|                          |
          |          CONTROL      |-----------PAUSE--------->|
          |              |------->|<----------RESUME---------|
          |                       |               |----------|
          |                       |              PAUSE       |
          |                       |               |--------->|
          |                       |----------|               |
          |                       |      SPEECH-MARKER       |
          |                       |<---------|               |
          |----------|            |             |------------|
          |         STOP          |          SPEAK           |
          |          |            |             |----------->|
          |<---------|            |                          |
          |<--------------------STOP-------------------------|
          |----------|            |                          |
          |     LOAD-LEXICON      |                          |
          |          |            |                          |
          |<---------|            |                          |
          |<--------------------BARGE-IN-OCCURRED------------|
 
 8.2. Synthesizer Methods
 
    The synthesizer supports the following methods.
 
    synthesizer-method    =  "SPEAK"    ; A
                          /  "STOP"     ; B
                          /  "PAUSE"    ; C
                          /  "RESUME"   ; D
                          /  "BARGE-IN-OCCURRED" ; E
                          /  "CONTROL"  ; F
                          /  "LOAD-LEXICON"  ; G
 
 
 8.3. Synthesizer Events
 
    The synthesizer may generate the following events.
 
      synthesizer-event   =  "SPEECH-MARKER" ; H
                          /  "SPEAK-COMPLETE" ; I
 
 8.4. Synthesizer Header Fields
 
    A synthesizer message may contain header fields containing request
    options and information to augment the Request, Response, or Event
    message with which it is associated.
 
      synthesizer-header  =  jump-size
                          /  kill-on-barge-in
                          /  speaker-profile
                          /  completion-cause
                          /  completion-reason
                          /  voice-parameter
                          /  prosody-parameter
                          /  speech-marker
                          /  speech-language
                          /  fetch-hint
                          /  audio-fetch-hint
                          /  fetch-timeout
                          /  failed-uri
                          /  failed-uri-cause
                          /  speak-restart
                          /  speak-length
                          /  load-lexicon
                          /  lexicon-search-order
 
    Header field          where     s  g  A  B  C  D  E  F  G  H  I
    _______________________________________________________________
    Jump-Size               R       -  -  -  -  -  -  -  o  -  -  -
    Kill-On-Barge-In        R       -  -  o  -  -  -  -  -  -  -  -
    Speaker-Profile         R       o  o  o  -  -  -  -  o  -  -  -
    Completion-Cause        R       -  -  -  -  -  -  -  -  -  -  m
    Completion-Cause       4XX      -  -  o  -  -  -  -  -  -  -  -
    Completion-Reason       R       -  -  -  -  -  -  -  -  -  -  m
    Completion-Reason      4XX      -  -  o  -  -  -  -  -  -  -  -
    Voice-Parameter         R       o  o  o  -  -  -  -  o  -  -  -
    Prosody-Parameter       R       o  o  o  -  -  -  -  o  -  -  -
    Speech-Marker           R       -  -  -  -  -  -  -  -  -  m  m
    Speech-Marker          2XX      -  -  m  m  m  m  -  m  -  -  -
    Speech-Language         R       o  o  o  -  -  -  -  -  -  -  -
    Fetch-Hint              R       o  o  o  -  -  -  -  -  -  -  -
    Audio-Fetch-Hint        R       o  o  o  -  -  -  -  -  -  -  -
    Fetch-Timeout           R       o  o  o  -  -  -  -  -  -  -  -
    Failed-URI              R       -  -  -  -  -  -  -  -  -  -  o
    Failed-URI             4XX      -  o  -  -  -  -  -  -  -  -  o
    Failed-URI-Cause        R       -  -  -  -  -  -  -  -  -  -  o
    Failed-URI-Cause       4XX      -  o  -  -  -  -  -  -  -  -  o
    Speak-Restart          2XX      -  -  -  -  -  -  -  o  -  -  -
    Speak-Length            R       -  o  -  -  -  -  -  o  -  -  -
    Load-Lexicon            R       -  -  -  -  -  -  -  -  o  -  -
    Lexicon-Search-Order    R       -  -  -  -  -  -  -  -  m  -  -
 
    Legend:   (s) - SET-PARAMS, (g) - GET-PARAMS, (A) - SPEAK,
    (B) - STOP, (C) - PAUSE, (D) - RESUME, (E) - BARGE-IN-OCCURRED,
    (F) - CONTROL, (G) - LOAD-LEXICON, (H) - SPEECH-MARKER,
    (I) - SPEAK-COMPLETE, (m) - Mandatory, (o) - Optional (refer to
    text for further constraints), (R) - Request, (r) - Response
 
 Jump-Size
 
    This header MAY be specified in a CONTROL method and controls the
    jump size to move forward or backward in an active SPEAK request. A
    + or - indicates a value relative to what is currently being
    played. This header MAY also be specified in a SPEAK request to
    indicate an offset into the speech markup from which the SPEAK
    request should start speaking. The speech length units supported
    are dependent on the synthesizer implementation. If it does not
    support a unit or the operation, the resource SHOULD respond with a
    status code of 404 "Illegal or Unsupported value for parameter".
 
      jump-size           =    "Jump-Size" ":" speech-length-value CRLF
      speech-length-value =    numeric-speech-length
                          /    text-speech-length
      text-speech-length  =    1*VCHAR SP "Tag"
 
      numeric-speech-length=   ("+" / "-") 1*DIGIT SP
                               numeric-speech-unit
      numeric-speech-unit =    "Second"
                          /    "Word"
                          /    "Sentence"
                          /    "Paragraph"
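 
    For example, a CONTROL method could ask the synthesizer to skip 10
    seconds ahead in the active SPEAK request (the value is
    illustrative):
 
      Jump-Size: +10 Second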
 
 Kill-On-Barge-In
 
    This header MAY be sent as part of the SPEAK method to enable
    kill-on-barge-in support. If enabled, the SPEAK method is
    interrupted by DTMF input detected by a signal detector resource or
    by the start of speech sensed or recognized by the speech
    recognizer resource.
 
      kill-on-barge-in    =    "Kill-On-Barge-In" ":" boolean-value CRLF
      boolean-value       =    "true" / "false"
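 
    For example, a SPEAK request that must not be interrupted by
    barge-in would carry:
 
      Kill-On-Barge-In: false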
 
    If the recognizer or signal detector resource is on the same server
    as the synthesizer, the server SHOULD recognize their interactions
    by their common MRCPv2 channel identifier (ignoring the portion
    after "@", which is the resource type) and have them work with each
    other to provide kill-on-barge-in support.
 
    The client MUST send a BARGE-IN-OCCURRED method to the synthesizer
    resource when it receives a barge-in-able event from any source.
    This source could be a synthesizer resource or signal detector
    resource and MAY be local or distributed. If this field is not
    specified, the value defaults to "true".
 
 Speaker Profile
 
    This header MAY be part of the SET-PARAMS/GET-PARAMS or SPEAK
    request from the client to the server and specifies the profile of
    the speaker by a URI, which may reference a set of voice parameters
    like gender, accent, etc.
 
      speaker-profile     =    "Speaker-Profile" ":" uri CRLF
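 
    For example (the URI is illustrative):
 
      Speaker-Profile: http://www.example.com/profiles/female-adult.pfl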
 
 Completion Cause
 
    This header field MUST be specified in a SPEAK-COMPLETE event coming
    from the synthesizer resource to the client. This indicates the
    reason behind the SPEAK request completion.
 
      completion-cause    =    "Completion-Cause" ":" 1*DIGIT SP
                               1*VCHAR CRLF
 
    Cause-Code  Cause-Name     Description
      000       normal         SPEAK completed normally.
      001       barge-in       SPEAK request was terminated because
                               of barge-in.
      002       parse-failure  SPEAK request terminated because of a
                               failure to parse the speech markup text.
       003       uri-failure    SPEAK request terminated because access
                                to one of the URIs failed.
      004       error          SPEAK request terminated prematurely due
                               to synthesizer error.
      005       language-unsupported
                               Language not supported.
      006       lexicon-load-failure
                               Lexicon loading failed.
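 
    For example, a SPEAK-COMPLETE event for a request terminated by
    barge-in would carry:
 
      Completion-Cause: 001 barge-in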
 
 
 Completion Reason
 
    This header field MAY be specified in a SPEAK-COMPLETE event coming
    from the synthesizer resource to the client. This contains the
    reason text behind the SPEAK request completion. This field can be
    used to communicate text describing the reason for the failure,
    such as an error in parsing the speech markup text.
 
 
      completion-reason   =    "Completion-Reason" ":"
                               quoted-string CRLF
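 
    For example (the reason text is illustrative):
 
      Completion-Reason: "Parse failure in speech markup"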
 
 Voice-Parameters
 
    This set of headers defines the voice of the speaker.
 
      voice-parameter     =    "Voice-" voice-param-name ":"
                               voice-param-value CRLF
 
    voice-param-name is any one of the attribute names under the voice
    element specified in W3C's Speech Synthesis Markup Language
    Specification[10]. The voice-param-value is any one of the value
    choices of the corresponding voice element attribute specified in
    the above section.
 
    These header fields MAY be sent in a SET-PARAMS/GET-PARAMS request
    to define/get default values for the entire session, or MAY be sent
    in the SPEAK request to define default values for that SPEAK
    request. Furthermore, these attributes can be part of the speech
    text marked up in SML.
 
    These voice parameter header fields can also be sent in a CONTROL
    method to affect a SPEAK request in progress and change its
    behavior on the fly. If the synthesizer resource does not support
    this operation, it should respond to the client with a status of
    unsupported.
 
 Prosody-Parameters
 
    This set of headers defines the prosody of the speech.
 
      prosody-parameter   =    "Prosody-" prosody-param-name ":"
                               prosody-param-value CRLF
 
    prosody-param-name is any one of the attribute names under the
    prosody element specified in W3C's Speech Synthesis Markup Language
    Specification[10]. The prosody-param-value is any one of the value
    choices of the corresponding prosody element attribute specified in
    the above section.
 
    These header fields MAY be sent in a SET-PARAMS/GET-PARAMS request
    to define/get default values for the entire session, or MAY be sent
    in the SPEAK request to define default values for that SPEAK
    request. Furthermore, these attributes can be part of the speech
    text marked up in SML.
 
    The prosody parameter header fields in the SET-PARAMS or SPEAK
    request only apply if the speech data is of type text/plain and does
    not use a speech markup format.
 
 
    These prosody parameter header fields MAY also be sent in a CONTROL
    method to affect a SPEAK request in progress and change its
    behavior on the fly. If the synthesizer resource does not support
    this operation, it should respond to the client with a status of
    unsupported.
 
 Speech Marker
 
    This header field contains a marker tag that may be embedded in the
    speech data. Most speech markup formats provide mechanisms to embed
    marker fields between speech texts. The synthesizer will generate
    SPEECH-MARKER events when it reaches these marker fields. This field
    SHOULD be part of the SPEECH-MARKER event and will contain the
    marker tag values. This header may have additional timestamp
    information in a "timestamp" field separated by a semicolon. This is
    the NTP timestamp and MUST be synced with the RTP timestamp. This
    header field SHOULD also be returned in responses to STOP and
    CONTROL methods and in the SPEAK-COMPLETE event. In these messages,
    the marker tag SHOULD be the last tag encountered and would be ""
    if none was encountered. The marker tag SHOULD have timestamp
    information reflecting the point in the current SPEAK request at
    which the particular message was generated.
 
      timestamp      =         "timestamp" "=" time-stamp-value CRLF
 
      speech-marker  =         "Speech-Marker" ":" 1*VCHAR
                               [";" timestamp ]CRLF
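 
    For example, a SPEECH-MARKER event could report (the marker tag and
    timestamp values are illustrative):
 
      Speech-Marker: marker1;timestamp=857206027059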
 
 Speech Language
 
    This header field specifies the default language of the speech data
    if the language is not specified within the data itself. The value
    of this header field should follow RFC 3066. This header MAY occur
    in SPEAK, SET-PARAMS, or GET-PARAMS requests.
 
      speech-language          =    "Speech-Language" ":" 1*VCHAR CRLF
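 
    For example, to set the default language of the speech data to US
    English:
 
      Speech-Language: en-US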
 
 Fetch Hint
 
    When the synthesizer needs to fetch documents or other resources
    like speech markup or audio files, etc., this header field controls
    URI access properties. This defines when the synthesizer should
    retrieve content from the server. A value of "prefetch" indicates a
    file may be downloaded when the request is received, whereas "safe"
    indicates a file that should only be downloaded when actually
    needed. The default value is "prefetch". This header field MAY occur
    in SPEAK, SET-PARAMS or GET-PARAMS requests.
 
      fetch-hint               =    "Fetch-Hint" ":" 1*ALPHA CRLF
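 
    For example, to direct the synthesizer to retrieve content only
    when it is actually needed:
 
      Fetch-Hint: safe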
 
 
 
 Audio Fetch Hint
 
   When the synthesizer needs to fetch documents or other resources,
   such as audio files, this header field controls URI access
   properties. It defines whether or not the synthesizer can attempt
   to optimize speech by pre-fetching audio. The value is either
   "safe", to say that audio is only fetched when it is needed, never
   before; "prefetch", to permit, but not require, the platform to
   pre-fetch the audio; or "stream", to allow the platform to stream
   the audio fetches. The default value is "prefetch". This header
   field MAY occur in SPEAK, SET-PARAMS or GET-PARAMS requests.
 
      audio-fetch-hint         =    "Audio-Fetch-Hint" ":" 1*ALPHA CRLF
 
 Fetch Timeout
 
   When the synthesizer needs to fetch documents or other resources,
   such as audio files, this header field controls URI access
   properties. It defines the synthesizer timeout for content the
   server may need to fetch from the network, specified in
   milliseconds. The default value is platform-dependent. This header
   field MAY occur in SPEAK, SET-PARAMS or GET-PARAMS requests.
 
      fetch-timeout            =    "Fetch-Timeout" ":" 1*DIGIT CRLF
 
 Failed URI
 
   When a synthesizer method requires the synthesizer to fetch or
   access a URI and the access fails, the server SHOULD provide the
   failed URI in this header field in the method response.

     failed-uri               =    "Failed-URI" ":" uri CRLF
 
 Failed URI Cause
 
   When a synthesizer method requires the synthesizer to fetch or
   access a URI and the access fails, the server SHOULD provide the
   URI-specific or protocol-specific response code through this header
   field in the method response. This field is defined as alphanumeric
   to accommodate all protocols, some of which might have a response
   string instead of a numeric response code.
 
      failed-uri-cause    =    "Failed-URI-Cause" ":" 1*ALPHANUM CRLF
 
 Speak Restart
 
    When a CONTROL request to jump backward is issued to a currently
    speaking synthesizer resource and the target jump point is beyond
    the start of the current SPEAK request, the current SPEAK request
    SHALL re-start from the beginning of its speech data and the
 
 
 S Shanmugham                  IETF-Draft                       Page 39
 
                            MRCPv2 Protocol              October, 2004
 
    response to the CONTROL request SHOULD contain this header
    indicating a restart. This header MAY occur in the CONTROL response.
 
      speak-restart       =    "Speak-Restart" ":" boolean-value CRLF
 
 Speak Length
 
   This header MAY be specified in a CONTROL method to control the
   length of speech to speak, relative to the current speaking point
   in the currently active SPEAK request. A negative value is illegal
   in this field. If a value with a Tag unit is specified, the media
   must be spoken until the tag is reached or the SPEAK request
   completes, whichever comes first. This header MAY also be specified
   in a SPEAK request to indicate the length to speak in the speech
   data, relative to the point at which the SPEAK request starts. The
   speech length units supported are dependent on the synthesizer
   implementation. If it does not support a unit or the operation, the
   resource SHOULD respond with a status code of 404 "Illegal or
   Unsupported value for header".
 
      speak-length        =    "Speak-Length" ":" speech-length-value
                               CRLF
      speech-length-value =    numeric-speech-length
                          /    text-speech-length
      text-speech-length  =    1*VCHAR SP "Tag"
 
      numeric-speech-length=   ("+" / "-") 1*DIGIT SP
                               numeric-speech-unit
      numeric-speech-unit =    "Second"
                          /    "Word"
                          /    "Sentence"
                          /    "Paragraph"
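
   The speech-length-value grammar above can be exercised with a short
   parser. This is a non-normative sketch; the function name and return
   shapes are invented here for illustration only.

```python
import re

# Illustrative parser for the speech-length-value ABNF above.
_NUMERIC = re.compile(r'^([+-])(\d+) (Second|Word|Sentence|Paragraph)$')
_TAG = re.compile(r'^(\S+) Tag$')

def parse_speech_length(value):
    """Return ("numeric", signed_count, unit) or ("tag", name)."""
    m = _NUMERIC.match(value)
    if m:
        sign, digits, unit = m.group(1), m.group(2), m.group(3)
        return ("numeric", int(sign + digits), unit)
    m = _TAG.match(value)
    if m:
        return ("tag", m.group(1))
    # An unsupported unit or malformed value maps to the 404 status
    # "Illegal or Unsupported value for header" described above.
    raise ValueError("unsupported speech-length-value: %r" % value)
```

   For example, parse_speech_length("+10 Word") yields
   ("numeric", 10, "Word"), and parse_speech_length("exit Tag") yields
   ("tag", "exit").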
Load-Lexicon

   This header field indicates whether a lexicon has to be loaded or
   unloaded. The default value for this field is "true".

     load-lexicon             =    "Load-Lexicon" ":" boolean-value
                                   CRLF
 
Lexicon-Search-Order

   This header field specifies the list of active lexicon URIs and the
   search order among the active lexicons. Note that lexicons
   specified within the SSML document still take precedence over the
   lexicons specified here.

     lexicon-search-order     =    "Lexicon-Search-Order" ":" uri-list
                                   CRLF
 
 
 
 8.5. Synthesizer Message Body
 
 
 
 S Shanmugham                  IETF-Draft                       Page 40
 
                            MRCPv2 Protocol              October, 2004
 
    A synthesizer message may contain additional information associated
    with the Method, Response or Event in its message body.
 
 Synthesizer Speech Data
 
    Marked-up text for the synthesizer to speak is specified as a MIME
    entity in the message body. The message to be spoken by the
    synthesizer can be specified inline by embedding the data in the
   message body or by reference by providing the URI to the data. In
   either case, the data and the format used to mark up the speech
   need to be supported by the server.
 
    All MRCPv2 servers MUST support plain text speech data and W3C's
    Speech Synthesis Markup Language[10] as a minimum and hence MUST
    support the MIME types text/plain and application/synthesis+ssml at
    a minimum.
 
   If the speech data needs to be specified by URI reference, the MIME
   type text/uri-list is used to supply the one or more URIs that list
   what needs to be spoken. If a list of speech URIs is specified, the
   speech data provided by each URI must be spoken in the order in
   which the URIs are specified.
 
   If the data to be spoken consists of a mix of URI references and
   inline speech data, the multipart/mixed MIME-type is used, with
   embedded MIME blocks for text/uri-list, application/synthesis+ssml
   or text/plain. The character set and encoding used in the speech
   data may be specified according to standard MIME-type definitions.
   The multipart MIME block can also contain actual audio data in .wav
   or Sun audio format. This is used when the client has audio clips,
   recorded and stored in memory or on a local device, that it needs
   to play as part of the SPEAK request. Such audio MIME parts are
   sent by the client as part of the multipart MIME block, and the
   audio is referenced from the speech markup data carried in another
   part, according to the multipart/mixed MIME-type specification.
 
    Example 1:
         Content-Type: text/uri-list
         Content-Length: 176
 
         http://www.example.com/ASR-Introduction.sml
         http://www.example.com/ASR-Document-Part1.sml
         http://www.example.com/ASR-Document-Part2.sml
         http://www.example.com/ASR-Conclusion.sml
 
    Example 2:
         Content-Type: application/synthesis+ssml
         Content-Length: 104
 
         <?xml version="1.0"?>
 
 
         <speak>
         <paragraph>
                  <sentence>You have 4 new messages.</sentence>
                  <sentence>The first is from <say-as
                  type="name">Stephanie Williams</say-as>
                  and arrived at <break/>
                  <say-as type="time">3:45pm</say-as>.</sentence>
 
                  <sentence>The subject is <prosody
                  rate="-20%">ski trip</prosody></sentence>
         </paragraph>
         </speak>
 
    Example 3:
         Content-Type: multipart/mixed; boundary="break"
 
         --break
         Content-Type: text/uri-list
         Content-Length: 176
 
         http://www.example.com/ASR-Introduction.sml
         http://www.example.com/ASR-Document-Part1.sml
         http://www.example.com/ASR-Document-Part2.sml
         http://www.example.com/ASR-Conclusion.sml
 
         --break
         Content-Type: application/synthesis+ssml
         Content-Length: 104
 
         <?xml version="1.0"?>
         <speak>
         <paragraph>
                  <sentence>You have 4 new messages.</sentence>
                  <sentence>The first is from <say-as
                  type="name">Stephanie Williams</say-as>
                  and arrived at <break/>
                  <say-as type="time">3:45pm</say-as>.</sentence>
 
                  <sentence>The subject is <prosody
                  rate="-20%">ski trip</prosody></sentence>
         </paragraph>
         </speak>
         --break--
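
   The multipart assembly shown in Example 3 can be sketched as
   follows. This is a non-normative illustration; the helper name and
   fixed boundary are assumptions of the sketch, and the caller still
   sends the top-level Content-Type: multipart/mixed header.

```python
def build_multipart_speech_body(uris, ssml, boundary="break"):
    """Assemble a multipart/mixed body mixing a text/uri-list part and
    an inline application/synthesis+ssml part, as in Example 3 above.
    Each part carries its own Content-Type and Content-Length."""
    uri_part = "\r\n".join(uris)
    parts = [
        ("text/uri-list", uri_part),
        ("application/synthesis+ssml", ssml),
    ]
    lines = []
    for ctype, payload in parts:
        lines.append("--" + boundary)
        lines.append("Content-Type: " + ctype)
        lines.append("Content-Length: %d" % len(payload.encode("utf-8")))
        lines.append("")          # blank line between headers and body
        lines.append(payload)
        lines.append("")
    lines.append("--" + boundary + "--")  # closing boundary
    return "\r\n".join(lines)
```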
 
Lexicon Data

   Synthesizer lexicon data from the client to the server can be
   provided inline or by reference. Either way, it is carried as MIME
   entities in the message body of the MRCPv2 request message.
 
 
 
 
    When a lexicon is specified in-line in the message, the client MUST
    provide a content-id for that lexicon as part of the content
    headers. The server MUST store the lexicon associated with that
    content-id for the duration of the session. A stored lexicon can be
    overwritten by defining a new lexicon with the same content-id.
    Lexicons that have been associated with a content-id can be
    referenced through a special "session:" URI scheme.
 
 
   If lexicon data needs to be specified by external URI reference,
   the MIME-type text/uri-list is used to list the one or more URIs
   that specify the lexicon data. All media servers MUST support the
   HTTP URI access mechanism.
 
   If the data to be defined consists of a mix of URI references and
   inline lexicon data, the multipart/mixed MIME-type is used. The
   character set and encoding used in the lexicon data may be
   specified according to standard MIME-type definitions.
 
 8.6. SPEAK
 
    The SPEAK method from the client to the server provides the
    synthesizer resource with the speech text and initiates speech
    synthesis and streaming. The SPEAK method can carry voice and
    prosody header fields that define the behavior of the voice being
   synthesized, as well as the actual marked-up text to be spoken. If
   specific voice and prosody parameters are specified as part of the
   speech markup text, they take precedence over the values specified
   in the header fields and over those set using a previous SET-PARAMS
   request.
 
   When applying voice parameters, there are three levels of scope.
   Highest precedence goes to those specified within the speech markup
   text, followed by those specified in the header fields of the SPEAK
   request (which apply to that SPEAK request only), followed by the
   session default values, which can be set using the SET-PARAMS
   request and apply for the remainder of the session.
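
   The three scopes can be pictured as a lookup chain, highest
   precedence first. A non-normative sketch; the parameter names below
   are placeholders.

```python
from collections import ChainMap

# Highest precedence first: values in the speech markup, then the
# SPEAK request's header fields, then session defaults via SET-PARAMS.
def effective_params(markup_values, speak_headers, session_defaults):
    return ChainMap(markup_values, speak_headers, session_defaults)

params = effective_params(
    {"prosody-rate": "-20%"},                    # from the SSML markup
    {"voice-gender": "neutral"},                 # from this SPEAK request
    {"voice-gender": "female",                   # session defaults set
     "prosody-volume": "medium"},                # by SET-PARAMS
)
```

   Here params["voice-gender"] resolves to "neutral" (the SPEAK header
   shadows the session default), while "prosody-rate" comes from the
   markup and "prosody-volume" from the session defaults.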
 
   If the resource is idle when the SPEAK request arrives, the request
   is processed immediately and the resource responds with a success
   status code and a request-state of IN-PROGRESS.
 
   If the resource is in the speaking or paused state, i.e. it is in
   the middle of processing a previous SPEAK request, the response
   returns a success status code and a request-state of PENDING. This
   means that this SPEAK request has been placed in the request queue
   and will be processed, in the order received, after the currently
   active SPEAK request and any previously queued SPEAK requests are
   completed.
 
    For the synthesizer resource, this is the only request that can
    return a request-state of IN-PROGRESS or PENDING.
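
   The queueing behavior above can be sketched as a small state
   machine. This is illustrative only; the class and method names are
   invented for the sketch.

```python
class SynthQueue:
    """Sketch of SPEAK queueing: a request on an idle resource goes
    IN-PROGRESS; otherwise it is queued and answered PENDING."""

    def __init__(self):
        self.active = None   # request-id currently speaking, or None
        self.pending = []    # queued request-ids, in arrival order

    def speak(self, request_id):
        if self.active is None:
            self.active = request_id
            return "IN-PROGRESS"
        self.pending.append(request_id)
        return "PENDING"

    def speak_complete(self):
        """The active request finished; the next queued one, if any,
        becomes active. Returns the request-id that completed."""
        done = self.active
        self.active = self.pending.pop(0) if self.pending else None
        return done
```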
 
 
   When it has finished speaking the text, the resource issues a
   SPEAK-COMPLETE event with the request-id of the SPEAK message and a
   request-state of COMPLETE.
 
    Example:
      C->S:MRCP/2.0 489 SPEAK 543257
           Channel-Identifier: 32AECB23433802@speechsynth
           Voice-gender: neutral
           Voice-category: teenager
           Prosody-volume: medium
           Content-Type: application/synthesis+ssml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <speak>
           <paragraph>
             <sentence>You have 4 new messages.</sentence>
             <sentence>The first is from <say-as
             type="name">Stephanie Williams</say-as>
             and arrived at <break/>
             <say-as type="time">3:45pm</say-as>.</sentence>
 
             <sentence>The subject is <prosody
             rate="-20%">ski trip</prosody></sentence>
           </paragraph>
           </speak>
 
 
      S->C:MRCP/2.0 28 543257 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
 
 
      S->C:MRCP/2.0 79 SPEAK-COMPLETE 543257 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Completion-Cause: 000 normal
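
   The number after the version token in these start lines is the total
   message length in octets, which includes the digits of the length
   field itself. A non-normative sketch of framing such a request (the
   helper name is invented; header folding and other details are
   omitted):

```python
def frame_mrcp_request(method, request_id, headers, body=""):
    """Build an MRCPv2 request whose start line carries the total
    message length in octets. Because the length field counts its own
    digits, iterate until the value is stable."""
    header_block = "".join("%s: %s\r\n" % (k, v) for k, v in headers)
    tail = " %s %s\r\n%s\r\n%s" % (method, request_id, header_block, body)
    length = 0
    while True:
        candidate = len(("MRCP/2.0 %d" % length) + tail)
        if candidate == length:
            return "MRCP/2.0 %d%s" % (length, tail)
        length = candidate
```

   Reparsing the result confirms the invariant: the second token of the
   start line equals the byte length of the whole message.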
 
 
 8.7. STOP
 
    The STOP method from the client to the server tells the resource to
    stop speaking if it is speaking something.
 
   The STOP request can be sent with an active-request-id-list header
   field to stop specific SPEAK requests that may be in the queue; the
   server returns a response code of 200 (Success). If no active-
   request-id-list header field is sent in the STOP request, it
   terminates all outstanding SPEAK requests.
 
   If a STOP request successfully terminates one or more PENDING or
   IN-PROGRESS SPEAK requests, the response contains an active-
   request-id-list header field listing the SPEAK request-ids
 
 
    that were terminated. Otherwise there will be no active-request-id-
    list header field in the response. No SPEAK-COMPLETE events will be
    sent for these terminated requests.
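
   The STOP semantics above can be sketched as a pure function over the
   request queue. This is non-normative; the names are invented, and
   the returned list corresponds to the Active-Request-Id-List of the
   response (the header is omitted when the list is empty).

```python
def apply_stop(active, pending, request_id_list=None):
    """active: the IN-PROGRESS request-id or None; pending: queued
    request-ids in order. With no request_id_list, all outstanding
    SPEAK requests terminate.
    Returns (terminated_ids, new_active, new_pending)."""
    outstanding = ([active] if active is not None else []) + list(pending)
    if request_id_list is None:
        terminated = outstanding
    else:
        terminated = [r for r in outstanding if r in request_id_list]
    remaining = [r for r in outstanding if r not in terminated]
    # If the speaking request was stopped, the next pending one starts.
    new_active = remaining[0] if remaining else None
    return terminated, new_active, remaining[1:]
```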
 
   If a SPEAK request that was IN-PROGRESS and speaking is stopped,
   the next pending SPEAK request, if any, becomes IN-PROGRESS and
   moves to the speaking state.

   If a SPEAK request that was IN-PROGRESS and in the paused state is
   stopped, the next pending SPEAK request, if any, becomes IN-
   PROGRESS and moves to the paused state.
 
    Example:
      C->S:MRCP/2.0 423 SPEAK 543258
           Channel-Identifier: 32AECB23433802@speechsynth
           Content-Type: application/synthesis+ssml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <speak>
           <paragraph>
             <sentence>You have 4 new messages.</sentence>
             <sentence>The first is from <say-as
             type="name">Stephanie Williams</say-as>
             and arrived at <break/>
             <say-as type="time">3:45pm</say-as>.</sentence>
 
             <sentence>The subject is <prosody
             rate="-20%">ski trip</prosody></sentence>
           </paragraph>
           </speak>
 
 
      S->C:MRCP/2.0 48 543258 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
 
     C->S:MRCP/2.0 44 STOP 543259
           Channel-Identifier: 32AECB23433802@speechsynth
 
      S->C:MRCP/2.0 66 543259 200 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Active-Request-Id-List: 543258
 
 
 8.8. BARGE-IN-OCCURRED
 
    The BARGE-IN-OCCURRED method is a mechanism for the client to
    communicate a barge-in-able event it detects to the speech resource.
 
   This event is useful in two scenarios:
 
 
 
   1. The client has detected an event, such as DTMF digits or another
   barge-in-able event, and wants to communicate it to the
   synthesizer.

   2. The recognizer resource and the synthesizer resource are in
   different servers, in which case the client MUST act as a proxy: it
   receives the event from the recognition resource and then sends a
   BARGE-IN-OCCURRED method to the synthesizer. In such cases, the
   BARGE-IN-OCCURRED method would also carry a proxy-sync-id header
   field received from the resource generating the original event.
 
   If a SPEAK request is active with kill-on-barge-in enabled, and a
   BARGE-IN-OCCURRED method is received, the synthesizer should stop
   streaming out audio. It should also terminate any speech requests
    queued behind the current active one, irrespective of whether they
    have barge-in enabled or not. If a barge-in-able prompt was playing
    and it was terminated, the response MUST contain the request-ids of
    all SPEAK requests that were terminated in its active-request-id-
    list. There will be no SPEAK-COMPLETE events generated for these
    requests.
 
    If the synthesizer and the recognizer are part of the same session
    they could be optimized for a quicker kill-on-barge-in response by
    the recognizer and synthesizer interacting directly. In these cases,
    the client MUST still proxy the START-OF-SPEECH event through a
    BARGE-IN-OCCURRED method, but the synthesizer resource may have
    already stopped and sent a SPEAK-COMPLETE event with a barge in
    completion cause code.  If there were no SPEAK requests terminated
    as a result of the BARGE-IN-OCCURRED method, the response would
    still be a 200 success but MUST NOT contain an active-request-id-
    list header field.
 
      C->S:MRCP/2.0 433 SPEAK 543258
           Channel-Identifier: 32AECB23433802@speechsynth
           Voice-gender: neutral
           Voice-category: teenager
           Prosody-volume: medium
           Content-Type: application/synthesis+ssml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <speak>
           <paragraph>
             <sentence>You have 4 new messages.</sentence>
             <sentence>The first is from <say-as
             type="name">Stephanie Williams</say-as>
             and arrived at <break/>
             <say-as type="time">3:45pm</say-as>.</sentence>
 
             <sentence>The subject is <prosody
             rate="-20%">ski trip</prosody></sentence>
           </paragraph>
 
 
           </speak>
 
      S->C:MRCP/2.0 47 543258 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
 
     C->S:MRCP/2.0 69 BARGE-IN-OCCURRED 543259
           Channel-Identifier: 32AECB23433802@speechsynth
           Proxy-Sync-Id: 987654321
 
      S->C:MRCP/2.0 72 543259 200 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Active-Request-Id-List: 543258
 
 
 8.9. PAUSE
 
   The PAUSE method from the client to the server tells the resource
   to pause speech, if it is speaking something. If a PAUSE method is
   issued on a session when no SPEAK request is active, the server
   SHOULD respond with a status of 402 "Method not valid in this
   state". If a PAUSE method is issued on a session when a SPEAK
   request is active and already paused, the server SHOULD respond
   with a status of 200 "Success". If a SPEAK request was active, the
   server MUST return an active-request-id-list header with the
   request-id of the SPEAK request that was paused.
 
      C->S:MRCP/2.0 434 SPEAK 543258
           Channel-Identifier: 32AECB23433802@speechsynth
           Voice-gender: neutral
           Voice-category: teenager
           Prosody-volume: medium
           Content-Type: application/synthesis+ssml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <speak>
           <paragraph>
             <sentence>You have 4 new messages.</sentence>
             <sentence>The first is from <say-as
             type="name">Stephanie Williams</say-as>
             and arrived at <break/>
             <say-as type="time">3:45pm</say-as>.</sentence>
 
             <sentence>The subject is <prosody
             rate="-20%">ski trip</prosody></sentence>
           </paragraph>
           </speak>
 
      S->C:MRCP/2.0 48 543258 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
 
 
 
      C->S:MRCP/2.0 43 PAUSE 543259
           Channel-Identifier: 32AECB23433802@speechsynth
 
      S->C:MRCP/2.0 68 543259 200 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Active-Request-Id-List: 543258
 
 8.10.     RESUME
 
   The RESUME method from the client to the server tells a paused
   synthesizer resource to continue speaking. If a RESUME method is
   issued on a session with no active SPEAK request, the server SHOULD
   respond with a status of 402 "Method not valid in this state". If a
   RESUME method is issued on a session whose active SPEAK request is
   already speaking (i.e. not paused), the server SHOULD respond with
   a status of 200 "Success". If a SPEAK request was active, the
   server MUST return an active-request-id-list header with the
   request-id of the SPEAK request that was resumed.
 
    Example:
      C->S:MRCP/2.0 434 SPEAK 543258
           Channel-Identifier: 32AECB23433802@speechsynth
           Voice-gender: neutral
           Voice-category: teenager
           Prosody-volume: medium
           Content-Type: application/synthesis+ssml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <speak>
           <paragraph>
               <sentence>You have 4 new messages.</sentence>
               <sentence>The first is from <say-as
               type="name">Stephanie Williams</say-as>
               and arrived at <break/>
               <say-as type="time">3:45pm</say-as>.</sentence>
 
               <sentence>The subject is <prosody
               rate="-20%">ski trip</prosody></sentence>
           </paragraph>
           </speak>
 
     S->C:MRCP/2.0 48 543258 200 IN-PROGRESS
          Channel-Identifier: 32AECB23433802@speechsynth
 
      C->S:MRCP/2.0 44 PAUSE 543259
           Channel-Identifier: 32AECB23433802@speechsynth
 
      S->C:MRCP/2.0 47 543259 200 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Active-Request-Id-List: 543258
 
 
 
      C->S:MRCP/2.0 44 RESUME 543260
           Channel-Identifier: 32AECB23433802@speechsynth
 
      S->C:MRCP/2.0 66 543260 200 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Active-Request-Id-List: 543258
 
 8.11.     CONTROL
 
   The CONTROL method from the client to the server tells a
   synthesizer that is speaking to modify what it is speaking on the
   fly. This method is used to make the synthesizer jump forward or
   backward in what it is speaking, change the speaking rate and other
   speaker parameters, etc. It affects the active or IN-PROGRESS SPEAK
   request. Depending on the implementation and capability of the
   synthesizer resource, it may support this operation as a whole or
   only some of its headers.
 
    When a CONTROL to jump forward is issued and the operation goes
    beyond the end of the active SPEAK method's text, the CONTROL
    request succeeds. Also, the active SPEAK request completes and
    returns a SPEAK-COMPLETE event following the response to the CONTROL
    method. If there are more SPEAK requests in the queue, the
    synthesizer resource will start at the beginning of the next SPEAK
    request in the queue.
 
    When a CONTROL to jump backwards is issued and the operation jumps
    to the beginning or beyond the beginning of the speech data of the
    active SPEAK request, the response to the CONTROL request contains
    the speak-restart header, and the active SPEAK request starts from
    the beginning of its speech data.
 
   These two behaviors can be used to rewind or fast-forward across
   multiple speech requests, if the client wants to break up a speech
   markup text into multiple SPEAK requests.
 
    If a SPEAK request was active when the CONTROL method was received
    the server MUST return an active-request-id-list header with the
    Request-id of the SPEAK request that was active.
 
    Example:
      C->S:MRCP/2.0 434 SPEAK 543258
           Channel-Identifier: 32AECB23433802@speechsynth
           Voice-gender: neutral
           Voice-category: teenager
           Prosody-volume: medium
           Content-Type: application/synthesis+ssml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <speak>
 
 
           <paragraph>
             <sentence>You have 4 new messages.</sentence>
             <sentence>The first is from <say-as
             type="name">Stephanie Williams</say-as>
             and arrived at <break/>
             <say-as type="time">3:45pm</say-as>.</sentence>
 
             <sentence>The subject is <prosody
             rate="-20%">ski trip</prosody></sentence>
           </paragraph>
           </speak>
 
 
      S->C:MRCP/2.0 47 543258 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
 
      C->S:MRCP/2.0 63 CONTROL 543259
           Channel-Identifier: 32AECB23433802@speechsynth
           Prosody-rate: fast
 
      S->C:MRCP/2.0 67 543259 200 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Active-Request-Id-List: 543258
 
      C->S:MRCP/2.0 68 CONTROL 543260
           Channel-Identifier: 32AECB23433802@speechsynth
           Jump-Size: -15 Words
 
      S->C:MRCP/2.0 69 543260 200 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Active-Request-Id-List: 543258
 
 8.12.     SPEAK-COMPLETE
 
   This is an event message from the synthesizer resource to the
   client indicating that the SPEAK request was completed. The
   request-id header field matches the request-id of the SPEAK request
   that initiated the speech that just completed. The request-state
   field should be COMPLETE, indicating that this is the last event
   with that request-id and that the request with that request-id is
   now complete. The completion-cause header field specifies the cause
   code pertaining to the status and reason of request completion,
   such as whether the SPEAK completed normally or terminated because
   of an error, kill-on-barge-in, etc.
 
    Example:
      C->S:MRCP/2.0 434 SPEAK 543260
           Channel-Identifier: 32AECB23433802@speechsynth
           Voice-gender: neutral
           Voice-category: teenager
           Prosody-volume: medium
 
 
           Content-Type: application/synthesis+ssml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <speak>
           <paragraph>
             <sentence>You have 4 new messages.</sentence>
             <sentence>The first is from <say-as
             type="name">Stephanie Williams</say-as>
             and arrived at <break/>
             <say-as type="time">3:45pm</say-as>.</sentence>
 
             <sentence>The subject is <prosody
             rate="-20%">ski trip</prosody></sentence>
           </paragraph>
           </speak>
 
      S->C:MRCP/2.0 48 543260 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
 
      S->C:MRCP/2.0 73 SPEAK-COMPLETE 543260 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Completion-Cause: 000 normal
 
 8.13.     SPEECH-MARKER
 
   This is an event generated by the synthesizer resource to the
   client when it hits a marker tag in the speech markup it is
   currently processing. The request-id field in the header matches
   the request-id of the SPEAK request that initiated the speech. The
   request-state field should be IN-PROGRESS, as the speech is not yet
   complete and there is more to be spoken. The actual speech marker
   tag hit, describing where the synthesizer is in the speech markup,
   is returned in the speech-marker header field, along with an NTP
   timestamp. A SPEECH-MARKER event is also generated, with a marker
   value of "" and the NTP timestamp, when a SPEAK request in the
   PENDING state (in the queue) moves to IN-PROGRESS and starts
   speaking. The NTP timestamp MUST be synchronized with the RTP
   timestamp used to generate the speech stream.
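
   For reference, an NTP timestamp counts seconds since January 1,
   1900, with 32 bits of integer seconds and 32 bits of fraction. A
   sketch of converting a Unix time to that 64-bit form (the helper is
   illustrative, not part of the protocol):

```python
NTP_UNIX_OFFSET = 2208988800  # seconds from 1900-01-01 to 1970-01-01

def to_ntp_timestamp(unix_time):
    """Pack a Unix time (seconds, possibly fractional) into the 64-bit
    NTP format: integer seconds since 1900 in the high 32 bits and the
    fractional second, scaled by 2**32, in the low 32 bits."""
    whole = int(unix_time)
    seconds = (whole + NTP_UNIX_OFFSET) & 0xFFFFFFFF
    fraction = int((unix_time - whole) * (1 << 32)) & 0xFFFFFFFF
    return (seconds << 32) | fraction
```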
 
    Example:
      C->S:MRCP/2.0 434 SPEAK 543261
           Channel-Identifier: 32AECB23433802@speechsynth
           Voice-gender: neutral
           Voice-category: teenager
           Prosody-volume: medium
           Content-Type: application/synthesis+ssml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <speak>
 
 
           <paragraph>
             <sentence>You have 4 new messages.</sentence>
             <sentence>The first is from <say-as
             type="name">Stephanie Williams</say-as>
             and arrived at <break/>
             <say-as type="time">3:45pm</say-as>.</sentence>
             <mark name="here"/>
             <sentence>The subject is
                <prosody rate="-20%">ski trip</prosody>
             </sentence>
             <mark name="ANSWER"/>
           </paragraph>
           </speak>
 
 
      S->C:MRCP/2.0 48 543261 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
 
      S->C:MRCP/2.0 73 SPEECH-MARKER 543261 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
           Speech-Marker: here
 
      S->C:MRCP/2.0 74 SPEECH-MARKER 543261 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
           Speech-Marker: ANSWER
 
      S->C:MRCP/2.0 73 SPEAK-COMPLETE 543261 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Completion-Cause: 000 normal
 
 8.14.     DEFINE-LEXICON
 
    The DEFINE-LEXICON method, from the client to the server, provides a
    lexicon and tells the server to load, unload, activate or deactivate
    the lexicon.
 
   If the server resource is in the speaking or paused state, the
   server MUST respond to the DEFINE-LEXICON request with a failure
   status.
 
    If the resource is in the idle state and is able to successfully
    load/unload/activate/deactivate the lexicon the status MUST return a
    success code and the request-state MUST be COMPLETE.
 
   If the synthesizer could not define the lexicon for some reason
   (e.g., the download failed or the lexicon was in an unsupported
   form), the MRCPv2 response for the DEFINE-LEXICON method MUST
   contain a failure status code of 407 and a completion-cause header
   field describing the failure reason.
 
 
 
 
 
 
 9.   Speech Recognizer Resource
 
    The speech recognizer resource is capable of receiving an incoming
    voice stream and providing the client with an interpretation of what
    was spoken in textual form.
 
    This section applies for the following resource types.
           1. speechrecog
           2. dtmfrecog
 
   The difference between the above two resources is in their level of
   support for recognition grammars. The "dtmfrecog" resource is
   capable of recognizing DTMF digits only and hence will accept DTMF
   grammars only. The "speechrecog" resource can recognize regular
   speech as well as DTMF digits and hence SHOULD support grammars
   describing speech or DTMF. The recognition resource may support
   recognition in the normal mode, the hotword mode, or both. Where a
   single recognition resource does not support both modes, the modes
   can be implemented as separate resources allocated to the same SIP
   session with different MRCP session identifiers, sharing the RTP
   audio feed.
 
 Normal Mode Recognition
    Normal mode recognition tries to match all of the speech or DTMF
    received, from the time recognition starts, against the grammar and
    returns a no-match status if the input fails to match or the
    operation times out.

 Hotword Mode Recognition
    In hotword mode, the recognizer looks for a specific speech grammar
    or DTMF sequence and ignores speech or DTMF that does not match. It
    neither times out nor generates a no-match and completes only on a
    successful match of the grammar.
 
 Voice Enrolled Grammars
    A recognition resource may optionally support Voice Enrolled
    Grammars. With this functionality, enrollment is performed using a
    person's voice. For example, a list of contacts can be created and
    maintained by recording the contact names in the caller's own
    voice. This technique is sometimes also called speaker-dependent
    recognition.
 
    Voice Enrollment has a concept of an enrollment session. A session
    to add a new phrase to a personal grammar involves an initial
    enrollment utterance followed by enough repetitions of the phrase
    before the new phrase is committed to the personal grammar. Each
    time an utterance is recorded, it is compared for similarity with
    the other samples, and a clash test is performed against other
    entries in the personal grammar to ensure that there are no similar
    and confusable entries.
 
 
 
    Enrollment is done using a Recognizer resource. Controlling which
    utterances are to be considered for enrollment of a new phrase is
    done by setting a header field in the RECOGNIZE request.
 
 
 9.1. Recognizer State Machine
 
    The recognizer resource is controlled by MRCPv2 requests from the
    client. Similarly, the resource can respond to these requests or
    generate asynchronous events to the client to indicate certain
    conditions during the processing of the stream. Hence, the
    recognizer maintains a state machine to correlate MRCPv2 requests
    from the client. The state transitions are described below.
 
         Idle                   Recognizing               Recognized
         State                  State                     State
          |                       |                          |
          |---------RECOGNIZE---->|---RECOGNITION-COMPLETE-->|
          |<------STOP------------|<-----RECOGNIZE-----------|
          |                       |                          |
          |                       |              |-----------|
          |              |--------|       GET-RESULT         |
          |       START-OF-SPEECH |              |---------->|
          |------------| |------->|                          |
          |            |          |----------|               |
          |      DEFINE-GRAMMAR   | START-INPUT-TIMERS       |
          |<-----------|          |<---------|               |
          |                       |                          |
          |                       |------|                   |
          |-------|               |   RECOGNIZE              |
          |      STOP             |<-----|                   |
          |<------|                                          |
          |                                                  |
          |<-------------------STOP--------------------------|
          |<-------------------DEFINE-GRAMMAR----------------|
 
    If a recognition resource supports voice enrolled grammars,
    starting an enrollment session does not change the state of the
    recognizer resource. Once an enrollment session is started,
    utterances are enrolled by calling the RECOGNIZE method repeatedly.
    The state of the speech recognizer resource goes from the IDLE to
    the RECOGNIZING state each time RECOGNIZE is called.
 
 9.2. Recognizer Methods
 
    The recognizer supports the following methods.
 
    recognizer-method     =    recog-only-method
                          /    enrollment-method
 
    recog-only-method     =    "DEFINE-GRAMMAR"   ; A
 
                          /    "RECOGNIZE"        ; B
                          /    "INTERPRET"        ; C
                          /    "GET-RESULT"       ; D
                          /    "START-INPUT-TIMERS" ; E
                          /    "STOP"             ; F
 
    It is OPTIONAL for a recognizer resource to support voice enrolled
    grammars. If the recognizer resource does support voice enrolled
    grammars it MUST support the following methods.
 
      enrollment-method   =    "START-PHRASE-ENROLLMENT" ; G
                          /    "ENROLLMENT-ROLLBACK"     ; H
                          /    "END-PHRASE-ENROLLMENT"   ; I
                          /    "MODIFY-PHRASE"           ; J
                          /    "DELETE-PHRASE"           ; K
 
 9.3. Recognizer Events
 
    The recognizer may generate the following events.
      recognizer-event    =    "START-OF-SPEECH"         ; L
                          /    "RECOGNITION-COMPLETE"    ; M
                          /    "INTERPRETATION-COMPLETE" ; N
 
 
 9.4. Recognizer Header Fields
 
    A recognizer message may contain header fields containing request
    options and information to augment the Method, Response or Event
    message it is associated with.
 
      recognizer-header   =    recog-only-header
                          /    enrollment-header
 
      recog-only-header   =    confidence-threshold
                          /    sensitivity-level
                          /    speed-vs-accuracy
                          /    n-best-list-length
                          /    no-input-timeout
                          /    recognition-timeout
                          /    waveform-uri
                          /    input-waveform-uri
                          /    completion-cause
                          /    completion-reason
                          /    recognizer-context-block
                          /    start-input-timers
                          /    speech-complete-timeout
                          /    speech-incomplete-timeout
                          /    dtmf-interdigit-timeout
                          /    dtmf-term-timeout
                          /    dtmf-term-char
                          /    fetch-timeout
 
                          /    failed-uri
                          /    failed-uri-cause
                          /    save-waveform
                          /    new-audio-channel
                          /    speech-language
                          /    ver-buffer-utterance
                          /    recognition-mode
                          /    cancel-if-queue
                          /    hotword-max-duration
                          /    hotword-min-duration
                          /    interpret-text
 
    If a recognition resource supports voice enrolled grammars, the
    following header fields apply towards using that functionality.
 
      enrollment-header  =  num-min-consistent-pronunciations
                          / consistency-threshold
                          / clash-threshold
                          / personal-grammar-uri
                          / phrase-id
                          / phrase-nl
                          / weight
                          / save-best-waveform
                          / new-phrase-id
                          / confusable-phrases-uri
                          / abort-phrase-enrollment
 
    Header field          where    s g A B C D E F G H I J K L M N
          __________________________________________________________
    Confidence-Threshold    R      o o - o - o - - - - - - - - - -
    Sensitivity-Level       R      o o - o - - - - - - - - - - - -
    Speed-Vs-Accuracy       R      o o - o - - - - - - - - - - - -
    N-Best-List-Length      R      o o - o - o - - - - - - - - - -
    No-Input-Timeout        R      o o - o - - - - - - - - - - - -
    Recognition-Timeout     R      o o - o - - - - - - - - - - - -
    Waveform-URI            R      - - - - - - - - - - - - - - o -
    Waveform-URI           2XX     - - - - - - - - - - o - - - - -
    Input-Waveform-URI      R      - - - o - - - - - - - - - - - -
    Completion-Cause        R      - - - - - - - - - - - - - - m m
    Completion-Cause       2XX     - - o o o - - - - - - - - - - -
    Completion-Cause       4XX     - - m m m - - - - - - - - - - -
    Completion-Reason       R      - - - - - - - - - - - - - - m m
    Completion-Reason      2XX     - - o o o - - - - - - - - - - -
    Completion-Reason      4XX     - - m m m - - - - - - - - - - -
    Recognizer-Context-Bl.  R      o o - - - - - - - - - - - - - -
    Start-Input-Timers      R      - - - o - - - - - - - - - - - -
    Speech-Complete-Time.   R      o o - o - - - - - - - - - - - -
    Speech-Incomplete-Time. R      o o - o - - - - - - - - - - - -
    DTMF-Interdigit-Timeo.  R      o o - o - - - - - - - - - - - -
    DTMF-Term-Timeout       R      o o - o - - - - - - - - - - - -
    DTMF-Term-Char          R      o o - o - - - - - - - - - - - -
 
    Fetch-Timeout           R      o o o o - - - - - - - - - - - -
    Failed-URI              R      - - - - - - - - - - - - - - o o
    Failed-URI             4XX     - - o o - - - - - - - - - - - -
    Failed-URI-Cause        R      - - - - - - - - - - - - - - o o
    Failed-URI-Cause       4XX     - - o o - - - - - - - - - - - -
    Save-Waveform           R      o o - o - - - - - - - - - - - -
    New-Audio-Channel       R      - - - o - - - - - - - - - - - -
    Speech-Language         R      o o o o - - - - - - - - - - - -
    Ver-Buffer-Utterance    R      o o - o - - - - - - - - - - - -
    Recognition-Mode        R      - - - o - - - - - - - - - - - -
    Cancel-If-Queue         R      - - - o - - - - - - - - - - - -
    Hotword-Max-Duration    R      o o - o - - - - - - - - - - - -
    Hotword-Min-Duration    R      o o - o - - - - - - - - - - - -
    Interpret-Text          R      - - - - m - - - - - - - - - - -
 
    Num-Min-Consistent-Pr   R      o o - - - - - - o - - - - - - -
    Consistency-Threshold   R      o o - - - - - - o - - - - - - -
    Clash-Threshold         R      o o - - - - - - o - - - - - - -
    Personal-Grammar-URI    R      o o - - - - - - o - - o o - - -
    Phrase-ID               R      - - - - - - - - m - - m m - - -
    Phrase-NL               R      - - - - - - - - o - - o - - - -
    Weight                  R      - - - - - - - - o - - o - - - -
    Save-Best-Waveform      R      o o - - - - - - o - - - - - - -
    New-Phrase-ID           R      - - - - - - - - - - - o - - - -
    Confusable-Phrases-URI  R      - - - o - - - - - - - - - - - -
    Abort-Phrase-Enrollment R      - - - - - - - - - - o - - - - -
 
    Legend:   (s) - SET-PARAMS, (g) - GET-PARAMS, (A) - DEFINE-GRAMMAR,
    (B) - RECOGNIZE, (C) - INTERPRET, (D) - GET-RESULT, (E) - START-
    INPUT-TIMERS, (F) - STOP, (G) - START-PHRASE-ENROLLMENT, (H) -
    ENROLLMENT-ROLLBACK, (I) - END-PHRASE-ENROLLMENT, (J) - MODIFY-
    PHRASE, (K) - DELETE-PHRASE, (L) - START-OF-SPEECH, (M) -
    RECOGNITION-COMPLETE, (N) - INTERPRETATION-COMPLETE, (o) -
    optional, (m) - mandatory (refer to text for further constraints),
    (R) - Request, (2XX) - success response, (4XX) - failure response
 
    For enrollment-specific header fields that can appear as part of
    the SET-PARAMS or GET-PARAMS methods, the following general rule
    applies: the START-PHRASE-ENROLLMENT method MUST be invoked before
    these header fields can be set through the SET-PARAMS method or
    retrieved through the GET-PARAMS method.
 
    Note that the waveform-uri header field of the Recognizer resource
    can also appear in the response to the END-PHRASE-ENROLLMENT method.
 
 
 Confidence Threshold
 
    When a recognition resource recognizes or matches a spoken phrase
    with some portion of the grammar, it associates a confidence level
    with that conclusion. The confidence-threshold header field tells
    the recognizer resource what confidence level should be considered
    a successful match. This is a float value between 0.0 and 1.0
    indicating the recognizer's confidence in the recognition. If the
    recognizer determines that its confidence in all of its recognition
    results is less than the confidence threshold, it MUST return
    no-match as the recognition result. This header field MAY occur in
    RECOGNIZE, SET-PARAMS, or GET-PARAMS. The default value for this
    field is platform specific.
 
      confidence-threshold=    "Confidence-Threshold" ":" FLOAT CRLF
 
 Sensitivity Level
 
    To filter out background noise and not mistake it for speech, the
    recognizer may support a variable level of sound sensitivity. The
    sensitivity-level header field is a float value between 0.0 and 1.0
    and allows the client to set the sensitivity level of the
    recognizer. A higher value for this field means higher sensitivity.
    This header field MAY occur in RECOGNIZE, SET-PARAMS, or
    GET-PARAMS. The default value for this field is platform specific.
 
      sensitivity-level   =    "Sensitivity-Level" ":" FLOAT CRLF
 
 Speed Vs Accuracy
 
    Depending on the implementation and capability of the recognizer
    resource, it may be tunable towards performance or accuracy. Higher
    accuracy may mean more processing and higher CPU utilization, and
    hence fewer calls per server, and vice versa. The speed-vs-accuracy
    header field is a float value between 0.0 and 1.0 that allows the
    client to tune this trade-off. A higher value for this field means
    higher speed. This header field MAY occur in RECOGNIZE, SET-PARAMS,
    or GET-PARAMS. The default value for this field is platform
    specific.
 
      speed-vs-accuracy   =     "Speed-Vs-Accuracy" ":" FLOAT CRLF
 
 N Best List Length
 
    When the recognizer matches an incoming stream with the grammar, it
    may come up with more than one alternative match because of
    confidence levels in certain words or conversation paths. If this
    header field is not specified, the recognition resource by default
    returns only the best match above the confidence threshold. By
    setting this header field, the client can ask the recognition
    resource to return more than one alternative. All alternatives must
    still be above the confidence-threshold. A value greater than one
    does not guarantee that the recognizer will return the requested
    number of alternatives. This header field MAY occur in RECOGNIZE,
    SET-PARAMS, or GET-PARAMS. The minimum and default value for this
    field is 1.
 
      n-best-list-length  =    "N-Best-List-Length" ":" 1*DIGIT CRLF
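    For illustration, several of the tuning header fields above can be
    combined in a single RECOGNIZE request (all values, the channel
    identifier, request ID, and grammar content are placeholders for
    this sketch, and message lengths are elided):

      C->S:MRCP/2.0 ... RECOGNIZE 543257
           Channel-Identifier: 32AECB23433801@speechrecog
           Confidence-Threshold: 0.8
           Sensitivity-Level: 0.5
           N-Best-List-Length: 3
           Content-Type: application/srgs+xml
           Content-Length: ...

           (grammar document)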
 
 
 No Input Timeout
 
    When recognition is started and there is no speech detected for a
    certain period of time, the recognizer can send a RECOGNITION-
    COMPLETE event to the client and terminate the recognition
    operation. The no-input-timeout header field can set this timeout
    value. The value is in milliseconds. This header field MAY occur in
    RECOGNIZE, SET-PARAMS or GET-PARAMS. The value for this field ranges
    from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific. The
    default value for this field is platform specific.
 
      no-input-timeout    =    "No-Input-Timeout" ":" 1*DIGIT CRLF
 
 Recognition Timeout
 
    When recognition is started and there is no match for a certain
    period of time, the recognizer can send a RECOGNITION-COMPLETE
    event to the client and terminate the recognition operation. This
    timer is started when the START-OF-SPEECH event is generated by the
    resource and bounds the maximum duration of the utterance. When
    this timer expires, the recognition request completes with a
    completion cause of "008 too-much-speech-timeout". The
    recognition-timeout header field sets this timeout value. The value
    is in milliseconds and ranges from 0 to MAXTIMEOUT, where
    MAXTIMEOUT is platform specific. The default value is 10 seconds.
    This header field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS.
 
 
      recognition-timeout =    "Recognition-Timeout" ":" 1*DIGIT CRLF
 
 Waveform URI
 
    If the save-waveform header field is set to true, the recognizer
    MUST record the incoming audio stream of the recognition into a file
    and provide a URI for the client to access it. This header MUST be
    present in the RECOGNITION-COMPLETE event if the save-waveform
    header field was set to true. The URI value of the header MUST be
    NULL if there was some error condition preventing the server from
    recording. Otherwise, the URI generated by the server SHOULD be
    globally unique across the server and all its recognition sessions.
    The URI SHOULD be available until the session is torn down.
 
    Similarly, if the save-best-waveform header field is set to true,
    the recognizer MUST save the audio stream for the best repetition of
    the phrase that was used during the enrollment session.  The
    recognizer MUST then record the recognized audio and make it
    available to the client in the form of a URI returned in the
    waveform-uri header field in the response to the END-PHRASE-
    ENROLLMENT method. The URI value of the header MUST be NULL if
    there was some error condition preventing the server from
    recording. Otherwise, the URI generated by the server SHOULD be
    globally unique across the server and all its recognition sessions,
    and SHOULD be available until the session is torn down.
 
      waveform-uri        =    "Waveform-URI" ":" Uri CRLF
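    For illustration, a RECOGNITION-COMPLETE event returning a recorded
    waveform might look as follows (the URI, channel identifier, and
    request ID are placeholders for this sketch, and message lengths
    are elided):

      S->C:MRCP/2.0 ... RECOGNITION-COMPLETE 543257 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Completion-Cause: 000 success
           Waveform-URI: http://mediaserver.example.com/543257/audio.wav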
 
 Input-Waveform-URI

    This optional header field specifies a URI pointing to audio
    content to be processed as part of the RECOGNIZE operation. This
    enables the client to request recognition from a specified buffer
    or audio file. It MAY be part of the RECOGNIZE method.
 
      input-waveform-uri    = "Input-Waveform-URI" ":" Uri CRLF
 
 Completion Cause
 
    This header field MUST be part of a RECOGNITION-COMPLETE event
    coming from the recognizer resource to the client. It indicates the
    reason behind the RECOGNIZE method completion. This header field
    MUST be sent in the DEFINE-GRAMMAR and RECOGNIZE responses if they
    return with a failure status and a COMPLETE state.
 
      completion-cause    =    "Completion-Cause" ":" 1*DIGIT SP
                               1*VCHAR CRLF
 
      Cause-Code     Cause-Name     Description
 
        000           success       RECOGNIZE completed with a match or
                                    DEFINE-GRAMMAR succeeded in
                                    downloading and compiling the
                                    grammar
        001           no-match      RECOGNIZE completed, but no match
                                    was found
        002           no-input-timeout
                                    RECOGNIZE completed without a match
                                    due to a no-input-timeout
        003           recognition-timeout
                                    RECOGNIZE completed without a match
                                    due to a recognition-timeout
        004           gram-load-failure
                                    RECOGNIZE failed due to grammar
                                    load failure.
        005           gram-comp-failure
                                    RECOGNIZE failed due to grammar
                                    compilation failure.
        006           error         RECOGNIZE request terminated
                                    prematurely due to a recognizer
                                    error.
        007           speech-too-early
                                    RECOGNIZE request terminated because
                                    speech was too early. This happens
                                    when the audio stream is already
                                    "in-speech" when the RECOGNIZE
                                    request was received.
        008           too-much-speech-timeout
                                    RECOGNIZE request terminated because
                                    speech was too long.
        009           uri-failure   Failure accessing a URI.
        010           language-unsupported
                                    Language not supported.
        011           cancelled     A new RECOGNIZE cancelled this one.
        012           semantics-failure
                                    Recognition succeeded but semantic
                                    interpretation of the recognized
                                    input failed. The RECOGNITION-
                                    COMPLETE event MUST contain the
                                    Recognition result with only input
                                    text and no interpretation.
 
 Completion Reason
 
    This header field MAY be specified in a RECOGNITION-COMPLETE event
    coming from the recognizer resource to the client. It contains text
    describing the reason behind the RECOGNIZE request completion. This
    field can be used to communicate text describing the reason for a
    failure, such as an error in parsing the grammar markup text.
 
      completion-reason   =    "Completion-Reason" ":"
                               quoted-string CRLF
 
 Recognizer Context Block
 
    This header field MAY be sent as part of the SET-PARAMS or
    GET-PARAMS request. If the GET-PARAMS method contains this header
    field with no value, it is a request to the recognizer to return
    the recognizer context block. The response to such a message MAY
    contain a recognizer context block as a message entity. If the
    server returns a recognizer context block, the response MUST
    contain this header field and its value MUST match the content-id
    of that entity.
 
    If the SET-PARAMS method contains this header field, it MUST contain
    a message entity containing the recognizer context data, and a
    content-id matching this header field.  This content-id should match
    the content-id that came with the context data during the GET-PARAMS
    operation.
 
    Each recognition vendor choosing to use this mechanism to hand off
    recognizer context data between servers MUST distinguish its
    vendor-specific block of data by using an IANA-registered content
    type in the IANA MIME vendor tree.
 
 
 
      recognizer-context-block =    "Recognizer-Context-Block" ":"
                                    1*VCHAR CRLF
 
 Start Input Timers
 
    This header field MAY be sent as part of the RECOGNIZE request. A
    value of false tells the recognizer to start recognition but not to
    start the no-input timer yet. The recognizer should not start the
    timers until the client sends a START-INPUT-TIMERS request to the
    recognizer. This is useful in the scenario where the recognizer and
    synthesizer engines are not part of the same session. When a
    kill-on-barge-in prompt is being played, the client wants the
    RECOGNIZE request to be simultaneously active so that it can detect
    and implement kill-on-barge-in, but does not want the recognizer to
    start the no-input timers until the prompt is finished. The default
    value is "true".
 
      start-input-timers  =    "Start-Input-Timers" ":"
                                    boolean-value CRLF
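    For illustration, a client playing a kill-on-barge-in prompt on a
    separate synthesizer session could hold off the timers in the
    RECOGNIZE request and start them once the prompt completes (the
    channel identifier, request IDs, and grammar content are
    placeholders for this sketch, and message lengths are elided):

      C->S:MRCP/2.0 ... RECOGNIZE 543258
           Channel-Identifier: 32AECB23433801@speechrecog
           Start-Input-Timers: false
           Content-Type: application/srgs+xml
           Content-Length: ...

           (grammar document)

      C->S:MRCP/2.0 ... START-INPUT-TIMERS 543259
           Channel-Identifier: 32AECB23433801@speechrecog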
 
 Speech Complete Timeout
 
    This header field specifies the length of silence required
    following user speech before the speech recognizer finalizes a
    result (either accepting it or generating a no-match event). The
    speech-complete-timeout value is used when the recognizer currently
    has a complete match of an active grammar, and specifies how long
    it should wait for more input before declaring a match. By
    contrast, the incomplete timeout is used when the speech is an
    incomplete match to an active grammar. The value is in
    milliseconds.
 
      speech-complete-timeout= "Speech-Complete-Timeout" ":"
                               1*DIGIT CRLF
 
    A long speech-complete-timeout value delays the result completion
    and therefore makes the computer's response slow. A short speech-
    complete-timeout may lead to an utterance being broken up
    inappropriately. Reasonable complete timeout values are typically in
    the range of 0.3 seconds to 1.0 seconds.  The value for this field
    ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific.
    The default value for this field is platform specific. This header
    field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS.
 
 Speech Incomplete Timeout
 
    This header field specifies the required length of silence following
    user speech after which a recognizer finalizes a result.  The
    incomplete timeout applies when the speech prior to the silence is
    an incomplete match of all active grammars.  In this case, once the
    timeout is triggered, the partial result is rejected (with a nomatch
    event). The value is in milliseconds. The value for this field
    ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific.
    The default value for this field is platform specific.
 
      speech-incomplete-timeout= "Speech-Incomplete-Timeout" ":"
                               1*DIGIT CRLF
 
    The speech-incomplete-timeout also applies when the speech prior to
    the silence is a complete match of an active grammar, but where it
    is possible to speak further and still match the grammar.  By
    contrast, the complete timeout is used when the speech is a complete
    match to an active grammar and no further words can be spoken.
 
    A long speech-incomplete-timeout value delays the result completion
    and therefore makes the computer's response slow. A short speech-
    incomplete-timeout may lead to an utterance being broken up
    inappropriately.
 
    The speech-incomplete-timeout is usually longer than the speech-
    complete-timeout to allow users to pause mid-utterance (for example,
    to breathe). This header field MAY occur in RECOGNIZE, SET-PARAMS or
    GET-PARAMS.
 
 DTMF Interdigit Timeout
 
    This header field specifies the inter-digit timeout value to use
    when recognizing DTMF input. The value is in milliseconds.  The
    value for this field ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT
    is platform specific. The default value is 5 seconds. This header
    field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS.
 
      dtmf-interdigit-timeout= "DTMF-Interdigit-Timeout" ":"
                               1*DIGIT CRLF
 
 DTMF Term Timeout
 
    This header field specifies the terminating timeout to use when
    recognizing DTMF input. The DTMF-Term-Timeout applies only when no
    additional input is allowed by the grammar; otherwise, the
    DTMF-Interdigit-Timeout applies. The value is in milliseconds. The
    value for this field ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT
    is platform specific. The default value is 10 seconds. This header
    field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS.
 
      dtmf-term-timeout   =    "DTMF-Term-Timeout" ":" 1*DIGIT CRLF
 
 DTMF-Term-Char
 
    This header field specifies the terminating DTMF character for DTMF
    input recognition. The default value is NULL, which is specified as
    an empty header field. This header field MAY occur in RECOGNIZE,
    SET-PARAMS or GET-PARAMS.
 
      dtmf-term-char      =    "DTMF-Term-Char" ":" VCHAR CRLF
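    For illustration, a RECOGNIZE request collecting DTMF input could
    set the DTMF timing header fields together (all values, the channel
    identifier, request ID, and grammar content are placeholders for
    this sketch, and message lengths are elided):

      C->S:MRCP/2.0 ... RECOGNIZE 543262
           Channel-Identifier: 32AECB23433803@dtmfrecog
           DTMF-Interdigit-Timeout: 3000
           DTMF-Term-Timeout: 5000
           DTMF-Term-Char: #
           Content-Type: application/srgs+xml
           Content-Length: ...

           (DTMF grammar document)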
 
 Fetch Timeout
 
    When the recognizer needs to fetch grammar documents, this header
    field controls the corresponding URI access properties. It defines
    the recognizer timeout for content that the server may need to
    fetch from the network. The value is in milliseconds. The value for
    this field ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is
    platform specific. The default value for this field is platform
    specific. This header field MAY occur in RECOGNIZE, SET-PARAMS or
    GET-PARAMS.

      fetch-timeout       =    "Fetch-Timeout" ":" 1*DIGIT CRLF
 
 Failed URI
 
    When a recognizer method requires the recognizer to fetch or access
    a URI and the access fails, the server SHOULD provide the failed
    URI in this header field in the method response.
 
      failed-uri               =    "Failed-URI" ":" Uri CRLF
 
 Failed URI Cause
 
    When a recognizer method requires the recognizer to fetch or access
    a URI and the access fails, the server SHOULD provide the
    URI-specific or protocol-specific response code through this header
    field in the method response. This field is defined as alphanumeric
    to accommodate all protocols, some of which might have a response
    string instead of a numeric response code.
 
      failed-uri-cause         =    "Failed-URI-Cause" ":" 1*ALPHANUM
                                    CRLF
 
 Save Waveform
 
    This header field allows the client to indicate to the recognizer
    that it MUST save the audio stream that was recognized. The
    recognizer MUST then record the recognized audio, without
    end-pointing, and make it available to the client in the form of a
    URI returned in the waveform-uri header field of the RECOGNITION-
    COMPLETE event. If there was an error in recording the stream, or
    the audio clip is otherwise not available, the recognizer MUST
    return an empty waveform-uri header field. The default value for
    this field is "false".
 
      save-waveform       =    "Save-Waveform" ":" boolean-value CRLF
 
 
 New Audio Channel
 
    This header field MAY be specified in a RECOGNIZE message and
    allows the client to tell the server that, from this point on, it
    will be sending audio data from a new audio source, channel, or
    speaker. If the recognition resource has collected any line
    statistics or information, it MUST discard them and start fresh for
    this RECOGNIZE. Note that if there are multiple resources on the
    same SIP session that may be collecting or using these line
    statistics, the client MUST reset the line statistics for all of
    these resources. This helps in the case where the client may want
    to reuse an open recognition session with a media resource for
    multiple telephone calls.
 
      new-audio-channel   =    "New-Audio-Channel" ":" boolean-value
                               CRLF
 
 Speech-Language
 
    This header field specifies the language of the recognition
    grammar data within a session or request, if it is not specified
    within the data itself. The value of this header field should
    follow RFC 3066. This header field MAY occur in DEFINE-GRAMMAR,
    RECOGNIZE, SET-PARAMS or GET-PARAMS requests.
 
      speech-language          =    "Speech-Language" ":" 1*VCHAR CRLF
 
 Ver-Buffer-Utterance
 
    This header field is the same as the one described for the
    verification resource. It tells the server to buffer the utterance
    associated with this recognition request into the verification
    buffer. Sending this header field is not valid if the verification
    buffer has not been instantiated for the session. This buffer is
    shared across resources within a session; it gets instantiated
    when a verification resource is added to the session and is
    released when that resource is released from the session.
 
 Recognition-Mode
 
     This header field specifies the mode in which the RECOGNIZE
     command should start. The value choices are "normal" or "hotword".
     If the value is "normal", the RECOGNIZE starts matching all speech
     and DTMF from that point against the grammars specified in the
     RECOGNIZE command. If any portion of the speech does not match the
     grammar, the RECOGNIZE command completes with a no-match status.
     Also, timers may be active to detect speech in the audio, and the
     RECOGNIZE command may finish because of a timeout waiting for
     speech. If the value of this header field is "hotword", the
     RECOGNIZE command starts in hotword mode, where it only looks for
     the particular keywords or DTMF sequences specified in the grammar
     and ignores silence and other speech in the audio stream. The
     default value for this header field is "normal".
 
      recognition-mode         =    "Recognition-Mode" ":" 1*ALPHA CRLF
 
 Cancel-If-Queue
 
     This header field specifies what should happen to this RECOGNIZE
     method when the client queues more RECOGNIZE methods to the
     resource. The value of this header field is boolean. A value of
     "true" in a RECOGNIZE method means that the RECOGNIZE method,
     when active, MUST terminate with a Completion-Cause of
     "cancelled" when the client queues another RECOGNIZE command to
     the resource. A value of "false" means that the RECOGNIZE method
     continues until its operation is complete, and any further
     RECOGNIZE methods the client queues to the resource are queued.
     When the current RECOGNIZE method is stopped or completes with a
     successful match, the first RECOGNIZE method in the queue becomes
     active. If the current RECOGNIZE fails, all RECOGNIZE methods in
     the pending queue are cancelled, and each generates a
     RECOGNITION-COMPLETE event with a Completion-Cause of
     "cancelled". This field MUST be present in all RECOGNIZE methods.
 
      cancel-if-queue     =    "Cancel-If-Queue" ":" boolean-value CRLF
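   The queueing rules above can be sketched as follows. This is an
   illustrative model only, not part of the protocol; the class and
   method names are hypothetical.

   ```python
   class RecognizeQueue:
       """Models one recognizer resource's RECOGNIZE queue."""

       def __init__(self):
           self.active = None     # the currently running RECOGNIZE, if any
           self.pending = []      # queued RECOGNIZE requests
           self.completed = []    # (request-id, completion-cause) pairs

       def submit(self, request_id, cancel_if_queue):
           request = {"id": request_id, "cancel-if-queue": cancel_if_queue}
           if self.active is None:
               self.active = request
           elif self.active["cancel-if-queue"]:
               # An active RECOGNIZE with Cancel-If-Queue: true terminates
               # as soon as another RECOGNIZE is queued.
               self.completed.append((self.active["id"], "cancelled"))
               self.active = request
           else:
               self.pending.append(request)

       def finish_active(self, success):
           if success:
               self.completed.append((self.active["id"], "success"))
               self.active = self.pending.pop(0) if self.pending else None
           else:
               # If the current RECOGNIZE fails, every pending request is
               # cancelled with Completion-Cause "cancelled".
               self.completed.append((self.active["id"], "error"))
               for req in self.pending:
                   self.completed.append((req["id"], "cancelled"))
               self.pending.clear()
               self.active = None
   ```

   For example, queueing a second RECOGNIZE behind an active one with
   Cancel-If-Queue: true cancels the active request immediately.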
 
 Hotword-Max-Duration
 
    This header MAY be sent in a hotword mode RECOGNIZE request.  It
    specifies the maximum length of an utterance, in milliseconds,
    that should be considered for hotword recognition.  This header,
    along with Hotword-Min-Duration, can be used to tune performance
    by preventing the recognizer from evaluating utterances that are
    too short or too long to be the hotword.  The default value is
    platform dependent.
 
      hotword-max-duration     =    "Hotword-Max-Duration" ":" 1*DIGIT
                                    CRLF
 
 Hotword-Min-Duration
 
    This header MAY be sent in a hotword mode RECOGNIZE request.  It
    specifies the minimum length of an utterance, in milliseconds,
    that can be considered for hotword recognition.  This header,
    along with Hotword-Max-Duration, can be used to tune performance
    by preventing the recognizer from evaluating utterances that are
    too short or too long to be the hotword.  The default value is
    platform dependent.
 
      hotword-min-duration     = "Hotword-Min-Duration" ":" 1*DIGIT CRLF
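   The duration window defined by these two headers amounts to a
   simple bounds check before an utterance is evaluated against the
   hotword grammar. A minimal sketch (the function name is
   illustrative, not from the specification):

   ```python
   def should_evaluate_hotword(utterance_ms: int,
                               min_ms: int, max_ms: int) -> bool:
       """True if the utterance length falls inside the hotword
       duration window; utterances outside it are skipped rather
       than evaluated."""
       return min_ms <= utterance_ms <= max_ms
   ```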
 
 
 Interpret-Text
 
    This header field is used to provide the text for which a natural
    language interpretation is desired. The value of this field is a
    content-id that refers to a MIME entity of type text/plain in the
    body of the message. This header field MUST be used when invoking
    the INTERPRET method.

              interpret-text = "Interpret-Text" ":" 1*VCHAR CRLF
 
 Num-Min-Consistent-Pronunciations
 
    This header MAY be specified in a START-PHRASE-ENROLLMENT, SET-
    PARAMS, or GET-PARAMS method and is used to specify the minimum
    number of consistent pronunciations that must be obtained to
    voice-enroll a new phrase. The minimum value is 1. The default
    value is platform specific and MAY be greater than 1.
 
      num-min-consistent-pronunciations  =
                   "Num-Min-Consistent-Pronunciations" ":" 1*DIGIT CRLF
 
 
 Consistency-Threshold
 
    This header MAY be sent as part of the START-PHRASE-ENROLLMENT,
    SET-PARAMS, or GET-PARAMS method.  Used during voice enrollment,
    this header specifies how similar an utterance needs to be to a
    previously enrolled pronunciation of the same phrase in order to
    be considered "consistent". The higher the threshold, the closer
    the match between an utterance and previous pronunciations must be
    for the pronunciation to be considered consistent. This threshold
    is a float value in the range 0.0 to 1.0. The default value for
    this field is platform specific.
 
      consistency-threshold = "Consistency-Threshold" ":" FLOAT CRLF
 
 
 Clash-Threshold
 
    This header MAY be sent as part of the START-PHRASE-ENROLLMENT,
    SET-PARAMS, or GET-PARAMS method.  Used during voice enrollment,
    this header specifies how similar the pronunciations of two
    different phrases can be before they are considered to be
    clashing. For example, pronunciations of phrases such as "John
    Smith" and "Jon Smits" may be so similar that they are difficult
    to distinguish correctly. A smaller threshold reduces the number
    of clashes detected. This threshold is a float value in the range
    0.0 to 1.0. The default value for this field is platform specific.

      clash-threshold     =    "Clash-Threshold" ":" FLOAT CRLF
 
 
 
 Personal-Grammar-URI
 
    This header specifies the speaker-trained grammar to be used or
    referenced during enrollment operations.  For example, a contact
    list for user "Jeff" could be stored at the Personal-Grammar-
    URI="http://myserver/myenrollmentdb/jeff-list". There is no default
    value for this header field.
 
      personal-grammar-uri = "Personal-Grammar-URI" ":" Uri CRLF
 
 
 Phrase-Id
 
    This header identifies a phrase in a personal grammar and will also
    be returned when doing recognition.  This header field MAY occur in
    START-PHRASE-ENROLLMENT, MODIFY-PHRASE or DELETE-PHRASE requests.
    There is no default value for this header field.
 
      phrase-id           =    "Phrase-ID" ":" 1*VCHAR CRLF
 
 
 Phrase-NL
 
    This is a string specifying the natural language statement to
    execute when the phrase is recognized.  This header field MAY occur
    in START-PHRASE-ENROLLMENT and MODIFY-PHRASE requests. There is no
    default value for this header field.
 
      phrase-nl           =    "Phrase-NL" ":" 1*VCHAR CRLF
 
 
 Weight
 
    The value of this header field represents the occurrence likelihood
    of this branch of the grammar.  The weights are normalized to sum to
    one at compilation time, so use the value of '1' if you want all
    branches to have the same weight. This header field MAY occur in
    START-PHRASE-ENROLLMENT and MODIFY-PHRASE requests. The default
    value for this field is platform specific.
 
      weight         = "Weight" ":" WEIGHT CRLF
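   The normalization-at-compilation behavior described above can be
   sketched as follows; the function name is illustrative, not from
   the specification.

   ```python
   def normalize_weights(weights):
       """Scale a list of positive branch weights so they sum to 1.0,
       as a grammar compiler would at compilation time."""
       total = sum(weights)
       if total <= 0:
           raise ValueError("weights must be positive")
       return [w / total for w in weights]
   ```

   As the text notes, giving every branch a weight of 1 yields equal
   likelihoods after normalization.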
 
 
 Save-Best-Waveform
 
    This header field allows the client to indicate to the recognizer
    that it MUST save the audio stream for the best repetition of the
    phrase that was used during the enrollment session.  The recognizer
    MUST then record the recognized audio and make it available to the
    client in the form of a URI returned in the waveform-uri header
    field in the response to the END-PHRASE-ENROLLMENT method.  If there
    was an error in recording the stream or the audio clip is otherwise
    not available, the recognizer MUST return an empty waveform-uri
    header field.
 
      save-best-waveform  = "Save-Best-Waveform" ":" boolean-value CRLF
 
 
 New-Phrase-Id
 
    This header field replaces the id used to identify the phrase in a
    personal grammar.  The recognizer returns the new id when using an
    enrollment grammar.  This header field MAY occur in MODIFY-PHRASE
    requests.
 
      new-phrase-id       =    "New-Phrase-ID" ":" 1*VCHAR CRLF
 
 
 Confusable-Phrases-URI
 
    This optional header field specifies the grammar that defines
    invalid phrases for enrollment.  For example, typical applications
    do not allow an enrolled phrase that is also a command word.  This
    header field MAY occur in RECOGNIZE requests.
 
      confusable-phrases-uri   =    "Confusable-Phrases-URI" ":"
                                    Uri CRLF
 
 
 Abort-Phrase-Enrollment
 
    This header field can optionally be specified in the END-PHRASE-
    ENROLLMENT method to abort the phrase enrollment, rather than
    committing the phrase to the personal grammar.
 
      abort-phrase-enrollment  =    "Abort-Phrase-Enrollment" ":"
                                    boolean-value CRLF
 
 
 9.5. Recognizer Message Body
 
    A recognizer message may carry additional data associated with the
    method, response or event. The client may send the grammar to be
    recognized in DEFINE-GRAMMAR or RECOGNIZE requests. When the
    grammar is sent in the DEFINE-GRAMMAR method, the server should be
    able to fetch, compile and optimize the grammar. The RECOGNIZE
    request MUST contain a list of grammars that need to be active
    during the recognition. The server resource may send the
    recognition results in the RECOGNITION-COMPLETE event or the GET-
    RESULT response. This data is carried in the message body of the
    corresponding MRCPv2 message.
 
 
 Recognizer Grammar Data
 
    Recognizer grammar data from the client to the server can be
    provided inline or by reference. Either way, it is carried as a
    MIME entity in the message body of the MRCPv2 request message. The
    grammar, whether specified inline or by reference, is the grammar
    used for matching in the recognition process, and is expressed in
    one of the standard grammar specification formats, such as the W3C
    XML or ABNF forms or Sun's Java Speech Grammar Format.  All MRCPv2
    servers MUST support the W3C XML-based grammar markup format [11]
    (MIME-type application/srgs+xml) and SHOULD support the ABNF form
    (MIME-type application/srgs).
 
    When a grammar is specified in-line in the message, the client MUST
    provide a content-id for that grammar as part of the content
    headers. The server MUST store the grammar associated with that
    content-id for the duration of the session. A stored grammar can be
    overwritten by defining a new grammar with the same content-id.
    Grammars that have been associated with a content-id can be
    referenced through a special "session:" URI scheme.
 
    Example:
      session:help@root-level.store
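    The content-id storage and "session:" resolution rules above can
    be sketched as follows. This is illustrative only and not part of
    the protocol; the class name is hypothetical.

    ```python
    class SessionGrammarStore:
        """Per-session store of inline grammars, keyed by content-id."""

        def __init__(self):
            self._grammars = {}   # content-id -> grammar text

        def define(self, content_id, grammar_text):
            # A stored grammar is overwritten by defining a new
            # grammar with the same content-id.
            self._grammars[content_id] = grammar_text

        def resolve(self, uri):
            """Resolve a session: URI to the stored grammar text."""
            scheme, _, content_id = uri.partition(":")
            if scheme != "session":
                raise ValueError("not a session: URI")
            return self._grammars[content_id]
    ```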
 
    If grammar data needs to be specified by external URI reference,
    the MIME-type text/uri-list is used to list the one or more URIs
    that specify the grammar data. All servers MUST support the HTTP
    URI access mechanism.
 
    If the data to be defined consists of a mix of URI references and
    inline grammar data, the multipart/mixed MIME-type is used, with
    embedded MIME blocks of type text/uri-list, application/srgs or
    application/srgs+xml. The character set and encoding used in the
    grammar data may be specified according to standard MIME-type
    definitions.
 
    When more than one grammar URI or inline grammar block is
    specified in the message body of the RECOGNIZE request, it is an
    active list of grammar alternatives to listen for.  The ordering
    of the list implies the precedence of the grammars, with the first
    grammar in the list having the highest precedence.
 
    Example 1:
         Content-Type: application/srgs+xml
         Content-Id: <request1@form-level.store>
         Content-Length: 104
 
         <?xml version="1.0"?>
 
         <!-- the default grammar language is US English -->
 
         <grammar xml:lang="en-US" version="1.0">
 
         <!-- single language attachment to tokens -->
         <rule id="yes">
                    <one-of>
                        <item xml:lang="fr-CA">oui</item>
                        <item xml:lang="en-US">yes</item>
                    </one-of>
            </rule>
 
         <!-- single language attachment to a rule expansion -->
            <rule id="request">
                    may I speak to
                    <one-of xml:lang="fr-CA">
                        <item>Michel Tremblay</item>
                        <item>Andre Roy</item>
                    </one-of>
            </rule>
 
            <!-- multiple language attachment to a token -->
            <rule id="people1">
                    <token lexicon="en-US,fr-CA"> Robert </token>
            </rule>
 
            <!-- the equivalent single-language attachment expansion -->
            <rule id="people2">
                    <one-of>
                        <item xml:lang="en-US">Robert</item>
                        <item xml:lang="fr-CA">Robert</item>
                    </one-of>
            </rule>
 
            </grammar>
 
    Example 2:
        Content-Type: text/uri-list
        Content-Length: 176
 
        session:help@root-level.store
        http://www.example.com/Directory-Name-List.grxml
        http://www.example.com/Department-List.grxml
        http://www.example.com/TAC-Contact-List.grxml
        session:menu1@menu-level.store
 
    Example 3:
        Content-Type: multipart/mixed; boundary="break"
 
        --break
        Content-Type: text/uri-list
        Content-Length: 176
        http://www.example.com/Directory-Name-List.grxml
        http://www.example.com/Department-List.grxml
        http://www.example.com/TAC-Contact-List.grxml
 
        --break
        Content-Type: application/srgs+xml
        Content-Id: <request1@form-level.store>
        Content-Length: 104
 
        <?xml version="1.0"?>
 
        <!-- the default grammar language is US English -->
        <grammar xml:lang="en-US" version="1.0">
 
        <!-- single language attachment to tokens -->
        <rule id="yes">
                    <one-of>
                        <item xml:lang="fr-CA">oui</item>
                        <item xml:lang="en-US">yes</item>
                    </one-of>
           </rule>
 
        <!-- single language attachment to a rule expansion -->
           <rule id="request">
                    may I speak to
                    <one-of xml:lang="fr-CA">
                        <item>Michel Tremblay</item>
                        <item>Andre Roy</item>
                    </one-of>
           </rule>
 
           <!-- multiple language attachment to a token -->
           <rule id="people1">
                    <token lexicon="en-US,fr-CA"> Robert </token>
           </rule>
 
           <!-- the equivalent single-language attachment expansion -->
           <rule id="people2">
                    <one-of>
                        <item xml:lang="en-US">Robert</item>
                        <item xml:lang="fr-CA">Robert</item>
                    </one-of>
           </rule>
 
           </grammar>
        --break--
 
 Recognizer Result Data
 
    Recognition result data from the server is carried in the MRCPv2
    message body of the RECOGNITION-COMPLETE event or the GET-RESULT
    response message as MIME entities. All servers MUST support the
    Natural Language Semantics Markup Language (NLSML), an XML markup
    based on an early draft from the W3C.  This is the default
    standard for returning recognition results to the client, and
    hence all servers MUST support the MIME-type application/x-nlsml.

    MRCP-specific additions to this result format have been made and
    are fully described in section 9.6, with a normative definition of
    the DTD and schema in the Appendix.
 
    Example 1:
        Content-Type: application/x-nlsml
        Content-Length: 104
 
        <?xml version="1.0"?>
        <result grammar="http://theYesNoGrammar">
            <interpretation>
                <instance>
                    <myApp:yes_no>
                        <response>yes</response>
                    </myApp:yes_no>
                </instance>
                <input>ok</input>
            </interpretation>
        </result>
 
 
 
 Enrollment Result Data
 
    Enrollment results come as part of the RECOGNITION-COMPLETE event,
    as part of the recognition result XML data. The XML Schema and DTD
    for this XML data are provided in section 9.7, with a normative
    definition of the DTD and schema in the Appendix.
 
 
 
 Recognizer Context Block
 
    When the client has to change servers within a call, this is a
    block of data that the client MAY collect from the first server
    and provide to the second server. This may be because the client
    needs different language support or because the first server
    issued a redirect. The first recognizer resource may have
    collected acoustic and other data during its recognition. When the
    client switches servers, communicating this data may allow the
    recognition resource on the new server to provide better
    recognition based on the acoustic data collected by the previous
    recognizer. This block of data is vendor-specific and MUST be
    carried as MIME-type application/octets in the body of the
    message.
 
 
 
    This block of data is communicated in the SET-PARAMS and GET-PARAMS
    method/response messages. In the GET-PARAMS method, if an empty
    recognizer-context-block header field is present, then the
    recognizer should return its vendor-specific context block in the
    message body as a MIME-entity with a specific content-id.  The
    content-id value should also be specified in the recognizer-context-
    block header field in the GET-PARAMS response.  The SET-PARAMS
    request wishing to provide this vendor-specific data should send it
    in the message body as a MIME-entity with the same content-id that
    it received from the GET-PARAMS.  The content-id should also be sent
    in the recognizer-context-block header field of the SET-PARAMS
    message.
 
    Each automatic speech recognition (ASR) vendor choosing to use
    this mechanism to hand off recognizer context data among its
    servers should distinguish its vendor-specific block of data from
    those of other vendors by choosing a unique content-id that its
    servers recognize.
 
 
 
 9.6. Natural Language Semantics Markup Language
 
    The general purpose of the NL Semantics Markup is to represent
    information automatically extracted from a user's utterances by a
    semantic interpretation component, where utterance is to be taken in
    the general sense of a meaningful user input in any modality
    supported by the platform. A specific architecture can take
    advantage of this representation by using it to convey content among
    various system components that generate and make use of the markup.
    In MRCP it is to be used to convey these results between a
    recognition resource on the MRCP server and the MRCP client.
 
    Components that generate NLSML:
         1. Automatic Speech Recognition (ASR)
         2. Natural language understanding
         3. Other input media interpreters (e.g. DTMF, pointing,
           keyboard)
         4. Reusable dialog components
         5. Multimedia integration
 
    Components that use NLSML:
         1. Dialog manager
         2. Multimedia integration
 
    A platform may also choose to use this general format as the basis
    of a general semantic result that is carried along and filled out
    during each stage of processing. In addition, future systems may
    also potentially make use of this markup to convey abstract semantic
    content to be rendered into natural language by a natural language
    generation component.
 
 
 Markup Functions
 
    A semantic interpretation system that supports the Natural Language
    Semantics Markup Language is responsible for interpreting natural
    language inputs and formatting the interpretation as defined in this
    document. Semantic interpretation is typically either included as
    part of the speech recognition process, or involves one or more
    additional components, such as natural language interpretation
    components and dialog interpretation components.
 
    The elements of the markup fall into the following general
    functional categories:
 
    Interpretation:
 
    Elements and attributes representing the semantics of the user's
    utterance, including the <result>, <interpretation>, and <instance>
    elements. The <result> element contains the full result of
    processing one utterance. It may contain multiple <interpretation>
    elements if the interpretation of the utterance results in multiple
    alternative meanings due to uncertainty in speech recognition or
    natural language understanding. There are at least two reasons for
    providing multiple interpretations:
 
       1. another component, such as a dialog manager, might have
         additional information, for example, information from a
         database, that would allow it to select a preferred
         interpretation from among the possible interpretations returned
         from the semantic interpreter.
 
       2. a dialog manager that was unable to select between several
         competing interpretations could use this information to go back
         to the user and find out what was intended. For example, Did
         you say "Boston" or "Austin"?
 
    Side Information:
 
    Elements and attributes representing additional information about
    the interpretation, over and above the interpretation itself. Side
    information includes
 
       1. Whether an interpretation was achieved (the <nomatch> element)
         and the system's confidence in an interpretation (the
         "confidence" attribute of <interpretation>).
 
       2. Alternative interpretations (<interpretation>)
 
 
       3. Input formats and ASR information: The <input> element,
         representing the input to the semantic interpreter.
 
 
    Multi-modal integration:
 
    When more than one modality is available for input, the
    interpretation of the inputs needs to be coordinated. The "mode"
    attribute of <input> supports this by indicating whether the
    utterance was input by speech, dtmf, pointing, etc.  The
    "timestamp-start" and "timestamp-end" attributes of <input> also
    provide for temporal coordination by indicating when inputs
    occurred.
 
 
 Overview of NLSML Elements and their Relationships
 
    The elements in NLSML fall into two categories:
 
       1. description of the input that was processed.
 
       2. description of the meaning which was extracted from the input.
 
    Each element has a set of attributes. In addition, some elements
    can contain multiple instances of other elements. For example, a
    <result> can contain multiple <interpretation> elements, each of
    which is taken to be an alternative. Similarly, <input> can
    contain multiple child <input> elements, which are taken to be
    cumulative. A URI reference to an XForms data model is permitted
    but not required.

    To illustrate the basic usage of these elements, consider as a
    simple example the utterance "ok" (interpreted as "yes"). The
    example below illustrates how that utterance and its
    interpretation would be represented in the NL Semantics markup.
 
     <result grammar="http://theYesNoGrammar">
       <interpretation>
          <instance>
            <yes_no>
              <response>yes</response>
            </yes_no>
          </instance>
          <input>ok</input>
       </interpretation>
     </result>
 
    This example includes only the minimum required information. There
    is an overall <result> element which includes one interpretation,
    containing the application-specific elements "<yes_no>" and
    "<response>".
 
 Elements and Attributes
 
   RESULT Root Element
 
    Attributes: grammar, x-model, xmlns
 
 
    The root element of the markup is <result>. The <result> element
    includes one or more <interpretation> elements. Multiple
    interpretations can result from ambiguities in the input or in the
    semantic interpretation. If the "grammar" and "x-model" attributes
    don't apply to all of the interpretations in the result they can be
    overridden for individual interpretations at the <interpretation>
    level.
 
    Attributes:
 
       1. grammar: The grammar or recognition rule matched by this
         result. The format of the grammar attribute will match the rule
         reference semantics defined in the grammar specification.
         Specifically, the rule reference will be in the external XML
         form for grammar rule references. The dialog markup interpreter
         needs to know the grammar rule that is matched by the utterance
         because multiple rules may be simultaneously active. The value
         is the grammar URI used by the dialog markup interpreter to
         specify the grammar. The grammar can be overridden by a grammar
         attribute in the <interpretation> element if the input was
         ambiguous as to which grammar it matched.
 
       2. x-model: The URI which defines the XForms data model used
         for this result. The x-model can be overridden by an x-model
         attribute in the <interpretation> element if the input was
         ambiguous as to which x-model it matched. (optional)
 
     <result grammar="http://grammar">
       <interpretation>
        ....
       </interpretation>
     </result>
 
   INTERPRETATION Element
 
    Attributes: confidence, grammar, x-model
 
    An <interpretation> element contains a single semantic
    interpretation.
 
    Attributes:
 
       1. confidence: An integer from 0-100 indicating the semantic
         analyzer's confidence in this interpretation. At this point
         there is no formal, platform-independent, definition of
         confidence. (optional)
 
       2. grammar: The grammar or recognition rule matched by this
         interpretation (if needed to override the grammar
         specification at the <result> level). This attribute is only
         needed under <interpretation> if it is necessary to override
         a grammar that was defined at the <result> level. (optional)
 
       3. x-model: The URI which defines the XForms data model used
         for this interpretation. (As in the case of "grammar", this
         attribute only needs to be defined under <interpretation> if
         it is necessary to override the x-model specification at the
         <result> level.) (optional)
 
    Interpretations must be sorted best-first by some measure of
    "goodness". The goodness measure is "confidence" if present,
    otherwise, it is some platform-specific indication of quality.
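    As an illustration only (not part of the specification), the
    best-first ordering can be applied when consuming an NLSML result
    with a stock XML parser. Treating a missing "confidence" attribute
    as 100 is an assumption made for this sketch.

    ```python
    import xml.etree.ElementTree as ET

    def interpretations_best_first(nlsml: str):
        """Return (confidence, input-text) pairs from an NLSML
        <result>, highest confidence first."""
        root = ET.fromstring(nlsml)
        results = []
        for interp in root.findall("interpretation"):
            # Assumption for this sketch: absent confidence counts as 100.
            confidence = int(interp.get("confidence", "100"))
            text = interp.findtext("input", default="")
            results.append((confidence, text))
        # Interpretations must be sorted best-first.
        results.sort(key=lambda pair: pair[0], reverse=True)
        return results
    ```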
 
    The x-model and grammar are expected to be specified most
    frequently at the <result> level, because most often one data
    model will be sufficient for the entire result. However, they can
    be overridden at the <interpretation> level because different
    interpretations may have different data models - perhaps because
    they match different grammar rules.
 
    The <interpretation> element includes an optional <input> element
    which contains the input being analyzed, and an <instance> element
    containing the interpretation of the utterance.
 
       <interpretation confidence="75" grammar="http://grammar"
        x-model="http://dataModel">
        ...
       </interpretation>
 
   INSTANCE Element
 
    The <instance> element contains the interpretation of the
    utterance. If a reference to a data model is present (that is, if
    there is an "x-model" attribute on the <result> or
    <interpretation> elements), the markup describing the instance
    should conform to that data model. When there is semantic markup
    in the grammar that does not create semantic objects, but instead
    only performs a semantic translation of a portion of the input,
    such as translating "coke" to "coca-cola", the instance contains
    the whole input with the translation applied, as in Example 2
    below. If no semantic objects are created and no semantic
    translation is applied, the instance value is the same as the
    input value.
 
    Attributes:
 
       1. confidence: Each element of the instance may have a confidence
         attribute, defined in the NL semantics namespace. The
         confidence attribute contains an integer value in the range
         from 0-100 reflecting the system's confidence in the analysis
         of that slot. The meaning of confidence scores has not been
         defined in a platform-independent way. The default value of
         "confidence" is 100. (optional)
 
    Example 1:
 
    <instance name="nameAddress">
      <nameAddress>
          <street confidence="75">123 Maple Street</street>
          <city>Mill Valley</city>
          <state>CA</state>
          <zip>90952</zip>
      </nameAddress>
    </instance>
    <input>
      My address is 123 Maple Street,
      Mill Valley, California, 90952
    </input>
 
    Example 2:
 
    <instance>
        I would like to buy a coca-cola
    </instance>
    <input>
      I would like to buy a coke
    </input>
 
 
   INPUT Element
 
    The <input> element is the text representation of a user's input. It
    includes an optional "confidence" attribute which indicates the
    recognizer's confidence in the recognition result (as opposed to the
    confidence in the interpretation, which is indicated by the
    "confidence" attribute of <interpretation>). Optional "timestamp-
    start" and "timestamp-end" attributes indicate the start and end
    times of a spoken utterance, in ISO 8601 format.
 
    Attributes:
 
       1. timestamp-start: The time at which the input began. (optional)
 
       2. timestamp-end: The time at which the input ended. (optional)
 
       3. mode: The modality of the input, for example, speech, dtmf,
         etc. (optional)
 
       4. confidence: the confidence of the recognizer in the correctness
         of the input in the range 0.0 to 1.0 (optional)
 
 
 
 
    Note that it may not make sense for temporally overlapping inputs to
    have the same mode; however, this constraint is not expected to be
    enforced by platforms.
 
    When there is no time zone designator, ISO 8601 time representations
    default to local time.
 
    There are three possible formats for the <input> element.
 
    a) The <input> element can contain simple text:
 
           <input>onions</input>
 
      A future possibility is for <input> to contain not only text but
      additional markup that represents prosodic information that was
      contained in the original utterance and extracted by the speech
      recognizer. This depends on the availability of ASRs that are
      capable of producing prosodic information.
 
    b) An <input> tag can also contain additional <input> tags. Having
      additional input elements allows the representation to support
      future multi-modal inputs as well as finer-grained speech
      information, such as timestamps for individual words and word-
      level confidences.
 
      <input>
         <input mode="speech" confidence="0.5"
           timestamp-start="2000-04-03T00:00:00"
           timestamp-end="2000-04-03T00:00:00.2">fried</input>
         <input mode="speech" confidence="1.0"
           timestamp-start="2000-04-03T00:00:00.25"
           timestamp-end="2000-04-03T00:00:00.6">onions</input>
      </input>
 
    c) Finally, the <interpretation> element can contain <nomatch> and
      <noinput> elements, which describe situations in which the speech
      recognizer (or other media interpreter) received input that it was
      unable to process, or did not receive any input at all,
      respectively.
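   As an informative illustration, the nested <input> form in (b) above
   can be consumed with any standard XML parser. The following sketch
   (hypothetical function name, not part of the protocol) extracts the
   per-word confidences from that example:

```python
# Informative sketch only: extracting per-word detail from a nested
# <input> element with Python's standard XML parser. Element and
# attribute names come from the example above; the function name is
# hypothetical.
import xml.etree.ElementTree as ET

NESTED_INPUT = """
<input>
   <input mode="speech" confidence="0.5"
     timestamp-start="2000-04-03T00:00:00"
     timestamp-end="2000-04-03T00:00:00.2">fried</input>
   <input mode="speech" confidence="1.0"
     timestamp-start="2000-04-03T00:00:00.25"
     timestamp-end="2000-04-03T00:00:00.6">onions</input>
</input>
"""

def word_confidences(nlsml_text):
    """Return (word, confidence) pairs from child <input> elements."""
    root = ET.fromstring(nlsml_text)
    return [(child.text, float(child.get("confidence", "1.0")))
            for child in root.findall("input")]
```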
 
   NOMATCH Element
 
    The <nomatch> element under <input> is used to indicate that the
    semantic interpreter was unable to successfully match any input with
    confidence above the threshold. It can optionally contain the text
    of the best of the (rejected) matches.
 
    <interpretation>
       <instance/>
          <input confidence="0.1">
           <nomatch/>
        </input>
    </interpretation>
    <interpretation>
       <instance/>
       <input mode="speech" confidence="0.1">
         <nomatch>I want to go to New York</nomatch>
       </input>
    </interpretation>
 
   NOINPUT Element
 
    <noinput> indicates that there was no input-- a timeout occurred in
    the speech recognizer due to silence.
 
    <interpretation>
       <instance/>
       <input>
          <noinput/>
       </input>
    </interpretation>
 
    If there are multiple levels of inputs, the most natural place for
    the <nomatch> and <noinput> elements is under the highest level of
    <input> for <noinput>, and under the appropriate level of
    <interpretation> for <nomatch>. So <noinput> means "no input at
    all", while <nomatch> means "no match in the speech modality" or
    "no match in the dtmf modality". For example, to represent garbled
    speech combined with the dtmf input "1 2 3 4", we would have the
    following:
 
    <input>
       <input mode="speech"><nomatch/></input>
       <input mode="dtmf">1 2 3 4</input>
    </input>
 
    While <noinput> could be represented as an attribute of input,
    <nomatch> cannot, since it could potentially include PCDATA content
    with the best match. For parallelism, <noinput> is also an element.
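    The three outcomes above (no input, no match, and a successful
    match) can be distinguished mechanically. The following informative
    sketch (hypothetical function name) classifies an NLSML <input>
    element accordingly:

```python
# Informative sketch: classifying an NLSML <input> element into the
# three outcomes described above. The function name is hypothetical.
import xml.etree.ElementTree as ET

def classify_input(input_xml):
    """Return 'noinput', 'nomatch', or 'match' for an <input> element."""
    elem = ET.fromstring(input_xml)
    if elem.find(".//noinput") is not None:
        return "noinput"
    if elem.find(".//nomatch") is not None:
        return "nomatch"
    return "match"
```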
 
9.7. Enrollment Results

    The enrollment results, returned in the RECOGNITION-COMPLETE event,
    contain the following elements/tags to provide information
    associated with the voice enrollment.
 
      1. Num-Clashes
      2. Num-Good-Repetitions
      3. Num-Repetitions-Still-Needed
      4. Consistency-Status
      5. Clash-Phrase-Ids
      6. Transcriptions
      7. Confusable-Phrases
 
 
 
 
    1. Num-Clashes
 
    This is not a header field, but part of the recognition results. It
    is returned in a RECOGNITION-COMPLETE event.  Its value represents
    the number of clashes that this pronunciation has with other
    pronunciations in an active enrollment session.  The header field
    Clash-Threshold determines the sensitivity of the clash measurement.
    Clash testing can be turned off completely by setting Clash-
    Threshold to 0.
 
      num-clashes    = "<num-clashes>" 1*DIGIT "</num-clashes>" CRLF
 
 
    2. Num-Good-Repetitions
 
    This is not a header field, but part of the recognition results. It
    is returned in a RECOGNITION-COMPLETE event.  Its value represents
    the number of consistent pronunciations obtained so far in an active
    enrollment session.
 
      num-good-repetitions = "<num-good-repetitions>" 1*DIGIT
                             "</num-good-repetitions>"  CRLF
 
 
    3. Num-Repetitions-Still-Needed
 
    This is not a header field, but part of the recognition results. It
    is returned in a RECOGNITION-COMPLETE event.  Its value represents
    the number of consistent pronunciations that must still be obtained
    before the new phrase can be added to the enrollment grammar.  The
    number of consistent pronunciations required is determined by the
    header Num-Min-Consistent-Pronunciations, whose default value is
    two.  The returned value must be 0 before the system will allow you
    to end an enrollment session for a new phrase.
 
      num-repetitions-still-needed =
                     "<num-repetitions-still-needed>" 1*DIGIT
                     "</num-repetitions-still-needed>" CRLF
 
 
    4. Consistency-Status
 
    This is not a header field, but part of the recognition results. It
    is returned in a RECOGNITION-COMPLETE event. This is used to
    indicate how consistent the repetitions are when learning a new
    phrase. It can have the values of CONSISTENT, INCONSISTENT and
    UNDECIDED.
 
      consistency-status       = "<consistency-status>" 1*ALPHA
                                 "</consistency-status>" CRLF
 
 
 
 
    5. Clash-Phrase-Ids
 
    This is not a header field, but part of the recognition results. It
    is returned in a RECOGNITION-COMPLETE event.  This gets filled with
    the phrase ids of the clashing pronunciation(s).  This field is
    absent if there are no clashes.  This MAY occur in RECOGNITION-
    COMPLETE events.
 
      phrase-id           = "<item>" 1*ALPHA "</item>" CRLF
      clash-phrase-ids    = "<clash-phrase-ids>" 1*phrase-id
      "</clash-phrase-ids>" CRLF
 
 
    6. Transcriptions
 
    This is not a header field, but part of the recognition results. It
    is optionally returned in a RECOGNITION-COMPLETE event.  This gets
    filled with the transcriptions returned in the last repetition of
    the phrase being enrolled. This MAY occur in RECOGNITION-COMPLETE
    events.
 
      transcription       = "<item>" 1*OCTET "</item>" CRLF
      transcriptions      = "<transcriptions>" 1*transcription
                            "</transcriptions>" CRLF
 
 
    7. Confusable-Phrases
 
    This is not a header field, but part of the recognition results. It
    is optionally returned in a RECOGNITION-COMPLETE event.  This gets
    filled with the list of phrases from a command grammar that are
    confusable with the phrase being added to the personal grammar.
    This MAY occur in RECOGNITION-COMPLETE events.
 
      Confusable-phrase   = "<item>" 1*OCTET "</item>" CRLF
      confusable-phrases  = "<confusable-phrases>" 1*confusable-phrase
                            "</confusable-phrases>" CRLF
 
 
 
 9.8. DEFINE-GRAMMAR
 
    The DEFINE-GRAMMAR method, from the client to the server, provides a
    grammar and tells the server to define it, downloading and compiling
    the grammar as needed.
 
    If the server resource is in the recognition state, the server MUST
    respond to the DEFINE-GRAMMAR request with a failure status.
 
 
 
 
    If the resource is in the idle state and is able to successfully
    load and compile the grammar the status MUST return a success code
    and the request-state MUST be COMPLETE.
 
    If the recognizer could not define the grammar for some reason, for
    example because the download failed, the grammar failed to compile,
    or the grammar was in an unsupported form, the MRCPv2 response for
    the DEFINE-GRAMMAR method MUST contain a failure status code of 407
    and a Completion-Cause header field describing the failure reason.
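    The framing of a DEFINE-GRAMMAR request can be sketched as follows.
    This informative illustration assumes, as the examples below
    suggest, that the number after the MRCP version token is the length
    of the entire message; the helper name is hypothetical.

```python
# Hypothetical helper that frames a DEFINE-GRAMMAR request. Assumes the
# second token of the start line is the total message length, as the
# examples in this section suggest.
def define_grammar_message(channel_id, request_id, content_id, grammar_xml):
    headers = (
        f"Channel-Identifier: {channel_id}\r\n"
        "Content-Type: application/srgs+xml\r\n"
        f"Content-Id: {content_id}\r\n"
        f"Content-Length: {len(grammar_xml)}\r\n"
        "\r\n"
    )
    # The length field counts the whole message including the start
    # line itself, so iterate until the value is self-consistent.
    length = 0
    while True:
        start = f"MRCP/2.0 {length} DEFINE-GRAMMAR {request_id}\r\n"
        total = len(start) + len(headers) + len(grammar_xml)
        if total == length:
            break
        length = total
    return start + headers + grammar_xml
```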
 
    Example:
      C->S:MRCP/2.0 589 DEFINE-GRAMMAR 543257
           Channel-Identifier: 32AECB23433801@speechrecog
           Content-Type: application/srgs+xml
           Content-Id: <request1@form-level.store>
           Content-Length: 104
 
           <?xml version="1.0"?>
 
           <!-- the default grammar language is US English -->
           <grammar xml:lang="en-US" version="1.0">
 
           <!-- single language attachment to tokens -->
           <rule id="yes">
               <one-of>
                   <item xml:lang="fr-CA">oui</item>
                   <item xml:lang="en-US">yes</item>
               </one-of>
           </rule>
 
           <!-- single language attachment to a rule expansion -->
           <rule id="request">
               may I speak to
               <one-of xml:lang="fr-CA">
                   <item>Michel Tremblay</item>
                   <item>Andre Roy</item>
               </one-of>
           </rule>
 
           </grammar>
 
      S->C:MRCP/2.0 73 543257 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Completion-Cause: 000 success
 
 
      C->S:MRCP/2.0 334 DEFINE-GRAMMAR 543258
           Channel-Identifier: 32AECB23433801@speechrecog
           Content-Type: application/srgs+xml
           Content-Id: <helpgrammar@root-level.store>
           Content-Length: 104
 
 
 
           <?xml version="1.0"?>
 
           <!-- the default grammar language is US English -->
           <grammar xml:lang="en-US" version="1.0">
 
            <rule id="request">
                I need help
            </rule>

            </grammar>
 
      S->C:MRCP/2.0 73 543258 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Completion-Cause: 000 success
 
      C->S:MRCP/2.0 723 DEFINE-GRAMMAR 543259
           Channel-Identifier: 32AECB23433801@speechrecog
           Content-Type: application/srgs+xml
           Content-Id: <request2@field-level.store>
           Content-Length: 104
 
           <?xml version="1.0" encoding="UTF-8"?>
 
           <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN"
                             "http://www.w3.org/TR/speech-
           grammar/grammar.dtd">
 
           <grammar xmlns="http://www.w3.org/2001/06/grammar"
           xml:lang="en"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.w3.org/2001/06/grammar
                      http://www.w3.org/TR/speech-grammar/grammar.xsd"
                      version="1.0" mode="voice" root="basicCmd">
 
           <meta name="author" content="Stephanie Williams"/>
 
           <rule id="basicCmd" scope="public">
             <example> please move the window </example>
             <example> open a file </example>
 
             <ruleref
                uri="http://grammar.example.com/politeness.grxml#startPo
           lite"/>
 
             <ruleref uri="#command"/>
             <ruleref
                uri="http://grammar.example.com/politeness.grxml#endPoli
           te"/>
 
           </rule>
 
           <rule id="command">
 
 
             <ruleref uri="#action"/> <ruleref uri="#object"/>
           </rule>
 
           <rule id="action">
              <one-of>
                 <item weight="10"> open   <tag>TAG-CONTENT-1</tag>
                     </item>
                 <item weight="2">  close  <tag>TAG-CONTENT-2</tag>
                     </item>
                 <item weight="1">  delete <tag>TAG-CONTENT-3</tag>
                     </item>
                 <item weight="1">  move   <tag>TAG-CONTENT-4</tag>
                     </item>
               </one-of>
           </rule>
 
           <rule id="object">
             <item repeat="0-1">
               <one-of>
                 <item> the </item>
                 <item> a </item>
               </one-of>
             </item>
 
             <one-of>
                 <item> window </item>
                 <item> file </item>
                 <item> menu </item>
             </one-of>
           </rule>
 
           </grammar>
 
 
      S->C:MRCP/2.0 69 543259 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Completion-Cause: 000 success
 
      C->S:MRCP/2.0 155 RECOGNIZE 543260
           Channel-Identifier: 32AECB23433801@speechrecog
           N-Best-List-Length: 2
           Content-Type: text/uri-list
           Content-Length: 176
 
           session:request1@form-level.store
           session:request2@field-level.store
            session:helpgrammar@root-level.store
 
      S->C:MRCP/2.0 48 543260 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speechrecog
 
 
 
      S->C:MRCP/2.0 48 START-OF-SPEECH 543260 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speechrecog
 
      S->C:MRCP/2.0 486 RECOGNITION-COMPLETE 543260 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Completion-Cause: 000 success
           Waveform-URI: http://web.media.com/session123/audio.wav
            Content-Type: application/x-nlsml
           Content-Length: 276
 
           <?xml version="1.0"?>
           <result x-model="http://IdentityModel"
             xmlns:xf="http://www.w3.org/2000/xforms"
             grammar="session:request1@form-level.store">
                <interpretation>
                     <xf:instance name="Person">
                       <Person>
                           <Name> Andre Roy </Name>
                       </Person>
                     </xf:instance>
                     <input>   may I speak to Andre Roy </input>
                </interpretation>
           </result>
 
 9.9. RECOGNIZE
 
    The RECOGNIZE method from the client to the server tells the
    recognizer to start recognition and provides it with a grammar to
    match for. The RECOGNIZE method can carry headers to control the
    sensitivity, confidence level and the level of detail in results
    provided by the recognizer. These headers override the current
    defaults set by a previous SET-PARAMS method.
 
    The RECOGNIZE method can be started in normal or hotword mode, and
    is specified by the Recognition-Mode header field. The default value
    is "normal".
 
    Note that the recognizer may also enroll the collected utterance in
    a personal grammar if the Enroll-utterance header field is set to
    true and an Enrollment is active (via an earlier execution of the
    START-PHRASE-ENROLLMENT method). If so, and if the RECOGNIZE request
    contains a Content-Id header field then the resulting grammar (which
    includes the personal grammar as a sub-grammar) can be referenced
    from elsewhere by using "session:foo", where "foo" is the value of
    the Content-Id header field.
 
    If the resource is in the recognizing state, the server MUST respond
    to the RECOGNIZE request with a failure status. If the resource is
    in the idle state and was able to successfully start the
    recognition, the server
    MUST return a success code and a request-state of IN-PROGRESS. This
    means that the recognizer is active and that the client should
    expect further events with this request-id.
 
    If the resource could not start a recognition, it MUST return a
    failure status code of 407 and contain a completion-cause header
    field describing the cause of failure.
 
    For the recognizer resource, this is the only request that can
    return request-state of IN-PROGRESS, meaning that recognition is in
    progress. When the recognition completes by matching one of the
    grammar alternatives or by a time-out without a match or for some
    other reason, the recognizer resource MUST send the client a
    RECOGNITION-COMPLETE event with the result of the recognition and a
    request-state of COMPLETE.
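    The request-state behavior described above can be summarized by a
    small state machine. This is an informative sketch with hypothetical
    names; the 407 code is the failure status mentioned above, and the
    request-state returned with a failure is assumed here to be
    COMPLETE.

```python
# Informative sketch of the request-state behavior described above:
# RECOGNIZE succeeds with IN-PROGRESS only from the idle state; a
# second RECOGNIZE while one is active fails with status 407.
class RecognizerResource:
    def __init__(self):
        self.state = "idle"

    def recognize(self):
        if self.state == "recognizing":
            return 407, "COMPLETE"    # failure: recognition already active
        self.state = "recognizing"
        return 200, "IN-PROGRESS"     # client should expect further events

    def recognition_complete(self):
        # RECOGNITION-COMPLETE ends the request and returns to idle.
        self.state = "idle"
```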
 
    For large grammars that can take a long time to compile and for
    grammars which are used repeatedly, the client could issue a DEFINE-
    GRAMMAR request with the grammar ahead of time. In such a case the
    client can issue the RECOGNIZE request and reference the grammar
    through the "session:" special URI. This also applies in general if
    the client wants to restart recognition with a previous inline
    grammar.
 
    Note that since the audio and the messages are carried over separate
    communication paths there may be a race condition between the start
    of the flow of audio and the receipt of the RECOGNIZE method. For
    example, if audio flow is started by the client at the same time as
    the RECOGNIZE method is sent, either the audio or the RECOGNIZE will
    arrive at the recognizer first. As another example, the client may
    choose to continuously send audio to the server and signal the
    server to recognize using the RECOGNIZE method.  A number of
    mechanisms exist to resolve this condition, and the mechanism chosen
    is left to the implementers of the recognition resource. The
    recognizer should expect the media to start flowing when it receives
    the RECOGNIZE request, and should not buffer anything it receives
    beforehand.
 
 
    Example:
      C->S:MRCP/2.0 479 RECOGNIZE 543257
           Channel-Identifier: 32AECB23433801@speechrecog
           Confidence-Threshold: 0.9
           Content-Type: application/srgs+xml
           Content-Id: <request1@form-level.store>
           Content-Length: 104
 
           <?xml version="1.0"?>
 
           <!-- the default grammar language is US English -->
           <grammar xml:lang="en-US" version="1.0">
 
           <!-- single language attachment to tokens -->
 
 
           <rule id="yes">
                    <one-of>
                             <item xml:lang="fr-CA">oui</item>
                             <item xml:lang="en-US">yes</item>
                    </one-of>
                </rule>
 
           <!-- single language attachment to a rule expansion -->
                <rule id="request">
                    may I speak to
                    <one-of xml:lang="fr-CA">
                             <item>Michel Tremblay</item>
                             <item>Andre Roy</item>
                    </one-of>
                </rule>
 
             </grammar>
 
      S->C:MRCP/2.0 48 543257 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speechrecog
 
      S->C:MRCP/2.0 49 START-OF-SPEECH 543257 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speechrecog
 
      S->C:MRCP/2.0 467 RECOGNITION-COMPLETE 543257 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Completion-Cause: 000 success
           Waveform-URI: http://web.media.com/session123/audio.wav
           Content-Type: application/x-nlsml
           Content-Length: 276
 
           <?xml version="1.0"?>
           <result x-model="http://IdentityModel"
             xmlns:xf="http://www.w3.org/2000/xforms"
             grammar="session:request1@form-level.store">
               <interpretation>
                   <xf:instance name="Person">
                       <Person>
                           <Name> Andre Roy </Name>
                       </Person>
                   </xf:instance>
                     <input>   may I speak to Andre Roy </input>
               </interpretation>
           </result>
 
 9.10.     STOP
 
    The STOP method from the client to the server tells the resource to
    stop recognition if one is active. If a RECOGNIZE request is active
    and the STOP request successfully terminated it, then the response
    header contains an active-request-id-list header field containing
    the request-id of the RECOGNIZE request that was terminated. In this
    case, no RECOGNITION-COMPLETE event will be sent for the terminated
    request. If there was no recognition active, then the response MUST
    NOT contain an active-request-id-list header field. Either way the
    response MUST contain a status of 200 (Success).
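    The STOP semantics above can be sketched as follows. This is an
    informative illustration with hypothetical names: the response
    carries an Active-Request-Id-List header field only when an active
    RECOGNIZE was actually terminated, and the status is 200 either way.

```python
# Informative sketch of the STOP semantics described above.
class StoppableRecognizer:
    def __init__(self):
        self.active_request_id = None

    def recognize(self, request_id):
        self.active_request_id = request_id

    def stop(self):
        headers = {}
        if self.active_request_id is not None:
            headers["Active-Request-Id-List"] = str(self.active_request_id)
            # no RECOGNITION-COMPLETE event follows a terminated request
            self.active_request_id = None
        return 200, headers
```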
 
    Example:
      C->S:MRCP/2.0 573 RECOGNIZE 543257
           Channel-Identifier: 32AECB23433801@speechrecog
           Confidence-Threshold: 0.9
           Content-Type: application/srgs+xml
           Content-Id: <request1@form-level.store>
           Content-Length: 104
 
           <?xml version="1.0"?>
 
           <!-- the default grammar language is US English -->
           <grammar xml:lang="en-US" version="1.0">
 
           <!-- single language attachment to tokens -->
           <rule id="yes">
                    <one-of>
                             <item xml:lang="fr-CA">oui</item>
                             <item xml:lang="en-US">yes</item>
                    </one-of>
                </rule>
 
           <!-- single language attachment to a rule expansion -->
                <rule id="request">
                    may I speak to
                    <one-of xml:lang="fr-CA">
                             <item>Michel Tremblay</item>
                             <item>Andre Roy</item>
                    </one-of>
                </rule>
 
           </grammar>
 
      S->C:MRCP/2.0 47 543257 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speechrecog
 
      C->S:MRCP/2.0 28 STOP 543258
           Channel-Identifier: 32AECB23433801@speechrecog
 
      S->C:MRCP/2.0 67 543258 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Active-Request-Id-List: 543257
 
 9.11.     GET-RESULT
 
 
 
 
    The GET-RESULT method from the client to the server can be issued
    when the recognizer is in the recognized state. This request allows
    the client to retrieve results for a completed recognition.  This is
    useful if the client decides it wants more alternatives or more
    information. When the server receives this request it should re-
    compute and return the results according to the recognition
    constraints provided in the GET-RESULT request.
 
    The GET-RESULT request could specify constraints like a different
    confidence-threshold, or n-best-list-length. This feature is
    optional and the automatic speech recognition (ASR) engine may
    return a status of unsupported feature.
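    As an informative illustration, re-computing results against new
    constraints might amount to re-filtering the stored n-best list.
    The function and the hypothesis format below are hypothetical:

```python
# Informative sketch: how a recognizer might re-filter stored
# hypotheses when GET-RESULT supplies a new Confidence-Threshold and
# N-Best-List-Length. The hypothesis list format is assumed.
def recompute_results(hypotheses, confidence_threshold, n_best_list_length):
    """hypotheses: (text, confidence) pairs, best first."""
    kept = [(text, conf) for text, conf in hypotheses
            if conf >= confidence_threshold]
    return kept[:n_best_list_length]
```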
 
    Example:
      C->S:MRCP/2.0 73 GET-RESULT 543257
           Channel-Identifier: 32AECB23433801@speechrecog
           Confidence-Threshold: 0.9
 
 
      S->C:MRCP/2.0 487 543257 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Content-Type: application/x-nlsml
           Content-Length: 276
 
           <?xml version="1.0"?>
           <result x-model="http://IdentityModel"
             xmlns:xf="http://www.w3.org/2000/xforms"
             grammar="session:request1@form-level.store">
               <interpretation>
                   <xf:instance name="Person">
                       <Person>
                           <Name> Andre Roy </Name>
                       </Person>
                   </xf:instance>
                             <input>   may I speak to Andre Roy </input>
               </interpretation>
           </result>
 
 9.12.     START-OF-SPEECH
 
    This is an event from the recognizer to the client indicating that
    it has detected speech or a DTMF digit. This event is useful in
    implementing kill-on-barge-in scenarios when the synthesizer
    resource is in a different session than the recognizer resource and
    hence is not aware of an incoming audio source. In these cases, it
    is up to the client to act as a proxy and issue the
    BARGE-IN-OCCURRED method to the synthesizer resource. The recognizer
    resource also sends a unique proxy-sync-id in the header for this
    event, which the client passes to the synthesizer in the
    BARGE-IN-OCCURRED method.
 
 
 
    This event should be generated irrespective of whether the
    synthesizer and recognizer are on the same server or not.
 
 9.13.     START-INPUT-TIMERS
 
    This request is sent from the client to the recognition resource
    when it knows that a kill-on-barge-in prompt has finished playing.
    This is useful in the scenario when the recognition and synthesizer
    engines are not in the same session. Here when a kill-on-barge-in
    prompt is being played, you want the RECOGNIZE request to be
    simultaneously active so that it can detect and implement kill on
    barge-in. But at the same time you don't want the recognizer to
    start the no-input timers until the prompt is finished. The
    Start-Input-Timers header field in the RECOGNIZE request allows the
    client to specify whether or not the timers should be started. If
    they are not started, the recognizer should not start the timers
    until the client sends a START-INPUT-TIMERS method to the
    recognizer.
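    The timer gating described above can be sketched as follows; this
    is an informative illustration with hypothetical names, not part of
    the protocol.

```python
# Informative sketch of the no-input timer gating described above: the
# timer is armed at RECOGNIZE time only when Start-Input-Timers is
# true; otherwise it waits for an explicit START-INPUT-TIMERS request.
class NoInputTimer:
    def __init__(self):
        self.running = False

    def on_recognize(self, start_input_timers):
        # RECOGNIZE with Start-Input-Timers: false leaves the timer off
        self.running = bool(start_input_timers)

    def on_start_input_timers(self):
        # client signals that the kill-on-barge-in prompt has finished
        self.running = True
```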
 
 9.14.     RECOGNITION-COMPLETE
 
    This is an Event from the recognizer resource to the client
    indicating that the recognition completed. The recognition result is
    sent in the MRCPv2 body of the message. The request-state field MUST
    be COMPLETE indicating that this is the last event with that
    request-id, and that the request with that request-id is now
    complete. The recognizer context still holds the results and the
    audio waveform input of that recognition until the next RECOGNIZE
    request is issued. A URI to the audio waveform MAY be returned to
    the client in a Waveform-URI header field in the RECOGNITION-
    COMPLETE event. The client can use this URI to retrieve or play back
    the audio.
 
    Note that if an enrollment session was active with the recognizer,
    the event can contain recognition or enrollment results, depending
    on what was spoken.
 
 
    Example 1:
      C->S:MRCP/2.0 487 RECOGNIZE 543257
           Channel-Identifier: 32AECB23433801@speechrecog
           Confidence-Threshold: 0.9
           Content-Type: application/srgs+xml
           Content-Id: <request1@form-level.store>
           Content-Length: 104
 
           <?xml version="1.0"?>
 
           <!-- the default grammar language is US English -->
           <grammar xml:lang="en-US" version="1.0">
 
           <!-- single language attachment to tokens -->
 
 
           <rule id="yes">
                    <one-of>
                             <item xml:lang="fr-CA">oui</item>
                             <item xml:lang="en-US">yes</item>
                    </one-of>
                </rule>
 
           <!-- single language attachment to a rule expansion -->
                <rule id="request">
                    may I speak to
                    <one-of xml:lang="fr-CA">
                             <item>Michel Tremblay</item>
                             <item>Andre Roy</item>
                    </one-of>
                </rule>
 
           </grammar>
 
      S->C:MRCP/2.0 48 543257 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speechrecog
 
      S->C:MRCP/2.0 49 START-OF-SPEECH 543257 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speechrecog
 
      S->C:MRCP/2.0 465 RECOGNITION-COMPLETE 543257 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Completion-Cause: 000 success
           Waveform-URI: http://web.media.com/session123/audio.wav
           Content-Type: application/x-nlsml
           Content-Length: 276
 
           <?xml version="1.0"?>
           <result x-model="http://IdentityModel"
             xmlns:xf="http://www.w3.org/2000/xforms"
             grammar="session:request1@form-level.store">
               <interpretation>
                   <xf:instance name="Person">
                       <Person>
                           <Name> Andre Roy </Name>
                       </Person>
                   </xf:instance>
                             <input>   may I speak to Andre Roy </input>
               </interpretation>
           </result>
 
 
    Example 2:
 
      S->C:MRCP/2.0 465 RECOGNITION-COMPLETE 543257 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
            Completion-Cause: 000 success
            Content-Type: application/x-nlsml
           Content-Length: 123
 
            <?xml version="1.0"?>
           <result grammar="Personal-Grammar-URI"
                   xmlns:mrcp="http://www.ietf.org/mrcp2">
              <mrcp:result-type type="ENROLLMENT" />
              <mrcp:enrollment-result>
                <num-clashes> 2 </num-clashes>
                <num-good-repetitions> 1 </num-good-repetitions>
                <num-repetitions-still-needed>
                   1
                </num-repetitions-still-needed>
                <consistency-status> consistent </consistency-status>
                <clash-phrase-ids>
                     <item> Jeff </item> <item> Andre </item>
                </clash-phrase-ids>
                <transcriptions>
                     <item> m ay b r ow k er </item>
                     <item> m ax r aa k ah </item>
                </transcriptions>
                <confusable-phrases>
                     <item>
                          <phrase> call </phrase>
                          <confusion-level> 10 </confusion-level>
                     </item>
                </confusable-phrases>
              </mrcp:enrollment-result>
           </result>
 
 9.15.     START-PHRASE-ENROLLMENT
 
    The START-PHRASE-ENROLLMENT method sent from the client to the
    server starts a new phrase enrollment session during which the
    client may call RECOGNIZE to enroll a new utterance.  This consists
    of a set of calls to RECOGNIZE in which the caller speaks a phrase
    several times so the system can "learn" it. The phrase is then added
    to a personal grammar (speaker-trained grammar), and the system can
    recognize it later.
 
    Only one phrase enrollment session may be active at a time. The
    Personal-Grammar-URI identifies the grammar that is used during
    enrollment to store the personal list of phrases.  Once RECOGNIZE is
    called, the result is returned in a RECOGNITION-COMPLETE event and
    may contain either an enrollment result OR a recognition result for
    a regular recognition.
 
    Calling END-PHRASE-ENROLLMENT ends the ongoing phrase enrollment
    session, which is typically done after a sequence of successful
    calls to RECOGNIZE.  This method can be called to commit the new
    phrase to the personal grammar or to abort the phrase enrollment
    session.
 
    The Personal-Grammar-URI, which specifies the grammar to contain the
    new enrolled phrase, will be created if it does not exist. Also, the
    personal grammar may ONLY contain phrases added via a phrase
    enrollment session.
 
    The Phrase-ID passed to this method will be used to identify this
    phrase in the grammar and will be returned as the speech input when
    doing a RECOGNIZE on the grammar. The Phrase-NL similarly will be
    returned in a RECOGNITION-COMPLETE event in the same manner as other
    NL in a grammar. The tag-format of this NL is vendor specific.
 
    If the client has specified Save-Best-Waveform as true, then the
    response after ending the phrase enrollment session should contain
    the location/URI of a recording of the best repetition of the
    learned phrase.
 
    Example:
    C->S:  MRCP/2.0 123 START-PHRASE-ENROLLMENT 543258
           Channel-Identifier: 32AECB23433801@speechrecog
           Num-Min-Consistent-Pronunciations: 2
           Consistency-Threshold: 30
           Clash-Threshold: 12
           Personal-Grammar-URI: <personal grammar uri>
           Phrase-Id: <phrase id>
           Phrase-NL: <NL phrase>
           Weight: 1
           Save-Best-Waveform: true
 
    S->C:  MRCP/2.0 49 543258 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
 
 9.16.     ENROLLMENT-ROLLBACK
 
    The ENROLLMENT-ROLLBACK method discards the last live utterance from
   the RECOGNIZE operation. This method should be invoked when the
   caller provides undesirable input such as non-speech noise, side-
   speech, commands, or an utterance from the RECOGNIZE grammar. Note
   that this method does not provide a stack of rollback states;
   executing ENROLLMENT-ROLLBACK twice in succession, without an
   intervening recognition operation, has no effect the second time.
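   The single-level rollback rule above can be sketched as follows.
   This is a hypothetical server-side model, not part of the protocol;
   the class and method names are illustrative only.

```python
class EnrollmentSession:
    """Hypothetical model of a phrase-enrollment session's rollback rule."""

    def __init__(self):
        self.utterances = []         # accepted repetitions of the phrase
        self.can_rollback = False    # True only right after a RECOGNIZE

    def add_utterance(self, audio):
        # A successful RECOGNIZE during enrollment contributes one repetition.
        self.utterances.append(audio)
        self.can_rollback = True

    def rollback(self):
        # ENROLLMENT-ROLLBACK discards only the most recent utterance.
        # A second rollback without an intervening RECOGNIZE has no effect,
        # since there is no stack of rollback states.
        if self.can_rollback and self.utterances:
            self.utterances.pop()
            self.can_rollback = False
```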
 
    Example:
    C->S:  MRCP/2.0 49 ENROLLMENT-ROLLBACK 543261
           Channel-Identifier: 32AECB23433801@speechrecog
 
    S->C:  MRCP/2.0 49 543261 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
 
 
 9.17.     END-PHRASE-ENROLLMENT
 
    The END-PHRASE-ENROLLMENT method can only be called during an active
    phrase enrollment session, which was started by calling the method
    START-PHRASE-ENROLLMENT.  It may NOT be called during an ongoing
    RECOGNIZE operation. It should be called when successive calls to
    RECOGNIZE have succeeded and Num-Repetitions-Still-Needed has been
   returned as 0 in the RECOGNITION-COMPLETE event, in order to commit
   the new phrase to the grammar.  Alternatively, it can be called by
    specifying the Abort-Phrase-Enrollment header to abort the phrase
    enrollment session.
 
    If the client has specified Save-Best-Waveform as true in the START-
    PHRASE-ENROLLMENT request, then the response should contain the
    location/URI of a recording of the best repetition of the learned
    phrase.
 
    Example:
    C->S:  MRCP/2.0 49 END-PHRASE-ENROLLMENT 543262
           Channel-Identifier: 32AECB23433801@speechrecog
 
 
    S->C:  MRCP/2.0 123 543262 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Waveform-URI: <waveform uri>
 
 
 9.18.     MODIFY-PHRASE
 
    The MODIFY-PHRASE method sent from the client to the server is used
    to change the phrase ID, NL phrase and/or weight for a given phrase
    in a personal grammar.
 
   If no fields are supplied, calling this method has no effect and it
   is silently ignored.
 
 Example:
    C->S:  MRCP/2.0 123 MODIFY-PHRASE 543265
           Channel-Identifier: 32AECB23433801@speechrecog
           Personal-Grammar-URI: <personal grammar uri>
           Phrase-Id: <phrase id>
           New-Phrase-Id: <new phrase id>
           Phrase-NL: <NL phrase>
           Weight: 1
 
    S->C:  MRCP/2.0 49 543265 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
 
 
 
 
 S Shanmugham                  IETF-Draft                       Page 96
 
                            MRCPv2 Protocol              October, 2004
 
 9.19.     DELETE-PHRASE
 
   The DELETE-PHRASE method sent from the client to the server is used
   to delete a phrase in a personal grammar added through voice
   enrollment or text enrollment. If the specified phrase doesn't
   exist, this method has no effect and it is silently ignored.
 
 Example:
    C->S:  MRCP/2.0 123 DELETE-PHRASE 543266
           Channel-Identifier: 32AECB23433801@speechrecog
           Personal-Grammar-URI: <personal grammar uri>
           Phrase-Id: <phrase id>
 
    S->C:  MRCP/2.0 49 543266 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
 
 9.20.     INTERPRET
 
   The INTERPRET method from the client to the server takes as input an
   Interpret-Text header field, containing the text for which a
   semantic interpretation is desired, and returns, via the
   INTERPRETATION-COMPLETE event, an interpretation result very similar
   to the one returned from a RECOGNIZE method invocation.  Only the
   portions of the result relevant to acoustic matching are excluded.
   The Interpret-Text header field MUST be included in the INTERPRET
   request.
 
    Recognizer grammar data is treated in the same way as it is when
    issuing a RECOGNIZE method call.
 
    If a RECOGNIZE, RECORD or another INTERPRET operation is already in
    progress, invoking this method will cause the response to have a
    status code of 402, "Method not valid in this state", and a COMPLETE
    request state.
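   The framing of an INTERPRET request can be sketched as follows. The
   helper below is hypothetical (not part of the protocol); it assumes
   the second token of the start-line counts the bytes of the entire
   message, which makes the length a small fixed-point computation.

```python
def build_interpret_request(request_id, channel_id, text):
    """Assemble a bodyless MRCPv2 INTERPRET request as a string."""
    headers = (f"Channel-Identifier: {channel_id}\r\n"
               f"Interpret-Text: {text}\r\n"
               "\r\n")
    # The message-length field's own width contributes to the total
    # length, so iterate until the value stabilizes.
    length = 0
    while True:
        start_line = f"MRCP/2.0 {length} INTERPRET {request_id}\r\n"
        total = len(start_line) + len(headers)
        if total == length:
            break
        length = total
    return start_line + headers
```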
 
 Example:
 
    C->S:  MRCP/2.0 123 INTERPRET 543266
           Channel-Identifier: 32AECB23433801@speechrecog
           Interpret-Text: may I speak to Andre Roy
           Content-Type: application/srgs+xml
           Content-Id: <request1@form-level.store>
           Content-Length: 104
 
           <?xml version="1.0"?>
           <!-- the default grammar language is US English -->
           <grammar xml:lang="en-US" version="1.0">
              <!-- single language attachment to tokens -->
                <rule id="yes">
                   <one-of>
                     <item xml:lang="fr-CA">oui</item>
                     <item xml:lang="en-US">yes</item>
                   </one-of>
                </rule>
 
              <!-- single language attachment to a rule expansion -->
                <rule id="request">
                     may I speak to
                     <one-of xml:lang="fr-CA">
                          <item>Michel Tremblay</item>
                          <item>Andre Roy</item>
                     </one-of>
                </rule>
           </grammar>
 
    S->C:  MRCP/2.0 49 543266 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speechrecog
 
   S->C:  MRCP/2.0 465 INTERPRETATION-COMPLETE 543266 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Completion-Cause: 000 success
           Content-Type: application/x-nlsml
           Content-Length: 276
 
           <?xml version="1.0"?>
           <result   x-model="http://IdentityModel"
                     xmlns:xf="http://www.w3.org/2000/xforms"
                     grammar="session:request1@form-level.store">
                <interpretation>
                     <xf:instance name="Person">
                          <Person>
                               <Name> Andre Roy </Name>
                          </Person>
                     </xf:instance>
                     <input>   may I speak to Andre Roy </input>
                </interpretation>
           </result>
 
 9.21.     INTERPRETATION-COMPLETE
 
    This event from the recognition resource to the client indicates
    that the INTERPRET operation is complete.  The interpretation result
    is sent in the body of the MRCP message.  The request state MUST be
    set to COMPLETE.
 
   The Completion-Cause header field MUST be included in this event and
   MUST be set to an appropriate value from the list of cause codes.
 
 Example:
 
    C->S:  MRCP/2.0 123 INTERPRET 543266
           Channel-Identifier: 32AECB23433801@speechrecog
           Interpret-Text: may I speak to Andre Roy
           Content-Type: application/srgs+xml
           Content-Id: <request1@form-level.store>
           Content-Length: 104
 
           <?xml version="1.0"?>
           <!-- the default grammar language is US English -->
           <grammar xml:lang="en-US" version="1.0">
              <!-- single language attachment to tokens -->
                <rule id="yes">
                   <one-of>
                     <item xml:lang="fr-CA">oui</item>
                     <item xml:lang="en-US">yes</item>
                   </one-of>
                </rule>
 
              <!-- single language attachment to a rule expansion -->
                <rule id="request">
                     may I speak to
                     <one-of xml:lang="fr-CA">
                          <item>Michel Tremblay</item>
                          <item>Andre Roy</item>
                     </one-of>
                </rule>
           </grammar>
 
    S->C:  MRCP/2.0 49 543266 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speechrecog
 
   S->C:  MRCP/2.0 465 INTERPRETATION-COMPLETE 543266 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Completion-Cause: 000 success
           Content-Type: application/x-nlsml
           Content-Length: 276
 
           <?xml version="1.0"?>
           <result   x-model="http://IdentityModel"
                     xmlns:xf="http://www.w3.org/2000/xforms"
                     grammar="session:request1@form-level.store">
                <interpretation>
                     <xf:instance name="Person">
                          <Person>
                               <Name> Andre Roy </Name>
                          </Person>
                     </xf:instance>
                     <input>   may I speak to Andre Roy </input>
                </interpretation>
           </result>
 
 
 9.22.     DTMF Detection
 
   Digits received as DTMF tones are delivered to the automatic speech
   recognition (ASR) engine in the RTP stream according to RFC 2833.
   The ASR MUST support RFC 2833 to recognize digits, and it MAY
   additionally support recognizing DTMF tones in the audio stream
   itself.
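   For illustration, a minimal decoder for the 4-byte telephone-event
   payload defined by RFC 2833 (event code, E/R bits and volume, then a
   16-bit duration in network byte order) might look like this; the
   function name is hypothetical.

```python
import struct

# Named events 0-15 per RFC 2833: the DTMF digits, *, #, and A-D.
DTMF_EVENTS = "0123456789*#ABCD"

def parse_telephone_event(payload: bytes):
    """Decode a 4-byte RFC 2833 telephone-event payload."""
    event, flags_volume, duration = struct.unpack("!BBH", payload)
    return {
        "digit": DTMF_EVENTS[event] if event < 16 else None,
        "end": bool(flags_volume & 0x80),  # E bit: end of the event
        "volume": flags_volume & 0x3F,     # power level, 0-63 dBm0
        "duration": duration,              # in RTP timestamp units
    }
```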
 
 10.  Recorder Resource
   This resource captures received audio and video and stores it in a
   file. Its main applications are capturing speech audio that may be
   submitted for recognition at a later time, and recording voice or
   video mail. Both applications require functionality above and beyond
   that specified by protocols such as RTSP, such as audio end-pointing
   (i.e., detecting speech or silence). Detection of speech or silence
   may be required to start or stop recording. Support for video is
   optional and is mainly for capturing video mail, which may require
   the speech or audio processing mentioned above.
 
 10.1.     Recorder State Machine
 
                Idle                   Recording
                State                  State
                 |                       |
                 |---------RECORD------->|
                 |                       |
                 |<------STOP------------|
                 |                       |
                 |<--RECORD-COMPLETE-----|
                 |                       |
                 |              |--------|
                 |       START-OF-SPEECH |
                 |              |------->|
                 |                       |
 
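   The state diagram above can be expressed as a transition table. This
   is an illustrative model only; the states and messages come from
   this section, while the Python names are hypothetical.

```python
# (state, message) -> resulting state, per the recorder state machine.
TRANSITIONS = {
    ("Idle", "RECORD"): "Recording",
    ("Recording", "STOP"): "Idle",
    ("Recording", "RECORD-COMPLETE"): "Idle",
    ("Recording", "START-OF-SPEECH"): "Recording",  # event; no change
}

def next_state(state, message):
    """Return the new state, rejecting messages invalid in this state."""
    try:
        return TRANSITIONS[(state, message)]
    except KeyError:
        # e.g. RECORD while already Recording: the server would answer
        # with status 402, "Method not valid in this state".
        raise ValueError(f"{message} not valid in state {state}")
```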
 
 
 10.2.     Recorder Methods
    The recorder supports the following methods.
 
     recorder-Method     =    "RECORD"              ; A
                          /    "STOP"                ; B
                          /    "START-INPUT-TIMERS"  ; C
 
 
 10.3.     Recorder Events
 
    The recorder may generate the following events.
 
      recorder-Event      =    "START-OF-SPEECH"    ; D
                          /    "RECORD-COMPLETE"    ; E
 
 
 10.4.     Recorder Header Fields
 
   A recorder message may contain header fields containing request
   options and information that augment the Method, Response or Event
   message it is associated with.
 
      recorder-header     =    sensitivity-level
                          /    no-input-timeout
                          /    completion-cause
                          /    completion-reason
                          /    failed-uri
                          /    failed-uri-cause
                          /    record-uri
                          /    media-type
                          /    max-time
                          /    final-silence
                          /    capture-on-speech
                          /    ver-buffer-utterance
                          /    start-input-timers
                          /    new-audio-channel
 
    Header field          where     s g A B C D E
    _______________________________________________
    Sensitivity-Level       R       o o o - - - -
    No-Input-Timeout        R       o o o - - - -
    Completion-Cause        R       - - - - - - m
    Completion-Cause       2XX      - - - o - - -
    Completion-Cause       4XX      - - - m - - -
    Completion-Reason       R       - - - - - - m
    Completion-Reason      2XX      - - - o - - -
    Completion-Reason      4XX      - - - m - - -
    Start-Input-Timers      R       - - - o - - -
    Fetch-Timeout           R       o o o - - - -
    Failed-URI              R       - - - - - - o
    Failed-URI             4XX      - - o - - - -
    Failed-URI-Cause        R       - - - - - - o
    Failed-URI-Cause       4XX      - - o - - - -
    New-Audio-Channel       R       - - o - - - -
   Ver-Buffer-Utterance    R       - - o - - - -
    Capture-On-Speech       R       o o o - - - -
    Media-Type              R       - - m - - - -
    Max-Time                R       o o o - - - -
    Final-Silence           R       o o o - - - -
    Record-URI              R       - - m - - - -
 
 
   Legend:   (s) - SET-PARAMS, (g) - GET-PARAMS, (A) - RECORD, (B) -
   STOP, (C) - START-INPUT-TIMERS, (D) - START-OF-SPEECH, (E) - RECORD-
   COMPLETE, (o) - Optional (refer to text for further constraints),
   (m) - Mandatory, (R) - Request, (r) - Response
 
 
 Sensitivity Level
 
    To filter out background noise and not mistake it for speech, the
    recorder may support a variable level of sound sensitivity. The
    sensitivity-level header allows the client to set this value on the
    recorder. This header field MAY occur in RECORD, SET-PARAMS or GET-
    PARAMS. A higher value for this field means higher sensitivity. The
    default value for this field is platform specific.
 
      sensitivity-level   =    "Sensitivity-Level" ":" 1*DIGIT CRLF
 
 No Input Timeout
 
   When the recorder is started and no speech is detected for a certain
   period of time, the recorder can send a RECORD-COMPLETE event to the
   client and terminate the record operation. The No-Input-Timeout
   header field can set this timeout value. The value is in
   milliseconds. This header field MAY occur in RECORD, SET-PARAMS or
   GET-PARAMS. The value for this field ranges from 0 to MAXTIMEOUT,
   where MAXTIMEOUT is platform specific. The default value for this
   field is platform specific.
 
      no-input-timeout    =    "No-Input-Timeout" ":" 1*DIGIT CRLF
 
 Completion Cause
 
   This header field MUST be part of a RECORD-COMPLETE event coming
   from the recorder resource to the client. It indicates the reason
   behind the RECORD method completion. This header field MUST also be
   sent in RECORD responses that return with a failure status and a
   COMPLETE state.
 
      completion-cause    =    "Completion-Cause" ":" 1*DIGIT SP
                               1*VCHAR CRLF
 
      Cause-Code Cause-Name         Description
 
        000     success-silence     RECORD completed with a silence at
                                    the end
        001     success-maxtime     RECORD completed after reaching
                                    Maximum recording time specified in
                                    record method.
        002     noinput-timeout     RECORD failed due to no input
        003     uri-failure         Failure accessing the record URI.
        004     error               RECORD request terminated
                                    prematurely due to a recorder error.
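   A client-side sketch of splitting this header into its numeric code
   and cause name follows. The function name is hypothetical; the cause
   table is the one above.

```python
# Recorder completion causes from the table above.
RECORDER_CAUSES = {
    0: "success-silence",
    1: "success-maxtime",
    2: "noinput-timeout",
    3: "uri-failure",
    4: "error",
}

def parse_completion_cause(header_line: str):
    """Split 'Completion-Cause: 000 success-silence' into (code, name)."""
    _, _, value = header_line.partition(":")
    code_str, _, name = value.strip().partition(" ")
    code = int(code_str)
    # Sanity-check the name against the table when the code is known.
    assert RECORDER_CAUSES.get(code, name) == name
    return code, name
```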
 
 Completion Reason
 
   This header field MAY be specified in a RECORD-COMPLETE event coming
   from the recorder resource to the client. It contains text
   describing the reason behind the RECORD request completion and can
   be used to communicate the reason for a failure.
 
      completion-reason   =    "Completion-Reason" ":"
                               quoted-string CRLF
 
 Failed URI
 
   When a record method needs to post the audio to a URI and access to
   the URI fails, the server SHOULD provide the failed URI in this
   header field in the method response.
 
      failed-uri               =    "Failed-URI" ":" Uri CRLF
 
 Failed URI Cause
 
   When a record method needs to post the audio to a URI and access to
   the URI fails, the server SHOULD provide the URI-specific or
   protocol-specific response code through this header field in the
   method response. This field has been defined as alphanumeric to
   accommodate all protocols, some of which might have a response
   string instead of a numeric response code.
 
      failed-uri-cause         =    "Failed-URI-Cause" ":" 1*ALPHANUM
                                    CRLF
 
 Record URI
 
   If a RECORD method contains this header field, the server MUST
   capture the audio and store it. If the header field is present but
   empty, the server MUST store the audio locally and generate a URI
   that points to it; this URI is then returned in the STOP response or
   the RECORD-COMPLETE event. If the header field specifies a URI, the
   server MUST capture and store the audio at that location. If this
   header field is not present in the RECORD message, the server MUST
   capture the audio and send it in the STOP response or the RECORD-
   COMPLETE event as a message body. In this case, the message carrying
   the audio content has this header field with a cid value pointing to
   the Content-ID of the message body part.
 
      record-uri               =    "Record-URI" ":" Uri CRLF
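   The three Record-URI cases above can be summarized in a small
   decision function (an illustrative client/server-agnostic sketch;
   the names are hypothetical):

```python
def record_disposition(record_uri):
    """Decide how recorded audio is delivered, per the Record-URI rules.

    record_uri is None when the header field is absent, "" when it is
    present but empty, and a URI string otherwise.
    """
    if record_uri is None:
        # No header field: audio travels in the STOP response or
        # RECORD-COMPLETE event body, referenced by a cid value.
        return "in-message-body"
    if record_uri == "":
        # Empty header field: the server stores the audio locally and
        # generates a URI pointing to it.
        return "server-generated-uri"
    # Explicit URI: the server stores the audio at that location.
    return "client-specified-uri"
```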
 
 Media Type
 
   A RECORD method MUST contain this header field, which specifies to
   the server the file format in which to store the captured audio or
   video.
 
      Media-type               =    "Media-Type" ":" media-type CRLF
 
 Max Time
 
   When the recorder is started, this header field specifies the
   maximum length of the recording, calculated from the time the actual
   capture and store begins, which is not necessarily the time the
   RECORD method is received. After this time the recording stops, and
   the server MUST return a RECORD-COMPLETE event to the client with a
   request-state of "COMPLETE". This header field MAY occur in RECORD,
   SET-PARAMS or GET-PARAMS. The value for this field ranges from 0 to
   MAXTIMEOUT, where MAXTIMEOUT is platform specific. A value of zero
   means infinity, and hence the recording will continue until one of
   the other stop conditions is met. The default value for this field
   is 0.
 
      max-time  =    "Max-Time" ":" 1*DIGIT CRLF
 
 Final Silence
 
   When the recorder is started and the actual capture begins, this
   header field specifies the length of silence in the audio that is to
   be interpreted as the end of the recording. This header field MAY
   occur in RECORD, SET-PARAMS or GET-PARAMS. The value for this field
   ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific.
   A value of zero means infinity, and hence the recording will
   continue until one of the other stop conditions is met. The default
   value for this field is platform specific.
 
      final-silence  =    "Final-Silence" ":" 1*DIGIT CRLF
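   Taken together, Max-Time and Final-Silence define when a recording
   stops on its own. A minimal sketch, assuming both values are in
   milliseconds and that 0 disables the corresponding condition:

```python
def recording_should_stop(elapsed_ms, trailing_silence_ms,
                          max_time_ms, final_silence_ms):
    """Apply the Max-Time and Final-Silence stop conditions.

    Returns the completion-cause name that applies, or None if the
    recording should continue. A value of 0 for either header field
    means "no limit" for that condition.
    """
    if final_silence_ms and trailing_silence_ms >= final_silence_ms:
        return "success-silence"
    if max_time_ms and elapsed_ms >= max_time_ms:
        return "success-maxtime"
    return None
```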
 
 Capture On Speech
 
   When the recorder is started, this header field specifies whether
   the recorder should start capturing immediately (false) or wait for
   the end-pointing functionality to detect speech (true) before it
   starts capturing. This header field MAY occur in RECORD, SET-PARAMS
   or GET-PARAMS. The value for this field is a Boolean. The default
   value for this field is false.

     capture-on-speech     =    "Capture-On-Speech" ":"
                                boolean-value CRLF
 
 Ver-Buffer-Utterance
 
    This header field is the same as the one described for the
    Verification resource. This tells the server to buffer the utterance
    associated with this recording request into the verification buffer.
    Sending this header field is not valid if the verification buffer is
    not instantiated for the session. This buffer is shared across
    resources within a session and gets instantiated when a verification
    resource is added to this session and is released when the resource
    is released from the session.
 
 Start Input Timers
 
   This header field MAY be sent as part of the RECORD request. A value of
    false tells the recorder resource to start the operation, but not to
    start the no-input timer yet. The recorder resource should not start
    the timers until the client sends a START-INPUT-TIMERS request to
    the recorder resource. This is useful in the scenario when the
    recorder and synthesizer resources are not part of the same session.
    Here when a kill-on-barge-in prompt is being played, you may want
    the RECORD request to be simultaneously active so that it can detect
    and implement kill-on-barge-in. But at the same time you don't want
    the recorder resource to start the no-input timers until the prompt
    is finished. The default value is "true".
 
      start-input-timers  =    "Start-Input-Timers" ":"
                                    boolean-value CRLF
 
 New Audio Channel
 
    This header field is the same as the one described for the
    Recognizer resource.
 
 
 
 10.5.     Recorder Message Body
   The STOP response or the RECORD-COMPLETE event MAY contain a message
   body carrying the captured audio. This happens if the RECORD method
   did not have a Record-URI header field. In this case, the message
   carrying the audio content has a Record-URI header field with a cid
   value pointing to the message part that contains the recorded audio.
 
 10.6.     RECORD
   The RECORD method moves the recorder resource to the recording
   state. Depending on the header fields specified in the RECORD method,
   the resource may start recording the audio immediately or wait for
   the end-pointing functionality to detect speech in the audio. It
   then saves the audio to the URI supplied in the Record-URI header
   field. If the Record-URI is not specified, the server MUST capture
   the media onto a local disk and return a URI pointing to the
   recorded audio in the RECORD-COMPLETE event. The server MUST support
   HTTP and file URI schemes.
 
    If a RECORD operation is already in progress, invoking this method
    will cause the response to have a status code of 402, "Method not
    valid in this state", and a COMPLETE request state.
 
   If the Record-URI is not valid, a status code of 404, "Illegal
   Value for Header", will be returned in the response. If it is
    impossible for the server to create the requested file, a status
    code of 407, "Method or Operation Failed", will be returned.
 
   When the recording operation is initiated, the response will
   indicate an IN-PROGRESS request state.  The server MAY generate a
   subsequent START-OF-SPEECH event when speech is detected.  Upon
   completion of the recording operation, the server will generate a
   RECORD-COMPLETE event.
 
    Example:
 
           C->S:MRCP/2.0 386 RECORD 543257
                Channel-Identifier: 32AECB23433802@recorder
                Record-URI: file://mediaserver/recordings/myfile.wav
                Capture-On-Speech: true
                Final-Silence: 300
                Max-Time: 6000
 
           S->C:MRCP/2.0 48 456234 200 IN-PROGRESS
                Channel-Identifier: 32AECB23433802@recorder
 
          S->C:MRCP/2.0 49 START-OF-SPEECH 456234 IN-PROGRESS
                Channel-Identifier: 32AECB23433802@recorder
 
          S->C:MRCP/2.0 54 RECORD-COMPLETE 456234 COMPLETE
                Channel-Identifier: 32AECB23433802@recorder
                Completion-Cause: 000 success-silence
                Record-URI: file://mediaserver/recordings/myfile.wav
 
 10.7.     STOP
    The STOP method moves the recorder from the recording state back to
    the idle state. If the recording was a success the STOP response
    contains a Record-URI header pointing to the recorded audio file on
    the server or to a MIME part in the body of the message containing
    the recorded audio file. The STOP method may have a Trim-Length
    header field, in which case the specified length of audio is trimmed
    from the end of the recording after the stop.
 
 
    Example:
 
           C->S:MRCP/2.0 386 RECORD 543257
                Channel-Identifier: 32AECB23433802@recorder
                Record-URI: file://mediaserver/recordings/myfile.wav
                Capture-On-Speech: true
                Final-Silence: 300
                Max-Time: 6000
 
           S->C:MRCP/2.0 48 456234 200 IN-PROGRESS
                Channel-Identifier: 32AECB23433802@recorder
 
 
          S->C:MRCP/2.0 49 START-OF-SPEECH 456234 IN-PROGRESS
                Channel-Identifier: 32AECB23433802@recorder
 
           C->S:MRCP/2.0 386 STOP 543257
                Channel-Identifier: 32AECB23433802@recorder
                Trim-Length: 200
 
           S->C:MRCP/2.0 48 456234 200 COMPLETE
                Channel-Identifier: 32AECB23433802@recorder
                Completion-Cause: 000 success
                Record-URI: file://mediaserver/recordings/myfile.wav
 
 
 10.8.     RECORD-COMPLETE
   If the recording completes due to no input, silence after speech, or
   max-time, the server MUST generate the RECORD-COMPLETE event to the
   client with a request-state of "COMPLETE". If the recording was a
   success, the RECORD-COMPLETE event contains a Record-URI header
   field pointing to the recorded audio file on the server or to a MIME
   part in the body of the message containing the recorded audio file.
 
    Example:
 
           C->S:MRCP/2.0 386 RECORD 543257
                Channel-Identifier: 32AECB23433802@recorder
                Record-URI: file://mediaserver/recordings/myfile.wav
                Capture-On-Speech: true
                Final-Silence: 300
                Max-Time: 6000
 
           S->C:MRCP/2.0 48 456234 200 IN-PROGRESS
                Channel-Identifier: 32AECB23433802@recorder
 
          S->C:MRCP/2.0 49 START-OF-SPEECH 456234 IN-PROGRESS
                Channel-Identifier: 32AECB23433802@recorder
 
           S->C:MRCP/2.0 48 RECORD-COMPLETE 456234 COMPLETE
                Channel-Identifier: 32AECB23433802@recorder
                Completion-Cause: 000 success
                Record-URI: file://mediaserver/recordings/myfile.wav
 
 
 10.9.     START-INPUT-TIMERS
 
    This request is sent from the client to the recorder resource when
    it knows that a kill-on-barge-in prompt has finished playing. This
    is useful in the scenario when the recorder and synthesizer
    resources are not in the same session. Here when a kill-on-barge-in
    prompt is being played, you want the RECORD request to be
    simultaneously active so that it can detect and implement kill on
    barge-in. But at the same time you don't want the recorder resource
    to start the no-input timers until the prompt is finished. The
    Start-Input-Timers header field in the RECORD request allows the
    client to indicate whether the timers should be started. In the
    above case, the recorder resource should not start the timers until
    the client sends a START-INPUT-TIMERS request to the recorder.
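    For example (an illustrative exchange; the channel identifier,
    request IDs, and message lengths below are hypothetical):

           C->S:MRCP/2.0 56 START-INPUT-TIMERS 543258
                Channel-Identifier: 32AECB23433802@recorder

           S->C:MRCP/2.0 48 543258 200 COMPLETE
                Channel-Identifier: 32AECB23433802@recorder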
 
 11.  Speaker Verification and Identification
 
    This section describes the methods, responses, and events needed to
    perform speaker verification and identification.
 
    Speaker verification is a voice authentication feature that can be
    used to identify the speaker in order to grant the user access to
    sensitive information and transactions.  To do this, a recorded
    utterance is compared to a voiceprint previously stored for that
    user.  Verification consists of two phases: a designation phase to
    establish the claimed identity of the caller and an execution phase
    in which a voiceprint is either created (training) or used to
    authenticate the claimed identity (verification). The resource name
    is 'speakverify'.
 
    Speaker identification identifies the speaker from a set of valid
    users, such as family members.  It is also sometimes referred to as
    Multi-Verification. Identification can be performed on
    a small set of users or for a large population.  This feature is
    useful for applications where multiple users share the same account
    number, but where the individual speaker must be uniquely identified
    from the group.  Speaker identification is also done in two phases,
    a designation phase and an execution phase.
 
    A speaker verification resource may share the same session as an
    existing recognizer resource, or a speaker verification session may
    be set up to operate in standalone mode, without a recognizer
    resource sharing the same session.  In order to share the same
    session, the SDP/SIP INVITE message for the verification resource
    MUST also include the recognizer resource request.  Otherwise, an
    independent verification resource, running on the same physical
    server or a separate one, will be set up.
 
    Some of the speaker verification methods, described below, apply
    only to a specific mode of operation.
 
    The verification resource supports buffering that allows the client
    to buffer the verification data from an utterance and then process
    that utterance later.  This differs from collecting waveforms and
    processing them with the VERIFY method, which operates directly on
    the incoming audio stream, because the buffering mechanism does not
    simply accumulate utterance data in a buffer.  The buffer is owned
    by the verification resource but shares write access with other
    input resources such as the recognizer and recorder resources. When
    both the recognition and verification resources share the same
    session, additional information gathered by the recognition resource
    may be saved with these buffers to improve verification performance.
    The buffer can be cleared by a CLEAR-BUFFER request from the client
    and is freed when the 'speakverify' resource is freed.
 
 
 
 11.1.     Speaker Verification State Machine
 
    Speaker Verification has the concept of a training or verification
    session.  Starting one of these sessions does not change the state
    of the verification resource, i.e., it remains IDLE.  Once a
    verification or training session is started, utterances are trained
    or verified by calling the VERIFY or VERIFY-FROM-BUFFER method.  The
    state of the Speaker Verification resource goes from IDLE to the
    VERIFYING state each time VERIFY or VERIFY-FROM-BUFFER is called.
 
    As mentioned above, the verification resource has a verification
    buffer associated with it. This allows the buffering of speech
    utterances for the purposes of verification, identification or
    training from the buffered speech. This buffer is owned by the
    verification resource but other input resources such as the
    recognition resource or recorder resource share write access to it.
    This allows the speech received as part of a recognition or
    recording scenario to be later used for verification, identification
    or training.
 
    Note that access to the buffer is limited to one operation at a
    time. Hence, while a resource is performing a read, write, or delete
    operation on the buffer, such as a RECOGNIZE with
    Ver-Buffer-Utterance turned on, another operation involving the
    buffer, such as a CLEAR-BUFFER, will fail with a status of 402.
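    As an illustrative sketch of the buffering use case (the channel
    identifier, request IDs, and message lengths are hypothetical), a
    client that previously issued a RECOGNIZE with Ver-Buffer-Utterance
    set to "true" might later request verification from the buffer:

           C->S:MRCP/2.0 58 VERIFY-FROM-BUFFER 543261
                Channel-Identifier: 32AECB23433802@speakverify

           S->C:MRCP/2.0 48 543261 200 IN-PROGRESS
                Channel-Identifier: 32AECB23433802@speakverify

           S->C:MRCP/2.0 77 VERIFICATION-COMPLETE 543261 COMPLETE
                Channel-Identifier: 32AECB23433802@speakverify
                Completion-Cause: 000 success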
 
 11.2.     Speaker Verification Methods
 
    Speaker Verification supports the following methods.
      verification-method  = "START-SESSION"      ; A
                          / "END-SESSION"         ; B
                          / "QUERY-VOICEPRINT"    ; C
                          / "DELETE-VOICEPRINT"   ; D
                          / "VERIFY"              ; E
                          / "VERIFY-FROM-BUFFER"  ; F
                          / "VERIFY-ROLLBACK"     ; G
                          / "STOP"                ; H
                          / "CLEAR-BUFFER"        ; I
                          / "START-INPUT-TIMERS"  ; J
                          / "GET-INTERMEDIATE-RESULTS" ; K
 
    These methods allow the client to control the mode and target of
    verification or identification operations within the context of a
    session. All the verification input cycles that occur within a
    session may be used to create, update, or validate against the
    voiceprint specified during the session. At the beginning of each
    session the verification resource is reset to a known state.
 
    Verification/identification operations can be executed against live
    or buffered audio. The verification resource provides methods for
 
    collecting and evaluating live audio data, and methods for
    controlling the verification resource and adjusting its configured
    behavior.
 
    There are no specific methods for collecting buffered audio data.
    This is accomplished by calling VERIFY, RECOGNIZE or RECORD as
    appropriate for the resource, with the Ver-Buffer-Utterance header
    field. Then, when the following method is called, verification is
    performed using the set of buffered audio.
 
           1. VERIFY-FROM-BUFFER
 
    The following methods provide controls for verification of live
    audio utterances :
 
           1. VERIFY
           2. START-INPUT-TIMERS
 
    The following methods provide controls for configuring the
    verification resource and for establishing resource states :
 
           1. START-SESSION
           2. END-SESSION
           3. QUERY-VOICEPRINT
           4. DELETE-VOICEPRINT
           5. VERIFY-ROLLBACK
           6. STOP
           7. CLEAR-BUFFER
 
    The following method allows polling of a verification in progress
    for intermediate results.

           1. GET-INTERMEDIATE-RESULTS
 
 11.3.     Verification Events
 
    Speaker Verification may generate the following events.
 
      verification-event   =  "VERIFICATION-COMPLETE" ; L
                          /   "START-OF-SPEECH"       ; M
 
 11.4.     Verification Header Fields
 
    A Speaker Verification request may contain header fields containing
    request options and information to augment the Request, Response or
    Event message it is associated with.
 
    verification-header  =     repository-uri
                          /    voiceprint-identifier
                          /    verification-mode
                          /    adapt-model
                          /    abort-model
                          /    security-level
                          /    num-min-verification-phrases
                          /    num-max-verification-phrases
                          /    no-input-timeout
                          /    save-waveform
                          /    waveform-uri
                          /    voiceprint-exists
                          /    ver-buffer-utterance
                          /    input-waveform-uri
                          /    completion-cause
                          /    completion-reason
                          /    speech-complete-timeout
                          /    new-audio-channel
                          /    abort-verification
                          /    start-input-timers
 
 
 
    Header field          where    s g A B C D E F G H I J K L M
    _____________________________________________________________
    Repository-URI          R      - - m - m m - - - - - - - - -
    Voiceprint-Identifier   R      - - m - m m - - - - - - - - -
    Verification-Mode       R      o o o - - - - - - - - - - - -
    Adapt-Model             R      o o o - - - - - - - - - - - -
    Abort-Model             R      - - - o - - - - - - - - - - -
    Security-Level          R      o o o - - - - - - - - - - - -
    Num-Min-Verification-P. R      o o o - - - - - - - - - - - -
    Num-Max-Verification-P. R      o o o - - - - - - - - - - - -
    No-Input-Timeout        R      o o - - - - o - - - - - - - -
    Save-Waveform           R      o o - - - - o - - - - - - - -
    Waveform-URI            R      - - - - - - - - - - - - - o -
    Input-Waveform-URI      R      - - - - - - o - - - - - - - -
    Ver-Buffer-Utterance    R      o o - - - - o - - - - - - - -
    Completion-Cause        R      - - - - - - - - - - - - - m -
    Completion-Cause       2XX     - - - - m m - o - - - - - - -
    Completion-Cause       4XX     - - - - m m m m - - - - - - -
    Completion-Reason       R      - - - - - - - - - - - - - m -
    Completion-Reason      2XX     - - - - m m - o - - - - - - -
    Completion-Reason      4XX     - - - - m m m m - - - - - - -
    Start-Input-Timers      R      - - - - - - o - - - - - - - -
    Fetch-Timeout           R      o o o o - - - - - - - - - - -
    Failed-URI              R      - - - - - - - - - - - - - o -
    Failed-URI             4XX     - - o o - - - - - - - - - - -
    Failed-URI-Cause        R      - - - - - - - - - - - - - o -
    Failed-URI-Cause       4XX     - - o o - - - - - - - - - - -
    New-Audio-Channel       R      - - - o - - o - - - o - - - -
    Abort-Verification      R      - - - - - - - - - m - - - - -
    Speech-Complete-Timeout R      o o - - - - o - - - - - - - -
    Voiceprint-Exists      2XX     - - - - m m - - - - - - - - -
 
    Legend:   (s) - SET-PARAMS, (g) - GET-PARAMS, (A) - START-SESSION,
    (B) - END-SESSION, (C) - QUERY-VOICEPRINT, (D) - DELETE-VOICEPRINT,
    (E) - VERIFY, (F) - VERIFY-FROM-BUFFER, (G) - VERIFY-ROLLBACK, (H) -
    STOP, (I) - CLEAR-BUFFER, (J) - START-INPUT-TIMERS, (K) - GET-
    INTERMEDIATE-RESULTS, (L) - VERIFICATION-COMPLETE, (M) - START-OF-
    SPEECH, (o) - Optional (refer to the text for further constraints),
    (R) - Request, (r) - Response
 
 
 Repository-URI
 
    This header field specifies the voiceprint repository to be used or
    referenced during speaker verification or identification operations.
    This header field is required in the START-SESSION, QUERY-VOICEPRINT,
    and DELETE-VOICEPRINT methods.
 
      repository-uri = "Repository-URI" ":" Uri CRLF
 
 Voiceprint-Identifier
 
    This header field specifies the claimed identity for voice
    verification applications.  The claimed identity may be used to
    specify an existing voiceprint or to establish a new voiceprint.
    This header field is required in the QUERY-VOICEPRINT and
    DELETE-VOICEPRINT methods. The Voiceprint-Identifier is required in
    the START-SESSION method for verification operations. For
    identification or Multi-Verification operations, this header field
    may contain a list of voiceprint identifiers separated by
    semicolons. For identification operations, a voiceprint group
    identifier may be specified instead of a list of voiceprint
    identifiers. All voiceprint group identifiers have an extension of
    ".vpg". The creation of such group identifier objects is left to
    mechanisms outside this protocol.
 
      voiceprint-identifier =  "Voiceprint-Identifier" ":"
                               1*VCHAR "." 3VCHAR
                               *[";" 1*VCHAR "." 3VCHAR] CRLF
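    For illustration (the identifiers below are hypothetical), a single
    voiceprint, a semicolon-separated list for identification, and a
    voiceprint group identifier might look like:

        Voiceprint-Identifier: johnsmith.vpt
        Voiceprint-Identifier: johnsmith.vpt;marysmith.vpt
        Voiceprint-Identifier: smith-family.vpg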
 
 Verification-Mode
 
    This header field specifies the mode of the verification resource
    and is set in the START-SESSION method. Acceptable values indicate
    whether the verification session should train a voiceprint ("train")
    or verify/identify using an existing voiceprint ("verify").
 
    Training and verification sessions both require the voiceprint
    Repository-URI to be specified in the START-SESSION.  In many usage
    scenarios, however, the system cannot know the speaker's claimed
    identity until the speaker says, for example, their account number.
    In order to allow the first few utterances of a dialog to be both
    recognized and verified, the verification resource on the MRCP
    server retains an audio buffer. In this audio buffer, the MRCP
    server will accumulate recognized utterances in memory.  The
    application can later execute a verification method and apply the
    buffered utterances to the current verification session. The
    buffering methods are used for this purpose. When buffering is used,
    subsequent input utterances are added to the audio buffer for later
    analysis.
 
    Some voice user interfaces may require additional user input that
    should not be analyzed for verification. For example, the user's
    input may have been recognized with low confidence and thus require
    a confirmation cycle. In such cases, the client should not execute
    the VERIFY or VERIFY-FROM-BUFFER methods to collect and analyze the
    caller's input. A separate recognizer resource can analyze the
    caller's response without any participation on behalf of the
    verification resource.
 
    Once the following conditions have been met:
    1. the voiceprint identity has been successfully established through
       the voiceprint identifier header fields of the QUERY-VOICEPRINT
       method, and
    2. the verification mode has been set to one of "train" or "verify",
    the verification resource may begin providing verification
    information during verification operations. The verification
    resource MUST reach one of the two major states ("train" or
    "verify") if the above two conditions hold, or it MUST report an
    error condition in the MRCP status code to indicate why the
    verification resource is not ready for action.
 
    The value of verification-mode is persistent within a verification
    session. Changing the mode to a different value than the previous
    setting causes the verification resource to report an error if the
    previous setting was either "train" or "verify". If the mode is
    changed back to its previous value, the operation may continue.

      verification-mode = "Verification-Mode" ":"
                           verification-mode-string CRLF
      verification-mode-string = "train"
                               / "verify"
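    A hypothetical START-SESSION request beginning a verification
    session (the repository URI, voiceprint identifier, channel
    identifier, and message lengths are illustrative only):

           C->S:MRCP/2.0 181 START-SESSION 543262
                Channel-Identifier: 32AECB23433802@speakverify
                Repository-URI: file://mediaserver/voiceprintdb/
                Voiceprint-Identifier: johnsmith.vpt
                Verification-Mode: verify

           S->C:MRCP/2.0 48 543262 200 COMPLETE
                Channel-Identifier: 32AECB23433802@speakverify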
 
 
 Adapt-Model
 
    This header field indicates the desired behavior of the verification
    resource after a successful verification execution. If the value of
    this header field is "true", the audio collected during the
    verification session may be used to update the voiceprint to account
    for ongoing changes in a speaker's incoming speech characteristics.
    If the value is "false" (the default), the voiceprint is not updated
    with the latest audio. This header field MAY only occur in the
    START-SESSION method.
 
      adapt-model = "Adapt-Model" ":" Boolean-value CRLF
 
 
 
 Abort-Model
 
    The Abort-Model header field indicates the desired behavior of the
    verification resource upon session termination. If the value of this
    header is "true", the pending changes to a voiceprint due to
    verification training or verification adaptation are discarded. If
    the value is "false" (the default), the pending changes for a
    training session or a successful verification session are committed
    to the voiceprint repository. A value of "true" for Abort-Model
    overrides a value of "true" for the Adapt-Model header field. This
    header field MAY only occur in the END-SESSION method.
 
      abort-model = "Abort-Model" ":" Boolean-value CRLF
 
 
 
 Security-Level
 
    The Security-Level header field determines the range of verification
    scores in which a decision of 'accepted' may be declared. This
    header field MAY occur in SET-PARAMS, GET-PARAMS and START-SESSION
    methods. It can be "high" (highest security level), "medium-high",
    "medium" (normal security level), "medium-low", or "low" (low
    security level). The default value is platform specific.
 
      security-level = "Security-Level" ":" security-level-string CRLF
      security-level-string = "high" /
            "medium-high" /
            "medium" /
            "medium-low" /
            "low"
 
 
 Num-Min-Verification-Phrases
 
    The Num-Min-Verification-Phrases header field is used to specify the
    minimum number of valid utterances before a positive decision is
    given for verification. The value for this header field is an
    integer, and the default value is 1. The verification resource
    should not announce a
    decision of 'accepted' unless the Num-Min-Verification-Phrases
    utterances are available. The minimum value is 1.
 
      num-min-verification-phrases = "Num-Min-Verification-Phrases" ":"
                                      1*DIGIT CRLF
 
 
 Num-Max-Verification-Phrases
 
    The Num-Max-Verification-Phrases header field is used to specify the
    number of valid utterances required before a decision is forced for
    verification. The verification resource MUST NOT return a decision
    of 'undecided' once Num-Max-Verification-Phrases have been collected
    and used to determine a verification score. The value for this
    header field is an integer, and the minimum value is 1.

      num-max-verification-phrases = "Num-Max-Verification-Phrases" ":"
                                      1*DIGIT CRLF
 
 
 No-Input-Timeout
 
    The No-Input-Timeout header field sets the length of time from the
    start of the verification timers (see START-INPUT-TIMERS) until the
    declaration of a no-input event in the VERIFICATION-COMPLETE server
    event message. The value is in milliseconds. This header field MAY
    occur in VERIFY, SET-PARAMS or GET-PARAMS. The value for this field
    ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific.
    The default value for this field is platform specific.
 
      no-input-timeout = "No-Input-Timeout" ":" 1*DIGIT CRLF
 
 
 Save-Waveform
 
    This header field allows the client to indicate to the verification
    resource that it MUST save the audio stream that was used for
    verification/identification. The verification resource MUST then
    record the audio and make it available to the client in the form of
    a URI returned in the waveform-uri header field in the
    VERIFICATION-COMPLETE event. If there was an error in recording the
    stream or the audio clip is otherwise not available, the
    verification resource MUST return an empty waveform-uri header
    field. The default value for this field is "false". This header
    field MAY appear in the VERIFY method, but NOT in the VERIFY-FROM-
    BUFFER method since it can control whether or not to save the
    waveform for live verification / identification operations only.
 
         save-waveform       =    "Save-Waveform" ":" boolean-value CRLF
 
 
 Waveform-URI
 
    If the save-waveform header field is set to true, the verification
    resource MUST record the incoming audio stream of the verification
    into a file and provide a URI for the client to access it. This
    header MUST be present in the VERIFICATION-COMPLETE event if the
    save-waveform header field is set to true. The URI value of the
    header MUST be NULL if there was some error condition preventing the
    server from recording. Otherwise, the URI generated by the server
    SHOULD be globally unique across the server and all its verification
    sessions. The URI SHOULD be available until the session is torn
    down. Since the save-waveform header field applies only to live
    verification / identification operations, the waveform-uri will only
    be returned in the VERIFICATION-COMPLETE event for live verification
    / identification operations.
 
       waveform-uri = "Waveform-URI" ":" Uri CRLF
 
 
 
 Voiceprint-Exists
 
    This header field is returned in QUERY-VOICEPRINT and DELETE-
    VOICEPRINT responses.  It indicates the status of the voiceprint
    specified in the QUERY-VOICEPRINT method. For the DELETE-VOICEPRINT
    method, this field indicates the status of the voiceprint when the
    method execution started.
 
      voiceprint-exists    = "Voiceprint-Exists" ":" Boolean-value CRLF
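    A hypothetical QUERY-VOICEPRINT exchange (the identifiers and
    message lengths below are illustrative only):

           C->S:MRCP/2.0 153 QUERY-VOICEPRINT 543263
                Channel-Identifier: 32AECB23433802@speakverify
                Repository-URI: file://mediaserver/voiceprintdb/
                Voiceprint-Identifier: johnsmith.vpt

           S->C:MRCP/2.0 73 543263 200 COMPLETE
                Channel-Identifier: 32AECB23433802@speakverify
                Voiceprint-Exists: true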
 
 
 Ver-Buffer-Utterance
 
    This header field is used to indicate that this utterance could be
    later considered for Speaker Verification.  This way, an application
    can buffer utterances while doing regular recognition or
    verification activities and speaker verification can later be
    requested on the buffered utterances.  This header field is OPTIONAL
    in the RECOGNIZE, VERIFY or RECORD method. The default value for
    this field is "false".
 
      ver-buffer-utterance = "Ver-Buffer-Utterance" ":" Boolean-value
                             CRLF
 
 
 Input-Waveform-Uri
 
    This optional header field specifies an audio file that has to be
    processed according to the current verification mode, either to
    train the voiceprint or verify the user.  This enables the client to
    implement the buffering use case also in the case where the
    recognizer and verification resources live in two sessions.  It MAY
    be part of the VERIFY method.
 
      input-waveform-uri    = "Input-Waveform-URI" ":" Uri CRLF
 
 
 Completion-Cause
 
    This header field MUST be part of a VERIFICATION-COMPLETE event from
    the verification resource to the client. It indicates the reason
    behind the VERIFY or VERIFY-FROM-BUFFER method completion. This
    header field MUST be sent in the VERIFY, VERIFY-FROM-BUFFER, and
    QUERY-VOICEPRINT responses if they return with a failure status and
    a COMPLETE state.
 
      completion-cause = "Completion-Cause" ":" 1*DIGIT SP
                         1*VCHAR CRLF
 
      Cause-Code  Cause-Name         Description
        000       success            VERIFY or VERIFY-FROM-BUFFER
                                     request
                                     completed successfully. The verify
                                     decision can be "accepted",
                                     "rejected", or "undecided".
        001       error              VERIFY or VERIFY-FROM-BUFFER
                                     Request terminated prematurely due
                                     to a verification resource or
                                     system error.
        002       no-input-timeout   VERIFY request completed with no
                                     result due to a no-input-timeout.
        003       too-much-speech-timeout
                                     VERIFY request completed with no
                                     result due to too much speech.
        004       speech-too-early   VERIFY request completed with no
                                     result because speech started too
                                     soon.
        005       buffer-empty       VERIFY-FROM-BUFFER request
                                     completed
                                     with no result due to empty buffer.
        006       out-of-sequence    Verification operation failed due
                                     to out-of-sequence method
                                     invocations. For example calling
                                     VERIFY before QUERY-VOICEPRINT.
        007       repository-uri-failure
                                     Failure accessing Repository URI.
        008       repository-uri-missing
                                     Repository-uri is not specified.
        009       voiceprint-id-missing
                                     Voiceprint-Identifier is not
                                     specified.
        010       voiceprint-id-not-exist
                                     Voiceprint-Identifier does not
                                     exist in the voiceprint repository.
 
 Completion-Reason

    This header field MAY be specified in a VERIFICATION-COMPLETE event
    from the verification resource to the client. It contains the reason
    text behind the VERIFY request completion. This field can be used to
    communicate text describing the reason for a failure.
 
      completion-reason   =    "Completion-Reason" ":"
                               quoted-string CRLF
 
 Speech-Complete-Timeout
 
    This header field is the same as the one described for the
    Recognizer resource.
 
 New-Audio-Channel
 
    This header field is the same as the one described for the
    Recognizer resource.
 
 Abort-Verification
 
    This header field MUST be sent in a STOP request to indicate whether
    the current VERIFY method in progress should be aborted, or whether
    it should stop verifying and return the verification results
    computed up to that point. A value of "true" aborts the request and
    discards the results. A value of "false" stops verification and
    returns the verification result in the STOP response.

      abort-verification = "Abort-Verification" ":" Boolean-value CRLF
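    A hypothetical STOP request that stops a VERIFY in progress while
    retaining its results (the channel identifier, request IDs, and
    message lengths are illustrative only):

           C->S:MRCP/2.0 84 STOP 543265
                Channel-Identifier: 32AECB23433802@speakverify
                Abort-Verification: false

           S->C:MRCP/2.0 48 543265 200 COMPLETE
                Channel-Identifier: 32AECB23433802@speakverify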
 
 Start-Input-Timers
 
    This header field MAY be sent as part of a VERIFY request. A value of
    false tells the verification resource to start the VERIFY operation,
    but not to start the no-input timer yet. The verification resource
    should not start the timers until the client sends a START-INPUT-
    TIMERS request to the resource. This is useful in the scenario when
    the verifier and synthesizer resources are not part of the same
    session. Here when a kill-on-barge-in prompt is being played, you
    may want the VERIFY request to be simultaneously active so that it
    can detect and implement kill-on-barge-in. But at the same time you
    don't want the verification resource to start the no-input timers
    until the prompt is finished. The default value is "true".
 
      start-input-timers =     "Start-Input-Timers" ":"
                                    boolean-value CRLF
 
 
 
 11.5.     Verification Result Elements
 
 
    The verification results are returned as XML data in a
    VERIFICATION-COMPLETE event containing an NLSML document with a
    MIME type of application/x-nlsml.  The XML Schema and DTD for this
    portion of the XML data are provided in normative form in the
    Appendix.
    MRCP-specific tag additions to this XML result format described in
    this section MUST be in the MRCPv2 namespace.  In the result
    structure, they must either be prefixed by a namespace prefix
    declared within the result or must be children of an element
    identified as belonging to the respective namespace.  For details on
    how to use XML Namespaces, see [21].  Section 2 of [21] provides
    details on how to declare namespaces and namespace prefixes.
 
    Example 1:
           <?xml version="1.0"?>
           <result grammar="What-Grammar-URI"
             xmlns:mrcp="http://www.ietf.org/mrcp2">
             <mrcp:result-type type="VERIFICATION" />
             <mrcp:verification-result>
               <voiceprint id="johnsmith">
                 <adapted> true </adapted>
                 <incremental>
                   <num-frames> 50 </num-frames>
                   <device> cellular-phone </device>
                   <gender> female </gender>
                   <decision> accepted </decision>
                   <verification-score> 0.98514 </verification-score>
                 </incremental>
                 <cumulative>
                   <num-frames> 1000 </num-frames>
                   <device> cellular-phone </device>
                   <gender> female </gender>
                   <decision> accepted </decision>
                   <verification-score> 0.91725</verification-score>
                 </cumulative>
               </voiceprint>
               <voiceprint id="marysmith">
                 <cumulative>
                   <verification-score> 0.93410 </verification-score>
                 </cumulative>
               </voiceprint>
               <voiceprint id="juniorsmith">
                 <cumulative>
                   <verification-score> 0.74209 </verification-score>
                 </cumulative>
               </voiceprint>
             </mrcp:verification-result>
           </result>
 
    Example 2:
           <?xml version="1.0"?>
            <result grammar="What-Grammar-URI"
              xmlns:mrcp="http://www.ietf.org/mrcp2"
              xmlns:xmpl="http://www.example.org/2003/12/mrcp2">
             <mrcp:result-type type="VERIFICATION" />
             <mrcp:verification-result>
               <voiceprint id="johnsmith">
                 <incremental>
                   <num-frames> 50 </num-frames>
                   <device> cellular-phone </device>
                   <gender> female </gender>
                   <needmoredata> true </needmoredata>
                   <verification-score> 0.88514 </verification-score>
                    <xmpl:raspiness> high </xmpl:raspiness>
                    <xmpl:emotion> sadness </xmpl:emotion>
                 </incremental>
                 <cumulative>
                   <num-frames> 1000 </num-frames>
                   <device> cellular-phone </device>
                   <gender> female </gender>
                   <needmoredata> false </needmoredata>
                   <verification-score> 0.9345 </verification-score>
                 </cumulative>
               </voiceprint>
             </mrcp:verification-result>
           </result>
 
   Verification results XML markup can contain the following
   elements/tags:

      1. Voice-Print
      2. Cumulative
      3. Incremental
      4. Decision
      5. Utterance-Length
      6. Device
      7. Gender
      8. Adapted
      9. Verification-Score
      10. Vendor-Specific-Results
 
 
   1. Voice-Print
   This element in the verification results provides information on how
   the speech data matched a single voiceprint. In the case of
   Identification or Multi-Verification, the returned result data may
   contain more than one such element. Each voice-print element and the
   XML data within it describe how well the speech data matched that
   particular voiceprint. The voice-print elements are ordered
   according to their cumulative verification match scores, with the
   highest score first.
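
   The ordering rule above can be illustrated with a short Python
   sketch. The data and helper function are hypothetical, not part of
   the protocol:

```python
# Hypothetical helper: order voiceprint results by cumulative
# verification score, highest first, as the markup requires.
def order_voiceprints(voiceprints):
    return sorted(voiceprints,
                  key=lambda vp: vp["cumulative_score"], reverse=True)

results = [
    {"id": "juniorsmith", "cumulative_score": 0.74209},
    {"id": "johnsmith", "cumulative_score": 0.91725},
    {"id": "marysmith", "cumulative_score": 0.93410},
]
ordered = [vp["id"] for vp in order_voiceprints(results)]
print(ordered)  # ['marysmith', 'johnsmith', 'juniorsmith']
```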
 
   2. Cumulative
   Within each voice-print element there MUST be a "cumulative" element
   containing the cumulative scores of how well multiple utterances
   matched the voiceprint.
 
 
 
   3. Incremental
   The first voice-print element MAY contain an "incremental" element
   with the incremental scores of how well the last utterance matched
   the voiceprint.
 
 
    4. Decision
    This element is found within the incremental or cumulative element
   within the verification results. Its value indicates the decision
   as determined by verification. It can have the values "accepted",
   "rejected", or "undecided".
 
    5. Utterance-Length
    This element is found within the incremental or cumulative element
    within the verification results. Its value indicates the size of the
    last utterance or the cumulated set of utterances in milliseconds.
 
    6. Device
    This element is found within the incremental or cumulative element
    within the verification results. Its value indicates the apparent
    type of device used by the caller as determined by verification.  It
   can have the values "cellular-phone", "electret-phone",
   "carbon-button-phone", or "unknown".
 
    7. Gender
    This element is found within the incremental or cumulative element
    within the verification results. Its value indicates the apparent
    gender of the speaker as determined by verification. It can have the
    values of "male", "female" or "unknown".
 
   8. Adapted
   This element is found within the voice-print element within the
   verification results. When verification is trying to confirm the
   voiceprint, this element indicates whether the voiceprint has been
   adapted as a consequence of analyzing the source utterances. It is
   not returned during verification training. The value can be "true"
   or "false".
 
    9. Verification-Score
    This element is found within the incremental or cumulative element
    within the verification results. Its value indicates the score of
    the last utterance as determined by verification.
 
   During verification, the higher the score, the more likely it is
   that the speaker is the same person who spoke the voiceprint
   utterances. During training, the higher the score, the more likely
   the speaker is to have spoken all of the analyzed utterances. The
   value is a floating-point value between 0.0 and 1.0. If there are
   no such utterances, the score is 0. Note that although the
   verification score lies between 0.0 and 1.0, it should NOT be
   interpreted as a probability value.
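
   A minimal sketch of how a client might map scores to decisions.
   The accept/reject decision is actually made by the server with
   vendor-specific logic, and the score is NOT a probability; the
   thresholds below are arbitrary illustrative assumptions:

```python
# Illustrative only: real engines use vendor-specific decision
# logic, and the verification-score is not a probability.
def decision_from_score(score, reject_below=0.3, accept_above=0.7):
    if not 0.0 <= score <= 1.0:
        raise ValueError("verification-score must lie in [0.0, 1.0]")
    if score >= accept_above:
        return "accepted"
    if score < reject_below:
        return "rejected"
    return "undecided"

print(decision_from_score(0.91725))  # accepted
print(decision_from_score(0.05465))  # rejected
print(decision_from_score(0.5))      # undecided
```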
 
 
 
   10. Vendor-Specific-Results
   This section describes the method used to express vendor-specific
   results in the XML syntax. Vendor-specific additions to the default
   result format MUST belong to the vendor's own namespace. In the
   result structure, they MUST either be prefixed by a namespace prefix
   declared within the result or be children of an element identified
   as belonging to the respective namespace.
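
   A sketch of how a client might read such a namespaced result, using
   the document from Example 2 above. The `xmpl:` namespace is the
   hypothetical vendor namespace from that example; the parsing code
   below is illustrative, not mandated by the protocol:

```python
import xml.etree.ElementTree as ET

# Vendor-specific elements are addressed with Clark notation
# ("{namespace-uri}local-name") in ElementTree.
doc = """<result xmlns:mrcp="http://www.ietf.org/mrcp2"
                 xmlns:xmpl="http://www.example.org/2003/12/mrcp2">
  <mrcp:verification-result>
    <voiceprint id="johnsmith">
      <incremental>
        <verification-score> 0.88514 </verification-score>
        <xmpl:raspiness> high </xmpl:raspiness>
      </incremental>
    </voiceprint>
  </mrcp:verification-result>
</result>"""

VENDOR = "{http://www.example.org/2003/12/mrcp2}"
inc = ET.fromstring(doc).find(".//voiceprint/incremental")
score = float(inc.find("verification-score").text)
raspiness = inc.find(VENDOR + "raspiness").text.strip()
print(score, raspiness)  # 0.88514 high
```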
 
 
 11.6.     START-SESSION
 
    The START-SESSION method starts a Speaker Verification or
    Identification session.  Execution of this method forces the
   verification resource into a known initial state. If this method is
   called during an ongoing verification session, the previous session
   is implicitly aborted. If this method is invoked while a VERIFY or
   VERIFY-FROM-BUFFER request is active, it fails with a status code of
   402.
 
    Upon completion of the START-SESSION method, the verification
    resource MUST terminate any ongoing verification sessions, and clear
    any voiceprint designation.
 
   A verification session needs to establish the voiceprint repository
   that will be used during the session. This is specified through the
   "Repository-URI" header field, which carries a URI pointing to the
   location of the voiceprint repository.
 
   The session also establishes the voiceprint to be matched or
   trained during the verification session, through the
   Voiceprint-Identifier header field. For an Identification or
   Multi-Verification session, this header field contains a list of
   semicolon-separated voiceprint identifiers.
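
   A client-side sketch of splitting that header value. The helper
   name is hypothetical:

```python
# Split the semicolon-separated Voiceprint-Identifier list used
# for Identification / Multi-Verification sessions.
def parse_voiceprint_identifiers(header_value):
    return [vp.strip() for vp in header_value.split(";") if vp.strip()]

ids = parse_voiceprint_identifiers(
    "johnsmith.voiceprint;marysmith.voiceprint; juniorsmith.voiceprint")
print(ids)
# ['johnsmith.voiceprint', 'marysmith.voiceprint', 'juniorsmith.voiceprint']
```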
 
   The "Adapt-Model" header field may also be present in the
   START-SESSION method to indicate whether or not to adapt the
   voiceprint with data collected during the session (if the voiceprint
   verification phase succeeds). By default, the voiceprint model is
   NOT adapted with data from a verification session.
 
   The START-SESSION method MUST also establish whether the session is
   to train or to verify a voiceprint. Hence, the "Verification-Mode"
   header field MUST be sent in this method, with a value of either
   "train" or "verify".
 
   Before a verification/identification session is started, only the
   VERIFY-ROLLBACK and generic SET-PARAMS and GET-PARAMS operations can
   be performed. The server SHOULD return 402 (Method not valid in this
   state) for all other operations, such as VERIFY or QUERY-VOICEPRINT.
 
 
 
 
   Only a single session can be active at a time.
 
 Example:
    C->S:  MRCP/2.0 123 START-SESSION 314161
           Channel-Identifier: 32AECB23433801@speakverify
           Repository-URI: http://www.example.com/voiceprintdbase/
           Voiceprint-Identifier: johnsmith.voiceprint
           Verification-Mode: verify
           Adapt-Model: true
 
    S->C:  MRCP/2.0 49 314161 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
 
 11.7.     END-SESSION
 
    The END-SESSION method terminates an ongoing verification session
    and releases the verification voiceprint model in one of three ways:
    a. aborting - the voiceprint adaptation or creation may be aborted
       so that the voiceprint remains unchanged (or is not created).
    b. committing - when terminating a voiceprint training session, the
       new voiceprint is committed to the repository.
    c. adapting - an existing voiceprint is modified using a successful
       verification.
 
   The "Abort-Model" header field may be included in the END-SESSION
   request to control whether or not to abort any pending changes to
   the voiceprint. The default behavior is to commit (not abort) any
   pending changes to the designated voiceprint.
 
    The END-SESSION method may be safely executed multiple times without
    first executing the START-SESSION method. Any additional executions
    of this method without an intervening use of the START-SESSION
    method have no effect on the system.
 
 
 Example:
   This example assumes there is a training session or a verification
   session in progress.
 
    C->S:  MRCP/2.0 123 END-SESSION 314174
           Channel-Identifier: 32AECB23433801@speakverify
           Abort-Model: true
 
    S->C:  MRCP/2.0 49 314174 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
 
 11.8.     QUERY-VOICEPRINT
 
   The QUERY-VOICEPRINT method is used to get the status of a
   particular voiceprint. It can be used to find out whether a
   voiceprint or repository exists and whether the voiceprint is
   trained.
 
 
 
    The response to the QUERY-VOICEPRINT method request will contain an
    indication of the status of the designated voiceprint in the
    "Voiceprint-Exists" header field, allowing the client to determine
    whether to use the current voiceprint for verification, train a new
    voiceprint, or choose a different voiceprint.
 
   A voiceprint is completely specified by providing a repository
   location and a voiceprint identifier. The particular voiceprint or
   identity within the repository is specified by a string identifier
   that is unique within the repository. The "Voiceprint-Identifier"
   header field MUST carry this unique voiceprint identifier within a
   given repository.
 
 
 Example 1:
    This example assumes a verification session is in progress and the
    voiceprint exists in the voiceprint repository.
 
    C->S:  MRCP/2.0 123 QUERY-VOICEPRINT 314168
           Channel-Identifier: 32AECB23433801@speakverify
           Repository-URI: http://www.example.com/voice-prints/
           Voiceprint-Identifier: johnsmith.voiceprint
 
    S->C:  MRCP/2.0 123 314168 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
           Repository-URI: http://www.example.com/voice-prints/
           Voiceprint-Identifier: johnsmith.voiceprint
           Voiceprint-Exists: true
 
 Example 2:
    This example assumes that the URI provided in the 'Repository-URI'
    header field is a bad URI.
 
    C->S:  MRCP/2.0 123 QUERY-VOICEPRINT 314168
           Channel-Identifier: 32AECB23433801@speakverify
           Repository-URI: http://www.example.com/bad-uri/
           Voiceprint-Identifier: johnsmith.voiceprint
 
    S->C:  MRCP/2.0 123 314168 405 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
           Repository-URI: http://www.example.com/bad-uri/
           Voiceprint-Identifier: johnsmith.voiceprint
           Completion-Cause: 007 repository-uri-failure
 
 
 11.9.     DELETE-VOICEPRINT
 
   The DELETE-VOICEPRINT method removes a voiceprint from a
   verification or speaker identification repository. This method MUST
   carry the Repository-URI and Voiceprint-Identifier header fields.
 
 
 
   If the voiceprint record does not exist, the server silently
   ignores the DELETE-VOICEPRINT request and still returns a 200
   status code.
 
 Example:
    This example demonstrates a message to remove a specific voiceprint.
 
    C->S:  MRCP/2.0 123 DELETE-VOICEPRINT 314168
           Channel-Identifier: 32AECB23433801@speakverify
           Repository-URI: http://www.example.com/voice-prints/
           Voiceprint-Identifier: johnsmith.voiceprint
 
    S->C:  MRCP/2.0 49 314168 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
 
 11.10.    VERIFY
 
   The VERIFY method is used to send the utterance's audio stream to
   the verification resource, which then processes it according to the
   current Verification-Mode, either to train/adapt the voiceprint or
   to verify/identify the user. If the voiceprint is new or was deleted
   by a previous DELETE-VOICEPRINT method, the VERIFY method trains the
   voiceprint. If the voiceprint already exists, it is adapted, not
   re-trained, by the VERIFY command.
 
   When both a recognizer and a verification resource share the same
   session, the VERIFY method MUST be called prior to calling the
   RECOGNIZE method on the recognizer resource. This lets the server
   know that verification must be enabled for the subsequent call to
   RECOGNIZE.
 
 Example:
    C->S:  MRCP/2.0 49 VERIFY 543260
           Channel-Identifier: 32AECB23433801@speakverify
 
    S->C:  MRCP/2.0 49 543260 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speakverify
 
   When the VERIFY request completes, the MRCP server sends a
   VERIFICATION-COMPLETE event to the client.
 
 
 11.11.    VERIFY-FROM-BUFFER
 
    The VERIFY-FROM-BUFFER method begins an ongoing evaluation of the
    currently buffered audio against the voiceprint. Only one VERIFY or
    VERIFY-FROM-BUFFER method can be active at any one time.
 
    The buffered audio is not consumed by this evaluation operation and
    thus VERIFY-FROM-BUFFER may be called multiple times using different
    voiceprints.
 
 
 
   For the VERIFY-FROM-BUFFER method, the server can optionally return
   an "IN-PROGRESS" response followed by the "VERIFICATION-COMPLETE"
   event.

   When the VERIFY-FROM-BUFFER method is invoked and the verification
   buffer is in use, the server MUST return an IN-PROGRESS response and
   wait until the buffer is available for verify processing again. The
   verification buffer is owned by the verification resource but shares
   write access with other input resources on the same session, such as
   recognition and recording resources. Hence, it is considered to be
   in use if there is a read or write operation, such as a RECORD or
   RECOGNIZE with the ver-buffer-utterance header field set to "true",
   on a resource that shares this buffer. Note that if the RECORD or
   RECOGNIZE command returns with a failure cause code, the
   VERIFY-FROM-BUFFER command waiting to process that buffer MUST also
   fail, with a Completion-Cause of 005 (buffer-empty).
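
   The buffer-in-use rule above can be sketched in Python. The data
   model (a list of active operations per session) is an assumption
   about server internals, not something the protocol defines:

```python
# The verification buffer counts as "in use" while any sharing
# resource has an active read or write operation on it -- e.g., a
# RECORD, RECOGNIZE, or VERIFY issued with ver-buffer-utterance set
# to "true", or an active VERIFY-FROM-BUFFER.
def buffer_in_use(active_operations):
    """active_operations: (method, ver_buffer_utterance) pairs."""
    return any(
        method == "VERIFY-FROM-BUFFER"
        or (method in ("RECORD", "RECOGNIZE", "VERIFY") and ver_buffer)
        for method, ver_buffer in active_operations)

print(buffer_in_use([("RECOGNIZE", True)]))  # True: reply IN-PROGRESS, wait
print(buffer_in_use([("SPEAK", False)]))     # False: buffer is available
```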
 
 Example:
   This example illustrates the usage of some buffering methods. In
   this scenario, the client first performs a live verification, but
   the utterance is rejected. In the meantime, the utterance is also
   saved to the audio buffer. Then, another voiceprint is used to
   verify against the audio buffer, and this time the utterance is
   accepted. Here, we assume both 'num-min-verification-phrases' and
   'num-max-verification-phrases' are 1.
 
    C->S:  MRCP/2.0 123 START-SESSION 314161
           Channel-Identifier: 32AECB23433801@speakverify
           Adapt-Model: true
           Repository-URI: http://www.example.com/voice-prints
           Voiceprint-Identifier: johnsmith.voiceprint
 
    S->C:  MRCP/2.0 49 314161 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
 
    C->S:  MRCP/2.0 123 VERIFY 314162
           Channel-Identifier: 32AECB23433801@speakverify
           Ver-buffer-utterance: true
 
    S->C:  MRCP/2.0 49 314162 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speakverify
 
    S->C:  MRCP/2.0 123 VERIFICATION-COMPLETE 314162 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
           Completion-Cause: 000 success
           Content-Type: application/x-nlsml
           Content-Length: 123
 
           <?xml version="1.0"?>
           <result grammar="What-Grammar-URI">
           <extensions>
 
 
              <result-type type="VERIFICATION" />
              <verification-result>
                <voiceprint id="johnsmith">
                <incremental>
                     <num-frames> 50 </num-frames>
                     <device> cellular-phone </device>
                     <gender> female </gender>
                     <decision> rejected </decision>
                     <verification-score> 0.05465 </verification-score>
                </incremental>
                <cumulative>
                     <num-frames> 50 </num-frames>
                     <device> cellular-phone </device>
                     <gender> female </gender>
                     <decision> rejected </decision>
                     <verification-score> 0.09664 </verification-score>
                </cumulative>
                </voiceprint>
              </verification-result>
           </extensions>
           </result>
 
    C->S:  MRCP/2.0 123 QUERY-VOICEPRINT 314163
           Channel-Identifier: 32AECB23433801@speakverify
           Repository-URI: http://www.example.com/voice-prints/
           Voiceprint-Identifier: johnsmith.voiceprint

    S->C:  MRCP/2.0 123 314163 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
           Repository-URI: http://www.example.com/voice-prints/
           Voiceprint-Identifier: johnsmith.voiceprint
           Voiceprint-Exists: true
 
    C->S:  MRCP/2.0 123 START-SESSION 314164
           Channel-Identifier: 32AECB23433801@speakverify
           Adapt-Model: true
           Repository-URI: http://www.example.com/voice-prints
           Voiceprint-Identifier: marysmith.voiceprint
 
    S->C:  MRCP/2.0 49 314164 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
 
    C->S:  MRCP/2.0 123 VERIFY-FROM-BUFFER 314165
           Channel-Identifier: 32AECB23433801@speakverify
           Verification-Mode: verify
 
    S->C:  MRCP/2.0 49 314165 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speakverify
 
    S->C:  MRCP/2.0 123 VERIFICATION-COMPLETE 314165 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
 
 
           Completion-Cause: 000 success
           Content-Type: application/x-nlsml
           Content-Length: 123
 
           <?xml version="1.0"?>
           <result grammar="What-Grammar-URI">
           <extensions>
              <result-type type="VERIFICATION" />
              <verification-result>
                <voiceprint id="marysmith">
                <incremental>
                     <num-frames> 50 </num-frames>
                     <device> cellular-phone </device>
                     <gender> female </gender>
                     <decision> accepted </decision>
                     <verification-score> 0.98 </verification-score>
                </incremental>
                <cumulative>
                     <num-frames> 50 </num-frames>
                     <device> cellular-phone </device>
                     <gender> female </gender>
                     <decision> accepted </decision>
                     <verification-score> 0.85 </verification-score>
                </cumulative>
                </voiceprint>
              </verification-result>
           </extensions>
           </result>
 
 
    C->S:  MRCP/2.0 49 END-SESSION 314166
           Channel-Identifier: 32AECB23433801@speakverify
 
    S->C:  MRCP/2.0 49 314166 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
 
 11.12.    VERIFY-ROLLBACK
 
   The VERIFY-ROLLBACK method discards the last buffered or live
   utterance, whether the mode is "train" or "verify". The client
   should invoke this method when the caller provides undesirable
   input such as non-speech noise, side-speech, out-of-grammar
   utterances, commands, etc. Note that this method does not provide a
   stack of rollback states. Executing VERIFY-ROLLBACK twice in
   succession without an intervening verification operation has no
   effect the second time.
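
   The single-level rollback semantics can be sketched as follows. The
   class and its internal state are assumptions about an
   implementation, not protocol-defined:

```python
# Only the most recent utterance can be discarded, and there is no
# rollback stack: a second rollback without an intervening utterance
# has no effect.
class VerificationState:
    def __init__(self):
        self.utterances = []
        self._can_rollback = False

    def add_utterance(self, utterance):
        self.utterances.append(utterance)
        self._can_rollback = True

    def rollback(self):
        if self._can_rollback:
            self.utterances.pop()
        self._can_rollback = False  # no stack of rollback states

state = VerificationState()
state.add_utterance("utt-1")
state.add_utterance("utt-2")
state.rollback()
state.rollback()  # no intervening utterance: no effect
print(state.utterances)  # ['utt-1']
```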
 
 Example:
    C->S:  MRCP/2.0 49 VERIFY-ROLLBACK 314165
           Channel-Identifier: 32AECB23433801@speakverify
 
 
 
    S->C:  MRCP/2.0 49 314165 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
 
 11.13.    STOP
 
   The STOP method, sent from the client to the server, tells the
   verification resource to stop the VERIFY or VERIFY-FROM-BUFFER
   request if one is active. If such a request is active and the STOP
   request successfully terminated it, then the response contains an
   Active-Request-Id-List header field carrying the request-id of the
   VERIFY or VERIFY-FROM-BUFFER request that was terminated. In this
   case, no VERIFICATION-COMPLETE event is sent for the terminated
   request. If no verify request was active, the response MUST NOT
   contain an Active-Request-Id-List header field. Either way, the
   response MUST contain a status of 200 (Success).
 
   The STOP method can carry an "Abort-Verification" header field,
   which specifies whether the verification result computed up to that
   point should be discarded or returned. If this header field is not
   present or if its value is "true", the verification result is
   discarded, and the STOP response does not contain any result data.
   If the field is present and its value is "false", the STOP response
   MUST contain a "Completion-Cause" header field and carry the
   verification result data in its body.
 
   An aborted VERIFY request is automatically rolled back and does not
   affect the cumulative score. A VERIFY request that was stopped with
   no "Abort-Verification" header field, or with the
   "Abort-Verification" header field set to "false", affects the
   cumulative scores and needs to be explicitly rolled back if it
   should not be considered in the cumulative scores.
 
 Example:
    This example assumes a voiceprint identity has already been
    established.
 
    C->S:  MRCP/2.0 123 VERIFY 314177
           Channel-Identifier: 32AECB23433801@speakverify
           Verification-Mode: verify
 
    S->C:  MRCP/2.0 49 314177 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speakverify
 
    C->S:  MRCP/2.0 49 STOP 314178
           Channel-Identifier: 32AECB23433801@speakverify
 
    S->C:  MRCP/2.0 123 314178 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
           Active-Request-Id-List: 314177
 
 
 
 
 11.14.    START-INPUT-TIMERS
 
    This request is sent from the client to the verification resource to
    start the no-input timer, usually once the audio prompts to the
    caller have played to completion.
 
 Example:
    C->S:  MRCP/2.0 49 START-INPUT-TIMERS 543260
           Channel-Identifier: 32AECB23433801@speakverify
 
    S->C:  MRCP/2.0 49 543260 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
 
 11.15.    VERIFICATION-COMPLETE
 
    The VERIFICATION-COMPLETE event follows a call to VERIFY or VERIFY-
    FROM-BUFFER and is used to communicate to the client the
    verification results.  This event will contain only verification
    results.
 
    Example:
    S->C:  MRCP/2.0 123 VERIFICATION-COMPLETE 543259 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
           Completion-Cause: 000 success
           Content-Type: application/x-nlsml
           Content-Length: 123
 
           <?xml version="1.0"?>
           <result grammar="What-Grammar-URI">
           <extensions>
              <result-type type="VERIFICATION" />
              <verification-result>
                <voiceprint id="johnsmith">
                <incremental>
                     <num-frames> 50 </num-frames>
                     <device> cellular-phone </device>
                     <gender> female </gender>
                     <decision> accepted </decision>
                     <verification-score> 0.85 </verification-score>
                </incremental>
                <cumulative>
                     <num-frames> 150 </num-frames>
                     <device> cellular-phone </device>
                     <gender> female </gender>
                     <decision> accepted </decision>
                     <verification-score> 0.75 </verification-score>
                </cumulative>
                </voiceprint>
              </verification-result>
           </extensions>
           </result>
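
   A client-side sketch of handling a VERIFICATION-COMPLETE body in
   the non-namespaced form shown above: pull the cumulative decision
   and score for the designated voiceprint. The parsing approach is
   illustrative, not mandated:

```python
import xml.etree.ElementTree as ET

body = """<result grammar="What-Grammar-URI">
<extensions>
   <result-type type="VERIFICATION" />
   <verification-result>
     <voiceprint id="johnsmith">
     <cumulative>
          <decision> accepted </decision>
          <verification-score> 0.75 </verification-score>
     </cumulative>
     </voiceprint>
   </verification-result>
</extensions>
</result>"""

# The cumulative element carries the decision over all utterances so far.
cumulative = ET.fromstring(body).find(".//voiceprint/cumulative")
decision = cumulative.find("decision").text.strip()
score = float(cumulative.find("verification-score").text)
print(decision, score)  # accepted 0.75
```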
 
 
 
 11.16.    START-OF-SPEECH
 
   The START-OF-SPEECH event is returned from the server to the client
   once the server has detected speech. This event is always returned
   by the verification resource when speech has been detected,
   irrespective of whether or not the recognizer and verification
   resources share the same session.
 
 
    Example:
    S->C:  MRCP/2.0 49 START-OF-SPEECH 543259 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speakverify
 
 11.17.    CLEAR-BUFFER
 
   The CLEAR-BUFFER method can be used to clear the verification
   buffer. This buffer stores speech during recognition, record, or
   verification operations, which may later be used for verification
   from the buffer. As noted before, the verification buffer is shared
   with other input resources such as recognizers and recorders.
   Hence, a CLEAR-BUFFER request fails if the verification buffer is
   in use. This happens when any one of the input resources that
   shares this buffer has an active read or write operation, such as a
   RECORD, RECOGNIZE, or VERIFY with the ver-buffer-utterance header
   field set to "true".
 
    Example:
    C->S:  MRCP/2.0 49 CLEAR-BUFFER 543260
           Channel-Identifier: 32AECB23433801@speakverify
 
    S->C:  MRCP/2.0 49 543260 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
 
 11.18.    GET-INTERMEDIATE-RESULT
 
   The GET-INTERMEDIATE-RESULT method can be used to poll for
   intermediate results of a verification request that is in progress.
   It does not change the state of the resource; it simply collects
   the verification results computed up to that point and returns them
   in the method response. The response to this method contains only
   verification results. The method response MUST NOT contain a
   Completion-Cause header field, as the request is not complete yet.
   If the resource does not have a verification in progress, the
   response has a 402 failure code and no result in the body.
 
    Example:
    C->S:  MRCP/2.0 49 GET-INTERMEDIATE-RESULT 543260
           Channel-Identifier: 32AECB23433801@speakverify
 
    S->C:  MRCP/2.0 49 543260 200 COMPLETE
           Channel-Identifier: 32AECB23433801@speakverify
 
 
           Content-Type: application/x-nlsml
           Content-Length: 123
 
           <?xml version="1.0"?>
           <result grammar="What-Grammar-URI">
           <extensions>
              <result-type type="VERIFICATION" />
              <verification-result>
                <voiceprint id="marysmith">
                <incremental>
                     <num-frames> 50 </num-frames>
                     <device> cellular-phone </device>
                     <gender> female </gender>
                     <decision> accepted </decision>
                     <verification-score> 0.85 </verification-score>
                </incremental>
                <cumulative>
                     <num-frames> 150 </num-frames>
                     <device> cellular-phone </device>
                     <gender> female </gender>
                     <decision> accepted </decision>
                     <verification-score> 0.65 </verification-score>
                </cumulative>
                </voiceprint>
              </verification-result>
           </extensions>
           </result>
 
 12.  Security Considerations
 
   The MRCPv2 protocol may carry sensitive information such as account
   numbers and passwords, and it may use media for identification and
   verification purposes. For this reason, it is important that the
   client have the option of secure communication with the server for
   both the control messages and the media, though the client is not
   required to use it. This is achieved by imposing the following
   requirements on MRCPv2 server implementations. All MRCPv2 servers
   MUST implement digest authentication (sip:) and SHOULD implement
   sips: in their SIP implementations. All MRCPv2 servers MUST support
   TLS for the transport of control messages between the client and
   the server. All MRCPv2 servers MUST support the Secure Real-time
   Transport Protocol (SRTP) as an option to send and receive media.
 
 13.  Examples
 
 13.1.     Message Flow
 
    The following is an example of a typical MRCPv2 session of speech
    synthesis and recognition between a client and a server.
 
 
 
 
   Opening a session to the MRCPv2 server. This exchange does not
   allocate a resource or set up media. It simply establishes a SIP
   session with the MRCPv2 server.
 
    C->S:
           INVITE sip:mresources@mediaserver.com SIP/2.0
           Max-Forwards: 6
           To: MediaServer <sip:mresources@mediaserver.com>
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314159 INVITE
           Contact: <sip:sarvi@cisco.com>
           Content-Type: application/sdp
           Content-Length: 142
 
           v=0
           o=sarvi 2890844526 2890842807 IN IP4 126.16.64.4
           s=SDP Seminar
           i=A session for processing media
           c=IN IP4 224.2.17.12/127
 
    S->C:
           SIP/2.0 200 OK
           To: MediaServer <sip:mresources@mediaserver.com>
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314159 INVITE
           Contact: <sip:sarvi@cisco.com>
           Content-Type: application/sdp
           Content-Length: 131
 
           v=0
           o=sarvi 2890844526 2890842807 IN IP4 126.16.64.4
           s=SDP Seminar
           i=A session for processing media
           c=IN IP4 224.2.17.12/127
 
    C->S:
           ACK sip:mrcp@mediaserver.com SIP/2.0
           Max-Forwards: 6
           To: MediaServer <sip:mrcp@mediaserver.com>;tag=a6c85cf
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314160 ACK
           Content-Length: 0
 
   The client requests the server to create a synthesizer resource
   control channel to do speech synthesis. This also adds a media pipe
   to carry the generated speech. Note that in this example, the
   client requests the reuse of an existing MRCPv2 SCTP pipe between
   the client and the server.
 
 
 
    C->S:
           INVITE sip:mresources@mediaserver.com SIP/2.0
           Max-Forwards: 6
           To: MediaServer <sip:mresources@mediaserver.com>
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314161 INVITE
           Contact: <sip:sarvi@cisco.com>
           Content-Type: application/sdp
           Content-Length: 142
 
           v=0
           o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4
           s=SDP Seminar
           i=A session for processing media
           c=IN IP4 224.2.17.12/127
           m=control 9 SCTP application/mrcpv2
           a=setup:active
           a=connection:existing
           a=resource:speechsynth
           a=cmid:1
           m=audio 49170 RTP/AVP 0
           a=rtpmap:0 pcmu/8000
           a=recvonly
           a=mid:1
 
 
    S->C:
           SIP/2.0 200 OK
           To: MediaServer <sip:mresources@mediaserver.com>;tag=a6c85cf
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314161 INVITE
           Contact: <sip:mresources@mediaserver.com>
           Content-Type: application/sdp
           Content-Length: 131
 
           v=0
           o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4
           s=SDP Seminar
           i=A session for processing media
           c=IN IP4 224.2.17.12/127
           m=control 32416 SCTP application/mrcpv2
           a=setup:passive
           a=connection:existing
           a=channel:32AECB23433802@speechsynth
           a=cmid:1
           m=audio 48260 RTP/AVP 0
           a=rtpmap:0 pcmu/8000
           a=sendonly
           a=mid:1
 
    C->S:
           ACK sip:mrcp@mediaserver.com SIP/2.0
           Max-Forwards: 6
           To: MediaServer <sip:mrcp@mediaserver.com>;tag=a6c85cf
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314162 ACK
           Content-Length: 0
 
    This exchange allocates an additional resource control channel for a
    recognizer. Since the recognizer needs to receive an audio stream to
    perform recognition, this interaction also adds a second audio
    stream, from the client to the server, to carry the user's speech.
 
    C->S:
           INVITE sip:mresources@mediaserver.com SIP/2.0
           Max-Forwards: 6
           To: MediaServer <sip:mresources@mediaserver.com>
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314163 INVITE
           Contact: <sip:sarvi@cisco.com>
           Content-Type: application/sdp
           Content-Length: 142
 
           v=0
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
           s=SDP Seminar
           i=A session for processing media
           c=IN IP4 224.2.17.12/127
           m=control 9 SCTP application/mrcpv2
           a=setup:active
           a=connection:existing
           a=resource:speechsynth
           a=cmid:1
           m=audio 49170 RTP/AVP 0
           a=rtpmap:0 pcmu/8000
           a=recvonly
           a=mid:1
           m=control 9 SCTP application/mrcpv2
           a=setup:active
           a=connection:existing
           a=resource:speechrecog
           a=cmid:2
           m=audio 49180 RTP/AVP 0 96
           a=rtpmap:0 pcmu/8000
           a=rtpmap:96 telephone-event/8000
           a=fmtp:96 0-15
           a=sendonly
           a=mid:2
 
 
    S->C:
           SIP/2.0 200 OK
           To: MediaServer <sip:mresources@mediaserver.com>;tag=a6c85cf
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314163 INVITE
           Contact: <sip:mresources@mediaserver.com>
           Content-Type: application/sdp
           Content-Length: 131
 
           v=0
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4
           s=SDP Seminar
           i=A session for processing media
           c=IN IP4 224.2.17.12/127
           m=control 32416 SCTP application/mrcpv2
           a=channel:32AECB23433802@speechsynth
           a=cmid:1
           m=audio 48260 RTP/AVP 0
           a=rtpmap:0 pcmu/8000
           a=sendonly
           a=mid:1
           m=control 32416 SCTP application/mrcpv2
           a=channel:32AECB23433801@speechrecog
           a=cmid:2
           m=audio 48260 RTP/AVP 0 96
           a=rtpmap:0 pcmu/8000
           a=rtpmap:96 telephone-event/8000
           a=fmtp:96 0-15
           a=recvonly
           a=mid:2
 
    C->S:
           ACK sip:mrcp@mediaserver.com SIP/2.0
           Max-Forwards: 6
           To: MediaServer <sip:mrcp@mediaserver.com>;tag=a6c85cf
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774
           Call-ID: a84b4c76e66710
           CSeq: 314164 ACK
           Content-Length: 0
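
    The a=cmid attribute on each control m-line ties it to the audio
    m-line carrying the matching a=mid value. That pairing can be
    resolved mechanically; the following Python sketch (not part of this
    specification; the helper name is illustrative) walks an SDP answer
    and associates each control channel with its audio stream.

```python
def pair_control_and_audio(sdp):
    # Group each m-line with the a= attributes that follow it.
    blocks, current = [], None
    for line in sdp.splitlines():
        if line.startswith("m="):
            current = {"m": line, "attrs": {}}
            blocks.append(current)
        elif line.startswith("a=") and current is not None:
            key, _, value = line[2:].partition(":")
            current["attrs"][key] = value
    # Index audio streams by their mid, then match each control
    # channel's cmid against that index.
    audio_by_mid = {b["attrs"]["mid"]: b for b in blocks
                    if b["m"].startswith("m=audio")}
    return [(b["attrs"].get("channel") or b["attrs"].get("resource"),
             audio_by_mid[b["attrs"]["cmid"]]["m"])
            for b in blocks if b["m"].startswith("m=control")]

answer = """m=control 32416 SCTP application/mrcpv2
a=channel:32AECB23433802@speechsynth
a=cmid:1
m=audio 48260 RTP/AVP 0
a=rtpmap:0 pcmu/8000
a=sendonly
a=mid:1"""

print(pair_control_and_audio(answer))
# [('32AECB23433802@speechsynth', 'm=audio 48260 RTP/AVP 0')]
```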
 
    An MRCPv2 SPEAK request initiates speech synthesis.
 
    C->S:  MRCP/2.0 386 SPEAK 543257
           Channel-Identifier: 32AECB23433802@speechsynth
           Kill-On-Barge-In: false
           Voice-gender: neutral
           Voice-category: teenager
           Prosody-volume: medium
           Content-Type: application/synthesis+ssml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <speak>
           <paragraph>
                    <sentence>You have 4 new messages.</sentence>
                    <sentence>The first is from <say-as
                    type="name">Stephanie Williams</say-as> <mark
           name="Stephanie"/>
                    and arrived at <break/>
                    <say-as type="time">3:45pm</say-as>.</sentence>
 
                    <sentence>The subject is <prosody
                    rate="-20%">ski trip</prosody></sentence>
           </paragraph>
           </speak>
 
    S->C:  MRCP/2.0 49 543257 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
 
    The synthesizer encounters the marker in the text to be spoken and
    informs the client of the event.
 
    S->C:  MRCP/2.0 46 SPEECH-MARKER 543257 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
           Speech-Marker: Stephanie
 
    The synthesizer finishes with the SPEAK request.
 
    S->C:  MRCP/2.0 48 SPEAK-COMPLETE 543257 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
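
    The second token of each MRCPv2 start-line above is the message-
    length, which counts every byte of the message, including the start-
    line itself. Because the field's own digits contribute to the total,
    a sender can iterate until the value is stable. A Python sketch of
    one such encoder (an illustrative helper, not a normative
    algorithm):

```python
def build_mrcp_request(method, request_id, headers, body=b""):
    # Serialize headers; a blank line separates them from any body.
    CRLF = "\r\n"
    rest = "".join(f"{name}: {value}{CRLF}" for name, value in headers)
    rest = (rest + CRLF).encode("utf-8") + body

    # The message-length field counts the whole message, including the
    # start-line that carries it, so iterate until the value is stable.
    length = 0
    while True:
        start = f"MRCP/2.0 {length} {method} {request_id}{CRLF}".encode("utf-8")
        total = len(start) + len(rest)
        if total == length:
            return start + rest
        length = total

msg = build_mrcp_request("SPEAK", 543257,
                         [("Channel-Identifier", "32AECB23433802@speechsynth")])
assert int(msg.split(b" ")[1]) == len(msg)   # declared length matches
```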
 
    The client issues a RECOGNIZE request to the recognizer to listen
    for the caller's choice.
 
    C->S:  MRCP/2.0 343 RECOGNIZE 543258
           Channel-Identifier: 32AECB23433801@speechrecog
           Content-Type: application/srgs+xml
           Content-Length: 104
 
           <?xml version="1.0"?>
 
           <!-- the default grammar language is US English -->
           <grammar xml:lang="en-US" version="1.0">
 
           <!-- single language attachment to a rule expansion -->
                <rule id="request">
                    Can I speak to
                    <one-of xml:lang="fr-CA">
                             <item>Michel Tremblay</item>
                             <item>Andre Roy</item>
                    </one-of>
                </rule>
 
           </grammar>
 
    S->C:  MRCP/2.0 49 543258 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speechrecog
 
    The client issues the next MRCPv2 SPEAK request. When playing a
    prompt to the user with kill-on-barge-in and asking for input, it is
    generally RECOMMENDED that the client issue the RECOGNIZE request
    ahead of the SPEAK request for optimum performance and user
    experience. This guarantees that the recognizer is online before the
    prompt starts playing and that the beginning of the user's speech is
    not truncated (especially important for power users who barge in
    early).
 
    C->S:  MRCP/2.0 289 SPEAK 543259
           Channel-Identifier: 32AECB23433802@speechsynth
           Kill-On-Barge-In: true
           Content-Type: application/synthesis+ssml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <speak>
           <paragraph>
                    <sentence>Welcome to ABC corporation.</sentence>
                     <sentence>Who would you like to talk to?</sentence>
           </paragraph>
           </speak>
 
    S->C:  MRCP/2.0 52 543259 200 IN-PROGRESS
           Channel-Identifier: 32AECB23433802@speechsynth
 
    Since the last SPEAK request had Kill-On-Barge-In set to "true", the
    speech synthesizer is interrupted when the user starts speaking, and
    the client is notified.

    Since the recognizer and synthesizer resources are in the same
    session, they may have cooperated with each other to deliver the
    kill-on-barge-in behavior. Whether or not the two resources are in
    the same session, the recognizer MUST generate the START-OF-SPEECH
    event to the client.

    The client MUST then issue a BARGE-IN-OCCURRED method to the
    synthesizer resource (if a SPEAK request was active). If kill-on-
    barge-in was enabled on the current SPEAK request, the synthesizer
    interrupts it and issues a SPEAK-COMPLETE event to the client.
 
 
    The Completion-Cause code differentiates whether this was a normal
    completion or a kill-on-barge-in interruption.
 
    S->C:  MRCP/2.0 49 START-OF-SPEECH 543258 IN-PROGRESS
           Channel-Identifier: 32AECB23433801@speechrecog
           Proxy-Sync-Id: 987654321
 
 
    C->S:  MRCP/2.0 69 BARGE-IN-OCCURRED 543260
           Channel-Identifier: 32AECB23433802@speechsynth
           Proxy-Sync-Id: 987654321

    S->C:  MRCP/2.0 72 543260 200 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Active-Request-Id-List: 543259

    S->C:  MRCP/2.0 73 SPEAK-COMPLETE 543259 COMPLETE
           Channel-Identifier: 32AECB23433802@speechsynth
           Completion-Cause: 001 barge-in
 
    The recognition resource matched the spoken stream to a grammar and
    generated results. The result of the recognition is returned by the
    server as part of the RECOGNITION-COMPLETE event.
 
    S->C:  MRCP/2.0 412 RECOGNITION-COMPLETE 543258 COMPLETE
           Channel-Identifier: 32AECB23433801@speechrecog
           Completion-Cause: 000 success
           Waveform-URI: http://web.media.com/session123/audio.wav
           Content-Type: application/x-nlsml
           Content-Length: 104
 
           <?xml version="1.0"?>
           <result x-model="http://IdentityModel"
             xmlns:xf="http://www.w3.org/2000/xforms"
             grammar="session:request1@form-level.store">
               <interpretation>
                   <xf:instance name="Person">
                       <Person>
                           <Name> Andre Roy </Name>
                       </Person>
                   </xf:instance>
                    <input>can I speak to Andre Roy</input>
               </interpretation>
           </result>
 
    When the client wants to tear down the whole session and all its
    resources, it MUST issue a SIP BYE to close the SIP session. This
    will de-allocate all the control channels and resources allocated
    under the session.
 
    C->S:  BYE sip:mrcp@mediaserver.com SIP/2.0
           Max-Forwards: 6
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774
           To: MediaServer <sip:mrcp@mediaserver.com>;tag=a6c85cf
           Call-ID: a84b4c76e66710
           CSeq: 231 BYE
           Content-Length: 0
 
 13.2.     Recognition Result Examples
 
   Simple ASR Ambiguity
 
    System: To which city will you be traveling?
    User: I want to go to Pittsburgh.
 
    <result grammar="http://flight">
      <interpretation confidence="60">
         <instance>
            <airline>
               <to_city>Pittsburgh</to_city>
            </airline>
         </instance>
         <input mode="speech">
            I want to go to Pittsburgh
         </input>
      </interpretation>
      <interpretation confidence="40">
         <instance>
            <airline>
               <to_city>Stockholm</to_city>
            </airline>
         </instance>
         <input>I want to go to Stockholm</input>
      </interpretation>
    </result>
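
    A client receiving an N-best result like the one above will
    typically select the interpretation with the highest confidence. A
    minimal Python sketch (assuming a well-formed NLSML document; the
    helper name is illustrative):

```python
import xml.etree.ElementTree as ET

def best_interpretation(nlsml):
    # Return the <interpretation> element with the highest confidence.
    root = ET.fromstring(nlsml)
    return max(root.findall("interpretation"),
               key=lambda i: float(i.get("confidence", "0")))

doc = """<result grammar="http://flight">
  <interpretation confidence="60">
    <instance><airline><to_city>Pittsburgh</to_city></airline></instance>
    <input mode="speech">I want to go to Pittsburgh</input>
  </interpretation>
  <interpretation confidence="40">
    <instance><airline><to_city>Stockholm</to_city></airline></instance>
    <input>I want to go to Stockholm</input>
  </interpretation>
</result>"""

best = best_interpretation(doc)
print(best.find("instance/airline/to_city").text)   # Pittsburgh
```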
 
   Mixed Initiative:
 
    System: What would you like?
    User: I would like 2 pizzas, one with pepperoni and cheese, one with
    sausage and a bottle of coke, to go.
 
    This representation includes an order object, which in turn contains
    objects named "food_item", "drink_item" and "delivery_method". This
    representation assumes there are no ambiguities in the speech or
    natural language processing. Note that it also assumes some level of
    intrasentential anaphora resolution, i.e., it resolves each "one" to
    "pizza".
 
    <result grammar="http://foodorder">
      <interpretation confidence="100" >
         <instance>
          <order>
            <food_item confidence="100">
              <pizza>
                <ingredients confidence="100">
                  pepperoni
                </ingredients>
                <ingredients confidence="100">
                  cheese
                </ingredients>
              </pizza>
              <pizza>
                <ingredients>sausage</ingredients>
              </pizza>
            </food_item>
            <drink_item confidence="100">
              <size>2-liter</size>
            </drink_item>
            <delivery_method>to go</delivery_method>
          </order>
        </instance>
        <input mode="speech">I would like 2 pizzas,
             one with pepperoni and cheese, one with sausage
             and a bottle of coke, to go.
        </input>
      </interpretation>
    </result>
 
   DTMF Input
 
    A combination of DTMF input and speech is represented using nested
    input elements. For example:
 
    User: My pin is (dtmf 1 2 3 4)
 
    <input>
       <input mode="speech" confidence="1.0"
         timestamp-start="2000-04-03T0:00:00"
         timestamp-end="2000-04-03T0:00:01.5">My pin is
      </input>
       <input mode="dtmf" confidence="1.0"
         timestamp-start="2000-04-03T0:00:01.5"
         timestamp-end="2000-04-03T0:00:02.0">1 2 3 4
      </input>
    </input>
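
    A consumer of this nested representation can recover each segment's
    mode, text, and duration from the timestamps. A Python sketch
    (illustrative, assuming the timestamp format shown above):

```python
import xml.etree.ElementTree as ET
from datetime import datetime

def ts(value):
    # Tolerate both "...T0:00:00" and "...T0:00:01.5".
    fmt = "%Y-%m-%dT%H:%M:%S.%f" if "." in value else "%Y-%m-%dT%H:%M:%S"
    return datetime.strptime(value, fmt)

doc = """<input>
  <input mode="speech" confidence="1.0"
     timestamp-start="2000-04-03T0:00:00"
     timestamp-end="2000-04-03T0:00:01.5">My pin is</input>
  <input mode="dtmf" confidence="1.0"
     timestamp-start="2000-04-03T0:00:01.5"
     timestamp-end="2000-04-03T0:00:02.0">1 2 3 4</input>
</input>"""

# (mode, text, duration in seconds) for each nested input segment.
segments = [(seg.get("mode"), seg.text.strip(),
             (ts(seg.get("timestamp-end"))
              - ts(seg.get("timestamp-start"))).total_seconds())
            for seg in ET.fromstring(doc).findall("input")]
print(segments)   # [('speech', 'My pin is', 1.5), ('dtmf', '1 2 3 4', 0.5)]
```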
 
    Note that grammars that recognize mixtures of speech and DTMF are
    not currently possible in VoiceXML; however, this representation may
    be needed for other applications of NLSML, and it may be introduced
    in future versions of VoiceXML.
 
 
 
   Interpreting Meta-Dialog and Meta-Task Utterances
 
    The natural language requirements specify that the semantic
    representation must be capable of representing a number of types of
    meta-dialog and meta-task utterances (Task-Specific Information/
    Meta-task Information Requirements 1-8 and Generic Information about
    the Communication Process Requirements 1-6). This specification is
    flexible enough that meta utterances can be represented on an
    application-specific basis without defining specific formats in this
    specification.
 
    Here are two examples of how meta-task and meta-dialog utterances
    might be represented.
 
    System: What toppings do you want on your pizza?
    User: What toppings do you have?
 
    <interpretation grammar="http://toppings">
       <instance>
          <question>
              <questioned_item>toppings</questioned_item>
             <questioned_property>
              availability
             </questioned_property>
          </question>
       </instance>
       <input mode="speech">
         what toppings do you have?
       </input>
    </interpretation>
 
    User: slow down.
 
    <interpretation grammar="http://generalCommandsGrammar">
       <instance>
        <command>
           <action>reduce speech rate</action>
           <doer>system</doer>
        </command>
       </instance>
      <input mode="speech">slow down</input>
    </interpretation>
 
   Anaphora and Deixis
 
    This specification can be used on an application-specific basis to
    represent utterances that contain unresolved anaphoric and deictic
    references. Anaphoric references, which include pronouns and
    definite noun phrases that refer to something that was mentioned in
    the preceding linguistic context, and deictic references, which
    refer to something that is present in the non-linguistic context,
    present similar problems in that there may not be sufficient
    unambiguous linguistic context to determine what their exact role in
    the interpretation should be. In order to represent unresolved
    anaphora and deixis using this specification, one strategy would be
    for the developer to define a more surface-oriented representation
    that leaves the specific details of the interpretation of the
    reference open. (This assumes that a later component is responsible
    for actually resolving the reference.)
 
    Example (ignoring the issue of representing the input from the
    pointing gesture):
 
    System: What do you want to drink?
    User: I want this (clicks on picture of large root beer.)
 
    <result>
       <interpretation>
          <instance>
           <doer>I</doer>
           <action>want</action>
           <object>this</object>
          </instance>
          <input mode="speech">I want this</input>
       </interpretation>
    </result>
 
    Future versions of the W3C Speech Interface Framework may address
    issues of representing resolved anaphora.
 
   Distinguishing Individual Items from Sets with One Member
 
    For programming convenience, it is useful to be able to distinguish
    between individual items and sets containing one item in the XML
    representation of semantic results. For example, a pizza order might
    consist of exactly one pizza, but a pizza might contain zero or more
    toppings. Since there is no standard way of marking this distinction
    directly in XML, in the current framework, the developer is free to
    adopt any conventions that would convey this information in the XML
    markup. One strategy would be for the developer to wrap the set of
    items in a grouping element, as in the following example.
 
    <order>
       <pizza>
          <topping-group>
             <topping>mushrooms</topping>
          </topping-group>
       </pizza>
       <drink>coke</drink>
    </order>
 
 
 
    In this example, the programmer can assume that there is supposed to
    be exactly one pizza and one drink in the order, but the fact that
    there is only one topping is an accident of this particular pizza
    order.
 
    If a data model is used, this distinction can be made in the data
    model by stating that the value of the "maxOccurs" attribute can be
    greater than 1.
 
   Extensibility
 
    One of the natural language requirements states that the
    specification must be extensible. The specification supports this
    requirement through its flexibility, as illustrated in the
    discussions of meta utterances and anaphora above. NLSML can easily
    be used in sophisticated systems to convey application-specific
    information that more basic systems would not make use of, for
    example, defining speech acts. Standard representations for items
    such as dates and times could also be defined.
 
 
    Normative References
 
    [1]    Fielding, R., Gettys, J., Mogul, J., Frystyk, H.,
           Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext
           Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999.
 
    [2]    Schulzrinne, H., Rao, A., and R. Lanphier, "Real Time
           Streaming Protocol (RTSP)", RFC 2326, April 1998.
 
    [3]    Crocker, D. and P. Overell, "Augmented BNF for syntax
           specifications: ABNF", RFC 2234, November 1997.
 
    [4]    Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A.,
           Peterson, J., Sparks, R., Handley, M., and E. Schooler,
           "SIP: Session Initiation Protocol", RFC 3261, June 2002.
 
    [6]    Handley, M. and V. Jacobson, "SDP: Session Description
           Protocol", RFC 2327, April 1998.
 
     [7]   World Wide Web Consortium, "Voice Extensible Markup Language
           (VoiceXML) Version 2.0", W3C Candidate Recommendation, March
           2004.
 
     [8]   Crocker, D., "STANDARD FOR THE FORMAT OF ARPA INTERNET TEXT
           MESSAGES", RFC 822, August 1982.
 
     [9]   Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", RFC 2119, March 1997.
 
 
     [10]  World Wide Web Consortium, "Speech Synthesis Markup Language
           (SSML) Version 1.0", W3C Candidate Recommendation, September
           2004.
 
     [11]  World Wide Web Consortium, "Speech Recognition Grammar
           Specification Version 1.0", W3C Candidate Recommendation,
           March 2004.
 
     [12]  Bradner, S., "The Internet Standards Process - Revision 3",
           RFC 2026, October 1996
 
     [13]  Yergeau, F., "UTF-8, a transformation format of Unicode and
           ISO 10646", RFC 2044, October 1996
 
     [14]  Freed, N. and N. Borenstein, "Multipurpose Internet Mail
           Extensions (MIME) Part Two: Media Types", RFC 2046,
           November 1996.
 
     [15]  Levinson, E., "Content-ID and Message-ID Uniform Resource
           Locators", RFC 2111, March 1997
 
     [16]  Schulzrinne, H., Petrack, S., "RTP Payload for DTMF Digits,
           Telephony Tones and Telephony Signals", RFC 2833, May 2000
 
     [17]  Alvestrand, H., "Tags for the Identification of Languages",
           RFC 3066, January 2001
 
     [18]  Camarillo, G., Eriksson, G., Holler, J., "Grouping of Media
           Lines in the Session Description Protocol (SDP) ", RFC 3388,
           December 2002
 
     [19]  T. Bray et al., "Namespaces in XML", W3C Recommendation, 14
           January 1999.
 
     [20]  Yon, D., Camarillo, G., "Connection-Oriented Media Transport
           in the Session Description Protocol  (SDP)", draft-ietf-
           mmusic-sdp-comedia-09.txt, (work in progress), September
           2004.
 
     [21]  Lennox, J., "Connection-Oriented Media Transport over the
           Transport Layer Security (TLS) Protocol in the Session
           Description Protocol (SDP)", draft-ietf-mmusic-comedia-
           tls-02.txt (work in progress).
 
 
 
    Appendix
 
    A.1 ABNF Message Definitions
 
        LWS    =    [*WSP CRLF] 1*WSP ; linear whitespace
 
 
        SWS    =    [LWS] ; sep whitespace
 
        UTF8-NONASCII    =    %xC0-DF 1UTF8-CONT
                         /    %xE0-EF 2UTF8-CONT
                         /    %xF0-F7 3UTF8-CONT
                         /    %xF8-FB 4UTF8-CONT
                         /    %xFC-FD 5UTF8-CONT
 
        UTF8-CONT   =    %x80-BF
        VCHAR       =    %x21-7E
        param       =    *pchar
 
        quoted-string    =    SWS DQUOTE *(qdtext / quoted-pair )
                               DQUOTE
 
        qdtext      =    LWS / %x21 / %x23-5B / %x5D-7E
                     /    UTF8-NONASCII
 
        quoted-pair =    "\" (%x00-09 / %x0B-0C
                     /    %x0E-7F)
 
        token       =    1*(alphanum / "-" / "." / "!" / "%" / "*"
                     / "_" / "+" / "`" / "'" / "~" )
 
        reserved    =    ";" / "/" / "?" / ":" / "@" / "&" / "="
                     / "+" / "$" / ","
 
        mark        =    "-" / "_" / "." / "!" / "~" / "*" / "'"
                     /    "(" / ")"
 
        unreserved  =    alphanum / mark
 
        pchar       =    unreserved / escaped
                     /    ":" / "@" / "&" / "=" / "+" / "$" / ","
 
        alphanum    =    ALPHA / DIGIT
 
        escaped      =    "%" HEXDIG HEXDIG
 
        fragment    =    *uric
 
        uri         =    [ absoluteURI / relativeURI ]
                          [ "#" fragment ]
 
        absoluteURI =    scheme ":" ( hier-part / opaque-part )
 
        relativeURI =    ( net-path / abs-path / rel-path )
                          [ "?" query ]
 
        hier-part   =    ( net-path / abs-path ) [ "?" query ]
 
 
        net-path    =    "//" authority [ abs-path ]
 
        abs-path    =    "/" path-segments
 
        rel-path    =    rel-segment [ abs-path ]
 
        rel-segment =    1*( unreserved / escaped / ";" / "@"
                     /    "&" / "=" / "+" / "$" / "," )
 
        opaque-part =    uric-no-slash *uric
 
        uric        =    reserved / unreserved / escaped
 
        uric-no-slash    =    unreserved / escaped / ";" / "?" / ":"
                          / "@" / "&" / "=" / "+" / "$" / ","
 
        path-segments    =    segment *( "/" segment )
 
        segment      =    *pchar *( ";" param )
 
        scheme      =    ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )
 
        authority   =    srvr / reg-name
 
        srvr        =    [ [ userinfo "@" ] hostport ]
 
        reg-name    =    1*( unreserved / escaped / "$" / ","
                     /    ";" / ":" / "@" / "&" / "=" / "+" )
 
        query       =    *uric
 
        userinfo    =    ( user ) [ ":" password ] "@"
 
        user        =    1*( unreserved / escaped
                     /    user-unreserved )
 
        user-unreserved  =    "&" / "=" / "+" / "$" / "," / ";"
                          /    "?" / "/"
 
        password    =    *( unreserved / escaped
                          /    "&" / "=" / "+" / "$" / "," )
 
        hostport    =    host [ ":" port ]
 
        host        =    hostname / IPv4address / IPv6reference
 
        hostname    =    *( domainlabel "." ) toplabel [ "." ]
 
        domainlabel =    alphanum / alphanum *( alphanum / "-" )
                          alphanum
 
 
        toplabel    =    ALPHA / ALPHA *( alphanum / "-" )
                          alphanum
 
        IPv4address =    1*3DIGIT "." 1*3DIGIT "." 1*3DIGIT "."
                          1*3DIGIT
 
        IPv6reference    =    "[" IPv6address "]"
 
        IPv6address =    hexpart [ ":" IPv4address ]
 
        hexpart     =    hexseq / hexseq "::" [ hexseq ] / "::"
                          [ hexseq ]
 
        hexseq      =    hex4 *( ":" hex4)
 
        hex4        =    1*4HEXDIG
 
        port        =    1*DIGIT
 
        cmid-attribute   =    "a=cmid:" identification-tag
 
        identification-tag    =    token
 
        generic-message  =    start-line
                               message-header
                               CRLF
                               [ message-body ]
 
        message-body      =    *OCTET
 
        start-line  =    request-line / status-line / event-line
 
        request-line =    mrcp-version SP message-length SP method-name
                          SP request-id CRLF
 
        status-line =    mrcp-version SP message-length SP request-id
                          SP status-code SP request-state CRLF
 
        event-line  =    mrcp-version SP message-length SP event-name
                          SP request-id SP request-state CRLF
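
    The three start-line forms can be distinguished without a full
    parser: a request-line has four space-separated fields, while
    status-lines and event-lines have five, with a numeric third field
    marking a status-line. A Python sketch of this dispatch (the helper
    name is illustrative):

```python
def classify_start_line(line):
    # request-line: mrcp-version length method-name request-id
    # status-line:  mrcp-version length request-id status-code state
    # event-line:   mrcp-version length event-name request-id state
    fields = line.split(" ")
    if not fields[0].startswith("MRCP/"):
        raise ValueError("not an MRCPv2 start-line")
    if len(fields) == 4:
        return "request-line"
    if len(fields) == 5:
        return "status-line" if fields[2].isdigit() else "event-line"
    raise ValueError("malformed start-line")

print(classify_start_line("MRCP/2.0 386 SPEAK 543257"))          # request-line
print(classify_start_line("MRCP/2.0 49 543257 200 IN-PROGRESS"))  # status-line
```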
 
        method-name =    generic-method
                     /    synthesizer-method
                     /    recorder-method
                     /    recognizer-method
                     /    verifier-method
                     /    extension-method
 
        extension-method =    1*(ALPHA / "-")
 
 
        generic-method   =    "SET-PARAMS"
                          /    "GET-PARAMS"
 
        request-state    =    "COMPLETE"
                          /    "IN-PROGRESS"
                          /    "PENDING"
 
        event-name       =    synthesizer-event
                          /    recognizer-event
                          /    recorder-event
                          /    verifier-event
                          /    extension-event
 
        extension-event  =    1*(ALPHA / "-")
 
        message-header   =    1*(generic-header / resource-header)
 
        resource-header  =    recognizer-header
                          /    synthesizer-header
                          /    recorder-header
                          /    verifier-header
                          /    extension-header
 
        extension-header =    1*(alphanum) CRLF
 
        generic-header   =    channel-identifier
                          /    active-request-id-list
                          /    proxy-sync-id
                          /    content-id
                          /    content-type
                          /    content-length
                          /    content-base
                          /    content-location
                          /    content-encoding
                          /    cache-control
                          /    logging-tag
                          /    vendor-specific
 
        ; -- content-id is as defined in RFC 2111, RFC2046 and RFC822
 
        mrcp-version      =    "MRCP" "/" 1*DIGIT "." 1*DIGIT
 
        request-id       =    1*DIGIT
 
        status-code      =    1*DIGIT
 
        channel-identifier    =    "Channel-Identifier" ":"
                                    channel-id CRLF
 
        channel-id       =    1*HEXDIG "@" 1*VCHAR
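
    The channel-id rule translates directly into a validation pattern; a
    Python sketch (illustrative):

```python
import re

# channel-id = 1*HEXDIG "@" 1*VCHAR, with VCHAR = %x21-7E.
CHANNEL_ID = re.compile(r"^[0-9A-Fa-f]+@[\x21-\x7E]+$")

assert CHANNEL_ID.match("32AECB23433802@speechsynth")
assert not CHANNEL_ID.match("speechsynth")   # no hex prefix or "@"
```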
 
 
        active-request-id-list =    "Active-Request-Id-List" ":"
                                    request-id *("," request-id) CRLF
 
        proxy-sync-id    =    "Proxy-Sync-Id" ":" 1*VCHAR CRLF
 
        content-length   =    "Content-Length" ":" 1*DIGIT CRLF
 
        content-base      =    "Content-Base" ":" absoluteURI CRLF
 
        content-type      =    "Content-Type" ":" media-type CRLF
 
        media-type       =    type "/" subtype *( ";" parameter )
 
        type        =    token
 
        subtype      =    token
 
        parameter   =    attribute "=" value
 
        attribute   =    token
 
        value       =    token / quoted-string
 
        content-encoding =    "Content-Encoding" ":"
                               *WSP content-coding
                               *(*WSP "," *WSP content-coding *WSP )
                               CRLF
 
        content-coding   =    token
 
 
        content-location =    "Content-Location" ":"
                               ( absoluteURI / relativeURI )  CRLF
 
        cache-control    =    "Cache-Control" ":"
                               [*WSP cache-directive
                               *( *WSP "," *WSP cache-directive *WSP )]
                               CRLF
 
        cache-directive  =    "max-age" "=" delta-seconds
                          /    "max-stale" "=" [ delta-seconds ]
                          /    "min-fresh" "=" delta-seconds
 
        logging-tag      =    "Logging-Tag" ":" 1*VCHAR CRLF
 
        vendor-specific  =    "Vendor-Specific-Parameters" ":"
                               [vendor-specific-av-pair
                               *[";" vendor-specific-av-pair]] CRLF
 
        vendor-specific-av-pair    =    vendor-av-pair-name "="
                                         vendor-av-pair-value
 
 
        vendor-av-pair-name   =    1*VCHAR
 
        vendor-av-pair-value  =    1*VCHAR
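
   For illustration only, a minimal Python sketch of splitting a
   Vendor-Specific-Parameters value into its av-pairs per the
   productions above; the parameter names shown are hypothetical.

```python
def parse_vendor_specific(value: str) -> dict:
    """Split a Vendor-Specific-Parameters value into name/value pairs
    per the ABNF above: av-pairs separated by ";", each "name=value"."""
    pairs = {}
    if not value:
        return pairs
    for item in value.split(";"):
        name, _, val = item.partition("=")
        pairs[name] = val
    return pairs

# Hypothetical vendor parameters, for illustration only
print(parse_vendor_specific("com.example.foo=1;com.example.bar=baz"))
```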
 
        set-cookie  =    "Set-Cookie:" cookies CRLF
 
        cookies      =    cookie *("," *LWS cookie)
 
        cookie      =    NAME "=" VALUE *(";" cookie-av)
 
        NAME        =    attribute
 
        VALUE       =    value
 
        cookie-av   =    "Comment" "=" value
                     /    "Domain" "=" value
                     /    "Max-Age" "=" value
                     /    "Path" "=" value
                     /    "Secure"
                     /    "Version" "=" 1*DIGIT
                     /    "Age" "=" delta-seconds
 
        set-cookie2 =    "Set-Cookie2:" cookies2 CRLF
 
        cookies2    =    cookie2 *("," *LWS cookie2)
 
        cookie2      =    NAME "=" VALUE *(";" cookie-av2)
 
        cookie-av2  =    "Comment" "=" value
                     /    "CommentURL" "=" <"> http_URL <">
                     /    "Discard"
                     /    "Domain" "=" value
                     /    "Max-Age" "=" value
                     /    "Path" "=" value
                     /    "Port" [ "=" <"> portlist <"> ]
                     /    "Secure"
                     /    "Version" "=" 1*DIGIT
                     /    "Age" "=" delta-seconds
 
        portlist    =    portnum *("," *LWS portnum)
 
        portnum      =    1*DIGIT
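
   The cookie productions above follow the Set-Cookie/Set-Cookie2 style
   of RFC 2109 and RFC 2965. Purely as a non-normative illustration, a
   Python sketch of splitting one cookie into its NAME, VALUE, and
   cookie-av parts; the cookie contents are hypothetical.

```python
def parse_cookie(cookie: str):
    """Split one cookie per the production above:
    cookie = NAME "=" VALUE *(";" cookie-av)."""
    parts = [p.strip() for p in cookie.split(";")]
    name, _, value = parts[0].partition("=")
    avs = {}
    for av in parts[1:]:
        k, _, v = av.partition("=")
        avs[k] = v  # value-less avs such as "Secure" map to ""
    return name, value, avs

# Hypothetical Set-Cookie2 cookie string, for illustration only
print(parse_cookie('session=abc123; Version=1; Path=/; Secure'))
```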
 
        ; Synthesizer ABNF
 
        synthesizer-method    =    "SPEAK"
                               /    "STOP"
                               /    "PAUSE"
                               /    "RESUME"
                               /    "BARGE-IN-OCCURRED"
                               /    "CONTROL"
 
        synthesizer-event      =    "SPEECH-MARKER"
                               /    "SPEAK-COMPLETE"
 
        synthesizer-header    =    jump-size
                               /    kill-on-barge-in
                               /    speaker-profile
                               /    completion-cause
                               /    completion-reason
                               /    voice-parameter
                               /    prosody-parameter
                               /    speech-marker
                               /    speech-language
                               /    fetch-hint
                               /    audio-fetch-hint
                               /    fetch-timeout
                               /    failed-uri
                               /    failed-uri-cause
                               /    speak-restart
                               /    speak-length
                               /    load-lexicon
                               /    lexicon-search-order
 
 
        jump-size   =    "Jump-Size" ":" speech-length-value CRLF
 
        speech-length-value   =    numeric-speech-length
                               /    text-speech-length
 
        text-speech-length    =    1*ALPHA SP "Tag"
 
        numeric-speech-length =    ("+" / "-") 1*DIGIT SP
                                    numeric-speech-unit
 
        numeric-speech-unit   =    "Second"
                               /    "Word"
                               /    "Sentence"
                               /    "Paragraph"
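
   Illustratively, speech-length-value admits either a signed count
   with a unit (numeric-speech-length) or a tag name followed by the
   literal "Tag" (text-speech-length). A non-normative Python check,
   with hypothetical header values:

```python
import re

# numeric-speech-length = ("+" / "-") 1*DIGIT SP numeric-speech-unit
NUMERIC = re.compile(r"[+-]\d+ (Second|Word|Sentence|Paragraph)")
# text-speech-length = 1*ALPHA SP "Tag"
TEXT = re.compile(r"[A-Za-z]+ Tag")

def is_speech_length(value: str) -> bool:
    """True if value matches the speech-length-value production."""
    return bool(NUMERIC.fullmatch(value) or TEXT.fullmatch(value))

# Hypothetical Jump-Size / Speak-Length values, for illustration only
print(is_speech_length("+15 Second"))   # True
print(is_speech_length("intro Tag"))    # True
print(is_speech_length("15 Second"))    # False: the sign is required
```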
 
        delta-seconds         =    1*DIGIT
 
        kill-on-barge-in      =    "Kill-On-Barge-In" ":" boolean-value
                                    CRLF
 
        boolean-value         =    "true" / "false"
 
 
        speaker-profile       =    "Speaker-Profile" ":" absoluteURI
                                    CRLF
 
        completion-cause      =    "Completion-Cause" ":" 1*DIGIT SP
                                    1*VCHAR CRLF
 
        completion-reason      =    "Completion-Reason" ":"
                                    quoted-string CRLF
 
        voice-parameter       =    "Voice-" voice-param-name ":"
                                    [voice-param-value] CRLF
 
        voice-param-name      =    1*VCHAR
 
        voice-param-value      =    1*VCHAR
 
        prosody-parameter      =    "Prosody-" prosody-param-name ":"
                                    [prosody-param-value] CRLF
 
        prosody-param-name    =    1*VCHAR
 
        prosody-param-value   =    1*VCHAR
 
        timestamp        =    "Timestamp" "=" time-stamp-value

        speech-marker    =    "Speech-Marker" ":" 1*VCHAR
                               [";" timestamp ] CRLF
 
        speech-language  =    "Speech-Language" ":" [1*VCHAR] CRLF
 
        fetch-hint       =    "Fetch-Hint" ":" [1*ALPHA] CRLF
 
        audio-fetch-hint =    "Audio-Fetch-Hint" ":" [1*ALPHA] CRLF
 
        fetch-timeout    =    "Fetch-Timeout" ":" [1*DIGIT] CRLF
 
        failed-uri       =    "Failed-URI" ":" absoluteURI CRLF
 
        failed-uri-cause =    "Failed-URI-Cause" ":" 1*ALPHANUM CRLF
 
        speak-restart    =    "Speak-Restart" ":" boolean-value CRLF
 
        speak-length      =    "Speak-Length" ":" speech-length-value
                               CRLF
 
        load-lexicon     =    "Load-Lexicon" ":" boolean-value CRLF
 
        lexicon-search-order  =    "Lexicon-Search-Order" ":" absoluteURI
                                    *[";" absoluteURI] CRLF
 
        ; Recognizer ABNF
 
        recognizer-method      =    recog-only-method
                               /    enrollment-method
 
        recog-only-method      =    "DEFINE-GRAMMAR"
                               /    "RECOGNIZE"
                               /    "INTERPRET"
                               /    "GET-RESULT"
                               /    "START-INPUT-TIMERS"
                               /    "STOP"
 
        enrollment-method      =    "START-PHRASE-ENROLLMENT"
                               /    "ENROLLMENT-ROLLBACK"
                               /    "END-PHRASE-ENROLLMENT"
                               /    "MODIFY-PHRASE"
                               /    "DELETE-PHRASE"
 
        recognizer-event      =    "START-OF-SPEECH"
                               /    "RECOGNITION-COMPLETE"
                               /    "INTERPRETATION-COMPLETE"
 
        recognizer-header      =    recog-only-header
                               /    enrollment-header
 
 
        recog-only-header      =    confidence-threshold
                               /    sensitivity-level
                               /    speed-vs-accuracy
                               /    n-best-list-length
                               /    no-input-timeout
                               /    recognition-timeout
                               /    waveform-uri
                               /    input-waveform-uri
                               /    completion-cause
                               /    completion-reason
                               /    recognizer-context-block
                               /    start-input-timers
                               /    speech-complete-timeout
                               /    speech-incomplete-timeout
                               /    dtmf-interdigit-timeout
                               /    dtmf-term-timeout
                               /    dtmf-term-char
                               /    fetch-timeout
                               /    failed-uri
                               /    failed-uri-cause
                               /    save-waveform
                               /    new-audio-channel
                               /    speech-language
                               /    ver-buffer-utterance
                               /    recognition-mode
                               /    cancel-if-queue
                               /    hotword-max-duration
                               /    hotword-min-duration
                               /    interpret-text
 
        enrollment-header      =    num-min-consistent-pronunciations
                               /    consistency-threshold
                               /    clash-threshold
                               /    personal-grammar-uri
                               /    phrase-id
                               /    phrase-nl
                               /    weight
                               /    save-best-waveform
                               /    new-phrase-id
                               /    confusable-phrases-uri
                               /    abort-phrase-enrollment
 
        confidence-threshold  =    "Confidence-Threshold" ":"
                                    [1*DIGIT] CRLF
 
        sensitivity-level      =    "Sensitivity-Level" ":" [1*DIGIT]
                                    CRLF
 
        speed-vs-accuracy      =    "Speed-Vs-Accuracy" ":" [1*DIGIT]
                                    CRLF
 
        n-best-list-length    =    "N-Best-List-Length" ":" [1*DIGIT]
                                    CRLF
 
        no-input-timeout      =    "No-Input-Timeout" ":" [1*DIGIT]
                                    CRLF
 
        recognition-timeout   =    "Recognition-Timeout" ":" [1*DIGIT]
                                    CRLF
 
        waveform-uri           =    "Waveform-URI" ":" absoluteURI CRLF
 
        completion-cause      =    "Completion-Cause" ":" 1*DIGIT SP
                                    1*VCHAR CRLF
 
        recognizer-context-block   =    "Recognizer-Context-Block" ":"
                                    [1*VCHAR] CRLF
 
        start-input-timers    =    "Start-Input-Timers" ":"
                                    boolean-value CRLF
 
        speech-complete-timeout    =    "Speech-Complete-Timeout" ":"
                                         [1*DIGIT] CRLF
 
        speech-incomplete-timeout  =    "Speech-Incomplete-Timeout" ":"
                                        [1*DIGIT] CRLF
 
        dtmf-interdigit-timeout    =    "DTMF-Interdigit-Timeout" ":"
                                        [1*DIGIT] CRLF
 
        dtmf-term-timeout      =    "DTMF-Term-Timeout" ":" [1*DIGIT]
                                    CRLF
 
        dtmf-term-char   =    "DTMF-Term-Char" ":" [VCHAR] CRLF
 
        fetch-timeout    =    "Fetch-Timeout" ":" [1*DIGIT] CRLF
 
        save-waveform    =    "Save-Waveform" ":" [boolean-value] CRLF
 
        new-audio-channel =    "New-Audio-Channel" ":"
                               boolean-value CRLF
 
        recognition-mode =    "Recognition-Mode" ":" 1*ALPHA CRLF
 
        cancel-if-queue  =    "Cancel-If-Queue" ":" boolean-value CRLF
 
        hotword-max-duration  =    "Hotword-Max-Duration" ":"
                                    1*DIGIT CRLF
 
        hotword-min-duration  =    "Hotword-Min-Duration" ":"
                                    1*DIGIT CRLF
 
 
        num-min-consistent-pronunciations    =
                "Num-Min-Consistent-Pronunciations" ":" 1*DIGIT CRLF
 
 
        consistency-threshold =    "Consistency-Threshold" ":" 1*DIGIT
                                    CRLF
 
        clash-threshold       =    "Clash-Threshold" ":" 1*DIGIT CRLF
 
        personal-grammar-uri  =    "Personal-Grammar-URI" ":"
                                    absoluteURI CRLF
 
 
        phrase-id        =    "Phrase-ID" ":" 1*VCHAR CRLF
 
        phrase-nl        =    "Phrase-NL" ":" 1*VCHAR CRLF
 
        weight           =    "Weight" ":" WEIGHT CRLF
 
        save-best-waveform    =    "Save-Best-Waveform" ":"
                                    boolean-value CRLF
 
        new-phrase-id    =    "New-Phrase-ID" ":" 1*VCHAR CRLF
 
        confusable-phrases-uri =    "Confusable-Phrases-URI" ":"
                                    absoluteURI CRLF
 
        abort-phrase-enrollment    =    "Abort-Phrase-Enrollment" ":"
                                         boolean-value CRLF
 
 
        ; Verifier ABNF
 
        verifier-method  =    "START-SESSION"
                          /    "END-SESSION"
                          /    "QUERY-VOICEPRINT"
                          /    "DELETE-VOICEPRINT"
                          /    "VERIFY"
                          /    "VERIFY-FROM-BUFFER"
                          /    "VERIFY-ROLLBACK"
                          /    "STOP"
                          /    "START-INPUT-TIMERS"
 
 
        verifier-event   =    "VERIFICATION-COMPLETE"
                          /    "START-OF-SPEECH"
 
 
        verifier-header  =    repository-uri
                          /    voiceprint-identifier
                          /    verification-mode
                          /    adapt-model
                          /    abort-model
                          /    security-level
                          /    num-min-verification-phrases
                          /    num-max-verification-phrases
                          /    no-input-timeout
                          /    save-waveform
                          /    waveform-uri
                          /    voiceprint-exists
                          /    ver-buffer-utterance
                          /    input-waveform-uri
                          /    completion-cause
                          /    completion-reason
                          /    speech-complete-timeout
                          /    new-audio-channel
                          /    abort-verification
 
 
        repository-uri   =    "Repository-URI" ":" absoluteURI CRLF
 
        voiceprint-identifier =    "Voiceprint-Identifier" ":"
                                    1*VCHAR "." 3VCHAR
                                    [";" 1*VCHAR "." 3VCHAR] CRLF
 
        verification-mode =    "Verification-Mode" ":"
                               verification-mode-string CRLF
 
        verification-mode-string   =    "train" / "verify"
 
        adapt-model      =    "Adapt-Model" ":" boolean-value CRLF

        abort-model      =    "Abort-Model" ":" boolean-value CRLF
 
        security-level   =    "Security-Level" ":"
                               security-level-string CRLF
 
        security-level-string =    "high"
                               /    "medium-high"
                               /    "medium"
                               /    "medium-low"
                               /    "low"
 
        num-min-verification-phrases =  "Num-Min-Verification-Phrases"
                                         ":" 1*DIGIT CRLF
 
        num-max-verification-phrases =  "Num-Max-Verification-Phrases"
                                         ":" 1*DIGIT CRLF
 
        no-input-timeout =    "No-Input-Timeout" ":" [1*DIGIT] CRLF
 
        save-waveform    =    "Save-Waveform" ":"
                               boolean-value CRLF
 
        waveform-uri      =    "Waveform-URI" ":" absoluteURI CRLF
 
        voiceprint-exists =    "Voiceprint-Exists" ":"
                               boolean-value CRLF
 
        ver-buffer-utterance  =    "Ver-Buffer-Utterance" ":"
                               boolean-value CRLF
 
        input-waveform-uri    =    "Input-Waveform-URI" ":" absoluteURI
                                    CRLF
 
        completion-cause      =    "Completion-Cause" ":" 1*DIGIT SP
                                    1*VCHAR CRLF
 
        abort-verification    =    "Abort-Verification" ":"
                                    boolean-value CRLF
 
 
        ; Recorder ABNF
 
        recorder-method       =    "RECORD"
                               /    "STOP"
 
 
 
        recorder-event        =    "START-OF-SPEECH"
                               /    "RECORD-COMPLETE"
 
 
        recorder-header       =    sensitivity-level
                               /    no-input-timeout
                               /    completion-cause
                               /    completion-reason
                               /    failed-uri
                               /    failed-uri-cause
                               /    record-uri
                               /    recorder-media-type
                               /    max-time
                               /    final-silence
                               /    capture-on-speech
                               /    new-audio-channel


        sensitivity-level      =    "Sensitivity-Level" ":" [1*DIGIT]
                                    CRLF

        no-input-timeout      =    "No-Input-Timeout" ":" [1*DIGIT]
                                    CRLF

        completion-cause      =    "Completion-Cause" ":" 1*DIGIT SP
                                    1*VCHAR CRLF

        failed-uri       =    "Failed-URI" ":" absoluteURI CRLF

        failed-uri-cause =    "Failed-URI-Cause" ":" 1*ALPHANUM CRLF

        record-uri       =    "Record-URI" ":" absoluteURI CRLF

        recorder-media-type   =    "Media-Type" ":" media-type CRLF

        max-time         =    "Max-Time" ":" 1*DIGIT CRLF

        final-silence    =    "Final-Silence" ":" 1*DIGIT CRLF
 
 
        capture-on-speech =    "Capture-On-Speech" ":"
                               1*DIGIT CRLF
 
    A.2 XML Schema and DTD
 
 
    A.2.1 Recognition Results
 
    NLSML Schema Definition
 
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema  xmlns:xs="http://www.w3.org/2001/XMLSchema"
                targetNamespace="http://www.ietf.org/xml/schema/mrcp2"
                xmlns="http://www.ietf.org/xml/schema/mrcp2"
                elementFormDefault="qualified"
                attributeFormDefault="unqualified" >
      <xs:element name="result">
      <xs:annotation>
      <xs:documentation> Natural Language Semantic Markup Schema
      </xs:documentation>
      </xs:annotation>
      <xs:complexType>
      <xs:sequence>
           <xs:element name="interpretation" maxOccurs="unbounded">
           <xs:complexType>
           <xs:sequence>
                <xs:element name="instance" minOccurs="0">
                <xs:complexType>
                <xs:sequence>
                     <xs:any/>
                </xs:sequence>
                </xs:complexType>
                </xs:element>
                <xs:element name="input">
                <xs:complexType mixed="true">
                <xs:choice>
                     <xs:element name="noinput" minOccurs="0"/>
                     <xs:element name="nomatch" minOccurs="0"/>
                     <xs:element name="input" minOccurs="0"/>
                </xs:choice>
                <xs:attribute  name="confidence"
                               type="confidenceinfo"
                               default="1.0"/>
                <xs:attribute  name="timestamp-start"
                               type="xs:string"/>
                <xs:attribute  name="timestamp-end"
                               type="xs:string"/>
                </xs:complexType>
                </xs:element>
           </xs:sequence>
           <xs:attribute  name="confidence" type="confidenceinfo"
                          default="1.0"/>
           <xs:attribute  name="grammar" type="xs:anyURI"
                          use="optional"/>
           <xs:attribute  name="x-model" type="xs:anyURI"
                          use="optional"/>
 
           </xs:complexType>
           </xs:element>
      </xs:sequence>
      <xs:attribute  name="grammar" type="xs:anyURI"
                     use="optional"/>
      <xs:attribute  name="x-model" type="xs:anyURI"
                     use="optional"/>
      </xs:complexType>
      </xs:element>
 
      <xs:simpleType name="confidenceinfo">
      <xs:restriction base="xs:float">
           <xs:minInclusive value="0.0"/>
           <xs:maxInclusive value="1.0"/>
      </xs:restriction>
      </xs:simpleType>
    </xs:schema>
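
   A non-normative example may help: the following hypothetical NLSML
   instance is shaped by the schema above (the grammar URI and the
   recognized input are invented for illustration), parsed here with
   Python's ElementTree.

```python
import xml.etree.ElementTree as ET

# Hypothetical NLSML result instance; values are illustrative only.
NLSML = """<result xmlns="http://www.ietf.org/xml/schema/mrcp2"
        grammar="session:request1@form-level.store">
  <interpretation confidence="0.85">
    <instance><city>Boston</city></instance>
    <input>Boston</input>
  </interpretation>
</result>"""

NS = "{http://www.ietf.org/xml/schema/mrcp2}"
root = ET.fromstring(NLSML)
interp = root.find(NS + "interpretation")
print(interp.get("confidence"))              # "0.85"
print(root.find(".//" + NS + "input").text)  # "Boston"
```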
 
    NLSML Document Type Definition
 
           <!--      NLSML Results DTD
           -->
 
           <!ELEMENT result (interpretation*)>
           <!ATTLIST result
            grammar CDATA #IMPLIED
            x-model CDATA #IMPLIED
           >
           <!ELEMENT interpretation (instance?,input)>
           <!ATTLIST interpretation
            confidence CDATA "1.0"
            grammar CDATA #IMPLIED
            x-model CDATA #IMPLIED
           >
           <!ELEMENT input (#PCDATA|noinput|nomatch|input)*>
           <!ATTLIST input
            mode (dtmf | speech) "speech"
            timestamp-start CDATA #IMPLIED
            timestamp-end CDATA #IMPLIED
            confidence CDATA "1.0"
           >
           <!ELEMENT nomatch (#PCDATA)*>
           <!ELEMENT noinput EMPTY>
           <!ELEMENT instance ANY>
 
    A.2.2 Enrollment Results
 
    Enrollment Results Schema Definition
      <!-- MRCP Enrollment Schema
           (See http://www.oasis-open.org/committees/relax-ng/spec.html)
      -->
 
 
           <element name="enrollment-result"
             datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes"
                    ns="" xmlns="http://relaxng.org/ns/structure/1.0">
             <interleave>
               <element name="num-clashes">
                 <data type="nonNegativeInteger"/>
               </element>
               <element name="num-good-repetitions">
                 <data type="nonNegativeInteger"/>
               </element>
               <element name="num-repetitions-still-needed">
                 <data type="nonNegativeInteger"/>
               </element>
               <element name="consistency-status">
                 <choice>
                   <value>CONSISTENT</value>
                   <value>INCONSISTENT</value>
                   <value>UNDECIDED</value>
                 </choice>
               </element>
               <optional>
                 <element name="clash-phrase-ids">
                   <oneOrMore>
                     <element name="item">
                       <data type="token"/>
                     </element>
                   </oneOrMore>
                 </element>
               </optional>
               <optional>
                 <element name="transcriptions">
                   <oneOrMore>
                     <element name="item">
                       <text/>
                     </element>
                   </oneOrMore>
                 </element>
               </optional>
               <optional>
                 <element name="confusable-phrases">
                   <oneOrMore>
                     <element name="item">
                       <text/>
                     </element>
                   </oneOrMore>
                 </element>
               </optional>
             </interleave>
           </element>
 
 
   Enrollment Results Document Type Definition
 
           <!--      MRCP Enrollment Results DTD
           -->
           <!ELEMENT enrollment-result (num-clashes,
                     num-good-repetitions,num-repetitions-still-needed,
                     consistency-status, clash-phrase-ids?,
                     transcriptions?, confusable-phrases?)>
           <!ELEMENT num-clashes (#PCDATA)>
           <!ELEMENT num-good-repetitions (#PCDATA)>
           <!ELEMENT num-repetitions-still-needed (#PCDATA)>
           <!ELEMENT consistency-status (#PCDATA)>
           <!ELEMENT clash-phrase-ids (item)>
           <!ELEMENT transcriptions (item)>
           <!ELEMENT confusable-phrases (item)>
           <!ELEMENT item (#PCDATA)>
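
   As a non-normative illustration, a hypothetical enrollment-result
   instance that follows the DTD above (the counts and phrase IDs are
   invented), parsed with Python's ElementTree.

```python
import xml.etree.ElementTree as ET

# Hypothetical enrollment-result instance; values are illustrative.
DOC = """<enrollment-result>
  <num-clashes>2</num-clashes>
  <num-good-repetitions>1</num-good-repetitions>
  <num-repetitions-still-needed>1</num-repetitions-still-needed>
  <consistency-status>CONSISTENT</consistency-status>
  <clash-phrase-ids><item>Jeff</item><item>Andre</item></clash-phrase-ids>
</enrollment-result>"""

root = ET.fromstring(DOC)
print(root.findtext("consistency-status"))    # "CONSISTENT"
print([i.text for i in root.iter("item")])    # ['Jeff', 'Andre']
```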
 
   A.2.3 Verification Results
 
   Verification Results Schema Definition
 
      <!-- MRCP Verification Results Schema
           (See http://www.oasis-open.org/committees/relax-ng/spec.html)
       -->
 
           <grammar
             datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes"
                    ns="" xmlns="http://relaxng.org/ns/structure/1.0">
 
             <start>
               <element name="verification-result">
                 <element name="num-frames">
                   <ref name="num-framesContent"/>
                 </element>
                 <element name="voiceprint">
                   <ref name="firstVoiceprintContent"/>
                 </element>
                 <zeroOrMore>
                   <element name="voiceprint">
                     <ref name="restVoiceprintContent"/>
                   </element>
                 </zeroOrMore>
               </element>
             </start>
 
             <define name="firstVoiceprintContent">
               <attribute name="id">
                 <data type="string"/>
               </attribute>
               <interleave>
 
                 <optional>
                   <element name="adapted">
                     <data type="boolean"/>
                   </element>
                   <element name="needmoredata">
                     <ref name="needmoredataContent"/>
                   </element>
                 </optional>
                 <element name="incremental">
                   <ref name="firstCommonContent"/>
                 </element>
                 <element name="cumulative">
                   <ref name="firstCommonContent"/>
                 </element>
               </interleave>
             </define>
 
             <define name="restVoiceprintContent">
               <attribute name="id">
                 <data type="string"/>
               </attribute>
               <interleave>
                 <optional>
                   <element name="incremental">
                     <ref name="restCommonContent"/>
                   </element>
                 </optional>
                 <element name="cumulative">
                   <ref name="restCommonContent"/>
                 </element>
               </interleave>
             </define>
 
             <define name="firstCommonContent">
               <interleave>
                 <choice>
                   <element name="decision">
                     <ref name="decisionContent"/>
                   </element>
                 </choice>
                 <element name="device">
                   <ref name="deviceContent"/>
                 </element>
                 <element name="gender">
                   <ref name="genderContent"/>
                 </element>
                 <zeroOrMore>
                   <element name="verification-score">
                     <ref name="verification-scoreContent"/>
                   </element>
                 </zeroOrMore>
 
               </interleave>
             </define>
 
             <define name="restCommonContent">
               <interleave>
                 <optional>
                   <element name="decision">
                     <ref name="decisionContent"/>
                   </element>
                 </optional>
                 <optional>
                   <element name="utterance-length">
                     <ref name="utterance-lengthContent"/>
                   </element>
                 </optional>
                 <optional>
                   <element name="device">
                     <ref name="deviceContent"/>
                   </element>
                 </optional>
                 <optional>
                   <element name="gender">
                     <ref name="genderContent"/>
                   </element>
                 </optional>
                 <zeroOrMore>
                   <element name="verification-score">
                     <ref name="verification-scoreContent"/>
                   </element>
                 </zeroOrMore>
                </interleave>
             </define>
 
             <define name="decisionContent">
               <choice>
                 <value>accepted</value>
                 <value>rejected</value>
                 <value>undecided</value>
               </choice>
             </define>
 
             <define name="needmoredataContent">
               <data type="boolean"/>
             </define>
 
             <define name="utterance-lengthContent">
               <data type="nonNegativeInteger"/>
             </define>
 
             <define name="deviceContent">
               <choice>
 
                 <value>cellular-phone</value>
                 <value>electret-phone</value>
                 <value>carbon-button-phone</value>
                 <value>unknown</value>
               </choice>
             </define>
 
             <define name="genderContent">
               <choice>
                 <value>male</value>
                 <value>female</value>
                 <value>unknown</value>
               </choice>
             </define>
 
             <define name="verification-scoreContent">
               <data type="float">
                 <param name="minInclusive">0</param>
                 <param name="maxInclusive">1</param>
               </data>
             </define>
 
           </grammar>
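   As an illustration, the sketch below (Python, using the standard
   library's xml.etree.ElementTree; the instance document and its values
   are hypothetical, and any namespace declarations are omitted) shows
   how a client might extract the cumulative decision and score from a
   verification results document matching the schema above:

```python
import xml.etree.ElementTree as ET

# Hypothetical verification results instance conforming to the schema above.
SAMPLE = """<verification-result>
  <voiceprint id="johnsmith.voiceprint">
    <cumulative>
      <decision>accepted</decision>
      <utterance-length>1500</utterance-length>
      <device>cellular-phone</device>
      <gender>male</gender>
      <verification-score>0.85</verification-score>
    </cumulative>
  </voiceprint>
</verification-result>"""

root = ET.fromstring(SAMPLE)
for vp in root.findall("voiceprint"):
    cumulative = vp.find("cumulative")
    decision = cumulative.findtext("decision")
    score = float(cumulative.findtext("verification-score"))
    print(vp.get("id"), decision, score)
    # prints: johnsmith.voiceprint accepted 0.85
```

   A production client would, of course, also handle the optional
   "incremental" and "adapted" elements and the needmoredata case.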
 
   Verification Results Document Type Definition
           <!--      MRCP Verification Results DTD
           -->
 
           <!ELEMENT verification-result (voiceprint+)>
           <!ELEMENT voiceprint (adapted?, incremental?, cumulative)>
           <!ATTLIST voiceprint id CDATA #REQUIRED>
          <!ELEMENT incremental ((decision | needmoredata)?,
                    utterance-length?, device?, gender?,
                    verification-score*)>
          <!ELEMENT cumulative  ((decision | needmoredata)?,
                    utterance-length?, device?, gender?,
                    verification-score*)>
          <!ELEMENT decision (#PCDATA)>
          <!ELEMENT needmoredata (#PCDATA)>
          <!ELEMENT utterance-length (#PCDATA)>
          <!ELEMENT device (#PCDATA)>
          <!ELEMENT gender (#PCDATA)>
          <!ELEMENT adapted (#PCDATA)>
          <!ELEMENT verification-score (#PCDATA)>
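   The enumerated values and the score range constrained by the schema
   lend themselves to a simple field check on the receiving side.  The
   sketch below (a hypothetical Python helper, not part of the protocol)
   validates the constrained result fields against the definitions
   above:

```python
# Hypothetical validator for the constrained verification result fields.
# The value sets mirror the decisionContent, deviceContent, and
# genderContent definitions; the score range mirrors
# verification-scoreContent (a float in [0, 1]).
DECISIONS = {"accepted", "rejected", "undecided"}
DEVICES = {"cellular-phone", "electret-phone", "carbon-button-phone", "unknown"}
GENDERS = {"male", "female", "unknown"}

def check_result_fields(decision, device, gender, score):
    """Return a list of constraint violations (empty if all fields are valid)."""
    errors = []
    if decision not in DECISIONS:
        errors.append("decision must be accepted, rejected, or undecided")
    if device not in DEVICES:
        errors.append("unknown device value")
    if gender not in GENDERS:
        errors.append("unknown gender value")
    if not (0.0 <= score <= 1.0):
        errors.append("verification-score must be in [0, 1]")
    return errors
```

   For example, check_result_fields("accepted", "cellular-phone",
   "male", 0.85) returns an empty list, while an out-of-range score or
   an unlisted device value produces a violation for each bad field.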
 
 Full Copyright Statement
 
    Copyright (C) The Internet Society (2004). This document is subject
    to the rights, licenses and restrictions contained in BCP 78, and
    except as set forth therein, the authors retain all their rights.
 
    This document and the information contained herein are provided on
    an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
    REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE
    INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
    THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
    WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
 
 Intellectual Property
 
    The IETF takes no position regarding the validity or scope of any
    Intellectual Property Rights or other rights that might be claimed
    to pertain to the implementation or use of the technology described
    in this document or the extent to which any license under such
    rights might or might not be available; nor does it represent that
    it has made any independent effort to identify any such rights.
    Information on the procedures with respect to rights in RFC
    documents can be found in BCP 78 and BCP 79.
 
    Copies of IPR disclosures made to the IETF Secretariat and any
    assurances of licenses to be made available, or the result of an
    attempt made to obtain a general license or permission for the use
    of such proprietary rights by implementers or users of this
    specification can be obtained from the IETF on-line IPR repository
    at http://www.ietf.org/ipr.
 
    The IETF invites any interested party to bring to its attention any
    copyrights, patents or patent applications, or other proprietary
    rights that may cover technology that may be required to implement
    this standard.  Please address the information to the IETF at ietf-
    ipr@ietf.org.
 
 
 Contributors
      Daniel C. Burnett
      Nuance Communications
      1005 Hamilton Court
      Menlo Park, CA 94025-1422
      USA
 
      Email:  burnett@nuance.com
 
 
      Pierre Forgues
      Nuance Communications Ltd.
      111 Duke Street
      Suite 4100
      Montreal, Quebec
      Canada H3C 2M1
 
      Email:  forgues@nuance.com
 
      Charles Galles
      Intervoice, Inc.
      17811 Waterview Parkway
      Dallas, Texas 75252
 
      Email:  charles.galles@intervoice.com
 
      Klaus Reifenrath
     ScanSoft, Inc.
      Guldensporenpark 32
      Building D
      9820 Merelbeke
      Belgium
 
      Email: klaus.reifenrath@scansoft.com
 
 Acknowledgements
 
    Andre Gillet (Nuance Communications)
    Andrew Hunt (ScanSoft)
    Aaron Kneiss (ScanSoft)
    Brian Eberman (ScanSoft)
    Corey Stohs (Cisco Systems Inc)
    Dan Burnett (Nuance Communications)
    Jeff Kusnitz (IBM Corp)
    Ganesh N Ramaswamy (IBM Corp)
    Klaus Reifenrath (ScanSoft)
    Kristian Finlator (ScanSoft)
    Martin Dragomirecky (Cisco Systems Inc)
    Peter Monaco (Nuance Communications)
    Pierre Forgues (Nuance Communications)
    Ran Zilca (IBM Corp)
    Suresh Kaliannan (Cisco Systems Inc.)
    Skip Cave (Intervoice Inc)
   Magnus Westerlund (Ericsson Inc.)
    Thomas Gal (LumenVox Inc.)
 
Editor's Address
 
    Saravanan Shanmugham
    Cisco Systems Inc.
   170 W Tasman Drive
   San Jose, CA 95134
 
    Email: sarvi@cisco.com
 
 
 
 
 
 