Network Working Group R. Gellens
Internet-Draft Core Technology Consulting
Intended status: Standards Track January 9, 2018
Expires: July 13, 2018

Negotiating Human Language in Real-Time Communications


Abstract

Users have various human (natural) language needs, abilities, and preferences regarding spoken, written, and signed languages. This document adds new SDP media-level attributes so that when establishing interactive communication sessions ("calls"), it is possible to negotiate (communicate and match) the caller's language and media needs with the capabilities of the called party. This is especially important for emergency calls, where a call can be handled by a call taker capable of communicating with the user, or a translator or relay operator can be bridged into the call during setup; it applies to non-emergency calls as well (for example, a call to a company call center).

This document describes the need and a solution using new Session Description Protocol (SDP) media attributes.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on July 13, 2018.

Copyright Notice

Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
1.1. Applicability
2. Terminology
3. Desired Semantics
4. The existing 'lang' attribute
5. Solution
5.1. The 'hlang-send' and 'hlang-recv' attributes
5.2. No Language in Common
5.3. Usage Notes
5.4. Examples
6. IANA Considerations
6.1. att-field Table in SDP Parameters
6.2. Warn-Codes Sub-Registry of SIP Parameters
7. Security Considerations
8. Privacy Considerations
9. Changes from Previous Versions
10. Contributors
11. Acknowledgments
12. References
12.1. Normative References
12.2. Informational References
Author's Address

1. Introduction

A mutually comprehensible language is helpful for human communication. This document addresses the negotiation of human (natural) language and media modality (spoken, signed, written) in real-time communications. A companion document [RFC8255] addresses language selection in email.

Unless the caller and callee know each other or there is contextual or out-of-band information from which the language(s) and media modalities can be determined, there is a need for spoken, signed, or written languages to be negotiated based on the caller's needs and the callee's capabilities. This need applies to both emergency and non-emergency calls. For example, it is helpful for a caller to a company call center or a Public Safety Answering Point (PSAP) to be able to indicate preferred signed, written, and/or spoken languages, and for the callee to be able to indicate its capabilities in this area, allowing the call to proceed using the language(s) and media forms supported by both.

For various reasons, including the ability to establish multiple streams using different media (e.g., voice, text, video), it makes sense to use a per-stream negotiation mechanism. Utilizing the Session Description Protocol (SDP) [RFC4566] enables the solution described in this document to be applied to all interactive communications negotiated using SDP, in emergency as well as non-emergency scenarios.

By treating language as another SDP attribute that is negotiated along with other aspects of a media stream, it becomes possible to accommodate a range of users' needs and called party facilities. For example, some users may be able to speak several languages, but have a preference. Some called parties may support some of those languages internally but require the use of a translation service for others, or may have a limited number of call takers able to use certain languages. Another example would be a user who is able to speak but is deaf or hard-of-hearing and desires a voice stream to send spoken language plus a text stream to receive written language. Making language a media attribute allows the standard session negotiation mechanism to handle this by providing the information and mechanism for the endpoints to make appropriate decisions.

The term "negotiation" is used here rather than "indication" because human language (spoken/written/signed) can be negotiated in the same manner as media (audio/text/video) and codecs. For example, if we think of a user calling an airline reservation center, the user may have a set of languages he or she speaks, with perhaps preferences for one or a few, while the airline reservation center will support a fixed set of languages. Negotiation should select the user's most preferred language that is supported by the call center. Both sides should be aware of which language was negotiated.

In the offer/answer model used here, the offer contains a set of languages per media (and direction) that the offerer is capable of using, and the answer contains one language per media (and direction) that the answerer will support. Supporting languages and/or modalities can require taking extra steps, such as having a call handled by an agent who speaks a requested language and/or with the ability to use a requested modality, or bridging external translation or relay resources into the call, etc. The answer indicates the media and languages that the answerer is committing to support (possibly after additional steps have been taken). This model also provides knowledge so both ends know what has been negotiated. Note that additional steps required to support the indicated languages or modalities may or may not be in place in time for any early media.
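As an illustration of this offer/answer selection, consider the following sketch in Python (the function name and data shapes are illustrative assumptions, not part of any protocol): the answerer walks the offer's preference-ordered list and commits to the first language it supports.

```python
def select_language(offered, supported):
    """Return the most-preferred offered language tag that the answerer
    supports, or None when there is no language in common."""
    supported_set = {tag.lower() for tag in supported}
    for tag in offered:  # the offered list is in preference order
        if tag.lower() in supported_set:
            return tag
    return None  # no match; the answerer may reject or still proceed

# A caller offering Spanish, then Basque, then English, reaching a call
# center that supports English and Spanish, negotiates Spanish:
print(select_language(["es", "eu", "en"], ["en", "es"]))  # -> es
```

Because both sides see the answer, both ends know which language was selected, which is the property the model above requires.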

Since this is a protocol mechanism, the user equipment (UE) needs to know the user's preferred languages. While this document does not address how clients determine this, reasonable techniques include a configuration mechanism whose default is the language of the user interface. In some cases, a UE could tie language and media preferences together, such as a preference for a video stream using a signed language and/or a text or audio stream using a written/spoken language.

1.1. Applicability

Within this document, it is assumed that the negotiating endpoints have already been determined, so that a per-stream negotiation based on the Session Description Protocol (SDP) can proceed.

When setting up interactive communications sessions it is necessary to route signaling messages to the appropriate endpoint(s). This document does not address the problem of language-based routing.

2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC8174] when, and only when, they appear in all capitals, as shown here.

3. Desired Semantics

The desired solution is a media attribute (preferably per direction) that may be used within an offer to indicate the preferred language(s) of each (direction of a) media stream, and within an answer to indicate the accepted language. The semantics of including multiple languages for a media stream within an offer is that the languages are listed in order of preference.

(Negotiating multiple simultaneous languages within a media stream is out of scope of this document.)

4. The existing 'lang' attribute

RFC 4566 [RFC4566] specifies an attribute 'lang' which appears similar to what is needed here, but is not sufficiently specific or flexible for the needs of this document. In addition, 'lang' is not mentioned in [RFC3264] and there are no known implementations in SIP. Further, it is useful to be able to specify language per direction (sending and receiving). This document therefore defines two new attributes.

5. Solution

An SDP attribute (per direction) seems the natural choice to negotiate human (natural) language of an interactive media stream, using the language tags of BCP 47 [RFC5646].

5.1. The 'hlang-send' and 'hlang-recv' attributes

This document defines two media-level attributes starting with 'hlang' (short for "human interactive language") to negotiate which human language is selected for use in each interactive media stream. (Note that not all streams will necessarily be used.) There are two attributes, one ending in "-send" and the other in "-recv", registered in Section 6. Each can appear in offers and answers for media streams.

In an offer, the 'hlang-send' value is a list of one or more language(s) the offerer is willing to use when sending using the media, and the 'hlang-recv' value is a list of one or more language(s) the offerer is willing to use when receiving using the media. The list of languages is in preference order (first is most preferred). When a media is intended for interactive communication using a language in one direction only (e.g., a user with difficulty speaking but able to hear who indicates a desire to send using text and receive using audio), either hlang-send or hlang-recv MAY be omitted. When a media is not primarily intended for language (for example, a video or audio stream intended for background only) both SHOULD be omitted. Otherwise, both SHOULD have the same value. Note that specifying different languages for each direction (as opposed to the same or essentially the same language in different modalities) can make it difficult to complete the call (e.g., specifying a desire to send audio in Hungarian and receive audio in Portuguese).
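A small sketch of how an offerer's client might emit these attribute lines for one media description (the helper name is an illustrative assumption; language lists are given most-preferred first, and an unused direction is simply omitted):

```python
def hlang_attributes(send=None, recv=None):
    """Return the a=hlang-send/-recv lines for one "m=" section of an
    offer; omit a direction by leaving its argument as None."""
    lines = []
    if send:
        lines.append("a=hlang-send:" + " ".join(send))
    if recv:
        lines.append("a=hlang-recv:" + " ".join(recv))
    return lines

# An audio stream offering spoken Spanish, Basque, or English both ways:
for line in hlang_attributes(send=["es", "eu", "en"],
                             recv=["es", "eu", "en"]):
    print(line)
```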

In an answer, 'hlang-send' is the language the answerer will send if using the media for language (which in most cases is one of the languages in the offer's 'hlang-recv'), and 'hlang-recv' is the language the answerer expects to receive if using the media for language (which in most cases is one of the languages in the offer's 'hlang-send').

In an offer, each value MUST be a list of one or more language tags per BCP 47 [RFC5646], separated by white space. In an answer, each value MUST be one language tag per BCP 47. BCP 47 describes mechanisms for matching language tags. Note that [RFC5646] Section 4.1 advises to "tag content wisely" and not include unnecessary subtags.
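On the receiving side, a value can be split on white space and given a loose well-formedness check. The sketch below uses a deliberately simplified tag shape (a primary subtag of letters followed by optional alphanumeric subtags) and is not full BCP 47 validation, for which the ABNF of [RFC5646] should be consulted:

```python
import re

# Simplified tag shape: 2-8 letters, then optional subtags of 1-8
# alphanumerics each. Real BCP 47 tags have more internal structure.
TAG_RE = re.compile(r"^[A-Za-z]{2,8}(-[A-Za-z0-9]{1,8})*$")

def parse_hlang_value(value):
    """Split a white-space-separated hlang value into its language tags,
    rejecting anything that does not even look like a tag."""
    tags = value.split()
    if not tags:
        raise ValueError("empty hlang value")
    for tag in tags:
        if not TAG_RE.match(tag):
            raise ValueError("not a plausible language tag: %r" % tag)
    return tags

print(parse_hlang_value("es eu en"))  # -> ['es', 'eu', 'en']
```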

When placing an emergency call, and in any other case where the language cannot be inferred from context, in an offer each media stream primarily intended for human language communication SHOULD specify both (or for asymmetrical language use, one of) the 'hlang-send' and 'hlang-recv' attributes.

Clients acting on behalf of end users are expected to set one or both 'hlang-send' and 'hlang-recv' attributes on each media stream primarily intended for human communication in an offer when placing an outgoing session, and either ignore or take into consideration the attributes when receiving incoming calls, based on local configuration and capabilities. Systems acting on behalf of call centers and PSAPs are expected to take into account the attributes when processing inbound calls.

Note that media and language negotiation might result in more media streams being accepted than are needed by the users (e.g., if more preferred and less preferred combinations of media and language are all accepted). This is not a problem.

5.2. No Language in Common

A consideration when negotiating language is whether the call proceeds or fails if the callee does not support any of the languages requested by the caller. This document does not mandate either behavior.

If the call is rejected due to lack of any languages in common, it is suggested to use SIP response code 488 (Not Acceptable Here) or 606 (Not Acceptable) [RFC3261] and include a Warning header field [RFC3261] in the SIP response. The Warning header field contains a warning code of [TBD: IANA VALUE, e.g., 308] and a warning text indicating that there are no mutually-supported languages; the text SHOULD also contain the supported languages and media.
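For instance, such a Warning header field could be constructed along these lines (a sketch only: 308 is the draft's placeholder warn-code, not an assigned value, and the warn-agent host name is a hypothetical example):

```python
def no_common_language_warning(supported, warn_code=308,
                               agent="gw.example.com"):
    """Build a SIP Warning header field for a call rejected because no
    offered language is supported, listing what the callee does support."""
    text = ("Incompatible language specification: Requested languages "
            "not supported. Supported languages and media are: "
            + ", ".join(supported))
    return 'Warning: %d %s "%s"' % (warn_code, agent, text)

print(no_common_language_warning(["en (audio)", "ase (video)"]))
```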


5.3. Usage Notes

A sign-language tag with a video media stream is interpreted as an indication for sign language in the video stream. A non-sign-language tag with a text media stream is interpreted as an indication for written language in the text stream. A non-sign-language tag with an audio media stream is interpreted as an indication for spoken language in the audio stream.

This document does not define any other use for language tags in video media (such as how to indicate visible captions in the video stream).

In the IANA registry of language subtags per BCP 47 [RFC5646], a language subtag with a Type field of "extlang" combined with a Prefix field value of "sgn" indicates a sign-language tag. The absence of such an "sgn" prefix indicates a non-sign-language tag.
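This classification can be sketched as follows, assuming a small hard-coded excerpt of the registry (a real implementation would parse the registry file itself; the subtags shown are "sgn"-prefixed extlang entries such as American, Argentine, British, French, and German Sign Languages):

```python
SGN_EXTLANGS = {"ase", "aed", "bfi", "fsl", "gsg"}  # partial excerpt only

def is_sign_language(tag):
    """True if the tag denotes a sign language under the registry rule."""
    subtags = tag.lower().split("-")
    if subtags[0] == "sgn":            # e.g. "sgn-ase"
        return True
    # extlang subtags such as "ase" are also registered as primary
    # language subtags, so a bare "ase" is a sign-language tag as well
    return subtags[0] in SGN_EXTLANGS

print(is_sign_language("ase"), is_sign_language("en"))  # -> True False
```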

This document does not define the use of sign-language tags in text or audio media.

This document does not define the use of language tags in media other than interactive streams of audio, video, and text (such as "message" or "application"). Such use could be supported by future work or by application agreement.

5.4. Examples

Some examples are shown below. For clarity, only the most directly relevant portions of the SDP block are shown.

An offer or answer indicating spoken English both ways:

   m=audio 49170 RTP/AVP 0
   a=hlang-send:en
   a=hlang-recv:en

An offer indicating American Sign Language both ways:

   m=video 51372 RTP/AVP 31
   a=hlang-send:ase
   a=hlang-recv:ase

An offer requesting spoken Spanish both ways (most preferred), spoken Basque both ways (second preference), or spoken English both ways (third preference):

   m=audio 49170 RTP/AVP 0
   a=hlang-send:es eu en
   a=hlang-recv:es eu en

An answer to the above offer indicating spoken Spanish both ways:

   m=audio 49170 RTP/AVP 0
   a=hlang-send:es
   a=hlang-recv:es

An alternative answer to the above offer indicating spoken Italian both ways (as the callee does not support any of the requested languages but chose to proceed with the call):

   m=audio 49170 RTP/AVP 0
   a=hlang-send:it
   a=hlang-recv:it

An offer or answer indicating written Greek both ways:

   m=text 45020 RTP/AVP 103
   a=hlang-send:el
   a=hlang-recv:el

An offer requesting the following media streams: video for the caller to send using Argentine Sign Language, text for the caller to send using written Spanish (most preferred) or written Portuguese, audio for the caller to receive spoken Spanish (most preferred) or spoken Portuguese:

   m=video 51372 RTP/AVP 31
   a=hlang-send:aed
   m=text 45020 RTP/AVP 103
   a=hlang-send:es pt
   m=audio 49170 RTP/AVP 0
   a=hlang-recv:es pt

An answer for the above offer, indicating text in which the callee will receive written Spanish, and audio in which the callee will send spoken Spanish. The answering party had no video capability:

   m=video 0 RTP/AVP 31
   m=text 45020 RTP/AVP 103
   a=hlang-recv:es
   m=audio 49170 RTP/AVP 0
   a=hlang-send:es

An offer requesting the following media streams: text for the caller to send using written English (most preferred) or written Spanish, audio for the caller to receive spoken English (most preferred) or spoken Spanish, supplemental video:

   m=text 45020 RTP/AVP 103
   a=hlang-send:en es
   m=audio 49170 RTP/AVP 0
   a=hlang-recv:en es
   m=video 51372 RTP/AVP 31

An answer for the above offer, indicating text in which the callee will receive written Spanish, audio in which the callee will send spoken Spanish, and supplemental video:

   m=text 45020 RTP/AVP 103
   a=hlang-recv:es
   m=audio 49170 RTP/AVP 0
   a=hlang-send:es
   m=video 51372 RTP/AVP 31

Note that, even though the examples show the same (or essentially the same) language being used in both directions (even when the modality differs), there is no requirement that this be the case. However, in practice, doing so is likely to increase the chances of successful matching.

6. IANA Considerations

6.1. att-field Table in SDP Parameters

IANA is kindly requested to add two entries to the 'att-field (media level only)' table of the SDP parameters registry:

The first entry is for hlang-recv:

Attribute Name: hlang-recv
Contact Name: Randall Gellens
Contact Email Address:
Attribute Value: hlang-value
Attribute Syntax:

   hlang-value = hlang-offv / hlang-ansv
                   ; hlang-offv used in offers
                   ; hlang-ansv used in answers
   hlang-offv  = Language-Tag *( SP Language-Tag )
                   ; Language-Tag as defined in BCP 47
   SP          = 1*" " ; one or more space (%x20) characters
   hlang-ansv  = Language-Tag

Attribute Semantics: Described in Section 5.1 of TBD: THIS DOCUMENT
Usage Level: media
Mux Category: NORMAL
Charset Dependent: See Section 5.1 of TBD: THIS DOCUMENT
O/A Procedures: See Section 5.1 of TBD: THIS DOCUMENT

The second entry is for hlang-send:

Attribute Name: hlang-send
Contact Name: Randall Gellens
Contact Email Address:
Attribute Value: hlang-value
Attribute Syntax:

   hlang-value = hlang-offv / hlang-ansv
                   ; as defined above for hlang-recv

Attribute Semantics: Described in Section 5.1 of TBD: THIS DOCUMENT
Usage Level: media
Mux Category: NORMAL
Charset Dependent: See Section 5.1 of TBD: THIS DOCUMENT
O/A Procedures: See Section 5.1 of TBD: THIS DOCUMENT

6.2. Warn-Codes Sub-Registry of SIP Parameters

IANA is requested to add a new value in the warn-codes sub-registry of SIP parameters in the 300 through 329 range that is allocated for indicating problems with keywords in the session description. The reference is to this document. The warn text is "Incompatible language specification: Requested languages not supported. Supported languages and media are: [list of supported languages and media]."

7. Security Considerations

The Security Considerations of BCP 47 [RFC5646] apply here. In addition, if the 'hlang-send' or 'hlang-recv' values are altered or deleted en route, the session could fail or languages incomprehensible to the caller could be selected; however, this is also a risk if any SDP parameters are modified en route.

8. Privacy Considerations

Language and media information can suggest a user's nationality, background, abilities, disabilities, etc.

9. Changes from Previous Versions

RFC EDITOR: Please remove this section prior to publication.

9.1. Changes from draft-ietf-slim-...-04 to draft-ietf-slim-...-06

9.2. Changes from draft-ietf-slim-...-02 to draft-ietf-slim-...-03

9.3. Changes from draft-ietf-slim-...-01 to draft-ietf-slim-...-02

9.4. Changes from draft-ietf-slim-...-00 to draft-ietf-slim-...-01

9.5. Changes from draft-gellens-slim-...-03 to draft-ietf-slim-...-00

9.6. Changes from draft-gellens-slim-...-02 to draft-gellens-slim-...-03

9.7. Changes from draft-gellens-slim-...-01 to draft-gellens-slim-...-02

9.8. Changes from draft-gellens-slim-...-00 to draft-gellens-slim-...-01

9.9. Changes from draft-gellens-mmusic-...-02 to draft-gellens-slim-...-00

9.10. Changes from draft-gellens-mmusic-...-01 to -02

9.11. Changes from draft-gellens-mmusic-...-00 to -01

9.12. Changes from draft-gellens-...-02 to draft-gellens-mmusic-...-00

9.13. Changes from draft-gellens-...-01 to -02

9.14. Changes from draft-gellens-...-00 to -01

10. Contributors

Gunnar Hellstrom deserves special mention for his reviews and assistance.

11. Acknowledgments

Many thanks to Bernard Aboba, Harald Alvestrand, Flemming Andreasen, Francois Audet, Eric Burger, Keith Drage, Doug Ewell, Christian Groves, Andrew Hutton, Hadriel Kaplan, Ari Keranen, John Klensin, Mirja Kuhlewind, Paul Kyzivat, John Levine, Alexey Melnikov, Addison Phillips, James Polk, Eric Rescorla, Pete Resnick, Alvaro Retana, Natasha Rooney, Brian Rosen, Peter Saint-Andre, and Dale Worley for reviews, corrections, suggestions, and participating in in-person and email discussions.

12. References

12.1. Normative References

[RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A., Peterson, J., Sparks, R., Handley, M., and E. Schooler, "SIP: Session Initiation Protocol", RFC 3261, DOI 10.17487/RFC3261, June 2002.
[RFC4566] Handley, M., Jacobson, V., and C. Perkins, "SDP: Session Description Protocol", RFC 4566, DOI 10.17487/RFC4566, July 2006.
[RFC5646] Phillips, A. and M. Davis, "Tags for Identifying Languages", BCP 47, RFC 5646, DOI 10.17487/RFC5646, September 2009.
[RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017.

12.2. Informational References

[RFC3264] Rosenberg, J. and H. Schulzrinne, "An Offer/Answer Model with Session Description Protocol (SDP)", RFC 3264, DOI 10.17487/RFC3264, June 2002.
[RFC8255] Tomkinson, N. and N. Borenstein, "Multiple Language Content Type", RFC 8255, DOI 10.17487/RFC8255, October 2017.

Author's Address

Randall Gellens
Core Technology Consulting
EMail:
URI: