Network Working Group                                           R. Trace
Internet-Draft                                                A. Foresti
Expires: December 17, 2012                                    S. Singhal
                                                              O. Mazahir
                                                              H. Nielsen
                                                               B. Raymor
                                                                  R. Rao
                                                           G. Montenegro
                                                               Microsoft
                                                           June 15, 2012

                          HTTP Speed+Mobility
               draft-montenegro-httpbis-speed-mobility-02
Abstract
This document describes "HTTP Speed+Mobility," a proposal for HTTP
2.0 that emphasizes performance improvements and security while at
the same time accounting for the important needs of mobile devices
and applications. The proposal starts from both the Google SPDY
protocol and the work the IETF has done around WebSockets. The
proposal is not a final product but rather is intended to form a
baseline for working group discussion.

Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on December 17, 2012.
Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

   1.  Introduction
     1.1.  Overview
       1.1.1.  Maintain existing HTTP semantics
       1.1.2.  Layered Architecture
       1.1.3.  Use of Existing standards
       1.1.4.  Client is in control of content
       1.1.5.  Network Cost and Power
     1.2.  Definitions
     1.3.  Protocol Overview
       1.3.1.  Connection Management
     1.4.  Proxies
   2.  Negotiation
   3.  Session layer and Framing
     3.1.  Opening and Closing Sessions
     3.2.  Origin of Multiplexed Content
     3.3.  WebSocket Framing Protocol
     3.4.  Closing HTTP Speed+Mobility Sessions
   4.  Streams Layer
     4.1.  Stream Management
       4.1.1.  Stream Creation
       4.1.2.  Stream Data Exchange
       4.1.3.  Stream Half-Close
       4.1.4.  Stream Close
       4.1.5.  Error Handling
     4.2.  Stream Control Frames
       4.2.1.  SYN_STREAM
       4.2.2.  SYN_REPLY
       4.2.3.  RST_STREAM
       4.2.4.  CREDIT_UPDATE
     4.3.  Data Frames
     4.4.  Name/Value Header Block
     4.5.  Compression
   5.  Flow Control
     5.1.  Stream Priority
     5.2.  Credit Control
     5.3.  Credit Control Declaration
     5.4.  Credit Balance Updates
     5.5.  Turning Credit Control Off for a Stream
     5.6.  Increasing and Decreasing Stream Credit
     5.7.  Implementation Guidance and Considerations
   6.  General Notes
     6.1.  HTTP Layering
     6.2.  Relationship to SPDY
     6.3.  Server Push
     6.4.  Open Issues
       6.4.1.  Flow Control
       6.4.2.  Streams Issues
   7.  Acknowledgements
   8.  References
     8.1.  Normative References
     8.2.  Informative References
   Authors' Addresses
1. Introduction

Over the course of its almost two decades of existence, the HTTP
protocol has enabled the web to experience phenomenal growth and
change the world in more ways than its creators might have imagined.
HTTP's designers got many design principles right, including
simplicity and robustness. These characteristics allow billions of
devices to support and use HTTP in a multitude of communication
scenarios. However, it is time to improve upon HTTP 1.1.
Improving HTTP starts with speed. Web sites have become complex. A
single site could comprise hundreds of different elements (from
images to videos to ads to news feeds and so on) that need to get
retrieved by the client before the page can be fully displayed.
Users expect all of this to happen securely and instantly across all
their devices and applications. In many scenarios, HTTP fails to
meet these expectations. Speed improvements need to apply not only
for browsers but also for apps. More and more, apps are how people
access web services, in addition to their browser. A key attribute
of mobile applications is that they may access only a subset of the
web site's data, relying on local application logic to process the
data and create a presentation and interaction layer.
The design of HTTP--how every application and service on the web
communicates today--can positively impact user experience,
operational and environmental costs, and even the battery life of the
devices you carry around. Improving HTTP should also ensure great
battery life and low network cost on constrained devices. People and
their apps should stay in control of network access. Finally, to
achieve rapid adoption, HTTP 2.0 needs to retain as much
compatibility as possible with the existing Web infrastructure. Done
right, HTTP 2.0 can help people connect their devices and
applications to the Internet fast, reliably, and securely over a
number of diverse networks, with great battery life and low cost.
At the core of the speed problem is that HTTP does not allow for out-
of-order or interleaved responses. This requires the establishment
of multiple TCP connections for concurrency (pipelining is formally
supported by the protocol but is seldom implemented in practice).
The overhead in terms of additional roundtrips and dealing with TCP
slow start causes a significant performance penalty. This leads to a
variety of issues, such as additional round trips for connection
setup, slow-start delays, and potentially connection rationing: the
client may not be able to dedicate many connections to any single
server, and the server needs to protect itself from denial-of-service
attacks. As a result, users are often disappointed in the perceived
performance of websites.
Improving HTTP should also make mobile apps and devices better. When
HTTP was first developed, mobile communication was virtually non-
existent, but today the mobile Web is an integral and fast-growing
part of the Web. The different conditions on mobile communications
require rethinking of how protocols work. For example, people want
their mobile devices to have better battery life. HTTP 2.0 can help
decrease the power consumption of network access. Mobile devices
also give people a choice of networks with different costs and
bandwidth limits. Embedded sensors and clients face similar issues.

Mobile considerations require that HTTP be network efficient while
simultaneously being sensitive to the limited power, computation, and
connectivity capabilities of the client device. To support mobile
devices, HTTP needs to be able to "scale down" to allow clients to
control the level of data received, the format of that data, and even
the timing of that data.
1.1. Overview

This draft describes our proposal for "HTTP Speed+Mobility". The
approach targets broad HTTP applicability while emphasizing
performance improvements and accounting for the important needs of
mobile devices and applications.
The proposal's intended outcome is a protocol that can be quickly and
widely adopted in the industry, and start delivering real value to
end users without imposing undue burden on hardware and software
vendors, as well as administrators of legacy equipment. Implementers
should also find it easy to understand due to the familiarity of some
of its key concepts, which are aligned with innovations that were
adopted in recent IETF specifications like WebSockets. Most
important, the proposal seeks to establish a baseline for working
group discussion on the potential improvements that would define HTTP
2.0.
This HTTP Speed+Mobility proposal adheres to the following
principles:

o Maintain existing HTTP semantics. The request-response nature of
the HTTP protocol and semantics of its messages as they traverse
diverse networks must be preserved. Any deviation from this
principle would represent a major extension to HTTP and should be
treated as such (see section 2.1 in [I-D.iab-extension-recs]).

o Maintain the integrity of the layered architecture.

o Use existing standards when available to make it easy for the
protocol to work with the current web infrastructure including
switches, routers, proxies, load balancers, security systems, DNS
servers, and NATs. For example, the proposal reuses the
WebSockets handshake and framing mechanism to establish a
bidirectional link that is compatible with existing proxies and
connection models.
These principles are described in more detail below.

1.1.1. Maintain existing HTTP semantics

HTTP at its core is a simple request-response protocol. The working
group has clearly stated that it is a goal to preserve the semantics
of HTTP. Thus, we believe that the request-response nature of the
HTTP protocol must be preserved. The core HTTP 2.0 protocol should
focus on optimizing these HTTP semantics, while improving the
transport via a new multiplexing layer. Additional capabilities that
introduce new communication models like unrequested responses should
be treated in a different specification and explored separately from
this proposal.
1.1.2. Layered Architecture

HTTP relies on an in-order, reliable transport to ensure delivery of
application data. TCP has almost exclusively provided the reliable,
ordered delivery of HTTP messages from one computer to another since
its inception. TCP accounts for adverse network conditions such as
congestion, or other unpredictable network behavior. Any HTTP 2.0
proposal should leverage the reliable transport and not attempt to
replicate functions generally accepted as addressed by other layers.
Conversely, any proposals for enhancing functionality typically
provided by other layers of the networking stack (e.g., congestion
control provided by the transport layer) should be brought to the
attention of, and discussed in, proper IETF forums (e.g., TCPM WG).
During the HTTPbis charter proposal discussion, the security and
applications area directors suggested an additional paragraph on
security work and authentication. If new work is undertaken in this
regard, it should be done by existing IETF security groups in this
area.
1.1.3. Use of Existing standards

HTTP 2.0 should prefer models that are compatible with the existing
Internet and, where possible, reuse existing protocol mechanisms.
One primary example is in protocol negotiation where the WG should
avoid a proliferation of methods, and instead use the HTTP 1.1
Upgrade header similar to how it is used in the WebSocket protocol.
This will help HTTP 2.0 to be readily deployed on the existing
Internet, and maintain compatibility with existing web sites and
client environments (such as some educational networks).
1.1.4. Client is in control of content

HTTP is used in a vast array of scenarios and a variety of network
architectures. There is no "one size fits all" deployment of HTTP.
For example, at times it may not be optimal to use compression in
certain environments. For constrained sensors from the "Internet of
things" scenario, resources may be at a premium. Having a high
performance but flexible HTTP 2.0 solution will enable
interoperability for a wider variety of scenarios. There also may be
aspects of security that are not appropriate for all implementations.
Encryption must be optional to allow HTTP 2.0 to meet certain
scenarios and regulations. HTTP 2.0 is a universal replacement for
HTTP 1.X, and there are some instances in which imposing TLS is not
required (or allowed). For example, a sizable portion of HTTP
requests and responses actually happen in "backend" scenarios, in
which the messages are transported over physically trusted
infrastructure between endpoints owned by the same organization.
Furthermore, a "random thought of the day" web service or a sensor
spewing out a temperature reading every few seconds may choose not to
use TLS. In such situations, it may not be worth the additional
expense of deploying TLS, nor might it be desirable to hinder caching
of the content by encrypting it end-to-end.
Because of the variety of clients on the Internet and the number of
connection scenarios, clients are in the best position to define what
content is downloaded. The browser or app has firsthand information
on what the app is currently doing and what data is already locally
available. For example, most of the browsers in use today have
powerful caches that should be leveraged to store web elements that
change infrequently.
In addition to browsers, apps increasingly originate HTTP requests.
The content retrieved by apps is usually different from that
downloaded by browsers; in fact, multiple apps may access the same
content for different purposes. Each app may access different
subsets of the server content, with different priorities, and in
different sequences according to their own rendering requirements and
user interaction models. The server cannot always know the needs or
intents of a particular application.
HTTP 2.0 proposals should not force the browser or app to download
content that has not been requested and that is already cached.
Furthermore, the client must have the option to decline unwanted or
unneeded content. Clients need the ability to inform the server
about cached elements that do not need to be downloaded. Ideally
this feedback from the client to the server would allow for
incremental approval of content to enable an efficient "push"
extension to deliver the right content, with the right security and
with the right formatting.
1.1.5. Network Cost and Power
times, bandwidth cost or battery life may be the deciding factor.
HTTP 2.0 must allow developers to optimize for the specific
constraints of their problem space (which might change over time)
rather than imposing a monolithic solution to a generic problem. For
example, server push is a good optimization for many scenarios where
content updates to web pages revisited over time are infrequent, the
client has plenty of bandwidth as well as the needed processing power
to either handle the updates instantly, or cache them for later
processing. On the other hand, it is not likely to be appropriate in
situations where content is being transmitted over a costed link.
Neither will it be when the client is running several applications
that use network bandwidth concurrently, and bursty, server-initiated
content transmissions would interfere with their smooth operation.
Rather than forcing developers to choose between using all the
features of HTTP 2.0 or sticking with HTTP 1.1, it would be better to
provide mechanisms for developers to fine tune the capabilities of
HTTP 2.0 to a specific set of requirements.
In summary, the goals of higher speed, lower cost and lower power may
often be aligned. For instance, having less data sent on the wire
will allow pages to load faster, allow the radio to power down sooner
and consume less bandwidth. But given the variety of the scenarios
where HTTP 2.0 will be used, this will not always be the case. For
example, a device whose battery is about to run out, whose
communication monetary costs are prohibitive, or whose cache is near
capacity can provide a better user experience by disabling a
capability that consumes bandwidth with potentially unwanted content,
while continuing to use other optimizations available in HTTP 2.0.
Accordingly, the working group should consider power and cost as well
as speed.
1.2. Definitions

client: A program that establishes HTTP Speed+Mobility connections
for the purpose of sending requests.

connection: A TCP layer virtual circuit established between two
programs for the purpose of communication.

frame: A header-prefixed sequence of bytes sent over a HTTP Speed+
Mobility WebSocket.

message: The basic unit of HTTP communication, consisting of a
structured sequence of octets matching the syntax defined in
[RFC2616] and transmitted via a connection.

request: An HTTP request message, as defined in [RFC2616].

response: An HTTP response message, as defined in [RFC2616].

server: An application program that accepts connections in order to
service requests by sending back responses. Any given program may
be capable of being both a client and a server; our use of these
terms refers only to the role being performed by the program for a
particular connection, rather than to the program's capabilities
in general. Likewise, any server may act as an origin server,
proxy, gateway, or tunnel, switching behavior based on the nature
of each request.

origin server: As defined in [RFC2616] section 1.3, a server on
which a given resource resides or is to be created.

origin: As defined in [RFC6454] section 3.2, a representation of a
security principal. Roughly speaking, two URIs are part of the
same origin if they have the same scheme, host, and port.

user agent: The client that initiates a request. These are often
browsers, editors, spiders (web-traversing robots), or other end
user tools.

proxy: An intermediary program that acts as both a server and a
client for the purpose of making requests on behalf of other
clients. Requests are serviced internally or by passing them on,
with possible translation, to other servers. A proxy MUST
implement both the client and server requirements of this
specification. A "transparent proxy" is a proxy that does not
modify the request or response beyond what is required for proxy
authentication and identification. A "non-transparent proxy" is a
proxy that modifies the request or response in order to provide
some added service to the user agent, such as group annotation
services, media type transformation, protocol reduction, or
anonymity filtering. Except where either transparent or non-
transparent behavior is explicitly stated, the HTTP proxy
requirements apply to both types of proxies.

endpoint: Either the client or server of a connection.

receiver: Endpoint receiving network data in a HTTP Speed+Mobility
session. This can be either the client or the server.

sender: Endpoint sending network data in a HTTP Speed+Mobility
session. This can be either the client or the server.

session: A single channel between a client and server over which
there will be multiplexed HTTP requests and responses.

session error: An error on the HTTP Speed+Mobility session.

stream: A bi-directional flow of bytes across a virtual channel
within a HTTP Speed+Mobility session.

stream error: An error on an individual stream.
1.3. Protocol Overview

HTTP Speed+Mobility is a proposal for an HTTP 2.0 transport protocol
that multiplexes HTTP content in order to improve transmission and
make efficient use of TCP connections.
This protocol comprises four parts:
1. Negotiation: Setting up a session (handshake) uses the WebSocket
Upgrade with additional headers.

2. Session Layer: This defines maintenance and framing of a HTTP
Speed+Mobility session and is defined as a WebSocket extension.

3. Multiplexing Layer: This defines the framing and maintenance for
multiplexing HTTP requests over a single HTTP Speed+Mobility
session. This proposal borrows from the SPDY
[I-D.mbelshe-httpbis-spdy] stream semantics and is defined as a
WebSocket extension.

4. HTTP layering: This proposal borrows from the SPDY
[I-D.mbelshe-httpbis-spdy] proposal.
The WebSocket protocol [RFC6455] provides a standards-based model for
establishing a bi-directional session between a client and a server
across the web. The RFC describes the following:

o A mechanism to create a session between a client and a server
(Upgrade) and optionally secure the session using TLS

o A light-weight framing model to send data asynchronously and bi-
directionally within the session

o A set of control messages to keep the session alive (PING-PONG),
and to close the session (CLOSE)

o An extension model to optionally layer semantics such as
multiplexing and compression
In keeping with the principle to leverage existing standards where
possible, this HTTP Speed+Mobility proposal uses WebSockets as the
session layer between the client and the server. Using WebSockets as
a session layer has some advantages. First, we do not have to invent
a new set of control messages, since we can use the ones defined by
the WebSocket standard. Second, clients and servers have the
flexibility to decide whether they want to use TLS or not.
Using WebSockets also makes it easy to enable multiplexing within the
session. In fact, this proposal takes the concept of streams and the
stream related control messages, and models them as a WebSocket
extension.
Furthermore, this proposal specifies a simple receive buffer
management scheme based on a credit control mechanism.
Finally, this proposal regards server push as being outside of the
scope of HTTP 2.0 itself, because it is not in line with existing
HTTP semantics. Having said that, given the benefits of populating
the client cache proactively, we believe that the Working Group
should create a specification separate from HTTP 2.0 to define such a
solution.
1.3.1. Connection Management

By default, and because it reuses the WebSocket handshake, HTTP
Speed+Mobility uses port 80 for unsecured connections and port 443
for connections tunneled over Transport Layer Security (TLS)
[RFC2818].
Clients SHOULD attempt to use a single HTTP Speed+Mobility connection
to a given origin server. The server MUST be able to handle multiple
connections from the same client and MUST be able to handle
concurrent establishments and disconnects.
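As a non-normative illustration of this connection policy, the sketch
below (Python; the SessionPool class and the open_session callable are
hypothetical, not part of this proposal) shows one way a client might
reuse a single session per origin while tolerating concurrent opens
and closes.

   import threading

   # Hypothetical client-side bookkeeping: at most one live session
   # per origin.
   class SessionPool:
       def __init__(self, open_session):
           self._open_session = open_session  # callable: origin -> session
           self._sessions = {}                # origin -> session
           self._lock = threading.Lock()

       def get(self, origin):
           """Return the existing session for this origin, or open one."""
           with self._lock:
               session = self._sessions.get(origin)
               if session is None or getattr(session, "closed", False):
                   session = self._open_session(origin)
                   self._sessions[origin] = session
               return session

       def close_all(self):
           """Client shutdown: release every session still open."""
           with self._lock:
               for session in self._sessions.values():
                   session.close()
               self._sessions.clear()

A server, by contrast, has to accept that several such pools (or
several clients) may connect at once, which is why the text above
requires it to handle multiple concurrent connections.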
1.4. Proxies
Given the realities of the existing Internet, proxies are an
important consideration for any HTTP 2.0 proposal. There are many
cases where
the presence of a proxy (both explicit and transparent) will impede
negotiation of any new protocol. In existing environments, the only
reliable method of traversing proxies with non-HTTP 1.x
communications is by tunneling over TLS / SSL.
However, given the importance of HTTP 2.0 and the desire to continue
to use proxies, we believe that proxies will eventually adopt HTTP
2.0 and will support communication without TLS, although such
adoption may take a long time.
WebSockets provides the best of both environments. WebSockets may be
negotiated over a secure tunnel to traverse an incompatible proxy or
may be used in the clear, when appropriate, with a proxy that
understands HTTP 2.0.
2. Negotiation
HTTP Speed+Mobility negotiates a session using the WebSockets
handshake based on HTTP Upgrade. To advertise support for the HTTP
2.0 extension, the client request MUST include the "x-httpsm"
extension token in the |Sec-WebSocket-Extensions| header in its
opening handshake:
GET /default.htm HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade, X-InitialCreditBalance
Origin: http://example.com
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: x-httpsm
X-InitialCreditBalance: 131072
To accept the HTTP 2.0 extension requested by the client, the server
MUST include the "x-httpsm" extension token in the |Sec-WebSocket-
Extensions| header in its opening handshake. Otherwise, the client
MUST fail the WebSocket connection:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade, X-InitialCreditBalance
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
Sec-WebSocket-Extensions: x-httpsm
X-InitialCreditBalance: 65536
The Sec-WebSocket-Extensions header defines the version of the
protocol. For incompatible future revisions to the protocol, the
extension name will need to be revised.
This draft defines a new header to declare the initial credit balance
for endpoints that need to use flow control. This header is defined
in Section 5.3 below.
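Purely as an illustration, and with the caveat that the normative
credit-control rules are defined in Section 5, the following sketch
assumes that the declared balance counts payload bytes a sender may
still transmit before a CREDIT_UPDATE replenishes it; the
CreditBalance class is hypothetical.

   # Hypothetical credit tracking seeded from X-InitialCreditBalance.
   class CreditBalance:
       def __init__(self, initial_credit):
           self.credit = initial_credit   # bytes the peer allows us to send

       def consume(self, nbytes):
           """Charge an outgoing data frame against the balance."""
           if nbytes > self.credit:
               raise RuntimeError("would exceed the advertised credit")
           self.credit -= nbytes

       def replenish(self, nbytes):
           """Apply a CREDIT_UPDATE received from the peer."""
           self.credit += nbytes

   # The value from the upgrade exchange seeds the balance, e.g.:
   balance = CreditBalance(initial_credit=65536)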
HTTP Speed+Mobility may be extended to allow for new negotiated
options by adding new headers to the upgrade exchange.
When the negotiation of HTTP Speed+Mobility is successful, the server
MUST respond to the GET request with a SYN_REPLY message with a
Stream ID of 1, containing the response to the original GET request.
Any required data frames for this response MUST be identified with
the stream ID of 1. For more information on SYN_REPLY see
Section 4.2.2 below.
For more details on WebSockets, refer to [RFC6455].
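To make the negotiation concrete, here is a non-normative sketch of
the server-side check: it accepts the upgrade only if the client
offered the "x-httpsm" extension token, and echoes the token back
together with the server's own credit declaration. The helper name
and dictionary-based header handling are illustrative; computing
Sec-WebSocket-Accept is done as in [RFC6455] and omitted here.

   # Non-normative sketch of the server side of the upgrade negotiation.
   def negotiate_httpsm(request_headers):
       """Return response headers accepting x-httpsm, or None to refuse."""
       offered = request_headers.get("Sec-WebSocket-Extensions", "")
       tokens = [token.strip() for token in offered.split(",")]
       if "x-httpsm" not in tokens:
           return None                     # client did not offer the extension
       return {
           "Upgrade": "websocket",
           "Connection": "Upgrade, X-InitialCreditBalance",
           # Sec-WebSocket-Accept (derived from Sec-WebSocket-Key) omitted.
           "Sec-WebSocket-Extensions": "x-httpsm",
           "X-InitialCreditBalance": "65536",  # server declaration (Sec. 5.3)
       }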
3. Session layer and Framing
At the end of the WebSockets upgrade as described above, the bi-
directional WebSocket between the client and the server becomes the
new session layer. The session layer for HTTP Speed+Mobility uses
the WebSocket base framing protocol for both data frames and control
frames.
3.1. Opening and Closing Sessions
One of the motivations for a multiplexing solution is to have a more
efficient use of the TCP transport. Implementations should minimize
the number of connections to reduce the impact of TCP slow start and
to avoid latency from creating new connections. Ideally there will
be a single session between a client and a server. An implementation
SHOULD use this session to multiplex the maximum amount of data
between the two endpoints. Implementations MAY create multiple
simultaneous sessions between two endpoints.
For best performance, it is expected that a client will not close an
open TCP connection until it is certain that it no longer has use
for it (e.g., the user closes the HTTP app or navigates away from all
web pages referencing a connection), or until the server closes the
connection. Servers SHOULD leave connections open for as long as
possible, but MAY terminate idle connections if necessary.
3.2. Origin of Multiplexed Content
A single session MAY contain HTTP content from multiple origins. A
client implementation SHOULD only multiplex requests destined to
multiple origins into a single connection under the following
conditions:
o Anonymous / Clear: For sessions that do not require authentication
or SSL/TLS, implementations MAY multiplex content to multiple
origins in the same session. This is the primary use case for
sending requests to a Proxy.
o Basic / Digest Authentication: For sessions to an origin server
that requires per-request authentication, implementations MAY
multiplex content to multiple origins.
o Multi-Part Authentication (e.g., Kerberos): To be done.
o For a secure connection, if the client provides a Server Name
Indication (SNI) extension during the TLS handshake then all
subsequent SYN_STREAM messages (see Section 4.2.1) on that
connection MUST specify a Host specification that exactly matches
the server name provided in the Server Name Indication (SNI)
(Section 3.1 of [RFC4366]). If the server receives a SYN_STREAM
with a non-matching Host specification then it MUST respond with a
400 Bad Request. If the client receives a SYN_STREAM with a non-
matching Host specification then it MUST issue a stream error.
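The SNI consistency rule in the last bullet can be reduced to a
simple comparison on both sides. The sketch below is non-normative;
the function name and return values are illustrative only.

   # Illustrative check of the SNI / Host consistency rule for secure
   # sessions.
   def check_syn_stream_host(tls_sni, host_header, is_server):
       """Compare a SYN_STREAM Host specification against the TLS SNI."""
       host = host_header.split(":")[0].strip().lower()
       if tls_sni is None or host == tls_sni.strip().lower():
           return "ok"
       # Mismatch: the server answers 400 Bad Request, the client issues
       # a stream error, as required above.
       return "400 Bad Request" if is_server else "stream error"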
3.3. WebSocket Framing Protocol
This specification defines the x-httpsm WebSocket extension to enable
multiplexing of HTTP content within a single WebSocket session. Once
the upgrade is accepted, the client and server can exchange framed
messages using the WebSockets framing protocol. The standard
WebSocket frame from [RFC6455] is included for reference.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-------+-+-------------+-------------------------------+
|F|R|R|R| opcode|M| Payload len |    Extended payload length    |
|I|S|S|S|  (4)  |A|     (7)     |             (16/64)           |
|N|V|V|V|       |S|             |   (if payload len==126/127)   |
| |1|2|3|       |K|             |                               |
+-+-+-+-+-------+-+-------------+ - - - - - - - - - - - - - - - +
|     Extended payload length continued, if payload len == 127  |
+ - - - - - - - - - - - - - - - +-------------------------------+
|                               |Masking-key, if MASK set to 1  |
+-------------------------------+-------------------------------+
| Masking-key (continued)       |          Payload Data         |
+-------------------------------- - - - - - - - - - - - - - - - +
:                     Payload Data continued ...                :
+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
|                     Payload Data continued ...                |
+---------------------------------------------------------------+
The payload data for this extension is multiplexed streams as defined
in Section 4 below.
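For readers less familiar with the base framing, the following
non-normative sketch decodes the fixed portion of the frame header
shown above (FIN bit, opcode, MASK bit, payload length). It assumes
the buffer already holds the complete header; handling of the
extension payload carrying multiplexed streams is deliberately left
out.

   import struct

   # Non-normative decoder for the fixed WebSocket frame header
   # ([RFC6455]).
   def parse_frame_header(buf):
       """Return (fin, opcode, masked, payload_len, header_size)."""
       b0, b1 = buf[0], buf[1]
       fin = bool(b0 & 0x80)               # FIN bit
       opcode = b0 & 0x0F                  # 4-bit opcode
       masked = bool(b1 & 0x80)            # MASK bit
       length = b1 & 0x7F                  # 7-bit payload length
       offset = 2
       if length == 126:                   # 16-bit extended payload length
           length = struct.unpack_from("!H", buf, offset)[0]
           offset += 2
       elif length == 127:                 # 64-bit extended payload length
           length = struct.unpack_from("!Q", buf, offset)[0]
           offset += 8
       if masked:
           offset += 4                     # 32-bit masking key follows
       return fin, opcode, masked, length, offset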
The x-httpsm extension defines 4 extension opcodes to establish and
maintain streams:

(opcode TBD) - SYN_STREAM: See Section 4.2.1.

(opcode TBD) - SYN_REPLY: See Section 4.2.2.

(opcode TBD) - RST_STREAM: See Section 4.2.3.

(opcode TBD) - CREDIT_UPDATE: See Section 4.2.4.
3.4. Closing HTTP Speed+Mobility Sessions
Closing a session uses the standard WebSocket close handshake as
defined in [RFC6455].
For best performance, it is expected that clients will not close open
TCP connections until the user closes the HTTP app or navigates away
from all web pages referencing a connection, or until the server
closes the connection. Servers are encouraged to leave connections
open for as long as possible, but can terminate idle connections if
necessary.
4. Streams Layer
Once the session is established, HTTP Speed+Mobility allows creating
streams to send and receive HTTP data. The stream operations and
semantics are borrowed from SPDY. As noted earlier, WebSockets is
the protocol used for framing data that is sent and received within
the session (and consequently each stream). Stream operations (such
as SYN_STREAM) are implemented as a WebSocket extension.
4.1. Stream Management
4.1.1. Stream Creation

A stream is created by sending a SYN_STREAM (Section 4.2.1). The
first stream is created by the GET request that initiates the upgrade
to HTTP Speed+Mobility and will have a stream ID of 1. Each
subsequent SYN_STREAM sent by the client will increment the stream ID
by 1. Stream IDs do not wrap; when a client or server cannot create
a new stream ID without exceeding a 32 bit value, it MUST NOT create
a new stream.

If a server receives a SYN_STREAM with a stream ID which is less than
any previously received SYN_STREAM, it MUST issue a session error
(Section 4.1.5.1) with the status PROTOCOL_ERROR.

It is a protocol error to send two SYN_STREAMs with the same
stream ID. If a recipient receives a second SYN_STREAM for the same
stream, it MUST issue a stream error (Section 4.1.5.2) with the
status code PROTOCOL_ERROR.

Upon receipt of a SYN_STREAM, the recipient can reject the stream by
sending a stream error (Section 4.1.5.2) with the error code
REFUSED_STREAM. Note, however, that the creating endpoint may have
already sent additional frames for that stream which cannot be
immediately stopped.

Once the stream is created, the creator may immediately send data
frames for that stream, without needing to wait for the recipient to
acknowledge.

Both endpoints can send data on the stream.
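The creation rules above (monotonically increasing IDs, the 32-bit
ceiling, and the two error cases) can be captured in a few lines of
bookkeeping. The sketch below is non-normative; the class and method
names are not part of the protocol.

   # Non-normative bookkeeping for the stream creation rules.
   MAX_STREAM_ID = 2**32 - 1

   class StreamTable:
       def __init__(self):
           self.next_id = 1               # stream 1 is the upgrade GET itself
           self.highest_seen = 0
           self.open_streams = set()

       def allocate(self):
           """Client side: next stream ID, or None if IDs are exhausted."""
           if self.next_id > MAX_STREAM_ID:
               return None                # IDs do not wrap: MUST NOT create
           stream_id, self.next_id = self.next_id, self.next_id + 1
           self.open_streams.add(stream_id)
           return stream_id

       def on_syn_stream(self, stream_id):
           """Server side: validate an incoming SYN_STREAM."""
           if stream_id < self.highest_seen:
               return "session error: PROTOCOL_ERROR"   # out-of-order ID
           if stream_id in self.open_streams:
               return "stream error: PROTOCOL_ERROR"    # duplicate ID
           self.highest_seen = stream_id
           self.open_streams.add(stream_id)
           return "ok"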
4.1.2. Stream Data Exchange

Once a stream is created, it can be used to send arbitrary amounts of
data. Generally this means that a series of data frames will be sent
on the stream until a frame containing the FLAG_FIN flag is set. The
FLAG_FIN can be set on a SYN_STREAM (Section 4.2.1), SYN_REPLY
(Section 4.2.2), or a data (Section 4.3) frame. Once the FLAG_FIN
has been sent, the stream is considered to be half-closed.
4.1.3. Stream Half-Close
When one side of the stream sends a frame with the FLAG_FIN flag set,
the stream is half-closed from that endpoint. The sender of the
FLAG_FIN MUST NOT send further frames on that stream. When both
sides have half-closed, the stream is closed.

If an endpoint receives a data frame after the stream is half-closed
from the sender (e.g. the endpoint has already received a prior frame
for the stream with the FIN flag set), it MUST send a RST_STREAM to
the sender with the status STREAM_ALREADY_CLOSED.
4.1.4. Stream Close
There are 3 ways that streams can be terminated:

Normal termination: Normal stream termination occurs when both
sender and recipient have half-closed the stream by sending a
FLAG_FIN.

Abrupt termination: Either the client or server can send a
RST_STREAM at any time. A RST_STREAM contains an error code to
indicate the reason for failure. When a RST_STREAM is sent from
the stream originator, it indicates a failure to complete the
stream and that no further data will be sent on the stream. When
a RST_STREAM is sent from the stream recipient, the sender, upon
receipt, should stop sending any data on the stream. The stream
recipient should be aware that there is a race between data
already in transit from the sender and the time the RST_STREAM is
received. See Stream Error Handling (Section 4.1.5.2).

TCP connection teardown: If the TCP connection is torn down while
un-closed streams exist, then the endpoint must assume that the
stream was abnormally interrupted and may be incomplete.

If an endpoint receives a data frame after the stream is closed, it
must send a RST_STREAM to the sender with the status PROTOCOL_ERROR.
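The half-close and close rules of Sections 4.1.2 through 4.1.4 amount
to a small state machine per stream. The following non-normative
sketch (illustrative names only) tracks that state and returns the
RST_STREAM status the text calls for:

   # Illustrative only: per-stream lifecycle per Sections 4.1.2 - 4.1.4.
   class Stream:
       def __init__(self, stream_id):
           self.stream_id = stream_id
           self.sent_fin = False       # we sent a frame with FLAG_FIN
           self.received_fin = False   # peer sent a frame with FLAG_FIN
           self.reset = False          # RST_STREAM sent or received

       @property
       def closed(self):
           # Closed when both sides have half-closed, or after a RST_STREAM.
           return self.reset or (self.sent_fin and self.received_fin)

       def send_frame(self, fin=False):
           if self.sent_fin or self.reset:
               raise RuntimeError("MUST NOT send after our FLAG_FIN or a reset")
           if fin:
               self.sent_fin = True    # now half-closed from our side

       def on_data_received(self, fin=False):
           """Return None, or the RST_STREAM status required by the text."""
           if self.closed:
               return "PROTOCOL_ERROR"          # data on a closed stream
           if self.received_fin:
               return "STREAM_ALREADY_CLOSED"   # data after the peer's FIN
           if fin:
               self.received_fin = True
           return None

   s = Stream(1)
   s.send_frame(fin=True)              # half-closed from our side
   assert s.on_data_received(fin=True) is None
   assert s.closed
   assert s.on_data_received() == "PROTOCOL_ERROR"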
4.1.5. Error Handling
The framing layer has only two types of errors, and they are always
handled consistently. Any reference in this specification to "issue
a session error" refers to Section 4.1.5.1. Any reference to "issue
a stream error" refers to Section 4.1.5.2.
4.1.5.1. Session Error Handling
A session error is any error which prevents further processing of the
session layer or which corrupts the session compression state. When
a session error occurs, the endpoint encountering the error MUST send
a WebSockets CLOSE [RFC6455].
4.1.5.2. Stream Error Handling
A stream error is an error related to a specific stream-id which does
not affect processing of other streams at the session layer. Upon a
stream error, the endpoint MUST send a RST_STREAM (Section 4.2.3)
frame which contains the stream id of the stream where the error
occurred and the error status which caused the error. After sending
the RST_STREAM, the stream is closed to the sending endpoint. After
sending the RST_STREAM, if the sender receives any frames other than
a RST_STREAM for that stream id, it will result in sending additional
RST_STREAM frames. An endpoint MUST NOT send a RST_STREAM in
response to an RST_STREAM, as doing so would lead to RST_STREAM
loops. Sending a RST_STREAM does not cause the HTTP Speed+Mobility
session to be closed.
If an endpoint has multiple RST_STREAM frames to send in succession
for the same stream-id and the same error code, it MAY coalesce them
into a single RST_STREAM frame.
4.2. Stream Control Frames
In Speed+Mobility four new opcodes are introduced:
o SYN_STREAM
o SYN_REPLY
o RST_STREAM
o CREDIT_UPDATE
In addition, all frames in HTTP Speed+Mobility include a 32-bit
stream identifier in the Extension data.
4.2.1. SYN_STREAM

The SYN_STREAM control frame is used to initiate a new stream and
send the headers for a request. SYN_STREAM is specified as the
extension opcode in the WebSocket frame. The SYN_STREAM Extension
data is carried in the WebSocket payload:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+---------------------------------------------------------------+
|                           Stream-ID                           |
+---------------------------------------------------------------+
|     Flags     | Pri |                 Unused                  |
+---------------------------------------------------------------+
|                         Length of name                        |
+---------------------------------------------------------------+
|                              Name                             |
+---------------------------------------------------------------+
|                        Length of value                        |
+---------------------------------------------------------------+
|                             Value                             |
+---------------------------------------------------------------+
|                           (repeats)                           |
+---------------------------------------------------------------+
Flags: Flags related to this frame. Valid flags are:
0x01 = FLAG_FIN: marks this frame as the last frame to be
transmitted on this stream and puts the sender in the half-closed
(Section 4.1.3) state.
0x02 = FLAG_NO_HEADER_COMPRESSION: indicates the Name/Value header
block is not compressed.

Priority: A 3-bit priority (Section 5.1) field.

Unused: 21 bits of unused space, reserved for future use.

Name/Value Header Block: A set of name/value pairs carried as part of
the SYN_STREAM. See Section 4.4.
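For illustration, the fixed part of the SYN_STREAM Extension data
shown above can be packed as two 32-bit words followed by the
Name/Value Header Block. This is a non-normative sketch; network
byte order and the exact placement of Flags/Pri within the second
word are assumptions read off the diagram, and the header block is
taken as already-serialized bytes (Section 4.4):

   # Illustrative only: packing the SYN_STREAM Extension data.
   import struct

   FLAG_FIN = 0x01
   FLAG_NO_HEADER_COMPRESSION = 0x02

   def pack_syn_stream(stream_id, flags, priority, header_block):
       """header_block: already-serialized Name/Value Header Block bytes."""
       if not 0 <= priority <= 7:
           raise ValueError("Pri is a 3-bit field")
       # Assumed layout: Flags in the top 8 bits, Pri in the next 3 bits,
       # the remaining 21 bits unused; big-endian byte order.
       second_word = (flags << 24) | (priority << 21)
       return struct.pack("!II", stream_id, second_word) + header_block

   frame = pack_syn_stream(3, FLAG_FIN, priority=7, header_block=b"")
   assert len(frame) == 8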
4.2.2. SYN_REPLY

The SYN_REPLY control frame indicates the acceptance of a stream
creation by the recipient of a SYN_STREAM control frame. SYN_REPLY
is specified as the extension opcode in the WebSocket frame. The
SYN_REPLY Extension data is carried in the WebSocket payload:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+---------------------------------------------------------------+
|                           Stream-ID                           |
+---------------+-----------------------------------------------+
|     Flags     |                    Unused                     |
+---------------+-----------------------------------------------+
|                         Length of name                        |
+---------------------------------------------------------------+
|                              Name                             |
+---------------------------------------------------------------+
|                        Length of value                        |
+---------------------------------------------------------------+
|                             Value                             |
+---------------------------------------------------------------+
|                           (repeats)                           |
+---------------------------------------------------------------+
Flags: Flags related to this frame. Valid flags are:

0x01 = FLAG_FIN: marks this frame as the last frame to be
transmitted on this stream and puts the sender in the half-closed
(Section 4.1.3) state.

0x02 = FLAG_NO_HEADER_COMPRESSION: indicates the Name/Value header
block is not compressed.
Name/Value Header Block: A set of name/value pairs carried as part of
the SYN_REPLY. See Section 4.4.
4.2.3. RST_STREAM

The RST_STREAM control frame allows for abnormal termination of a
stream. When sent by the creator of a stream, it indicates the
creator wishes to cancel the stream. When sent by the recipient of a
stream, it indicates an error or that the recipient did not want to
accept the stream, so the stream should be closed. RST_STREAM is
specified as the extension opcode in the WebSocket frame. The
RST_STREAM Extension data is carried in the WebSocket payload:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+---------------------------------------------------------------+
|                           Stream-ID                           |
+-------------------------------+-------------------------------+
|          Status Code          |
+-------------------------------+
Status code (16 bits): An indicator for why the stream is being
terminated. The following status codes are defined:
1 - PROTOCOL_ERROR: This is a generic error, and should only be used
if a more specific error is not available.

2 - INVALID_STREAM: This is returned when a frame is received for a
stream which is not active.

3 - REFUSED_STREAM: Indicates that the stream was refused before any
processing has been done on the stream.

5 - CANCEL: Used by the creator of a stream to indicate that the
stream is no longer needed.

6 - INTERNAL_ERROR: This is a generic error which can be used when
the implementation has internally failed, not due to anything in
the protocol.

7 - FLOW_CONTROL_ERROR: The endpoint detected that its peer violated
the flow control protocol.

8 - STREAM_IN_USE: The endpoint received a SYN_REPLY for a stream
already open.

9 - STREAM_ALREADY_CLOSED: The endpoint received a data or SYN_REPLY
frame for a stream which is half closed.

Note: 0 is not a valid status code for a RST_STREAM.

After receiving a RST_STREAM on a stream, the recipient must not send
additional frames for that stream, and the stream moves into the
closed state.
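A non-normative sketch of the RST_STREAM Extension data and status
codes above (byte order is assumed to be network order, which the
diagram does not state):

   # Illustrative only: RST_STREAM status codes and Extension data packing.
   import enum
   import struct

   class StatusCode(enum.IntEnum):
       PROTOCOL_ERROR = 1
       INVALID_STREAM = 2
       REFUSED_STREAM = 3
       CANCEL = 5
       INTERNAL_ERROR = 6
       FLOW_CONTROL_ERROR = 7
       STREAM_IN_USE = 8
       STREAM_ALREADY_CLOSED = 9

   def pack_rst_stream(stream_id, status):
       if status == 0:
           raise ValueError("0 is not a valid RST_STREAM status code")
       # 32-bit Stream-ID followed by a 16-bit Status Code.
       return struct.pack("!IH", stream_id, StatusCode(status))

   assert pack_rst_stream(5, StatusCode.REFUSED_STREAM) == \
       b"\x00\x00\x00\x05\x00\x03"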
4.2.4. CREDIT_UPDATE

The CREDIT_UPDATE control frame is used by an endpoint that demands
credit control to grant its peer additional credit for sending data
on a stream (see Section 5.2). The CREDIT_UPDATE Extension data is
carried in the WebSocket payload:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+---------------------------------------------------------------+
|                           Stream-ID                           |
+---------------------------------------------------------------+
|                        Credit-Addition                        |
+---------------------------------------------------------------+

Credit-Addition: The value, in bytes, that the recipient must add to
the stream's credit balance. The value ranges from 0 to 4294967295
(0xffffffff) inclusive. 4294967295 (0xffffffff) is a special value
that designates "infinite" (see Section 5.5).
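The CREDIT_UPDATE Extension data is small enough to show directly.
This non-normative sketch assumes network byte order:

   # Illustrative only: CREDIT_UPDATE Extension data packing and parsing.
   import struct

   INFINITE_CREDIT = 0xFFFFFFFF   # special value: no further credit control

   def pack_credit_update(stream_id, credit_addition):
       if not 0 <= credit_addition <= 0xFFFFFFFF:
           raise ValueError("Credit-Addition is a 32-bit value")
       return struct.pack("!II", stream_id, credit_addition)

   def parse_credit_update(payload):
       return struct.unpack("!II", payload)   # (stream_id, credit_addition)

   assert parse_credit_update(pack_credit_update(1, 65536)) == (1, 65536)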
4.3. Data Frames

Stream data frames are modeled as WebSocket binary data frames with
extension data:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+---------------------------------------------------------------+
|                           Stream-ID                           |
+---------------+-----------------------------------------------+
|     Flags     |
+---------------+
Flags: Flags related to this frame. Valid flags are:

0x01 = FLAG_FIN: signifies that this frame represents the last frame
to be transmitted on this stream. See Stream Close
(Section 4.1.4).

Data frame processing requirements:

If an endpoint receives a data frame for a stream-id which is not
open, it MUST issue a stream error (Section 4.1.5.2) with the
error code INVALID_STREAM for the stream-id.

If the endpoint which created the stream receives a data frame
before receiving a SYN_REPLY on that stream, it is a protocol
error, and the recipient MUST issue a stream error
(Section 4.1.5.2) with the status code PROTOCOL_ERROR for the
stream-id.
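The processing requirements above reduce to two checks on each
incoming data frame. A non-normative sketch (the sets it takes are
illustrative bookkeeping, not protocol state defined here):

   # Illustrative only: data frame checks per Section 4.3.
   def check_incoming_data(stream_id, open_streams, created_by_us,
                           syn_reply_seen):
       """Return None if acceptable, else the required stream error status."""
       if stream_id not in open_streams:
           return "INVALID_STREAM"
       if stream_id in created_by_us and stream_id not in syn_reply_seen:
           return "PROTOCOL_ERROR"    # data before SYN_REPLY on our own stream
       return None

   assert check_incoming_data(9, {1, 3}, {1}, {1}) == "INVALID_STREAM"
   assert check_incoming_data(3, {1, 3}, {3}, set()) == "PROTOCOL_ERROR"
   assert check_incoming_data(1, {1, 3}, {1}, {1}) is None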
4.4. Name/Value Header Block

The Name/Value Header Block is found in the SYN_STREAM and SYN_REPLY
control frames, and shares a common format:

+------------------------------------+
| Number of Name/Value pairs (int32) |
+------------------------------------+
|       Length of name (int32)       |
+------------------------------------+
|            Name (string)           |
+------------------------------------+
|       Length of value (int32)      |
+------------------------------------+
|           Value (string)           |
+------------------------------------+
|             (repeats)              |
Number of Name/Value pairs: The number of repeating name/value pairs
following this field.

List of Name/Value pairs:

Length of Name: a 32-bit value containing the number of octets in
the name field.

Name: 0 or more octets, 8-bit sequences of data, excluding 0.

Length of Value: a 32-bit value containing the number of octets in
the value field.

Value: 0 or more octets, 8-bit sequences of data, excluding 0.

Each header name must have at least one value. Header names are
encoded using the US-ASCII character set and must be all lower case.
The length of each name must be greater than zero. A recipient of a
zero-length name MUST issue a stream error (Section 4.1.5.2) with the
status code PROTOCOL_ERROR for the stream-id.

Duplicate header names are not allowed. To send two identically
named headers, send a header with two values, where the values are
separated by a single NUL (0) byte. A header value can either be
empty (e.g. the length is zero) or it can contain multiple, NUL-
separated values, each with length greater than zero. The value
never starts nor ends with a NUL character. Recipients of illegal
value fields MUST issue a stream error (Section 4.1.5.2) with the
status code PROTOCOL_ERROR for the stream-id.
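The Name/Value Header Block rules above can be illustrated with a
small serializer. This is a non-normative sketch: 32-bit big-endian
length fields and ASCII/UTF-8 encodings are assumptions, and the
function name is illustrative:

   # Illustrative only: serializing an (uncompressed) Name/Value Header Block.
   import struct

   def serialize_header_block(headers):
       """headers: dict mapping header name -> list of values."""
       out = [struct.pack("!I", len(headers))]   # Number of Name/Value pairs
       for name, values in headers.items():
           if not name:
               raise ValueError("zero-length header names are not allowed")
           name_bytes = name.lower().encode("ascii")   # names are lower case
           # Multiple values are joined with a single NUL byte.
           value_bytes = b"\x00".join(v.encode("utf-8") for v in values)
           out.append(struct.pack("!I", len(name_bytes)) + name_bytes)
           out.append(struct.pack("!I", len(value_bytes)) + value_bytes)
       return b"".join(out)

   block = serialize_header_block({"accept": ["text/html", "application/xml"]})
   assert block.startswith(struct.pack("!I", 1))   # one name/value pair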
4.5. Compression

The Name/Value Header Block is a section of the SYN_STREAM and
SYN_REPLY frames used to carry header meta-data. This block MAY be
compressed using zlib compression. Within this specification, any
reference to 'zlib' is referring to the ZLIB Compressed Data Format
Specification Version 3.3 as part of [RFC1950].
For each header compression instance, the initial state is
initialized using the dictionary specified in
[I-D.mbelshe-httpbis-spdy] section 2.6.10.1.

Implementations MUST support header compression as specified in
[I-D.mbelshe-httpbis-spdy], with the following exception.

Throughout this document, header compression is enabled by default.
However, either the client or the server MAY opt out of using
compression when transmitting headers. This opt-out model is
described with the added flags in the SYN_STREAM (Section 4.2.1) and
SYN_REPLY (Section 4.2.2) frames.
5. Flow Control

5.1. Stream Priority

Each stream has a 3-bit priority field where 7 represents the highest
priority and 0 represents the lowest priority. The stream priority
is specified in the SYN_STREAM and cannot be re-specified for the
lifetime of the stream.
When selecting data to send, the sender SHOULD select the data from
the highest priority stream that has data ready for transmission. If
multiple streams of the same priority have data ready for
transmission then the sender SHOULD be fair in sending data between
those streams. See Section 5.7.
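A non-normative sketch of the selection rule above, picking from the
highest priority that has data ready and rotating among streams of
equal priority (names are illustrative):

   # Illustrative only: choosing the next stream to send from (Section 5.1).
   from collections import deque

   def next_stream(ready_by_priority):
       """ready_by_priority: dict priority (0-7) -> deque of ready stream IDs."""
       for priority in range(7, -1, -1):        # 7 is the highest priority
           queue = ready_by_priority.get(priority)
           if queue:
               stream_id = queue.popleft()
               queue.append(stream_id)          # simple rotation for fairness
               return stream_id
       return None

   ready = {7: deque([2]), 3: deque([4, 6])}
   assert next_stream(ready) == 2               # priority 7 preempts priority 3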
5.2. Credit Control
Credit control is used by memory-sensitive endpoints to advertise
their limited buffering capability. It prevents the sender from
sending so much data in a given time interval that the recipient's
buffers overflow.
An endpoint MAY demand that its peer honor credit control. An
endpoint MUST honor the credit control if the peer demands it.
Section 5.3 explains how an endpoint demands credit control.
Credit control is directional and is demanded by an endpoint to
control how much its peer can send.
An endpoint that is honoring its peer's credit control will maintain
a credit balance, for each stream, that controls how much data the
endpoint can send to its peer. The credit balance is always in units
of bytes. The demanding endpoint will send CREDIT_UPDATE messages,
for a given stream, to update how much data the honoring peer is
allowed to send. The credit balance applies to the data payload of
data frames. Credit control is applied on an HTTP S+M per-hop basis.
5.3. Credit Control Declaration
During the HTTP S+M handshake, an endpoint MAY demand that the peer
honor credit control when sending data, for all streams, on that
connection. If the endpoint does not demand credit control, then it
MUST NOT send CREDIT_UPDATE messages.
Credit control is demanded by specifying an HTTP header in the GET
that upgrades the HTTP/1.1 connection to HTTP S+M. The header name is
"X-InitialCreditBalance". The header value indicates the initial
credit balance that the peer has for sending data on streams. The
header value is a base-10 number ranging from 0 to 4294967294
(0xfffffffe), inclusive. If the header is not present then that
indicates the endpoint does not advertise credits and will never send
CREDIT_UPDATE messages on that connection.
In the following example, the client does not advertise flow control
because it wants uninhibited response throughput. Thus the server
will send data frames to the client without credit tracking.
However, the server indicates an initial credit balance of 64KB,
which means the client will keep track of the CREDIT_UPDATE messages
from the server to know when it can send data frames for a given
stream.
Upgrade Request:
GET /default.htm HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Origin: http://example.com
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: x-httpsm
Upgrade Response:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade, X-InitialCreditBalance
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
Sec-WebSocket-Extensions: x-httpsm
X-InitialCreditBalance: 65536
5.4. Credit Balance Updates
If an endpoint is honoring credit control then the endpoint MUST
maintain a credit balance for each of the streams on that connection.
The honoring endpoint MUST NOT send more data than it has credit
available.
Upon sending a data frame, the endpoint MUST decrement the credit
balance by the number of bytes in the payload of the data frame.
Upon receipt of a CREDIT_UPDATE message, the endpoint MUST increment
the credit balance by the amount indicated in the CREDIT_UPDATE
message. If the resultant sum exceeds 4294967294 (0xfffffffe) then
that is a stream error. The demanding endpoint knows the initial
credit balance and the amount of data received thus far so it MUST
NOT emit a CREDIT_UPDATE message that would cause the credit balance
to exceed 4294967294 (0xfffffffe).
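A non-normative sketch of the bookkeeping this section requires of an
endpoint that honors credit control. The class and method names are
illustrative, and treating a balance overflow as FLOW_CONTROL_ERROR
is an assumption; the text above only says it is a stream error:

   # Illustrative only: per-stream credit balance kept by an honoring endpoint.
   MAX_BALANCE = 0xFFFFFFFE           # 4294967294, the largest legal balance

   class CreditBalance:
       def __init__(self, initial_balance):
           self.balance = initial_balance

       def on_data_sent(self, nbytes):
           if nbytes > self.balance:
               raise RuntimeError("MUST NOT send more data than credit available")
           self.balance -= nbytes     # decrement by the data payload size

       def on_credit_update(self, addition):
           if self.balance + addition > MAX_BALANCE:
               return "FLOW_CONTROL_ERROR"   # assumed status for "a stream error"
           self.balance += addition
           return None

   b = CreditBalance(65536)           # X-InitialCreditBalance: 65536
   b.on_data_sent(40000)
   assert b.balance == 25536
   assert b.on_credit_update(40000) is None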
5.5. Turning Credit Control Off for a Stream
If an endpoint demanded credit control then all streams start with
the specified initial credit balance. Any time, before having sent a
frame with FLAG_FIN set on the stream, the demanding endpoint MAY
emit an "infinite" CREDIT_UPDATE message to terminate any further
credit control on that stream. Upon sending an "infinite"
CREDIT_UPDATE, the sender MUST NOT send any more CREDIT_UPDATE
messages for that stream. Upon receipt of an "infinite"
CREDIT_UPDATE message, the sender has an unlimited number of credits.
5.6. Increasing and Decreasing Stream Credit
An endpoint MAY increase the credit available to the peer by
specifying a value in the CREDIT_UPDATE message that is larger than
how much data was sent by the peer or consumed. For example, having
demanded an initial credit balance of 64KB, the endpoint may send a
CREDIT_UPDATE of 512KB for a newly created stream shortly after
creation, thus increasing the available credit for that stream to
576KB.
An endpoint MAY replenish less credit by specifying a value in the
CREDIT_UPDATE message that is smaller than how much data was actually
consumed. For example, after demanding an initial credit balance of
64KB and upon receiving 40KB of data, the endpoint may not send back
a CREDIT_UPDATE message thus forcing the available credit down to
24KB. Note that it is not possible for an endpoint to revoke credit
that it already advertised to the peer.
5.7. Implementation Guidance and Considerations
This document does not mandate a specific algorithm for selecting
data to send from amongst multiple streams. The exact logic used
will be implementation-specific. Within a priority level, the
implemented algorithm should try to be fair, to prevent one or more
streams from monopolizing the send opportunities and starving
the other streams. One such solution would be to implement a Deficit
Round Robin scheme within a priority class and have a higher priority
always preempt a lower priority.
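As one example of the Deficit Round Robin approach mentioned above,
the following non-normative sketch schedules streams within a single
priority class; the quantum and all names are illustrative:

   # Illustrative only: Deficit Round Robin among streams of equal priority.
   from collections import deque

   class DrrScheduler:
       def __init__(self, quantum=4096):
           self.quantum = quantum             # bytes of credit added per visit
           self.deficit = {}                  # stream ID -> accumulated credit
           self.active = deque()              # streams with data queued

       def enqueue(self, stream_id):
           if stream_id not in self.deficit:
               self.deficit[stream_id] = 0
               self.active.append(stream_id)

       def next_send(self, pending_bytes):
           """pending_bytes: dict stream ID -> bytes waiting. -> (id, n) or None."""
           for _ in range(len(self.active)):
               stream_id = self.active[0]
               self.active.rotate(-1)         # move to the back of the line
               self.deficit[stream_id] += self.quantum
               nbytes = min(pending_bytes.get(stream_id, 0),
                            self.deficit[stream_id])
               if nbytes > 0:
                   self.deficit[stream_id] -= nbytes
                   return stream_id, nbytes
               self.deficit[stream_id] = 0    # idle streams keep no credit
           return None

   sched = DrrScheduler()
   sched.enqueue(1)
   sched.enqueue(3)
   assert sched.next_send({1: 10000, 3: 10000}) == (1, 4096)
   assert sched.next_send({1: 5904, 3: 10000}) == (3, 4096)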
This document does not mandate a specific algorithm for deciding when
to send CREDIT_UPDATE messages. For example, a simple implementation
may always emit a CREDIT_UPDATE immediately upon consuming the
received data. Another implementation may coalesce multiple
CREDIT_UPDATE messages into one. Yet another implementation may
delay emitting a CREDIT_UPDATE message until a specific time or the
next set of received data, whichever comes first, to reduce packet
chatter.
This document does not mandate a specific algorithm for adjusting the
credit balance. For example, implementations may monitor their
memory state to determine when they can afford to increase or reduce
the credit balance. Other implementations may also interface with
the lower stack layers (e.g., TCP) to compute bandwidth-delay-
products to tune the credit balance. Some implementations (e.g.,
devices) may be very constrained and may not have any logic to tune
the credit balance.
6. General Notes
6.1. HTTP Layering
This proposal adopts the HTTP integration model used by SPDY. The
request-response semantics remain the same, as does stateless
authentication.
This proposal does not support some HTTP concepts as documented in
[RFC2616] including Chunked Encoding and HTTP trailers.
While not addressed in this proposal, stateful authentication is
something that will be addressed at a later date.
6.2. Relationship to SPDY
This proposal borrows on many of the concepts of the SPDY proposal.
There are some key areas where we differ from SPDY as outlined below.
Much of where HTTP Speed+Mobility differs from SPDY is a result of
its relationship with WebSockets, where we use the existing standard
for the following:
Negotiation: Uses the WebSockets Upgrade. This also negotiates
stream settings and version, allowing the simplification of the
stream frames.

Session Framing: Defined as a WebSockets Extension. Allows reuse of
the length and opcode data to simplify the stream frames.
Lastly, this document reduces the number of messages in the streams
layer.
6.3. Server Push
Server push is a new concept introduced in [I-D.mbelshe-httpbis-spdy]
wherein a server pushes content to a client even if the client may
not have requested it. This is an area that requires significant
working group discussion. Given the principle around maintaining
existing HTTP semantics, we are not documenting it here and would
like to see the working group document this separately from HTTP 2.0.
6.4. Open Issues
There are a number of open issues that are still under investigation.
This is by no means a complete list of discussions around HTTP 2.0
but simply the current list of issues that the authors of this
document wanted to explore further.
6.4.1. Flow Control
Describe how intermediaries may add or adjust credit control
parameters.
Deeper investigation into frame buffering requirements.
What to do if a control frame is too big. What to do in the case of
a buffer overrun.
Do we want to add the ability to change priority on a stream?
6.4.2. Streams Issues
Do we need to negotiate maximum streams in the Upgrade header?
7. Acknowledgements
Thanks to the following individuals who provided helpful feedback and
contributed to discussions on this document: Dave Thaler, Ivan
Pashov, Jitu Padhye, Jean Paoli, Michael Champion, NK Srinivas,
Sharad Agarwal and Rob Mauceri.
This document incorporates materials from [I-D.mbelshe-httpbis-spdy].
8. References
8.1. Normative References
[RFC1950] Deutsch, L. and J-L. Gailly, "ZLIB Compressed Data Format
Specification version 3.3", RFC 1950, May 1996.
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC2616] Fielding, R., Gettys, J., Mogul, J., Frystyk, H.,
Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext
Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999.
[RFC2818] Rescorla, E., "HTTP Over TLS", RFC 2818, May 2000.
[RFC4366] Blake-Wilson, S., Nystrom, M., Hopwood, D., Mikkelsen, J.,
and T. Wright, "Transport Layer Security (TLS)
Extensions", RFC 4366, April 2006.
[RFC6454] Barth, A., "The Web Origin Concept", RFC 6454,
December 2011.
[RFC6455] Fette, I. and A. Melnikov, "The WebSocket Protocol",
RFC 6455, December 2011.
[I-D.mbelshe-httpbis-spdy]
Belshe, M. and R. Peon, "SPDY Protocol",
draft-mbelshe-httpbis-spdy-00 (work in progress),
February 2012.
8.2. Informative References
[I-D.iab-extension-recs]
Carpenter, B., Aboba, B., and S. Cheshire, "Design
Considerations for Protocol Extensions",
draft-iab-extension-recs-14 (work in progress), June 2012.
Authors' Addresses
Rob Trace
Microsoft

Email: Rob.Trace@microsoft.com
Adalberto Foresti
Microsoft