HTTP                                                              K. Oku
Internet-Draft                                                    Fastly
Intended status: Standards Track                               L. Pardue
Expires: 22 July 2022                                         Cloudflare
                                                         18 January 2022

                Extensible Prioritization Scheme for HTTP
                      draft-ietf-httpbis-priority-12

Abstract

   This document describes a scheme that allows an HTTP client to
   communicate its preferences for how the upstream server prioritizes
   responses to its requests, and also allows a server to hint to a
   downstream intermediary how its responses should be prioritized when
   they are forwarded.  This document defines the Priority header field
   for communicating the initial priority in an HTTP version-independent
   manner, as well as HTTP/2 and HTTP/3 frames for reprioritizing
   responses.  These share a common format structure that is designed to
   provide future extensibility.

About This Document

   This note is to be removed before publishing as an RFC.

   Status information for this document may be found at
   https://datatracker.ietf.org/doc/draft-ietf-httpbis-priority/.

   Discussion of this document takes place on the HTTP Working Group
   mailing list (mailto:ietf-http-wg@w3.org), which is archived at
   https://lists.w3.org/Archives/Public/ietf-http-wg/.  Working Group
   information can be found at https://httpwg.org/.

   Source for this draft and an issue tracker can be found at
   https://github.com/httpwg/http-extensions/labels/priorities.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."
   This Internet-Draft will expire on 22 July 2022.

Copyright Notice

   Copyright (c) 2022 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Revised BSD License text as described in Section 4.e of the
   Trust Legal Provisions and are provided without warranty as described
   in the Revised BSD License.

Table of Contents

   1.  Introduction
     1.1.  Notational Conventions
   2.  Motivation for Replacing RFC 7540 Priorities
     2.1.  Disabling RFC 7540 Priorities
       2.1.1.  Advice when Using Extensible Priorities as the
               Alternative
   3.  Applicability of the Extensible Priority Scheme
   4.  Priority Parameters
     4.1.  Urgency
     4.2.  Incremental
     4.3.  Defining New Priority Parameters
       4.3.1.  Registration
   5.  The Priority HTTP Header Field
   6.  Reprioritization
   7.  The PRIORITY_UPDATE Frame
     7.1.  HTTP/2 PRIORITY_UPDATE Frame
     7.2.  HTTP/3 PRIORITY_UPDATE Frame
   8.  Merging Client- and Server-Driven Priority Parameters
   9.  Client Scheduling
   10. Server Scheduling
     10.1.  Intermediaries with Multiple Backend Connections
   11. Scheduling and the CONNECT Method
   12. Retransmission Scheduling
   13. Fairness
     13.1.  Coalescing Intermediaries
     13.2.  HTTP/1.x Back Ends
     13.3.  Intentional Introduction of Unfairness
   14. Why use an End-to-End Header Field?
   15. Security Considerations
   16. IANA Considerations
   17. References
     17.1.  Normative References
     17.2.  Informative References
   Appendix A.  Acknowledgements
   Appendix B.  Change Log
     B.1.  Since draft-ietf-httpbis-priority-11
     B.2.  Since draft-ietf-httpbis-priority-10
     B.3.  Since draft-ietf-httpbis-priority-09
     B.4.  Since draft-ietf-httpbis-priority-08
     B.5.  Since draft-ietf-httpbis-priority-07
     B.6.  Since draft-ietf-httpbis-priority-06
     B.7.  Since draft-ietf-httpbis-priority-05
     B.8.  Since draft-ietf-httpbis-priority-04
     B.9.  Since draft-ietf-httpbis-priority-03
     B.10. Since draft-ietf-httpbis-priority-02
     B.11. Since draft-ietf-httpbis-priority-01
     B.12. Since draft-ietf-httpbis-priority-00
     B.13. Since draft-kazuho-httpbis-priority-04
     B.14. Since draft-kazuho-httpbis-priority-03
     B.15. Since draft-kazuho-httpbis-priority-02
     B.16. Since draft-kazuho-httpbis-priority-01
     B.17. Since draft-kazuho-httpbis-priority-00
   Authors' Addresses

1.  Introduction

   It is common for representations of an HTTP [HTTP] resource to have
   relationships to one or more other resources.  Clients will often
   discover these relationships while processing a retrieved
   representation, which may lead to further retrieval requests.
   Meanwhile, the nature of the relationship determines whether the
   client is blocked from continuing to process locally available
   resources.  An example of this is visual rendering of an HTML
   document, which could be blocked by the retrieval of a CSS file that
   the document refers to.  In contrast, inline images do not block
   rendering and get drawn incrementally as the chunks of the images
   arrive.

   HTTP/2 [HTTP2] and HTTP/3 [HTTP3] support multiplexing of requests
   and responses in a single connection.  An important feature of any
   implementation of a protocol that provides multiplexing is the
   ability to prioritize the sending of information.  For example, to
   provide meaningful presentation of an HTML document at the earliest
   moment, it is important for an HTTP server to prioritize the HTTP
   responses, or the chunks of those HTTP responses, that it sends to a
   client.

   HTTP/2 and HTTP/3 servers can schedule transmission of concurrent
   response data by any means they choose.  Servers can ignore client
   priority signals and still successfully serve HTTP responses.
   However, servers that operate in ignorance of how clients issue
   requests and consume responses can cause suboptimal client
   application performance.  Priority signals allow clients to
   communicate their view of request priority.  Servers have their own
   needs that are independent of client needs, so they often combine
   priority signals with other available information in order to inform
   scheduling of response data.

   RFC 7540 [RFC7540] stream priority allowed a client to send a series
   of priority signals that communicate to the server a "priority
   tree"; the structure of this tree represents the client's preferred
   relative ordering and weighted distribution of the bandwidth among
   HTTP responses.  Servers could use these priority signals as input
   into prioritization decision-making.

   The design and implementation of RFC 7540 stream priority was
   observed to have shortcomings, explained in Section 2.  HTTP/2
   [HTTP2] has consequently deprecated the use of these stream priority
   signals.  The prioritization scheme and priority signals defined
   herein can act as a substitute for RFC 7540 stream priority.

   This document describes an extensible scheme for prioritizing HTTP
   responses that uses absolute values.  Section 4 defines priority
   parameters, which are a standardized and extensible format of
   priority information.  Section 5 defines the Priority HTTP header
   field, a protocol-version-independent and end-to-end priority
   signal.  Clients can send this header field to signal their view of
   how responses should be prioritized.  Similarly, servers behind an
   intermediary can use it to signal priority to the intermediary.
   After sending a request, a client can change their view of response
   priority (see Section 6) by sending HTTP-version-specific frames
   defined in Section 7.1 and Section 7.2.
   Header field and frame priority signals are input to a server's
   response prioritization process.  They are only a suggestion and do
   not guarantee any particular processing or transmission order for
   one response relative to any other response.  Section 10 and
   Section 12 provide consideration and guidance about how servers
   might act upon signals.

1.1.  Notational Conventions

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

   The terms Dictionary, sf-boolean, sf-dictionary, and sf-integer are
   imported from [STRUCTURED-FIELDS].

   Example HTTP requests and responses use the HTTP/2-style formatting
   from [HTTP2].

   This document uses the variable-length integer encoding from [QUIC].

   The term control stream is used to describe both the HTTP/2 stream
   with identifier 0x0 and the HTTP/3 control stream; see Section 6.2.1
   of [HTTP3].

   The term HTTP/2 priority signal is used to describe the priority
   information sent from clients to servers in HTTP/2 frames; see
   Section 5.3.2 of [HTTP2].

2.  Motivation for Replacing RFC 7540 Priorities

   RFC 7540 stream priority (see Section 5.3 of [RFC7540]) is a complex
   system where clients signal stream dependencies and weights to
   describe an unbalanced tree.  It suffered from limited deployment
   and interoperability and was deprecated in a revision of HTTP/2
   [HTTP2].  HTTP/2 retains these protocol elements in order to
   maintain wire compatibility (see Section 5.3.2 of [HTTP2]), which
   means that they might still be used even in the presence of
   alternative signaling, such as the scheme this document describes.

   Many RFC 7540 server implementations do not act on HTTP/2 priority
   signals.
   Prioritization can use information that servers have about resources
   or the order in which requests are generated.  For example, a
   server, with knowledge of an HTML document structure, might want to
   prioritize the delivery of images that are critical to user
   experience above other images.  With RFC 7540 it is difficult for
   servers to interpret signals from clients for prioritization, as the
   same conditions could result in very different signaling from
   different clients.  This document describes signaling that is
   simpler and more constrained, requiring less interpretation and
   allowing less variation.

   RFC 7540 does not define a method that can be used by a server to
   provide a priority signal for intermediaries.

   RFC 7540 priority is expressed relative to other requests sharing
   the same connection at the same time.  It is difficult to
   incorporate such a design into applications that generate requests
   without knowledge of how other requests might share a connection, or
   into protocols that do not have strong ordering guarantees across
   streams, like HTTP/3 [HTTP3].

   Experiments from independent research ([MARX]) have shown that
   simpler schemes can reach at least equivalent performance
   characteristics compared to the more complex RFC 7540 setups seen in
   practice, at least for the web use case.

2.1.  Disabling RFC 7540 Priorities

   The problems and insights set out above provided the motivation for
   an alternative to RFC 7540 stream priority (see Section 5.3 of
   [HTTP2]).

   The SETTINGS_NO_RFC7540_PRIORITIES HTTP/2 setting is defined by this
   document in order to allow endpoints to omit or ignore HTTP/2
   priority signals (see Section 5.3.2 of [HTTP2]), as described below.
   The value of SETTINGS_NO_RFC7540_PRIORITIES MUST be 0 or 1.  Any
   value other than 0 or 1 MUST be treated as a connection error (see
   Section 5.4.1 of [HTTP2]) of type PROTOCOL_ERROR.
   The initial value is 0.

   If endpoints use SETTINGS_NO_RFC7540_PRIORITIES, they MUST send it
   in the first SETTINGS frame.  Senders MUST NOT change the
   SETTINGS_NO_RFC7540_PRIORITIES value after the first SETTINGS frame.
   Receivers that detect a change MAY treat it as a connection error of
   type PROTOCOL_ERROR.

   Clients can send SETTINGS_NO_RFC7540_PRIORITIES with a value of 1 to
   indicate that they are not using HTTP/2 priority signals.  The
   SETTINGS frame precedes any HTTP/2 priority signal sent from
   clients, so servers can determine whether they need to allocate any
   resources to signal handling before signals arrive.  A server that
   receives SETTINGS_NO_RFC7540_PRIORITIES with a value of 1 MUST
   ignore HTTP/2 priority signals.

   Servers can send SETTINGS_NO_RFC7540_PRIORITIES with a value of 1 to
   indicate that they will ignore HTTP/2 priority signals sent by
   clients.

   Endpoints that send SETTINGS_NO_RFC7540_PRIORITIES are encouraged to
   use alternative priority signals (for example, Section 5 or
   Section 7.1), but there is no requirement to use a specific signal
   type.

2.1.1.  Advice when Using Extensible Priorities as the Alternative

   Before receiving a SETTINGS frame from a server, a client does not
   know if the server is ignoring HTTP/2 priority signals.  Therefore,
   until the client receives the SETTINGS frame from the server, the
   client SHOULD send both the HTTP/2 priority signals and the signals
   of this prioritization scheme (see Section 5 and Section 7.1).

   Once the client receives the first SETTINGS frame that contains the
   SETTINGS_NO_RFC7540_PRIORITIES parameter with a value of 1, it
   SHOULD stop sending the HTTP/2 priority signals.  This avoids
   sending redundant signals that are known to be ignored.
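The client-side policy described in this section can be summarized as a small decision function.  This is an illustrative sketch only (the function and signal-family names are hypothetical, not part of the protocol): until the server's SETTINGS frame arrives, the client emits both signal families; afterwards it keeps only what the server will act on.

```python
from typing import Optional


def signals_to_send(settings_received: bool,
                    no_rfc7540_priorities: Optional[bool]) -> set:
    """Which priority signal families a client should emit.

    settings_received: whether the server's first SETTINGS frame arrived.
    no_rfc7540_priorities: value of SETTINGS_NO_RFC7540_PRIORITIES,
    or None if the setting was absent.
    """
    if not settings_received:
        # Server preference unknown: send both families (Section 2.1.1).
        return {"rfc7540", "priority_header", "priority_update_frame"}
    if no_rfc7540_priorities:
        # Server ignores HTTP/2 priority signals: stop sending them.
        return {"priority_header", "priority_update_frame"}
    # Setting 0 or absent: PRIORITY_UPDATE frames are likely ignored,
    # but the end-to-end Priority header may help later hops.
    return {"rfc7540", "priority_header"}
```

The third branch reflects the advice immediately below: the hop-by-hop frame is dropped, while the end-to-end header field may still be sent.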
   Similarly, if the client receives SETTINGS_NO_RFC7540_PRIORITIES
   with a value of 0, or if the settings parameter was absent, it
   SHOULD stop sending PRIORITY_UPDATE frames (Section 7.1), since
   those frames are likely to be ignored.  However, the client MAY
   continue sending the Priority header field (Section 5), as it is an
   end-to-end signal that might be useful to nodes behind the server
   that the client is directly connected to.

3.  Applicability of the Extensible Priority Scheme

   The priority scheme defined by this document is primarily focused on
   the prioritization of HTTP response messages (see Section 3.4 of
   [HTTP]).  It defines new priority parameters (Section 4) and a means
   of conveying those parameters (Section 5 and Section 7), which is
   intended to communicate the priority of responses to a server that
   is responsible for prioritizing them.  Section 10 provides
   considerations for servers about acting on those signals in
   combination with other inputs and factors.

   The CONNECT method (see Section 9.3.6 of [HTTP]) can be used to
   establish tunnels.  Signaling applies similarly to tunnels;
   additional considerations for server prioritization are given in
   Section 11.

   Section 9 describes how clients can optionally apply elements of
   this scheme locally to the request messages that they generate.

   Some forms of HTTP extensions might change HTTP/2 or HTTP/3 stream
   behavior or define new data carriage mechanisms.  Such extensions
   can themselves define how this priority scheme is to be applied.

4.  Priority Parameters

   The priority information is a sequence of key-value pairs, providing
   room for future extensions.  Each key-value pair represents a
   priority parameter.

   The Priority HTTP header field (Section 5) is an end-to-end way to
   transmit this set of priority parameters when a request or a
   response is issued.
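As a concrete illustration of this key-value format, the sketch below extracts the two parameters defined later in this section (urgency "u", default 3; incremental "i", default false).  It is a hand-rolled subset for illustration only; a real implementation would use a complete Structured Fields [STRUCTURED-FIELDS] parser.  Per the rules in this section, unknown keys, out-of-range values, and unexpected types are ignored.

```python
def parse_priority(value: str):
    """Return (urgency, incremental) from a Priority field value,
    falling back to the defaults u=3, i=false. Illustrative subset of
    Structured Fields Dictionary parsing, not a conformant parser."""
    urgency, incremental = 3, False
    for member in value.split(","):
        member = member.strip()
        if not member:
            continue
        key, _, raw = member.partition("=")
        if key == "u":
            try:
                u = int(raw)
            except ValueError:
                continue                 # unexpected type: ignore
            if 0 <= u <= 7:
                urgency = u              # out-of-range values are ignored
        elif key == "i":
            if raw == "":                # a bare key is sf-boolean true
                incremental = True
            elif raw in ("?1", "?0"):
                incremental = (raw == "?1")
        # any other key: unknown parameter, ignored
    return urgency, incremental
```

For example, `parse_priority("u=5, i")` yields urgency 5 with incremental delivery, while an unrecognized or out-of-range member leaves the defaults in place.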
   After sending a request, a client can change their view of response
   priority (Section 6) by sending HTTP-version-specific
   PRIORITY_UPDATE frames defined in Section 7.1 and Section 7.2.
   Frames transmit priority parameters on a single hop only.

   Intermediaries can consume and produce priority signals in a
   PRIORITY_UPDATE frame or Priority header field.  Sending a
   PRIORITY_UPDATE frame preserves the signal from the client carried
   by the Priority header field, but provides a signal that overrides
   that for the next hop; see Section 14.  Replacing or adding a
   Priority header field overrides any signal from a client and can
   affect prioritization for all subsequent recipients.

   For both the Priority header field and the PRIORITY_UPDATE frame,
   the set of priority parameters is encoded as a Structured Fields
   Dictionary (see Section 3.2 of [STRUCTURED-FIELDS]).

   This document defines the urgency (u) and incremental (i) priority
   parameters.  When receiving an HTTP request that does not carry
   these priority parameters, a server SHOULD act as if their default
   values were specified.

   An intermediary can combine signals from requests and responses that
   it forwards.  Note that omission of priority parameters in responses
   is handled differently from omission in requests; see Section 8.

   Receivers parse the Dictionary as defined in Section 4.2 of
   [STRUCTURED-FIELDS].  Where the Dictionary is successfully parsed,
   this document places the additional requirement that unknown
   priority parameters, priority parameters with out-of-range values,
   or values of unexpected types MUST be ignored.

4.1.  Urgency

   The urgency parameter (u) takes an integer between 0 and 7, in
   descending order of priority.

   The value is encoded as an sf-integer.  The default value is 3.

   Endpoints use this parameter to communicate their view of the
   precedence of HTTP responses.
   The chosen value of urgency can be based on the expectation that
   servers might use this information to transmit HTTP responses in the
   order of their urgency.  The smaller the value, the higher the
   precedence.

   The following example shows a request for a CSS file with the
   urgency set to 0:

   :method = GET
   :scheme = https
   :authority = example.net
   :path = /style.css
   priority = u=0

   A client that fetches a document that likely consists of multiple
   HTTP resources (e.g., HTML) SHOULD assign the default urgency level
   to the main resource.  This convention allows servers to refine the
   urgency using knowledge specific to the website (see Section 8).

   The lowest urgency level (7) is reserved for background tasks such
   as delivery of software updates.  This urgency level SHOULD NOT be
   used for fetching responses that have impact on user interaction.

4.2.  Incremental

   The incremental parameter (i) takes an sf-boolean as the value that
   indicates if an HTTP response can be processed incrementally, i.e.,
   provide some meaningful output as chunks of the response arrive.

   The default value of the incremental parameter is false (0).

   If a client makes concurrent requests with the incremental parameter
   set to false, there is no benefit in serving responses with the same
   urgency concurrently because the client is not going to process
   those responses incrementally.  Serving non-incremental responses
   with the same urgency one by one, in the order in which those
   requests were generated, is considered to be the best strategy.

   If a client makes concurrent requests with the incremental parameter
   set to true, serving requests with the same urgency concurrently
   might be beneficial.  Doing this distributes the connection
   bandwidth, meaning that responses take longer to complete.
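The two serving strategies above can be sketched as a toy scheduler.  This is an illustrative sketch only, not behavior this document prescribes: responses are grouped by urgency; within one urgency level, non-incremental responses are sent to completion in request order, while incremental responses share bandwidth round-robin, one chunk at a time.  Serving non-incremental responses ahead of incremental ones within a level is an assumption of this sketch, not a rule from the scheme.

```python
def send_order(responses):
    """responses: list of (name, urgency, incremental, num_chunks) tuples,
    in the order the requests were generated.
    Returns the order in which chunks are transmitted."""
    order = []
    for urgency in sorted({u for _, u, _, _ in responses}):
        group = [r for r in responses if r[1] == urgency]
        # Non-incremental: one by one, in request order.
        for name, _, inc, chunks in group:
            if not inc:
                order += [name] * chunks
        # Incremental: interleave chunk by chunk (round-robin).
        pending = [[name, chunks] for name, _, inc, chunks in group if inc]
        while pending:
            for item in pending[:]:
                order.append(item[0])
                item[1] -= 1
                if item[1] == 0:
                    pending.remove(item)
    return order
```

With one non-incremental response "a" and two incremental responses "b" and "c" at the same urgency, each two chunks long, this yields `["a", "a", "b", "c", "b", "c"]`: "a" completes first, then "b" and "c" progress together.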
   Incremental delivery is most useful where multiple partial responses
   might provide some value to clients ahead of a complete response
   being available.

   The following example shows a request for a JPEG file with the
   urgency parameter set to 5 and the incremental parameter set to
   true.

   :method = GET
   :scheme = https
   :authority = example.net
   :path = /image.jpg
   priority = u=5, i

4.3.  Defining New Priority Parameters

   When attempting to define new priority parameters, care must be
   taken so that they do not adversely interfere with prioritization
   performed by existing endpoints or intermediaries that do not
   understand the newly defined priority parameter.  Since unknown
   priority parameters are ignored, new priority parameters should not
   change the interpretation of, or modify, the urgency (see
   Section 4.1) or incremental (see Section 4.2) priority parameters in
   a way that is not backwards compatible or fallback safe.

   For example, if there is a need to provide more granularity than
   eight urgency levels, it would be possible to subdivide the range
   using an additional priority parameter.  Implementations that do not
   recognize the parameter can safely continue to use the less granular
   eight levels.

   Alternatively, the urgency can be augmented.  For example, a
   graphical user agent could send a visible priority parameter to
   indicate if the resource being requested is within the viewport.

   Generic priority parameters are preferred over vendor-specific,
   application-specific or deployment-specific values.  If a generic
   value cannot be agreed upon in the community, the parameter's name
   should be correspondingly specific (e.g., with a prefix that
   identifies the vendor, application or deployment).

4.3.1.  Registration

   New priority parameters can be defined by registering them in the
   HTTP Priority Parameters Registry.
   The registry governs the keys (short textual strings) used in the
   Structured Fields Dictionary (see Section 3.2 of
   [STRUCTURED-FIELDS]).  Since each HTTP request can have associated
   priority signals, there is value in having short key lengths,
   especially single-character strings.  In order to encourage
   extension while avoiding unintended conflict among attractive key
   values, the HTTP Priority Parameters Registry operates two
   registration policies depending on key length.

   *  Registration requests for priority parameters with a key length
      of one use the Specification Required policy, as per Section 4.6
      of [RFC8126].

   *  Registration requests for priority parameters with a key length
      greater than one use the Expert Review policy, as per Section 4.5
      of [RFC8126].  A specification document is appreciated, but not
      required.

   When reviewing registration requests, the designated expert(s) can
   consider the additional guidance provided in Section 4.3 but cannot
   use it as a basis for rejection.

   Registration requests should use the following template:

   Name:  [a name for the Priority Parameter that matches key]

   Description:  [a description of the priority parameter semantics and
      value]

   Reference:  [to a specification defining this priority parameter]

   See the registry at https://iana.org/assignments/http-priority for
   details on where to send registration requests.

5.  The Priority HTTP Header Field

   The Priority HTTP header field carries priority parameters (see
   Section 4).  It can appear in requests and responses.  It is an
   end-to-end signal that indicates the endpoint's view of how HTTP
   responses should be prioritized.  Section 8 describes how
   intermediaries can combine the priority information sent from
   clients and servers.
   Clients cannot interpret the appearance or omission of a Priority
   response header field as acknowledgement that any prioritization has
   occurred.  Guidance for how endpoints can act on Priority header
   values is given in Section 9 and Section 10.

   Priority is a Dictionary (Section 3.2 of [STRUCTURED-FIELDS]):

   Priority = sf-dictionary

   An HTTP request with a Priority header field might be cached and
   re-used for subsequent requests; see [CACHING].  When an origin
   server generates the Priority response header field based on
   properties of an HTTP request it receives, the server is expected to
   control the cacheability or the applicability of the cached
   response, by using header fields that control the caching behavior
   (e.g., Cache-Control, Vary).

6.  Reprioritization

   After a client sends a request, it may be beneficial to change the
   priority of the response.  As an example, a web browser might issue
   a prefetch request for a JavaScript file with the urgency parameter
   of the Priority request header field set to u=7 (background).  Then,
   when the user navigates to a page which references the new
   JavaScript file, while the prefetch is in progress, the browser
   would send a reprioritization signal with the priority field value
   set to u=0.  The PRIORITY_UPDATE frame (Section 7) can be used for
   such reprioritization.

7.  The PRIORITY_UPDATE Frame

   This document specifies a new PRIORITY_UPDATE frame for HTTP/2
   [HTTP2] and HTTP/3 [HTTP3].  It carries priority parameters and
   references the target of the prioritization based on a
   version-specific identifier.  In HTTP/2, this identifier is the
   Stream ID; in HTTP/3, the identifier is either the Stream ID or Push
   ID.  Unlike the Priority header field, the PRIORITY_UPDATE frame is
   a hop-by-hop signal.
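The prefetch example in Section 6 can be sketched as a client-side signal timeline (hypothetical bookkeeping, not an API from this document): the initial priority travels end to end in the Priority header field, and a later change travels hop by hop in a PRIORITY_UPDATE frame.

```python
class RequestPriorityState:
    """Illustrative client-side tracking of priority signals emitted
    for one request. Names and structure are hypothetical."""

    def __init__(self, urgency=3, incremental=False):
        self.urgency, self.incremental = urgency, incremental
        self.signals = []                 # (signal kind, field value)

    def field_value(self):
        parts = []
        if self.urgency != 3:             # defaults can be omitted
            parts.append("u=%d" % self.urgency)
        if self.incremental:
            parts.append("i")
        return ", ".join(parts)

    def initial_request(self):
        # End-to-end signal: Priority header field on the request.
        self.signals.append(("header", self.field_value()))

    def reprioritize(self, urgency):
        # Hop-by-hop signal: PRIORITY_UPDATE frame on the control stream.
        self.urgency = urgency
        self.signals.append(("frame", self.field_value()))


prefetch = RequestPriorityState(urgency=7)  # background prefetch
prefetch.initial_request()                  # header: u=7
prefetch.reprioritize(0)                    # user navigated: frame: u=0
```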
   PRIORITY_UPDATE frames are sent by clients on the control stream,
   allowing them to be sent independent of the stream that carries the
   response.  This means they can be used to reprioritize a response or
   a push stream; or signal the initial priority of a response instead
   of the Priority header field.

   A PRIORITY_UPDATE frame communicates a complete set of all priority
   parameters in the Priority Field Value field.  Omitting a priority
   parameter is a signal to use its default value.  Failure to parse
   the Priority Field Value MAY be treated as a connection error.  In
   HTTP/2 the error is of type PROTOCOL_ERROR; in HTTP/3 the error is
   of type H3_GENERAL_PROTOCOL_ERROR.

   A client MAY send a PRIORITY_UPDATE frame before the stream that it
   references is open (except for HTTP/2 push streams; see
   Section 7.1).  Furthermore, HTTP/3 offers no guaranteed ordering
   across streams, which could cause the frame to be received earlier
   than intended.  Either case leads to a race condition where a server
   receives a PRIORITY_UPDATE frame that references a request stream
   that is yet to be opened.  To solve this condition, for the purposes
   of scheduling, the most recently received PRIORITY_UPDATE frame can
   be considered as the most up-to-date information that overrides any
   other signal.  Servers SHOULD buffer the most recently received
   PRIORITY_UPDATE frame and apply it once the referenced stream is
   opened.  Holding PRIORITY_UPDATE frames for each stream requires
   server resources, which can be bounded by local implementation
   policy.  Although there is no limit to the number of PRIORITY_UPDATE
   frames that can be sent, storing only the most recently received
   frame limits resource commitment.

7.1.  HTTP/2 PRIORITY_UPDATE Frame

   The HTTP/2 PRIORITY_UPDATE frame (type=0x10) is used by clients to
   signal the initial priority of a response, or to reprioritize a
   response or push stream.
   It carries the stream ID of the response and the priority in ASCII
   text, using the same representation as the Priority header field
   value.

   The Stream Identifier field (see Section 5.1.1 of [HTTP2]) in the
   PRIORITY_UPDATE frame header MUST be zero (0x0).  Receiving a
   PRIORITY_UPDATE frame with a field of any other value MUST be
   treated as a connection error of type PROTOCOL_ERROR.

   HTTP/2 PRIORITY_UPDATE Frame {
     Length (24),
     Type (8) = 0x10,

     Unused Flags (8),

     Reserved (1),
     Stream Identifier (31),

     Reserved (1),
     Prioritized Stream ID (31),
     Priority Field Value (..),
   }

              Figure 1: HTTP/2 PRIORITY_UPDATE Frame Payload

   The Length, Type, Unused Flag(s), Reserved, and Stream Identifier
   fields are described in Section 4 of [HTTP2].  The PRIORITY_UPDATE
   frame payload contains the following additional fields:

   Reserved:  A reserved 1-bit field.  The semantics of this bit are
      undefined.  It MUST remain unset (0x0) when sending and MUST be
      ignored when receiving.

   Prioritized Stream ID:  A 31-bit stream identifier for the stream
      that is the target of the priority update.

   Priority Field Value:  The priority update value in ASCII text,
      encoded using Structured Fields.  This is the same representation
      as the Priority header field value.

   When the PRIORITY_UPDATE frame applies to a request stream, clients
   SHOULD provide a Prioritized Stream ID that refers to a stream in
   the "open", "half-closed (local)", or "idle" state.  Servers can
   discard frames where the Prioritized Stream ID refers to a stream in
   the "half-closed (local)" or "closed" state.  The number of streams
   which have been prioritized but remain in the "idle" state plus the
   number of active streams (those in the "open" or either
   "half-closed" state; see Section 5.1.2 of [HTTP2]) MUST NOT exceed
   the value of the SETTINGS_MAX_CONCURRENT_STREAMS parameter.
Servers that receive such 632 a PRIORITY_UPDATE MUST respond with a connection error of type 633 PROTOCOL_ERROR. 635 When the PRIORITY_UPDATE frame applies to a push stream, clients 636 SHOULD provide a Prioritized Stream ID that refers to a stream in the 637 "reserved (remote)" or "half-closed (local)" state. Servers can 638 discard frames where the Prioritized Stream ID refers to a stream in 639 the "closed" state. Clients MUST NOT provide a Prioritized Stream ID 640 that refers to a push stream in the "idle" state. Servers that 641 receive a PRIORITY_UPDATE for a push stream in the "idle" state MUST 642 respond with a connection error of type PROTOCOL_ERROR. 644 If a PRIORITY_UPDATE frame is received with a Prioritized Stream ID 645 of 0x0, the recipient MUST respond with a connection error of type 646 PROTOCOL_ERROR. 648 Servers MUST NOT send PRIORITY_UPDATE frames. If a client receives a 649 PRIORITY_UPDATE frame, it MUST respond with a connection error of 650 type PROTOCOL_ERROR. 652 7.2. HTTP/3 PRIORITY_UPDATE Frame 654 The HTTP/3 PRIORITY_UPDATE frame (type=0xF0700 or 0xF0701) is used by 655 clients to signal the initial priority of a response, or to 656 reprioritize a response or push stream. It carries the identifier of 657 the element that is being prioritized and the updated priority in 658 ASCII text that uses the same representation as that of the Priority 659 header field value. PRIORITY_UPDATE with a frame type of 0xF0700 is 660 used for request streams, while PRIORITY_UPDATE with a frame type of 661 0xF0701 is used for push streams. 663 The PRIORITY_UPDATE frame MUST be sent on the client control stream 664 (see Section 6.2.1 of [HTTP3]). Receiving a PRIORITY_UPDATE frame on 665 a stream other than the client control stream MUST be treated as a 666 connection error of type H3_FRAME_UNEXPECTED. 
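As a non-normative illustration, the serialization of the frame layout shown in Figure 2 below can be sketched using QUIC variable-length integers. The function and parameter names here are illustrative, not part of the specification:

```python
def encode_varint(value: int) -> bytes:
    """Encode a QUIC variable-length integer (RFC 9000, Section 16)."""
    if value < 0x40:
        return value.to_bytes(1, "big")
    if value < 0x4000:
        return (value | 0x4000).to_bytes(2, "big")
    if value < 0x40000000:
        return (value | 0x80000000).to_bytes(4, "big")
    return (value | 0xC000000000000000).to_bytes(8, "big")


def encode_priority_update(element_id: int, field_value: str,
                           push: bool = False) -> bytes:
    """Serialize an HTTP/3 PRIORITY_UPDATE frame for the control stream.

    element_id is a stream ID (request variant, type 0xF0700) or a
    push ID (push variant, type 0xF0701); field_value is the ASCII
    Priority Field Value, e.g. "u=3,i".
    """
    frame_type = 0xF0701 if push else 0xF0700
    payload = encode_varint(element_id) + field_value.encode("ascii")
    return encode_varint(frame_type) + encode_varint(len(payload)) + payload
```

For example, encode_priority_update(0, "u=3,i") produces a 0xF0700 frame type varint, a length varint, the prioritized element ID, and the ASCII field value, matching the fields in Figure 2.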
668 HTTP/3 PRIORITY_UPDATE Frame {
669   Type (i) = 0xF0700..0xF0701,
670   Length (i),
671   Prioritized Element ID (i),
672   Priority Field Value (..),
673 }

675 Figure 2: HTTP/3 PRIORITY_UPDATE Frame

677 The PRIORITY_UPDATE frame payload has the following fields: 679 Prioritized Element ID: The stream ID or push ID that is the target 680 of the priority update. 682 Priority Field Value: The priority update value in ASCII text, 683 encoded using Structured Fields. This is the same representation 684 as the Priority header field value. 686 The request-stream variant of PRIORITY_UPDATE (type=0xF0700) MUST 687 reference a request stream. If a server receives a PRIORITY_UPDATE 688 (type=0xF0700) for a Stream ID that is not a request stream, this 689 MUST be treated as a connection error of type H3_ID_ERROR. The 690 Stream ID MUST be within the client-initiated bidirectional stream 691 limit. If a server receives a PRIORITY_UPDATE (type=0xF0700) with a 692 Stream ID that is beyond the stream limits, this SHOULD be treated as 693 a connection error of type H3_ID_ERROR. Generating an error is not 694 mandatory because HTTP/3 implementations might have practical 695 barriers to determining the active stream concurrency limit that is 696 applied by the QUIC layer. 698 The push-stream variant of PRIORITY_UPDATE (type=0xF0701) MUST 699 reference a promised push stream. If a server receives a PRIORITY_UPDATE 700 (type=0xF0701) with a Push ID that is greater than the maximum Push 701 ID or which has not yet been promised, this MUST be treated as a 702 connection error of type H3_ID_ERROR. 704 Servers MUST NOT send PRIORITY_UPDATE frames of either type. If a 705 client receives a PRIORITY_UPDATE frame, this MUST be treated as a 706 connection error of type H3_FRAME_UNEXPECTED. 708 8. Merging Client- and Server-Driven Priority Parameters 710 The client does not always have the best understanding of how 711 HTTP responses should be prioritized.
The server 712 might have additional information that can be combined with the 713 client's indicated priority in order to improve the prioritization of 714 the response. For example, use of an HTML document might depend 715 heavily on one of the inline images; existence of such dependencies 716 is typically best known to the server. Or, a server that receives 717 requests for a font [RFC8081] and images with the same urgency might 718 give higher precedence to the font, so that a visual client can 719 render textual information early. 721 An origin can use the Priority response header field to indicate its 722 view on how an HTTP response should be prioritized. An intermediary 723 that forwards an HTTP response can use the priority parameters found 724 in the Priority response header field, in combination with the client 725 Priority request header field, as input to its prioritization 726 process. No guidance is provided for merging priorities; this is 727 left as an implementation decision. 729 Absence of a priority parameter in an HTTP response indicates that the 730 server has no interest in changing the client-provided value. This is 731 different from the request header field, in which omission of a 732 priority parameter implies the use of its default value (see 733 Section 4). 735 As a non-normative example, when the client sends an HTTP request 736 with the urgency parameter set to 5 and the incremental parameter set 737 to true

739 :method = GET
740 :scheme = https
741 :authority = example.net
742 :path = /menu.png
743 priority = u=5, i

745 and the origin responds with

747 :status = 200
748 content-type = image/png
749 priority = u=1

751 the intermediary might alter its understanding of the urgency from 5 752 to 1, because it prefers the server-provided value over the client's. 753 The incremental value continues to be true, the value specified by 754 the client, as the server did not specify the incremental ("i") 755 parameter. 757 9.
Client Scheduling 759 A client MAY use priority values to make local processing or 760 scheduling choices about the requests it initiates. 762 10. Server Scheduling 764 It is generally beneficial for an HTTP server to send all responses 765 as early as possible. However, when serving multiple requests on a 766 single connection, there could be competition between the requests 767 for resources such as connection bandwidth. This section describes 768 considerations regarding how servers can schedule the order in which 769 the competing responses will be sent when such competition exists. 771 Server scheduling is a prioritization process based on many inputs, 772 with priority signals being only one form of input. Factors such as 773 implementation choices or deployment environment also play a role. 774 Any given connection is likely to have many dynamic permutations. 775 For these reasons, it is not possible to describe a universal 776 scheduling algorithm. This document provides some basic, non- 777 exhaustive recommendations for how servers might act on priority 778 parameters. It does not describe in detail how servers might combine 779 priority signals with other factors. Endpoints cannot depend on 780 particular treatment based on priority signals. Expressing priority 781 is only a suggestion. 783 It is RECOMMENDED that, when possible, servers respect the urgency 784 parameter (Section 4.1), sending higher urgency responses before 785 lower urgency responses. 787 The incremental parameter indicates how a client processes response 788 bytes as they arrive. It is RECOMMENDED that, when possible, servers 789 respect the incremental parameter (Section 4.2). 791 Non-incremental responses of the same urgency SHOULD be served by 792 prioritizing bandwidth allocation in ascending order of the stream 793 ID, which corresponds to the order in which clients make requests. 794 Doing so ensures that clients can use request ordering to influence 795 response order. 
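As a non-normative sketch, the ordering rule above for non-incremental responses (higher urgency first, then ascending stream ID) might look like the following; the dict-based bookkeeping is an illustrative assumption:

```python
def send_order(responses):
    """Order non-incremental responses for serving: higher urgency
    (lower value) first; within the same urgency, ascending stream ID,
    i.e. the order in which the client issued the requests."""
    return sorted(responses, key=lambda r: (r["urgency"], r["stream_id"]))


queue = [
    {"stream_id": 5, "urgency": 3},
    {"stream_id": 9, "urgency": 1},
    {"stream_id": 1, "urgency": 3},
]
# urgency 1 is served first; within urgency 3, stream 1 precedes stream 5
assert [r["stream_id"] for r in send_order(queue)] == [9, 1, 5]
```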
797 Incremental responses of the same urgency SHOULD be served by sharing 798 bandwidth among them. The payload of incremental responses is used in 799 parts, or chunks, as it is received. A client might benefit more 800 from receiving a portion of all these resources rather than the 801 entirety of a single resource. How large a portion of the resource 802 needs to be received before it is useful in improving performance varies. Some 803 resource types place critical elements early; others can use 804 information progressively. This scheme provides no explicit mandate 805 about how a server should use size, type, or any other input to decide 806 how to prioritize. 808 There can be scenarios where a server will need to schedule multiple 809 incremental and non-incremental responses at the same urgency level. 810 Strictly abiding by the scheduling guidance based on urgency and request 811 generation order might lead to suboptimal results at the client, as 812 early non-incremental responses might prevent serving of incremental 813 responses issued later. The following are examples of such 814 challenges.

816 1. At the same urgency level, a non-incremental request for a large
817    resource followed by an incremental request for a small resource.

819 2. At the same urgency level, an incremental request of
820    indeterminate length followed by a non-incremental large
821    resource.

823 It is RECOMMENDED that servers avoid such starvation where possible. 824 The method to do so is an implementation decision. For example, a 825 server might pre-emptively send responses of a particular incremental 826 type based on other information such as content size. 828 Optimal scheduling of server push is difficult, especially when 829 pushed resources contend with active concurrent requests.
Servers 830 can consider many factors when scheduling, such as the type or size 831 of resource being pushed, the priority of the request that triggered 832 the push, the count of active concurrent responses, the priority of 833 other active concurrent responses, etc. There is no general guidance 834 on the best way to apply these. A server that is too simple could 835 easily push at too high a priority and block client requests, or push 836 at too low a priority and delay the response, negating intended goals 837 of server push. 839 Priority signals are a factor for server push scheduling. The 840 concept of parameter value defaults applies slightly differently 841 because there is no explicit client-signalled initial priority. A 842 server can apply priority signals provided in an origin response; see 843 the merging guidance given in Section 8. In the absence of origin 844 signals, applying default parameter values could be suboptimal. By 845 whatever means a server decides to schedule a pushed response, it can 846 signal the intended priority to the client by including the Priority 847 field in a PUSH_PROMISE or HEADERS frame. 849 10.1. Intermediaries with Multiple Backend Connections 851 An intermediary serving an HTTP connection might split requests over 852 multiple backend connections. When it applies prioritization rules 853 strictly, low priority requests cannot make progress while requests 854 with higher priorities are in flight. This blocking can propagate to 855 backend connections, which the peer might interpret as a connection 856 stall. Endpoints often implement protections against stalls, such as 857 abruptly closing connections after a certain time period. To reduce 858 the possibility of this occurring, intermediaries can avoid strictly 859 following prioritization and instead allocate small amounts of 860 bandwidth for all the requests that they are forwarding, so that 861 every request can make some progress over time. 
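The mitigation described above, allocating a small amount of bandwidth to every forwarded request so each makes some progress, might be sketched as a simple round-robin over pending response data. The chunk size and data structures are illustrative assumptions, not anything the specification prescribes:

```python
from collections import deque


def round_robin(streams, chunk=1200):
    """Yield (stream_id, bytes) slices, interleaving all streams so
    that each one makes some progress over time instead of being
    starved behind strictly higher-priority responses."""
    queue = deque(streams)  # items: (stream_id, bytearray of pending data)
    while queue:
        stream_id, data = queue.popleft()
        yield stream_id, bytes(data[:chunk])
        del data[:chunk]
        if data:  # more to send: go to the back of the line
            queue.append((stream_id, data))


streams = [(1, bytearray(b"a" * 3000)), (3, bytearray(b"b" * 1000))]
# stream 3 finishes after one chunk; stream 1 keeps cycling until drained
assert [sid for sid, _ in round_robin(streams, chunk=1200)] == [1, 3, 1, 1]
```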
863 Similarly, servers SHOULD allocate some amount of bandwidth to 864 streams acting as tunnels. 866 11. Scheduling and the CONNECT Method 868 When a request stream carries the CONNECT method, the scheduling 869 guidance in this document applies to the frames on the stream. A 870 client that issues multiple CONNECT requests can set the incremental 871 parameter to true. Servers that implement the recommendations for 872 handling of the incremental parameter in Section 10 are likely to 873 schedule these fairly, preventing one CONNECT stream from blocking 874 others. 876 12. Retransmission Scheduling 878 Transport protocols such as TCP and QUIC provide reliability by 879 detecting packet losses and retransmitting lost information. In 880 addition to the considerations in Section 10, retransmission data 881 could compete with new data for scheduling. The remainder of 882 this section discusses considerations when using QUIC. 884 Section 13.3 of [QUIC] states "Endpoints SHOULD prioritize 885 retransmission of data over sending new data, unless priorities 886 specified by the application indicate otherwise". When an HTTP/3 887 application uses the priority scheme defined in this document and the 888 QUIC transport implementation supports application-indicated stream 889 priority, a transport that considers the relative priority of streams 890 when scheduling both new data and retransmission data might better 891 match the expectations of the application. However, there are no 892 requirements on how a transport chooses to schedule based on this 893 information because the decision depends on several factors and 894 trade-offs. It could prioritize new data for a higher urgency stream 895 over retransmission data for a lower priority stream, or it could 896 prioritize retransmission data over new data irrespective of 897 urgencies.
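As a non-normative sketch of one such trade-off, a transport could service the highest-urgency stream first and prefer retransmission data only within that stream. The field names and policy here are illustrative assumptions, not a mandated algorithm:

```python
def next_to_send(streams):
    """Pick the next (stream, data kind) to service.  Each stream is a
    dict with 'urgency' (0 highest .. 7 lowest), 'retx' (bytes lost and
    awaiting retransmission), and 'new' (fresh bytes).  The
    highest-urgency stream wins overall; within a stream,
    retransmission data is sent before new data."""
    ready = [s for s in streams if s["retx"] or s["new"]]
    if not ready:
        return None, None
    stream = min(ready, key=lambda s: s["urgency"])
    return stream, "retx" if stream["retx"] else "new"
```

Under this particular policy, new data for an urgency-1 stream would be sent before retransmission data for an urgency-5 stream; a transport could just as legitimately make the opposite choice.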
899 Section 6.2.4 of [QUIC-RECOVERY] also highlights consideration of 900 application priorities when sending probe packets after Probe Timeout 901 timer expiration. A QUIC implementation supporting application- 902 indicated priorities might use the relative priority of streams when 903 choosing probe data. 905 13. Fairness 907 Typically, HTTP implementations depend on the underlying transport to 908 maintain fairness between connections competing for bandwidth. When 909 HTTP requests are forwarded through intermediaries, progress made by 910 each connection originating from end clients can become different 911 over time, depending on how intermediaries coalesce or split requests 912 into backend connections. This unfairness can expand if priority 913 signals are used. Section 13.1 and Section 13.2 discuss mitigations 914 against this expansion of unfairness. 916 Conversely, Section 13.3 discusses how servers might intentionally 917 allocate unequal bandwidth to some connections depending on the 918 priority signals. 920 13.1. Coalescing Intermediaries 922 When an intermediary coalesces HTTP requests coming from multiple 923 clients into one HTTP/2 or HTTP/3 connection going to the backend 924 server, requests that originate from one client might carry signals 925 indicating higher priority than those coming from others. 927 It is sometimes beneficial for the server running behind an 928 intermediary to obey Priority header field values. As an example, a 929 resource-constrained server might defer the transmission of software 930 update files that have the background urgency. However, in the worst 931 case, the asymmetry between the priority declared by multiple clients 932 might cause responses going to one user agent to be delayed totally 933 after those going to another. 935 In order to mitigate this fairness problem, a server could use 936 knowledge about the intermediary as another input in its 937 prioritization decisions. 
For instance, if a server knows the 938 intermediary is coalescing requests, then it could avoid serving the 939 responses in their entirety and instead distribute bandwidth (for 940 example, in a round-robin manner). This can work if the constrained 941 resource is network capacity between the intermediary and the user 942 agent, as the intermediary buffers responses and forwards the chunks 943 based on the prioritization scheme it implements. 945 A server can determine whether a request came from an intermediary through 946 configuration, or by checking whether that request contains one of the 947 following header fields:

949 * Forwarded [FORWARDED], X-Forwarded-For

951 * Via (see Section 7.6.3 of [HTTP])

953 13.2. HTTP/1.x Back Ends 955 It is common for CDN infrastructure to support different HTTP 956 versions on the front end and back end. For instance, the 957 client-facing edge might support HTTP/2 and HTTP/3 while communication to 958 back end servers is done using HTTP/1.1. Unlike with connection 959 coalescing, the CDN will "de-mux" requests into discrete connections 960 to the back end. HTTP/1.1 and older do not support response 961 multiplexing in a single connection, so there is not a fairness 962 problem. However, back end servers MAY still use client headers for 963 request scheduling. Back end servers SHOULD only schedule based on 964 client priority information where that information can be scoped to 965 individual end clients. Authentication and other session information 966 might provide this linkability. 968 13.3. Intentional Introduction of Unfairness 970 It is sometimes beneficial to deprioritize the transmission of one 971 connection over others, knowing that doing so introduces a certain 972 amount of unfairness between the connections and therefore between 973 the requests served on those connections.
975 For example, a server might use a scavenging congestion controller on 976 connections that only convey background priority responses such as 977 software update images. Doing so improves responsiveness of other 978 connections at the cost of delaying the delivery of updates. 980 14. Why use an End-to-End Header Field? 982 In contrast to the prioritization scheme of HTTP/2 that uses a hop- 983 by-hop frame, the Priority header field is defined as end-to-end. 985 The way that a client processes a response is a property associated 986 with the client generating that request, not that of an intermediary. 987 Therefore, it is an end-to-end property. How these end-to-end 988 properties carried by the Priority header field affect the 989 prioritization between the responses that share a connection is a 990 hop-by-hop issue. 992 Having the Priority header field defined as end-to-end is important 993 for caching intermediaries. Such intermediaries can cache the value 994 of the Priority header field along with the response and utilize the 995 value of the cached header field when serving the cached response, 996 only because the header field is defined as end-to-end rather than 997 hop-by-hop. 999 15. Security Considerations 1001 Section 7 describes considerations for server buffering of 1002 PRIORITY_UPDATE frames. 1004 Section 10 presents examples where servers that prioritize responses 1005 in a certain way might be starved of the ability to transmit payload. 1007 The security considerations from [STRUCTURED-FIELDS] apply to 1008 processing of priority parameters defined in Section 4. 1010 16. 
IANA Considerations 1012 This specification registers the following entry in the Hypertext 1013 Transfer Protocol (HTTP) Field Name Registry established by [HTTP]: 1015 Field name: Priority 1017 Status: permanent 1019 Specification document(s): This document 1021 This specification registers the following entry in the HTTP/2 1022 Settings registry established by [RFC7540]: 1024 Name: SETTINGS_NO_RFC7540_PRIORITIES 1026 Code: 0x9 1028 Initial value: 0 1030 Specification: This document 1031 This specification registers the following entry in the HTTP/2 Frame 1032 Type registry established by [RFC7540]: 1034 Frame Type: PRIORITY_UPDATE 1036 Code: 0x10 1038 Specification: This document 1040 This specification registers the following entries in the HTTP/3 1041 Frame Type registry established by [HTTP3]: 1043 Frame Type: PRIORITY_UPDATE 1045 Code: 0xF0700 and 0xF0701 1047 Specification: This document 1049 Upon publication, please create the HTTP Priority Parameters registry 1050 at https://iana.org/assignments/http-priority 1051 (https://iana.org/assignments/http-priority) and populate it with the 1052 entries in Table 1; see Section 4.3.1 for its associated procedures.

1054 +======+==================================+===============+
1055 | Name | Description                      | Specification |
1056 +======+==================================+===============+
1057 | u    | The urgency of an HTTP response. | Section 4.1   |
1058 +------+----------------------------------+---------------+
1059 | i    | Whether an HTTP response can be  | Section 4.2   |
1060 |      | processed incrementally.         |               |
1061 +------+----------------------------------+---------------+

1063 Table 1: Initial Priority Parameters

1065 17. References 1067 17.1. Normative References 1069 [HTTP] Fielding, R. T., Nottingham, M., and J. Reschke, "HTTP 1070 Semantics", Work in Progress, Internet-Draft, draft-ietf- 1071 httpbis-semantics-19, 12 September 2021, 1072 . 1075 [HTTP2] Thomson, M. and C.
Benfield, "Hypertext Transfer Protocol 1076 Version 2 (HTTP/2)", Work in Progress, Internet-Draft, 1077 draft-ietf-httpbis-http2bis-06, 18 November 2021, 1078 . 1081 [HTTP3] Bishop, M., "Hypertext Transfer Protocol Version 3 1082 (HTTP/3)", Work in Progress, Internet-Draft, draft-ietf- 1083 quic-http-34, 2 February 2021, 1084 . 1087 [QUIC] Iyengar, J., Ed. and M. Thomson, Ed., "QUIC: A UDP-Based 1088 Multiplexed and Secure Transport", RFC 9000, 1089 DOI 10.17487/RFC9000, May 2021, 1090 . 1092 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1093 Requirement Levels", BCP 14, RFC 2119, 1094 DOI 10.17487/RFC2119, March 1997, 1095 . 1097 [RFC8126] Cotton, M., Leiba, B., and T. Narten, "Guidelines for 1098 Writing an IANA Considerations Section in RFCs", BCP 26, 1099 RFC 8126, DOI 10.17487/RFC8126, June 2017, 1100 . 1102 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 1103 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 1104 May 2017, . 1106 [STRUCTURED-FIELDS] 1107 Nottingham, M. and P-H. Kamp, "Structured Field Values for 1108 HTTP", RFC 8941, DOI 10.17487/RFC8941, February 2021, 1109 . 1111 17.2. Informative References 1113 [CACHING] Fielding, R. T., Nottingham, M., and J. Reschke, "HTTP 1114 Caching", Work in Progress, Internet-Draft, draft-ietf- 1115 httpbis-cache-19, 12 September 2021, 1116 . 1119 [FORWARDED] 1120 Petersson, A. and M. Nilsson, "Forwarded HTTP Extension", 1121 RFC 7239, DOI 10.17487/RFC7239, June 2014, 1122 . 1124 [I-D.lassey-priority-setting] 1125 Lassey, B. and L. Pardue, "Declaring Support for HTTP/2 1126 Priorities", Work in Progress, Internet-Draft, draft- 1127 lassey-priority-setting-00, 25 July 2019, 1128 . 1131 [MARX] Marx, R., Decker, T.D., Quax, P., and W. 
Lamotte, "Of the 1132 Utmost Importance: Resource Prioritization in HTTP/3 over 1133 QUIC", DOI 10.5220/0008191701300143, 1134 SCITEPRESS Proceedings of the 15th International 1135 Conference on Web Information Systems and Technologies 1136 (pages 130-143), September 2019, 1137 . 1139 [QUIC-RECOVERY] 1140 Iyengar, J., Ed. and I. Swett, Ed., "QUIC Loss Detection 1141 and Congestion Control", RFC 9002, DOI 10.17487/RFC9002, 1142 May 2021, . 1144 [RFC7540] Belshe, M., Peon, R., and M. Thomson, Ed., "Hypertext 1145 Transfer Protocol Version 2 (HTTP/2)", RFC 7540, 1146 DOI 10.17487/RFC7540, May 2015, 1147 . 1149 [RFC8081] Lilley, C., "The "font" Top-Level Media Type", RFC 8081, 1150 DOI 10.17487/RFC8081, February 2017, 1151 . 1153 Appendix A. Acknowledgements 1155 Roy Fielding presented the idea of using a header field for 1156 representing priorities in 1157 https://www.ietf.org/proceedings/83/slides/slides-83-httpbis-5.pdf 1158 (https://www.ietf.org/proceedings/83/slides/slides-83-httpbis-5.pdf). 1159 In https://github.com/pmeenan/http3-prioritization-proposal 1160 (https://github.com/pmeenan/http3-prioritization-proposal), Patrick 1161 Meenan advocated for representing the priorities using a tuple of 1162 urgency and concurrency. The ability to disable HTTP/2 1163 prioritization is inspired by [I-D.lassey-priority-setting], authored 1164 by Brad Lassey and Lucas Pardue, with modifications based on feedback 1165 that was not incorporated into an update to that document. 1167 The motivation for defining an alternative to HTTP/2 priorities is 1168 drawn from discussion within the broad HTTP community. Special 1169 thanks to Roberto Peon, Martin Thomson and Netflix for text that was 1170 incorporated explicitly in this document. 
1172 In addition to the people above, this document owes a lot to the 1173 extensive discussion in the HTTP priority design team, consisting of 1174 Alan Frindell, Andrew Galloni, Craig Taylor, Ian Swett, Kazuho Oku, 1175 Lucas Pardue, Matthew Cox, Mike Bishop, Roberto Peon, Robin Marx, Roy 1176 Fielding. 1178 Yang Chi contributed the section on retransmission scheduling. 1180 Appendix B. Change Log 1182 _RFC EDITOR: please remove this section before publication_ 1184 B.1. Since draft-ietf-httpbis-priority-11 1186 * Changes to address Last Call/IESG feedback 1188 B.2. Since draft-ietf-httpbis-priority-10 1190 * Editorial changes 1192 * Add clearer IANA instructions for Priority Parameter initial 1193 population 1195 B.3. Since draft-ietf-httpbis-priority-09 1197 * Editorial changes 1199 B.4. Since draft-ietf-httpbis-priority-08 1201 * Changelog fixups 1203 B.5. Since draft-ietf-httpbis-priority-07 1205 * Relax requirements of receiving SETTINGS_NO_RFC7540_PRIORITIES 1206 that changes value (#1714, #1725) 1208 * Clarify how intermediaries might use frames vs. headers (#1715, 1209 #1735) 1211 * Relax requirement when receiving a PRIORITY_UPDATE with an invalid 1212 structured field value (#1741, #1756) 1214 B.6. Since draft-ietf-httpbis-priority-06 1216 * Focus on editorial changes 1218 * Clarify rules about Sf-Dictionary handling in headers 1219 * Split policy for parameter IANA registry into two sections based 1220 on key length 1222 B.7. Since draft-ietf-httpbis-priority-05 1224 * Renamed SETTINGS_DEPRECATE_RFC7540_PRIORITIES to 1225 SETTINGS_NO_RFC7540_PRIORITIES 1227 * Clarify that senders of the HTTP/2 setting can use any alternative 1228 (#1679, #1705) 1230 B.8. Since draft-ietf-httpbis-priority-04 1232 * Renamed SETTINGS_DEPRECATE_HTTP2_PRIORITIES to 1233 SETTINGS_DEPRECATE_RFC7540_PRIORITIES (#1601) 1235 * Reoriented text towards RFC7540bis (#1561, #1601) 1237 * Clarify intermediary behavior (#1562) 1239 B.9. 
Since draft-ietf-httpbis-priority-03 1241 * Add statement about what this scheme applies to. Clarify 1242 extensions can use it but must define how themselves (#1550, 1243 #1559) 1245 * Describe scheduling considerations for the CONNECT method (#1495, 1246 #1544) 1248 * Describe scheduling considerations for retransmitted data (#1429, 1249 #1504) 1251 * Suggest intermediaries might avoid strict prioritization (#1562) 1253 B.10. Since draft-ietf-httpbis-priority-02 1255 * Describe considerations for server push prioritization (#1056, 1256 #1345) 1258 * Define HTTP/2 PRIORITY_UPDATE ID limits in HTTP/2 terms (#1261, 1259 #1344) 1261 * Add a Priority Parameters registry (#1371) 1263 B.11. Since draft-ietf-httpbis-priority-01 1265 * PRIORITY_UPDATE frame changes (#1096, #1079, #1167, #1262, #1267, 1266 #1271) 1268 * Add section to describe server scheduling considerations (#1215, 1269 #1232, #1266) 1271 * Remove specific instructions related to intermediary fairness 1272 (#1022, #1264) 1274 B.12. Since draft-ietf-httpbis-priority-00 1276 * Move text around (#1217, #1218) 1278 * Editorial change to the default urgency. The value is 3, which 1279 was always the intent of previous changes. 1281 B.13. Since draft-kazuho-httpbis-priority-04 1283 * Minimize semantics of Urgency levels (#1023, #1026) 1285 * Reduce guidance about how intermediary implements merging priority 1286 signals (#1026) 1288 * Remove mention of CDN-Loop (#1062) 1290 * Editorial changes 1292 * Make changes due to WG adoption 1294 * Removed outdated Consideration (#118) 1296 B.14. Since draft-kazuho-httpbis-priority-03 1298 * Changed numbering from [-1,6] to [0,7] (#78) 1300 * Replaced priority scheme negotiation with HTTP/2 priority 1301 deprecation (#100) 1303 * Shorten parameter names (#108) 1305 * Expand on considerations (#105, #107, #109, #110, #111, #113) 1307 B.15. 
Since draft-kazuho-httpbis-priority-02 1309 * Consolidation of the problem statement (#61, #73) 1311 * Define SETTINGS_PRIORITIES for negotiation (#58, #69) 1313 * Define PRIORITY_UPDATE frame for HTTP/2 and HTTP/3 (#51) 1315 * Explain fairness issue and mitigations (#56) 1317 B.16. Since draft-kazuho-httpbis-priority-01 1319 * Explain how reprioritization might be supported. 1321 B.17. Since draft-kazuho-httpbis-priority-00 1323 * Expand urgency levels from 3 to 8. 1325 Authors' Addresses 1327 Kazuho Oku 1328 Fastly 1330 Email: kazuhooku@gmail.com 1332 Lucas Pardue 1333 Cloudflare 1335 Email: lucaspardue.24.7@gmail.com