2 Transport Services (tsv) K. De Schepper 3 Internet-Draft Nokia Bell Labs 4 Intended status: Experimental B. Briscoe, Ed. 5 Expires: 5 September 2022 Independent 6 4 March 2022 8 Explicit Congestion Notification (ECN) Protocol for Very Low Queuing 9 Delay (L4S) 10 draft-ietf-tsvwg-ecn-l4s-id-25 12 Abstract 14 This specification defines the protocol to be used for a new network 15 service called low latency, low loss and scalable throughput (L4S). 16 L4S uses an Explicit Congestion Notification (ECN) scheme at the IP 17 layer that is similar to the original (or 'Classic') ECN approach, 18 except as specified within.
L4S uses 'scalable' congestion control, 19 which induces much more frequent control signals from the network and 20 it responds to them with much more fine-grained adjustments, so that 21 very low (typically sub-millisecond on average) and consistently low 22 queuing delay becomes possible for L4S traffic without compromising 23 link utilization. Thus even capacity-seeking (TCP-like) traffic can 24 have high bandwidth and very low delay at the same time, even during 25 periods of high traffic load. 27 The L4S identifier defined in this document distinguishes L4S from 28 'Classic' (e.g. TCP-Reno-friendly) traffic. It gives an incremental 29 migration path so that suitably modified network bottlenecks can 30 distinguish and isolate existing traffic that still follows the 31 Classic behaviour, to prevent it degrading the low queuing delay and 32 low loss of L4S traffic. This specification defines the rules that 33 L4S transports and network elements need to follow with the intention 34 that L4S flows neither harm each other's performance nor that of 35 Classic traffic. Examples of new active queue management (AQM) 36 marking algorithms and examples of new transports (whether TCP-like 37 or real-time) are specified separately. 39 Status of This Memo 41 This Internet-Draft is submitted in full conformance with the 42 provisions of BCP 78 and BCP 79. 44 Internet-Drafts are working documents of the Internet Engineering 45 Task Force (IETF). Note that other groups may also distribute 46 working documents as Internet-Drafts. The list of current Internet- 47 Drafts is at https://datatracker.ietf.org/drafts/current/. 49 Internet-Drafts are draft documents valid for a maximum of six months 50 and may be updated, replaced, or obsoleted by other documents at any 51 time. It is inappropriate to use Internet-Drafts as reference 52 material or to cite them other than as "work in progress." 54 This Internet-Draft will expire on 5 September 2022. 
56 Copyright Notice 58 Copyright (c) 2022 IETF Trust and the persons identified as the 59 document authors. All rights reserved. 61 This document is subject to BCP 78 and the IETF Trust's Legal 62 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 63 license-info) in effect on the date of publication of this document. 64 Please review these documents carefully, as they describe your rights 65 and restrictions with respect to this document. Code Components 66 extracted from this document must include Revised BSD License text as 67 described in Section 4.e of the Trust Legal Provisions and are 68 provided without warranty as described in the Revised BSD License. 70 Table of Contents 72 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4 73 1.1. Latency, Loss and Scaling Problems . . . . . . . . . . . 5 74 1.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 7 75 1.3. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . 9 76 2. Choice of L4S Packet Identifier: Requirements . . . . . . . . 10 77 3. L4S Packet Identification . . . . . . . . . . . . . . . . . . 11 78 4. Transport Layer Behaviour (the 'Prague Requirements') . . . . 11 79 4.1. Codepoint Setting . . . . . . . . . . . . . . . . . . . . 12 80 4.2. Prerequisite Transport Feedback . . . . . . . . . . . . . 12 81 4.3. Prerequisite Congestion Response . . . . . . . . . . . . 13 82 4.3.1. Guidance on Congestion Response in the RFC Series . . 16 83 4.4. Filtering or Smoothing of ECN Feedback . . . . . . . . . 19 84 5. Network Node Behaviour . . . . . . . . . . . . . . . . . . . 19 85 5.1. Classification and Re-Marking Behaviour . . . . . . . . . 19 86 5.2. The Strength of L4S CE Marking Relative to Drop . . . . . 21 87 5.3. Exception for L4S Packet Identification by Network Nodes 88 with Transport-Layer Awareness . . . . . . . . . . . . . 22 89 5.4. Interaction of the L4S Identifier with other 90 Identifiers . . . . . . . . . . . . . . . . . . . . . . . 22 91 5.4.1. 
DualQ Examples of Other Identifiers Complementing L4S 92 Identifiers . . . . . . . . . . . . . . . . . . . . . 22 93 5.4.1.1. Inclusion of Additional Traffic with L4S . . . . 22 94 5.4.1.2. Exclusion of Traffic From L4S Treatment . . . . . 24 95 5.4.1.3. Generalized Combination of L4S and Other 96 Identifiers . . . . . . . . . . . . . . . . . . . . 25 98 5.4.2. Per-Flow Queuing Examples of Other Identifiers 99 Complementing L4S Identifiers . . . . . . . . . . . . 27 100 5.5. Limiting Packet Bursts from Links . . . . . . . . . . . . 27 101 5.5.1. Limiting Packet Bursts from Links Fed by an L4S 102 AQM . . . . . . . . . . . . . . . . . . . . . . . . . 27 103 5.5.2. Limiting Packet Bursts from Links Upstream of an L4S 104 AQM . . . . . . . . . . . . . . . . . . . . . . . . . 28 105 6. Behaviour of Tunnels and Encapsulations . . . . . . . . . . . 28 106 6.1. No Change to ECN Tunnels and Encapsulations in General . 28 107 6.2. VPN Behaviour to Avoid Limitations of Anti-Replay . . . . 29 108 7. L4S Experiments . . . . . . . . . . . . . . . . . . . . . . . 30 109 7.1. Open Questions . . . . . . . . . . . . . . . . . . . . . 30 110 7.2. Open Issues . . . . . . . . . . . . . . . . . . . . . . . 32 111 7.3. Future Potential . . . . . . . . . . . . . . . . . . . . 32 112 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 33 113 9. Security Considerations . . . . . . . . . . . . . . . . . . . 33 114 10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 34 115 11. References . . . . . . . . . . . . . . . . . . . . . . . . . 35 116 11.1. Normative References . . . . . . . . . . . . . . . . . . 35 117 11.2. Informative References . . . . . . . . . . . . . . . . . 35 118 Appendix A. Rationale for the 'Prague L4S Requirements' . . . . 45 119 A.1. Rationale for the Requirements for Scalable Transport 120 Protocols . . . . . . . . . . . . . . . . . . . . . . . . 46 121 A.1.1. Use of L4S Packet Identifier . . . . . . . . . . . . 46 122 A.1.2. 
Accurate ECN Feedback . . . . . . . . . . . . . . . . 46 123 A.1.3. Capable of Replacement by Classic Congestion 124 Control . . . . . . . . . . . . . . . . . . . . . . . 46 125 A.1.4. Fall back to Classic Congestion Control on Packet 126 Loss . . . . . . . . . . . . . . . . . . . . . . . . 47 127 A.1.5. Coexistence with Classic Congestion Control at Classic 128 ECN bottlenecks . . . . . . . . . . . . . . . . . . . 48 129 A.1.6. Reduce RTT dependence . . . . . . . . . . . . . . . . 51 130 A.1.7. Scaling down to fractional congestion windows . . . . 52 131 A.1.8. Measuring Reordering Tolerance in Time Units . . . . 53 132 A.2. Scalable Transport Protocol Optimizations . . . . . . . . 56 133 A.2.1. Setting ECT in Control Packets and Retransmissions . 56 134 A.2.2. Faster than Additive Increase . . . . . . . . . . . . 56 135 A.2.3. Faster Convergence at Flow Start . . . . . . . . . . 57 136 Appendix B. Compromises in the Choice of L4S Identifier . . . . 57 137 Appendix C. Potential Competing Uses for the ECT(1) Codepoint . 62 138 C.1. Integrity of Congestion Feedback . . . . . . . . . . . . 62 139 C.2. Notification of Less Severe Congestion than CE . . . . . 63 140 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 64 142 1. Introduction 144 This specification defines the protocol to be used for a new network 145 service called low latency, low loss and scalable throughput (L4S). 146 L4S uses an Explicit Congestion Notification (ECN) scheme at the IP 147 layer with the same set of codepoint transitions as the original (or 148 'Classic') Explicit Congestion Notification (ECN [RFC3168]). 149 RFC 3168 required an ECN mark to be equivalent to a drop, both when 150 applied in the network and when responded to by a transport. Unlike 151 Classic ECN marking, the network applies L4S marking more immediately 152 and more aggressively than drop, and the transport response to each 153 mark is reduced and smoothed relative to that for drop. 
The two 154 changes counterbalance each other so that the throughput of an L4S 155 flow will be roughly the same as a comparable non-L4S flow under the 156 same conditions. Nonetheless, the much more frequent ECN control 157 signals and the finer responses to these signals result in very low 158 queuing delay without compromising link utilization, and this low 159 delay can be maintained during high load. For instance, queuing 160 delay under heavy and highly varying load with the example DCTCP/ 161 DualQ solution cited below on a DSL or Ethernet link is sub- 162 millisecond on average and roughly 1 to 2 milliseconds at the 99th 163 percentile without losing link utilization [DualPI2Linux], [DCttH19]. 164 Note that the inherent queuing delay while waiting to acquire a 165 discontinuous medium such as WiFi has to be minimized in its own 166 right, so it would be additional to the above (see section 6.3 of the 167 L4S architecture [I-D.ietf-tsvwg-l4s-arch]). 169 L4S relies on 'scalable' congestion controls for these delay 170 properties and for preserving low delay as flow rate scales, hence 171 the name. The congestion control used in Data Center TCP (DCTCP) is 172 an example of a scalable congestion control, but DCTCP is applicable 173 solely to controlled environments like data centres [RFC8257], 174 because it is too aggressive to co-exist with existing TCP-Reno- 175 friendly traffic. The DualQ Coupled AQM, which is defined in a 176 complementary experimental 177 specification [I-D.ietf-tsvwg-aqm-dualq-coupled], is an AQM framework 178 that enables scalable congestion controls derived from DCTCP to co- 179 exist with existing traffic, each getting roughly the same flow rate 180 when they compete under similar conditions. Note that a scalable 181 congestion control is still not safe to deploy on the Internet unless 182 it satisfies the requirements listed in Section 4. 
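The rate balance described above can be sketched numerically. The following is an illustrative sketch only, not part of this specification: it combines the coupling law from the complementary DualQ spec (Classic drop p_C = p'^2, coupled L4S marking p_CL = k*p') with the textbook steady-state approximations that a Reno window is roughly 1.22/sqrt(p) and a DCTCP-like window roughly 2/p (2 marks per round trip). The function names and the choice of p' = 5% are assumptions for illustration.

```python
# Illustrative sketch (not normative): why squaring the Classic drop
# probability while keeping L4S marking linear roughly balances flow rates.
import math

def coupled_probabilities(p_prime, k=2.0):
    # DualQ coupling: Classic drop p_C = p'^2; L4S marking p_CL = k * p'
    return p_prime ** 2, min(k * p_prime, 1.0)

def reno_window(p_c):
    # Textbook Classic steady-state model: W ~ 1.22 / sqrt(p)
    return 1.22 / math.sqrt(p_c)

def scalable_window(p_cl):
    # DCTCP-like steady-state model: ~2 marks per RTT => W ~ 2 / p
    return 2.0 / p_cl

p_c, p_cl = coupled_probabilities(0.05)   # assumed internal signal p' = 5%
print(reno_window(p_c), scalable_window(p_cl))  # ~24.4 vs 20.0 segments
```

With the default coupling factor k = 2, the two windows come out within roughly 20% of each other at the same RTT, which is the "roughly the same flow rate" property referred to above.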
184 L4S is not only for elastic (TCP-like) traffic - there are scalable 185 congestion controls for real-time media, such as the L4S variant of 186 the SCReAM [RFC8298] real-time media congestion avoidance technique 187 (RMCAT). The factor that distinguishes L4S from Classic traffic is 188 its behaviour in response to congestion. The transport wire 189 protocol, e.g. TCP, QUIC, SCTP, DCCP, RTP/RTCP, is orthogonal (and 190 therefore not suitable for distinguishing L4S from Classic packets). 192 The L4S identifier defined in this document is the key piece that 193 distinguishes L4S from 'Classic' (e.g. Reno-friendly) traffic. It 194 gives an incremental migration path so that suitably modified network 195 bottlenecks can distinguish and isolate existing Classic traffic from 196 L4S traffic to prevent the former from degrading the very low delay 197 and loss of the new scalable transports, without harming Classic 198 performance at these bottlenecks. Initial implementation of the 199 separate parts of the system has been motivated by the performance 200 benefits. 202 1.1. Latency, Loss and Scaling Problems 204 Latency is becoming the critical performance factor for many (most?) 205 applications on the public Internet, e.g. interactive Web, Web 206 services, voice, conversational video, interactive video, interactive 207 remote presence, instant messaging, online gaming, remote desktop, 208 cloud-based applications, and video-assisted remote control of 209 machinery and industrial processes. In the 'developed' world, 210 further increases in access network bit-rate offer diminishing 211 returns, whereas latency is still a multi-faceted problem. In the 212 last decade or so, much has been done to reduce propagation time by 213 placing caches or servers closer to users. However, queuing remains 214 a major intermittent component of latency. 
216 The Diffserv architecture provides Expedited Forwarding [RFC3246], so 217 that low latency traffic can jump the queue of other traffic. If 218 growth in high-throughput latency-sensitive applications continues, 219 periods with solely latency-sensitive traffic will become 220 increasingly common on links where traffic aggregation is low, for 221 instance on the access links dedicated to individual sites (homes, 222 small enterprises or mobile devices). These links also tend to 223 become the path bottleneck under load. During these periods, if all 224 the traffic were marked for the same treatment, at these bottlenecks 225 Diffserv would make no difference. Instead, it becomes imperative to 226 remove the underlying causes of any unnecessary delay. 228 The bufferbloat project has shown that excessively large buffering 229 ('bufferbloat') has been introducing significantly more delay than 230 the underlying propagation time. These delays appear only 231 intermittently -- only when a capacity-seeking (e.g. TCP) flow is 232 long enough for the queue to fill the buffer, making every packet in 233 other flows sharing the buffer sit through the queue. 235 Active queue management (AQM) was originally developed to solve this 236 problem (and others). Unlike Diffserv, which gives low latency to 237 some traffic at the expense of others, AQM controls latency for _all_ 238 traffic in a class. In general, AQM methods introduce an increasing 239 level of discard from the buffer the longer the queue persists above 240 a shallow threshold. This gives sufficient signals to capacity- 241 seeking (a.k.a. greedy) flows to keep the buffer empty for its intended 242 purpose: absorbing bursts. However, RED [RFC2309] and other 243 algorithms from the 1990s were sensitive to their configuration and 244 hard to set correctly. So, this form of AQM was not widely deployed. 246 More recent state-of-the-art AQM methods, e.g.
FQ-CoDel [RFC8290], 247 PIE [RFC8033], Adaptive RED [ARED01], are easier to configure, 248 because they define the queuing threshold in time not bytes, so it is 249 invariant for different link rates. However, no matter how good the 250 AQM, the sawtoothing sending window of a Classic congestion control 251 will either cause queuing delay to vary or cause the link to be 252 underutilized. Even with a perfectly tuned AQM, the additional 253 queuing delay will be of the same order as the underlying speed-of- 254 light delay across the network, thereby roughly doubling the total 255 round-trip time. 257 If a sender's own behaviour is introducing queuing delay variation, 258 no AQM in the network can 'un-vary' the delay without significantly 259 compromising link utilization. Even flow-queuing (e.g. [RFC8290]), 260 which isolates one flow from another, cannot isolate a flow from the 261 delay variations it inflicts on itself. Therefore those applications 262 that need to seek out high bandwidth but also need low latency will 263 have to migrate to scalable congestion control. 265 Altering host behaviour is not enough on its own though. Even if 266 hosts adopt low latency behaviour (scalable congestion controls), 267 they need to be isolated from the behaviour of existing Classic 268 congestion controls that induce large queue variations. L4S enables 269 that migration by providing latency isolation in the network and 270 distinguishing the two types of packets that need to be isolated: L4S 271 and Classic. L4S isolation can be achieved with a queue per flow 272 (e.g. [RFC8290]) but a DualQ [I-D.ietf-tsvwg-aqm-dualq-coupled] is 273 sufficient, and actually gives better tail latency. Both approaches 274 are addressed in this document. 276 The DualQ solution was developed to make very low latency available 277 without requiring per-flow queues at every bottleneck. 
This was 278 because per-flow-queuing (FQ) has well-known downsides -- not least 279 the need to inspect transport layer headers in the network, which 280 makes it incompatible with privacy approaches such as IPsec VPN 281 tunnels, and incompatible with link layer queue management, where 282 transport layer headers can be hidden, e.g. 5G. 284 Latency is not the only concern addressed by L4S: It was known when 285 TCP congestion avoidance was first developed that it would not scale 286 to high bandwidth-delay products (footnote 6 of Jacobson and 287 Karels [TCP-CA]). Given regular broadband bit-rates over WAN 288 distances are already [RFC3649] beyond the scaling range of Reno 289 congestion control, 'less unscalable' Cubic [RFC8312] and 290 Compound [I-D.sridharan-tcpm-ctcp] variants of TCP have been 291 successfully deployed. However, these are now approaching their 292 scaling limits. Unfortunately, fully scalable congestion controls 293 such as DCTCP [RFC8257] outcompete Classic ECN congestion controls 294 sharing the same queue, which is why they have been confined to 295 private data centres or research testbeds. 297 It turns out that these scalable congestion control algorithms that 298 solve the latency problem can also solve the scalability problem of 299 Classic congestion controls. The finer sawteeth in the congestion 300 window have low amplitude, so they cause very little queuing delay 301 variation and the average time to recover from one congestion signal 302 to the next (the average duration of each sawtooth) remains 303 invariant, which maintains constant tight control as flow-rate 304 scales. A background paper [DCttH19] gives the full explanation of 305 why the design solves both the latency and the scaling problems, both 306 in plain English and in more precise mathematical form. The 307 explanation is summarised without the maths in Section 4 of the L4S 308 architecture [I-D.ietf-tsvwg-l4s-arch]. 310 1.2.
Terminology 312 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 313 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 314 "OPTIONAL" in this document are to be interpreted as described in 315 [RFC2119]. In this document, these words will appear with that 316 interpretation only when in ALL CAPS. Lower case uses of these words 317 are not to be interpreted as carrying RFC-2119 significance. 319 Note: The L4S architecture [I-D.ietf-tsvwg-l4s-arch] repeats the 320 following definitions, but if there are accidental differences those 321 below take precedence. 323 Classic Congestion Control: A congestion control behaviour that can 324 co-exist with standard Reno [RFC5681] without causing 325 significantly negative impact on its flow rate [RFC5033]. Because 326 flow rate has scaled since TCP congestion control was first designed 327 in 1988, Classic congestion controls, such as Reno or Cubic, now 328 take hundreds of round trips (and growing) to 329 recover after a congestion signal (whether a loss or an ECN mark), 330 as shown in the examples in section 5.1 of the L4S 331 architecture [I-D.ietf-tsvwg-l4s-arch] and in [RFC3649]. 332 Therefore control of queuing and utilization becomes very slack, 333 and the slightest disturbances (e.g. from new flows starting) 334 prevent a high rate from being attained. 336 Scalable Congestion Control: A congestion control where the average 337 time from one congestion signal to the next (the recovery time) 338 remains invariant as the flow rate scales, all other factors being 339 equal. This maintains the same degree of control over queuing 340 and utilization whatever the flow rate, as well as ensuring that 341 high throughput is robust to disturbances. For instance, DCTCP 342 averages 2 congestion signals per round-trip whatever the flow 343 rate, as do other recently developed scalable congestion controls, 344 e.g.
Relentless TCP [Mathis09], TCP Prague 345 [I-D.briscoe-iccrg-prague-congestion-control], [PragueLinux], 346 BBRv2 [BBRv2], [I-D.cardwell-iccrg-bbr-congestion-control] and the 347 L4S variant of SCReAM for real-time media [SCReAM], [RFC8298]. 348 See Section 4.3 for more explanation. 350 Classic service: The Classic service is intended for all the 351 congestion control behaviours that co-exist with Reno [RFC5681] 352 (e.g. Reno itself, Cubic [RFC8312], 353 Compound [I-D.sridharan-tcpm-ctcp], TFRC [RFC5348]). The term 354 'Classic queue' means a queue providing the Classic service. 356 Low-Latency, Low-Loss Scalable throughput (L4S) service: The 'L4S' 357 service is intended for traffic from scalable congestion control 358 algorithms, such as TCP Prague 359 [I-D.briscoe-iccrg-prague-congestion-control], which was derived 360 from DCTCP [RFC8257]. The L4S service is for more general traffic 361 than just TCP Prague -- it allows the set of congestion controls 362 with similar scaling properties to Prague to evolve, such as the 363 examples listed above (Relentless, SCReAM). The term 'L4S queue' 364 means a queue providing the L4S service. 366 The terms Classic or L4S can also qualify other nouns, such as 367 'queue', 'codepoint', 'identifier', 'classification', 'packet', 368 'flow'. For example: an L4S packet means a packet with an L4S 369 identifier sent from an L4S congestion control. 371 Both Classic and L4S services can cope with a proportion of 372 unresponsive or less-responsive traffic as well, but in the L4S 373 case its rate has to be smooth enough or low enough not to build a 374 queue (e.g. DNS, VoIP, game sync datagrams, etc.). 376 Reno-friendly: The subset of Classic traffic that is friendly to the 377 standard Reno congestion control defined for TCP in [RFC5681]. 378 The TFRC spec. [RFC5348] indirectly implies that 'friendly' is 379 defined as "generally within a factor of two of the sending rate 380 of a TCP flow under the same conditions".
Reno-friendly is used 381 here in place of 'TCP-friendly', given the latter has become 382 imprecise, because the TCP protocol is now used with so many 383 different congestion control behaviours, and Reno is used in non- 384 TCP transports such as QUIC [RFC9000]. 386 Classic ECN: The original Explicit Congestion Notification (ECN) 387 protocol [RFC3168], which requires ECN signals to be treated the 388 same as drops, both when generated in the network and when 389 responded to by the sender. For L4S, the names used for the four 390 codepoints of the 2-bit IP-ECN field are unchanged from those 391 defined in [RFC3168]: Not ECT, ECT(0), ECT(1) and CE, where ECT 392 stands for ECN-Capable Transport and CE stands for Congestion 393 Experienced. A packet marked with the CE codepoint is termed 394 'ECN-marked' or sometimes just 'marked' where the context makes 395 ECN obvious. 397 Site: A home, mobile device, small enterprise or campus, where the 398 network bottleneck is typically the access link to the site. Not 399 all network arrangements fit this model but it is a useful, widely 400 applicable generalization. 402 1.3. Scope 404 The new L4S identifier defined in this specification is applicable 405 for IPv4 and IPv6 packets (as for Classic ECN [RFC3168]). It is 406 applicable for the unicast, multicast and anycast forwarding modes. 408 The L4S identifier is an orthogonal packet classification to the 409 Differentiated Services Code Point (DSCP) [RFC2474]. Section 5.4 410 explains what this means in practice. 412 This document is intended for experimental status, so it does not 413 update any standards track RFCs. Therefore it depends on [RFC8311], 414 which is a standards track specification that: 416 * updates the ECN proposed standard [RFC3168] to allow experimental 417 track RFCs to relax the requirement that an ECN mark must be 418 equivalent to a drop (when the network applies markings and/or 419 when the sender responds to them). 
For instance, in the ABE 420 experiment [RFC8511] this permits a sender to respond less to ECN 421 marks than to drops; 423 * changes the status of the experimental ECN nonce [RFC3540] to 424 historic; 426 * makes consequent updates to the following additional proposed 427 standard RFCs to reflect the above two bullets: 429 - ECN for RTP [RFC6679]; 431 - the congestion control specifications of various DCCP 432 congestion control identifier (CCID) profiles [RFC4341], 433 [RFC4342], [RFC5622]. 435 This document is about identifiers that are used for interoperation 436 between hosts and networks. So the audience is broad, covering 437 developers of host transports and network AQMs, as well as covering 438 how operators might wish to combine various identifiers, which would 439 require flexibility from equipment developers. 441 2. Choice of L4S Packet Identifier: Requirements 443 This subsection briefly records the process that led to the chosen 444 L4S identifier. 446 The identifier for packets using the Low Latency, Low Loss, Scalable 447 throughput (L4S) service needs to meet the following requirements: 449 * it SHOULD survive end-to-end between source and destination end- 450 points: across the boundary between host and network, between 451 interconnected networks, and through middleboxes; 453 * it SHOULD be visible at the IP layer; 455 * it SHOULD be common to IPv4 and IPv6 and transport-agnostic; 457 * it SHOULD be incrementally deployable; 459 * it SHOULD enable an AQM to classify packets encapsulated by outer 460 IP or lower-layer headers; 462 * it SHOULD consume minimal extra codepoints; 464 * it SHOULD be consistent on all the packets of a transport layer 465 flow, so that some packets of a flow are not served by a different 466 queue to others. 468 Whether the identifier would be recoverable if the experiment failed 469 is a factor that could be taken into account. 
However, this has not 470 been made a requirement, because that would favour schemes that would 471 be easier to fail, rather than those more likely to succeed. 473 It is recognised that any choice of identifier is unlikely to satisfy 474 all these requirements, particularly given the limited space left in 475 the IP header. Therefore a compromise will always be necessary, 476 which is why all the above requirements are expressed with the word 477 'SHOULD' not 'MUST'. 479 After extensive assessment of alternative schemes, "ECT(1) and CE 480 codepoints" was chosen as the best compromise. Therefore this scheme 481 is defined in detail in the following sections, while Appendix B 482 records its pros and cons against the above requirements. 484 3. L4S Packet Identification 486 The L4S treatment is an experimental track alternative packet marking 487 treatment to the Classic ECN treatment in [RFC3168], which has been 488 updated by [RFC8311] to allow experiments such as the one defined in 489 the present specification. [RFC4774] discusses some of the issues 490 and evaluation criteria when defining alternative ECN semantics. 491 Like Classic ECN, L4S ECN identifies both network and host behaviour: 492 it identifies the marking treatment that network nodes are expected 493 to apply to L4S packets, and it identifies packets that have been 494 sent from hosts that are expected to comply with a broad type of 495 sending behaviour. 497 For a packet to receive L4S treatment as it is forwarded, the sender 498 sets the ECN field in the IP header to the ECT(1) codepoint. See 499 Section 4 for full transport layer behaviour requirements, including 500 feedback and congestion response. 502 A network node that implements the L4S service always classifies 503 arriving ECT(1) packets for L4S treatment and by default classifies 504 CE packets for L4S treatment unless the heuristics described in 505 Section 5.3 are employed. 
See Section 5 for full network element 506 behaviour requirements, including classification, ECN-marking and 507 interaction of the L4S identifier with other identifiers and per-hop 508 behaviours. 510 4. Transport Layer Behaviour (the 'Prague Requirements') 511 4.1. Codepoint Setting 513 A sender that wishes a packet to receive L4S treatment as it is 514 forwarded MUST set the ECN field in the IP header (v4 or v6) to the 515 ECT(1) codepoint. 517 4.2. Prerequisite Transport Feedback 519 For a transport protocol to provide scalable congestion control 520 (Section 4.3), it MUST provide feedback of the extent of CE marking on 521 the forward path. When ECN was added to TCP [RFC3168], the feedback 522 method reported no more than one CE mark per round trip. Some 523 transport protocols derived from TCP mimic this behaviour while 524 others report the accurate extent of ECN marking. This means that 525 some transport protocols will need to be updated as a prerequisite 526 for scalable congestion control. The position for a few well-known 527 transport protocols is given below. 529 TCP: Support for the accurate ECN feedback requirements [RFC7560] 530 (such as that provided by AccECN [I-D.ietf-tcpm-accurate-ecn]) by 531 both ends is a prerequisite for scalable congestion control in 532 TCP. Therefore, the presence of ECT(1) in the IP headers even in 533 one direction of a TCP connection will imply that both ends 534 support accurate ECN feedback. However, the converse does not 535 apply. So even if both ends support AccECN, either of the two 536 ends can choose not to use a scalable congestion control, whatever 537 the other end's choice. 539 SCTP: A suitable ECN feedback mechanism for SCTP could add a chunk 540 to report the number of received CE marks 541 (e.g. [I-D.stewart-tsvwg-sctpecn]), and update the ECN feedback 542 protocol sketched out in Appendix A of the original standards 543 track specification of SCTP [RFC4960].
545 RTP over UDP: A prerequisite for scalable congestion control is for 546 both (all) ends of one media-level hop to signal ECN 547 support [RFC6679] and use the new generic RTCP feedback format of 548 [RFC8888]. The presence of ECT(1) implies that both (all) ends of 549 that media-level hop support ECN. However, the converse does not 550 apply. So each end of a media-level hop can independently choose 551 not to use a scalable congestion control, even if both ends 552 support ECN. 554 QUIC: Support for sufficiently fine-grained ECN feedback is provided 555 by the v1 IETF QUIC transport [RFC9000]. 557 DCCP: The ACK vector in DCCP [RFC4340] is already sufficient to 558 report the extent of CE marking as needed by a scalable congestion 559 control. 561 4.3. Prerequisite Congestion Response 563 As a condition for a host to send packets with the L4S identifier 564 (ECT(1)), it SHOULD implement a congestion control behaviour that 565 ensures that, in steady state, the average duration between induced 566 ECN marks does not increase as flow rate scales up, all other factors 567 being equal. This is termed a scalable congestion control. This 568 invariant duration ensures that, as flow rate scales, the average 569 period with no feedback information about capacity does not become 570 excessive. It also ensures that queue variations remain small, 571 without having to sacrifice utilization. 573 With a congestion control that sawtooths to probe capacity, this 574 duration is called the recovery time, because each time the sawtooth 575 yields, on average it takes this time to recover to its previous high 576 point. A scalable congestion control does not have to sawtooth, but 577 it has to coexist with scalable congestion controls that do.
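The recovery-time invariance can be illustrated numerically. The following is a rough sketch, not part of the specification: the RTT, segment size and function names are illustrative assumptions. It contrasts a Reno-style sawtooth, whose recovery time grows with flow rate, with a DCTCP-style scalable control, whose recovery time stays at about half an RTT whatever the rate.

```python
# Illustrative sketch only: contrast the steady-state recovery time of
# a Reno-style congestion control with a DCTCP-style scalable control
# as flow rate scales. RTT and segment size are assumed values.

RTT = 0.02            # seconds, assumed base round-trip time
SEG_BITS = 1500 * 8   # bits per segment

def reno_recovery_time(rate_bps):
    """Reno halves cwnd on each congestion signal, then regains it at
    ~1 segment per RTT, so recovery takes ~cwnd/2 round trips: the
    recovery time grows linearly with flow rate."""
    cwnd_segments = rate_bps * RTT / SEG_BITS
    return (cwnd_segments / 2) * RTT

def scalable_recovery_time(rate_bps):
    """A scalable control sees ~2 ECN marks per RTT in steady state,
    so its average recovery time is ~RTT/2 at any flow rate."""
    return RTT / 2

for mbps in (10, 100, 1000):
    rate = mbps * 1e6
    print(f"{mbps:5} Mb/s: Reno {reno_recovery_time(rate):8.3f} s, "
          f"scalable {scalable_recovery_time(rate):.3f} s")
```

Under these assumed numbers, at 1 Gb/s and a 20 ms RTT, the Reno-style recovery time exceeds 15 seconds while the scalable control's stays at 10 ms, which is the scaling problem the requirement above addresses.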
579 For instance, for DCTCP [RFC8257], TCP Prague 580 [I-D.briscoe-iccrg-prague-congestion-control], [PragueLinux] and the 581 L4S variant of SCReAM [RFC8298], the average recovery time is always 582 half a round trip (or half a reference round trip), whatever the flow 583 rate. 585 As with all transport behaviours, a detailed specification (probably 586 an experimental RFC) is expected for each congestion control, 587 following the guidelines for specifying new congestion control 588 algorithms in [RFC5033]. In addition, it is expected to document 589 these L4S-specific matters, specifically the timescale over which the 590 proportionality is averaged, and control of burstiness. The recovery 591 time requirement above is worded as a 'SHOULD' rather than a 'MUST' 592 to allow reasonable flexibility for such implementations. 594 The condition 'all other factors being equal' allows the recovery 595 time to be different for different round trip times, as long as it 596 does not increase with flow rate for any particular RTT. 598 Saying that the recovery time remains roughly invariant is equivalent 599 to saying that the number of ECN CE marks per round trip remains 600 invariant as flow rate scales, all other factors being equal. For 601 instance, an average recovery time of half of 1 RTT is equivalent to 602 2 ECN marks per round trip. For those familiar with steady-state 603 congestion response functions, it is also equivalent to saying that 604 the congestion window is inversely proportional to the proportion of 605 bytes in packets marked with the CE codepoint (see section 2 of 606 [PI2]). 608 In order to coexist safely with other Internet traffic, a scalable 609 congestion control MUST NOT tag its packets with the ECT(1) codepoint 610 unless it complies with the following bulleted requirements: 612 1. A scalable congestion control MUST be capable of being replaced 613 by a Classic congestion control (by application and/or by 614 administrative control).
If a Classic congestion control is 615 activated, it will not tag its packets with the ECT(1) codepoint 616 (see Appendix A.1.3 for rationale). 618 2. As well as responding to ECN markings, a scalable congestion 619 control MUST react to packet loss in a way that will coexist 620 safely with Classic congestion controls such as standard 621 Reno [RFC5681], as required by [RFC5033] (see Appendix A.1.4 for 622 rationale). 624 3. In uncontrolled environments, monitoring MUST be implemented to 625 support detection of problems with an ECN-capable AQM at the path 626 bottleneck that appears not to support L4S and might be in a 627 shared queue. Such monitoring SHOULD be applied to live traffic 628 that is using Scalable congestion control. Alternatively, 629 monitoring need not be applied to live traffic, if monitoring has 630 been arranged to cover the paths that live traffic takes through 631 uncontrolled environments. 633 A function to detect the above problems with an ECN-capable AQM 634 MUST also be implemented and used. The detection function SHOULD 635 be capable of making the congestion control adapt its ECN-marking 636 response in real-time to coexist safely with Classic congestion 637 controls such as standard Reno [RFC5681], as required by 638 [RFC5033]. This could be complemented by more detailed offline 639 detection of potential problems. If only offline detection is 640 used and potential problems with such an AQM are detected on 641 certain paths, the scalable congestion control MUST be replaced 642 by a Classic congestion control, at least for the problem paths. 644 See Section 4.3.1, Appendix A.1.5 and the L4S operational 645 guidance [I-D.ietf-tsvwg-l4sops] for rationale. 647 Note that a scalable congestion control is not expected to change 648 to setting ECT(0) while it transiently adapts to coexist with 649 Classic congestion controls, whereas a replacement congestion 650 control that solely behaves in the Classic way will set ECT(0). 652 4. 
In the range between the minimum likely RTT and typical RTTs 653 expected in the intended deployment scenario, a scalable 654 congestion control MUST converge towards a rate that is as 655 independent of RTT as is possible without compromising stability 656 or efficiency (see Appendix A.1.6 for rationale). 658 5. A scalable congestion control SHOULD remain responsive to 659 congestion when typical RTTs over the public Internet are 660 significantly smaller because they are no longer inflated by 661 queuing delay. It would be preferable for the minimum window of 662 a scalable congestion control to be lower than 1 segment rather 663 than use the timeout approach described for TCP in S.6.1.2 of the 664 ECN spec [RFC3168] (or an equivalent for other transports). 665 However, a lower minimum is not set as a formal requirement for 666 L4S experiments (see Appendix A.1.7 for rationale). 668 6. A scalable congestion control's loss detection SHOULD be 669 resilient to reordering over an adaptive time interval that 670 scales with throughput and adapts to reordering (as in 671 RACK [RFC8985]), as opposed to counting only in fixed units of 672 packets (as in the 3 DupACK rule of New Reno [RFC5681] and 673 [RFC6675], which is not scalable). As data rates increase (e.g., 674 due to new and/or improved technology), congestion controls that 675 detect loss by counting in units of packets become more likely to 676 incorrectly treat reordering events as congestion-caused loss 677 events (see Appendix A.1.8 for further rationale). This 678 requirement does not apply to congestion controls that are solely 679 used in controlled environments where the network introduces 680 hardly any reordering. 682 7. A scalable congestion control is expected to limit the queue 683 caused by bursts of packets. It would not seem necessary to set 684 the limit any lower than 10% of the minimum RTT expected in a 685 typical deployment (e.g. 
additional queuing of roughly 250 us for 686 the public Internet). This would be converted to a number of 687 packets under the worst-case assumption that the bottleneck link 688 capacity equals the current flow rate. No normative requirement 689 to limit bursts is given here and, until there is more industry 690 experience from the L4S experiment, it is not even known whether 691 one is needed - it seems to be in an L4S sender's self-interest 692 to limit bursts. 694 Each sender in a session can use a scalable congestion control 695 independently of the congestion control used by the receiver(s) when 696 they send data. Therefore there might be ECT(1) packets in one 697 direction and ECT(0) or Not-ECT in the other. 699 Later (Section 5.4.1.1) this document discusses the conditions for 700 mixing other "'Safe' Unresponsive Traffic" (e.g. DNS, LDAP, NTP, 701 voice, game sync packets) with L4S traffic. To be clear, although 702 such traffic can share the same queue as L4S traffic, it is not 703 appropriate for the sender to tag it as ECT(1), except in the 704 (unlikely) case that it satisfies the above conditions. 706 4.3.1. Guidance on Congestion Response in the RFC Series 708 RFC 3168 requires the congestion responses to a CE-marked packet and 709 a dropped packet to be the same. RFC 8311 is a standards-track 710 update to RFC 3168 intended to enable experimentation with ECN, 711 including the L4S experiment. RFC 8311 allows an experimental 712 congestion control's response to a CE-marked packet to differ from 713 the response to a dropped packet, provided that the differences are 714 documented in an experimental RFC, such as the present document. 716 BCP 124 [RFC4774] gives guidance to protocol designers, when 717 specifying alternative semantics for the ECN field. 
RFC 8311 718 explained that it did not need to update the best current practice in 719 BCP 124 in order to relax the 'equivalence with drop' requirement 720 because, although BCP 124 quotes the same requirement from RFC 3168, 721 the BCP does not impose requirements based on it. BCP 124 describes 722 three options for incremental deployment, with Option 3 (in 723 Section 4.3 of BCP 124) best matching the L4S case. Option 3's 724 requirement for end-nodes is that they respond to CE marks "in a way 725 that is friendly to flows using IETF-conformant congestion control." 726 This echoes other general congestion control requirements in the RFC 727 series, for example [RFC5033], which says "...congestion controllers 728 that have a significantly negative impact on traffic using standard 729 congestion control may be suspect", or [RFC8085] concerning UDP 730 congestion control says "Bulk-transfer applications that choose not 731 to implement TFRC or TCP-like windowing SHOULD implement a congestion 732 control scheme that results in bandwidth (capacity) use that competes 733 fairly with TCP within an order of magnitude." 735 The third normative bullet in Section 4.3 above (which concerns L4S 736 response to congestion from a Classic ECN AQM) aims to ensure that 737 these 'coexistence' requirements are satisfied, but it makes some 738 compromises. This subsection highlights and justifies those 739 compromises and Appendix A.1.5 and the L4S operational 740 guidance [I-D.ietf-tsvwg-l4sops] give detailed analysis, examples and 741 references (the normative text in that bullet takes precedence if any 742 informative elaboration leads to ambiguity). The approach is based 743 on an assessment of the risk of harm, which is a combination of the 744 prevalence of the conditions necessary for harm to occur, and the 745 potential severity of the harm if they do. 
747 Prevalence: There are three cases: 749 * Drop Tail: Coexistence between L4S and Classic flows is not in 750 doubt where the bottleneck does not support any form of ECN, 751 which has remained by far the most prevalent case since the ECN 752 RFC was published in 2001. 754 * L4S: Coexistence is not in doubt if the bottleneck supports 755 L4S. 757 * Classic ECN [RFC3168]: The compromises centre around cases 758 where the bottleneck supports Classic ECN but not L4S. But it 759 depends on which sub-case: 761 - Shared Queue with Classic ECN: The members of the Transport 762 Working group are not aware of any current deployments of 763 single-queue Classic ECN bottlenecks in the Internet. 764 Nonetheless, at the scale of the Internet, rarity need not 765 imply small numbers, nor that there will be rarity in 766 future. 768 - Per-Flow-queues with Classic ECN: Most AQMs with per-flow- 769 queuing (FQ) deployed from 2012 onwards had Classic ECN 770 enabled by default, specifically FQ-CoDel [RFC8290] and 771 COBALT [COBALT]. But the compromises only apply to the 772 second of two further sub-cases: 774 o With per-flow-queuing, co-existence between Classic and 775 L4S flows is not normally a problem, because different 776 flows are not meant to be in the same queue 777 (BCP 124 [RFC4774] did not foresee the introduction of 778 per-flow-queuing, which appeared as a potential isolation 779 technique some eight years after the BCP was published). 781 o However, the isolation between L4S and Classic flows is 782 not perfect in cases where the hashes of flow IDs collide 783 or where multiple flows within a layer-3 VPN are 784 encapsulated within one flow ID. 
786 To summarize, the coexistence problem is confined to cases of 787 imperfect flow isolation in an FQ, or in potential cases where a 788 Classic ECN AQM has been deployed in a shared queue (see the L4S 789 operational guidance [I-D.ietf-tsvwg-l4sops] for further details 790 including recent surveys attempting to quantify prevalence). 791 Further, if one of these cases does occur, the coexistence problem 792 does not arise unless sources of Classic and L4S flows are 793 simultaneously sharing the same bottleneck queue (e.g. different 794 applications in the same household) and flows of each type have to 795 be large enough to coincide for long enough for any throughput 796 imbalance to have developed. 798 Severity: Where long-running L4S and Classic flows coincide in a 799 shared queue, testing of one L4S congestion control (TCP Prague) 800 has found that the imbalance in average throughput between an L4S 801 and a Classic flow can reach 25:1 in favour of L4S in the worst 802 case [ecn-fallback]. However, when capacity is most scarce, the 803 Classic flow gets a higher proportion of the link: for instance, 804 over a 4 Mb/s link the throughput ratio is below ~10:1 over paths 805 with a base RTT below 100 ms, and falls below ~5:1 for base RTTs 806 below 20 ms. 808 These throughput ratios can clearly fall well outside current RFC 809 guidance on coexistence. However, the tendency towards leaving a 810 greater share for Classic flows at lower link rate and the very 811 limited prevalence of the conditions necessary for harm to occur led 812 to the possibility of allowing the RFC requirements to be 813 compromised, albeit briefly: 815 * The recommended approach is still to detect and adapt to a Classic 816 ECN AQM in real-time, which is fully consistent with all the RFCs 817 on coexistence.
In other words, the "SHOULD"s in the third bullet 818 of Section 4.3 above expect the sender to implement something 819 similar to the proof of concept code that detects the presence of 820 a Classic ECN AQM and falls back to a Classic congestion response 821 within a few round trips [ecn-fallback]. However, although this 822 code reliably detects a Classic ECN AQM, the current code can also 823 wrongly categorize an L4S AQM as Classic, most often in cases when 824 link rate is low or RTT is high. Although this is the safe way 825 round, and although implementers are expected to be able to 826 improve on this proof of concept, concerns have been raised that 827 implementers might lose faith in such detection and disable it. 829 * Therefore the third bullet in Section 4.3 above allows a 830 compromise where coexistence could diverge from the requirements 831 in the RFC Series briefly, but mandatory monitoring is required, 832 in order to detect such cases and trigger remedial action. This 833 approach tolerates a brief divergence from the RFCs given the 834 likely low prevalence and given that harm here means a flow 835 progresses more slowly than otherwise, but it does progress. The L4S 836 operational guidance [I-D.ietf-tsvwg-l4sops] outlines a range of 837 example remedial actions that include alterations either to the 838 sender or to the network. However, the final normative 839 requirement in the third bullet of Section 4.3 above places 840 ultimate responsibility for remedial action on the sender. If 841 coexistence problems with a Classic ECN AQM are detected (implying 842 they have not been resolved by the network), it says the sender 843 "MUST" revert to a Classic congestion control. 845 [I-D.ietf-tsvwg-l4sops] also gives example ways in which L4S 846 congestion controls can be rolled out initially in lower risk 847 scenarios. 849 4.4.
Filtering or Smoothing of ECN Feedback 851 Section 5.2 below specifies that an L4S AQM is expected to signal L4S 852 ECN immediately, to avoid introducing delay due to filtering or 853 smoothing. This contrasts with a Classic AQM, which filters out 854 variations in the queue before signalling ECN marking or drop. In 855 the L4S architecture [I-D.ietf-tsvwg-l4s-arch], responsibility for 856 smoothing out these variations shifts to the sender's congestion 857 control. 859 This shift of responsibility has the advantage that each sender can 860 smooth variations over a timescale proportionate to its own RTT. 861 In the Classic approach, by contrast, the network doesn't know the RTTs 862 of any of the flows, so it has to smooth out variations for a worst- 863 case RTT to ensure stability. For all the typical flows with shorter 864 RTT than the worst-case, this makes congestion control unnecessarily 865 sluggish. 867 This also gives an L4S sender the choice not to smooth, depending on 868 its context (start-up, congestion avoidance, etc.). Therefore, this 869 document places no requirement on an L4S congestion control to smooth 870 out variations in any particular way. Implementers are encouraged to 871 openly publish the approach they take to smoothing, and the results 872 and experience they gain during the L4S experiment. 874 5. Network Node Behaviour 876 5.1. Classification and Re-Marking Behaviour 878 A network node that implements the L4S service: 880 * MUST classify arriving ECT(1) packets for L4S treatment, unless 881 overridden by another classifier (e.g., see Section 5.4.1.2); 883 * MUST classify arriving CE packets for L4S treatment as well, 884 unless overridden by another classifier or unless the exception 885 referred to next applies; 887 CE packets might have originated as ECT(1) or ECT(0), but the 888 above rule to classify them as if they originated as ECT(1) is the 889 safe choice (see Appendix B for rationale).
The exception is 890 where some flow-aware in-network mechanism happens to be available 891 for distinguishing CE packets that originated as ECT(0), as 892 described in Section 5.3, but there is no implication that such a 893 mechanism is necessary. 895 An L4S AQM treatment follows similar codepoint transition rules to 896 those in RFC 3168. Specifically, the ECT(1) codepoint MUST NOT be 897 changed to any other codepoint than CE, and CE MUST NOT be changed to 898 any other codepoint. An ECT(1) packet is classified as ECN-capable 899 and, if congestion increases, an L4S AQM algorithm will increasingly 900 mark the ECN field as CE, otherwise forwarding packets unchanged as 901 ECT(1). Necessary conditions for an L4S marking treatment are 902 defined in Section 5.2. 904 Under persistent overload an L4S marking treatment MUST begin 905 applying drop to L4S traffic until the overload episode has subsided, 906 as recommended for all AQM methods in [RFC7567] (Section 4.2.1), 907 which follows the similar advice in RFC 3168 (Section 7). During 908 overload, it MUST apply the same drop probability to L4S traffic as 909 it would to Classic traffic. 911 Where an L4S AQM is transport-aware, this requirement could be 912 satisfied by using drop in only the most overloaded individual per- 913 flow AQMs. In a DualQ with flow-aware queue protection 914 (e.g. [I-D.briscoe-docsis-q-protection]), this could be achieved by 915 redirecting packets in those flows contributing most to the overload 916 out of the L4S queue so that they are subjected to drop in the 917 Classic queue. 919 For backward compatibility in uncontrolled environments, a network 920 node that implements the L4S treatment MUST also implement an AQM 921 treatment for the Classic service as defined in Section 1.2. This 922 Classic AQM treatment need not mark ECT(0) packets, but if it does, 923 see Section 5.2 for the strengths of the markings relative to drop. 
924 It MUST classify arriving ECT(0) and Not-ECT packets for treatment by 925 this Classic AQM (for the DualQ Coupled AQM, see the extensive 926 discussion on classification in Sections 2.3 and 2.5.1.1 of 927 [I-D.ietf-tsvwg-aqm-dualq-coupled]). 929 In case unforeseen problems arise with the L4S experiment, it MUST be 930 possible to configure an L4S implementation to disable the L4S 931 treatment. Once disabled, all packets of all ECN codepoints will 932 receive Classic treatment and ECT(1) packets MUST be treated as if 933 they were Not-ECT. 935 5.2. The Strength of L4S CE Marking Relative to Drop 937 The relative strengths of L4S CE and drop are irrelevant where AQMs 938 are implemented in separate queues per-application-flow, which are 939 then explicitly scheduled (e.g. with an FQ scheduler as in FQ- 940 CoDel [RFC8290]). Nonetheless, the relationship between them needs 941 to be defined for the coupling between L4S and Classic congestion 942 signals in a DualQ Coupled AQM [I-D.ietf-tsvwg-aqm-dualq-coupled], as 943 below. 945 Unless an AQM node schedules application flows explicitly, the 946 likelihood that the AQM drops a Not-ECT Classic packet (p_C) MUST be 947 roughly proportional to the square of the likelihood that it would 948 have marked it if it had been an L4S packet (p_L). That is 950 p_C ~= (p_L / k)^2 952 The constant of proportionality (k) does not have to be standardised 953 for interoperability, but a value of 2 is RECOMMENDED. The term 954 'likelihood' is used above to allow for marking and dropping to be 955 either probabilistic or deterministic. 957 This formula ensures that Scalable and Classic flows will converge to 958 roughly equal congestion windows, for the worst case of Reno 959 congestion control. This is because the congestion windows of 960 Scalable and Classic congestion controls are inversely proportional 961 to p_L and sqrt(p_C) respectively. 
So squaring p_C in the above 962 formula counterbalances the square root that characterizes Reno- 963 friendly flows. 965 Note that, contrary to RFC 3168, an AQM implementing the L4S and 966 Classic treatments does not mark an ECT(1) packet under the same 967 conditions that it would have dropped a Not-ECT packet, as allowed by 968 [RFC8311], which updates RFC 3168. However, if it marks ECT(0) 969 packets, it does so under the same conditions that it would have 970 dropped a Not-ECT packet [RFC3168]. 972 Also, in the L4S architecture [I-D.ietf-tsvwg-l4s-arch], the sender, 973 not the network, is responsible for smoothing out variations in the 974 queue. So, an L4S AQM MUST signal congestion as soon as possible. 975 Then, an L4S sender generally interprets CE marking as an unsmoothed 976 signal. 978 This requirement does not prevent an L4S AQM from mixing in 979 additional congestion signals that are smoothed, such as the signals 980 from a Classic smoothed AQM that are coupled with unsmoothed L4S 981 signals in the coupled DualQ [I-D.ietf-tsvwg-aqm-dualq-coupled]. But 982 only as long as the onset of congestion can be signalled immediately, 983 and can be interpreted by the sender as if it has been signalled 984 immediately, which is important for interoperability. 986 5.3. Exception for L4S Packet Identification by Network Nodes with 987 Transport-Layer Awareness 989 To implement L4S packet classification, a network node does not need 990 to identify transport-layer flows. Nonetheless, if an L4S network 991 node classifies packets by their transport-layer flow ID and their 992 ECN field, and if all the ECT packets in a flow have been ECT(0), the 993 node MAY classify any CE packets in the same flow as if they were 994 Classic ECT(0) packets. In all other cases, a network node MUST 995 classify all CE packets as if they were ECT(1) packets.
Examples of 996 such other cases are: i) if no ECT packets have yet been identified 997 in a flow; ii) if it is not desirable for a network node to identify 998 transport-layer flows; or iii) if some ECT packets in a flow have 999 been ECT(1) (this advice will need to be verified as part of L4S 1000 experiments). 1002 5.4. Interaction of the L4S Identifier with other Identifiers 1004 The examples in this section concern how additional identifiers might 1005 complement the L4S identifier to classify packets between class-based 1006 queues. Firstly Section 5.4.1 considers two queues, L4S and Classic, 1007 as in the Coupled DualQ AQM [I-D.ietf-tsvwg-aqm-dualq-coupled], 1008 either alone (Section 5.4.1.1) or within a larger queuing hierarchy 1009 (Section 5.4.1.2). Then Section 5.4.2 considers schemes that might 1010 combine per-flow 5-tuples with other identifiers. 1012 5.4.1. DualQ Examples of Other Identifiers Complementing L4S 1013 Identifiers 1015 5.4.1.1. Inclusion of Additional Traffic with L4S 1017 In a typical case for the public Internet a network element that 1018 implements L4S in a shared queue might want to classify some low-rate 1019 but unresponsive traffic (e.g. DNS, LDAP, NTP, voice, game sync 1020 packets) into the low latency queue to mix with L4S traffic. In this 1021 case it would not be appropriate to call the queue an L4S queue, 1022 because it is shared by L4S and non-L4S traffic. Instead it will be 1023 called the low latency or L queue. The L queue then offers two 1024 different treatments: 1026 * The L4S treatment, which is a combination of the L4S AQM treatment 1027 and a priority scheduling treatment; 1029 * The low latency treatment, which is solely the priority scheduling 1030 treatment, without ECN-marking by the AQM. 1032 To identify packets for just the scheduling treatment, it would be 1033 inappropriate to use the L4S ECT(1) identifier, because such traffic 1034 is unresponsive to ECN marking. 
Examples of relevant non-ECN 1035 identifiers are: 1037 * address ranges of specific applications or hosts configured to be, 1038 or known to be, safe, e.g. hard-coded IoT devices sending low 1039 intensity traffic; 1041 * certain low data-volume applications or protocols (e.g. ARP, DNS); 1043 * specific Diffserv codepoints that indicate traffic with limited 1044 burstiness such as the EF (Expedited Forwarding [RFC3246]), Voice- 1045 Admit [RFC5865] or proposed NQB (Non-Queue- 1046 Building [I-D.ietf-tsvwg-nqb]) service classes or equivalent 1047 local-use DSCPs (see [I-D.briscoe-tsvwg-l4s-diffserv]). 1049 In summary, a network element that implements L4S in a shared queue 1050 MAY classify additional types of packets into the L queue based on 1051 identifiers other than the ECN field, but the types SHOULD be 'safe' 1052 to mix with L4S traffic, where 'safe' is explained in 1053 Section 5.4.1.1.1. 1055 A packet that carries one of these non-ECN identifiers to classify it 1056 into the L queue would not be subject to the L4S ECN marking 1057 treatment, unless it also carried an ECT(1) or CE codepoint. The 1058 specification of an L4S AQM MUST define the behaviour for packets 1059 with unexpected combinations of codepoints, e.g. a non-ECN-based 1060 classifier for the L queue, but ECT(0) in the ECN field (for examples 1061 see section 2.5.1.1 of the DualQ 1062 spec [I-D.ietf-tsvwg-aqm-dualq-coupled]). 1064 For clarity, non-ECN identifiers, such as the examples itemized 1065 above, might be used by some network operators who believe they 1066 identify non-L4S traffic that would be safe to mix with L4S traffic. 1067 They are not alternative ways for a host to indicate that it is 1068 sending L4S packets. Only the ECT(1) ECN codepoint indicates to a 1069 network element that a host is sending L4S packets (and CE indicates 1070 that it could have originated as ECT(1)). 
Specifically ECT(1) 1071 indicates that the host claims its behaviour satisfies the 1072 prerequisite transport requirements in Section 4. 1074 In order to include non-L4S packets in the L queue, a network node 1075 MUST NOT alter Not-ECT or ECT(0) in the IP-ECN field to an L4S 1076 identifier. This ensures that these codepoints survive for any 1077 potential use later on the network path. 1079 5.4.1.1.1. 'Safe' Unresponsive Traffic 1081 The above section requires unresponsive traffic to be 'safe' to mix 1082 with L4S traffic. Ideally this means that the sender never sends any 1083 sequence of packets at a rate that exceeds the available capacity of 1084 the bottleneck link. However, typically an unresponsive transport 1085 does not even know the bottleneck capacity of the path, let alone its 1086 available capacity. Nonetheless, an application can be considered 1087 safe enough if it paces packets out (not necessarily completely 1088 regularly) such that its maximum instantaneous rate from packet to 1089 packet stays well below a typical broadband access rate. 1091 This is a vague but useful definition, because many low latency 1092 applications of interest, such as DNS, voice, game sync packets, RPC, 1093 ACKs, keep-alives, could match this description. 1095 Low rate streams such as voice and game sync packets, might not use 1096 continuously adapting ECN-based congestion control, but they ought to 1097 at least use a 'circuit-breaker' style of congestion 1098 response [RFC8083]. If the volume of traffic from unresponsive 1099 applications is high enough to overload the link, this will at least 1100 protect the capacity available to responsive applications. However, 1101 queuing delay in the L queue will probably rise to that controlled by 1102 the Classic (drop-based) AQM. If a network operator considers that 1103 such self-restraint is not enough, it might want to police the L 1104 queue (see Section 8.2 of the L4S 1105 architecture [I-D.ietf-tsvwg-l4s-arch]). 
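The 'safe enough' pacing notion above can be made concrete with a rough sketch. The threshold, trace format and function name below are assumptions for illustration, not part of this specification: the idea is simply to check that a flow's maximum instantaneous packet-to-packet rate stays well below a typical access rate.

```python
# Hypothetical check for the 'safe' pacing property described above:
# the instantaneous rate between consecutive packets must stay well
# below a typical broadband access rate. The threshold is an
# assumed value, not taken from the specification.

SAFE_RATE_BPS = 1_000_000  # assumed to be 'well below' an access rate

def max_instantaneous_rate(trace):
    """trace: list of (send_time_seconds, size_bytes) tuples in send
    order. Returns the highest packet-to-packet rate observed."""
    worst = 0.0
    for (t0, _), (t1, size) in zip(trace, trace[1:]):
        gap = t1 - t0
        if gap <= 0:
            return float('inf')  # back-to-back burst: not paced at all
        worst = max(worst, size * 8 / gap)
    return worst

# A 200-byte game-sync packet every 20 ms paces out at ~80 kb/s:
trace = [(i * 0.02, 200) for i in range(10)]
assert max_instantaneous_rate(trace) < SAFE_RATE_BPS
```

Under this sketch, a low-rate stream such as the game-sync example passes easily, while any back-to-back burst fails, matching the intuition that pacing, not average rate alone, is what makes unresponsive traffic safe to mix with L4S.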
1107 5.4.1.2. Exclusion of Traffic From L4S Treatment 1109 To extend the above example, an operator might want to exclude some 1110 traffic from the L4S treatment for a policy reason, e.g. security 1111 (traffic from malicious sources) or commercial (e.g. initially the 1112 operator may wish to confine the benefits of L4S to business 1113 customers). 1115 In this exclusion case, the classifier MUST classify on the relevant 1116 locally-used identifiers (e.g. source addresses) before classifying 1117 the non-matching traffic on the end-to-end L4S ECN identifier. 1119 A network node MUST NOT alter the end-to-end L4S ECN identifier from 1120 L4S to Classic, because an operator decision to exclude certain 1121 traffic from L4S treatment is local-only. The end-to-end L4S 1122 identifier then survives for other operators to use, or indeed, they 1123 can apply their own policy, independently based on their own choice 1124 of locally-used identifiers. This approach also allows any operator 1125 to remove its locally-applied exclusions in future, e.g. if it wishes 1126 to widen the benefit of the L4S treatment to all its customers. 1128 A network node that supports L4S but excludes certain packets 1129 carrying the L4S identifier from L4S treatment MUST still apply 1130 marking or dropping that is compatible with an L4S congestion 1131 response. For instance, it could either drop such packets with the 1132 same likelihood as Classic packets or it could ECN-mark them with a 1133 likelihood appropriate to L4S traffic (e.g. the coupled probability 1134 in a DualQ coupled AQM) but aiming for the Classic delay target. It 1135 MUST NOT ECN-mark such packets with a Classic marking probability, 1136 which could confuse the sender. 1138 5.4.1.3. Generalized Combination of L4S and Other Identifiers 1140 L4S concerns low latency, which it can provide for all traffic 1141 without differentiation and without _necessarily_ affecting bandwidth 1142 allocation. 
Diffserv provides for differentiation of both bandwidth 1143 and low latency, but its control of latency depends on its control of 1144 bandwidth. The two can be combined if a network operator wants to 1145 control bandwidth allocation but it also wants to provide low latency 1146 - for any amount of traffic within one of these allocations of 1147 bandwidth (rather than only providing low latency by limiting 1148 bandwidth) [I-D.briscoe-tsvwg-l4s-diffserv]. 1150 The DualQ examples so far have been framed in the context of 1151 providing the default Best Efforts Per-Hop Behaviour (PHB) using two 1152 queues - a Low Latency (L) queue and a Classic (C) Queue. This 1153 single DualQ structure is expected to be the most common and useful 1154 arrangement. But, more generally, an operator might choose to 1155 control bandwidth allocation through a hierarchy of Diffserv PHBs at 1156 a node, and to offer one (or more) of these PHBs using a pair of 1157 queues for a low latency and a Classic variant of the PHB. 1159 In the first case, if we assume that a network element provides no 1160 PHBs except the DualQ, if a packet carries ECT(1) or CE, the network 1161 element would classify it for the L4S treatment irrespective of its 1162 DSCP. And, if a packet carried (say) the EF DSCP, the network 1163 element could classify it into the L queue irrespective of its ECN 1164 codepoint. However, where the DualQ is in a hierarchy of other PHBs, 1165 the classifier would classify some traffic into other PHBs based on 1166 DSCP before classifying between the low latency and Classic queues 1167 (based on ECT(1), CE and perhaps also the EF DSCP or other 1168 identifiers as in the above example). 1169 [I-D.briscoe-tsvwg-l4s-diffserv] gives a number of examples of such 1170 arrangements to address various requirements. 1172 [I-D.briscoe-tsvwg-l4s-diffserv] describes how an operator might use 1173 L4S to offer low latency as well as using Diffserv for bandwidth 1174 differentiation. 
It identifies two main types of approach, which can 1175 be combined: the operator might split certain Diffserv PHBs between 1176 L4S and a corresponding Classic service. Or it might split the L4S 1177 and/or the Classic service into multiple Diffserv PHBs. In either of 1178 these cases, a packet would have to be classified on its Diffserv and 1179 ECN codepoints. 1181 In summary, there are numerous ways in which the L4S ECN identifier 1182 (ECT(1) and CE) could be combined with other identifiers to achieve 1183 particular objectives. The following categorization articulates 1184 those that are valid, but it is not necessarily exhaustive. Those 1185 tagged 'Recommended-standard-use' could be set by the sending host or 1186 a network. Those tagged 'Local-use' would only be set by a network: 1188 1. Identifiers Complementing the L4S Identifier 1190 a. Including More Traffic in the L Queue 1192 (Could use Recommended-standard-use or Local-use identifiers) 1194 b. Excluding Certain Traffic from the L Queue 1196 (Local-use only) 1198 2. Identifiers to place L4S classification in a PHB Hierarchy 1200 (Could use Recommended-standard-use or Local-use identifiers) 1202 a. PHBs Before L4S ECN Classification 1204 b. PHBs After L4S ECN Classification 1206 5.4.2. Per-Flow Queuing Examples of Other Identifiers Complementing L4S 1207 Identifiers 1209 At a node with per-flow queueing (e.g. FQ-CoDel [RFC8290]), the L4S 1210 identifier could complement the Layer-4 flow ID as a further level of 1211 flow granularity (i.e. Not-ECT and ECT(0) queued separately from 1212 ECT(1) and CE packets). "Risk of reordering Classic CE packets" in 1213 Appendix B discusses the resulting ambiguity if packets originally 1214 marked ECT(0) are marked CE by an upstream AQM before they arrive at 1215 a node that classifies CE as L4S. It argues that the risk of 1216 reordering is vanishingly small and the consequence of such a low 1217 level of reordering is minimal. 
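The extra level of flow granularity described above can be sketched as follows (the flow-key structure is a hypothetical simplification for illustration and is not taken from FQ-CoDel or any other particular implementation):

```python
# Sketch (illustrative, not normative): extend a per-flow classifier's
# key with one bit derived from the IP-ECN field, so that Not-ECT and
# ECT(0) packets of a flow queue separately from its ECT(1) and CE
# packets.

NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def is_l4s(ecn: int) -> bool:
    # ECT(1) and CE are classified as L4S; Not-ECT and ECT(0) as Classic.
    return ecn in (ECT1, CE)

def flow_key(five_tuple: tuple, ecn: int) -> tuple:
    # The L4S identifier acts as a further level of flow granularity
    # beyond the Layer-4 flow ID.
    return (five_tuple, is_l4s(ecn))
```

Note that, under this sketch, CE packets share a queue with ECT(1) packets, which is the source of the Classic-CE reordering ambiguity discussed in Appendix B.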
1219 Alternatively, it could be assumed that it is not in a flow's own 1220 interest to mix Classic and L4S identifiers. Then the AQM could use 1221 the ECN field to switch itself between a Classic and an L4S AQM 1222 behaviour within one per-flow queue. For instance, for ECN-capable 1223 packets, the AQM might consist of a simple marking threshold and an 1224 L4S ECN identifier might simply select a shallower threshold than a 1225 Classic ECN identifier would. 1227 5.5. Limiting Packet Bursts from Links 1229 As well as senders needing to limit packet bursts (Section 4.3), 1230 links need to limit the degree of burstiness they introduce. In both 1231 cases (senders and links) this is a tradeoff, because batch-handling 1232 of packets is done for good reason, e.g. processing efficiency or to 1233 make efficient use of medium acquisition delay. Some take the 1234 attitude that there is no point reducing burst delay at the sender 1235 below that introduced by links (or vice versa). However, delay 1236 reduction proceeds by cutting down 'the longest pole in the tent', 1237 which turns the spotlight on the next longest, and so on. 1239 This document does not set any quantified requirements for links to 1240 limit burst delay, primarily because link technologies are outside 1241 the remit of L4S specifications. Nonetheless, the following two 1242 subsections outline opportunities for addressing bursty links in the 1243 process of L4S implementation and deployment. 1245 5.5.1. Limiting Packet Bursts from Links Fed by an L4S AQM 1247 It would not make sense to implement an L4S AQM that feeds into a 1248 particular link technology without also reviewing opportunities to 1249 reduce any form of burst delay introduced by that link technology. 1250 This would at least limit the bursts that the link would otherwise 1251 introduce into the onward traffic, which would cause jumpy feedback 1252 to the sender as well as potential extra queuing delay downstream. 
1253 This document does not presume to even give guidance on an 1254 appropriate target for such burst delay until there is more industry 1255 experience of L4S. However, as suggested in Section 4.3 it would not 1256 seem necessary to limit bursts lower than roughly 10% of the minimum 1257 base RTT expected in the typical deployment scenario (e.g. 250 us 1258 burst duration for links within the public Internet). 1260 5.5.2. Limiting Packet Bursts from Links Upstream of an L4S AQM 1262 The initial scope of the L4S experiment is to deploy L4S AQMs at 1263 bottlenecks and L4S congestion controls at senders. This is expected 1264 to highlight interactions with the most bursty upstream links and 1265 lead operators to tune down the burstiness of those links in their 1266 network that are configurable, or failing that, to have to compromise 1267 on the delay target of some L4S AQMs. It might also require specific 1268 redesign work relevant to the most problematic link types. Such 1269 knock-on effects of initial L4S deployment would all be part of the 1270 learning from the L4S experiment. 1272 The details of such link changes are beyond the scope of the present 1273 document. Nonetheless, where L4S technology is being implemented on 1274 an outgoing interface of a device, it would make sense to consider 1275 opportunities for reducing bursts arriving at other incoming 1276 interface(s). For instance, where an L4S AQM is implemented to feed 1277 into the upstream WAN interface of a home gateway, there would be 1278 opportunities to alter the WiFi profiles sent out of any WiFi 1279 interfaces from the same device, in order to mitigate incoming bursts 1280 of aggregated WiFi frames from other WiFi stations. 1282 6. Behaviour of Tunnels and Encapsulations 1284 6.1. 
No Change to ECN Tunnels and Encapsulations in General 1286 The L4S identifier is expected to work through and within any tunnel 1287 without modification, as long as the tunnel propagates the ECN field 1288 in any of the ways that have been defined since the first variant in 1289 the year 2001 [RFC3168]. L4S will also work with (but does not rely 1290 on) any of the more recent updates to ECN propagation in [RFC4301], 1291 [RFC6040] or [I-D.ietf-tsvwg-rfc6040update-shim]. However, it is 1292 likely that some tunnels still do not implement ECN propagation at 1293 all. In these cases, L4S will work through such tunnels, but within 1294 them the outer header of L4S traffic will appear as Classic. 1296 AQMs are typically implemented where an IP-layer buffer feeds into a 1297 lower layer, so they are agnostic to link layer encapsulations. 1298 Where a bottleneck link is not IP-aware, the L4S identifier is still 1299 expected to work within any lower layer encapsulation without 1300 modification, as long as it propagates the ECN field as defined for 1301 the link technology, for example for MPLS [RFC5129] or 1302 TRILL [I-D.ietf-trill-ecn-support]. In some of these cases, 1303 e.g. layer-3 Ethernet switches, the AQM accesses the IP layer header 1304 within the outer encapsulation, so again the L4S identifier is 1305 expected to work without modification. Nonetheless, the programme to 1306 define ECN for other lower layers is still in 1307 progress [I-D.ietf-tsvwg-ecn-encap-guidelines]. 1309 6.2.
VPN Behaviour to Avoid Limitations of Anti-Replay 1311 If a mix of L4S and Classic packets is sent into the same security 1312 association (SA) of a virtual private network (VPN), and if the VPN 1313 egress is employing the optional anti-replay feature, it could 1314 inappropriately discard Classic packets (or discard the records in 1315 Classic packets) by mistaking their greater queuing delay for a 1316 replay attack (see "Dropped Packets for Tunnels with Replay 1317 Protection Enabled" in [Heist21] for the potential performance 1318 impact). This known problem is common to both IPsec [RFC4301] and 1319 DTLS [RFC6347] VPNs, given they use similar anti-replay window 1320 mechanisms. The mechanism used can only check for replay within its 1321 window, so if the window is smaller than the degree of reordering, it 1322 can only assume there might be a replay attack and discard all the 1323 packets behind the trailing edge of the window. The specifications 1324 of IPsec AH [RFC4302] and ESP [RFC4303] suggest that an implementer 1325 scales the size of the anti-replay window with interface speed, and 1326 DTLS 1.3 [I-D.ietf-tls-dtls13] says "The receiver SHOULD pick a 1327 window large enough to handle any plausible reordering, which depends 1328 on the data rate." However, in practice, the size of a VPN's anti- 1329 replay window is not always scaled appropriately. 1331 If a VPN carrying traffic participating in the L4S experiment 1332 experiences inappropriate replay detection, the foremost remedy would 1333 be to ensure that the egress is configured to comply with the above 1334 window-sizing requirements. 1336 If an implementation of a VPN egress does not support a sufficiently 1337 large anti-replay window, e.g. 
due to hardware limitations, one of 1338 the temporary alternatives listed in order of preference below might 1339 be feasible instead: 1341 * If the VPN can be configured to classify packets into different 1342 SAs indexed by DSCP, apply the appropriate locally defined DSCPs 1343 to Classic and L4S packets. The DSCPs could be applied by the 1344 network (based on the least significant bit of the ECN field), or 1345 by the sending host. Such DSCPs would only need to survive as far 1346 as the VPN ingress. 1348 * If the above is not possible and it is necessary to use L4S, 1349 either of the following might be appropriate as a last resort: 1351 - disable anti-replay protection at the VPN egress, after 1352 considering the security implications (optional anti-replay is 1353 mandatory in both IPsec and DTLS); 1355 - configure the tunnel ingress not to propagate ECN to the outer, 1356 which would lose the benefits of L4S and Classic ECN over the 1357 VPN. 1359 Modification to VPN implementations is outside the present scope, 1360 which is why this section has so far focused on reconfiguration. 1361 Although this document does not define any requirements for VPN 1362 implementations, determining whether there is a need for such 1363 requirements could be one aspect of L4S experimentation. 1365 7. L4S Experiments 1367 This section describes open questions that L4S Experiments ought to 1368 focus on. This section also documents outstanding open issues that 1369 will need to be investigated as part of L4S experimentation, given 1370 they could not be fully resolved during the WG phase. It also lists 1371 metrics that will need to be monitored during experiments 1372 (summarizing text elsewhere in L4S documents) and finally lists some 1373 potential future directions that researchers might wish to 1374 investigate. 
1376 In addition to this section, the DualQ 1377 spec [I-D.ietf-tsvwg-aqm-dualq-coupled] sets operational and 1378 management requirements for experiments with DualQ Coupled AQMs; and 1379 General operational and management requirements for experiments with 1380 L4S congestion controls are given in Section 4 and Section 5 above, 1381 e.g. co-existence and scaling requirements, incremental deployment 1382 arrangements. 1384 The specification of each scalable congestion control will need to 1385 include protocol-specific requirements for configuration and 1386 monitoring performance during experiments. Appendix A of the 1387 guidelines in [RFC5706] provides a helpful checklist. 1389 7.1. Open Questions 1391 L4S experiments would be expected to answer the following questions: 1393 * Have all the parts of L4S been deployed, and if so, what 1394 proportion of paths support it? 1396 - What types of L4S AQMs were deployed, e.g. FQ, coupled DualQ, 1397 uncoupled DualQ, other? And how prevalent was each? 1399 - Are the signalling patterns emitted by the deployed AQMs in any 1400 way different from those expected when the Prague requirements 1401 for endpoints were written? 1403 * Does use of L4S over the Internet result in significantly improved 1404 user experience? 1406 * Has L4S enabled novel interactive applications? 1408 * Did use of L4S over the Internet result in improvements to the 1409 following metrics: 1411 - queue delay (mean and 99th percentile) under various loads; 1413 - utilization; 1415 - starvation / fairness; 1417 - scaling range of flow rates and RTTs? 1419 * How dependent was the performance of L4S service on the bottleneck 1420 bandwidth or the path RTT? 1422 * How much do bursty links in the Internet affect L4S performance 1423 (see "Underutilization with Bursty Links" in [Heist21]) and how 1424 prevalent are they? 
How much limitation of burstiness from 1425 upstream links was needed and/or was realized - both at senders 1426 and at links, especially radio links? Or how much did L4S target 1427 delay have to be increased to accommodate the bursts (see bullet 1428 #7 in Section 4.3 and Section 5.5.2)? 1430 * Is the initial experiment with mis-marked bursty traffic at high 1431 RTT (see "Underutilization with Bursty Traffic" in [Heist21]) 1432 indicative of similar problems at lower RTTs and, if so, how 1433 effective is the suggested remedy in Appendix A.1 of the DualQ 1434 spec [I-D.ietf-tsvwg-aqm-dualq-coupled] (or possible other 1435 remedies)? 1437 * Was per-flow queue protection typically (un)necessary? 1439 - How well did overload protection or queue protection work? 1441 * How well did L4S flows coexist with Classic flows when sharing a 1442 bottleneck? 1444 - How frequently did problems arise? 1445 - What caused any coexistence problems, and were any problems due 1446 to single-queue Classic ECN AQMs (this assumes single-queue 1447 Classic ECN AQMs can be distinguished from FQ ones)? 1449 * How prevalent were problems with the L4S service due to tunnels / 1450 encapsulations that do not support ECN decapsulation? 1452 * How easy was it to implement a fully compliant L4S congestion 1453 control, over various different transport protocols (TCP, QUIC, 1454 RMCAT, etc)? 1456 Monitoring for harm to other traffic, specifically bandwidth 1457 starvation or excess queuing delay, will need to be conducted 1458 alongside all early L4S experiments. It is hard, if not impossible, 1459 for an individual flow to measure its impact on other traffic. So 1460 such monitoring will need to be conducted using bespoke monitoring 1461 across flows and/or across classes of traffic. 1463 7.2. Open Issues 1465 * What is the best way forward to deal with L4S over single-queue 1466 Classic ECN AQM bottlenecks, given current problems with 1467 misdetecting L4S AQMs as Classic ECN AQMs?
See the L4S 1468 operational guidance [I-D.ietf-tsvwg-l4sops]. 1470 * Fixing the poor interaction between current L4S congestion 1471 controls and CoDel with only Classic ECN support during flow 1472 startup. Originally, this was due to a bug in the initialization 1473 of the congestion EWMA in the Linux implementation of TCP Prague. 1474 That was quickly fixed, which removed the main performance impact, 1475 but further improvement would be useful (either by modifying 1476 CoDel, Scalable congestion controls, or both). 1478 7.3. Future Potential 1480 Researchers might find that L4S opens up the following interesting 1481 areas for investigation: 1483 * Potential for faster convergence time and tracking of available 1484 capacity; 1486 * Potential for improvements to particular link technologies, and 1487 cross-layer interactions with them; 1489 * Potential for using virtual queues, e.g. to further reduce latency 1490 jitter, or to leave headroom for capacity variation in radio 1491 networks; 1493 * Development and specification of reverse path congestion control 1494 using L4S building blocks (e.g. AccECN, QUIC); 1496 * Once queuing delay is cut down, what becomes the 'second longest 1497 pole in the tent' (other than the speed of light)? 1499 * Novel alternatives to the existing set of L4S AQMs; 1501 * Novel applications enabled by L4S. 1503 8. IANA Considerations 1505 The 01 codepoint of the ECN Field of the IP header is specified by 1506 the present Experimental RFC. The process for an experimental RFC to 1507 assign this codepoint in the IP header (v4 and v6) is documented in 1508 Proposed Standard [RFC8311], which updates the Proposed Standard 1509 [RFC3168].
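For orientation, the four codepoints of the two-bit ECN field defined in [RFC3168] can be sketched as follows (the extraction function is an illustration only; 'Bits 6-7' are the two least significant bits of the former IPv4 TOS byte / IPv6 Traffic Class octet):

```python
# Illustrative only: values of the two-bit ECN field (bits 6-7 of the
# Traffic Class / former TOS octet, i.e. its two least significant bits).
NOT_ECT = 0b00   # Not ECN-Capable Transport
ECT1    = 0b01   # ECN-Capable Transport(1) -- the L4S identifier
ECT0    = 0b10   # ECN-Capable Transport(0) -- Classic ECN
CE      = 0b11   # Congestion Experienced

def ecn_field(traffic_class_octet: int) -> int:
    # Mask off the DSCP (the upper six bits) to leave the ECN field.
    return traffic_class_octet & 0b11
```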
1511 When the present document is published as an RFC, IANA is asked to 1512 update the 01 entry in the registry, "ECN Field (Bits 6-7)" to the 1513 following (see https://www.iana.org/assignments/dscp-registry/dscp- 1514 registry.xhtml#ecn-field ): 1516 +========+=====================+=============================+ 1517 | Binary | Keyword | References | 1518 +========+=====================+=============================+ 1519 | 01 | ECT(1) (ECN-Capable | [RFC8311] [RFC Errata 5399] | 1520 | | Transport(1))[1] | [RFCXXXX] | 1521 +--------+---------------------+-----------------------------+ 1523 Table 1 1525 [XXXX is the number that the RFC Editor assigns to the present 1526 document (this sentence to be removed by the RFC Editor)]. 1528 9. Security Considerations 1530 Approaches to assure the integrity of signals using the new 1531 identifier are introduced in Appendix C.1. See the security 1532 considerations in the L4S architecture [I-D.ietf-tsvwg-l4s-arch] for 1533 further discussion of mis-use of the identifier, as well as extensive 1534 discussion of policing rate and latency in regard to L4S. 1536 If the anti-replay window of a VPN egress is too small, it will 1537 mistake deliberate delay differences for a replay attack, and discard 1538 higher delay packets (e.g. Classic) carried within the same security 1539 association (SA) as low delay packets (e.g. L4S). Section 6.2 1540 recommends that VPNs used in L4S experiments are configured with a 1541 sufficiently large anti-replay window, as required by the relevant 1542 specifications. It also discusses other alternatives. 
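The failure mode described above can be sketched with a deliberately simplified model of an RFC 4303-style anti-replay window (an illustration of the principle only, not an implementation of IPsec or DTLS, and the 64-packet window size is just an example):

```python
# Simplified sketch of an anti-replay sliding window: a packet whose
# sequence number falls behind the window's trailing edge cannot be
# checked for replay, so it is discarded.  Greater queuing delay for
# Classic packets mixed into the same SA as L4S packets can push them
# behind that edge.

def accept(seq: int, highest_seen: int, window: int = 64) -> bool:
    if seq > highest_seen:
        return True    # advances the leading edge of the window
    if seq <= highest_seen - window:
        return False   # behind the trailing edge: discarded unchecked
    return True        # within the window: a replay check is possible

# With a 64-packet window, a Classic packet arriving 100 sequence
# numbers behind the latest L4S packet is discarded.
```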
1544 If a user taking part in the L4S experiment sets up a VPN without 1545 being aware of the above advice, and if the user allows anyone to 1546 send traffic into their VPN, they would open up a DoS vulnerability 1547 in which an attacker could induce the VPN's anti-replay mechanism to 1548 discard enough of the user's Classic (C) traffic (if they are 1549 receiving any) to cause a significant rate reduction. While the user 1550 is actively downloading C traffic, the attacker sends C traffic into 1551 the VPN to fill the remainder of the bottleneck link, then sends 1552 intermittent L4S packets to maximize the chance of exceeding the 1553 VPN's replay window. The user can prevent this attack by following 1554 the recommendations in Section 6.2. 1556 The recommendation to detect loss in time units prevents the ACK- 1557 splitting attacks described in [Savage-TCP]. 1559 10. Acknowledgements 1561 Thanks to Richard Scheffenegger, John Leslie, David Taeht, Jonathan 1562 Morton, Gorry Fairhurst, Michael Welzl, Mikael Abrahamsson and Andrew 1563 McGregor for the discussions that led to this specification. Ing-jyh 1564 (Inton) Tsang was a contributor to the early drafts of this document. 1565 And thanks to Mikael Abrahamsson, Lloyd Wood, Nicolas Kuhn, Greg 1566 White, Tom Henderson, David Black, Gorry Fairhurst, Brian Carpenter, 1567 Jake Holland, Rod Grimes, Richard Scheffenegger, Sebastian Moeller, 1568 Neal Cardwell, Praveen Balasubramanian, Reza Marandian Hagh, Pete 1569 Heist, Stuart Cheshire, Vidhi Goel, Mirja Kuehlewind and Ermin Sakic 1570 for providing help and reviewing this draft and thanks to Ingemar 1571 Johansson for reviewing and providing substantial text. Thanks to 1572 Sebastian Moeller for identifying the interaction with VPN anti- 1573 replay and to Jonathan Morton for identifying the attack based on 1574 this. 
Particular thanks to tsvwg chairs Gorry Fairhurst, David Black 1575 and Wes Eddy for patiently helping this and the other L4S drafts 1576 through the IETF process. Appendix A listing the Prague L4S 1577 Requirements is based on text authored by Marcelo Bagnulo Braun that 1578 was originally an appendix to [I-D.ietf-tsvwg-l4s-arch]. That text 1579 was in turn based on the collective output of the attendees listed in 1580 the minutes of a 'bar BoF' on DCTCP Evolution during 1581 IETF-94 [TCPPrague]. 1583 The authors' contributions were part-funded by the European Community 1584 under its Seventh Framework Programme through the Reducing Internet 1585 Transport Latency (RITE) project (ICT-317700). The contribution of 1586 Koen De Schepper was also part-funded by the 5Growth and DAEMON EU 1587 H2020 projects. Bob Briscoe was also funded partly by the Research 1588 Council of Norway through the TimeIn project, partly by CableLabs and 1589 partly by the Comcast Innovation Fund. The views expressed here are 1590 solely those of the authors. 1592 11. References 1594 11.1. Normative References 1596 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1597 Requirement Levels", BCP 14, RFC 2119, 1598 DOI 10.17487/RFC2119, March 1997, 1599 . 1601 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition 1602 of Explicit Congestion Notification (ECN) to IP", 1603 RFC 3168, DOI 10.17487/RFC3168, September 2001, 1604 . 1606 [RFC4774] Floyd, S., "Specifying Alternate Semantics for the 1607 Explicit Congestion Notification (ECN) Field", BCP 124, 1608 RFC 4774, DOI 10.17487/RFC4774, November 2006, 1609 . 1611 [RFC6679] Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P., 1612 and K. Carlberg, "Explicit Congestion Notification (ECN) 1613 for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August 1614 2012, . 1616 11.2. Informative References 1618 [A2DTCP] Zhang, T., Wang, J., Huang, J., Huang, Y., Chen, J., and 1619 Y. 
Pan, "Adaptive-Acceleration Data Center TCP", IEEE 1620 Transactions on Computers 64(6):1522-1533, June 2015, 1621 . 1624 [Ahmed19] Ahmed, A.S., "Extending TCP for Low Round Trip Delay", 1625 Masters Thesis, Uni Oslo , August 2019, 1626 . 1628 [Alizadeh-stability] 1629 Alizadeh, M., Javanmard, A., and B. Prabhakar, "Analysis 1630 of DCTCP: Stability, Convergence, and Fairness", ACM 1631 SIGMETRICS 2011 , June 2011, 1632 . 1635 [ARED01] Floyd, S., Gummadi, R., and S. Shenker, "Adaptive RED: An 1636 Algorithm for Increasing the Robustness of RED's Active 1637 Queue Management", ACIRI Technical Report , August 2001, 1638 . 1640 [BBRv2] Cardwell, N., "TCP BBR v2 Alpha/Preview Release", github 1641 repository; Linux congestion control module, 1642 . 1644 [COBALT] Palmei, J., Gupta, S., Imputato, P., Morton, J., 1645 Tahiliani, M., Avallone, S., and D. Taht, "Design and 1646 Evaluation of COBALT Queue Discipline", In Proc. IEEE 1647 Int'l Symp. on Local and Metropolitan Area Networks 2019, 1648 pp1--6, 2019, 1649 . 1651 [DCttH19] De Schepper, K., Bondarenko, O., Tilmans, O., and B. 1652 Briscoe, "`Data Centre to the Home': Ultra-Low Latency for 1653 All", Updated RITE project Technical Report , July 2019, 1654 . 1656 [DualPI2Linux] 1657 Albisser, O., De Schepper, K., Briscoe, B., Tilmans, O., 1658 and H. Steen, "DUALPI2 - Low Latency, Low Loss and 1659 Scalable (L4S) AQM", Proc. Linux Netdev 0x13 , March 2019, 1660 . 1663 [ecn-fallback] 1664 Briscoe, B. and A.S. Ahmed, "TCP Prague Fall-back on 1665 Detection of a Classic ECN AQM", bobbriscoe.net Technical 1666 Report TR-BB-2019-002, April 2020, 1667 . 1669 [Heist21] Heist, P. and J. Morton, "L4S Tests", github README, May 1670 2021, . 1672 [I-D.briscoe-docsis-q-protection] 1673 Briscoe, B. and G. White, "The DOCSIS(r) Queue Protection 1674 Algorithm to Preserve Low Latency", Work in Progress, 1675 Internet-Draft, draft-briscoe-docsis-q-protection-02, 31 1676 January 2022, . 
1679 [I-D.briscoe-iccrg-prague-congestion-control] 1680 Schepper, K. D., Tilmans, O., and B. Briscoe, "Prague 1681 Congestion Control", Work in Progress, Internet-Draft, 1682 draft-briscoe-iccrg-prague-congestion-control-00, 9 March 1683 2021, . 1686 [I-D.briscoe-tsvwg-l4s-diffserv] 1687 Briscoe, B., "Interactions between Low Latency, Low Loss, 1688 Scalable Throughput (L4S) and Differentiated Services", 1689 Work in Progress, Internet-Draft, draft-briscoe-tsvwg-l4s- 1690 diffserv-02, 4 November 2018, 1691 . 1694 [I-D.cardwell-iccrg-bbr-congestion-control] 1695 Cardwell, N., Cheng, Y., Yeganeh, S. H., Swett, I., and V. 1696 Jacobson, "BBR Congestion Control", Work in Progress, 1697 Internet-Draft, draft-cardwell-iccrg-bbr-congestion- 1698 control-01, 7 November 2021, 1699 . 1702 [I-D.ietf-tcpm-accurate-ecn] 1703 Briscoe, B., Kühlewind, M., and R. Scheffenegger, "More 1704 Accurate ECN Feedback in TCP", Work in Progress, Internet- 1705 Draft, draft-ietf-tcpm-accurate-ecn-16, 3 February 2022, 1706 . 1709 [I-D.ietf-tcpm-generalized-ecn] 1710 Bagnulo, M. and B. Briscoe, "ECN++: Adding Explicit 1711 Congestion Notification (ECN) to TCP Control Packets", 1712 Work in Progress, Internet-Draft, draft-ietf-tcpm- 1713 generalized-ecn-09, 31 January 2022, 1714 . 1717 [I-D.ietf-tls-dtls13] 1718 Rescorla, E., Tschofenig, H., and N. Modadugu, "The 1719 Datagram Transport Layer Security (DTLS) Protocol Version 1720 1.3", Work in Progress, Internet-Draft, draft-ietf-tls- 1721 dtls13-43, 30 April 2021, 1722 . 1725 [I-D.ietf-trill-ecn-support] 1726 Eastlake, D. E. and B. Briscoe, "TRILL (TRansparent 1727 Interconnection of Lots of Links): ECN (Explicit 1728 Congestion Notification) Support", Work in Progress, 1729 Internet-Draft, draft-ietf-trill-ecn-support-07, 25 1730 February 2018, . 1733 [I-D.ietf-tsvwg-aqm-dualq-coupled] 1734 Schepper, K. D., Briscoe, B., and G. 
White, "DualQ Coupled 1735 AQMs for Low Latency, Low Loss and Scalable Throughput 1736 (L4S)", Work in Progress, Internet-Draft, draft-ietf- 1737 tsvwg-aqm-dualq-coupled-22, 4 March 2022, 1738 . 1741 [I-D.ietf-tsvwg-ecn-encap-guidelines] 1742 Briscoe, B. and J. Kaippallimalil, "Guidelines for Adding 1743 Congestion Notification to Protocols that Encapsulate IP", 1744 Work in Progress, Internet-Draft, draft-ietf-tsvwg-ecn- 1745 encap-guidelines-16, 25 May 2021, 1746 . 1749 [I-D.ietf-tsvwg-l4s-arch] 1750 Briscoe, B., Schepper, K. D., Bagnulo, M., and G. White, 1751 "Low Latency, Low Loss, Scalable Throughput (L4S) Internet 1752 Service: Architecture", Work in Progress, Internet-Draft, 1753 draft-ietf-tsvwg-l4s-arch-16, 1 February 2022, 1754 . 1757 [I-D.ietf-tsvwg-l4sops] 1758 White, G., "Operational Guidance for Deployment of L4S in 1759 the Internet", Work in Progress, Internet-Draft, draft- 1760 ietf-tsvwg-l4sops-02, 25 October 2021, 1761 . 1764 [I-D.ietf-tsvwg-nqb] 1765 White, G. and T. Fossati, "A Non-Queue-Building Per-Hop 1766 Behavior (NQB PHB) for Differentiated Services", Work in 1767 Progress, Internet-Draft, draft-ietf-tsvwg-nqb-10, 4 March 1768 2022, . 1771 [I-D.ietf-tsvwg-rfc6040update-shim] 1772 Briscoe, B., "Propagating Explicit Congestion Notification 1773 Across IP Tunnel Headers Separated by a Shim", Work in 1774 Progress, Internet-Draft, draft-ietf-tsvwg-rfc6040update- 1775 shim-14, 25 May 2021, 1776 . 1779 [I-D.sridharan-tcpm-ctcp] 1780 Sridharan, M., Tan, K., Bansal, D., and D. Thaler, 1781 "Compound TCP: A New TCP Congestion Control for High-Speed 1782 and Long Distance Networks", Work in Progress, Internet- 1783 Draft, draft-sridharan-tcpm-ctcp-02, 11 November 2008, 1784 . 1787 [I-D.stewart-tsvwg-sctpecn] 1788 Stewart, R. R., Tuexen, M., and X. Dong, "ECN for Stream 1789 Control Transmission Protocol (SCTP)", Work in Progress, 1790 Internet-Draft, draft-stewart-tsvwg-sctpecn-05, 15 January 1791 2014, . 1794 [LinuxPacedChirping] 1795 Misund, J. 
and B. Briscoe, "Paced Chirping - Rethinking 1796 TCP start-up", Proc. Linux Netdev 0x13 , March 2019, 1797 . 1799 [Mathis09] Mathis, M., "Relentless Congestion Control", PFLDNeT'09 , 1800 May 2009, . 1803 [Paced-Chirping] 1804 Misund, J., "Rapid Acceleration in TCP Prague", Masters 1805 Thesis , May 2018, 1806 . 1809 [PI2] De Schepper, K., Bondarenko, O., Tsang, I., and B. 1810 Briscoe, "PI^2 : A Linearized AQM for both Classic and 1811 Scalable TCP", Proc. ACM CoNEXT 2016 pp.105-119, December 1812 2016, 1813 . 1815 [PragueLinux] 1816 Briscoe, B., De Schepper, K., Albisser, O., Misund, J., 1817 Tilmans, O., Kühlewind, M., and A.S. Ahmed, "Implementing 1818 the `TCP Prague' Requirements for Low Latency Low Loss 1819 Scalable Throughput (L4S)", Proc. Linux Netdev 0x13 , 1820 March 2019, . 1823 [QV] Briscoe, B. and P. Hurtig, "Up to Speed with Queue View", 1824 RITE Technical Report D2.3; Appendix C.2, August 2015, 1825 . 1828 [RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering, 1829 S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., 1830 Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, 1831 S., Wroclawski, J., and L. Zhang, "Recommendations on 1832 Queue Management and Congestion Avoidance in the 1833 Internet", RFC 2309, DOI 10.17487/RFC2309, April 1998, 1834 . 1836 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black, 1837 "Definition of the Differentiated Services Field (DS 1838 Field) in the IPv4 and IPv6 Headers", RFC 2474, 1839 DOI 10.17487/RFC2474, December 1998, 1840 . 1842 [RFC3246] Davie, B., Charny, A., Bennet, J.C.R., Benson, K., Le 1843 Boudec, J.Y., Courtney, W., Davari, S., Firoiu, V., and D. 1844 Stiliadis, "An Expedited Forwarding PHB (Per-Hop 1845 Behavior)", RFC 3246, DOI 10.17487/RFC3246, March 2002, 1846 . 1848 [RFC3540] Spring, N., Wetherall, D., and D. Ely, "Robust Explicit 1849 Congestion Notification (ECN) Signaling with Nonces", 1850 RFC 3540, DOI 10.17487/RFC3540, June 2003, 1851 . 
1853 [RFC3649] Floyd, S., "HighSpeed TCP for Large Congestion Windows", 1854 RFC 3649, DOI 10.17487/RFC3649, December 2003, 1855 . 1857 [RFC4301] Kent, S. and K. Seo, "Security Architecture for the 1858 Internet Protocol", RFC 4301, DOI 10.17487/RFC4301, 1859 December 2005, . 1861 [RFC4302] Kent, S., "IP Authentication Header", RFC 4302, 1862 DOI 10.17487/RFC4302, December 2005, 1863 . 1865 [RFC4303] Kent, S., "IP Encapsulating Security Payload (ESP)", 1866 RFC 4303, DOI 10.17487/RFC4303, December 2005, 1867 . 1869 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1870 Congestion Control Protocol (DCCP)", RFC 4340, 1871 DOI 10.17487/RFC4340, March 2006, 1872 . 1874 [RFC4341] Floyd, S. and E. Kohler, "Profile for Datagram Congestion 1875 Control Protocol (DCCP) Congestion Control ID 2: TCP-like 1876 Congestion Control", RFC 4341, DOI 10.17487/RFC4341, March 1877 2006, . 1879 [RFC4342] Floyd, S., Kohler, E., and J. Padhye, "Profile for 1880 Datagram Congestion Control Protocol (DCCP) Congestion 1881 Control ID 3: TCP-Friendly Rate Control (TFRC)", RFC 4342, 1882 DOI 10.17487/RFC4342, March 2006, 1883 . 1885 [RFC4960] Stewart, R., Ed., "Stream Control Transmission Protocol", 1886 RFC 4960, DOI 10.17487/RFC4960, September 2007, 1887 . 1889 [RFC5033] Floyd, S. and M. Allman, "Specifying New Congestion 1890 Control Algorithms", BCP 133, RFC 5033, 1891 DOI 10.17487/RFC5033, August 2007, 1892 . 1894 [RFC5129] Davie, B., Briscoe, B., and J. Tay, "Explicit Congestion 1895 Marking in MPLS", RFC 5129, DOI 10.17487/RFC5129, January 1896 2008, . 1898 [RFC5348] Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP 1899 Friendly Rate Control (TFRC): Protocol Specification", 1900 RFC 5348, DOI 10.17487/RFC5348, September 2008, 1901 . 1903 [RFC5562] Kuzmanovic, A., Mondal, A., Floyd, S., and K. 1904 Ramakrishnan, "Adding Explicit Congestion Notification 1905 (ECN) Capability to TCP's SYN/ACK Packets", RFC 5562, 1906 DOI 10.17487/RFC5562, June 2009, 1907 . 
1909 [RFC5622] Floyd, S. and E. Kohler, "Profile for Datagram Congestion 1910 Control Protocol (DCCP) Congestion ID 4: TCP-Friendly Rate 1911 Control for Small Packets (TFRC-SP)", RFC 5622, 1912 DOI 10.17487/RFC5622, August 2009, 1913 . 1915 [RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion 1916 Control", RFC 5681, DOI 10.17487/RFC5681, September 2009, 1917 . 1919 [RFC5706] Harrington, D., "Guidelines for Considering Operations and 1920 Management of New Protocols and Protocol Extensions", 1921 RFC 5706, DOI 10.17487/RFC5706, November 2009, 1922 . 1924 [RFC5865] Baker, F., Polk, J., and M. Dolly, "A Differentiated 1925 Services Code Point (DSCP) for Capacity-Admitted Traffic", 1926 RFC 5865, DOI 10.17487/RFC5865, May 2010, 1927 . 1929 [RFC5925] Touch, J., Mankin, A., and R. Bonica, "The TCP 1930 Authentication Option", RFC 5925, DOI 10.17487/RFC5925, 1931 June 2010, . 1933 [RFC6040] Briscoe, B., "Tunnelling of Explicit Congestion 1934 Notification", RFC 6040, DOI 10.17487/RFC6040, November 1935 2010, . 1937 [RFC6077] Papadimitriou, D., Ed., Welzl, M., Scharf, M., and B. 1938 Briscoe, "Open Research Issues in Internet Congestion 1939 Control", RFC 6077, DOI 10.17487/RFC6077, February 2011, 1940 . 1942 [RFC6347] Rescorla, E. and N. Modadugu, "Datagram Transport Layer 1943 Security Version 1.2", RFC 6347, DOI 10.17487/RFC6347, 1944 January 2012, . 1946 [RFC6660] Briscoe, B., Moncaster, T., and M. Menth, "Encoding Three 1947 Pre-Congestion Notification (PCN) States in the IP Header 1948 Using a Single Diffserv Codepoint (DSCP)", RFC 6660, 1949 DOI 10.17487/RFC6660, July 2012, 1950 . 1952 [RFC6675] Blanton, E., Allman, M., Wang, L., Jarvinen, I., Kojo, M., 1953 and Y. Nishida, "A Conservative Loss Recovery Algorithm 1954 Based on Selective Acknowledgment (SACK) for TCP", 1955 RFC 6675, DOI 10.17487/RFC6675, August 2012, 1956 . 1958 [RFC7560] Kuehlewind, M., Ed., Scheffenegger, R., and B. 
Briscoe, 1959 "Problem Statement and Requirements for Increased Accuracy 1960 in Explicit Congestion Notification (ECN) Feedback", 1961 RFC 7560, DOI 10.17487/RFC7560, August 2015, 1962 . 1964 [RFC7567] Baker, F., Ed. and G. Fairhurst, Ed., "IETF 1965 Recommendations Regarding Active Queue Management", 1966 BCP 197, RFC 7567, DOI 10.17487/RFC7567, July 2015, 1967 . 1969 [RFC7713] Mathis, M. and B. Briscoe, "Congestion Exposure (ConEx) 1970 Concepts, Abstract Mechanism, and Requirements", RFC 7713, 1971 DOI 10.17487/RFC7713, December 2015, 1972 . 1974 [RFC8033] Pan, R., Natarajan, P., Baker, F., and G. White, 1975 "Proportional Integral Controller Enhanced (PIE): A 1976 Lightweight Control Scheme to Address the Bufferbloat 1977 Problem", RFC 8033, DOI 10.17487/RFC8033, February 2017, 1978 . 1980 [RFC8083] Perkins, C. and V. Singh, "Multimedia Congestion Control: 1981 Circuit Breakers for Unicast RTP Sessions", RFC 8083, 1982 DOI 10.17487/RFC8083, March 2017, 1983 . 1985 [RFC8085] Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage 1986 Guidelines", BCP 145, RFC 8085, DOI 10.17487/RFC8085, 1987 March 2017, . 1989 [RFC8257] Bensley, S., Thaler, D., Balasubramanian, P., Eggert, L., 1990 and G. Judd, "Data Center TCP (DCTCP): TCP Congestion 1991 Control for Data Centers", RFC 8257, DOI 10.17487/RFC8257, 1992 October 2017, . 1994 [RFC8290] Hoeiland-Joergensen, T., McKenney, P., Taht, D., Gettys, 1995 J., and E. Dumazet, "The Flow Queue CoDel Packet Scheduler 1996 and Active Queue Management Algorithm", RFC 8290, 1997 DOI 10.17487/RFC8290, January 2018, 1998 . 2000 [RFC8298] Johansson, I. and Z. Sarker, "Self-Clocked Rate Adaptation 2001 for Multimedia", RFC 8298, DOI 10.17487/RFC8298, December 2002 2017, . 2004 [RFC8311] Black, D., "Relaxing Restrictions on Explicit Congestion 2005 Notification (ECN) Experimentation", RFC 8311, 2006 DOI 10.17487/RFC8311, January 2018, 2007 . 2009 [RFC8312] Rhee, I., Xu, L., Ha, S., Zimmermann, A., Eggert, L., and 2010 R. 
Scheffenegger, "CUBIC for Fast Long-Distance Networks", 2011 RFC 8312, DOI 10.17487/RFC8312, February 2018, 2012 . 2014 [RFC8511] Khademi, N., Welzl, M., Armitage, G., and G. Fairhurst, 2015 "TCP Alternative Backoff with ECN (ABE)", RFC 8511, 2016 DOI 10.17487/RFC8511, December 2018, 2017 . 2019 [RFC8888] Sarker, Z., Perkins, C., Singh, V., and M. Ramalho, "RTP 2020 Control Protocol (RTCP) Feedback for Congestion Control", 2021 RFC 8888, DOI 10.17487/RFC8888, January 2021, 2022 . 2024 [RFC8985] Cheng, Y., Cardwell, N., Dukkipati, N., and P. Jha, "The 2025 RACK-TLP Loss Detection Algorithm for TCP", RFC 8985, 2026 DOI 10.17487/RFC8985, February 2021, 2027 . 2029 [RFC9000] Iyengar, J., Ed. and M. Thomson, Ed., "QUIC: A UDP-Based 2030 Multiplexed and Secure Transport", RFC 9000, 2031 DOI 10.17487/RFC9000, May 2021, 2032 . 2034 [Savage-TCP] 2035 Savage, S., Cardwell, N., Wetherall, D., and T. Anderson, 2036 "TCP Congestion Control with a Misbehaving Receiver", ACM 2037 SIGCOMM Computer Communication Review 29(5):71--78, 2038 October 1999. 2040 [SCReAM] Johansson, I., "SCReAM", github repository; , 2041 . 2044 [sub-mss-prob] 2045 Briscoe, B. and K. De Schepper, "Scaling TCP's Congestion 2046 Window for Small Round Trip Times", BT Technical Report 2047 TR-TUB8-2015-002, May 2015, 2048 . 2050 [TCP-CA] Jacobson, V. and M.J. Karels, "Congestion Avoidance and 2051 Control", Lawrence Berkeley Labs Technical Report , 2052 November 1988, . 2054 [TCPPrague] 2055 Briscoe, B., "Notes: DCTCP evolution 'bar BoF': Tue 21 Jul 2056 2015, 17:40, Prague", tcpprague mailing list archive , 2057 July 2015, . 2060 [VCP] Xia, Y., Subramanian, L., Stoica, I., and S. Kalyanaraman, 2061 "One more bit is enough", Proc. SIGCOMM'05, ACM CCR 2062 35(4):37--48, 2005, 2063 . 2065 Appendix A. Rationale for the 'Prague L4S Requirements' 2067 This appendix is informative, not normative.
It gives a list of 2068 modifications to current scalable congestion controls so that they 2069 can be deployed over the public Internet and coexist safely with 2070 existing traffic. The list complements the normative requirements in 2071 Section 4 that a sender has to comply with before it can set the L4S 2072 identifier in packets it sends into the Internet. As well as 2073 rationale for safety improvements (the requirements in Section 4) 2074 this appendix also includes preferable performance improvements 2075 (optimizations). 2077 The requirements and recommendations in Section 4 have become known 2078 as the Prague L4S Requirements, because they were originally 2079 identified at an ad hoc meeting during IETF-94 in Prague [TCPPrague]. 2080 They were originally called the 'TCP Prague Requirements', but they 2081 are not solely applicable to TCP, so the name and wording have been 2082 generalized for all transport protocols, and the name 'TCP Prague' is 2083 now used for a specific implementation of the requirements. 2085 At the time of writing, DCTCP [RFC8257] is the most widely used 2086 scalable transport protocol. In its current form, DCTCP is specified 2087 to be deployable only in controlled environments. Deploying it in 2088 the public Internet would lead to a number of issues, both from the 2089 safety and the performance perspective. The modifications and 2090 additional mechanisms listed in this section will be necessary for 2091 its deployment over the global Internet. Where an example is needed, 2092 DCTCP is used as a base, but the requirements in Section 4 apply 2093 equally to other scalable congestion controls, covering adaptive 2094 real-time media, etc., not just capacity-seeking behaviours. 2096 A.1. Rationale for the Requirements for Scalable Transport Protocols 2098 A.1.1.
Use of L4S Packet Identifier 2100 Description: A scalable congestion control needs to distinguish the 2101 packets it sends from those sent by Classic congestion controls (see 2102 the precise normative requirement wording in Section 4.1). 2104 Motivation: It needs to be possible for a network node to classify 2105 L4S packets without flow state into a queue that applies an L4S ECN 2106 marking behaviour and isolates L4S packets from the queuing delay of 2107 Classic packets. 2109 A.1.2. Accurate ECN Feedback 2111 Description: The transport protocol for a scalable congestion control 2112 needs to provide timely, accurate feedback about the extent of ECN 2113 marking experienced by all packets (see the precise normative 2114 requirement wording in Section 4.2). 2116 Motivation: Classic congestion controls only need feedback about the 2117 existence of a congestion episode within a round trip, not precisely 2118 how many packets were marked with ECN or dropped. Therefore, in 2119 2001, when ECN feedback was added to TCP [RFC3168], it could not 2120 inform the sender of more than one ECN mark per RTT. Since then, 2121 requirements for more accurate ECN feedback in TCP have been defined 2122 in [RFC7560] and [I-D.ietf-tcpm-accurate-ecn] specifies a change to 2123 the TCP protocol to satisfy these requirements. Most other transport 2124 protocols already satisfy this requirement (see Section 4.2). 2126 A.1.3. Capable of Replacement by Classic Congestion Control 2128 Description: It needs to be possible to replace the implementation of 2129 a scalable congestion control with a Classic control (see the precise 2130 normative requirement wording in Section 4.3). 
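As an illustration of this replaceability, the sketch below shows a purely hypothetical sender-side switch that an operator could use to disable L4S globally or for particular destinations; the class and configuration names are invented for this example, and the prefix is an RFC 5737 documentation address:

```python
# Hypothetical sketch of a sender-side L4S kill switch. PragueCC,
# CubicCC and cc_overrides are illustrative names, not from any real
# network stack.
import ipaddress

class CubicCC:
    """Stand-in for a Classic congestion control (sets ECT(0) or Not-ECT)."""
    ect_codepoint = 0

class PragueCC:
    """Stand-in for a scalable congestion control (sets ECT(1))."""
    ect_codepoint = 1

# Operator-configurable policy: disable L4S globally, or for a certain
# set of destination addresses, if insurmountable problems are found.
cc_overrides = {
    "global_l4s_enabled": True,
    "classic_only_prefixes": ["192.0.2.0/24"],  # example prefix (RFC 5737)
}

def select_cc(dst_ip: str):
    """Return the congestion control class to use for a new connection."""
    if not cc_overrides["global_l4s_enabled"]:
        return CubicCC
    addr = ipaddress.ip_address(dst_ip)
    for prefix in cc_overrides["classic_only_prefixes"]:
        if addr in ipaddress.ip_network(prefix):
            return CubicCC  # administratively forced back to Classic
    return PragueCC
```

The point of the sketch is only that the choice of congestion control is a per-connection policy decision, so it can be reverted without touching the rest of the transport implementation.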
2132 Motivation: L4S is an experimental protocol, therefore it seems 2133 prudent to be able to disable it at source in case of insurmountable 2134 problems, perhaps due to some unexpected interaction on a particular 2135 sender; over a particular path or network; with a particular receiver 2136 or even ultimately an insurmountable problem with the experiment as a 2137 whole. 2139 A.1.4. Fall back to Classic Congestion Control on Packet Loss 2141 Description: As well as responding to ECN markings in a scalable way, 2142 a scalable congestion control needs to react to packet loss in a way 2143 that will coexist safely with a Reno congestion control [RFC5681] 2144 (see the precise normative requirement wording in Section 4.3). 2146 Motivation: Part of the safety conditions for deploying a scalable 2147 congestion control on the public Internet is to make sure that it 2148 behaves properly when it builds a queue at a network bottleneck that 2149 has not been upgraded to support L4S. Packet loss can have many 2150 causes, but it usually has to be conservatively assumed that it is a 2151 sign of congestion. Therefore, on detecting packet loss, a scalable 2152 congestion control will need to fall back to Classic congestion 2153 control behaviour. If it does not comply, it could starve Classic 2154 traffic. 2156 A scalable congestion control can be used for different types of 2157 transport, e.g. for real-time media or for reliable transport like 2158 TCP. Therefore, the particular Classic congestion control behaviour 2159 to fall back on will need to be dependent on the specific congestion 2160 control implementation. In the particular case of DCTCP, the DCTCP 2161 specification [RFC8257] states that "It is RECOMMENDED that an 2162 implementation deal with loss episodes in the same way as 2163 conventional TCP." 
For safe deployment, Section 4.3 requires any 2164 specification of a scalable congestion control for the public 2165 Internet to define the above requirement as a "MUST". 2167 Even though a bottleneck is L4S capable, it might still become 2168 overloaded and have to drop packets. In this case, the sender may 2169 receive a high proportion of packets marked with the CE bit set and 2170 also experience loss. Current DCTCP implementations each react 2171 differently to this situation. At least one implementation reacts 2172 only to the drop signal (e.g. by halving the CWND) and at least 2173 another DCTCP implementation reacts to both signals (e.g. by halving 2174 the CWND due to the drop and also further reducing the CWND based on 2175 the proportion of marked packets). A third approach for the public 2176 Internet has been proposed that adjusts the loss response to result 2177 in a halving when combined with the ECN response. We believe that 2178 further experimentation is needed to understand what is the best 2179 behaviour for the public Internet, which may or may not be one of 2180 these existing approaches. 2182 A.1.5. Coexistence with Classic Congestion Control at Classic ECN 2183 bottlenecks 2185 Description: Monitoring has to be in place so that a non-L4S but ECN- 2186 capable AQM can be detected at path bottlenecks. This is in case 2187 such an AQM has been implemented in a shared queue, in which case any 2188 long-running scalable flow would predominate over any simultaneous 2189 long-running Classic flow sharing the queue. The precise requirement 2190 wording in Section 4.3 is written so that such a problem could either 2191 be resolved in real-time, or via administrative intervention.
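A toy sketch of the kind of monitoring this Description calls for is given below. It is not the [ecn-fallback] algorithm itself: it merely tracks the smoothed RTT and its mean deviation, EWMA-style, and flags a possible Classic ECN AQM when a flow sending ECT(1) sees CE marks together with large RTT variation (a Classic-like sawtooth). The gains and threshold are hypothetical.

```python
# Toy monitor (illustrative only): suspect a Classic ECN AQM when CE
# marks coincide with RTT variation that is large relative to the
# smoothed RTT. Gains g_srtt, g_var and the threshold are hypothetical.
class ClassicAqmMonitor:
    def __init__(self, g_srtt=1/8, g_var=1/4, threshold=0.2):
        self.srtt = None          # smoothed RTT (seconds)
        self.rttvar = 0.0         # EWMA of the mean deviation of RTT
        self.g_srtt = g_srtt
        self.g_var = g_var
        self.threshold = threshold

    def update(self, rtt_sample, saw_ce_mark):
        """Feed one RTT sample; return True if a Classic ECN AQM is suspected."""
        if self.srtt is None:
            self.srtt = rtt_sample
        err = rtt_sample - self.srtt
        self.srtt += self.g_srtt * err
        self.rttvar += self.g_var * (abs(err) - self.rttvar)
        # A flat RTT (L4S-like shallow marking) never triggers; a
        # sawtooth RTT plus CE marking does.
        return saw_ce_mark and self.rttvar > self.threshold * self.srtt
```

A real detector would additionally smooth over the duration of a Classic sawtooth and condition on congestion avoidance having stabilized, as described for [ecn-fallback] later in this section.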
2193 Motivation: Similarly to the discussion in Appendix A.1.4, this 2194 requirement in Section 4.3 is a safety condition to ensure an L4S 2195 congestion control coexists well with Classic flows when it builds a 2196 queue at a shared network bottleneck that has not been upgraded to 2197 support L4S. Nonetheless, if necessary, it is considered reasonable 2198 to resolve such problems over management timescales (possibly 2199 involving human intervention) because: 2201 * although a Classic flow can considerably reduce its throughput in 2202 the face of a competing scalable flow, it still makes progress and 2203 does not starve; 2205 * implementations of a Classic ECN AQM in a queue that is intended 2206 to be shared are believed to be rare; 2208 * detection of such AQMs is not always clear-cut; so focused out-of- 2209 band testing (or even contacting the relevant network operator) 2210 would improve certainty. 2212 Therefore, the relevant normative requirement (Section 4.3) is 2213 divided into three stages: monitoring, detection and action: 2215 Monitoring: Monitoring involves collection of the measurement data 2216 to be analysed. Monitoring is expressed as a 'MUST' for 2217 uncontrolled environments, although the placement of the 2218 monitoring function is left open. Whether monitoring has to be 2219 applied in real-time is expressed as a 'SHOULD'. This allows for 2220 the possibility that the operator of an L4S sender (e.g. a CDN) 2221 might prefer to test out-of-band for signs of Classic ECN AQMs, 2222 perhaps to avoid continually consuming resources to monitor live 2223 traffic. 2225 Detection: Detection involves analysis of the monitored data to 2226 detect the likelihood of a Classic ECN AQM. Detection can either 2227 directly detect actual coexistence problems between flows, or it 2228 can aim to identify AQM technologies that are likely to present 2229 coexistence problems, based on knowledge of AQMs deployed at the 2230 time. 
The requirements recommend that detection occurs live in 2231 real-time. However, detection is allowed to be deferred (e.g. it 2232 might involve further testing targeted at candidate AQMs); 2234 Action: This involves the act of switching the sender to a Classic 2235 congestion control. This might occur in real-time within the 2236 congestion control for the subsequent duration of a flow, or it 2237 might involve administrative action to switch to Classic 2238 congestion control for a specific interface or for a certain set 2239 of destination addresses. 2241 Instead of the sender taking action itself, the operator of the 2242 sender (e.g. a CDN) might prefer to ask the network operator to 2243 modify the Classic AQM's treatment of L4S packets; or to ensure 2244 L4S packets bypass the AQM; or to upgrade the AQM to support L4S 2245 (see the L4S operational guidance [I-D.ietf-tsvwg-l4sops]). Once 2246 L4S flows no longer shared the Classic ECN AQM they would 2247 obviously no longer detect it, and the requirement to act on it 2248 would no longer apply. 2250 The whole set of normative requirements concerning Classic ECN AQMs 2251 in Section 4.3 is worded so that it does not apply in controlled 2252 environments, such as private networks or data centre networks. CDN 2253 servers placed within an access ISP's network can be considered as a 2254 single controlled environment, but any onward networks served by the 2255 access network, including all the attached customer networks, would 2256 be unlikely to fall under the same degree of coordinated control. 2257 Monitoring is expressed as a 'MUST' for these uncontrolled segments 2258 of paths (e.g. beyond the access ISP in a home network), because 2259 there is a possibility that there might be a shared queue Classic ECN 2260 AQM in that segment. 
Nonetheless, the intent of the wording is to 2261 only require occasional monitoring of these uncontrolled regions, and 2262 not to burden CDN operators if monitoring never uncovers any 2263 potential problems. 2265 More detailed discussion of all the above options and alternatives 2266 can be found in the L4S operational guidance [I-D.ietf-tsvwg-l4sops]. 2268 Having said all the above, the approach recommended in Section 4.3 is 2269 to monitor, detect and act in real-time on live traffic. A passive 2270 monitoring algorithm to detect a Classic ECN AQM at the bottleneck 2271 and fall back to Classic congestion control is described in an 2272 extensive technical report [ecn-fallback], which also provides a link 2273 to Linux source code, and a large online visualization of its 2274 evaluation results. Very briefly, the algorithm primarily monitors 2275 RTT variation using the same algorithm that maintains the mean 2276 deviation of TCP's smoothed RTT, but it smooths over a duration of 2277 the order of a Classic sawtooth. The outcome is also conditioned on 2278 other metrics such as the presence of CE marking and congestion 2279 avoidance phase having stabilized. The report also identifies 2280 further work to improve the approach, for instance improvements with 2281 low capacity links and combining the measurements with a cache of 2282 what had been learned about a path in previous connections. The 2283 report also suggests alternative approaches. 2285 Although using passive measurements within live traffic (as above) 2286 can detect a Classic ECN AQM, it is much harder (perhaps impossible) 2287 to determine whether or not the AQM is in a shared queue. 2288 Nonetheless, this is much easier using active test traffic out-of- 2289 band, because two flows can be used. Section 4 of the same 2290 report [ecn-fallback] describes a simple technique to detect a 2291 Classic ECN AQM and determine whether it is in a shared queue, 2292 summarized here. 
2294 An L4S-enabled test server could be set up so that, when a test 2295 client accesses it, it serves a script that gets the client to open 2296 two parallel long-running flows. It could serve one with a Classic 2297 congestion control (C, that sets ECT(0)) and one with a scalable CC 2298 (L, that sets ECT(1)). If neither flow induces any ECN marks, it can 2299 be presumed the path does not contain a Classic ECN AQM. If either 2300 flow induces some ECN marks, the server could measure the relative 2301 flow rates and round trip times of the two flows. Table 2 shows the 2302 AQM that can be inferred for various cases (presuming the AQM 2303 behaviours known at the time of writing). 2305 +========+=======+========================+ 2306 | Rate | RTT | Inferred AQM | 2307 +========+=======+========================+ 2308 | L > C | L = C | Classic ECN AQM (FIFO) | 2309 +--------+-------+------------------------+ 2310 | L = C | L = C | Classic ECN AQM (FQ) | 2311 +--------+-------+------------------------+ 2312 | L = C | L < C | FQ-L4S AQM | 2313 +--------+-------+------------------------+ 2314 | L ~= C | L < C | Coupled DualQ AQM | 2315 +--------+-------+------------------------+ 2317 Table 2: Out-of-band testing with two 2318 parallel flows. L:=L4S, C:=Classic. 2320 Finally, we motivate the recommendation in Section 4.3 that a 2321 scalable congestion control is not expected to change to setting 2322 ECT(0) while it adapts its behaviour to coexist with Classic flows. 2323 This is because the sender needs to continue to check whether it made 2324 the right decision - and switch back if it was wrong, or if a 2325 different link becomes the bottleneck: 2327 * If, as recommended, the sender changes only its behaviour but not 2328 its codepoint to Classic, its codepoint will still be compatible 2329 with either an L4S or a Classic AQM. 
If the bottleneck does 2330 actually support both, it will still classify ECT(1) into the same 2331 L4S queue, where the sender can measure that switching to Classic 2332 behaviour was wrong, so that it can switch back. 2334 * In contrast, if the sender changes both its behaviour and its 2335 codepoint to Classic, even if the bottleneck supports both, it 2336 will classify ECT(0) into the Classic queue, reinforcing the 2337 sender's incorrect decision so that it never switches back. 2339 * Also, not changing codepoint avoids the risk of being flipped to a 2340 different path by a load balancer or multipath routing that hashes 2341 on the whole of the ex-ToS byte (unfortunately still a common 2342 pathology). 2344 Note that if a flow is configured to _only_ use a Classic congestion 2345 control, it is then entirely appropriate not to use ECT(1). 2347 A.1.6. Reduce RTT dependence 2349 Description: A scalable congestion control needs to reduce RTT bias 2350 as much as possible at least over the low to typical range of RTTs 2351 that will interact in the intended deployment scenario (see the 2352 precise normative requirement wording in Section 4.3). 2354 Motivation: The throughput of Classic congestion controls is known to 2355 be inversely proportional to RTT, so one would expect flows over very 2356 low RTT paths to nearly starve flows over larger RTTs. However, 2357 Classic congestion controls have never allowed a very low RTT path to 2358 exist because they induce a large queue. For instance, consider two 2359 paths with base RTT 1 ms and 100 ms. If a Classic congestion control 2360 induces a 100 ms queue, it turns these RTTs into 101 ms and 200 ms 2361 leading to a throughput ratio of about 2:1. Whereas if a scalable 2362 congestion control induces only a 1 ms queue, the ratio is 2:101, 2363 leading to a throughput ratio of about 50:1. 
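The throughput-ratio arithmetic in the example above can be checked directly, under the stated assumption that Classic throughput is inversely proportional to total RTT (base RTT plus queuing delay):

```python
# Check of the RTT-bias example: throughput assumed inversely
# proportional to total RTT (base RTT + induced queue delay).
def throughput_ratio(base_rtt_short_ms, base_rtt_long_ms, queue_ms):
    """Ratio of the short-RTT flow's throughput to the long-RTT flow's."""
    rtt_short = base_rtt_short_ms + queue_ms
    rtt_long = base_rtt_long_ms + queue_ms
    return rtt_long / rtt_short

# Classic AQM inducing a 100 ms queue: base RTTs of 1 ms and 100 ms
# become 101 ms and 200 ms, so the ratio is about 2:1.
classic = throughput_ratio(1, 100, 100)   # ~1.98

# Scalable control over an AQM inducing only a 1 ms queue: 2 ms vs
# 101 ms, so the ratio is about 50:1, hence the requirement to reduce
# RTT bias.
scalable = throughput_ratio(1, 100, 1)    # 50.5
```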
2365 Therefore, with very small queues, long RTT flows will essentially 2366 starve, unless scalable congestion controls comply with this 2367 requirement in Section 4.3. 2369 The RTT bias in current Classic congestion controls works 2370 satisfactorily when the RTT is higher than typical, and L4S does not 2371 change that. So, there is no additional requirement in Section 4.3 2372 for high RTT L4S flows to remove RTT bias - they can but they don't 2373 have to. 2375 A.1.7. Scaling down to fractional congestion windows 2377 Description: A scalable congestion control needs to remain responsive 2378 to congestion when typical RTTs over the public Internet are 2379 significantly smaller because they are no longer inflated by queuing 2380 delay (see the precise normative requirement wording in Section 4.3). 2382 Motivation: As currently specified, the minimum congestion window of 2383 ECN-capable TCP (and its derivatives) is expected to be 2 sender 2384 maximum segment sizes (SMSS), or 1 SMSS after a retransmission 2385 timeout. Once the congestion window reaches this minimum, if there 2386 is further ECN-marking, TCP is meant to wait for a retransmission 2387 timeout before sending another segment (see section 6.1.2 of the ECN 2388 spec [RFC3168]). In practice, most known window-based congestion 2389 control algorithms become unresponsive to ECN congestion signals at 2390 this point. No matter how much ECN marking, the congestion window no 2391 longer reduces. Instead, the sender's lack of any further congestion 2392 response forces the queue to grow, overriding any AQM and increasing 2393 queuing delay (making the window large enough to become responsive 2394 again). This can result in a stable but deeper queue, or it might 2395 drive the queue to loss, then the retransmission timeout mechanism 2396 acts as a backstop. 
2398 Most window-based congestion controls for other transport protocols 2399 have a similar minimum window, albeit when measured in bytes for 2400 those that use smaller packets. 2402 L4S mechanisms significantly reduce queueing delay so, over the same 2403 path, the RTT becomes lower. Then this problem becomes surprisingly 2404 common [sub-mss-prob]. This is because, for the same link capacity, 2405 smaller RTT implies a smaller window. For instance, consider a 2406 residential setting with an upstream broadband Internet access of 8 2407 Mb/s, assuming a max segment size of 1500 B. Two upstream flows will 2408 each have the minimum window of 2 SMSS if the RTT is 6 ms or less, 2409 which is quite common when accessing a nearby data centre. So, any 2410 more than two such parallel TCP flows will become unresponsive to ECN 2411 and increase queuing delay. 2413 Unless scalable congestion controls address the requirement in 2414 Section 4.3 from the start, they will frequently become unresponsive 2415 to ECN, negating the low latency benefit of L4S, for themselves and 2416 for others. 2418 That would seem to imply that scalable congestion controllers ought 2419 to be required to be able to work with a congestion window less than 2420 1 SMSS. For instance, if an ECN-capable TCP gets an ECN-mark when it 2421 is already sitting at a window of 1 SMSS, RFC 3168 requires it to 2422 defer sending for a retransmission timeout. A less drastic but more 2423 complex mechanism can maintain a congestion window less than 1 SMSS 2424 (significantly less if necessary), as described in [Ahmed19]. Other 2425 approaches are likely to be feasible. 2427 However, the requirement in Section 4.3 is worded as a "SHOULD" 2428 because it is believed that the existence of a minimum window is not 2429 all bad. When competing with an unresponsive flow, a minimum window 2430 naturally protects the flow from starvation by at least keeping some 2431 data flowing.
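The residential example above can be reproduced as a simple bandwidth-delay-product calculation, assuming the flows share the link equally:

```python
# Reproducing the 8 Mb/s example: per-flow congestion window in
# segments, assuming n_flows share the link rate equally.
def window_segments(link_rate_bps, rtt_s, mss_bytes, n_flows):
    """Per-flow window (in segments) implied by the bandwidth-delay product."""
    bdp_bytes = link_rate_bps * rtt_s / 8   # bits -> bytes
    return bdp_bytes / (mss_bytes * n_flows)

# 8 Mb/s upstream, 6 ms RTT, 1500 B segments:
# BDP = 8e6 * 0.006 / 8 = 6000 B = 4 SMSS, i.e. 2 SMSS for each of
# two flows, exactly the minimum window.
w2 = window_segments(8e6, 0.006, 1500, 2)   # 2.0

# A third flow pushes the per-flow window below 2 SMSS, where the
# control can no longer reduce in response to ECN marking.
w3 = window_segments(8e6, 0.006, 1500, 3)   # ~1.33
```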
2433 By stating the requirement to go lower than 1 SMSS as a "SHOULD", 2434 while the requirement in RFC 3168 still stands as well, we shall be 2435 able to watch the choices of minimum window evolve in different 2436 scalable congestion controllers. 2438 A.1.8. Measuring Reordering Tolerance in Time Units 2440 Description: When detecting loss, a scalable congestion control needs 2441 to be tolerant to reordering over an adaptive time interval, which 2442 scales with throughput, rather than counting only in fixed units of 2443 packets, which does not scale (see the precise normative requirement 2444 wording in Section 4.3). 2446 Motivation: A primary purpose of L4S is scalable throughput (it's in 2447 the name). Scalability in all dimensions is, of course, also a goal 2448 of all IETF technology. The inverse linear congestion response in 2449 Section 4.3 is necessary, but not sufficient, to solve the congestion 2450 control scalability problem identified in [RFC3649]. As well as 2451 maintaining frequent ECN signals as rate scales, it is also important 2452 to ensure that a potentially false perception of loss does not limit 2453 throughput scaling. 2455 End-systems cannot know whether a missing packet is due to loss or 2456 reordering, except in hindsight - if it appears later. So they can 2457 only deem that there has been a loss if a gap in the sequence space 2458 has not been filled, either after a certain number of subsequent 2459 packets has arrived (e.g. the 3 DupACK rule of standard TCP 2460 congestion control [RFC5681]) or after a certain amount of time 2461 (e.g. the RACK approach [RFC8985]). 2463 As we attempt to scale packet rate over the years: 2465 * Even if only _some_ sending hosts still deem that loss has 2466 occurred by counting reordered packets, _all_ networks will have 2467 to keep reducing the time over which they keep packets in order. 
2468 If some link technologies keep the time within which reordering 2469 occurs roughly unchanged, then loss over these links, as perceived 2470 by these hosts, will appear to continually rise over the years. 2472 * In contrast, if all senders detect loss in units of time, the time 2473 over which the network has to keep packets in order stays roughly 2474 invariant. 2476 Therefore hosts have an incentive to detect loss in time units (so as 2477 not to fool themselves too often into detecting losses when there are 2478 none). And for hosts that are changing their congestion control 2479 implementation to L4S, there is no downside to including time-based 2480 loss detection code in the change (loss recovery implemented in 2481 hardware is an exception, covered later). Therefore requiring L4S 2482 hosts to detect loss in time-based units would not be a burden. 2484 If the requirement in Section 4.3 were not placed on L4S hosts, even 2485 though it would be no burden on hosts to comply, all networks would 2486 face unnecessary uncertainty over whether some L4S hosts might be 2487 detecting loss by counting packets. Then _all_ link technologies 2488 will have to unnecessarily keep reducing the time within which 2489 reordering occurs. That is not a problem for some link technologies, 2490 but it becomes increasingly challenging for other link technologies 2491 to continue to scale, particularly those relying on channel bonding 2492 for scaling, such as LTE, 5G and DOCSIS. 2494 Given Internet paths traverse many link technologies, any scaling 2495 limit for these more challenging access link technologies would 2496 become a scaling limit for the Internet as a whole. 2498 It might be asked how it helps to place this loss detection 2499 requirement only on L4S hosts, because networks will still face 2500 uncertainty over whether non-L4S flows are detecting loss by counting 2501 DupACKs. 
The answer is that those link technologies for which it is 2502 challenging to keep squeezing the reordering time will only need to 2503 do so for non-L4S traffic (which they can do because the L4S 2504 identifier is visible at the IP layer). Therefore, they can focus 2505 their processing and memory resources into scaling non-L4S (Classic) 2506 traffic. Then, the higher the proportion of L4S traffic, the less of 2507 a scaling challenge they will have. 2509 To summarize, there is no reason for L4S hosts not to be part of the 2510 solution instead of part of the problem. 2512 Requirement ("MUST") or recommendation ("SHOULD")? As explained 2513 above, this is a subtle interoperability issue between hosts and 2514 networks, which seems to need a "MUST". Unless networks can be 2515 certain that all L4S hosts follow the time-based approach, they still 2516 have to cater for the worst case - continually squeeze reordering 2517 into a smaller and smaller duration - just for hosts that might be 2518 using the counting approach. However, it was decided to express this 2519 as a recommendation, using "SHOULD". The main justification was that 2520 networks can still be fairly certain that L4S hosts will follow this 2521 recommendation, because following it offers only gain and no pain. 2523 Details: 2525 The speed of loss recovery is much more significant for short flows 2526 than long, therefore a good compromise is to adapt the reordering 2527 window; from a small fraction of the RTT at the start of a flow, to a 2528 larger fraction of the RTT for flows that continue for many round 2529 trips. 2531 This is broadly the approach adopted by TCP RACK (Recent 2532 ACKnowledgements) [RFC8985]. However, RACK starts with the 3 DupACK 2533 approach, because the RTT estimate is not necessarily stable. 
As 2534 long as the initial window is paced, such initial use of 3 DupACK 2535 counting would amount to time-based loss detection and therefore 2536 would satisfy the time-based loss detection recommendation of 2537 Section 4.3. This is because pacing of the initial window would 2538 ensure that 3 DupACKs early in the connection would be spread over a 2539 small fraction of the round trip. 2541 As mentioned above, hardware implementations of loss recovery using 2542 DupACK counting exist (e.g. some implementations of RoCEv2 for RDMA). 2543 For low latency, these implementations can change their congestion 2544 control to implement L4S, because the congestion control (as distinct 2545 from loss recovery) is implemented in software. But they cannot 2546 easily satisfy this loss recovery requirement. However, it is 2547 believed they do not need to, because such implementations are 2548 believed to solely exist in controlled environments, where the 2549 network technology keeps reordering extremely low anyway. This is 2550 why controlled environments with hardly any reordering are excluded 2551 from the scope of the normative recommendation in Section 4.3. 2553 Detecting loss in time units also prevents the ACK-splitting attacks 2554 described in [Savage-TCP]. 2556 A.2. Scalable Transport Protocol Optimizations 2558 A.2.1. Setting ECT in Control Packets and Retransmissions 2560 Description: This item concerns TCP and its derivatives (e.g. SCTP) 2561 as well as RTP/RTCP [RFC6679]. The original specification of ECN for 2562 TCP precluded the use of ECN on control packets and retransmissions. 2563 Similarly RFC 6679 precludes the use of ECT on RTCP datagrams, in 2564 case the path changes after it has been checked for ECN traversal. 2565 To improve performance, scalable transport protocols ought to enable 2566 ECN at the IP layer in TCP control packets (SYN, SYN-ACK, pure ACKs, 2567 etc.) and in retransmitted packets. The same is true for other 2568 transports, e.g. 
SCTP, RTCP. 2570 Motivation (TCP): RFC 3168 prohibits the use of ECN on these types of 2571 TCP packet, based on a number of arguments. This means these packets 2572 are not protected from congestion loss by ECN, which considerably 2573 harms performance, particularly for short flows. 2574 ECN++ [I-D.ietf-tcpm-generalized-ecn] proposes experimental use of 2575 ECN on all types of TCP packet as long as AccECN 2576 feedback [I-D.ietf-tcpm-accurate-ecn] is available (which itself 2577 satisfies the accurate feedback requirement in Section 4.2 for using 2578 a scalable congestion control). 2580 Motivation (RTCP): L4S experiments in general will need to observe 2581 the rule in the RTP ECN spec [RFC6679] that precludes ECT on RTCP 2582 datagrams. Nonetheless, as ECN usage becomes more widespread, it 2583 would be useful to conduct specific experiments with ECN-capable RTCP 2584 to gather data on whether such caution is necessary. 2586 A.2.2. Faster than Additive Increase 2588 Description: It would improve performance if scalable congestion 2589 controls did not limit their congestion window increase to the 2590 standard additive increase of 1 SMSS per round trip [RFC5681] during 2591 congestion avoidance. The same is true for derivatives of TCP 2592 congestion control, including similar approaches used for real-time 2593 media. 2595 Motivation: As currently defined [RFC8257], DCTCP uses the 2596 traditional Reno additive increase in congestion avoidance phase. 2597 When the available capacity suddenly increases (e.g. when another 2598 flow finishes, or if radio capacity increases) it can take very many 2599 round trips to take advantage of the new capacity. TCP 2600 Cubic [RFC8312] was designed to solve this problem, but as flow rates 2601 have continued to increase, the delay accelerating into available 2602 capacity has become prohibitive. See, for instance, the examples in 2603 Section 5.1 of the L4S architecture [I-D.ietf-tsvwg-l4s-arch]. 
Even 2604 when out of its Reno-compatibility mode, every 8x scaling of Cubic's 2605 flow rate leads to 2x more acceleration delay. 2607 In the steady state, DCTCP induces about 2 ECN marks per round trip, 2608 so it is possible to quickly detect when these signals have 2609 disappeared and seek available capacity more rapidly, while 2610 minimizing the impact on other flows (Classic and 2611 scalable) [LinuxPacedChirping]. Alternatively, approaches such as 2612 Adaptive Acceleration (A2DTCP [A2DTCP]) have been proposed to address 2613 this problem in data centres, which might be deployable over the 2614 public Internet. 2616 A.2.3. Faster Convergence at Flow Start 2618 Description: It would improve performance if scalable congestion 2619 controls converged (reached their steady-state share of the capacity) 2620 faster than Classic congestion controls or at least no slower. This 2621 affects the flow start behaviour of any L4S congestion control 2622 derived from a Classic transport that uses TCP slow start, including 2623 those for real-time media. 2625 Motivation: As an example, a new DCTCP flow takes longer than a 2626 Classic congestion control to obtain its share of the capacity of the 2627 bottleneck when there are already ongoing flows using the bottleneck 2628 capacity. In a data centre environment DCTCP takes about a factor of 2629 1.5 to 2 longer to converge due to the much higher typical level of 2630 ECN marking that DCTCP background traffic induces, which causes new 2631 flows to exit slow start early [Alizadeh-stability]. In testing for 2632 use over the public Internet the convergence time of DCTCP relative 2633 to a regular loss-based TCP slow start is even less 2634 favourable [Paced-Chirping] due to the shallow ECN marking threshold 2635 needed for L4S. It is exacerbated by the typically greater mismatch 2636 between the link rate of the sending host and typical Internet access 2637 bottlenecks. 
This problem is detrimental in general, but would 2638 particularly harm the performance of short flows relative to Classic 2639 congestion controls. 2641 Appendix B. Compromises in the Choice of L4S Identifier 2643 This appendix is informative, not normative. As explained in 2644 Section 2, there is insufficient space in the IP header (v4 or v6) to 2645 fully accommodate every requirement. So the choice of L4S identifier 2646 involves tradeoffs. This appendix records the pros and cons of the 2647 choice that was made. 2649 Non-normative recap of the chosen codepoint scheme: 2651 Packets with ECT(1) and conditionally packets with CE signify L4S 2652 semantics as an alternative to the semantics of Classic 2653 ECN [RFC3168], specifically: 2655 - The ECT(1) codepoint signifies that the packet was sent by an 2656 L4S-capable sender. 2658 - Given shortage of codepoints, both L4S and Classic ECN sides of 2659 an AQM have to use the same CE codepoint to indicate that a 2660 packet has experienced congestion. If a packet that had 2661 already been marked CE in an upstream buffer arrived at a 2662 subsequent AQM, this AQM would then have to guess whether to 2663 classify CE packets as L4S or Classic ECN. Choosing the L4S 2664 treatment is a safer choice, because then a few Classic packets 2665 might arrive early, rather than a few L4S packets arriving 2666 late. 2668 - Additional information might be available if the classifier 2669 were transport-aware. Then it could classify a CE packet for 2670 Classic ECN treatment if the most recent ECT packet in the same 2671 flow had been marked ECT(0). However, the L4S service ought 2672 not to need transport-layer awareness. 
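Non-normatively, the codepoint scheme recapped above can be sketched in a few lines. The two-bit IP-ECN codepoint values are those defined in the ECN spec [RFC3168]; the classify() helper and its name are purely illustrative assumptions (no such function is defined by this specification), showing why an ambiguous CE packet defaults to the L4S treatment:

```python
# IP-ECN field codepoints [RFC3168] -- the two least significant bits
# of the former IPv4 ToS byte / IPv6 Traffic Class octet.
NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def classify(ecn_bits):
    """Illustrative DualQ classifier: return 'L4S' or 'Classic'.

    CE is ambiguous (an upstream AQM of either type may have set it);
    classifying it as L4S is the safer guess, because a few Classic
    packets arriving early is preferable to L4S packets arriving late.
    """
    if ecn_bits in (ECT1, CE):
        return 'L4S'
    return 'Classic'   # Not-ECT and ECT(0)
```

A transport-aware classifier could instead route a CE packet to the Classic queue when the most recent ECT packet of the same flow carried ECT(0), but as noted above, the L4S service ought not to need such transport-layer awareness.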
2674 Cons: 2676 Consumes the last ECN codepoint: The L4S service could potentially 2677 supersede the service provided by Classic ECN, therefore using 2678 ECT(1) to identify L4S packets could ultimately mean that the 2679 ECT(0) codepoint was 'wasted' purely to distinguish one form of 2680 ECN from its successor. 2682 ECN hard in some lower layers: It is not always possible to support 2683 the equivalent of an IP-ECN field in an AQM acting in a buffer 2684 below the IP layer [I-D.ietf-tsvwg-ecn-encap-guidelines]. Then, 2685 depending on the lower layer scheme, the L4S service might have to 2686 drop rather than mark frames even though they might encapsulate an 2687 ECN-capable packet. 2689 Risk of reordering Classic CE packets within a flow: Classifying all 2690 CE packets into the L4S queue risks any CE packets that were 2691 originally ECT(0) being incorrectly classified as L4S. If there 2692 were delay in the Classic queue, these incorrectly classified CE 2693 packets would arrive early, which is a form of reordering. 2694 Reordering within a microflow can cause TCP senders (and senders 2695 of similar transports) to retransmit spuriously. However, the 2696 risk of spurious retransmissions would be extremely low for the 2697 following reasons: 2699 1. It is quite unusual to experience queuing at more than one 2700 bottleneck on the same path (the available capacities have to 2701 be identical). 2703 2. In only a subset of these unusual cases would the first 2704 bottleneck support Classic ECN marking while the second 2705 supported L4S ECN marking, which would be the only scenario 2706 where some ECT(0) packets could be CE marked by an AQM 2707 supporting Classic ECN then the remainder experienced further 2708 delay through the Classic side of a subsequent L4S DualQ AQM. 2710 3. Even then, when a few packets are delivered early, it takes 2711 very unusual conditions to cause a spurious retransmission, in 2712 contrast to when some packets are delivered late. 
The first 2713 bottleneck has to apply CE-marks to at least N contiguous 2714 packets and the second bottleneck has to inject an 2715 uninterrupted sequence of at least N of these packets between 2716 two packets earlier in the stream (where N is the reordering 2717 window that the transport protocol allows before it considers 2718 a packet to be lost). 2720 For example, consider N=3, and consider the sequence of 2721 packets 100, 101, 102, 103,... and imagine that packets 2722 150, 151, 152 from later in the flow are injected as follows: 2723 100, 150, 151, 101, 152, 102, 103... If this were late 2724 reordering, even one packet arriving out of sequence would 2725 trigger a spurious retransmission, but there is no spurious 2726 retransmission here with early reordering, because packet 2727 101 moves the cumulative ACK counter forward before 3 2728 packets have arrived out of order. Later, when packets 2729 148, 149, 153... arrive, even though there is a 3-packet 2730 hole, there will be no problem, because the packets to fill 2731 the hole are already in the receive buffer. 2733 4. Even with the current TCP recommendation of N=3 [RFC5681], 2734 spurious retransmissions will be unlikely for all the above 2735 reasons. As RACK [RFC8985] is becoming widely deployed, it 2736 tends to adapt its reordering window to a larger value of N, 2737 which will make the chance of a contiguous sequence of N early 2738 arrivals vanishingly small. 2740 5. Even a run of 2 CE marks within a Classic ECN flow is 2741 unlikely, given FQ-CoDel is the only known widely deployed AQM 2742 that supports Classic ECN marking and it takes great care to 2743 separate out flows and to space any markings evenly along each 2744 flow. 2746 It is extremely unlikely that the above set of 5 eventualities, 2747 each unusual in itself, would all happen 2748 simultaneously. But, even if they did, the consequences would 2749 hardly be dire: the odd spurious fast retransmission.
Whenever 2750 the traffic source (a Classic congestion control) mistakes the 2751 reordering of a string of CE marks for a loss, one might think 2752 that it will reduce its congestion window as well as emitting a 2753 spurious retransmission. However, it would have already reduced 2754 its congestion window when the CE markings arrived early. If it 2755 is using ABE [RFC8511], it might reduce cwnd a little more for a 2756 loss than for a CE mark. But it will revert that reduction once 2757 it detects that the retransmission was spurious. 2759 In conclusion, the impact of early reordering on spurious 2760 retransmissions due to CE being ambiguous will generally be 2761 vanishingly small. 2763 Insufficient anti-replay window in some pre-existing VPNs: If delay 2764 is reduced for a subset of the flows within a VPN, the anti-replay 2765 feature of some VPNs is known to potentially mistake the 2766 difference in delay for a replay attack. Section 6.2 recommends 2767 that the anti-replay window at the VPN egress is sufficiently 2768 sized, as required by the relevant specifications. However, in 2769 some VPN implementations the maximum anti-replay window is 2770 insufficient to cater for a large delay difference at prevailing 2771 packet rates. Section 6.2 suggests alternative work-rounds for 2772 such cases, but end-users using L4S over a VPN will need to be 2773 able to recognize the symptoms of this problem, in order to seek 2774 out these work-rounds. 2776 Hard to distinguish Classic ECN AQM: With this scheme, when a source 2777 receives ECN feedback, it is not explicitly clear which type of 2778 AQM generated the CE markings. This is not a problem for Classic 2779 ECN sources that send ECT(0) packets, because an L4S AQM will 2780 recognize the ECT(0) packets as Classic and apply the appropriate 2781 Classic ECN marking behaviour. 
2783 However, in the absence of explicit disambiguation of the CE 2784 markings, an L4S source needs to use heuristic techniques to work 2785 out which type of congestion response to apply (see 2786 Appendix A.1.5). Otherwise, if long-running Classic flow(s) are 2787 sharing a Classic ECN AQM bottleneck with long-running L4S 2788 flow(s), which then apply an L4S response to Classic CE signals, 2789 the L4S flows would outcompete the Classic flow(s). Experiments 2790 have shown that L4S flows can take about 20 times more capacity 2791 share than equivalent Classic flows. Nonetheless, as link 2792 capacity reduces (e.g. to 4 Mb/s), the inequality reduces. So 2793 Classic flows always make progress and are not starved. 2795 When L4S was first proposed (in 2015, 14 years after the Classic 2796 ECN spec [RFC3168] was published), it was believed that Classic 2797 ECN AQMs had failed to be deployed, because research measurements 2798 had found little or no evidence of CE marking. In subsequent 2799 years Classic ECN was included in per-flow-queuing (FQ) 2800 deployments; however, an FQ scheduler stops an L4S flow 2801 outcompeting Classic, because it enforces equality between flow 2802 rates. It is not known whether there have been any non-FQ 2803 deployments of Classic ECN AQMs in the subsequent years, or 2804 whether there will be in future. 2806 An algorithm for detecting a Classic ECN AQM as soon as a flow 2807 stabilizes after start-up has been proposed [ecn-fallback] (see 2808 Appendix A.1.5 for a brief summary). Testbed evaluations of v2 of 2809 the algorithm have shown detection is reasonably good for Classic 2810 ECN AQMs, in a wide range of circumstances. However, although it 2811 can correctly detect an L4S ECN AQM in many circumstances, it is 2812 often incorrect at low link capacities and/or high RTTs. Although 2813 this is the safe way round, there is a danger that it will 2814 discourage use of the algorithm.
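To make concrete what is at stake in this ambiguity, the following non-normative sketch contrasts the Classic response [RFC3168] with a DCTCP-style scalable response [RFC8257]. The reduce_cwnd() function, its parameters and the classic_aqm_detected flag are assumptions made here for illustration (this is not the [ecn-fallback] algorithm itself); deciding how to set that flag is precisely the heuristic problem discussed above:

```python
def reduce_cwnd(cwnd, alpha, classic_aqm_detected):
    """Illustrative per-round-trip response to CE feedback.

    alpha is the moving-average fraction of CE-marked packets, as
    maintained by DCTCP [RFC8257]; classic_aqm_detected stands in for
    whatever heuristic (e.g. [ecn-fallback]) the sender uses.
    """
    if classic_aqm_detected:
        return cwnd * 0.5              # Classic response: halve the window
    return cwnd * (1 - alpha / 2.0)    # Scalable response: proportionate to marking
```

Applying the shallow scalable reduction against a Classic ECN AQM is what allows an L4S flow to outcompete Classic flows; only when every packet is marked (alpha = 1) do the two responses coincide.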
2816 Non-L4S service for control packets: Solely for the case of TCP, the 2817 Classic ECN RFCs [RFC3168] and [RFC5562] require a sender to clear 2818 the ECN field to Not-ECT on retransmissions and on certain control 2819 packets, specifically pure ACKs, window probes and SYNs. When L4S 2820 packets are classified by the ECN field, these TCP control packets 2821 would not be classified into an L4S queue, and could therefore be 2822 delayed relative to the other packets in the flow. This would not 2823 cause reordering (because retransmissions are already out of 2824 order, and these control packets typically carry no data). 2825 However, it would make critical TCP control packets more 2826 vulnerable to loss and delay. To address this problem, 2827 ECN++ [I-D.ietf-tcpm-generalized-ecn] proposes an experiment in 2828 which all TCP control packets and retransmissions are ECN-capable 2829 as long as appropriate ECN feedback is available in each case. 2831 Pros: 2833 Should work e2e: The ECN field generally propagates end-to-end 2834 across the Internet without being wiped or mangled, at least over 2835 fixed networks. Unlike the DSCP, the setting of the ECN field is 2836 at least meant to be forwarded unchanged by networks that do not 2837 support ECN. 2839 Should work in tunnels: The L4S identifiers work across and within 2840 any tunnel that propagates the ECN field in any of the variant 2841 ways it has been defined since ECN-tunneling was first specified 2842 in the year 2001 [RFC3168]. However, it is likely that some 2843 tunnels still do not implement ECN propagation at all. 2845 Should work for many link technologies: At most, but not all, path 2846 bottlenecks, there is IP-awareness, so that L4S AQMs can be located 2847 where the IP-ECN field can be manipulated.
Bottlenecks at lower 2848 layer nodes without IP-awareness either have to use drop to signal 2849 congestion or a specific congestion notification facility has to 2850 be defined for that link technology, including propagation to and 2851 from IP-ECN. The programme to define these is progressing and in 2852 each case so far the scheme already defined for ECN inherently 2853 supports L4S as well (see Section 6.1). 2855 Could migrate to one codepoint: If all Classic ECN senders 2856 eventually evolve to use the L4S service, the ECT(0) codepoint 2857 could be reused for some future purpose, but only once use of 2858 ECT(0) packets had reduced to zero, or near-zero, which might 2859 never happen. 2861 L4 not required: Being based on the ECN field, this scheme does not 2862 need the network to access transport layer flow identifiers. 2863 Nonetheless, it does not preclude solutions that do. 2865 Appendix C. Potential Competing Uses for the ECT(1) Codepoint 2867 The ECT(1) codepoint of the ECN field has already been assigned once 2868 for the ECN nonce [RFC3540], which has now been categorized as 2869 historic [RFC8311]. ECN is probably the only remaining field in the 2870 Internet Protocol that is common to IPv4 and IPv6 and still has 2871 potential to work end-to-end, with tunnels and with lower layers. 2872 Therefore, ECT(1) should not be reassigned to a different 2873 experimental use (L4S) without carefully assessing competing 2874 potential uses. These fall into the following categories: 2876 C.1. Integrity of Congestion Feedback 2878 Receiving hosts can fool a sender into downloading faster by 2879 suppressing feedback of ECN marks (or of losses if retransmissions 2880 are not necessary or available otherwise). 2882 The historic ECN nonce protocol [RFC3540] proposed that a TCP sender 2883 could set either of ECT(0) or ECT(1) in each packet of a flow and 2884 remember the sequence it had set. 
If any packet was lost or 2885 congestion marked, the receiver would miss that bit of the sequence. 2886 An ECN Nonce receiver had to feed back the least significant bit of 2887 the sum, so it could not suppress feedback of a loss or mark without 2888 a 50-50 chance of guessing the sum incorrectly. 2890 It is highly unlikely that ECT(1) will be needed for integrity 2891 protection in future. The ECN Nonce RFC [RFC3540] has been 2892 reclassified as historic, partly because other ways have been 2893 developed to protect feedback integrity of TCP and other 2894 transports [RFC8311] that do not consume a codepoint in the IP 2895 header. For instance: 2897 * The sender can test the integrity of the receiver's feedback by 2898 occasionally setting the IP-ECN field to a value normally only set 2899 by the network. Then it can test whether the receiver's feedback 2900 faithfully reports what it expects (see para 2 of Section 20.2 of 2901 the ECN spec [RFC3168]). This works for loss and it will work for 2902 the accurate ECN feedback [RFC7560] intended for L4S. 2904 * A network can enforce a congestion response to its ECN markings 2905 (or packet losses) by auditing congestion exposure 2906 (ConEx) [RFC7713]. Whether the receiver or a downstream network 2907 is suppressing congestion feedback or the sender is unresponsive 2908 to the feedback, or both, ConEx audit can neutralise any advantage 2909 that any of these three parties would otherwise gain. 2911 * The TCP authentication option (TCP-AO [RFC5925]) can be used to 2912 detect any tampering with TCP congestion feedback (whether 2913 malicious or accidental). TCP's congestion feedback fields are 2914 immutable end-to-end, so they are amenable to TCP-AO protection, 2915 which covers the main TCP header and TCP options by default. 2916 However, TCP-AO is often too brittle to use on many end-to-end 2917 paths, where middleboxes can make verification fail in their 2918 attempts to improve performance or security, e.g.
by 2919 resegmentation or shifting the sequence space. 2921 C.2. Notification of Less Severe Congestion than CE 2923 Various researchers have proposed to use ECT(1) as a less severe 2924 congestion notification than CE, particularly to enable flows to fill 2925 available capacity more quickly after an idle period, when another 2926 flow departs or when a flow starts, e.g. VCP [VCP], Queue View 2927 (QV) [QV]. 2929 Before assigning ECT(1) as an identifier for L4S, we must carefully 2930 consider whether it might be better to hold ECT(1) in reserve for 2931 future standardisation of rapid flow acceleration, which is an 2932 important and enduring problem [RFC6077]. 2934 Pre-Congestion Notification (PCN) is another scheme that assigns 2935 alternative semantics to the ECN field. It uses ECT(1) to signify a 2936 less severe level of pre-congestion notification than CE [RFC6660]. 2937 However, the ECN field only takes on the PCN semantics if packets 2938 carry a Diffserv codepoint defined to indicate PCN marking within a 2939 controlled environment. PCN is required to be applied solely to the 2940 outer header of a tunnel across the controlled region in order not to 2941 interfere with any end-to-end use of the ECN field. Therefore a PCN 2942 region on the path would not interfere with the L4S service 2943 identifier defined in Section 3. 2945 Authors' Addresses 2947 Koen De Schepper 2948 Nokia Bell Labs 2949 Antwerp 2950 Belgium 2951 Email: koen.de_schepper@nokia.com 2952 URI: https://www.bell-labs.com/usr/koen.de_schepper 2954 Bob Briscoe (editor) 2955 Independent 2956 United Kingdom 2957 Email: ietf@bobbriscoe.net 2958 URI: http://bobbriscoe.net/