2 MOPS J. Holland 3 Internet-Draft Akamai Technologies, Inc. 4 Intended status: Informational A. Begen 5 Expires: 12 November 2021 Networked Media 6 S. Dawkins 7 Tencent America LLC 8 11 May 2021 10 Operational Considerations for Streaming Media 11 draft-ietf-mops-streaming-opcons-04 13 Abstract 15 This document provides an overview of operational networking issues 16 that pertain to quality of experience in streaming of video and other 17 high-bitrate media over the internet. 
19 Status of This Memo 21 This Internet-Draft is submitted in full conformance with the 22 provisions of BCP 78 and BCP 79. 24 Internet-Drafts are working documents of the Internet Engineering 25 Task Force (IETF). Note that other groups may also distribute 26 working documents as Internet-Drafts. The list of current Internet- 27 Drafts is at https://datatracker.ietf.org/drafts/current/. 29 Internet-Drafts are draft documents valid for a maximum of six months 30 and may be updated, replaced, or obsoleted by other documents at any 31 time. It is inappropriate to use Internet-Drafts as reference 32 material or to cite them other than as "work in progress." 34 This Internet-Draft will expire on 12 November 2021. 36 Copyright Notice 38 Copyright (c) 2021 IETF Trust and the persons identified as the 39 document authors. All rights reserved. 41 This document is subject to BCP 78 and the IETF Trust's Legal 42 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 43 license-info) in effect on the date of publication of this document. 44 Please review these documents carefully, as they describe your rights 45 and restrictions with respect to this document. Code Components 46 extracted from this document must include Simplified BSD License text 47 as described in Section 4.e of the Trust Legal Provisions and are 48 provided without warranty as described in the Simplified BSD License. 50 Table of Contents 52 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 53 1.1. Notes for Contributors and Reviewers . . . . . . . . . . 3 54 1.1.1. Venues for Contribution and Discussion . . . . . . . 4 55 1.1.2. Template for Contributions . . . . . . . . . . . . . 4 56 1.1.3. History of Public Discussion . . . . . . . . . . . . 5 57 2. Bandwidth Provisioning . . . . . . . . . . . . . . . . . . . 5 58 2.1. Scaling Requirements for Media Delivery . . . . . . . . . 5 59 2.1.1. Video Bitrates . . . . . . . . . . . . . . . . . . . 6 60 2.1.2. Virtual Reality Bitrates . 
. . . . . . . . . . . . . 7 61 2.2. Path Requirements . . . . . . . . . . . . . . . . . . . . 7 62 2.3. Caching Systems . . . . . . . . . . . . . . . . . . . . . 8 63 2.4. Predictable Usage Profiles . . . . . . . . . . . . . . . 8 64 2.5. Unpredictable Usage Profiles . . . . . . . . . . . . . . 9 65 2.6. Extremely Unpredictable Usage Profiles . . . . . . . . . 10 66 3. Adaptive Bitrate . . . . . . . . . . . . . . . . . . . . . . 11 67 3.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . 11 68 3.2. Segmented Delivery . . . . . . . . . . . . . . . . . . . 12 69 3.2.1. Idle Time between Segments . . . . . . . . . . . . . 12 70 3.2.2. Head-of-Line Blocking . . . . . . . . . . . . . . . . 12 71 3.3. Unreliable Transport . . . . . . . . . . . . . . . . . . 13 72 4. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13 73 5. Security Considerations . . . . . . . . . . . . . . . . . . . 13 74 6. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 13 75 7. Informative References . . . . . . . . . . . . . . . . . . . 13 76 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 16 78 1. Introduction 80 As the internet has grown, an increasingly large share of the traffic 81 delivered to end users has become video. Estimates put the total 82 share of internet video traffic at 75% in 2019, expected to grow to 83 82% by 2022. What's more, this estimate projects the gross volume of 84 video traffic will more than double during this time, based on a 85 compound annual growth rate continuing at 34% (from Appendix D of 86 [CVNI]). 88 A substantial part of this growth is due to increased use of 89 streaming video, although the amount of video traffic in real-time 90 communications (for example, online videoconferencing) has also grown 91 significantly. While both streaming video and videoconferencing have 92 real-time delivery and latency requirements, these requirements vary 93 from one application to another. 
For example, videoconferencing 94 demands an end-to-end (one-way) latency of a few hundred 95 milliseconds, whereas live streaming can tolerate latencies of several 96 seconds. 98 This document specifically focuses on streaming applications and 99 defines streaming as follows: Streaming is the transmission of 100 continuous media from a server to a client and its simultaneous 101 consumption by the client. Here, continuous media refers to media and 102 associated streams such as video, audio, metadata, etc. In this 103 definition, the critical term is "simultaneous", as it is not 104 considered streaming if one downloads a video file and plays it after 105 the download is completed, which would be called download-and-play. 106 This has two implications. First, the server's transmission rate must 107 (loosely or tightly) match the client's consumption rate for 108 uninterrupted playback. That is, the client must not run out of data 109 (buffer underrun) or receive more than it can hold (buffer overrun), as 110 any excess media is simply discarded. Second, the client's consumption 111 rate is limited not only by bandwidth availability but also by real-time 112 constraints. That is, the client cannot fetch media that is not 113 available yet. 115 In many contexts, video traffic can be handled transparently as 116 generic application-level traffic. However, as the volume of video 117 traffic continues to grow, it's becoming increasingly important to 118 consider the effects of network design decisions on application-level 119 performance, with considerations for the impact on video delivery. 121 This document aims to provide a taxonomy of networking issues as they 122 relate to quality of experience in internet video delivery. 
The 123 focus is on capturing characteristics of video delivery that have 124 surprised network designers or transport experts without specific 125 video expertise, since these highlight key differences between common 126 assumptions in existing networking documents and observations of 127 video delivery issues in practice. 129 Making specific recommendations for mitigating these issues is out of 130 scope, though some existing mitigations are mentioned in passing. 131 The intent is to provide a point of reference for future solution 132 proposals to use in describing how new technologies address or avoid 133 these existing observed problems. 135 1.1. Notes for Contributors and Reviewers 137 Note to RFC Editor: Please remove this section and its subsections 138 before publication. 140 This section provides references to make it easier to review the 141 development and discussion of the draft so far. 143 1.1.1. Venues for Contribution and Discussion 145 This document is in the Github repository at: 147 https://github.com/ietf-wg-mops/draft-ietf-mops-streaming-opcons 149 Readers are welcome to open issues and send pull requests for this 150 document. 152 Substantial discussion of this document should take place on the MOPS 153 working group mailing list (mops@ietf.org). 155 * Join: https://www.ietf.org/mailman/listinfo/mops 157 * Search: https://mailarchive.ietf.org/arch/browse/mops/ 159 1.1.2. Template for Contributions 161 Contributions are solicited regarding issues and considerations that 162 have an impact on media streaming operations. 164 Please note that contributions may be merged and substantially 165 edited, and as a reminder, please carefully consider the Note Well 166 before contributing: https://datatracker.ietf.org/submit/note-well/ 168 Contributions can be emailed to mops@ietf.org, submitted as issues to 169 the issue tracker of the repository in Section 1.1.1, or emailed to 170 the document authors at draft-ietf-mops-streaming-opcons@ietf.org. 
172 Contributors describing an issue not yet addressed in the draft are 173 requested to provide the following information, where applicable: 175 * a suggested title or name for the issue 177 * a long-term pointer to the best reference describing the issue 179 * a short description of the nature of the issue and its impact on 180 media quality of service, including: 182 - where in the network this issue has root causes 184 - who can detect this issue when it occurs 186 * an overview of the issue's known prevalence in practice. Pointers 187 to write-ups of high-profile incidents are a plus. 189 * a list of known mitigation techniques, with (for each known 190 mitigation): 192 - a name for the mitigation technique 194 - a long-term pointer to the best reference describing it 196 - a short description of the technique: 198 o what it does 200 o where in the network it operates 202 o an overview of the tradeoffs involved: how and why it's 203 helpful, and what it costs. 205 - supplemental information about the technique's deployment 206 prevalence and status 208 1.1.3. History of Public Discussion 210 Presentations: 212 * IETF 105 BOF: 214 https://www.youtube.com/watch?v=4G3YBVmn9Eo&t=47m21s 216 * IETF 106 meeting: 218 https://www.youtube.com/watch?v=4_k340xT2jM&t=7m23s 220 * MOPS Interim Meeting 2020-04-15: 222 https://www.youtube.com/watch?v=QExiajdC0IY&t=10m25s 224 * IETF 108 meeting: 226 https://www.youtube.com/watch?v=ZaRsk0y3O9k&t=2m48s 228 * MOPS 2020-10-30 Interim meeting: 230 https://www.youtube.com/watch?v=vDZKspv4LXw&t=17m15s 232 2. Bandwidth Provisioning 234 2.1. Scaling Requirements for Media Delivery 235 2.1.1. Video Bitrates 237 Video bitrate selection depends on many variables. 
Different 238 providers give different guidelines, but an equation that 239 approximately matches the bandwidth requirement estimates from 240 several video providers is given in [MSOD]: 242 Kbps = (HEIGHT * WIDTH * FRAME_RATE) / (MOTION_FACTOR * 1024) 244 Height and width are in pixels, frame rate is in frames per second, 245 and the motion factor is a value that ranges from about 20 for low-motion 246 "talking heads" video down to 7 for sports and other content with a lot of motion and frequent scene 247 changes. For example, 1080p video at 60 FPS with a motion factor of 15 yields (1920 * 1080 * 60) / (15 * 1024) = 8100 Kbps, matching the typical 1080p value in Table 1. 249 The motion factor captures the variability in bitrate due to the 250 amount and frequency of high-detail motion, which generally 251 influences the compressibility of the content. 253 The exact bitrate required for a particular video also depends on a 254 number of specifics about the codec used and how the codec-specific 255 tuning parameters are matched to the content, but this equation 256 provides a rough estimate that approximates the usual bitrate 257 characteristics using the most common codecs and settings for 258 production traffic. 260 Here are a few common resolutions used for video content, with their 261 typical and peak per-user bandwidth requirements for 60 frames per 262 second (FPS): 264 +============+================+==========+=========+ 265 | Name | Width x Height | Typical | Peak | 266 +============+================+==========+=========+ 267 | DVD | 720 x 480 | 1.3 Mbps | 3 Mbps | 268 +------------+----------------+----------+---------+ 269 | 720p (1K) | 1280 x 720 | 3.6 Mbps | 5 Mbps | 270 +------------+----------------+----------+---------+ 271 | 1080p (2K) | 1920 x 1080 | 8.1 Mbps | 18 Mbps | 272 +------------+----------------+----------+---------+ 273 | 2160p (4K) | 3840 x 2160 | 32 Mbps | 70 Mbps | 274 +------------+----------------+----------+---------+ 276 Table 1 278 2.1.2. 
Virtual Reality Bitrates 280 Even basic virtual reality (360-degree) videos (those that allow users 281 to look around freely, referred to as three degrees of freedom, or 282 3DoF) require substantially larger bitrates, because such videos 283 are captured and encoded with multiple fields of view of the 284 scene. The typical multiplication factor is 8 to 10. Yet, due to 285 smart delivery methods such as viewport-based or tile-based 286 streaming, it is not necessary to send the whole scene to the user. 287 Instead, the user needs only the portion corresponding to their 288 viewpoint at any given time. 290 In more immersive applications, where basic user movement (3DoF+) or 291 full user movement (6DoF) is allowed, the required bitrate grows even 292 further. In this case, the immersive content is typically referred 293 to as volumetric media. One way to represent volumetric media is 294 to use point clouds, where streaming a single object may easily 295 require a bitrate of 30 Mbps or higher. Refer to [MPEGI] and [PCC] 296 for more details. 298 2.2. Path Requirements 300 The bitrate requirements in Section 2.1 are per end-user actively 301 consuming a media feed, so in the worst case, the bitrate demands can 302 be multiplied by the number of simultaneous users to find the 303 bandwidth requirements for a router on the delivery path with that 304 number of users downstream. For example, at a node with 10,000 305 downstream users simultaneously consuming video streams, 306 approximately 80 Gbps would be necessary in order for all of them to 307 get typical content at 1080p resolution at 60 fps, or up to 180 Gbps 308 to get sustained high-motion content such as sports, while 309 maintaining the same resolution. 311 However, when there is some overlap in the feeds being consumed by 312 end users, it is sometimes possible to reduce the bandwidth 313 provisioning requirements for the network by performing some kind of 314 replication within the network. 
This can be achieved via object 315 caching with delivery of replicated objects over individual 316 connections, and/or by packet-level replication using multicast. 318 To the extent that replication of popular content can be performed, 319 bandwidth requirements at peering or ingest points can be reduced to 320 as low as a per-feed requirement instead of a per-user requirement. 322 2.3. Caching Systems 324 When demand for content is relatively predictable, and especially 325 when that content is relatively static, caching content close to 326 requesters, and pre-loading caches to respond quickly to initial 327 requests, is often useful (for example, HTTP/1.1 caching is described 328 in [RFC7234]). This is subject to the usual considerations for 329 caching: for example, how much data must be cached to make a 330 significant difference to the requester, and how the benefits of 331 caching and pre-loading caches balance against the costs of tracking 332 "stale" content in caches and refreshing that content. 334 It is worth noting that not all high-demand content is also "live" 335 content. One common example is when popular streaming content can 336 be staged close to a significant number of requesters, as can happen 337 when a new episode of a popular show is released. This content may 338 be largely stable, and therefore low-cost to maintain in multiple places 339 throughout the Internet. This can reduce demands for high end-to-end 340 bandwidth without having to use mechanisms like multicast. 342 Caching and pre-loading can also reduce exposure to peering point 343 congestion, since less traffic crosses the peering point exchanges if 344 the caches are placed in peer networks, and could be pre-loaded 345 during off-peak hours, using "Lower-Effort Per-Hop Behavior (LE PHB) 346 for Differentiated Services" [RFC8622], "Low Extra Delay Background 347 Transport (LEDBAT)" [RFC6817], or similar mechanisms. 
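The cost of tracking "stale" content mentioned earlier in this section is governed by HTTP's freshness model. The following is a simplified illustrative sketch of the RFC 7234 freshness calculation, not a complete cache implementation; for brevity it assumes headers have been pre-parsed into a dict with a numeric Age value and epoch-second Expires/Date values.

```python
import re
import time

def freshness_lifetime(headers):
    # Simplified RFC 7234, Section 4.2.1: prefer Cache-Control max-age,
    # fall back to Expires - Date (assumed pre-parsed to epoch seconds).
    m = re.search(r"max-age=(\d+)", headers.get("cache-control", ""))
    if m:
        return int(m.group(1))
    if "expires" in headers and "date" in headers:
        return headers["expires"] - headers["date"]
    return 0  # no explicit lifetime; heuristic freshness omitted here

def is_fresh(headers, stored_at, now=None):
    # A cached response is servable while its current age (Age header
    # plus time resident in this cache) is below its freshness lifetime.
    now = time.time() if now is None else now
    current_age = headers.get("age", 0) + (now - stored_at)
    return current_age < freshness_lifetime(headers)

# A segment cached 30 seconds ago with max-age=60 is still fresh;
# after 70 seconds it is stale and must be revalidated or refetched.
hdrs = {"cache-control": "public, max-age=60", "age": 0}
print(is_fresh(hdrs, stored_at=1000.0, now=1030.0))  # True
print(is_fresh(hdrs, stored_at=1000.0, now=1070.0))  # False
```

A pre-loading strategy like the one described above would fetch such objects during off-peak hours with a lifetime long enough to cover the expected peak.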
349 All of this depends, of course, on the ability of a content provider 350 to predict usage and provision bandwidth, caching, and other 351 mechanisms to meet the needs of users. In some cases (Section 2.4), 352 this is relatively routine, but in other cases, it is more difficult 353 (Section 2.5, Section 2.6). 355 2.4. Predictable Usage Profiles 357 Historical data shows that users consume more video, and at higher 358 bitrates, than they did in the past on their connected devices. 359 Improvements in codecs that reduce encoding bitrates through better 360 compression have not offset the increased demand for higher-quality 361 video (higher resolution, higher frame rate, better color gamut, 362 better dynamic range, etc.). In particular, mobile data usage has 363 shown a large jump over the years due to increased consumption of 364 entertainment as well as conversational video. 367 TBD: insert charts showing historical relative data usage patterns 368 with error bars by time of day in consumer networks? 369 Cross-ref vs. video quality by time of day in practice for some case 370 study? Not sure if there's a good way to capture a generalized 371 insight here, but it seems worth making the point that demand 372 projections can be used to help with e.g. power consumption with 373 routing architectures that provide for modular scalability. 375 2.5. Unpredictable Usage Profiles 377 Although TCP/IP has been used with a number of widely used 378 applications that have symmetric bandwidth requirements (similar 379 bandwidth requirements in each direction between endpoints), many 380 widely-used Internet applications operate in client-server roles, 381 with asymmetric bandwidth requirements. 
A common example might be an 382 HTTP GET operation, where a client sends a relatively small HTTP GET 383 request for a resource to an HTTP server, and often receives a 384 significantly larger response carrying the requested resource. When 385 HTTP is used to stream movie-length video, as is now common, the ratio between 386 response size and request size can become quite large. 388 For this reason, operators may pay more attention to downstream 389 bandwidth utilization when planning and managing capacity. In 390 addition, operators have been able to deploy access networks for end 391 users using underlying technologies that are inherently asymmetric, 392 favoring downstream bandwidth (e.g., ADSL, cellular technologies, 393 most IEEE 802.11 variants), assuming that users will need less 394 upstream bandwidth than downstream bandwidth. This strategy usually 395 works, except when it does not, because application bandwidth usage 396 patterns have changed. 398 One example of this type of change was when peer-to-peer file sharing 399 applications gained popularity in the early 2000s. To take one well-documented 400 case ([RFC5594]), the Bittorrent application created 401 "swarms" of hosts, uploading and downloading files to each other, 402 rather than communicating with a server. Bittorrent favored peers 403 who uploaded as much as they downloaded, so that new Bittorrent users 404 had an incentive to significantly increase their upstream bandwidth 405 utilization. 407 The combination of the large volume of "torrents" and the peer-to-peer 408 characteristic of swarm transfers meant that end user hosts were 409 suddenly uploading higher volumes of traffic to more destinations 410 than was the case before Bittorrent. This caused at least one large 411 ISP to attempt to "throttle" these transfers, to mitigate the load 412 that these hosts placed on their network. 
These efforts were met by 413 increased use of encryption in Bittorrent, similar to an arms race, 414 and set off discussions about "Net Neutrality" and calls for 415 regulatory action. 417 Especially as end users increase use of video-based social networking 418 applications, it will be helpful for access network providers to 419 watch for increasing numbers of end users uploading significant 420 amounts of content. 422 2.6. Extremely Unpredictable Usage Profiles 424 The causes of unpredictable usage described in Section 2.5 were more 425 or less the result of human choices, but we were reminded during a 426 post-IETF 107 meeting that humans are not always in control, and 427 forces of nature can cause enormous fluctuations in traffic patterns. 429 In his talk, Sanjay Mishra [Mishra] reported that after the COVID-19 430 pandemic broke out in early 2020, 432 * Comcast's streaming and web video consumption rose by 38%, with 433 their reported peak traffic up 32% overall between March 1 and 434 March 30, 436 * AT&T reported a 28% jump in core network traffic (single day in 437 April, compared to the pre-stay-at-home daily average traffic), 438 with video accounting for nearly half of all mobile network 439 traffic, while social networking and web browsing remained the 440 highest percentage (almost a quarter each) of overall mobility 441 traffic, and 443 * Verizon reported similar trends with video traffic up 36% over an 444 average day (pre-COVID-19). 446 We note that other operators saw similar spikes during this time 447 period. Craig Labovitz [Labovitz] reported 449 * Weekday peak traffic increases of 45%-50% over pre-lockdown 450 levels, 452 * A 30% increase in upstream traffic over their pre-pandemic levels, 453 and 455 * A steady increase in the overall volume of DDoS traffic, with 456 amounts exceeding the pre-pandemic levels by 40%. 
(He attributed 457 this increase to the significant rise in gaming-related DDoS 458 attacks ([LabovitzDDoS]), as gaming usage also increased.) 460 Subsequently, the Internet Architecture Board (IAB) held a COVID-19 461 Network Impacts Workshop [IABcovid] in November 2020. Given a larger 462 number of reports and more time to reflect, the following 463 observations from the draft workshop report are worth considering. 465 * Participants describing different types of networks reported 466 different kinds of impacts, but all types of networks saw impacts. 468 * Mobile networks saw traffic reductions and residential networks 469 saw significant increases. 471 * Reported traffic increases from ISPs and IXPs over just a few 472 weeks were as big as the traffic growth over the course of a 473 typical year, representing a 15-20% surge in growth to land at a 474 new normal that was much higher than anticipated. 476 * At DE-CIX Frankfurt, the world's largest Internet Exchange Point 477 in terms of data throughput, the year 2020 saw the largest 478 increase in peak traffic within a single year since the IXP was 479 founded in 1995. 481 * The usage pattern changed significantly as work-from-home and 482 videoconferencing usage peaked during normal work hours, which 483 would have typically been off-peak hours with adults at work and 484 children at school. One might expect that the peak would have had 485 more impact on networks if it had happened during typical evening 486 peak hours for video streaming applications. 488 * The increase in daytime bandwidth consumption reflected both 489 significant increases in "essential" applications such as 490 videoconferencing and VPNs, and entertainment applications as 491 people watched videos or played games. 493 * At the IXP level, it was observed that port utilization increased. 494 This phenomenon is mostly explained by a higher traffic demand 495 from residential users. 497 3. Adaptive Bitrate 499 3.1. 
Overview 501 Adaptive BitRate (ABR) is an application-level response 502 strategy in which the streaming client attempts to detect the 503 available bandwidth of the network path by observing the successful 504 application-layer download speed, then chooses a bitrate for each of 505 the video, audio, subtitle, and metadata streams (from among a limited number of 506 available options) that fits within that bandwidth, typically 507 adjusting as available network bandwidth or client capabilities 508 (such as available memory, CPU, or display size) change during 509 playback. 511 The choice of bitrate occurs within the context of optimizing for 512 some metric monitored by the client, such as highest achievable video 513 quality or lowest chance of rebuffering (a playback stall). 515 3.2. Segmented Delivery 517 ABR playback is commonly implemented by streaming clients using HLS 518 [RFC8216] or DASH [DASH] to perform a reliable segmented delivery of 519 media over HTTP. Different implementations use different strategies 520 [ABRSurvey], often proprietary algorithms (called rate adaptation or 521 bitrate selection algorithms) to perform available bandwidth 522 estimation/prediction and the bitrate selection. Most clients only 523 use passive observations, i.e., they do not generate probe traffic to 524 measure the available bandwidth. 526 This kind of bandwidth-measurement system can experience trouble in 527 several ways that are affected by network design choices. 529 3.2.1. Idle Time between Segments 531 When the selected bitrate is successfully chosen below the available 532 capacity of the network path, the response to a segment request will 533 typically complete in less absolute time than the duration of the 534 requested segment. The resulting idle time within the connection 535 carrying the segments has a few surprising consequences: 537 * Mobile flow-bandwidth spectrum and timing mapping. 
539 * TCP slow-start when restarting after idle requires multiple RTTs 540 to re-establish throughput at the network's available capacity. 541 On high-RTT paths or with small enough segments, this can produce 542 a falsely low application-visible measurement of the available 543 network capacity. 545 A detailed investigation of this phenomenon is available in 546 [NOSSDAV12]. 548 3.2.2. Head-of-Line Blocking 550 On a TCP connection with SACK support 551 (a common case for segmented delivery in practice), the loss of a packet 552 can provide a confusing bandwidth signal to the receiving 553 application. Because of the sliding window in TCP, many packets may 554 be accepted by the receiver without being available to the 555 application until the missing packet arrives. Upon arrival of the 556 one missing packet after retransmit, the receiver will suddenly get 557 access to a lot of data at the same time. 559 To a receiver measuring bytes received per unit time at the 560 application layer, and interpreting it as an estimate of the 561 available network bandwidth, this appears as high jitter in the 562 goodput measurement. 564 Active Queue Management (AQM) systems such as PIE [RFC8033] or 565 variants of RED [RFC2309] that induce early random loss under 566 congestion can mitigate this by using ECN [RFC3168] where available. 567 ECN provides a congestion signal and induces a similar backoff in 568 flows that use an Explicit Congestion Notification-capable transport, 569 but, by avoiding loss, avoids inducing head-of-line blocking effects in 570 TCP connections. 572 3.3. Unreliable Transport 574 In contrast to segmented delivery, several applications use UDP or 575 unreliable SCTP to deliver RTP or raw TS-formatted video. 577 Under congestion and loss, this approach generally experiences more 578 video artifacts with fewer delay or head-of-line blocking effects. 
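The contrast with the head-of-line blocking behavior described in Section 3.2.2 can be illustrated with a small simulation. This is hypothetical code, not drawn from any real player; it simply models when a receiving application first sees each packet under the two delivery styles, given one lost and retransmitted packet.

```python
def app_delivery_times(seqs, arrival, reliable):
    """Return {seq: time the application first sees that packet}.

    arrival maps seq -> network arrival time (retransmits included).
    With reliable in-order delivery, a packet becomes visible only
    once every earlier packet has arrived (head-of-line blocking);
    with unreliable delivery, each packet is visible on arrival.
    """
    times = {}
    for s in seqs:
        if reliable:
            times[s] = max(arrival[p] for p in seqs if p <= s)
        else:
            times[s] = arrival[s]
    return times

# Packets 1..5 arrive 10 ms apart; packet 2 is lost and its
# retransmission arrives at t=100 ms.
arrival = {1: 10, 2: 100, 3: 30, 4: 40, 5: 50}
seqs = [1, 2, 3, 4, 5]

tcp_like = app_delivery_times(seqs, arrival, reliable=True)
udp_like = app_delivery_times(seqs, arrival, reliable=False)

# TCP-like: packets 2-5 all become visible at t=100 in one burst,
# which a rate-measuring application sees as a goodput spike.
print(tcp_like)  # {1: 10, 2: 100, 3: 100, 4: 100, 5: 100}
# UDP-like: packets 3-5 are usable on arrival; only the late packet
# risks being rendered as a visual artifact instead of a stall.
print(udp_like)  # {1: 10, 2: 100, 3: 30, 4: 40, 5: 50}
```

In the reliable case, the application sees nothing between t=10 and t=100 and then a burst; in the unreliable case, playback can continue with a localized artifact, which matches the tradeoff described above.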
579 Often one of the key goals is to reduce latency, to better support 580 applications like videoconferencing or other live-action video 581 with interactive components, such as some sporting events. 583 Congestion avoidance strategies for this kind of deployment vary 584 widely in practice, ranging from streams that are entirely 585 unresponsive, to streams that use feedback signaling to change encoder settings 586 (as in [RFC5762]) or to use fewer enhancement layers (as in 587 [RFC6190]), to proprietary methods for detecting quality of 588 experience issues and cutting off video. 590 4. IANA Considerations 592 This document requires no actions from IANA. 594 5. Security Considerations 596 This document introduces no new security issues. 598 6. Acknowledgements 600 Thanks to Mark Nottingham, Glenn Deen, Dave Oran, Aaron Falk, Kyle 601 Rose, Leslie Daigle, Lucas Pardue, Matt Stock, Alexandre Gouaillard, 602 and Mike English for their very helpful reviews and comments. 604 7. Informative References 606 [ABRSurvey] 607 Bentaleb, A., et al., "A Survey on Bitrate 608 Adaptation Schemes for Streaming Media Over HTTP", 2019. 611 [CVNI] Cisco Systems, Inc., "Cisco Visual Networking Index: 612 Forecast and Trends, 2017-2022 White Paper", 27 February 613 2019. 617 [DASH] "Information technology -- Dynamic adaptive streaming over 618 HTTP (DASH) -- Part 1: Media presentation description and 619 segment formats", ISO/IEC 23009-1:2019, 2019. 622 [IABcovid] Arkko, J., Farrell, S., Kühlewind, M., and C. 623 Perkins, "Report from the IAB COVID-19 Network Impacts 624 Workshop 2020", November 2020. 628 [Labovitz] Labovitz, C. and Nokia Deepfield, "Network traffic 629 insights in the time of COVID-19: April 9 update", April 630 2020. 633 [LabovitzDDoS] 634 Takahashi, D. and Venture Beat, "Why the game industry is 635 still vulnerable to DDoS attacks", May 2018. 640 [Mishra] Mishra, S. and J. 
Thibeault, "An update on Streaming Video 641 Alliance", 2020. 646 [MPEGI] Boyce, J.M., et al., "MPEG Immersive Video Coding Standard", 647 n.d. 649 [MSOD] Akamai Technologies, Inc., "Media Services On Demand: 650 Encoder Best Practices", 2019. 655 [NOSSDAV12] 656 Akhshabi, S., et al., "What Happens When HTTP Adaptive 657 Streaming Players Compete for Bandwidth?", June 2012. 660 [PCC] Schwarz, S., et al., "Emerging MPEG Standards for 661 Point Cloud Compression", March 2019. 664 [RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering, 665 S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., 666 Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, 667 S., Wroclawski, J., and L. Zhang, "Recommendations on 668 Queue Management and Congestion Avoidance in the 669 Internet", RFC 2309, DOI 10.17487/RFC2309, April 1998. 672 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition 673 of Explicit Congestion Notification (ECN) to IP", 674 RFC 3168, DOI 10.17487/RFC3168, September 2001. 677 [RFC5594] Peterson, J. and A. Cooper, "Report from the IETF Workshop 678 on Peer-to-Peer (P2P) Infrastructure, May 28, 2008", 679 RFC 5594, DOI 10.17487/RFC5594, July 2009. 682 [RFC5762] Perkins, C., "RTP and the Datagram Congestion Control 683 Protocol (DCCP)", RFC 5762, DOI 10.17487/RFC5762, April 684 2010. 686 [RFC6190] Wenger, S., Wang, Y.-K., Schierl, T., and A. 687 Eleftheriadis, "RTP Payload Format for Scalable Video 688 Coding", RFC 6190, DOI 10.17487/RFC6190, May 2011. 691 [RFC6817] Shalunov, S., Hazel, G., Iyengar, J., and M. Kuehlewind, 692 "Low Extra Delay Background Transport (LEDBAT)", RFC 6817, 693 DOI 10.17487/RFC6817, December 2012. 696 [RFC7234] Fielding, R., Ed., Nottingham, M., Ed., and J. Reschke, 697 Ed., "Hypertext Transfer Protocol (HTTP/1.1): Caching", 698 RFC 7234, DOI 10.17487/RFC7234, June 2014. 701 [RFC8033] Pan, R., Natarajan, P., Baker, F., and G. 
White, 702 "Proportional Integral Controller Enhanced (PIE): A 703 Lightweight Control Scheme to Address the Bufferbloat 704 Problem", RFC 8033, DOI 10.17487/RFC8033, February 2017. 707 [RFC8216] Pantos, R., Ed. and W. May, "HTTP Live Streaming", 708 RFC 8216, DOI 10.17487/RFC8216, August 2017. 711 [RFC8622] Bless, R., "A Lower-Effort Per-Hop Behavior (LE PHB) for 712 Differentiated Services", RFC 8622, DOI 10.17487/RFC8622, 713 June 2019. 715 Authors' Addresses 717 Jake Holland 718 Akamai Technologies, Inc. 719 150 Broadway 720 Cambridge, MA 02144, 721 United States of America 723 Email: jakeholland.net@gmail.com 725 Ali Begen 726 Networked Media 727 Turkey 729 Email: ali.begen@networked.media 731 Spencer Dawkins 732 Tencent America LLC 733 United States of America 735 Email: spencerdawkins.ietf@gmail.com