ICNRG                                                           D. Oran
Internet-Draft                       Network Systems Research and Design
Intended status: Informational                          19 November 2020
Expires: 23 May 2021

 Considerations in the development of a QoS Architecture for CCNx-like
                             ICN protocols
                      draft-oran-icnrg-qosarch-06

Abstract

   This is a position paper.  It documents the author's personal views
   on how Quality of Service (QoS) capabilities ought to be accommodated
   in ICN protocols like CCNx or NDN, which employ flow-balanced
   Interest/Data exchanges and hop-by-hop forwarding state as their
   fundamental machinery.  It argues that such protocols demand a
   substantially different approach to QoS from that taken in TCP/IP,
   and proposes specific design patterns to achieve both classification
   and differentiated QoS treatment on both a flow and aggregate basis.
   It also considers the effect of caches, in addition to memory, CPU
   and link bandwidth, as a resource that should be subject to
   explicitly unfair resource allocation.  The proposed methods are
   intended to operate purely at the network layer, providing the
   primitives needed to achieve both transport and higher layer QoS
   objectives.  It explicitly excludes any discussion of Quality of
   Experience (QoE), which can only be assessed and controlled at the
   application layer or above.

   This document is not a product of the IRTF Information-Centric
   Networking Research Group (ICNRG) but has been through formal last
   call and has the support of the participants in the research group
   for publication as an individual submission.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.
   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 23 May 2021.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.

Table of Contents

   1.  Introduction
     1.1.  Applicability Assessment by ICNRG Chairs
   2.  Requirements Language
   3.  Background on Quality of Service in network protocols
     3.1.  Basics on how ICN protocols like NDN and CCNx work
     3.2.  Congestion Control basics relevant to ICN
   4.  What can we control to achieve QoS in ICN?
   5.  How does this relate to QoS in TCP/IP?
   6.  Why is ICN Different?  Can we do Better?
     6.1.  Equivalence class capabilities
     6.2.  Topology interactions with QoS
     6.3.  Specification of QoS treatments
     6.4.  ICN forwarding semantics effect on QoS
     6.5.  QoS interactions with Caching
   7.  Strawman principles for an ICN QoS architecture
     7.1.  Can Intserv-like traffic control in ICN provide richer QoS
           semantics?
   8.  IANA Considerations
   9.  Security Considerations
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Author's Address

1.  Introduction

   The TCP/IP protocol suite used on today's Internet has over 30 years
   of accumulated research and engineering into the provision of Quality
   of Service machinery, employed with varying success in different
   environments.  ICN protocols like Named Data Networking (NDN [NDN])
   and Content-Centric Networking (CCNx [RFC8569], [RFC8609]) have an
   accumulated 10 years of research and very little deployment.  We
   therefore have the opportunity to either recapitulate the approaches
   taken with TCP/IP (e.g. Intserv [RFC2998] and Diffserv [RFC2474]) or
   design a new architecture and associated mechanisms aligned with the
   properties of ICN protocols, which differ substantially from those of
   TCP/IP.  This position paper advocates the latter approach and
   comprises the author's personal views on how Quality of Service (QoS)
   capabilities ought to be accommodated in ICN protocols like CCNx or
   NDN.  Specifically, these protocols differ in fundamental ways from
   TCP/IP.
   The important differences are summarized in the following table:

    +=============================+====================================+
    | TCP/IP                      | CCNx or NDN                        |
    +=============================+====================================+
    | Stateless forwarding        | Stateful forwarding                |
    +-----------------------------+------------------------------------+
    | Simple Packets              | Object model with optional caching |
    +-----------------------------+------------------------------------+
    | Pure datagram model         | Request-response model             |
    +-----------------------------+------------------------------------+
    | Asymmetric Routing          | Symmetric Routing                  |
    +-----------------------------+------------------------------------+
    | Independent flow directions | Flow balance^(*)                   |
    +-----------------------------+------------------------------------+
    | Flows grouped by IP prefix  | Flows grouped by name prefix       |
    | and port                    |                                    |
    +-----------------------------+------------------------------------+
    | End-to-end congestion       | Hop-by-hop congestion control      |
    | control                     |                                    |
    +-----------------------------+------------------------------------+

      Table 1: Differences between IP and ICN relevant to QoS
                            architecture

   |  ^(*) Flow Balance is a property of NDN and CCNx that ensures one
   |  Interest packet provokes a response of no more than one Data
   |  packet.  Further discussion of the relevance of this to QoS can
   |  be found in [I-D.oran-icnrg-flowbalance].

   This document proposes specific design patterns to achieve both flow
   classification and differentiated QoS treatment for ICN on both a
   flow and aggregate basis.  It also considers the effect of caches in
   addition to memory, CPU and link bandwidth as a resource that should
   be subject to explicitly unfair resource allocation.
   The proposed methods are intended to operate purely at the network
   layer, providing the primitives needed to achieve both transport and
   higher layer QoS objectives.  It does not propose detailed protocol
   machinery to achieve these goals; it leaves these to supplementary
   specifications, such as [I-D.moiseenko-icnrg-flowclass] and
   [I-D.anilj-icnrg-dnc-qos-icn].  It explicitly excludes any discussion
   of Quality of Experience (QoE), which can only be assessed and
   controlled at the application layer or above.

   Much of this document is derived from presentations the author has
   given at ICNRG meetings over the last few years that are available
   through the IETF datatracker (see, for example, [Oran2018QoSslides]).

1.1.  Applicability Assessment by ICNRG Chairs

   QoS in ICN is an important topic with a huge design space.  ICNRG has
   been discussing different specific protocol mechanisms as well as
   conceptual approaches.  This document presents architectural
   considerations for QoS, leveraging ICN properties instead of merely
   applying IP-QoS mechanisms, without yet defining a specific
   architecture or specific protocol mechanisms.  However, there is
   consensus in ICNRG that this document, clarifying the author's views,
   could inspire such work and should hence be published as a position
   paper.

2.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

3.  Background on Quality of Service in network protocols

   Much of this background material is tutorial and can simply be
   skipped by readers familiar with the long and checkered history of
   quality of service in packet networks.  Other parts of it are
   polemical, yet serve to illuminate the author's personal biases and
   technical views.
   All networking systems provide some degree of "quality of service" in
   that they exhibit non-zero utility when offered traffic to carry.  In
   other words, the network is totally useless if it never delivers any
   of the traffic injected by applications.  The term QoS is therefore
   more correctly applied in a more restricted sense to describe systems
   that control the allocation of various resources in order to achieve
   _managed unfairness_.  Absent explicit mechanisms to decide what
   traffic to be unfair to, most systems try to achieve some form of
   "fairness" in the allocation of resources, optimizing the overall
   utility delivered to all offered load under the constraint of
   available resources.  From this it should be obvious that you cannot
   use QoS mechanisms to create or otherwise increase resource capacity!
   In fact, all known QoS schemes have non-zero overhead and hence may
   (albeit slightly) decrease the total resources available to carry
   user traffic.

   Further, accumulated experience seems to indicate that QoS is helpful
   in a fairly narrow range of network conditions:

   *  If your resources are lightly loaded, you don't need it, as
      neither congestive loss nor substantial queueing delay occurs.

   *  If your resources are heavily oversubscribed, it doesn't save you.
      So many users will be unhappy that you are probably not delivering
      a viable service.

   *  Failures can rapidly shift your state from the first above to the
      second, in which case either:

      -  your QoS machinery cannot respond quickly enough to maintain
         the advertised service quality continuously, or

      -  resource allocations are sufficiently conservative to result in
         substantial wasted capacity under non-failure conditions.

   Nevertheless, though not universally deployed, QoS is advantageous at
   least for some applications and some network environments.
   Some examples include:

   *  applications with steep utility functions [Shenker2006], such as
      real-time multimedia

   *  applications with safety-critical operational constraints, such as
      avionics or industrial automation

   *  dedicated or tightly managed networks whose economics depend on
      strict adherence to challenging service level agreements (SLAs)

   Another factor in the design and deployment of QoS is the scalability
   and scope over which the desired service can be achieved.  Here there
   are two major considerations, one technical, the other
   economic/political:

   *  Some signaled QoS schemes, such as RSVP (Resource reSerVation
      Protocol) [RFC2205], maintain state in routers for each flow,
      which scales linearly with the number of flows.  For core routers
      through which pass millions to billions of flows, the memory
      required is infeasible to provide.

   *  The Internet is comprised of many minimally cooperating autonomous
      systems [AS].  There are practically no successful examples of QoS
      deployments crossing the AS boundaries of multiple service
      providers.  This in almost all cases limits the applicability of
      QoS capabilities to be intra-domain.

   This document adopts a narrow definition of QoS as _managed
   unfairness_^(*).  However, much of the networking literature uses the
   term more colloquially as applying to any mechanism that improves
   overall performance.  One could use a different, broader definition
   of QoS that encompasses optimizing the allocation of network
   resources across all offered traffic without considering individual
   users' traffic.  A consequence would be the need to cover whether
   (and how) ICN might result in better overall performance than IP
   under constant resource conditions, which is a much broader goal than
   that attempted here.  The chosen narrower scope comports with the
   commonly understood meaning of "QoS" in the research community.
   Under this scope, and under constant resource constraints, the only
   way to provide traffic discrimination is in fact to sacrifice
   fairness.  Readers assuming the broader context will find a large
   class of proven techniques to be ignored.  This is intentional.
   Among these are seamless producer mobility schemes like MAPME
   [Auge2018], and network coding of ICN data as discussed in
   [I-D.irtf-nwcrg-nwc-ccn-reqs].

   |  ^(*) This term to explain QoS is generally ascribed to Van
   |  Jacobson, who in talks in the late 1990's said "[The problem we
   |  are solving is to] Give 'better' service to some at the expense
   |  of giving worse service to others.  QoS fantasies to the
   |  contrary, it's a zero sum game.  In other words, QoS is
   |  _managed unfairness_."

   Finally, the relationship between QoS and either accounting or
   billing is murky.  Some schemes can accurately account for resource
   consumption and ascertain to which user to allocate the usage.
   Others cannot.  While the choice of mechanism may have important
   practical economic and political consequences for cost and workable
   business models, this document considers none of those things and
   discusses QoS only in the context of providing managed unfairness.

   For those unfamiliar with ICN protocols, a brief description of how
   NDN and CCNx operate as a packet network is given below in
   Section 3.1.  Some further background on congestion control for ICN
   follows in Section 3.2.

3.1.  Basics on how ICN protocols like NDN and CCNx work

   The following is intended as a brief summary of the salient features
   of the NDN and CCNx ICN protocols relevant to congestion control and
   QoS.  Quite extensive tutorial information may be found in a number
   of places, including material available from [NDNTutorials].

   In NDN and CCNx, all protocol interactions operate as a two-way
   handshake.
   Named content is requested by a _consumer_ via an _Interest message_,
   which is routed hop-by-hop through a series of _forwarders_ until it
   reaches a node that stores the requested data.  This can be either
   the _producer_ of the data, or a forwarder holding a cached copy of
   the requested data.  The content matching the name in the Interest is
   returned to the requester over the _inverse_ of the path traversed by
   the corresponding Interest.

   Forwarding in CCNx and NDN is _per-packet stateful_.  Routing
   information to select next-hops for an Interest is obtained from a
   _Forwarding Information Base (FIB)_, which is similar in function to
   the FIB in an IP router, except that it holds name prefixes rather
   than IP address prefixes.  Conventionally a _Longest Name Prefix
   Match (LNPM)_ is used for lookup, although other algorithms are
   possible, including controlled flooding and adaptive learning based
   on prior history.

   Each Interest message leaves a trail of "breadcrumbs" as state in
   each forwarder.  This state, held in a data structure known as a
   _Pending Interest Table (PIT)_, is used to forward the returning Data
   message to the consumer.  Since the PIT constitutes per-packet state,
   it is a large consumer of memory resources, especially in forwarders
   carrying high traffic loads over long Round Trip Time (RTT) paths,
   and hence plays a substantial role as a QoS-controllable resource in
   ICN forwarders.

   In addition to its role in forwarding Interest messages and returning
   the corresponding Data messages, an ICN forwarder can also operate as
   a cache, optionally storing a copy of any Data messages it has seen
   in a local data structure known as a _Content Store (CS)_.  Data in
   the Content Store may be returned in response to a matching Interest
   rather than forwarding the Interest further through the network to
   the original Producer.
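   The lookup sequence sketched above (Content Store, then PIT, then
   FIB longest-name-prefix match) can be illustrated in a few lines of
   code.  All names and data structures here are the author's-text-
   inspired simplifications, not taken from either protocol
   specification:

```python
# Illustrative sketch of a CCNx/NDN forwarder's lookup order:
# Content Store, then Pending Interest Table (aggregation), then FIB
# longest-name-prefix match.  Structures are hypothetical simplifications.

class Forwarder:
    def __init__(self, fib):
        self.cs = {}      # Content Store: full name -> Data
        self.pit = {}     # PIT: full name -> set of downstream faces
        self.fib = fib    # FIB: name-prefix tuple -> upstream next hop

    def on_interest(self, name, face):
        # 1. Content Store hit: return cached Data, consume the Interest.
        if name in self.cs:
            return ("data", self.cs[name])
        # 2. PIT hit: aggregate -- record the extra face, do not re-forward.
        if name in self.pit:
            self.pit[name].add(face)
            return ("aggregated", None)
        # 3. FIB: longest name prefix match selects the upstream next hop.
        components = tuple(name.split("/"))
        for i in range(len(components), 0, -1):
            hop = self.fib.get(components[:i])
            if hop is not None:
                self.pit[name] = {face}   # leave a "breadcrumb"
                return ("forwarded", hop)
        return ("no-route", None)

    def on_data(self, name, data):
        # Returning Data follows the PIT breadcrumbs back downstream
        # and may optionally be cached in the Content Store.
        faces = self.pit.pop(name, set())
        self.cs[name] = data
        return faces
```

   The sketch also shows why the PIT is the interesting QoS-controllable
   resource: one entry is held per outstanding Interest until the Data
   returns or the Interest expires.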
   Both CCNx and NDN have a variety of ways to configure caching,
   including mechanisms to avoid both cache pollution and cache
   poisoning (these are clearly beyond the scope of this brief
   introduction).

3.2.  Congestion Control basics relevant to ICN

   In any packet network that multiplexes traffic among multiple sources
   and destinations, congestion control is necessary in order to:

   1.  Prevent collapse of utility due to overload, where the total
       offered service declines as load increases, perhaps
       precipitously, rather than increasing or remaining flat.

   2.  Avoid starvation of some traffic due to excessive demand by other
       traffic.

   3.  Beyond the basic protections against starvation, achieve
       "fairness" among competing traffic.  Two common objective
       functions are [minmaxfairness] and [proportionalfairness], both
       of which have been implemented and deployed successfully on
       packet networks for many years.

   Before moving on to QoS, it is useful to consider how congestion
   control works in NDN or CCNx.  Unlike the IP protocol family, which
   relies exclusively on end-to-end congestion control (e.g.
   TCP [RFC0793], DCCP [RFC4340], SCTP [RFC4960],
   QUIC [I-D.ietf-quic-transport]), CCNx and NDN can employ hop-by-hop
   congestion control.  There is per-Interest/Data state at every hop of
   the path, and therefore outstanding Interests provide information
   that can be used to optimize resource allocation for data returning
   on the inverse path, such as bandwidth sharing, prioritization and
   overload control.  In current designs, this allocation is often done
   using Interest counting: by accepting one Interest packet from a
   downstream node, a forwarder implicitly provides a guarantee (either
   hard or soft) that there is sufficient bandwidth on the inverse
   direction of the link to send back one Data packet.
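   The Interest-counting idea can be sketched as a simple credit
   counter.  This is a hypothetical simplification: deployed schemes
   size and adapt the limit dynamically rather than using a static
   value as shown here:

```python
# Hypothetical sketch of Interest counting for hop-by-hop congestion
# control: a forwarder admits an Interest upstream only when the inverse
# link has credit for the (at most one) Data packet it will provoke.

class InterestShaper:
    def __init__(self, max_outstanding):
        # Credit pool, roughly the inverse link's bandwidth-delay
        # product in Data packets; static here, adaptive in practice.
        self.max_outstanding = max_outstanding
        self.outstanding = 0

    def admit_interest(self):
        if self.outstanding < self.max_outstanding:
            self.outstanding += 1   # reserve inverse-link capacity
            return True
        return False                # queue or reject the Interest

    def on_data_or_timeout(self):
        # A returning Data packet, or an expired Interest Lifetime,
        # releases the reserved credit.
        self.outstanding = max(0, self.outstanding - 1)
```

   Because flow balance bounds the response to one Data packet per
   admitted Interest, this counter is a meaningful proxy for inverse-
   link load.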
   A number of congestion control schemes have been developed for ICN
   that operate in this fashion, for example [Wang2013], [Mahdian2016],
   [Song2018], [Carofiglio2012].  Other schemes, like [Schneider2016],
   neither count nor police Interests, but instead monitor queues using
   AQM (Active Queue Management) to mark returning Data packets that
   have experienced congestion.  This latter class of schemes is similar
   to those used in IP in the sense that they depend on consumers
   adequately reducing their rate of Interest injection to avoid Data
   packet drops due to buffer overflow in forwarders.  The former class
   of schemes is (arguably) more robust against misbehavior by
   consumers.

   Given the stochastic nature of round trip times, and the ubiquity of
   wireless links and encapsulation tunnels with variable bandwidth, a
   simple scheme that admits Interests based only on a time-invariant
   estimate of the returning link bandwidth will perform poorly.
   However, two characteristics of NDN and CCNx-like protocols can help
   substantially to improve the accuracy and responsiveness of the
   bandwidth allocation:

   1.  RTT is bounded by the inclusion of an _Interest Lifetime_ in each
       Interest message, which puts an upper bound on the RTT
       uncertainty for any given Interest/Data exchange.  If Interest
       lifetimes are kept reasonably short (a few RTTs), the allocation
       of local forwarder resources does not have to deal with an
       arbitrarily long tail.  One could in fact do a deterministic
       allocation on this basis, but the result would be highly
       pessimistic.  Nevertheless, having a cut-off does improve the
       performance of an optimistic allocation scheme.

   2.  Returning Data packets can be congestion marked by an ECN-like
       marking scheme if the inverse link starts experiencing long queue
       occupancy or other congestion indication.
       Unlike TCP/IP, where the rate adjustment can only be done
       end-to-end, this feedback is usable immediately by the downstream
       ICN forwarder, and the Interest shaping rate can be lowered after
       a single link RTT.  This may allow less pessimistic rate
       adjustment schemes than the Additive Increase, Multiplicative
       Decrease (AIMD) with 0.5 multiplier that is commonly used on
       TCP/IP networks.  It also allows the rate adjustments to be
       spread more accurately among the Interest/Data flows traversing a
       link sending congestion signals.

   A useful discussion of these properties and how they demonstrate the
   advantages of ICN approaches to congestion control can be found in
   [Carofiglio2016].

4.  What can we control to achieve QoS in ICN?

   QoS is achieved through managed unfairness in the allocation of
   resources in network elements, particularly in the routers doing the
   forwarding of ICN packets.  So, a first order question is what
   resources need to be allocated, and how to ascertain which traffic
   gets what allocations.  In the case of CCNx or NDN, the important
   network element resources are:

    +===============+===============================================+
    | Resource      | ICN Usage                                     |
    +===============+===============================================+
    | Communication | buffering for queued packets                  |
    | Link capacity |                                               |
    +---------------+-----------------------------------------------+
    | Content Store | to hold cached data                           |
    | capacity      |                                               |
    +---------------+-----------------------------------------------+
    | Forwarder     | for the Pending Interest Table (PIT)          |
    | memory        |                                               |
    +---------------+-----------------------------------------------+
    | Compute       | for forwarding packets, including the cost of |
    | capacity      | Forwarding Information Base (FIB) lookups     |
    +---------------+-----------------------------------------------+

            Table 2: ICN-related Network Element Resources

   For these resources, any QoS scheme has to specify two things:

   1.  How do you create _equivalence classes_ (a.k.a. flows) of traffic
       to which different QoS treatments are applied?

   2.  What are the possible treatments, and how are those mapped to the
       resource allocation algorithms?

   Two critical facts of life come into play when designing a QoS
   scheme.  First, the number of equivalence classes that can be
   simultaneously tracked in a network element is bounded by both the
   memory and the processing capacity to do the necessary lookups.  One
   can allow very fine-grained equivalence classes, but not be able to
   employ them globally because of scaling limits of core routers.  That
   means it is wise either to restrict the range of equivalence classes,
   or to allow them to be _aggregated_, trading off accuracy in policing
   traffic against the ability to scale.

   Second, the flexibility of expressible treatments can be tightly
   constrained by both protocol encoding and algorithmic limitations.
   The ability to encode the treatment requests in the protocol can be
   limited (as it is for IP - there are only 6 Type of Service (TOS)
   bits available for Diffserv treatments), but as or more important is
   whether there are practical traffic policing, queuing, and pacing
   algorithms that can be combined to support a rich set of QoS
   treatments.

   The two considerations above in combination can easily be
   substantially more expressive than what can be achieved in practice
   with the available number of queues on real network interfaces or the
   amount of per-packet computation needed to enqueue or dequeue a
   packet.

5.  How does this relate to QoS in TCP/IP?
   TCP/IP has fewer resource types to manage than ICN, and in some cases
   the allocation methods are simpler, as shown in the following table:

    +===============+=============+================================+
    | Resource      | IP Relevant | TCP/IP Usage                   |
    +===============+=============+================================+
    | Communication | YES         | buffering for queued packets   |
    | Link capacity |             |                                |
    +---------------+-------------+--------------------------------+
    | Content Store | NO          | no content store in IP         |
    | capacity      |             |                                |
    +---------------+-------------+--------------------------------+
    | Forwarder     | MAYBE       | not needed for output-buffered |
    | memory        |             | designs^(*)                    |
    +---------------+-------------+--------------------------------+
    | Compute       | YES         | for forwarding packets, but    |
    | capacity      |             | arguably much cheaper than ICN |
    +---------------+-------------+--------------------------------+

            Table 3: IP-related Network Element Resources

   ^(*) Output-buffered designs are those where all packet buffering
   resources are associated with the output interfaces and there are no
   receiver interface or internal forwarding buffers that can be
   oversubscribed.  Output-buffered switches or routers are common but
   not universal, as they generally require an internal speed-up factor
   where forwarding capacity is greater than the sum of the input
   capacity of the interfaces.
   For these resources, IP has specified three fundamental things, as
   shown in the following table:

    +==============+===========================================+
    | What         | How                                       |
    +==============+===========================================+
    | *Equivalence | subset+prefix match on IP                 |
    | classes*     | 5-tuple {SA,DA,SP,DP,PT}                  |
    |              |   SA=Source Address                       |
    |              |   DA=Destination Address                  |
    |              |   SP=Source Port                          |
    |              |   DP=Destination Port                     |
    |              |   PT=IP Protocol Type                     |
    +--------------+-------------------------------------------+
    | *Diffserv    | (very) small number of globally-agreed    |
    | treatments*  | traffic classes                           |
    +--------------+-------------------------------------------+
    | *Intserv     | per-flow parameterized _Controlled Load_  |
    | treatments*  | and _Guaranteed_ service classes          |
    +--------------+-------------------------------------------+

     Table 4: Fundamental protocol elements to achieve QoS for TCP/IP

   Equivalence classes for IP can be pairwise, by matching against both
   source and destination address+port; pure group, using only the
   destination address+port; or source-specific multicast, with source
   address+port and destination multicast address+port.

   With Intserv, the Resource ReSerVation signaling protocol (RSVP)
   [RFC2205] carries two data structures, the Flow Specifier (FLOWSPEC)
   and the Traffic Specifier (TSPEC).  The former fulfills the
   requirement to identify the equivalence class to which the QoS being
   signaled applies.  The latter comprises the desired QoS treatment,
   along with a description of the dynamic character of the traffic
   (e.g. average bandwidth and delay, peak bandwidth, etc.).  Both of
   these encounter substantial scaling limits, which has meant that
   Intserv has historically been limited to confined topologies and/or
   high-value usages, like traffic engineering.
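   The "subset+prefix match on IP 5-tuple" of Table 4 can be
   illustrated as follows.  The rule format and field names are
   invented for illustration; real classifiers use optimized data
   structures rather than linear scans:

```python
# Illustrative IP equivalence-class classifier: each class is a partial
# (subset) match on the 5-tuple {SA, DA, SP, DP, PT}; a None (absent)
# field is a wildcard.  Rule format is hypothetical.

import ipaddress

def matches(rule, pkt):
    # Exact-match fields: source port, destination port, protocol type.
    for key in ("sp", "dp", "pt"):
        if rule.get(key) is not None and pkt[key] != rule[key]:
            return False
    # Prefix-match fields: source and destination address.
    for rule_key, pkt_key in (("sa_prefix", "sa"), ("da_prefix", "da")):
        prefix = rule.get(rule_key)
        if prefix is not None and \
           ipaddress.ip_address(pkt[pkt_key]) not in ipaddress.ip_network(prefix):
            return False
    return True

def classify(rules, pkt):
    # First matching rule wins; unmatched traffic is best-effort.
    for name, rule in rules:
        if matches(rule, pkt):
            return name
    return "best-effort"
```

   Note how the port fields do double duty here, both demultiplexing
   application instances and selecting the QoS class, which is exactly
   the confounding that Section 6.1 criticizes.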
   With Diffserv, the protocol encoding (6 bits in the TOS field of the
   IP header) artificially limits the number of classes one can specify.
   These are documented in [RFC4594].  Nonetheless, when used with
   fine-grained equivalence classes, one still runs into limits on the
   number of queues required.

6.  Why is ICN Different?  Can we do Better?

   While one could adopt an approach to QoS mirroring the extensive
   experience with TCP/IP, this would, in the author's view, be a
   mistake.  The implementation and deployment of QoS in IP networks has
   been spotty at best.  There are of course economic and political
   reasons as well as technical reasons for these mixed results, but
   there are several architectural choices in ICN that make it a
   potentially much better protocol base to enhance with QoS machinery.
   This section discusses those differences and their consequences.

6.1.  Equivalence class capabilities

   First and foremost, hierarchical names are a much richer basis for
   specifying equivalence classes than IP 5-tuples.  The IP address (or
   prefix) can only separate traffic by topology to the granularity of
   hosts, and cannot express actual computational instances or sets of
   data.  Ports give some degree of per-instance demultiplexing, but
   this tends to be both coarse and ephemeral, while confounding the
   demultiplexing function with the assignment of QoS treatments to
   particular subsets of the data.  Some degree of finer granularity is
   possible with IPv6 by exploiting the ability to use up to 64 bits of
   address for classifying traffic.  In fact, the hICN project
   [I-D.muscariello-intarea-hicn], while adopting the request-response
   model of CCNx, uses IPv6 addresses as the available namespace, and
   IPv6 packets (plus "fake" TCP headers) as the wire format.

   Nonetheless, the flexibility of tokenized (i.e. strings treated as
   opaque tokens), variable-length, hierarchical names allows one to
   directly associate classes of traffic for QoS purposes with the
   structure of an application namespace.  The classification can be as
   coarse or fine-grained as desired by the application.  While not
   _always_ the case, there is typically a straightforward association
   between how objects are named and how they are grouped together for
   common treatment.  Examples abound; a number can be conveniently
   found in [I-D.moiseenko-icnrg-flowclass].

6.2.  Topology interactions with QoS

   In ICN, QoS is not pre-bound to network topology, since names are
   non-topological, unlike unicast IP addresses.  This allows QoS to be
   applied to multi-destination and multi-path environments in a
   straightforward manner, rather than requiring either multicast with
   coarse class-based scheduling or complex signaling like that in
   RSVP-TE [RFC3209], which is needed to make point-to-multipoint
   Multi-Protocol Label Switching (MPLS) work.

   Because of IP's stateless forwarding model, complicated by the
   ubiquity of asymmetric routes, any flow-based QoS requires state that
   is decoupled from the actual arrival of traffic and hence must be
   maintained, at least as soft state, even during quiescent periods.
   Intserv, for example, requires flow signaling with state O(#flows).
   ICN, even in the worst case, requires state O(#active Interest/Data
   exchanges), since state can be instantiated on arrival of an
   Interest, and removed (perhaps lazily) once the data has been
   returned.

6.3.  Specification of QoS treatments

   Unlike Intserv, Diffserv eschews signaling in favor of class-based
   configuration of resources and queues in network elements.  However,
   Diffserv limits traffic treatments to a few bits taken from the ToS
   field of IP.
No such wire encoding limitations exist for NDN or 606 CCNx, as the protocol is completely TLV (Type-Length-Value) based, 607 and one (or even more than one) new field can be easily defined to 608 carry QoS treatment information. 610 Therefore, there are greenfield possibilities for more powerful QoS 611 treatment options in ICN. For example, IP has no way to express a 612 QoS treatment like "try hard to deliver reliably, even at the expense 613 of delay or bandwidth". Such a QoS treatment for ICN could invoke 614 native ICN mechanisms, none of which are present in IP, such as: 616 * In-network retransmission in response to hop-by-hop errors 617 returned from upstream forwarders 619 * Trying multiple paths to multiple content sources either in 620 parallel or serially 622 * Assigning higher precedence for short-term caching to recover from 623 downstream^(*) errors 625 * Coordinating cache utilization with forwarding resources 627 | ^(*)_Downstream_ refers to the direction Data messages flow 628 | toward the consumer (the issuer of Interests). Conversely, 629 | _Upstream_ refers to the direction Interests flow toward the 630 | producer of data. 632 Such mechanisms are typically described in NDN and CCNx as 633 _forwarding strategies_. However, little or no guidance is given for 634 what application actions or protocol machinery is used to decide 635 which forwarding strategy to use for the Interests that arrive at a 636 forwarder. See [BenAbraham2018] for an investigation of these 637 issues. Associating forwarding strategies directly with the equivalence 638 classes and QoS treatments can make them more accessible and 639 useful to implement and deploy. 641 Stateless forwarding and asymmetric routing in IP limit the available 642 state/feedback for managing link resources. In contrast, NDN or CCNx 643 forwarding allows all link resource allocation to occur as part of 644 Interest forwarding, potentially simplifying things considerably.
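To make the Interest-time resource allocation model concrete, the following Python sketch is purely illustrative: the prefix table, treatment labels, and byte-accounting scheme are invented for this example and appear in no ICN specification. It shows a forwarder that classifies each Interest by longest-prefix match on its hierarchical name, reserves reverse-link capacity when the Interest is forwarded, and releases that capacity when the matching Data arrives.

```python
# Illustrative sketch only: a CCNx/NDN-style forwarder that derives a QoS
# equivalence class from a hierarchical name and allocates reverse-link
# resources at Interest time. All prefixes and treatment labels are
# hypothetical.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class PitEntry:
    name: str
    treatment: str       # QoS treatment recorded for the pending Interest
    reserved_bytes: int  # reverse-link capacity held for the returning Data

class Forwarder:
    def __init__(self, link_budget_bytes: int) -> None:
        self.link_budget = link_budget_bytes
        self.pit: Dict[str, PitEntry] = {}
        # Longest-prefix table mapping name prefixes to equivalence classes.
        self.classes = {"/video/live": "low-latency", "/sensors": "reliable"}

    def classify(self, name: str) -> str:
        """Longest-prefix match of the hierarchical name against the table."""
        best, best_len = "best-effort", 0
        for prefix, treatment in self.classes.items():
            if name.startswith(prefix) and len(prefix) > best_len:
                best, best_len = treatment, len(prefix)
        return best

    def on_interest(self, name: str, expected_data_bytes: int) -> bool:
        """Reserve reverse-link capacity; reject the Interest if exhausted."""
        if expected_data_bytes > self.link_budget:
            return False  # early congestion feedback toward the consumer
        self.link_budget -= expected_data_bytes
        self.pit[name] = PitEntry(name, self.classify(name), expected_data_bytes)
        return True

    def on_data(self, name: str) -> Optional[str]:
        """Release the reservation; return the treatment to apply to the Data."""
        entry = self.pit.pop(name, None)
        if entry is None:
            return None  # no PIT entry: unsolicited Data, discard
        self.link_budget += entry.reserved_bytes
        return entry.treatment
```

Rejecting the Interest at reservation time is what provides the early feedback described above; no separate signaling protocol is involved, and the state lives only as long as the Interest/Data exchange.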
In 645 particular, with symmetric routing, producers have no control over 646 the paths their data packets traverse, and hence any QoS treatments 647 intended to influence routing paths from producer to consumer will 648 have no effect. 650 One complication in the handling of ICN QoS treatments has no analog 651 in IP and hence is worth mentioning. CCNx and NDN both perform _Interest 652 aggregation_ (see Section 2.3.2 of [RFC8569]). If an Interest 653 arrives matching an existing PIT entry, but with a different QoS 654 treatment from an Interest already forwarded, it can be tricky to 655 decide whether to aggregate the Interest or forward it, and how to 656 keep track of the differing QoS treatments for the two Interests. 657 Exploration of the details surrounding these situations is beyond the 658 scope of this document; further discussion can be found for the 659 general case of flow balance and congestion control in 660 [I-D.oran-icnrg-flowbalance], and specifically for QoS treatments in 661 [I-D.anilj-icnrg-dnc-qos-icn]. 663 6.4. ICN forwarding semantics effect on QoS 665 IP has three forwarding semantics, with different QoS needs (Unicast, 666 Anycast, Multicast). ICN has a single forwarding semantic, so any 667 QoS machinery can be uniformly applied across any request/response 668 invocation. This applies whether the forwarder employs dynamic 669 destination routing, multi-destination forwarding with next-hops 670 tried serially, multi-destination with next-hops used in parallel, or 671 even localized flooding (e.g. directly on L2 multicast mechanisms). 672 Additionally, the pull-based model of ICN avoids a number of thorny 673 multicast QoS problems that IP has ([Wang2000], [RFC3170], 674 [Tseng2003]). 676 The multi-destination/multi-path forwarding model in ICN changes 677 resource allocation needs in a fairly deep way.
IP treats all 678 endpoints as open-loop packet sources, whereas NDN and CCNx have 679 strong asymmetry between producers and consumers as packet sources. 681 6.5. QoS interactions with Caching 683 IP has no caching in routers, whereas ICN needs ways to allocate 684 cache resources. Treatments to control caching operation are 685 unlikely to look much like the treatments used to control link 686 resources. NDN and CCNx already have useful cache control directives 687 associated with Data messages. The CCNx controls include: 689 ExpiryTime: time after which a cached Content Object is considered 690 expired and MUST no longer be used to respond to an Interest from 691 a cache. 693 Recommended Cache Time: time after which the publisher considers the 694 Content Object to be of low value to cache. 696 See [RFC8569] for the formal definitions. 698 ICN flow classifiers, such as those in 699 [I-D.moiseenko-icnrg-flowclass], can be used to achieve soft or hard 700 partitioning^(*) of cache resources in the content store of an ICN 701 forwarder. For example, cached content for a given equivalence class 702 can be considered _fate shared_ in a cache, whereby objects from the 703 same equivalence class can be purged as a group rather than 704 individually. This can recover cache space more quickly and at lower 705 overhead than pure per-object replacement when a cache is under 706 extreme pressure and in danger of thrashing. In addition, since the 707 forwarder remembers the QoS treatment for each pending Interest in 708 its PIT, the above cache controls can be augmented by policy to 709 prefer retention of cached content for some equivalence classes as 710 part of the cache replacement algorithm. 712 | ^(*)With hard partitioning, there are dedicated cache resources 713 | for each equivalence class (or enumerated list of equivalence 714 | classes). With soft partitioning, resources are at least 715 | partly shared among the (sets of) equivalence classes of 716 | traffic. 718 7.
Strawman principles for an ICN QoS architecture 720 Based on the observations made in the earlier sections, this summary 721 section captures the author's ideas for clear and actionable 722 architectural principles for how to incorporate QoS machinery into 723 ICN protocols like NDN and CCNx. Hopefully, they can guide further 724 work and focus effort on portions of the giant design space for QoS 725 that have the best tradeoffs in terms of flexibility, simplicity, and 726 deployability. 728 *Define equivalence classes using the name hierarchy rather than 729 creating an independent traffic class definition*. This directly 730 associates the specification of equivalence classes of traffic with 731 the structure of the application namespace. It can allow 732 hierarchical decomposition of equivalence classes in a natural way 733 because of the way hierarchical ICN names are constructed. Two 734 practical mechanisms are presented in [I-D.moiseenko-icnrg-flowclass] 735 with different tradeoffs between security and the ability to 736 aggregate flows. Either prefix-based (EC3) or explicit name 737 component based (ECNT) classification, or both, could be adopted as part of the 738 QoS architecture for defining equivalence classes. 740 *Put consumers in control of Link and Forwarding resource 741 allocation*. Do all link buffering and forwarding (both memory and 742 CPU) resource allocations based on Interest arrivals. This is 743 attractive because it provides early congestion feedback to 744 consumers, and allows scheduling the reverse link direction ahead of 745 time for carrying the matching data. It makes enforcement of QoS 746 treatments a single-ended (i.e. at the consumer) rather than a 747 double-ended problem and can avoid wasting resources on fetching data 748 that will wind up dropped when it arrives at a bottleneck link. 750 *Allow producers to influence the allocation of cache resources*.
751 Producers want to affect caching decisions in order to: 753 * Shed load by having Interests served by content stores in 754 forwarders before reaching the producer itself. 756 * Survive transient producer reachability problems or link outages close to 757 the producer. 759 For caching to be effective, individual Data objects in an 760 equivalence class need to have similar treatment; otherwise well- 761 known cache thrashing pathologies due to self-interference emerge. 762 Producers have the most direct control over caching policies through 763 the caching directives in Data messages. It therefore makes sense to 764 put the producer, rather than the consumer or network operator, in 765 charge of specifying these equivalence classes. 767 See [I-D.moiseenko-icnrg-flowclass] for specific mechanisms to 768 achieve this. 770 *Allow consumers to influence the allocation of cache resources*. 771 Consumers want to affect caching decisions in order to: 773 * Reduce latency for retrieving data 774 * Survive transient outages of either a producer or links close to 775 the consumer 777 Consumers can have indirect control over caching by specifying QoS 778 treatments in their Interests. Consider the following potential QoS 779 treatments by consumers that can drive caching policies: 781 * A QoS treatment requesting better robustness against transient 782 disconnection can be used by a forwarder close to the consumer (or 783 downstream of an unreliable link) to preferentially cache the 784 corresponding data. 786 * Conversely, a QoS treatment, together with or in addition to a 787 request for short latency, indicating that new data will be 788 requested soon enough that caching the currently requested data 789 would be ineffective, and hence that forwarders should honor only 790 the caching preferences of the producer. 792 * A QoS treatment indicating a mobile consumer likely to incur a 793 mobility event within an RTT (or a few RTTs).
Such a treatment 794 would allow a mobile network operator to preferentially cache the 795 data at a forwarder positioned at a _join point_ or _rendezvous 796 point_ of their topology. 798 *Give network operators the ability to match customer SLAs to cache 799 resource availability*. Network operators, whether closely tied 800 administratively to producer or consumer, or constituting an 801 independent transit administration, provide the storage resources in 802 the ICN forwarders. Therefore, they are the ultimate arbiters of how 803 the cache resources are managed. In addition to any local policies 804 they may enforce, the cache behavior from the QoS standpoint emerges 805 from how the producer-specified equivalence classes map onto cache 806 space availability, including whether cache entries are treated 807 individually, or fate-shared. Forwarders also determine how the 808 consumer-specified QoS treatments map to the precedence used for 809 retaining Data objects in the cache. 811 Besides utilizing cache resources to meet the QoS goals of individual 812 producers and consumers, network operators also want to manage their 813 cache resources in order to: 815 * Ameliorate congestion hotspots by reducing load converging on 816 producers they host on their network. 818 * Improve Interest satisfaction rates by utilizing caches as short- 819 term retransmission buffers to recover from transient producer 820 reachability problems, link errors or link outages. 822 * Improve both latency and reliability in environments where 823 consumers are mobile in the operator's topology. 825 *Re-think how to specify traffic treatments - don't just copy 826 Diffserv*. Some of the Diffserv classes may form a good starting 827 point, as their mapping onto queuing algorithms for managing link 828 buffering is well understood. However, Diffserv alone does not 829 allow one to express latency versus reliability tradeoffs or other 830 useful QoS treatments.
Nor does it permit "Traffic Specification 831 (TSPEC)"-style traffic descriptions as are allowed in a signaled QoS 832 scheme. Here are some examples: 834 * A "burst" treatment, where an initial Interest gives an aggregate 835 data size to request allocation of link capacity for a large burst 836 of Interest/Data exchanges. The Interest can be rejected at any 837 hop if the resources are not available. Such a treatment can also 838 accommodate Data implosion produced by the discovery procedures of 839 management protocols like [I-D.irtf-icnrg-ccninfo]. 841 * A "reliable" treatment, which affects preference for allocation of 842 PIT space for the Interest and Content Store space for the data in 843 order to improve the robustness of IoT data delivery in 844 constrained environments, as is described in 845 [I-D.gundogan-icnrg-iotqos]. 847 * A "search" treatment, which, within the specified Interest 848 Lifetime, tries many paths either in parallel or serially to 849 potentially many content sources, to maximize the probability that 850 the requested item will be found. This is done at the expense of 851 the extra bandwidth of both forwarding Interests and receiving 852 multiple responses upstream of an aggregation point. The 853 treatment can encode a value expressing tradeoffs like breadth- 854 first versus depth-first search, and bounds on the total resource 855 expenditure. Such a treatment would be useful for instrumentation 856 protocols like [I-D.irtf-icnrg-icntraceroute]. 858 | As an aside, loose latency control (on the order of seconds or 859 | tens of milliseconds as opposed to milliseconds or microseconds) 860 | can be achieved by bounding Interest Lifetime as long as this 861 | lifetime machinery is not also used as an application mechanism 862 | to provide subscriptions or to establish path traces for 863 | producer mobility. See [Krol2018] for a discussion of the 864 | network versus application timescale issues in ICN protocols. 866 7.1.
Can Intserv-like traffic control in ICN provide richer QoS 867 semantics? 869 Basic QoS treatments such as those summarized above may not be 870 adequate to cover the whole range of application utility functions 871 and deployment environments we expect for ICN. While it is true that 872 one does not necessarily need a separate signaling protocol like RSVP 873 given the state carried in the ICN data plane by forwarders, there 874 are some potentially important capabilities not provided by just 875 simple QoS treatments applied to per-Interest/Data exchanges. 876 Intserv's richer QoS capabilities may be of value, especially if they 877 can be provided in ICN at lower complexity and protocol overhead than 878 Intserv+RSVP. 880 There are three key capabilities missing from Diffserv-like QoS 881 treatments, no matter how sophisticated they may be in describing the 882 desired treatment for a given equivalence class of traffic. Intserv- 883 like QoS provides all of these: 885 1. The ability to *describe traffic flows* in a mathematically 886 meaningful way. This is done through parameters like average 887 rate, peak rate, and maximum burst size. The parameters are 888 encapsulated in a data structure called a "TSPEC", which can be 889 placed in whatever protocol needs the information (in the case of 890 TCP/IP Intserv, this is RSVP). 892 2. The ability to perform *admission control*, where the element 893 requesting the QoS treatment can know _before_ introducing 894 traffic whether the network elements have agreed to provide the 895 requested traffic treatment. An important side-effect of 896 providing this assurance is that the network elements install 897 state that allows the forwarding and queuing machinery to police 898 and shape the traffic in a way that provides a sufficient degree 899 of _isolation_ from the dynamic behavior of other traffic.
900 Depending on the admission control mechanism, it may or may not 901 be possible to explicitly release that state when the application 902 no longer needs the QoS treatment. 904 3. The ability to specify the permissible *degree of divergence* of the actual traffic 905 handling from the requested handling. Intserv provided two 906 choices here: the _controlled load_ service and the _guaranteed_ 907 service. The former allows stochastic deviation equivalent to 908 what one would experience on an unloaded path of a packet 909 network. The latter conforms to the TSPEC deterministically, at 910 the obvious expense of demanding extremely conservative resource 911 allocation. 913 Given the limited applicability of these capabilities in today's 914 Internet, the author does not take any position as to whether any of 915 these Intserv-like capabilities are needed for ICN to be successful. 916 However, a few things seem important to consider. The following 917 paragraphs speculate about the consequences for the CCNx or NDN 918 protocol architectures of incorporating these features. 920 Superficially, it would be quite straightforward to accommodate 921 Intserv-equivalent traffic descriptions in CCNx or NDN. One could 922 define a new TLV for the Interest message to carry a TSPEC. A 923 forwarder encountering this, together with a QoS treatment request 924 (e.g. as proposed in Section 6.3), could associate the traffic 925 specification with the corresponding equivalence class derived from 926 the name in the Interest. This would allow the forwarder to create 927 state that not only would apply to the returning Data for that 928 Interest when being queued on the downstream interface, but be 929 maintained as soft state across multiple Interest/Data exchanges to 930 drive policing and shaping algorithms at per-flow granularity.
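As a concrete (and entirely hypothetical) illustration of such soft state, the sketch below keeps a token bucket per equivalence class, installed or refreshed from a TSPEC-like rate/burst pair carried in an Interest, and consulted when the matching Data is queued downstream. The field names and idle-timeout value are invented for this example; nothing here is a defined CCNx or NDN encoding.

```python
# Hypothetical sketch: per-equivalence-class soft state created from a
# TSPEC-like (rate, burst) pair carried in an Interest, used to police the
# returning Data. Not a defined CCNx/NDN mechanism.
import time
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Tspec:
    rate_bps: float     # average rate (bits per second)
    burst_bytes: float  # maximum burst size (bytes)

@dataclass
class FlowState:
    tspec: Tspec
    tokens: float                       # current token-bucket fill (bytes)
    last_refill: float = field(default_factory=time.monotonic)
    expires: float = 0.0                # soft state: dropped after idle timeout

class Policer:
    IDLE_TIMEOUT = 30.0  # seconds without a refreshing Interest

    def __init__(self) -> None:
        self.flows: Dict[str, FlowState] = {}

    def on_interest(self, eq_class: str, tspec: Tspec) -> None:
        """Install or refresh soft state from the TSPEC in an Interest."""
        now = time.monotonic()
        st = self.flows.get(eq_class)
        if st is None:
            st = FlowState(tspec, tokens=tspec.burst_bytes)
            self.flows[eq_class] = st
        st.tspec = tspec            # last writer wins; see note below
        st.expires = now + self.IDLE_TIMEOUT

    def conforms(self, eq_class: str, data_bytes: int) -> bool:
        """Token-bucket check applied when the matching Data is queued."""
        st = self.flows.get(eq_class)
        if st is None:
            return True  # no TSPEC state installed: treat as unpoliced
        now = time.monotonic()
        st.tokens = min(st.tspec.burst_bytes,
                        st.tokens + st.tspec.rate_bps / 8 * (now - st.last_refill))
        st.last_refill = now
        if data_bytes <= st.tokens:
            st.tokens -= data_bytes
            return True
        return False  # non-conforming: candidate for delay or drop
```

The state scales as the text warns, O(#equivalence classes with live TSPECs), and the conflict between differing TSPECs arriving in different Interests for the same class is resolved here by last-writer-wins, which may well be too naive in practice.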
The 931 cost in Interest message overhead would be modest; however, the 932 complications associated with managing different traffic 933 specifications in different Interests for the same equivalence class 934 might be substantial. Of course, all the scalability considerations 935 associated with maintaining per-flow state also come into play. 937 Similarly, it would be straightforward to have a way to 938 express the degree of divergence capability that Intserv provides 939 through its controlled load and guaranteed service definitions. This 940 could either be packaged with the traffic specification or encoded 941 separately. 943 In contrast to the above, performing admission control for ICN flows 944 is likely to be just as heavy-weight as it turned out to be with IP 945 using RSVP. The dynamic multi-path, multi-destination forwarding 946 model of ICN makes performing admission control particularly tricky. 947 Just to illustrate: 949 * Forwarding next-hop selection is not confined to single paths (or 950 a few ECMP-equivalent paths) as it is with IP, making it difficult 951 to know where to install state in advance of the arrival of an 952 Interest to forward. 954 * As with point-to-multipoint complexities when using RSVP for MPLS- 955 TE, state has to be installed to multiple producers over multiple 956 paths before an admission control algorithm can commit the 957 resources and say "yes" to a consumer needing admission control 958 capabilities. 960 * Knowing when to remove admission control state is difficult in the 961 absence of a heavy-weight resource reservation protocol. Soft 962 state timeout may or may not be an adequate answer. 964 Despite the challenges above, it may be possible to craft an 965 admission control scheme for ICN that achieves the desired QoS goals 966 of applications without the invention and deployment of a complex 967 separate admission control signaling protocol.
There have been 968 designs in earlier network architectures that were capable of 969 performing admission control piggybacked on packet transmission. 971 | (The earliest example the author is aware of is [Autonet]). 973 Such a scheme might have the following general shape *(warning: 974 serious hand waving follows!)*: 976 * In addition to a QoS treatment and a traffic specification, an 977 Interest requesting admission for the corresponding equivalence 978 class would so indicate via a new TLV. It would also need to: (a) 979 indicate an expiration time after which any reserved resources can 980 be released, and (b) indicate that caches be bypassed, so that the 981 admission control request arrives at a bona fide producer. 983 * Each forwarder processing the Interest would check for resource 984 availability and, if the resources are not available or the requested service is not 985 feasible, reject the Interest with an admission control failure. 986 If resources are available, the forwarder would record the traffic 987 specification as described above and forward the Interest. 989 * If the Interest successfully arrives at a producer, the producer 990 returns the requested Data. 992 * Each on-path forwarder, on receiving the matching Data message, 993 performs the actual allocation if the resources are still available, and 994 marks the admission control TLV as "provisionally approved". 995 Conversely, if the resource reservation fails, the admission 996 control is marked "failed", although the Data is still passed 997 downstream. 999 * When the Data message arrives, the consumer knows whether admission 1000 succeeded, and subsequent Interests can rely on the QoS 1001 state being in place until either some failure occurs, or a 1002 topology or other forwarding change alters the forwarding path. 1003 To deal with this, additional machinery is needed to ensure 1004 subsequent Interests for an admitted flow either follow that path 1005 or an error is reported.
One possibility (also useful in many 1006 other contexts) is to employ a _Path Steering_ mechanism, such as 1007 the one described in [Moiseenko2017]. 1009 8. IANA Considerations 1011 This document does not require any IANA actions. 1013 9. Security Considerations 1015 There are a few ways in which QoS for ICN interacts with security and 1016 privacy issues. Since QoS addresses relationships among traffic 1017 rather than the inherent characteristics of traffic, it neither 1018 enhances nor degrades the security and privacy properties of the data 1019 being carried, as long as the machinery does not alter or otherwise 1020 compromise the basic security properties of the associated protocols. 1021 The QoS approaches advocated here for ICN can, however, serve to amplify 1022 existing threats to network traffic: 1024 * An attacker able to manipulate the QoS treatments of traffic can 1025 mount a more focused (and potentially more effective) denial of 1026 service attack by degrading the performance of traffic the attacker 1027 is targeting. Since the architecture here assumes QoS treatments 1028 are manipulable hop-by-hop, any on-path adversary can wreak havoc. 1029 Note, however, that in basic ICN, an on-path attacker can do this 1030 and more by dropping, delaying, or mis-routing traffic independent 1031 of any particular QoS machinery in use. 1033 * By explicitly revealing equivalence classes of traffic via either 1034 names or other fields in packets, an attacker has one more 1035 handle with which to discover the linkability of multiple requests. 1037 10. References 1039 10.1. Normative References 1041 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1042 Requirement Levels", BCP 14, RFC 2119, 1043 DOI 10.17487/RFC2119, March 1997, 1044 . 1046 [RFC8569] Mosko, M., Solis, I., and C. Wood, "Content-Centric 1047 Networking (CCNx) Semantics", RFC 8569, 1048 DOI 10.17487/RFC8569, July 2019, 1049 . 1051 [RFC8609] Mosko, M., Solis, I., and C.
Wood, "Content-Centric 1052 Networking (CCNx) Messages in TLV Format", RFC 8609, 1053 DOI 10.17487/RFC8609, July 2019, 1054 . 1056 10.2. Informative References 1058 [AS] "Autonomous System (Internet)", no date, 1059 . 1062 [Auge2018] Augé, J., Carofiglio, G., Grassi, G., Muscariello, L., 1063 Pau, G., and X. Zeng, "MAP-Me: Managing Anchor-Less 1064 Producer Mobility in Content-Centric Networks", in IEEE 1065 Transactions on Network and Service Management (Volume: 15, 1066 Issue: 2, June 2018), DOI 10.1109/TNSM.2018.2796720, 1067 June 2018, . 1069 [Autonet] Schroeder, M., Birrell, A., Burrows, M., Murray, H., 1070 Needham, R., Rodeheffer, T., Satterthwaite, E., and C. 1071 Thacker, "Autonet: a High-speed, Self-configuring Local 1072 Area Network Using Point-to-point Links", in IEEE Journal 1073 on Selected Areas in Communications (Volume: 9, Issue: 8, 1074 Oct 1991), DOI 10.1109/49.105178, October 1991, 1075 . 1078 [BenAbraham2018] 1079 Ben Abraham, H., Parwatikar, J., DeHart, J., Dresher, A., 1080 and P. Crowley, "Decoupling Information and Connectivity 1081 via Information-Centric Transport", in ICN '18: 1082 Proceedings of the 5th ACM Conference on Information- 1083 Centric Networking, September 21-23, 2018, Boston, MA, USA, 1084 DOI 10.1145/3267955.3267963, September 2018, 1085 . 1088 [Carofiglio2012] 1089 Carofiglio, G., Gallo, M., and L. Muscariello, "Joint hop- 1090 by-hop and receiver-driven Interest control protocol for 1091 content-centric networks", in ACM SIGCOMM Computer 1092 Communication Review, 1093 DOI 10.1016/j.comnet.2016.09.012, September 2012, 1094 . 1097 [Carofiglio2016] 1098 Carofiglio, G., Gallo, M., and L. Muscariello, "Optimal 1099 multipath congestion control and request forwarding in 1100 information-centric networks: Protocol design and 1101 experimentation", in Computer Networks, Vol. 110, No. 9, 1102 DOI 10.1145/2377677.2377772, December 2016, 1103 .
1105 [I-D.anilj-icnrg-dnc-qos-icn] 1106 Jangam, A., suthar, P., and M. Stolic, "QoS Treatments in 1107 ICN using Disaggregated Name Components", Work in 1108 Progress, Internet-Draft, draft-anilj-icnrg-dnc-qos-icn- 1109 02, 9 March 2020, . 1112 [I-D.gundogan-icnrg-iotqos] 1113 Gundogan, C., Schmidt, T., Waehlisch, M., Frey, M., Shzu- 1114 Juraschek, F., and J. Pfender, "Quality of Service for ICN 1115 in the IoT", Work in Progress, Internet-Draft, draft- 1116 gundogan-icnrg-iotqos-01, 8 July 2019, 1117 . 1120 [I-D.ietf-quic-transport] 1121 Iyengar, J. and M. Thomson, "QUIC: A UDP-Based Multiplexed 1122 and Secure Transport", Work in Progress, Internet-Draft, 1123 draft-ietf-quic-transport-32, 20 October 2020, 1124 . 1127 [I-D.irtf-icnrg-ccninfo] 1128 Asaeda, H., Ooka, A., and X. Shao, "CCNinfo: Discovering 1129 Content and Network Information in Content-Centric 1130 Networks", Work in Progress, Internet-Draft, draft-irtf- 1131 icnrg-ccninfo-05, 21 September 2020, 1132 . 1134 [I-D.irtf-icnrg-icntraceroute] 1135 Mastorakis, S., Gibson, J., Moiseenko, I., Droms, R., and 1136 D. Oran, "ICN Traceroute Protocol Specification", Work in 1137 Progress, Internet-Draft, draft-irtf-icnrg-icntraceroute- 1138 01, 10 October 2020, . 1141 [I-D.irtf-nwcrg-nwc-ccn-reqs] 1142 Matsuzono, K., Asaeda, H., and C. Westphal, "Network 1143 Coding for Content-Centric Networking / Named Data 1144 Networking: Considerations and Challenges", Work in 1145 Progress, Internet-Draft, draft-irtf-nwcrg-nwc-ccn-reqs- 1146 04, 2 September 2020, . 1149 [I-D.moiseenko-icnrg-flowclass] 1150 Moiseenko, I. and D. Oran, "Flow Classification in 1151 Information Centric Networking", Work in Progress, 1152 Internet-Draft, draft-moiseenko-icnrg-flowclass-06, 12 1153 July 2020, . 1156 [I-D.muscariello-intarea-hicn] 1157 Muscariello, L., Carofiglio, G., Auge, J., Papalini, M., 1158 and M. 
Sardara, "Hybrid Information-Centric Networking", 1159 Work in Progress, Internet-Draft, draft-muscariello- 1160 intarea-hicn-04, 20 May 2020, 1161 . 1164 [I-D.oran-icnrg-flowbalance] 1165 Oran, D., "Maintaining CCNx or NDN flow balance with 1166 highly variable data object sizes", Work in Progress, 1167 Internet-Draft, draft-oran-icnrg-flowbalance-04, 24 August 1168 2020, . 1171 [Krol2018] Król, M., Habak, K., Oran, D., Kutscher, D., and I. 1172 Psaras, "RICE: Remote Method Invocation in ICN", in 1173 ICN'18: Proceedings of the 5th ACM Conference on 1174 Information-Centric Networking September 21-23, 2018, 1175 Boston, MA, USA, DOI 10.1145/3267955.3267956, September 1176 2018, . 1179 [Mahdian2016] 1180 Mahdian, M., Arianfar, S., Gibson, J., and D. Oran, 1181 "MIRCC: Multipath-aware ICN Rate-based Congestion 1182 Control", in Proceedings of the 3rd ACM Conference on 1183 Information-Centric Networking, 1184 DOI 10.1145/2984356.2984365, September 2016, 1185 . 1188 [minmaxfairness] 1189 "Max-min Fairness", no date, 1190 . 1192 [Moiseenko2017] 1193 Moiseenko, I. and D. Oran, "Path Switching in Content 1194 Centric and Named Data Networks", in ICN '17: Proceedings 1195 of the 4th ACM Conference on Information-Centric 1196 Networking, DOI 10.1145/3125719.3125721, September 2017, 1197 . 1200 [NDN] "Named Data Networking", various, 1201 . 1203 [NDNTutorials] 1204 "NDN Tutorials", various, 1205 . 1207 [Oran2018QoSslides] 1208 Oran, D., "Thoughts on Quality of Service for NDN/CCN- 1209 style ICN protocol architectures", presented at ICNRG 1210 Interim Meeting, Cambridge MA, 24 September 2018, 1211 . 1215 [proportionalfairness] 1216 "Proportionally Fair", no date, 1217 . 1219 [RFC0793] Postel, J., "Transmission Control Protocol", STD 7, 1220 RFC 793, DOI 10.17487/RFC0793, September 1981, 1221 . 1223 [RFC2205] Braden, R., Ed., Zhang, L., Berson, S., Herzog, S., and S. 
1224 Jamin, "Resource ReSerVation Protocol (RSVP) -- Version 1 1225 Functional Specification", RFC 2205, DOI 10.17487/RFC2205, 1226 September 1997, . 1228 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black, 1229 "Definition of the Differentiated Services Field (DS 1230 Field) in the IPv4 and IPv6 Headers", RFC 2474, 1231 DOI 10.17487/RFC2474, December 1998, 1232 . 1234 [RFC2998] Bernet, Y., Ford, P., Yavatkar, R., Baker, F., Zhang, L., 1235 Speer, M., Braden, R., Davie, B., Wroclawski, J., and E. 1236 Felstaine, "A Framework for Integrated Services Operation 1237 over Diffserv Networks", RFC 2998, DOI 10.17487/RFC2998, 1238 November 2000, . 1240 [RFC3170] Quinn, B. and K. Almeroth, "IP Multicast Applications: 1241 Challenges and Solutions", RFC 3170, DOI 10.17487/RFC3170, 1242 September 2001, . 1244 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 1245 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP 1246 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001, 1247 . 1249 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1250 Congestion Control Protocol (DCCP)", RFC 4340, 1251 DOI 10.17487/RFC4340, March 2006, 1252 . 1254 [RFC4594] Babiarz, J., Chan, K., and F. Baker, "Configuration 1255 Guidelines for DiffServ Service Classes", RFC 4594, 1256 DOI 10.17487/RFC4594, August 2006, 1257 . 1259 [RFC4960] Stewart, R., Ed., "Stream Control Transmission Protocol", 1260 RFC 4960, DOI 10.17487/RFC4960, September 2007, 1261 . 1263 [Schneider2016] 1264 Schneider, K., Yi, C., Zhang, B., and L. Zhang, "A 1265 Practical Congestion Control Scheme for Named Data 1266 Networking", in ACM-ICN '16: Proceedings of the 3rd ACM 1267 Conference on Information-Centric Networking, 1268 DOI 10.1145/2984356.2984369, September 2016, 1269 . 1272 [Shenker2006] 1273 Shenker, S., "Fundamental Design Issues for the Future 1274 Internet", in IEEE Journal on Selected Areas in 1275 Communications, Vol. 13, No.
7, DOI 10.1109/49.414637, 1276 September 1995, 1277 . 1279 [Song2018] Song, J., Lee, M., and T. Kwon, "SMIC: Subflow-level 1280 Multi-path Interest Control for Information Centric 1281 Networking", in ICN '18: Proceedings of the 5th ACM 1282 Conference on Information-Centric Networking, 1283 DOI 10.1145/3267955.3267971, September 2018, 1284 . 1287 [Tseng2003] 1288 Tseng, CH.J., "The performance of QoS-aware IP multicast 1289 routing protocols", in Networks, Vol:42, No:2, 1290 DOI 10.1002/net.10084, September 2003, 1291 . 1294 [Wang2000] Wang, B. and J.C. Hou, "Multicast routing and its QoS 1295 extension: problems, algorithms, and protocols", in IEEE 1296 Network, Vol:14, Issue:1, Jan/Feb 2000, 1297 DOI 10.1109/65.819168, January 2000, 1298 . 1301 [Wang2013] Wang, Y., Rozhnova, N., Narayanan, A., Oran, D., and I. 1302 Rhee, "An Improved Hop-by-hop Interest Shaper for 1303 Congestion Control in Named Data Networking", in ICN '13: 1304 Proceedings of the 3rd ACM SIGCOMM workshop on 1305 Information-centric networking, 1306 DOI 10.1145/2534169.2491233, August 2013, 1307 . 1310 Author's Address 1312 Dave Oran 1313 Network Systems Research and Design 1314 4 Shady Hill Square 1315 Cambridge, MA 02138 1316 United States of America 1318 Email: daveoran@orandom.net