2 Network Shaofu. Peng 3 Internet-Draft Bin. Tan 4 Intended status: Standards Track ZTE Corporation 5 Expires: September 2, 2022 Peng. Liu 6 China Mobile 7 March 1, 2022 9 Deadline Based Deterministic Forwarding 10 draft-peng-detnet-deadline-based-forwarding-01 12 Abstract 14 This document describes a deterministic forwarding mechanism based on 15 deadline.
The mechanism enhances the strict priority scheduling 16 algorithm by dynamically adjusting the priority of a queue 17 according to its deadline attribute. 19 Status of This Memo 21 This Internet-Draft is submitted in full conformance with the 22 provisions of BCP 78 and BCP 79. 24 Internet-Drafts are working documents of the Internet Engineering 25 Task Force (IETF). Note that other groups may also distribute 26 working documents as Internet-Drafts. The list of current Internet- 27 Drafts is at https://datatracker.ietf.org/drafts/current/. 29 Internet-Drafts are draft documents valid for a maximum of six months 30 and may be updated, replaced, or obsoleted by other documents at any 31 time. It is inappropriate to use Internet-Drafts as reference 32 material or to cite them other than as "work in progress." 34 This Internet-Draft will expire on September 2, 2022. 36 Copyright Notice 38 Copyright (c) 2022 IETF Trust and the persons identified as the 39 document authors. All rights reserved. 41 This document is subject to BCP 78 and the IETF Trust's Legal 42 Provisions Relating to IETF Documents 43 (https://trustee.ietf.org/license-info) in effect on the date of 44 publication of this document. Please review these documents 45 carefully, as they describe your rights and restrictions with respect 46 to this document. Code Components extracted from this document must 47 include Simplified BSD License text as described in Section 4.e of 48 the Trust Legal Provisions and are provided without warranty as 49 described in the Simplified BSD License.
7 59 3.3. Get Existing Accumulated Actual Dwell Time . . . . . . . 7 60 3.4. Get Existing Accumulated Deadline Deviation . . . . . . . 8 61 4. Put Packets into the Deadline Queues . . . . . . . . . . . . 8 62 5. Traffic Regulation and Shaping . . . . . . . . . . . . . . . 11 63 6. Compatibility Considerations . . . . . . . . . . . . . . . . 12 64 7. Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . 13 65 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13 66 9. Security Considerations . . . . . . . . . . . . . . . . . . . 13 67 10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 13 68 11. Normative References . . . . . . . . . . . . . . . . . . . . 13 69 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 14 71 1. Introduction 73 [RFC8655] describes the architecture of deterministic networking and 74 defines the QoS goals of deterministic forwarding: minimum and 75 maximum end-to-end latency from source to destination, timely 76 delivery, and bounded jitter (packet delay variation); packet loss 77 ratio under various assumptions as to the operational states of the 78 nodes and links; and an upper bound on out-of-order packet delivery. In 79 order to achieve these goals, deterministic networks use resource 80 reservation, explicit routing, service protection and other means.
81 Resource reservation refers to the occupation of resources by service 82 traffic, exclusive or shared in a certain proportion, such as 83 dedicated physical links, link bandwidth, queue resources, etc. 84 Explicit routing means that the transmission path of a traffic flow in 85 the network is selected in advance to ensure the stability of the 86 route, so that it does not change with real-time changes of the network 87 topology; on this basis, the upper bound of end-to-end delay and 88 delay jitter can be accurately calculated. Service protection refers 89 to sending multiple copies of a service flow along multiple disjoint paths at 90 the same time to reduce the packet loss rate. In general, a 91 deterministic path is a strictly explicit path calculated by a 92 centralized controller, and resources are reserved on the nodes along 93 the path to meet the SLA requirements of deterministic services. 95 [I-D.stein-srtsn] describes an approach in which the controller calculates, in advance, the local 96 deadline time at each node for the traffic to be transmitted, 97 which is an absolute system time, forms a stack of these 98 local deadline times, and carries them in the forwarded data 99 packets. Each node forwards the packets according to its own local 100 deadline. [I-D.stein-srtsn] notes that a FIFO queue cannot be used 101 to realize this function, because the packets stored in the queue are 102 always first in first out, so a special data structure is recommended. 103 The packets in this data structure are automatically sorted 104 from most urgent to least urgent according to the deadlines 105 of the packets. However, it may be difficult to implement this 106 structure in hardware, and especially in a large network it may be 107 challenging to synchronize time.
109 Considering that the link transmission delay is generally a fixed 110 value, and that the focus is on the dwell time of packets inside each node, 111 an alternative approach is to remove the interference of 112 link delay from the deadline and avoid relying on time synchronization 113 between nodes. 115 This document describes an alternative packet scheduling scheme 116 for wide area networks. It suggests using only a single 117 deadline time to control the packet scheduling of all nodes along 118 the path. The single deadline time is an offset, which is based 119 on the time when the packet enters the node and represents the 120 maximum time the packet is allowed to stay inside the node. 121 However, if nodes differ significantly in their packet 122 forwarding and scheduling capabilities, more offset times may be needed. 124 1.1. Requirements Language 126 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 127 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 128 "OPTIONAL" in this document are to be interpreted as described in BCP 129 14 [RFC2119] [RFC8174] when, and only when, they appear in all 130 capitals, as shown here. 132 2. Deadline Queue 134 For nodes in the network, some queues with deadline time (also termed 135 as TTL) are introduced and maintained for each outgoing port. These 136 queues are called deadline queues. A deadline queue has the following 137 characteristics: 139 o The TTL of each deadline queue will decrease with the passage of 140 time. When it decreases to 0, the scheduling priority of the 141 queue will be set to the highest, and the scheduling opportunity 142 will be obtained immediately (note that there may be interference 143 delay caused by a large packet being sent by a low priority 144 queue).
The queue then stops accepting new packets; its 144 buffered packets are sent to the outgoing port immediately, 145 and the maximum duration allowed for sending is the 146 authorization time. In principle, all packets buffered in the 147 queue shall be sent within this authorization time. If the queue 148 is emptied while authorization time remains, other queues 149 with lower priority can be scheduled during the rest of this authorization 150 time. 153 o The scheduling engine can run a cycle timer to decrement the 154 TTL of all deadline queues; that is, whenever the timer expires, 155 the timer interval is subtracted from the TTL of every deadline 156 queue. Note that the time interval of the timer must be 157 greater than or equal to the authorization time of the deadline 158 queue. For simplicity, they are the same here. 160 o For a deadline queue whose TTL has been reduced to 0, after a new 161 round of timer expiry, the TTL returns to the maximum initial 162 value, the queue is again allowed to receive new packets, and it continues into the next 163 round of decreasing with the passage of time. 165 o A deadline queue whose TTL has not been reduced to 0 can receive 166 packets. In detail, when a node receives a packet to be 167 forwarded from a specific outgoing port, it first obtains the 168 expected deadline of the packet, and then puts the packet into the 169 deadline queue of that outgoing port with the relevant TTL value 170 for transmission. 172 o For a deadline queue whose TTL is not reduced to 0, its scheduling 173 priority cannot be set to the highest value. A local policy may 174 be used to control the transmission of buffered packets. There 175 are two options: the first option allows the queue to be involved in 176 scheduling, also termed the in-time policy; the second option does not, 177 also termed the on-time policy.
The in-time policy is 178 applicable to services requiring low delay, and the on-time 179 policy is applicable to services requiring low delay jitter. When instantiating the 180 deadline scheduling algorithm, an implementation can support 181 either option, or both. 183 o At the beginning, all deadline queues have different TTL values, 184 i.e., staggered from each other, so that the TTL of only one 185 deadline queue decreases to 0 at any given time. 187 The above authorization time, timer interval and maximum initial TTL 188 value shall be specified according to the actual capacity of the 189 node. In fact, each node in the network can independently use 190 different timer intervals for different outgoing ports. The general 191 principle is that if an outgoing port has a large bandwidth (such as 192 100 Gbps), the timer interval (and the authorization time of the 193 deadline queue) can be small (such as 1us), because a link with 194 large bandwidth can send the required number of bits even within a small 195 time interval; if an outgoing port has a small bandwidth (e.g. 1 196 Gbps), the timer interval (and the authorization time of the deadline 197 queue) should be larger (e.g. 10us), because a link with small 198 bandwidth needs a larger time interval to send the required number of 199 bits. 201 A specific example of the deadline queue is depicted in Figure 1.
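The queue rotation and packet admission rules described above can be sketched in a simplified software model. This is an illustrative sketch only: the class and method names are not part of this document, and the parameter values (seven queues, 10us interval, 60us maximum TTL) follow the example in Figure 1.

```python
# Illustrative model of a deadline queue group (not a normative
# implementation).  Values follow the example of Figure 1.

INTERVAL_US = 10   # cycle timer interval == authorization time
MAX_TTL_US = 60    # maximum initial TTL

class DeadlineQueueGroup:
    def __init__(self):
        # TTLs are staggered so exactly one queue is at 0 at any time.
        self.ttls = [60, 50, 40, 30, 20, 10, 0]
        self.buffers = [[] for _ in self.ttls]

    def on_timer_expiry(self):
        """Subtract the timer interval from every queue's TTL.  The
        queue that was at 0 wraps back to the maximum initial TTL; the
        queue that reaches 0 now has the highest scheduling priority."""
        for i, ttl in enumerate(self.ttls):
            self.ttls[i] = MAX_TTL_US if ttl == 0 else ttl - INTERVAL_US

    def enqueue(self, packet, allowable_queuing_delay_us):
        """Put a packet into the queue whose TTL is closest to its
        allowable queuing delay.  A queue at TTL 0 accepts no packets."""
        candidates = [i for i, t in enumerate(self.ttls) if t > 0]
        best = min(candidates,
                   key=lambda i: abs(self.ttls[i] - allowable_queuing_delay_us))
        self.buffers[best].append(packet)
```

For example, after one timer expiry the queue that started at TTL=10us reaches 0 and gains the highest priority, while the queue that was at 0 returns to 60us, reproducing the rotation shown in Figure 1.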
203 +------------------------------+ +------------------------------+ 204 | Deadline Queue Group: | | Deadline Queue Group: | 205 | queue-1(TTL=60us) ###### | | queue-1(TTL=50us) ###### | 206 | queue-2(TTL=50us) ###### | | queue-2(TTL=40us) ###### | 207 | queue-3(TTL=40us) ###### | | queue-3(TTL=30us) ###### | 208 | queue-4(TTL=30us) ###### | | queue-4(TTL=20us) ###### | 209 | queue-5(TTL=20us) ###### | | queue-5(TTL=10us) ###### | 210 | queue-6(TTL=10us) ###### | | queue-6(TTL=0us) ###### | 211 | queue-7(TTL=0us) ###### | | queue-7(TTL=60us) ###### | 212 +------------------------------+ +------------------------------+ 214 +------------------------------+ +------------------------------+ 215 | Non-deadline Queue Group: | | Non-deadline Queue Group: | 216 | queue-8 ############ | | queue-8 ############ | 217 | queue-9 ############ | | queue-9 ############ | 218 | queue-10 ############ | | queue-10 ############ | 219 | ... ... | | ... ... | 220 +------------------------------+ +------------------------------+ 222 -o----------------------------------o--------------------------------> 223 T0 T0+10us time 225 Figure 1: Example of Deadline Queues for an Outgoing Port 227 In this example, the timer interval for the deadline queue group is 228 configured to 10us. Queue-1 ~ queue-7 are deadline queues, and the other 229 queues are traditional non-deadline queues. Each deadline queue has 230 its TTL attribute. The maximum initial TTL is 60us. At the initial 231 time (T0), the TTLs of all deadline queues are staggered from each 232 other. For example, the TTL of queue-1 is 60us, the TTL of queue-2 233 is 50us, the TTL of queue-3 is 40us, and so on. At this time, only 234 the TTL of queue-7 is 0, which gives it the highest scheduling priority. 236 Suppose the scheduling engine initiates a cycle timer with a time 237 interval of 10us. After each timer timeout, the timer interval will 238 be subtracted from the TTL of all deadline queues.
As shown in the 239 figure, at T0 + 10us the timer times out: the TTL of queue-1 becomes 240 50us, the TTL of queue-2 becomes 40us, the TTL of queue-3 becomes 241 30us, etc. At this time, the TTL of queue-7 returns to the maximum 242 initial TTL of 60us and is no longer set to the highest scheduling 243 priority; the TTL of queue-6 becomes 0, giving it the highest 244 scheduling priority. 246 For simplicity, set the authorization time of the deadline queue to 247 be consistent with the time interval of the cycle timer, which is 248 also 10us. When the TTL of a deadline queue becomes 0, it has a time 249 limit of 10us to send the packets in the queue. During this period, it 250 is prohibited from receiving new packets (in fact, there can be no 251 new packets with a deadline of 0). After the 10us elapses, the 252 cycle timer will time out again and the TTL of another deadline queue 253 will reach 0. It is also feasible to set the authorization time 254 to be less than the cycle timer interval. 256 If the deadline queue with the highest priority finishes sending its 257 packets before the authorization time expires, the scheduling engine 258 will visit other queues with the next highest priority during the 259 rest of the authorization time. 261 Note that for each deadline queue with a specific TTL, if both in-time 262 and on-time policies are supported, it may include two sub-queues, 263 one used to buffer in-time packets and the other used to buffer 264 on-time packets. 266 3. Get Deadline Information of Packets 268 3.1. Get Planned Deadline 270 The planned deadline of the packet is an offset time, which is based 271 on the time when the packet enters the node and represents the 272 maximum time allowed for the packet to stay inside the node. There 273 are many ways to obtain the planned deadline of the packet. 275 o Carried in the packet.
The ingress PE node, when encapsulating 276 the deterministic service flow, can explicitly insert the planned 277 deadline into the packet according to the SLA. An intermediate node, 278 after receiving the packet, can directly obtain the planned 279 deadline from the packet. Generally, only a single planned 280 deadline needs to be carried in the packet, which is applicable to 281 all nodes along the path; alternatively, a stack composed of multiple 282 deadlines, one for each node, can be inserted. [I-D.peng-6man-deadline-option] 283 defines a method to carry the planned deadline in IPv6 packets. 285 o Included in the FIB entry. Each node in the network can maintain 286 deterministic FIB entries. After the packet hits a 287 deterministic FIB entry, the planned deadline is obtained from the 288 forwarding information contained in the FIB entry. 290 o Included in the policy entry. Configure local policies on each 291 node in the network, and then set the corresponding planned 292 deadline according to matched specific characteristics of the 293 packet, such as the 5-tuple. 295 A deterministic delay path based on deadline queue scheduling has a 296 deterministic end-to-end delay requirement. This requirement 297 includes two parts: the cumulative node 298 delay and the cumulative link transmission delay. The 299 cumulative link transmission delay is subtracted from the end-to-end 300 delay requirement to obtain the cumulative node delay. A simple 301 method is to share the cumulative node delay equally among the 302 intermediate nodes along the path to obtain the planned deadline of 303 each node. 305 3.2. Get Existing Cumulative Planned Deadline 307 The existing cumulative planned deadline of the packet refers to the 308 sum of the planned deadlines of all upstream nodes before the packet 309 is transmitted to this node. This information needs to be carried in 310 the packet.
Every time the packet passes through a node, the node 311 accumulates its corresponding planned deadline into the existing 312 cumulative planned deadline field in the packet. 313 [I-D.peng-6man-deadline-option] defines a method to carry the existing 314 cumulative planned deadline in IPv6 packets. 316 The setting of "existing cumulative planned deadline" in the packet 317 needs to be friendly to the chip for reading and writing. For 318 example, it should be designed as a fixed position in the packet. 319 The chip may support flexible configuration for that position. 321 3.3. Get Existing Accumulated Actual Dwell Time 323 The existing cumulative actual dwell time of the packet refers to 324 the sum of the actual dwell times at all upstream nodes before the 325 packet is transmitted to this node. This information needs to be 326 carried in the packet. Every time the packet passes through a node, 327 the node accumulates its corresponding actual dwell time into the 328 existing cumulative actual dwell time field in the packet. 329 [I-D.peng-6man-deadline-option] defines a method to carry the existing 330 cumulative actual dwell time in IPv6 packets. 332 The setting of "existing cumulative actual dwell time" in the packet 333 needs to be friendly to the chip for reading and writing. For 334 example, it should be designed as a fixed position in the packet. 335 The chip may support flexible configuration for that position. 337 Other methods are also possible, for example, carrying the 338 absolute system times of reception and transmission in the packet to 339 compute the actual dwell time indirectly, but that has low 340 encapsulation efficiency and requires strict time synchronization 341 between nodes.
343 A possible method to get the actual dwell time in the node is to 344 record the receiving and sending times of the packet in an 345 auxiliary data structure of the packet (note that this is not the 346 packet itself), and to calculate the actual dwell time of the packet 347 in the node from these two times. 349 3.4. Get Existing Accumulated Deadline Deviation 351 The existing accumulated deadline deviation equals the existing 352 cumulative planned deadline minus the existing cumulative actual dwell 353 time. This value can be positive or negative. 355 If the existing cumulative planned deadline and the existing 356 cumulative actual dwell time are carried in the packet, it is not 357 necessary to carry the existing accumulated deadline deviation. 358 Otherwise, it is necessary. The advantage of the former is that it 359 can be applied to more scenarios. 361 4. Put Packets into the Deadline Queues 363 The lifetime of the packet inside the node mainly includes two parts: 364 the first part is to look up the forwarding table when the packet is 365 received from the incoming port (or generated by the control plane) 366 and deliver the packet to the line card where the outgoing port is 367 located; the second part is to store the packet in the queue of the 368 outgoing port for transmission. These two parts contribute to the 369 actual dwell time of the packet in the node. The former can be 370 called forwarding delay and the latter can be called queuing delay. 371 The forwarding delay is related to the chip implementation and is 372 generally constant; the queuing delay is variable.
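The dwell-time and deviation accounting described in Sections 3.2 through 3.4 can be sketched as follows. This is a minimal illustration under stated assumptions: the field names (e.g. cum_planned_deadline_us) and the use of a local monotonic clock are illustrative choices, not defined by this document.

```python
import time

# Illustrative sketch of per-node deadline accounting.  Field names are
# assumptions; only a local clock is used, so no time synchronization
# between nodes is required.

class PacketMeta:
    """Auxiliary per-packet data kept inside the node (not on the wire)."""
    def __init__(self):
        self.rx_time_us = None

def on_receive(meta):
    # Record the reception time in the auxiliary data structure.
    meta.rx_time_us = time.monotonic_ns() // 1000

def on_transmit(pkt, meta, planned_deadline_us):
    # Actual dwell time R = sending time - receiving time.
    dwell_us = time.monotonic_ns() // 1000 - meta.rx_time_us
    # Accumulate both fields carried in the packet; the next hop derives
    # the deviation E from them.
    pkt["cum_planned_deadline_us"] += planned_deadline_us
    pkt["cum_actual_dwell_us"] += dwell_us

def deadline_deviation(pkt):
    # E = cumulative planned deadline - cumulative actual dwell time.
    return pkt["cum_planned_deadline_us"] - pkt["cum_actual_dwell_us"]
```

Carrying the two cumulative fields rather than the deviation alone matches the trade-off noted in Section 3.4: the deviation can always be derived, and more scenarios are supported.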
374 When a node receives a packet from an upstream node, it first gets 375 the existing accumulated deadline deviation and adds it to the 376 planned deadline of the packet at this node to obtain the deadline 377 adjustment value. It then deducts the forwarding delay of the packet 378 in the node from the deadline adjustment value to obtain the 379 allowable queuing delay, and puts the packet into the 380 deadline queue whose TTL equals the allowable queuing delay 381 for sending. If the calculated allowable queuing delay 382 is not exactly equal to the TTL of any queue, the packet enters the 383 queue with the closest TTL. 385 Under normal circumstances, if each hop strictly controls the 386 scheduling of the packet according to its planned deadline, the 387 actual dwell time of the packet will be very close to the planned 388 deadline, and the absolute value of the existing accumulated deadline 389 deviation will be very small. 391 More generally, assume that the local node on a deterministic path is 392 i, the upstream nodes are 1 to i-1, and the downstream nodes are i+1 393 onward; the planned deadline is D, the actual dwell time is R, the 394 deadline adjustment value is M, the forwarding delay inside the node 395 is F, the existing accumulated deadline deviation is E, and the 396 allowable queuing delay is Q. Then the allowable queuing delay (Q) of 397 the packet on this node i is calculated as follows: 399 E(i-1) = D(1) + D(2) + ... + D(i-1) - R(1) - R(2) - ... - R(i-1) 401 M(i) = D(i) + E(i-1) 403 Q(i) = M(i) - F(i) 405 Consider some extreme cases. For example, many upstream nodes may adopt 406 the in-time policy and send packets quickly. Packets hardly need to 407 queue in these nodes and experience only the forwarding delay. 408 Then the existing accumulated deadline deviation (E) may be a very 409 large positive value, resulting in a large allowable queuing delay 410 (Q).
If this value exceeds the maximum initial TTL of the deadline 411 queues maintained by the node, the allowable queuing delay (Q) should 412 be reduced to the maximum initial TTL. 414 For another example, if some upstream nodes behave abnormally and have a 415 very large actual dwell time (R), the existing accumulated deadline 416 deviation (E) may be negative, so that the allowable 417 queuing delay (Q) may be less than or equal to 0, i.e., smaller 418 than the cycle timer interval of the deadline queues maintained by the 419 node; in that case the allowable queuing delay (Q) should be raised to the 420 cycle timer interval value. 422 Figure 2 depicts an example of packets buffered to the deadline 423 queues. 425 packet-2 packet-1 +------------------------------+ 426 +--------+ +--------+ | Deadline Queue Group: | 427 | D=20us | | D=30us | | queue-1(TTL=60us) ###### | 428 | E=10us | | E=-10us| +--+ | queue-2(TTL=50us) ###### | 429 +--------+ +--------+ |\/| | queue-3(TTL=40us) ###### | 430 ------incoming port-1------> |/\| | queue-4(TTL=30us) ###### | 431 |\/| | queue-5(TTL=20us) ###### | 432 packet-4 packet-3 |/\| | queue-6(TTL=10us) ###### | 433 +--------+ +--------+ |\/| | queue-7(TTL=0us) ###### | 434 | | | D=30us | |/\| +------------------------------+ 435 +--------+ | E=-30us| |\/| 436 +--------+ |/\| 437 ------incoming port-2------> |\/| +------------------------------+ 438 |/\| | Non-deadline Queue Group: | 439 packet-6 packet-5 |\/| | queue-8 ############ | 440 +--------+ +--------+ |/\| | queue-9 ############ | 441 | | | D=40us | |\/| | queue-10 ############ | 442 +--------+ | E=40us | |/\| | ... ...
| 443 +--------+ +--+ +------------------------------+ 444 ------incoming port-2------> ---------outgoing port----------> 446 -o----------------------------------o--------------------------------> 447 receiving-time base +F time 449 Figure 2: Time Sensitive Packets Buffered to Deadline Queue 451 As shown in Figure 2, the node successively receives six packets from 452 three incoming ports, among which packets 1, 2, 3 and 5 carry 453 corresponding deadline information, while packets 4 and 6 are 454 traditional packets. These packets need to be forwarded to the same 455 outgoing port according to the forwarding table entries. It is 456 assumed that they arrive at the line card where the outgoing port is 457 located at almost the same time after the forwarding delay in the 458 node (F = 10us). At this time, the queue status of the outgoing port 459 is shown in the figure. Then: 461 o The allowable queuing delay (Q) of packet 1 in the node is 30 - 10 462 - 10 = 10us, and it will be put into deadline queue-6 (its TTL 463 is 10us). 465 o The allowable queuing delay (Q) of packet 2 in the node is 20 + 10 466 - 10 = 20us, and it will be put into deadline queue-5 (its TTL 467 is 20us). 469 o The allowable queuing delay (Q) of packet 3 in the node is 30 - 30 470 - 10 = -10us; it will be raised to the minimum positive value of 471 10us and then put into deadline queue-6 (its TTL is 10us). Note 472 that the minimum positive value is the timer interval, a 473 local per-port parameter based on the port's bandwidth. 475 o The allowable queuing delay (Q) of packet 5 in the node is 40 + 40 476 - 10 = 70us; it will be reduced to the maximum positive value of 477 60us and then put into deadline queue-1 (its TTL is 60us). Note 478 that the maximum positive value is an empirical value that can be 479 configured according to the maximum delay requirements of 480 deterministic services. 482 o Packets 4 and 6 will be put into the non-deadline queues in the 483 traditional way.
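The allowable-queuing-delay computation of Section 4, including the clamping for the two extreme cases, can be sketched as follows. This is an illustrative model, not a normative implementation; the parameter values follow the example of Figure 2.

```python
# Illustrative sketch of Q(i) = D(i) + E(i-1) - F(i), clamped to the
# range [cycle timer interval, maximum initial TTL].  Values follow the
# Figure 2 example.

TIMER_INTERVAL_US = 10    # minimum positive Q (local per-port parameter)
MAX_INITIAL_TTL_US = 60   # maximum Q (configured empirical value)

def allowable_queuing_delay(planned_deadline_us, deviation_us, fwd_delay_us):
    """Return the allowable queuing delay Q for the local node, given the
    planned deadline D, existing accumulated deadline deviation E, and
    forwarding delay F."""
    q = planned_deadline_us + deviation_us - fwd_delay_us
    # Clamp: a very positive E must not exceed the maximum initial TTL,
    # and a very negative E must not drop below the timer interval.
    return max(TIMER_INTERVAL_US, min(q, MAX_INITIAL_TTL_US))

# Reproducing the worked example above (D, E, F in microseconds):
# packet 1: allowable_queuing_delay(30, -10, 10) -> 10
# packet 2: allowable_queuing_delay(20,  10, 10) -> 20
# packet 3: allowable_queuing_delay(30, -30, 10) -> 10 (raised from -10)
# packet 5: allowable_queuing_delay(40,  40, 10) -> 60 (reduced from 70)
```

The packet is then placed into the deadline queue whose TTL is closest to the returned Q, as described in Section 4.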
485 5. Traffic Regulation and Shaping 487 On the ingress PE node, traffic regulation is performed on the UNI port 488 so that the service traffic does not exceed its reserved bandwidth. 489 Suppose there are N sources, and the packets they send carry the same 490 deadline. These packets may arrive at an intermediate node at the 491 same time and be put into the same deadline queue. If the reserved 492 bandwidth of the deadline queue at each source is M0, and the reserved 493 bandwidth of the deadline queue at an intermediate node is Mx, then 494 N * M0 <= Mx must hold. This means that a larger bandwidth is 495 required on the intermediate node to send more bits in the same time 496 duration, i.e., a larger buffer size. In particular, packets with 497 different deadlines sent by a single ingress PE node at different 498 times may be put into the same deadline queue by an intermediate 499 node and sent within the same authorization time. To 500 mitigate this impact, it is not recommended to apply the on-time 501 policy to packets with large deadline values. 503 On the ingress PE node, traffic shaping is performed on the NNI port. 504 Multiple continuous packets of a specific service flow are stored 505 in the deadline queue with the corresponding remaining time according to 506 the planned deadline of the service flow. Note that these packets 507 are not all stored in the same queue over time. The number of bits that 508 can be stored in one queue equals the reserved bandwidth * 509 authorization time; however, at least one whole packet shall be 510 loaded.
For example, if the allowable queuing delay is 20us, then 511 within the current timer interval, the first sequence of packets 512 will be put into the deadline queue currently at TTL = 20us until the 513 reserved bandwidth limit is reached; then, within the next timer 514 interval, the next sequence of packets will be put into the queue then at 515 TTL = 20us until the reserved bandwidth limit is reached; and 516 so on, until all the service bits are loaded. 518 Figure 3 depicts an example of deadline based traffic shaping on the 519 ingress PE node. It is assumed that the packets loaded in each timer 520 interval do not exceed the reserved bandwidth of the service. 522 1st packet 523 | 524 v 525 +-+ +-+ +----+ +-+ +--+ +------+ 526 |1| |2| | 3 | |4| |5 | | 6 | <= packet sequence 527 +-+ +-+ +----+ +-+ +--+ +------+ 528 | | | | | | 529 ~+F ~+F ~+F ~+F ~+F ~+F 530 | | | | | | 531 UNI v v v v v v 532 ingress PE -+--------+--------+--------+--------+--------+--------+----> 533 NNI | | | | | | | time 534 |interval|interval|interval|interval|interval|interval| 535 v v v v 536 1,2 in 3 in 4,5 in 6 in 537 Buffer-A Buffer-B Buffer-C Buffer-D 538 (TTL=Q) (TTL=Q) (TTL=Q) (TTL=Q) 539 | | | | 540 ~+Q ~+Q ~+Q ~+Q 541 | | | | 542 sending v v v v 543 +-+ +-+ +----+ +-+ +--+ +------+ 544 |1| |2| | 3 | |4| |5 | | 6 | 545 +-+ +-+ +----+ +-+ +--+ +------+ 547 Figure 3: Deadline Based Traffic Shaping 549 6. Compatibility Considerations 551 For a particular path, if only some nodes on the path are upgraded to 552 support the deadline mechanism defined in this document, the end-to-end 553 deterministic delay/jitter target will only be partially achieved. 554 Legacy devices may adopt the existing priority based scheduling 555 mechanism and ignore any deadline information in the 556 packet, so the intra-node delay they produce cannot be 557 perceived by the adjacent upgraded nodes. The more upgraded nodes 558 included in the path, the closer the path comes to the delay/jitter target.
560 However, upgrading only a few key nodes to support the deadline 561 mechanism is low-cost and can still meet services with relatively 562 loose timing requirements. 564 7. Benefits 566 The mechanism described in this document has the following benefits: 568 o Time synchronization is not required between network nodes. Each 569 node can flexibly set the authorization time length of the 570 deadline queue according to its own outgoing port bandwidth. 572 o Being based on packet multiplexing, it is an enhancement of the PQ 573 (Priority Queuing) scheduling algorithm and is friendly to incremental 574 upgrades of packet switching networks. All nodes in the network can 575 independently use cycle timers with different timeout intervals to 576 traverse the deadline queues. 577 o A packet can control its expected dwell time in a node. A 578 single set of deadline queues supports multiple levels of dwell 579 time. 581 o For the in-time policy, the end-to-end delay is H*(F~D), i.e., 582 between H*F and H*D where H is the number of hops, and the jitter is 583 H*Q; for the on-time policy, the end-to-end delay is H*D, and the 584 jitter is just a single authorization time. 585 8. IANA Considerations 587 This document makes no request of IANA. 589 9. Security Considerations 591 TBD 593 10. Acknowledgements 595 TBD 597 11. Normative References 599 [I-D.peng-6man-deadline-option] 600 Peng, S. and B. Tan, "Deadline Option", draft-peng-6man- 601 deadline-option-00 (work in progress), January 2022. 603 [I-D.stein-srtsn] 604 Stein, Y., "Segment Routed Time Sensitive Networking", 605 draft-stein-srtsn-01 (work in progress), August 2021. 607 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 608 Requirement Levels", BCP 14, RFC 2119, 609 DOI 10.17487/RFC2119, March 1997, 610 <https://www.rfc-editor.org/info/rfc2119>. 612 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 613 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 614 May 2017, <https://www.rfc-editor.org/info/rfc8174>. 616 [RFC8655] Finn, N., Thubert, P., Varga, B., and J. Farkas, 617 "Deterministic Networking Architecture", RFC 8655, 618 DOI 10.17487/RFC8655, October 2019, 619 <https://www.rfc-editor.org/info/rfc8655>.
621 Authors' Addresses 623 Shaofu Peng 624 ZTE Corporation 625 China 627 Email: peng.shaofu@zte.com.cn 629 Bin Tan 630 ZTE Corporation 631 China 633 Email: tan.bin@zte.com.cn 635 Peng Liu 636 China Mobile 637 China 639 Email: liupengyjy@chinamobile.com