MPLS Working Group                       Vishal Sharma (Metanoia, Inc.)
Informational Track                 Fiffi Hellstrand (Nortel Networks)
Expires: January 2003                                         (Editors)

                                                              July 2002

                   Framework for MPLS-based Recovery

Status of this memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Abstract

   Multi-protocol label switching (MPLS) integrates the label swapping
   forwarding paradigm with network layer routing. To deliver reliable
   service, MPLS requires a set of procedures to provide protection of
   the traffic carried on different paths. This requires that the label
   switched routers (LSRs) support fault detection, fault notification,
   and fault recovery mechanisms, and that MPLS signaling support the
   configuration of recovery. With these objectives in mind, this
   document specifies a framework for MPLS-based recovery.

Table of Contents

   1.      Introduction
   1.1.    Background
   1.2.    Motivation for MPLS-Based Recovery
   1.3.    Objectives/Goals
   2.      Contributing Authors
   3.      Overview
   3.1.    Recovery Models
   3.1.1   Rerouting
   3.1.2   Protection Switching
   3.2.
   The Recovery Cycles
   3.2.1   MPLS Recovery Cycle Model
   3.2.2   MPLS Reversion Cycle Model
   3.2.3   Dynamic Re-routing Cycle Model
   3.3.    Definitions and Terminology
   3.3.1   General Recovery Terminology
   3.3.2   Failure Terminology
   3.4.    Abbreviations
   4.      MPLS-based Recovery Principles
   4.1.    Configuration of Recovery
   4.2.    Initiation of Path Setup
   4.3.    Initiation of Resource Allocation
   4.4.    Scope of Recovery
   4.4.1   Topology
   4.4.1.1 Local Repair
   4.4.1.2 Global Repair
   4.4.1.3 Alternate Egress Repair
   4.4.1.4 Multi-Layer Repair
   4.4.1.5 Concatenated Protection Domains
   4.4.2   Path Mapping
   4.4.3   Bypass Tunnels
   4.4.4   Recovery Granularity
   4.4.4.1 Selective Traffic Recovery
   4.4.4.2 Bundling
   4.4.5   Recovery Path Resource Use
   4.5.    Fault Detection
   4.6.    Fault Notification
   4.7.
   Switch-Over Operation
   4.7.1   Recovery Trigger
   4.7.2   Recovery Action
   4.8.    Post Recovery Operation
   4.8.1   Fixed Protection Counterparts
   4.8.1.1 Revertive Mode
   4.8.1.2 Non-revertive Mode
   4.8.2   Dynamic Protection Counterparts
   4.8.3   Restoration and Notification
   4.8.4   Reverting to Preferred Path (or Controlled Rearrangement)
   4.9.    Performance
   5.      MPLS Recovery Features
   6.      Comparison Criteria
   7.      Security Considerations
   8.      Intellectual Property Considerations
   9.      Acknowledgements
   10.     Editors' Addresses
   11.     References

1. Introduction

   This memo describes a framework for MPLS-based recovery. We provide
   a detailed taxonomy of recovery terminology, and discuss the
   motivation for, the objectives of, and the requirements for MPLS-
   based recovery. We outline principles for MPLS-based recovery, and
   also provide comparison criteria that may serve as a basis for
   comparing and evaluating different recovery schemes.

   At points in the document, we provide some thoughts about the
   operation or viability of certain recovery objectives. These should
   be viewed as the opinions of the authors, and not the consolidated
   views of the IETF.

1.1.
Background

   Network routing deployed today is focused primarily on connectivity,
   and typically supports only one class of service, the best effort
   class. Multi-protocol label switching [1], on the other hand, by
   integrating forwarding based on label-swapping of a link local label
   with network layer routing, allows flexibility in the delivery of
   new routing services. MPLS allows for using such media specific
   forwarding mechanisms as label swapping. This enables some
   sophisticated features such as quality-of-service (QoS) and traffic
   engineering [2] to be implemented more effectively. An important
   component of providing QoS, however, is the ability to transport
   data reliably and efficiently. Although the current routing
   algorithms are robust and survivable, the amount of time they take
   to recover from a fault can be significant, on the order of several
   seconds or minutes, causing disruption of service for some
   applications in the interim. This is unacceptable in situations
   where the aim is to provide a highly reliable service, with recovery
   times on the order of seconds down to 10's of milliseconds.

   MPLS recovery may be motivated by the notion that there are
   limitations to improving the recovery times of current routing
   algorithms. Additional improvement can be obtained by augmenting
   these algorithms with MPLS recovery mechanisms [3]. Since MPLS is a
   possible technology of choice in future IP-based transport networks,
   it is useful that MPLS be able to provide protection and restoration
   of traffic. MPLS may facilitate the convergence of network
   functionality on a common control and management plane. Further, a
   protection priority could be used as a differentiating mechanism for
   premium services that require high reliability. The remainder of
   this document provides a framework for MPLS-based recovery.
   It is focused at a conceptual level and is meant to address
   motivation, objectives and requirements. Issues of mechanism,
   policy, routing plans and characteristics of traffic carried by
   recovery paths are beyond the scope of this document.

1.2. Motivation for MPLS-Based Recovery

   MPLS based protection of traffic (called MPLS-based Recovery) is
   useful for a number of reasons. The most important is its ability to
   increase network reliability by enabling a faster response to faults
   than is possible with traditional Layer 3 (or IP layer) approaches
   alone, while still providing the visibility of the network afforded
   by Layer 3. Furthermore, a protection mechanism using MPLS could
   enable IP traffic to be put directly over WDM optical channels and
   provide a recovery option without an intervening SONET layer. This
   would facilitate the construction of IP-over-WDM networks that
   require a fast recovery capability.

   The need for MPLS-based recovery arises because of the following:

   I. Layer 3 or IP rerouting may be too slow for a core MPLS network
   that needs to support recovery times that are smaller than the
   convergence times of IP routing protocols.

   II. Layer 0 (for example, optical layer) or Layer 1 (for example,
   SONET) mechanisms may make wasteful use of resources.

   III. The granularity at which the lower layers may be able to
   protect traffic may be too coarse for traffic that is switched using
   MPLS-based mechanisms.

   IV. Layer 0 or Layer 1 mechanisms may have no visibility into higher
   layer operations. Thus, while they may provide, for example, link
   protection, they cannot easily provide node protection or protection
   of traffic transported at Layer 3. Further, this may prevent the
   lower layers from providing restoration based on the traffic's
   needs.
   For example, fast restoration for traffic that needs it, and slower
   restoration (with possibly more optimal use of resources) for
   traffic that does not require fast restoration. In networks where
   the latter class of traffic is dominant, providing fast restoration
   to all classes of traffic may not be cost effective from a service
   provider's perspective.

   V. MPLS has desirable attributes when applied to the purpose of
   recovery for connectionless networks. Specifically, an LSP is
   source routed, so a forwarding path for recovery can be "pinned" and
   is not affected by transient instability in SPF routing brought on
   by failure scenarios.

   VI. Establishing interoperability of protection mechanisms between
   routers/LSRs from different vendors in IP or MPLS networks is
   desired to enable recovery mechanisms to work in a multivendor
   environment, and to enable the transition of certain protected
   services to an MPLS core.

1.3. Objectives/Goals

   The following are some important goals for MPLS-based recovery.

   Ia. MPLS-based recovery mechanisms may be subject to the traffic
   engineering goal of optimal use of resources.

   Ib. MPLS-based recovery mechanisms should aim to facilitate
   restoration times that are sufficiently fast for the end user
   application, that is, that better match the end-user's application
   requirements. In some cases, this may be as short as 10s of
   milliseconds.

   We observe that Ia and Ib are conflicting objectives, and a trade-
   off exists between them. The optimal choice depends on the end-user
   application's sensitivity to restoration time and the cost impact of
   introducing restoration in the network, as well as the end-user
   application's sensitivity to cost.

   II. MPLS-based recovery should aim to maximize network reliability
   and availability.
   MPLS-based recovery of traffic should aim to minimize the number of
   single points of failure in the MPLS protected domain.

   III. MPLS-based recovery should aim to enhance the reliability of
   the protected traffic while minimally or predictably degrading the
   traffic carried by the diverted resources.

   IV. MPLS-based recovery techniques should aim to be applicable for
   protection of traffic at various granularities. For example, it
   should be possible to specify MPLS-based recovery for a portion of
   the traffic on an individual path, for all traffic on an individual
   path, or for all traffic on a group of paths. Note that a path is
   used as a general term and includes the notion of a link, IP route
   or LSP.

   V. MPLS-based recovery techniques may be applicable for an entire
   end-to-end path or for segments of an end-to-end path.

   VI. MPLS-based recovery mechanisms should aim to take into
   consideration the recovery actions of lower layers. MPLS-based
   mechanisms should not trigger lower layer protection switching.

   VII. MPLS-based recovery mechanisms should aim to minimize the loss
   of data and packet reordering during recovery operations. (The
   current MPLS specification itself has no explicit requirement on
   reordering.)

   VIII. MPLS-based recovery mechanisms should aim to minimize the
   state overhead incurred for each recovery path maintained.

   IX. MPLS-based recovery mechanisms should aim to preserve the
   constraints on traffic after switchover, if desired. That is, if
   desired, the recovery path should meet the resource requirements of,
   and achieve the same performance characteristics as, the working
   path.

   We observe that some of the above are conflicting goals, and real
   deployment will often involve engineering compromises based on a
   variety of factors such as cost, end-user application requirements,
   network efficiency, and revenue considerations.
   Thus, these goals are subject to tradeoffs based on the above
   considerations.

2. Contributing Authors

   This document was the collective work of several individuals over a
   period of two and a half years. The text and content of this
   document was contributed by the editors and the co-authors listed
   below. (The contact information for the editors appears in Section
   10, and is not repeated below.)

   Ben Mack-Crane                     Srinivas Makam
   Tellabs Operations, Inc.           Eshernet, Inc.
   4951 Indiana Avenue                1712 Ada Ct.
   Lisle, IL 60532                    Naperville, IL 60540
   Phone: (630) 512-7255              Phone: (630) 308-3213
   Ben.Mack-Crane@tellabs.com         Smakam60540@yahoo.com

   Ken Owens                          Changcheng Huang
   Erlang Technology, Inc.            Carleton University
   345 Marshall Ave., Suite 300       Minto Center, Rm. 3082
   St. Louis, MO 63119                1125 Colonial By Drive
   Phone: (314) 918-1579              Ottawa, Ont. K1S 5B6 Canada
   keno@erlangtech.com                Phone: (613) 520-2600 x2477
                                      Changcheng.Huang@sce.carleton.ca

   Jon Weil                           Brad Cain
   Nortel Networks                    Storigen Systems
   Harlow Laboratories, London Road   650 Suffolk Street
   Harlow, Essex CM17 9NA, UK         Lowell, MA 01854
   Phone: +44 (0)1279 403935          Phone: (978) 323-4454
   jonweil@nortelnetworks.com         bcain@storigen.com

   Loa Andersson                      Bilel Jamoussi
   Utfors AB                          Nortel Networks
   Råsundavägen 12, Box 525           3 Federal Street, BL3-03
   169 29 Solna, Sweden               Billerica, MA 01821, USA
   Phone: +46 8 5270 5038             Phone: (978) 288-4506
   loa.andersson@utfors.se            jamoussi@nortelnetworks.com

   Angela Chiu                        Seyhan Civanlar
   Celion Networks, Inc.              Lemur Networks, Inc.
   One Shiela Drive, Suite 2          135 West 20th Street, 5th Floor
   Tinton Falls, NJ 07724             New York, NY 10011
   Phone: (732) 345-3441              Phone: (212) 367-7676
   angela.chiu@celion.com             scivanlar@lemurnetworks.com

3. Overview

   There are several options for providing protection of traffic.
   The most generic requirement is the specification of whether
   recovery should be via Layer 3 (or IP) rerouting or via MPLS
   protection switching or rerouting actions.

   Generally, network operators aim to provide the fastest and best
   protection mechanism that can be provided at a reasonable cost. The
   higher the level of protection, the more resources consumed.
   Therefore, it is expected that network operators will offer a
   spectrum of service levels. MPLS-based recovery should give the
   flexibility to select the recovery mechanism, choose the granularity
   at which traffic is protected, and also choose the specific types of
   traffic that are protected, in order to give operators more control
   over that tradeoff. With MPLS-based recovery, it is possible to
   provide different levels of protection for different classes of
   service, based on their service requirements. For example, using
   approaches outlined below, a Virtual Leased Line (VLL) service or
   real-time applications like Voice over IP (VoIP) may be supported
   using link/node protection together with pre-established, pre-
   reserved path protection. Best effort traffic, on the other hand,
   may use path protection that is established on demand, or may simply
   rely on IP re-route or higher layer recovery mechanisms. As another
   example of their range of application, MPLS-based recovery
   strategies may be used to protect traffic not originally flowing on
   label switched paths, such as IP traffic that is normally routed
   hop-by-hop, as well as traffic forwarded on label switched paths.

3.1. Recovery Models

   There are two basic models for path recovery: rerouting and
   protection switching.

   Protection switching and rerouting, as defined below, may be used
   together.
   For example, protection switching to a recovery path may be used for
   rapid restoration of connectivity while rerouting determines a new
   optimal network configuration, rearranging paths, as needed, at a
   later time.

3.1.1 Rerouting

   Recovery by rerouting is defined as establishing new paths or path
   segments on demand for restoring traffic after the occurrence of a
   fault. The new paths may be based upon fault information, network
   routing policies, pre-defined configurations and network topology
   information. Thus, upon detecting a fault, paths or path segments to
   bypass the fault are established using signaling.

   Once the network routing algorithms have converged after a fault, it
   may be preferable, in some cases, to reoptimize the network by
   performing a reroute based on the current state of the network and
   network policies. This is discussed further in Section 3.2.3.

   In terms of the principles defined below, reroute recovery employs
   paths established on demand with resources reserved on demand.

3.1.2 Protection Switching

   Protection switching recovery mechanisms pre-establish a recovery
   path or path segment, based upon network routing policies, the
   restoration requirements of the traffic on the working path, and
   administrative considerations. The recovery path may or may not be
   link and node disjoint with the working path. However, if the
   recovery path shares sources of failure with the working path, the
   overall reliability of the construct is degraded. When a fault is
   detected, the protected traffic is switched over to the recovery
   path(s) and restored.

   In terms of the principles discussed below, protection switching
   employs pre-established recovery paths and, if resource reservation
   is required on the recovery path, pre-reserved resources. The
   various sub-types of protection switching are detailed in Section
   4.4 of this document.

3.2.
   The Recovery Cycles

   There are three defined recovery cycles: the MPLS Recovery Cycle,
   the MPLS Reversion Cycle and the Dynamic Re-routing Cycle. The first
   cycle detects a fault and restores traffic onto MPLS-based recovery
   paths. If the recovery path is non-optimal, the cycle may be
   followed by either of the two latter cycles to achieve an optimized
   network again. The reversion cycle applies to explicitly routed
   traffic that does not rely on any dynamic routing protocols having
   converged. The dynamic re-routing cycle applies to traffic that is
   forwarded based on hop-by-hop routing.

3.2.1 MPLS Recovery Cycle Model

   The MPLS recovery cycle model is illustrated in Figure 1.
   Definitions and a key to abbreviations follow.

    --Network Impairment
    |    --Fault Detected
    |    |    --Start of Notification
    |    |    |      --Start of Recovery Operation
    |    |    |      |      --Recovery Operation Complete
    |    |    |      |      |      --Path Traffic Restored
    |    |    |      |      |      |
    |    |    |      |      |      |
    v    v    v      v      v      v
   -----------------------------------------------------------------
    | T1 | T2 |  T3  |  T4  |  T5  |

                 Figure 1. MPLS Recovery Cycle Model

   The various timing measures used in the model are described below.

   T1  Fault Detection Time
   T2  Hold-off Time
   T3  Notification Time
   T4  Recovery Operation Time
   T5  Traffic Restoration Time

   Definitions of the recovery cycle times are as follows:

   Fault Detection Time

   The time between the occurrence of a network impairment and the
   moment the fault is detected by MPLS-based recovery mechanisms. This
   time may be highly dependent on lower layer protocols.

   Hold-Off Time

   The configured waiting time between the detection of a fault and
   taking MPLS-based recovery action, to allow time for lower layer
   protection to take effect. The Hold-Off Time may be zero.
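   The hold-off behavior described above can be sketched in a few
   lines. This is a minimal illustration under stated assumptions, not
   part of the framework itself; the callback names
   `lower_layer_recovered` and `start_mpls_recovery` are introduced
   here purely for the example.

```python
import time

def on_fault_detected(hold_off_seconds, lower_layer_recovered, start_mpls_recovery):
    """Sit out the configured Hold-Off Time before taking MPLS-based
    recovery action, giving lower layer protection a chance to act.
    A Hold-Off Time of zero triggers MPLS recovery immediately."""
    deadline = time.monotonic() + hold_off_seconds
    while time.monotonic() < deadline:
        if lower_layer_recovered():
            return "no-op"            # lower layer restored the traffic
        time.sleep(0.001)             # polling granularity (illustrative)
    start_mpls_recovery()             # hold-off expired: take MPLS action
    return "mpls-recovery"
```

   With a zero hold-off the MPLS recovery action is taken at once; a
   non-zero hold-off lets a lower layer (e.g., SONET) protection switch
   pre-empt the MPLS action.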
   Note: The Hold-Off Time may occur after the Notification Time
   interval if the node responsible for the switchover, the Path Switch
   LSR (PSL), rather than the detecting LSR, is configured to wait.

   Notification Time

   The time between initiation of a fault indication signal (FIS) by
   the LSR detecting the fault and the time at which the Path Switch
   LSR (PSL) begins the recovery operation. This is zero if the PSL
   detects the fault itself or infers a fault from an event such as an
   adjacency failure.

   Note: If the PSL detects the fault itself, there still may be a
   Hold-Off Time period between detection and the start of the recovery
   operation.

   Recovery Operation Time

   The time between the first and last recovery actions. This may
   include message exchanges between the PSL and PML to coordinate
   recovery actions.

   Traffic Restoration Time

   The time between the last recovery action and the time that the
   traffic (if present) is completely recovered. This interval is
   intended to account for the time required for traffic to once again
   arrive at the point in the network that experienced disrupted or
   degraded service due to the occurrence of the fault (e.g., the PML).
   This time may depend on the location of the fault, the recovery
   mechanism, and the propagation delay along the recovery path.

3.2.2 MPLS Reversion Cycle Model

   Protection switching in revertive mode requires the traffic to be
   switched back to a preferred path when the fault on that path is
   cleared. The MPLS reversion cycle model is illustrated in Figure 2.
   Note that the cycle shown below comes after the recovery cycle shown
   in Fig. 1.
    --Network Impairment Repaired
    |    --Fault Cleared
    |    |    --Path Available
    |    |    |      --Start of Reversion Operation
    |    |    |      |      --Reversion Operation Complete
    |    |    |      |      |      --Traffic Restored on Preferred Path
    |    |    |      |      |      |
    |    |    |      |      |      |
    v    v    v      v      v      v
   -----------------------------------------------------------------
    | T7 | T8 |  T9  | T10  | T11  |

                 Figure 2. MPLS Reversion Cycle Model

   The various timing measures used in the model are described below.

   T7   Fault Clearing Time
   T8   Wait-to-Restore Time
   T9   Notification Time
   T10  Reversion Operation Time
   T11  Traffic Restoration Time

   Note that time T6 (not shown above) is the time for which the
   network impairment is not repaired and traffic is flowing on the
   recovery path.

   Definitions of the reversion cycle times are as follows:

   Fault Clearing Time

   The time between the repair of a network impairment and the time
   that MPLS-based mechanisms learn that the fault has been cleared.
   This time may be highly dependent on lower layer protocols.

   Wait-to-Restore Time

   The configured waiting time between the clearing of a fault and the
   MPLS-based recovery action(s). Waiting time may be needed to ensure
   that the path is stable and to avoid flapping in cases where a fault
   is intermittent. The Wait-to-Restore Time may be zero.

   Note: The Wait-to-Restore Time may occur after the Notification Time
   interval if the PSL is configured to wait.

   Notification Time

   The time between initiation of a fault recovery signal (FRS) by the
   LSR clearing the fault and the time at which the path switch LSR
   begins the reversion operation. This is zero if the PSL clears the
   fault itself.

   Note: If the PSL clears the fault itself, there still may be a Wait-
   to-Restore Time period between fault clearing and the start of the
   reversion operation.

   Reversion Operation Time

   The time between the first and last reversion actions.
   This may include message exchanges between the PSL and PML to
   coordinate reversion actions.

   Traffic Restoration Time

   The time between the last reversion action and the time that traffic
   (if present) is completely restored on the preferred path. This
   interval is expected to be quite small, since both paths are working
   and care may be taken to limit the traffic disruption (e.g., using
   "make before break" techniques and synchronous switch-over).

   In practice, the only interesting times in the reversion cycle are
   the Wait-to-Restore Time and the Traffic Restoration Time (or some
   other measure of traffic disruption). Given that both paths are
   available, there is no need for rapid operation, and a well-
   controlled switch-back with minimal disruption is desirable.

3.2.3 Dynamic Re-routing Cycle Model

   Dynamic rerouting aims to bring the IP network to a stable state
   after a network impairment has occurred. A re-optimized network is
   achieved after the routing protocols have converged, and the traffic
   is moved from a recovery path to a (possibly) new working path. The
   steps involved in this mode are illustrated in Figure 3.

   Note that the cycle shown below may be overlaid on the recovery
   cycle shown in Fig. 1 or the reversion cycle shown in Fig. 2, or
   both (in the event that both the recovery cycle and the reversion
   cycle take place before the routing protocols converge). It applies
   when, after the convergence of the routing protocols, it is
   determined (based on on-line algorithms or off-line traffic
   engineering tools, network configuration, or a variety of other
   possible criteria) that there is a better route for the working
   path.
    --Network Enters a Semi-stable State after an Impairment
    |      --Dynamic Routing Protocols Converge
    |      |      --Initiate Setup of New Working Path between PSL
    |      |      |        and PML
    |      |      |      --Switchover Operation Complete
    |      |      |      |      --Traffic Moved to New Working Path
    |      |      |      |      |
    |      |      |      |      |
    v      v      v      v      v
   -----------------------------------------------------------------
    | T12  | T13  | T14  | T15  |

                Figure 3. Dynamic Rerouting Cycle Model

   The various timing measures used in the model are described below.

   T12  Network Route Convergence Time
   T13  Hold-down Time (optional)
   T14  Switchover Operation Time
   T15  Traffic Restoration Time

   Network Route Convergence Time

   We define the network route convergence time as the time taken for
   the network routing protocols to converge and for the network to
   reach a stable state.

   Hold-down Time

   We define the hold-down period as a bounded time for which a
   recovery path must be used. In some scenarios it may be difficult to
   determine if the working path is stable. In these cases, a hold-down
   time may be used to prevent excess flapping of traffic between a
   working and a recovery path.

   Switchover Operation Time

   The time between the first and last switchover actions. This may
   include message exchanges between the PSL and PML to coordinate the
   switchover actions.

   As an example of the recovery cycle, we present the sequence of
   events that occurs after a network impairment when a protection
   switch is followed by dynamic rerouting:

   I.    Link or path fault occurs
   II.   Signaling initiated (FIS) for the detected fault
   III.  FIS arrives at the PSL
   IV.   The PSL initiates a protection switch to a pre-configured
         recovery path
   V.    The PSL switches over the traffic from the working path to the
         recovery path
   VI.   The network enters a semi-stable state
   VII.
Dynamic routing protocols converge after the fault, and a new 608 working path is calculated (based, for example, on some of the 609 criteria mentioned in Section 2.1.1). 610 VIII. A new working path is established between the PSL and the PML 611 (the assumption is that the PSL and PML have not changed) 612 IX. Traffic is switched over to the new working path. 614 3.3. Definitions and Terminology 615 This document assumes the terminology given in [1], and, in addition, 616 introduces the following new terms. 618 3.3.1 General Recovery Terminology 620 Rerouting 622 A recovery mechanism in which the recovery path or path segments are 623 created dynamically after the detection of a fault on the working 624 path. In other words, a recovery mechanism in which the recovery path 625 is not pre-established. 627 Protection Switching 629 A recovery mechanism in which the recovery path or path segments are 630 created prior to the detection of a fault on the working path. In 631 other words, a recovery mechanism in which the recovery path is 632 pre-established. 634 Working Path 636 The protected path that carries traffic before the occurrence of a 637 fault. The working path exists between a PSL and PML. The working 638 path can be of different kinds: a hop-by-hop routed path, a trunk, a 639 link, an LSP, or part of a multipoint-to-point LSP. 641 Synonyms for a working path are primary path and active path. 643 Recovery Path 645 The path by which traffic is restored after the occurrence of a 646 fault. In other words, the path on which the traffic is directed by 647 the recovery mechanism. The recovery path is established by MPLS 648 means. The recovery path can either be an equivalent recovery path 649 and ensure no reduction in quality of service, or be a limited 650 recovery path and thereby not guarantee the same quality of service 651 (or some other criteria of performance) as the working path.
A 652 limited recovery path is not expected to be used for an extended 653 period of time. 655 Synonyms for a recovery path are: back-up path, alternative path, and 656 protection path. 658 Protection Counterpart 660 The "other" path when discussing pre-planned protection switching 661 schemes. The protection counterpart for the working path is the 662 recovery path and vice-versa. 664 Path Group (PG) 666 A logical bundling of multiple working paths, each of which is routed 667 identically between a Path Switch LSR and a Path Merge LSR. 669 Protected Path Group (PPG) 671 A path group that requires protection. 673 Protected Traffic Portion (PTP) 675 The portion of the traffic on an individual path that requires 676 protection. For example, code points in the EXP bits of the shim 677 header may identify a protected portion. 679 Path Switch LSR (PSL) 681 An LSR that is responsible for switching or replicating the traffic 682 between the working path and the recovery path. 684 Path Merge LSR (PML) 686 An LSR that is responsible for receiving the recovery path traffic, 687 and either merging the traffic back onto the working path, or, if it 688 is itself the destination, passing the traffic on to the higher layer 689 protocols. 691 Point of Repair (POR) 693 An LSR that is set up to perform MPLS recovery. In other words, an 694 LSR that is responsible for effecting the repair of an LSP. The POR, 695 for example, can be a PSL or a PML, depending on the type of recovery 696 scheme employed. 698 Intermediate LSR 700 An LSR on a working or recovery path that is neither a PSL nor a PML 701 for that path. 703 Bypass Tunnel 705 A path that serves to back up a set of working paths using the label 706 stacking approach [1]. The working paths and the bypass tunnel must 707 all share the same path switch LSR (PSL) and the path merge LSR 708 (PML).
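As an illustration only (not part of this framework), the label stacking used by a bypass tunnel can be sketched in a few lines of Python. The label values and helper names below are hypothetical:

```python
# Illustrative sketch of bypass-tunnel label stacking between a PSL and
# a PML. Labels are modeled as a list, outermost label first. The label
# values (100, 201, 202) and function names are hypothetical.

def psl_push(inner_labels, recovery_label, tunnel_label):
    """At the PSL: push the recovery-path label, then the tunnel label."""
    return [tunnel_label, recovery_label] + list(inner_labels)

def pml_pop(label_stack):
    """At the PML: pop the tunnel label, exposing the recovery-path label."""
    assert len(label_stack) >= 2, "expected tunnel label over recovery label"
    return label_stack[1:]

# Two recovery paths share one bypass tunnel (outer label 100).
stack_a = psl_push([], recovery_label=201, tunnel_label=100)
stack_b = psl_push([], recovery_label=202, tunnel_label=100)
assert stack_a == [100, 201] and stack_b == [100, 202]
assert pml_pop(stack_a) == [201]
```

Intermediate LSRs forward solely on the outer tunnel label, which is why the individual recovery paths remain transparent to them.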
710 Switch-Over 712 The process of switching the traffic from the path that the traffic 713 is flowing on onto one or more alternate path(s). This may involve 714 moving traffic from a working path onto one or more recovery paths, 715 or may involve moving traffic from a recovery path(s) on to a more 716 optimal working path(s). 718 Switch-Back 719 The process of returning the traffic from one or more recovery paths 720 back to the working path(s). 722 Revertive Mode 724 A recovery mode in which traffic is automatically switched back from 725 the recovery path to the original working path upon the restoration 726 of the working path to a fault-free condition. This assumes a failed 727 working path does not automatically surrender resources to the 728 network. 730 Non-revertive Mode 732 A recovery mode in which traffic is not automatically switched back 733 to the original working path after this path is restored to a fault- 734 free condition. (Depending on the configuration, the original working 735 path may, upon moving to a fault-free condition, become the recovery 736 path, or it may be used for new working traffic, and be no longer 737 associated with its original recovery path). 739 MPLS Protection Domain 741 The set of LSRs over which a working path and its corresponding 742 recovery path are routed. 744 MPLS Protection Plan 746 The set of all LSP protection paths and the mapping from working to 747 protection paths deployed in an MPLS protection domain at a given 748 time. 750 Liveness Message 752 A message exchanged periodically between two adjacent LSRs that 753 serves as a link probing mechanism. It provides an integrity check of 754 the forward and the backward directions of the link between the two 755 LSRs as well as a check of neighbor aliveness. 757 Path Continuity Test 759 A test that verifies the integrity and continuity of a path or path 760 segment. The details of such a test are beyond the scope of this 761 draft. 
(This could be accomplished, for example, by transmitting a 762 control message along the same links and nodes as the data traffic or 763 similarly could be measured by the absence of traffic and by 764 providing feedback.) 766 3.3.2 Failure Terminology 768 Path Failure (PF) 769 Path failure is a fault detected by MPLS-based recovery mechanisms, 770 defined as the failure of a liveness message test or a path 771 continuity test, indicating that path connectivity is lost. 773 Path Degraded (PD) 774 Path degraded is a fault detected by MPLS-based recovery mechanisms 775 that indicates that the quality of the path is unacceptable. 777 Link Failure (LF) 778 A lower layer fault indicating that link continuity is lost. This may 779 be communicated to the MPLS-based recovery mechanisms by the lower 780 layer. 782 Link Degraded (LD) 783 A lower layer indication to MPLS-based recovery mechanisms that the 784 link is performing below an acceptable level. 786 Fault Indication Signal (FIS) 787 A signal that indicates that a fault along a path has occurred. It is 788 relayed by each intermediate LSR to its upstream or downstream 789 neighbor, until it reaches an LSR that is set up to perform MPLS 790 recovery (the POR). The FIS is transmitted periodically by the 791 node/nodes closest to the point of failure, for some configurable 792 length of time. 794 Fault Recovery Signal (FRS) 795 A signal that indicates that a fault along a working path has been 796 repaired. Again, like the FIS, it is relayed by each intermediate LSR 797 to its upstream or downstream neighbor, until it reaches the LSR that 798 performs recovery of the original path. The FRS is transmitted 799 periodically by the node/nodes closest to the point of failure, for 800 some configurable length of time. 802 3.4. Abbreviations 804 FIS: Fault Indication Signal. 805 FRS: Fault Recovery Signal. 806 LD: Link Degraded. 807 LF: Link Failure. 808 PD: Path Degraded. 809 PF: Path Failure. 810 PML: Path Merge LSR.
811 PG: Path Group. 812 POR: Point of Repair. 813 PPG: Protected Path Group. 814 PTP: Protected Traffic Portion. 815 PSL: Path Switch LSR. 817 4. MPLS-based Recovery Principles 819 MPLS-based recovery refers to the ability to effect quick and 820 complete restoration of traffic affected by a fault in an MPLS-enabled 821 network. The fault may be detected on the IP layer or in 822 lower layers over which IP traffic is transported. The fastest MPLS 823 recovery is assumed to be achieved with protection switching, with an 824 MPLS LSR switch-over completion time that is comparable to, 825 or equivalent to, the 50 ms switch-over completion time of the 826 SONET layer. This section provides a discussion of the concepts and 827 principles of MPLS-based recovery. The concepts are presented in 828 terms of atomic or primitive terms that may be combined to specify 829 recovery approaches. We do not make any assumptions about the 830 underlying layer 1 or layer 2 transport mechanisms or their recovery 831 mechanisms. 833 4.1. Configuration of Recovery 835 An LSR may support any or all of the following recovery options: 837 Default-recovery (No MPLS-based recovery enabled): 838 Traffic on the working path is recovered only via Layer 3 or IP 839 rerouting or by some lower layer mechanism such as SONET APS. This 840 is equivalent to having no MPLS-based recovery. This option may be 841 used for low priority traffic or for traffic that is recovered in 842 another way (for example, load shared traffic on parallel working 843 paths may be automatically recovered upon a fault along one of the 844 working paths by distributing it among the remaining working paths). 846 Recoverable (MPLS-based recovery enabled): 847 The working path is recovered using one or more recovery paths, 848 either via rerouting or via protection switching. 850 4.2. Initiation of Path Setup 852 There are three options for the initiation of the recovery path 853 setup.
The active and recovery paths may be established by using 854 either RSVP-TE [4][5] or CR-LDP [6]. 856 Pre-established: 858 This is the same as the protection switching option. Here a recovery 859 path (or paths) is established prior to any failure on the working path. The 860 path selection can either be determined by a centralized 861 administrative tool, or chosen based on some algorithm implemented at 862 the PSL and possibly intermediate nodes. To guard against the 863 situation when the pre-established recovery path fails before or at 864 the same time as the working path, the recovery path should have 865 secondary configuration options, as explained in Section 3.3 below. 867 Pre-Qualified: 869 A recovery path need not be expressly created; it may instead be pre-qualified. 870 A pre-qualified recovery path is not created expressly for protecting 871 the working path, but instead is a path created for other purposes 872 that is designated as a recovery path after determining that it is an 873 acceptable alternative for carrying the working path traffic. 875 Variants include the case where an optical path or trail is 876 configured, but no switches are set. 878 Established-on-Demand: 880 This is the same as the rerouting option. Here, a recovery path is 881 established after a failure on its working path has been detected and 882 notified to the PSL. 884 4.3. Initiation of Resource Allocation 886 A recovery path may support the same traffic contract as the working 887 path, or it may not. We will distinguish these two situations by 888 using different additive terms. If the recovery path is capable of 889 replacing the working path without degrading service, it will be 890 called an equivalent recovery path. If the recovery path lacks the 891 resources (or resource reservations) to replace the working path 892 without degrading service, it will be called a limited recovery path.
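The equivalent/limited distinction can be made concrete with a small sketch (illustrative only; it uses reserved bandwidth as a stand-in for the full traffic contract, and the function name is hypothetical):

```python
# Hypothetical helper illustrating the draft's distinction between an
# "equivalent" and a "limited" recovery path. Reserved bandwidth stands
# in here for the complete traffic contract of the working path.

def classify_recovery_path(working_bw_mbps, recovery_bw_mbps):
    """Return 'equivalent' if the recovery path can replace the working
    path without degrading service, else 'limited'."""
    if recovery_bw_mbps >= working_bw_mbps:
        return "equivalent"
    return "limited"

assert classify_recovery_path(100, 100) == "equivalent"
assert classify_recovery_path(100, 40) == "limited"
```

A real classification would of course compare every element of the traffic contract (delay, loss, class of service), not bandwidth alone.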
893 Based on this, there are two options for the initiation of resource 894 allocation: 896 Pre-reserved: 898 This option applies only to protection switching. Here a pre- 899 established recovery path reserves required resources on all hops 900 along its route during its establishment. Although the reserved 901 resources (e.g., bandwidth and/or buffers) at each node cannot be 902 used to admit more working paths, they are available to be used by 903 all traffic that is present at the node before a failure occurs. 905 Reserved-on-Demand: 907 This option may apply either to rerouting or to protection switching. 908 Here a recovery path reserves the required resources after a failure 909 on the working path has been detected and notified to the PSL and 910 before the traffic on the working path is switched over to the 911 recovery path. 913 Note that under both the options above, depending on the amount of 914 resources reserved on the recovery path, it could either be an 915 equivalent recovery path or a limited recovery path. 917 4.4. Scope of Recovery 919 4.4.1 Topology 921 4.4.1.1 Local Repair 923 The intent of local repair is to protect against a link or neighbor 924 node fault and to minimize the amount of time required for failure 925 propagation. In local repair (also known as local recovery), the node 926 immediately upstream of the fault is the one to initiate recovery 927 (either rerouting or protection switching). Local repair can be of 928 two types: 930 Link Recovery/Restoration 932 In this case, the recovery path may be configured to route around a 933 certain link deemed to be unreliable. If protection switching is 934 used, several recovery paths may be configured for one working path, 935 depending on the specific faulty link that each protects against. 937 Alternatively, if rerouting is used, upon the occurrence of a fault 938 on the specified link, each path is rebuilt such that it detours 939 around the faulty link. 
940 In this case, the recovery path need only be disjoint from its 941 working path at a particular link on the working path, and may have 942 overlapping segments with the working path. Traffic on the working 943 path is switched over to an alternate path at the upstream LSR that 944 connects to the failed link. This method is potentially the fastest 945 to perform the switchover, and can be effective in situations where 946 certain path components are much more unreliable than others. 948 Node Recovery/Restoration 950 In this case, the recovery path may be configured to route around a 951 neighbor node deemed to be unreliable. Thus the recovery path is 952 disjoint from the working path only at a particular node and at links 953 associated with the working path at that node. Once again, the 954 traffic on the primary path is switched over to the recovery path at 955 the upstream LSR that directly connects to the failed node, and the 956 recovery path shares overlapping portions with the working path. 958 4.4.1.2 Global Repair 960 The intent of global repair is to protect against any link or node 961 fault on a path or on a segment of a path, with the obvious exception 962 of faults occurring at the ingress node of the protected path 963 segment. In global repair, the POR is usually distant from the 964 failure and needs to be notified by a FIS. 965 Global repair thus also encompasses end-to-end path recovery/restoration. 966 In many cases, the recovery path can be made completely link and node 967 disjoint with its working path. This has the advantage of protecting 968 against all link and node fault(s) on the working path (end-to-end 969 path or path segment). 970 However, global repair may, in some cases, be slower than local repair, since the 971 fault notification message must now travel to the POR to trigger the 972 recovery action. 974 4.4.1.3 Alternate Egress Repair 976 It is possible to restore service without specifically recovering the 977 faulted path.
978 For example, for best effort IP service it is possible to select a 979 recovery path that has a different egress point from the working path 980 (i.e., there is no PML). The recovery path egress must simply be a 981 router that is acceptable for forwarding the FEC carried by the 982 working path (without creating looping). In an engineering context, 983 specific alternative FEC/LSP mappings with alternate egresses can be 984 formed. 986 This may simplify enhancing the reliability of implicitly constructed 987 MPLS topologies. A PSL may qualify LSP/FEC bindings as candidate 988 recovery paths simply by requiring that they be link and node disjoint 989 with the immediate downstream LSR of the working path. 991 4.4.1.4 Multi-Layer Repair 993 Multi-layer repair broadens the network designer's tool set for those 994 cases where multiple network layers can be managed together to 995 achieve overall network goals. Specific criteria for determining 996 when multi-layer repair is appropriate are beyond the scope of this 997 draft. 999 4.4.1.5 Concatenated Protection Domains 1001 A given service may cross multiple networks and these may employ 1002 different recovery mechanisms. It is possible to concatenate 1003 protection domains so that service recovery can be provided end-to-end. 1004 It is considered that the recovery mechanisms in different 1005 domains may operate autonomously, and that multiple points of 1006 attachment may be used between domains (to ensure there is no single 1007 point of failure). Alternate egress repair requires management of 1008 concatenated domains in that an explicit MPLS point of failure (the 1009 PML) is by definition excluded. Details of concatenated protection 1010 domains are beyond the scope of this draft. 1012 4.4.2 Path Mapping 1014 Path mapping refers to the methods of mapping traffic from a faulty 1015 working path on to the recovery path. There are several options for 1016 this, as described below.
Note that the options below should be 1017 viewed as atomic terms that only describe how the working and 1018 protection paths are mapped to each other. The issues of resource 1019 reservation along these paths, and how switchover is actually 1020 performed lead to the more commonly used composite terms, such as 1+1 1021 and 1:1 protection, which were described in Section 2.1. 1023 1-to-1 Protection 1025 In 1-to-1 protection the working path has a designated recovery path 1026 that is only to be used to recover that specific working path. 1028 n-to-1 Protection 1030 In n-to-1 protection, up to n working paths are protected using only 1031 one recovery path. If the intent is to protect against any single 1032 fault on any of the working paths, the n working paths should be 1033 diversely routed between the same PSL and PML. In some cases, 1034 handshaking between PSL and PML may be required to complete the 1035 recovery, the details of which are beyond the scope of this draft. 1037 n-to-m Protection 1039 In n-to-m protection, up to n working paths are protected using m 1040 recovery paths. Once again, if the intent is to protect against any 1041 single fault on any of the n working paths, the n working paths and 1042 the m recovery paths should be diversely routed between the same PSL 1043 and PML. In some cases, handshaking between PSL and PML may be 1044 required to complete the recovery, the details of which are beyond 1045 the scope of this draft. n-to-m protection is for further study. 1047 Split Path Protection 1049 In split path protection, multiple recovery paths are allowed to 1050 carry the traffic of a working path based on a certain configurable 1051 load splitting ratio. This is especially useful when no single 1052 recovery path can be found that can carry the entire traffic of the 1053 working path in case of a fault. 
Split path protection may require 1054 handshaking between the PSL and the PML(s), and may require the 1055 PML(s) to correlate the traffic arriving on multiple recovery paths 1056 with the working path. Although this is an attractive option, the 1057 details of split path protection are beyond the scope of this draft, 1058 and are for further study. 1060 4.4.3 Bypass Tunnels 1062 It may be convenient, in some cases, to create a "bypass tunnel" for 1063 a PPG between a PSL and PML, thereby allowing multiple recovery paths 1064 to be transparent to intervening LSRs [2]. In this case, one LSP 1065 (the tunnel) is established between the PSL and PML following an 1066 acceptable route and a number of recovery paths are supported through 1067 the tunnel via label stacking. A bypass tunnel can be used with any 1068 of the path mapping options discussed in the previous section. 1070 As with recovery paths, the bypass tunnel may or may not have 1071 resource reservations sufficient to provide recovery without service 1072 degradation. It is possible that the bypass tunnel may have 1073 sufficient resources to recover some number of working paths, but not 1074 all at the same time. If the number of recovery paths carrying 1075 traffic in the tunnel at any given time is restricted, this is 1076 similar to the n-to-1 or n-to-m protection cases mentioned in Section 1077 4.4.2. 1079 4.4.4 Recovery Granularity 1081 Another dimension of recovery considers the amount of traffic 1082 requiring protection. This may range from a fraction of a path to a 1083 bundle of paths. 1085 4.4.4.1 Selective Traffic Recovery 1087 This option allows for the protection of a fraction of traffic within 1088 the same path. The portion of the traffic on an individual path that 1089 requires protection is called a protected traffic portion (PTP). A 1090 single path may carry different classes of traffic, with different 1091 protection requirements.
The protected portion of this traffic may be 1092 identified by its class, as for example, via the EXP bits in the MPLS 1093 shim header or via the priority bit in the ATM header. 1095 4.4.4.2 Bundling 1097 Bundling is a technique used to group multiple working paths together 1098 in order to recover them simultaneously. The logical bundling of 1099 multiple working paths requiring protection, each of which is routed 1100 identically between a PSL and a PML, is called a protected path group 1101 (PPG). When a fault occurs on the working path carrying the PPG, the 1102 PPG as a whole can be protected either by being switched to a bypass 1103 tunnel or by being switched to a recovery path. 1105 4.4.5 Recovery Path Resource Use 1107 In the case of pre-reserved recovery paths, there is the question of 1108 what use these resources may be put to when the recovery path is not 1109 in use. There are three options: 1111 Dedicated-resource: 1112 If the recovery path resources are dedicated, they may not be used 1113 for anything except carrying the working traffic. For example, in 1114 the case of 1+1 protection, the working traffic is always carried on 1115 the recovery path. Even if the recovery path is not always carrying 1116 the working traffic, it may not be possible or desirable to allow 1117 other traffic to use these resources. 1119 Extra-traffic-allowed: 1120 If the recovery path only carries the working traffic when the 1121 working path fails, then it is possible to allow extra traffic to use 1122 the reserved resources at other times. Extra traffic is, by 1123 definition, traffic that can be displaced (without violating service 1124 agreements) whenever the recovery path resources are needed for 1125 carrying the working path traffic. 1127 Shared-resource: 1128 A shared recovery resource is reserved for use by multiple primary 1129 resources that (according to shared risk link groups, SRLGs) are not 1130 expected to fail simultaneously. 1132 4.5.
Fault Detection 1134 MPLS recovery is initiated after the detection of either a lower 1135 layer fault or a fault at the IP layer or in the operation of MPLS-based 1136 mechanisms. We consider four classes of impairments: Path 1137 Failure, Path Degraded, Link Failure, and Link Degraded. 1139 Path Failure (PF) is a fault that indicates to an MPLS-based recovery 1140 scheme that the connectivity of the path is lost. This may be 1141 detected by a path continuity test between the PSL and PML. Some, 1142 and perhaps the most common, path failures may be detected using a 1143 link probing mechanism between neighbor LSRs. An example of a probing 1144 mechanism is a liveness message that is exchanged periodically along 1145 the working path between peer LSRs [3]. For either a link probing 1146 mechanism or path continuity test to be effective, the test message 1147 must be guaranteed to follow the same route as the working or 1148 recovery path, over the segment being tested. In addition, the path 1149 continuity test must take the path merge points into consideration. 1150 In the case of a bi-directional link implemented as two 1151 unidirectional links, path failure could mean that either one or both 1152 unidirectional links are damaged. 1154 Path Degraded (PD) is a fault that indicates to MPLS-based recovery 1155 schemes/mechanisms that the path has connectivity, but that the 1156 quality of the connection is unacceptable. This may be detected by a 1157 path performance monitoring mechanism, or some other mechanism for 1158 determining the error rate on the path or some portion of the path. 1159 One such mechanism, local to the LSR, is the detection of excessive 1160 discarding of packets at an interface, due, for example, to label 1161 mismatches or to TTL errors. 1163 Link Failure (LF) is an indication from a lower layer that the link 1164 over which the path is carried has failed.
If the lower layer 1165 supports detection and reporting of this fault (that is, any fault 1166 that indicates link failure, e.g., SONET LOS), this may be used by the 1167 MPLS recovery mechanism. In some cases, using LF indications may 1168 provide faster fault detection than using only MPLS-based fault 1169 detection mechanisms. 1171 Link Degraded (LD) is an indication from a lower layer that the link 1172 over which the path is carried is performing below an acceptable 1173 level. If the lower layer supports detection and reporting of this 1174 fault, it may be used by the MPLS recovery mechanism. In some cases, 1175 using LD indications may provide faster fault detection than using 1176 only MPLS-based fault detection mechanisms. 1178 4.6. Fault Notification 1180 MPLS-based recovery relies on rapid and reliable notification of 1181 faults. Once a fault is detected, the node that detected the fault 1182 must determine if the fault is severe enough to require path 1183 recovery. If the node is not capable of initiating direct action 1184 (e.g., as a point of repair, POR) the node should send out a 1185 notification of the fault by transmitting a FIS to the POR. This can 1186 take several forms: 1188 (i) control plane messaging: relayed hop-by-hop along the path of the 1189 failed LSP until a POR is reached. 1191 (ii) user plane messaging: sent to the PML, which may take corrective 1192 action (as a POR for 1+1) or then communicate with a POR (for 1:n) by 1193 any of several means: 1194 - control plane messaging 1195 - user plane return path (either through a bi-directional LSP 1196 or via other means) 1198 Since the FIS is a control message, it should be transmitted with 1199 high priority to ensure that it propagates rapidly towards the 1200 affected POR(s). Depending on how fault notification is configured in 1201 the LSRs of an MPLS domain, the FIS could be sent either as a Layer 2 1202 or Layer 3 packet [3].
The use of a Layer 2-based notification 1203 requires a Layer 2 path direct to the POR. An example of a FIS could 1204 be the liveness message sent by a downstream LSR to its upstream 1205 neighbor, with an optional fault notification field set, or the FIS can be 1206 denoted implicitly by a teardown message. Alternatively, it could be 1207 a separate fault notification packet. The intermediate LSR should 1208 identify which of its incoming links to propagate the FIS on. 1210 4.7. Switch-Over Operation 1212 4.7.1 Recovery Trigger 1214 The activation of an MPLS protection switch following the detection 1215 or notification of a fault requires a trigger mechanism at the PSL. 1216 MPLS protection switching may be initiated due to automatic inputs or 1217 external commands. The automatic activation of an MPLS protection 1218 switch results from a response to defect or fault conditions 1219 detected at the PSL or to fault notifications received at the PSL. It 1220 is possible that the fault detection and trigger mechanisms may be 1221 combined, as is the case when a PF, PD, LF, or LD is detected at a 1222 PSL and triggers a protection switch to the recovery path. In most 1223 cases, however, the detection and trigger mechanisms are distinct, 1224 involving the detection of a fault at some intermediate LSR followed by 1225 the propagation of a fault notification to the POR via the FIS, which 1226 serves as the protection switch trigger at the POR. MPLS protection 1227 switching in response to external commands results when the operator 1228 initiates a protection switch by a command to a POR (or alternatively 1229 by a configuration command to an intermediate LSR, which transmits 1230 the FIS towards the POR). 1232 Note that the PF fault applies to hard failures (fiber cuts, 1233 transmitter failures, or LSR fabric failures), as does the LF fault, 1234 with the difference that the LF is a lower layer impairment that may 1235 be communicated to MPLS-based recovery mechanisms.
The PD (or LD) 1236 fault, on the other hand, applies to soft defects (excessive errors 1237 due to noise on the link, for instance). The PD (or LD) results in a 1238 fault declaration only when the percentage of lost packets exceeds a 1239 given threshold, which is provisioned and may be set based on the 1240 service level agreement(s) in effect between a service provider and a 1241 customer. 1243 4.7.2 Recovery Action 1245 After a fault is detected or a FIS is received by the POR, the recovery 1246 action involves either a rerouting or a protection switching operation. 1247 In both scenarios, the next hop label forwarding entry for a recovery 1248 path is bound to the working path. 1250 4.8. Post Recovery Operation 1252 When traffic is flowing on the recovery path, a decision can be made 1253 whether to let the traffic remain on the recovery path, treating it 1254 as a new working path, or to switch the traffic back to the old (or a new) working 1255 path. This post recovery operation has two styles: one in which the 1256 protection counterparts, i.e., the working and recovery paths, are 1257 fixed or "pinned" to their routes, and one in which the PSL or other 1258 network entity with real-time knowledge of the failure dynamically 1259 performs re-establishment or controlled rearrangement of the paths 1260 comprising the protected service. 1262 4.8.1 Fixed Protection Counterparts 1264 For fixed protection counterparts, the PSL will be pre-configured with 1265 the appropriate behavior to take when the original fixed path is 1266 restored to service. The choices are revertive and non-revertive 1267 mode. The choice will typically depend on the relative costs of the 1268 working and protection paths, and the tolerance of the service to the 1269 effects of switching paths yet again. These protection modes indicate 1270 whether or not there is a preferred path for the protected traffic.
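The two fixed-counterpart behaviors can be sketched as follows. This is an illustrative sketch under assumptions, not a specification; the class and method names are hypothetical:

```python
# Sketch of a PSL with fixed protection counterparts, pre-configured
# as revertive or non-revertive. Hypothetical names; state transitions
# follow the revertive/non-revertive definitions in the text.

class PathSwitchLSR:
    def __init__(self, revertive):
        self.revertive = revertive
        self.active = "working"      # path currently carrying traffic

    def on_fault(self):
        """Fault detected or FIS received: switch to the recovery path."""
        self.active = "recovery"

    def on_working_path_restored(self):
        """FRS received: revert only if configured to do so."""
        if self.revertive:
            self.active = "working"
        # non-revertive: traffic stays on the recovery path

psl = PathSwitchLSR(revertive=True)
psl.on_fault(); psl.on_working_path_restored()
assert psl.active == "working"      # revertive mode switches back

psl = PathSwitchLSR(revertive=False)
psl.on_fault(); psl.on_working_path_restored()
assert psl.active == "recovery"     # non-revertive mode stays put
```

In a real PSL the revert decision would typically also be gated by a wait-to-restore or hold-down timer to avoid flapping between the two paths.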
1272 4.8.1.1 Revertive Mode 1274 If the working path is always the preferred path, this path will be 1275 used whenever it is available. Thus, in the event of a fault on this 1276 path, its unused resources will not be reclaimed by the network. 1277 If the working path has a fault, traffic is switched to the 1278 recovery path. In the revertive mode of operation, when the 1279 preferred path is restored the traffic is automatically switched back 1280 to it. 1282 There are a number of implications to pinned working and recovery 1283 paths: 1284 - upon a failure, once traffic has moved to the recovery path, it is 1285 unprotected until the defect in the original 1286 working path is repaired and that path is restored to service. 1287 - upon a failure, after traffic has moved to the recovery path, the resources 1288 associated with the original path remain reserved. 1290 4.8.1.2 Non-revertive Mode 1292 In the non-revertive mode of operation, there is no preferred path, or 1293 it may be desirable to minimize further disruption of the service 1294 brought on by a revertive switching operation. A switch-back to the 1295 original working path may not be desired, or not possible, since the 1296 original path may no longer exist after the occurrence of a fault on 1297 that path. 1298 If there is a fault on the working path, traffic is switched to the 1299 recovery path. When or if the faulty path (the original working 1300 path) is restored, it may become the recovery path (either by 1301 configuration, or, if desired, by management actions). 1303 In the non-revertive mode of operation, the working traffic may or 1304 may not be restored to a new optimal working path or to the original 1305 working path.
This is because it might be useful, in some cases, to: (a) administratively perform a protection switch back to the original working path after gaining further assurance of the integrity of the path, (b) continue operation on the recovery path, if that is acceptable, or (c) move the traffic to a new optimal working path that is calculated based on network topology and network policies.

4.8.2 Dynamic Protection Counterparts

For dynamic protection counterparts, when the traffic is switched over to a recovery path, the association between the original working path and the recovery path may no longer exist, since the original path itself may no longer exist after the fault. Instead, when the network reaches a stable state following routing convergence, the recovery path may be switched over to a different preferred path, selected either by optimization based on the new network topology and associated information or based on pre-configured information.

Dynamic protection counterparts assume that, upon failure, the PSL or other network entity will establish new working paths if another switch-over is to be performed.

4.8.3 Restoration and Notification

MPLS restoration deals with returning the working traffic from the recovery path to the original or a new working path. Reversion is performed by the PSL either upon receiving notification, via the FRS, that the working path is repaired, or upon receiving notification that a new working path has been established.

For fixed counterparts in revertive mode, the LSR that detected the fault on the working path also detects the restoration of the working path. If the working path had experienced an LF defect, the LSR detects a return to normal operation via the receipt of a liveness message from its peer.
If the working path had experienced an LD defect at an LSR interface, the LSR could detect a return to normal operation via the resumption of error-free packet reception on that interface. Alternatively, a lower layer that no longer detects an LF defect may inform the MPLS-based recovery mechanisms at the LSR that the link to its peer LSR is operational. The LSR then transmits an FRS to the upstream LSR(s) that were transmitting traffic on the working path. When the PSL receives the FRS, it switches the working traffic back to the original working path.

A similar scheme applies to dynamic counterparts, where, for example, an update of topology and/or network convergence may trigger the installation or setup of new working paths and a notification to the PSL to perform a switch-over.

We note that if there is a way to transmit fault information back along a recovery path towards a PSL, and if the recovery path is an equivalent working path, it is possible for the working path and its recovery path to exchange roles once the original working path is repaired following a fault. This is because, in that case, the recovery path effectively becomes the working path, and the restored working path functions as a recovery path for the original recovery path. This is important, since it affords the benefits of the non-revertive mode of operation outlined in Section 4.8.1.2, without leaving the recovery path unprotected.

4.8.4 Reverting to Preferred Path (or Controlled Rearrangement)

In the revertive mode, "make before break" restoration switching can be used, which is less disruptive than performing protection switching upon the occurrence of network impairments. This minimizes both packet loss and packet reordering.
The controlled rearrangement of paths can also be used to satisfy traffic engineering requirements for load balancing across an MPLS domain.

4.9. Performance

Resource/performance requirements for recovery paths should be specified in terms of the following attributes:

I. Resource Class Attribute:

Equivalent Recovery Class: The recovery path has the same resource reservations and performance guarantees as the working path. In other words, the recovery path meets the same SLAs as the working path.

Limited Recovery Class: The recovery path does not have the same resource reservations and performance guarantees as the working path.

   A. Lower Class: The recovery path has lower resource requirements or less stringent performance requirements than the working path.

   B. Best Effort Class: The recovery path is best effort.

II. Priority Attribute:
The recovery path has a priority attribute just like the working path (i.e., the priority attribute of the associated traffic trunks). It can have the same priority as the working path or a lower one.

III. Preemption Attribute:
The recovery path can have the same preemption attribute as the working path or a lower one.

5. MPLS Recovery Features

The following features are desirable from an operational point of view:

I. It is desirable that MPLS recovery provide an option to identify protection groups (PPGs) and protection portions (PTPs).

II. Each PSL should be capable of performing MPLS recovery upon the detection of impairments or upon receipt of notification of impairments.

III. An MPLS recovery method should not preclude manual protection switching commands.
This implies that it should be possible, via administrative commands, to transfer traffic from a working path to a recovery path, or to transfer traffic from a recovery path to a working path once the working path becomes operational following a fault.

IV. A PSL may be capable of performing either a switch-back to the original working path after the fault is corrected, or a switch-over to a new working path upon the discovery or establishment of a more optimal working path.

V. The recovery model should take into consideration path merging at intermediate LSRs. If a fault affects the merged segment, all the paths sharing that merged segment should be able to recover. Similarly, if a fault affects a non-merged segment, only the path that is affected by the fault should be recovered.

6. Comparison Criteria

Possible criteria for comparing MPLS-based recovery schemes are as follows:

Recovery Time

We define recovery time as the time required for a recovery path to be activated (and traffic to be flowing) after a fault. Recovery Time is the sum of the Fault Detection Time, the Hold-off Time, the Notification Time, the Recovery Operation Time, and the Traffic Restoration Time. In other words, it is the time between the failure of a node or link in the network and the time at which a recovery path is installed and traffic starts flowing on it.

Full Restoration Time

We define full restoration time as the time required for a permanent restoration. This is the time required for traffic to be routed onto links that are capable of, or have been engineered sufficiently for, handling traffic in recovery scenarios. Note that this time may or may not differ from the Recovery Time, depending on whether equivalent or limited recovery paths are used.
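The decomposition of Recovery Time above can be expressed as a simple sum of its components. The helper below is only a sketch; the function name and the sample figures are illustrative and not taken from this framework:

```python
def recovery_time(detection, hold_off, notification, operation, restoration):
    """Recovery Time = Fault Detection Time + Hold-off Time +
    Notification Time + Recovery Operation Time + Traffic Restoration
    Time, with all components expressed in the same unit (e.g. ms)."""
    return detection + hold_off + notification + operation + restoration

# Hypothetical figures: 10 ms detection, no hold-off, 15 ms notification,
# 20 ms recovery operation, 5 ms traffic restoration -> 50 ms total.
total = recovery_time(10, 0, 15, 20, 5)
```

Comparing schemes then amounts to comparing which components dominate: local repair shortens the notification term, while pre-established recovery paths shorten the recovery operation term.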
Setup Vulnerability

The amount of time that a working path, or a set of working paths, is left unprotected during such tasks as recovery path computation and recovery path setup may be used to compare schemes. The nature of this vulnerability should be taken into account: for example, end-to-end schemes correlate the vulnerability with working paths, local repair schemes have a topological correlation that cuts across working paths, and network plan approaches have a correlation that impacts the entire network.

Backup Capacity

Recovery schemes may require differing amounts of "backup capacity" in the event of a fault. This capacity depends on the traffic characteristics of the network. However, it may also depend on the particular protection plan selection algorithms, as well as on the signaling and re-routing methods.

Additive Latency

Recovery schemes may introduce additive latency to traffic. For example, a recovery path may take many more hops than the working path. This may depend on the recovery path selection algorithms.

Quality of Protection

Recovery schemes can be considered to encompass a spectrum of "packet survivability", which may range from "relative" to "absolute". Relative survivability may mean that the packet is on an equal footing with other traffic of, for example, the same diff-serv code point (DSCP) in contending for the resources of the portion of the network that survives the failure. Absolute survivability may mean that the survivability of the protected traffic has explicit guarantees.

Re-ordering

Recovery schemes may introduce re-ordering of packets. The action of putting traffic back on preferred paths might also cause packet re-ordering.
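As a toy illustration of how a switchover can reorder packets, consider traffic moved mid-stream from a slower path to a faster one: packets sent later can overtake packets sent earlier. The model and all figures below are hypothetical, not part of this framework:

```python
def arrival_order(send_times, switch_time, old_latency, new_latency):
    """Toy model of re-ordering at switchover: packets sent before
    switch_time traverse the old path, later packets the new path.
    Returns packet sequence numbers in order of arrival."""
    arrivals = []
    for seq, t in enumerate(send_times):
        latency = old_latency if t < switch_time else new_latency
        arrivals.append((t + latency, seq))
    # Sort by arrival time to obtain the order seen at the receiver.
    return [seq for _, seq in sorted(arrivals)]

# Four packets sent 1 ms apart; at t=2 traffic switches from a 10 ms
# path to a 4 ms path, so packets 2 and 3 arrive before 0 and 1.
order = arrival_order([0, 1, 2, 3], 2, 10, 4)
```

The same effect occurs in the opposite direction when traffic is reverted from a fast recovery path back onto a slower preferred path.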
State Overhead

As the number of recovery paths in a protection plan grows, the state required to maintain them also grows. Schemes may require differing numbers of paths to maintain certain levels of coverage. The state required may also depend on the particular scheme used to recover. In many cases, the state overhead will be proportional to the number of recovery paths.

Loss

Recovery schemes may introduce a certain amount of packet loss during switchover to a recovery path. For schemes that introduce loss during recovery, this loss can be estimated by evaluating recovery times in proportion to the link speed.

In the case of link or node failure, a certain amount of packet loss is inevitable.

Coverage

Recovery schemes may offer various types of failover coverage. The total coverage may be defined in terms of several metrics:

I. Fault types: Recovery schemes may account for only link faults, for both node and link faults, or also for degraded service. For example, a scheme may require more recovery paths to take node faults into account.

II. Number of concurrent faults: Depending on the layout of recovery paths in the protection plan, it may be possible to recover from multiple fault scenarios.

III. Number of recovery paths: For a given fault, there may be one or more recovery paths.

IV. Percentage of coverage: Depending on a scheme and its implementation, a certain percentage of faults may be covered. This may be subdivided into the percentage of link faults and the percentage of node faults.

V. The number of protected paths may affect how fast the total set of paths affected by a fault can be recovered. The ratio of protected paths is n/N, where n is the number of protected paths and N is the total number of paths.

7. Security Considerations

The MPLS recovery that is specified herein does not raise any security issues that are not already present in the MPLS architecture.

8. Intellectual Property Considerations

The IETF has been notified of intellectual property rights claimed in regard to some or all of the specification contained in this document. For more information, consult the online list of claimed rights.

9. Acknowledgements

We would like to thank the members of the MPLS WG mailing list for their suggestions on earlier versions of this draft, in particular Bora Akyol, Dave Allan, Dave Danenberg, Sharam Davari, and Neil Harrison, whose suggestions and comments were very helpful in revising the document.

The editors would like to give very special thanks to Curtis Villamizar for his careful and extremely thorough reading of the document, and for taking the time to provide numerous suggestions, which were very helpful in the last couple of revisions of the document.

10. Editors' Addresses

Vishal Sharma                        Fiffi Hellstrand
Metanoia, Inc.                       Nortel Networks
1600 Villa Street, Unit 352          St Eriksgatan 115
Mountain View, CA 94041-1174         PO Box 6701
Phone: (650) 386-6723                113 85 Stockholm, Sweden
v.sharma@ieee.org                    Phone: +46 8 5088 3687
                                     Fiffi@nortelnetworks.com

11. References

[1] Rosen, E., Viswanathan, A., and Callon, R., "Multiprotocol Label
    Switching Architecture", RFC 3031, January 2001.

[2] Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and McManus,
    J., "Requirements for Traffic Engineering Over MPLS", RFC 2702,
    September 1999.

[3] Huang, C., Sharma, V., Owens, K., and Makam, V., "Building
    Reliable MPLS Networks Using a Path Protection Mechanism", IEEE
    Commun. Mag., Vol. 40, Issue 3, March 2002, pp. 156-162.
[4] Braden, R., Zhang, L., Berson, S., and Herzog, S., "Resource
    ReSerVation Protocol (RSVP) -- Version 1 Functional
    Specification", RFC 2205, September 1997.

[5] Awduche, D., et al., "RSVP-TE: Extensions to RSVP for LSP
    Tunnels", RFC 3209, December 2001.

[6] Jamoussi, B., et al., "Constraint-Based LSP Setup using LDP",
    RFC 3212, January 2002.