MPLS Working Group                       Vishal Sharma (Metanoia, Inc.)
Informational Track                  Fiffi Hellstrand (Nortel Networks)
Expires: November 2002                                        (Editors)

                                                               May 2002

                   Framework for MPLS-based Recovery

Status of this memo

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC2026.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups. Note that other groups
may also distribute working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Abstract

Multi-protocol label switching (MPLS) integrates the label swapping
forwarding paradigm with network layer routing. To deliver reliable
service, MPLS requires a set of procedures to provide protection of
the traffic carried on different paths. This requires that the label
switched routers (LSRs) support fault detection, fault notification,
and fault recovery mechanisms, and that MPLS signaling support the
configuration of recovery. With these objectives in mind, this
document specifies a framework for MPLS-based recovery.

Table of Contents

1.     Introduction...................................................2
1.1.   Background.....................................................3
1.2.   Motivation for MPLS-Based Recovery.............................3
1.3.   Objectives/Goals...............................................4
2.     Contributing Authors...........................................6
3.     Overview.......................................................6
3.1.   Recovery Models................................................7
3.1.1  Rerouting......................................................7
3.1.2  Protection Switching...........................................7
3.2.   The Recovery Cycles............................................8
3.2.1  MPLS Recovery Cycle Model......................................8
3.2.2  MPLS Reversion Cycle Model.....................................9
3.2.3  Dynamic Re-routing Cycle Model................................11
3.3.   Definitions and Terminology...................................12
3.3.1  General Recovery Terminology..................................13
3.3.2  Failure Terminology...........................................15
3.4.   Abbreviations.................................................16
4.     MPLS-based Recovery Principles................................16
4.1.   Configuration of Recovery.....................................17
4.2.   Initiation of Path Setup......................................17
4.3.   Initiation of Resource Allocation.............................18
4.4.   Scope of Recovery.............................................18
4.4.1  Topology......................................................18
4.4.1.1  Local Repair................................................18
4.4.1.2  Global Repair...............................................19
4.4.1.3  Alternate Egress Repair.....................................19
4.4.1.4  Multi-Layer Repair..........................................20
4.4.1.5  Concatenated Protection Domains.............................20
4.4.2  Path Mapping..................................................20
4.4.3  Bypass Tunnels................................................21
4.4.4  Recovery Granularity..........................................21
4.4.4.1  Selective Traffic Recovery..................................21
4.4.4.2  Bundling....................................................22
4.4.5  Recovery Path Resource Use....................................22
4.5.   Fault Detection...............................................22
4.6.   Fault Notification............................................23
4.7.   Switch-Over Operation.........................................24
4.7.1  Recovery Trigger..............................................24
4.7.2  Recovery Action...............................................24
4.8.   Post Recovery Operation.......................................24
4.8.1  Fixed Protection Counterparts.................................25
4.8.1.1  Revertive Mode..............................................25
4.8.1.2  Non-revertive Mode..........................................25
4.8.2  Dynamic Protection Counterparts...............................26
4.8.3  Restoration and Notification..................................26
4.8.4  Reverting to Preferred Path (or Controlled Rearrangement).....27
4.9.   Performance...................................................27
5.     MPLS Recovery Features........................................27
6.     Comparison Criteria...........................................28
7.     Security Considerations.......................................30
8.     Intellectual Property Considerations..........................30
9.     Acknowledgements..............................................30
10.    Editors' Addresses............................................31
11.    References....................................................31

1. Introduction

This memo describes a framework for MPLS-based recovery. We provide a
detailed taxonomy of recovery terminology, and discuss the motivation
for, the objectives of, and the requirements for MPLS-based recovery.
We outline principles for MPLS-based recovery, and also provide
comparison criteria that may serve as a basis for comparing and
evaluating different recovery schemes.

At points in the document, we provide some thoughts about the
operation or viability of certain recovery objectives. These should be
viewed as the opinions of the authors, and not the consolidated views
of the IETF.

1.1. Background

Network routing deployed today is focused primarily on connectivity,
and typically supports only one class of service, the best effort
class. Multi-protocol label switching [1], on the other hand, by
integrating forwarding based on label-swapping of a link-local label
with network layer routing, allows flexibility in the delivery of new
routing services. MPLS allows the use of media-specific forwarding
mechanisms such as label swapping. This enables sophisticated features
such as quality-of-service (QoS) and traffic engineering [2] to be
implemented more effectively. An important component of providing QoS,
however, is the ability to transport data reliably and efficiently.
Although the current routing algorithms are robust and survivable, the
amount of time they take to recover from a fault can be significant,
on the order of several seconds or minutes, causing disruption of
service for some applications in the interim. This is unacceptable in
situations where the aim is to provide a highly reliable service, with
recovery times on the order of seconds down to 10's of milliseconds.

MPLS recovery may be motivated by the notion that there are
limitations to improving the recovery times of current routing
algorithms. Additional improvement can be obtained by augmenting these
algorithms with MPLS recovery mechanisms [3]. Since MPLS is a possible
technology of choice in future IP-based transport networks, it is
useful that MPLS be able to provide protection and restoration of
traffic. MPLS may facilitate the convergence of network functionality
on a common control and management plane. Further, a protection
priority could be used as a differentiating mechanism for premium
services that require high reliability. The remainder of this document
provides a framework for MPLS-based recovery.
It is focused at a conceptual level and is meant to address
motivation, objectives, and requirements. Issues of mechanism, policy,
routing plans, and characteristics of traffic carried by recovery
paths are beyond the scope of this document.

1.2. Motivation for MPLS-Based Recovery

MPLS-based protection of traffic (called MPLS-based recovery) is
useful for a number of reasons. The most important is its ability to
increase network reliability by enabling a faster response to faults
than is possible with traditional Layer 3 (or IP layer) approaches
alone, while still providing the visibility of the network afforded by
Layer 3. Furthermore, a protection mechanism using MPLS could enable
IP traffic to be put directly over WDM optical channels and provide a
recovery option without an intervening SONET layer. This would
facilitate the construction of IP-over-WDM networks that require fast
recovery capability.

The need for MPLS-based recovery arises because of the following:

I. Layer 3 or IP rerouting may be too slow for a core MPLS network
that needs to support recovery times that are smaller than the
convergence times of IP routing protocols.

II. Layer 0 (for example, optical layer) or Layer 1 (for example,
SONET) mechanisms may make wasteful use of resources.

III. The granularity at which the lower layers may be able to protect
traffic may be too coarse for traffic that is switched using
MPLS-based mechanisms.

IV. Layer 0 or Layer 1 mechanisms may have no visibility into higher
layer operations. Thus, while they may provide, for example, link
protection, they cannot easily provide node protection or protection
of traffic transported at Layer 3. Further, this may prevent the lower
layers from providing restoration based on the traffic's needs.
For example, fast restoration for traffic that needs it, and slower
restoration (with possibly more optimal use of resources) for traffic
that does not require fast restoration. In networks where the latter
class of traffic is dominant, providing fast restoration to all
classes of traffic may not be cost effective from a service provider's
perspective.

V. MPLS has desirable attributes when applied to recovery in
connectionless networks. Specifically, an LSP is source routed, so a
forwarding path for recovery can be "pinned" and is not affected by
transient instability in SPF routing brought on by failure scenarios.

VI. Interoperability of protection mechanisms between routers/LSRs
from different vendors in IP or MPLS networks is desired, both to
enable recovery mechanisms to work in a multivendor environment and to
enable the transition of certain protected services to an MPLS core.

1.3. Objectives/Goals

The following are some important goals for MPLS-based recovery.

Ia. MPLS-based recovery mechanisms may be subject to the traffic
engineering goal of optimal use of resources.

Ib. MPLS-based recovery mechanisms should aim to facilitate
restoration times that are sufficiently fast for the end-user
application; that is, restoration times that better match the
requirements of the end-user's application. In some cases, this may be
as short as 10s of milliseconds.

We observe that Ia and Ib are conflicting objectives, and a trade-off
exists between them. The optimal choice depends on the end-user
application's sensitivity to restoration time and the cost impact of
introducing restoration in the network, as well as the end-user
application's sensitivity to cost.

II. MPLS-based recovery should aim to maximize network reliability and
availability.
MPLS-based recovery of traffic should aim to minimize the number of
single points of failure in the MPLS protected domain.

III. MPLS-based recovery should aim to enhance the reliability of the
protected traffic while minimally or predictably degrading the traffic
carried on the diverted resources.

IV. MPLS-based recovery techniques should aim to be applicable for
protection of traffic at various granularities. For example, it should
be possible to specify MPLS-based recovery for a portion of the
traffic on an individual path, for all traffic on an individual path,
or for all traffic on a group of paths. Note that "path" is used as a
general term and includes the notion of a link, IP route, or LSP.

V. MPLS-based recovery techniques may be applicable for an entire
end-to-end path or for segments of an end-to-end path.

VI. MPLS-based recovery mechanisms should aim to take into
consideration the recovery actions of lower layers. MPLS-based
mechanisms should not trigger lower layer protection switching.

VII. MPLS-based recovery mechanisms should aim to minimize the loss of
data and packet reordering during recovery operations. (The current
MPLS specification itself has no explicit requirement on reordering.)

VIII. MPLS-based recovery mechanisms should aim to minimize the state
overhead incurred for each recovery path maintained.

IX. MPLS-based recovery mechanisms should aim to preserve the
constraints on traffic after switchover, if desired. That is, if
desired, the recovery path should meet the resource requirements of,
and achieve the same performance characteristics as, the working path.

We observe that some of the above are conflicting goals, and real
deployment will often involve engineering compromises based on a
variety of factors such as cost, end-user application requirements,
network efficiency, and revenue considerations.
Thus, these goals are subject to tradeoffs based on the above
considerations.

2. Contributing Authors

This document was the collective work of several individuals over a
period of two and a half years. The text and content of this document
was contributed by the editors and the co-authors listed below. (The
contact information for the editors appears in Section 10, and is not
repeated here.)

   Ben Mack-Crane
   Tellabs Operations, Inc.
   4951 Indiana Avenue
   Lisle, IL 60532
   Phone: (630) 512-7255
   Ben.Mack-Crane@tellabs.com

   Srinivas Makam
   Eshernet, Inc.
   1712 Ada Ct.
   Naperville, IL 60540
   Phone: (630) 308-3213
   Smakam60540@yahoo.com

   Ken Owens
   Erlang Technology, Inc.
   345 Marshall Ave., Suite 300
   St. Louis, MO 63119
   Phone: (314) 918-1579
   keno@erlangtech.com

   Changcheng Huang
   Carleton University
   Minto Center, Rm. 3082
   1125 Colonel By Drive
   Ottawa, Ont. K1S 5B6 Canada
   Phone: (613) 520-2600 x2477
   Changcheng.Huang@sce.carleton.ca

   Jon Weil
   Nortel Networks
   Harlow Laboratories, London Road
   Harlow, Essex CM17 9NA, UK
   Phone: +44 (0)1279 403935
   jonweil@nortelnetworks.com

   Brad Cain
   Storigen Systems
   650 Suffolk Street
   Lowell, MA 01854
   Phone: (978) 323-4454
   bcain@storigen.com

   Loa Andersson
   Utfors AB
   Råsundavägen 12, Box 525
   169 29 Solna, Sweden
   Phone: +46 8 5270 5038
   loa.andersson@utfors.se

   Bilel Jamoussi
   Nortel Networks
   3 Federal Street, BL3-03
   Billerica, MA 01821, USA
   Phone: (978) 288-4506
   jamoussi@nortelnetworks.com

   Angela Chiu
   Celion Networks, Inc.
   One Shiela Drive, Suite 2
   Tinton Falls, NJ 07724
   Phone: (732) 345-3441
   angela.chiu@celion.com

   Seyhan Civanlar
   Lemur Networks, Inc.
   135 West 20th Street, 5th Floor
   New York, NY 10011
   Phone: (212) 367-7676
   scivanlar@lemurnetworks.com

3. Overview

There are several options for providing protection of traffic.
The most generic requirement is the specification of whether recovery
should be via Layer 3 (or IP) rerouting or via MPLS protection
switching or rerouting actions.

Generally, network operators aim to provide the fastest and best
protection mechanism that can be provided at a reasonable cost. The
higher the level of protection, the more resources are consumed.
Therefore, it is expected that network operators will offer a spectrum
of service levels. MPLS-based recovery should give operators the
flexibility to select the recovery mechanism, choose the granularity
at which traffic is protected, and choose the specific types of
traffic that are protected, in order to give them more control over
that tradeoff. With MPLS-based recovery, it may be possible to provide
different levels of protection for different classes of service, based
on their service requirements. For example, using approaches outlined
below, a Virtual Leased Line (VLL) service or real-time applications
like Voice over IP (VoIP) may be supported using link/node protection
together with pre-established, pre-reserved path protection.
Best-effort traffic, on the other hand, may use path protection that
is established on demand, or may simply rely on IP re-route or higher
layer recovery mechanisms. As another example of their range of
application, MPLS-based recovery strategies may be used to protect
traffic not originally flowing on label switched paths, such as IP
traffic that is normally routed hop-by-hop, as well as traffic
forwarded on label switched paths.

3.1. Recovery Models

There are two basic models for path recovery: rerouting and protection
switching.

Protection switching and rerouting, as defined below, may be used
together.
For example, protection switching to a recovery path may be used for
rapid restoration of connectivity, while rerouting determines a new
optimal network configuration, rearranging paths as needed at a later
time.

3.1.1 Rerouting

Recovery by rerouting is defined as establishing new paths or path
segments on demand for restoring traffic after the occurrence of a
fault. The new paths may be based upon fault information, network
routing policies, pre-defined configurations, and network topology
information. Thus, upon detecting a fault, paths or path segments to
bypass the fault are established using signaling.

Once the network routing algorithms have converged after a fault, it
may be preferable, in some cases, to reoptimize the network by
performing a reroute based on the current state of the network and
network policies. This is discussed further in Section 3.8.

In terms of the principles defined in Section 3, reroute recovery
employs paths established on demand with resources reserved on demand.

3.1.2 Protection Switching

Protection switching recovery mechanisms pre-establish a recovery path
or path segment, based upon network routing policies, the restoration
requirements of the traffic on the working path, and administrative
considerations. The recovery path may or may not be link and node
disjoint with the working path. However, if the recovery path shares
sources of failure with the working path, the overall reliability of
the construct is degraded. When a fault is detected, the protected
traffic is switched over to the recovery path(s) and restored.

In terms of the principles in Section 3, protection switching employs
pre-established recovery paths and, if resource reservation is
required on the recovery path, pre-reserved resources. The various
sub-types of protection switching are detailed in Section 4.4 of this
document.

3.2. The Recovery Cycles

There are three defined recovery cycles: the MPLS Recovery Cycle, the
MPLS Reversion Cycle, and the Dynamic Re-routing Cycle. The first
cycle detects a fault and restores traffic onto MPLS-based recovery
paths. If the recovery path is non-optimal, the cycle may be followed
by either of the two latter cycles to achieve an optimized network
again. The reversion cycle applies to explicitly routed traffic that
does not rely on any dynamic routing protocols to converge. The
dynamic re-routing cycle applies to traffic that is forwarded based on
hop-by-hop routing.

3.2.1 MPLS Recovery Cycle Model

The MPLS recovery cycle model is illustrated in Figure 1. Definitions
and a key to abbreviations follow.

   --Network Impairment
   |    --Fault Detected
   |    |    --Start of Notification
   |    |    |    --Start of Recovery Operation
   |    |    |    |    --Recovery Operation Complete
   |    |    |    |    |    --Path Traffic Restored
   |    |    |    |    |    |
   v    v    v    v    v    v
   ----------------------------------------------------------------
   |   T1   |   T2   |   T3   |   T4   |   T5   |

             Figure 1. MPLS Recovery Cycle Model

The various timing measures used in the model are described below.

   T1  Fault Detection Time
   T2  Hold-off Time
   T3  Notification Time
   T4  Recovery Operation Time
   T5  Traffic Restoration Time

Definitions of the recovery cycle times are as follows:

Fault Detection Time

The time between the occurrence of a network impairment and the moment
the fault is detected by MPLS-based recovery mechanisms. This time may
be highly dependent on lower layer protocols.

Hold-Off Time

The configured waiting time between the detection of a fault and
taking MPLS-based recovery action, to allow time for lower layer
protection to take effect. The Hold-off Time may be zero.
Note: The Hold-Off Time may occur after the Notification Time interval
if the node responsible for the switchover, the Path Switch LSR (PSL),
rather than the detecting LSR, is configured to wait.

Notification Time

The time between initiation of a fault indication signal (FIS) by the
LSR detecting the fault and the time at which the Path Switch LSR
(PSL) begins the recovery operation. This is zero if the PSL detects
the fault itself or infers a fault from an event such as an adjacency
failure.

Note: If the PSL detects the fault itself, there may still be a
Hold-Off Time period between detection and the start of the recovery
operation.

Recovery Operation Time

The time between the first and last recovery actions. This may include
message exchanges between the PSL and PML to coordinate recovery
actions.

Traffic Restoration Time

The time between the last recovery action and the time that the
traffic (if present) is completely recovered. This interval is
intended to account for the time required for traffic to once again
arrive at the point in the network that experienced disrupted or
degraded service due to the occurrence of the fault (e.g., the PML).
This time may depend on the location of the fault, the recovery
mechanism, and the propagation delay along the recovery path.

3.2.2 MPLS Reversion Cycle Model

Protection switching in revertive mode requires the traffic to be
switched back to a preferred path when the fault on that path is
cleared. The MPLS reversion cycle model is illustrated in Figure 2.
Note that the cycle shown below comes after the recovery cycle shown
in Fig. 1.
   --Network Impairment Repaired
   |    --Fault Cleared
   |    |    --Path Available
   |    |    |    --Start of Reversion Operation
   |    |    |    |    --Reversion Operation Complete
   |    |    |    |    |    --Traffic Restored on Preferred Path
   |    |    |    |    |    |
   v    v    v    v    v    v
   -----------------------------------------------------------------
   |   T7   |   T8   |   T9   |  T10   |  T11   |

             Figure 2. MPLS Reversion Cycle Model

The various timing measures used in the model are described below.

   T7   Fault Clearing Time
   T8   Wait-to-Restore Time
   T9   Notification Time
   T10  Reversion Operation Time
   T11  Traffic Restoration Time

Note that time T6 (not shown above) is the time during which the
network impairment is not repaired and traffic is flowing on the
recovery path.

Definitions of the reversion cycle times are as follows:

Fault Clearing Time

The time between the repair of a network impairment and the time that
MPLS-based mechanisms learn that the fault has been cleared. This time
may be highly dependent on lower layer protocols.

Wait-to-Restore Time

The configured waiting time between the clearing of a fault and
MPLS-based recovery action(s). Waiting time may be needed to ensure
that the path is stable and to avoid flapping in cases where a fault
is intermittent. The Wait-to-Restore Time may be zero.

Note: The Wait-to-Restore Time may occur after the Notification Time
interval if the PSL is configured to wait.

Notification Time

The time between initiation of a fault recovery signal (FRS) by the
LSR clearing the fault and the time at which the path switch LSR
begins the reversion operation. This is zero if the PSL clears the
fault itself.

Note: If the PSL clears the fault itself, there may still be a
Wait-to-Restore Time period between fault clearing and the start of
the reversion operation.

Reversion Operation Time

The time between the first and last reversion actions.
This may include message exchanges between the PSL and PML to
coordinate reversion actions.

Traffic Restoration Time

The time between the last reversion action and the time that traffic
(if present) is completely restored on the preferred path. This
interval is expected to be quite small, since both paths are working
and care may be taken to limit the traffic disruption (e.g., using
"make before break" techniques and synchronous switch-over).

In practice, the only interesting times in the reversion cycle are the
Wait-to-Restore Time and the Traffic Restoration Time (or some other
measure of traffic disruption). Given that both paths are available,
there is no need for rapid operation, and a well-controlled
switch-back with minimal disruption is desirable.

3.2.3 Dynamic Re-routing Cycle Model

Dynamic rerouting aims to bring the IP network to a stable state after
a network impairment has occurred. A re-optimized network is achieved
after the routing protocols have converged and the traffic is moved
from a recovery path to a (possibly) new working path. The steps
involved in this mode are illustrated in Figure 3.

Note that the cycle shown below may be overlaid on the recovery cycle
shown in Fig. 1, on the reversion cycle shown in Fig. 2, or on both
(in the event that both the recovery cycle and the reversion cycle
take place before the routing protocols converge). It applies when,
after the convergence of the routing protocols, it is determined
(based on on-line algorithms or off-line traffic engineering tools,
network configuration, or a variety of other possible criteria) that
there is a better route for the working path.
556 --Network Enters a Semi-stable State after an Impairment 557 | --Dynamic Routing Protocols Converge 558 | | --Initiate Setup of New Working Path between PSL 559 | | | and PML 560 | | | --Switchover Operation Complete 561 | | | | --Traffic Moved to New Working Path 562 | | | | | 563 | | | | | 564 v v v v v 565 ----------------------------------------------------------------- 566 | T12 | T13 | T14 | T15 | 568 Figure 3. Dynamic Rerouting Cycle Model 569 The various timing measures used in the model are described below. 570 T12 Network Route Convergence Time 571 T13 Hold-down Time (optional) 572 T14 Switchover Operation Time 573 T15 Traffic Restoration Time 575 Network Route Convergence Time 577 We define the network route convergence time as the time taken for 578 the network routing protocols to converge and for the network to 579 reach a stable state. 581 Holddown Time 583 We define the holddown period as a bounded time for which a recovery 584 path must be used. In some scenarios it may be difficult to determine 585 if the working path is stable. In these cases a holddown time may be 586 used to prevent excess flapping of traffic between a working and a 587 recovery path. 589 Switchover Operation Time 591 The time between the first and last switchover actions. This may 592 include message exchanges between the PSL and PML to coordinate the 593 switchover actions. 595 As an example of the recovery cycle, we present a sequence of events 596 that occur after a network impairment occurs and when a protection 597 switch is followed by dynamic rerouting. 599 I. Link or path fault occurs 600 II. Signaling initiated (FIS) for the detected fault 601 III. FIS arrives at the PSL 602 IV. The PSL initiates a protection switch to a pre-configured 603 recovery path 604 V. The PSL switches over the traffic from the working path to the 605 recovery path 606 VI. The network enters a semi-stable state 607 VII. 
Dynamic routing protocols converge after the fault, and a new 608 working path is calculated (based, for example, on some of the 609 criteria mentioned in Section 2.1.1). 610 VIII. A new working path is established between the PSL and the PML 611 (the assumption is that the PSL and PML have not changed) 612 IX. Traffic is switched over to the new working path. 614 3.3. Definitions and Terminology 615 This document assumes the terminology given in [1], and, in addition, 616 introduces the following new terms. 618 3.3.1 General Recovery Terminology 620 Rerouting 622 A recovery mechanism in which the recovery path or path segments are 623 created dynamically after the detection of a fault on the working 624 path. In other words, a recovery mechanism in which the recovery path 625 is not pre-established. 627 Protection Switching 629 A recovery mechanism in which the recovery path or path segments are 630 created prior to the detection of a fault on the working path. In 631 other words, a recovery mechanism in which the recovery path is pre- 632 established. 634 Working Path 636 The protected path that carries traffic before the occurrence of a 637 fault. The working path exists between a PSL and PML. The working 638 path can be of different kinds: a hop-by-hop routed path, a trunk, a 639 link, an LSP, or part of a multipoint-to-point LSP. 641 Synonyms for a working path are primary path and active path. 643 Recovery Path 645 The path by which traffic is restored after the occurrence of a 646 fault. In other words, the path on which the traffic is directed by 647 the recovery mechanism. The recovery path is established by MPLS 648 means. The recovery path can either be an equivalent recovery path 649 and ensure no reduction in quality of service, or be a limited 650 recovery path and thereby not guarantee the same quality of service 651 (or some other criteria of performance) as the working path.
A 652 limited recovery path is not expected to be used for an extended 653 period of time. 655 Synonyms for a recovery path are: back-up path, alternative path, and 656 protection path. 658 Protection Counterpart 660 The "other" path when discussing pre-planned protection switching 661 schemes. The protection counterpart for the working path is the 662 recovery path and vice-versa. 664 Path Group (PG) 666 A logical bundling of multiple working paths, each of which is routed 667 identically between a Path Switch LSR and a Path Merge LSR. 669 Protected Path Group (PPG) 671 A path group that requires protection. 673 Protected Traffic Portion (PTP) 675 The portion of the traffic on an individual path that requires 676 protection. For example, code points in the EXP bits of the shim 677 header may identify a protected portion. 679 Path Switch LSR (PSL) 681 The PSL is responsible for switching or replicating the traffic 682 between the working path and the recovery path. 684 Path Merge LSR (PML) 686 An LSR that is responsible for receiving the recovery path traffic, 687 and either merges the traffic back onto the working path, or, if it 688 is itself the destination, passes the traffic on to the higher layer 689 protocols. 691 Intermediate LSR 693 An LSR on a working or recovery path that is neither a PSL nor a PML 694 for that path. 696 Bypass Tunnel 698 A path that serves to back up a set of working paths using the label 699 stacking approach [1]. The working paths and the bypass tunnel must 700 all share the same path switch LSR (PSL) and the path merge LSR 701 (PML). 703 Switch-Over 705 The process of switching the traffic from the path that the traffic 706 is flowing on onto one or more alternate path(s). This may involve 707 moving traffic from a working path onto one or more recovery paths, 708 or may involve moving traffic from a recovery path(s) on to a more 709 optimal working path(s). 
711 Switch-Back 713 The process of returning the traffic from one or more recovery paths 714 back to the working path(s). 716 Revertive Mode 718 A recovery mode in which traffic is automatically switched back from 719 the recovery path to the original working path upon the restoration 720 of the working path to a fault-free condition. This assumes a failed 721 working path does not automatically surrender resources to the 722 network. 724 Non-revertive Mode 726 A recovery mode in which traffic is not automatically switched back 727 to the original working path after this path is restored to a fault- 728 free condition. (Depending on the configuration, the original working 729 path may, upon moving to a fault-free condition, become the recovery 730 path, or it may be used for new working traffic, and be no longer 731 associated with its original recovery path). 733 MPLS Protection Domain 735 The set of LSRs over which a working path and its corresponding 736 recovery path are routed. 738 MPLS Protection Plan 740 The set of all LSP protection paths and the mapping from working to 741 protection paths deployed in an MPLS protection domain at a given 742 time. 744 Liveness Message 746 A message exchanged periodically between two adjacent LSRs that 747 serves as a link probing mechanism. It provides an integrity check of 748 the forward and the backward directions of the link between the two 749 LSRs as well as a check of neighbor aliveness. 751 Path Continuity Test 753 A test that verifies the integrity and continuity of a path or path 754 segment. The details of such a test are beyond the scope of this 755 draft. (This could be accomplished, for example, by transmitting a 756 control message along the same links and nodes as the data traffic or 757 similarly could be measured by the absence of traffic and by 758 providing feedback.) 
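As an illustration of the liveness message mechanism defined above, the following sketch (in Python; the message format, timer values, and all names here are hypothetical assumptions, since this framework deliberately leaves them unspecified) models a neighbor-aliveness check driven by periodic liveness messages and a dead interval:

```python
import time

# Hypothetical timer values; the framework does not specify the
# liveness message format or its transmission/expiry intervals.
HELLO_INTERVAL = 0.1   # seconds between liveness messages
DEAD_INTERVAL = 0.35   # silence after which the neighbor is declared down

class LivenessMonitor:
    """Tracks neighbor aliveness from periodic liveness messages
    exchanged between two adjacent LSRs."""

    def __init__(self, now=time.monotonic):
        self.now = now
        self.last_heard = now()

    def on_liveness_message(self):
        # Receiving a message checks both directions of the link:
        # the peer is up, and the reverse direction delivered its message.
        self.last_heard = self.now()

    def neighbor_alive(self):
        # Aliveness is inferred from recent reception, not asserted state.
        return (self.now() - self.last_heard) < DEAD_INTERVAL
```

A sustained gap longer than the dead interval would be reported to the MPLS-based recovery mechanisms as a Path Failure (PF) on the segment being probed; the choice of intervals trades detection speed against false alarms on lossy links.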
760 3.3.2 Failure Terminology 762 Path Failure (PF) 763 Path failure is a fault detected by MPLS-based recovery mechanisms, 764 defined as the failure of the liveness message test or a path 765 continuity test, indicating that path connectivity is lost. 767 Path Degraded (PD) 768 Path degraded is a fault detected by MPLS-based recovery mechanisms 769 that indicates that the quality of the path is unacceptable. 771 Link Failure (LF) 772 A lower layer fault indicating that link continuity is lost. This may 773 be communicated to the MPLS-based recovery mechanisms by the lower 774 layer. 776 Link Degraded (LD) 777 A lower layer indication to MPLS-based recovery mechanisms that the 778 link is performing below an acceptable level. 780 Fault Indication Signal (FIS) 781 A signal that indicates that a fault along a path has occurred. It is 782 relayed by each intermediate LSR to its upstream or downstream 783 neighbor, until it reaches an LSR that is set up to perform MPLS 784 recovery. The FIS is transmitted periodically by the node/nodes 785 closest to the point of failure, for some configurable length of 786 time. 788 Fault Recovery Signal (FRS) 789 A signal that indicates a fault along a working path has been 790 repaired. Again, like the FIS, it is relayed by each intermediate LSR 791 to its upstream or downstream neighbor, until it reaches the LSR that 792 performs recovery of the original path. The FRS is transmitted 793 periodically by the node/nodes closest to the point of failure, for 794 some configurable length of time. 796 3.4. Abbreviations 798 FIS: Fault Indication Signal. 799 FRS: Fault Recovery Signal. 800 LD: Link Degraded. 801 LF: Link Failure. 802 PD: Path Degraded. 803 PF: Path Failure. 804 PML: Path Merge LSR. 805 PG: Path Group. 806 PPG: Protected Path Group. 807 PTP: Protected Traffic Portion. 808 PSL: Path Switch LSR. 810 4.
MPLS-based Recovery Principles 812 MPLS-based recovery refers to the ability to effect quick and 813 complete restoration of traffic affected by a fault in an MPLS- 814 enabled network. The fault may be detected on the IP layer or in 815 lower layers over which IP traffic is transported. The fastest MPLS 816 recovery is assumed to be achieved with protection switching, with an 817 MPLS LSR switch-over completion time that is comparable 818 to, or equivalent to, the 50 ms switch-over completion time of the 819 SONET layer. This section provides a discussion of the concepts and 820 principles of MPLS-based recovery. The concepts are presented in 821 terms of atomic or primitive terms that may be combined to specify 822 recovery approaches. We do not make any assumptions about the 823 underlying layer 1 or layer 2 transport mechanisms or their recovery 824 mechanisms. 826 4.1. Configuration of Recovery 828 An LSR may support any or all of the following recovery options: 830 Default-recovery (No MPLS-based recovery enabled): 831 Traffic on the working path is recovered only via Layer 3 or IP 832 rerouting or by some lower layer mechanism such as SONET APS. This 833 is equivalent to having no MPLS-based recovery. This option may be 834 used for low priority traffic or for traffic that is recovered in 835 another way (for example, load shared traffic on parallel working 836 paths may be automatically recovered upon a fault along one of the 837 working paths by distributing it among the remaining working paths). 839 Recoverable (MPLS-based recovery enabled): 840 The working path is recovered using one or more recovery paths, 841 either via rerouting or via protection switching. 843 4.2. Initiation of Path Setup 845 There are three options for the initiation of the recovery path 846 setup. The active and recovery paths may be established by using 847 either RSVP-TE [4][5] or CR-LDP [6]. 849 Pre-established: 851 This is the same as the protection switching option.
Here a recovery 852 path(s) is established prior to any failure on the working path. The 853 path selection can either be determined by an administrative 854 centralized tool, or chosen based on some algorithm implemented at 855 the PSL and possibly at intermediate nodes. To guard against the 856 situation in which the pre-established recovery path fails before or at 857 the same time as the working path, the recovery path should have 858 secondary configuration options, as explained in Section 3.3 below. 860 Pre-Qualified: 862 A pre-established path need not be created; it may be pre-qualified 863 instead. A pre-qualified recovery path is not created expressly for 864 protecting the working path, but instead is a path created for other 865 purposes that is designated as a recovery path after determining that 866 it is an acceptable alternative for carrying the working path traffic. 867 Variants include the case where an optical path or trail is 868 configured, but no switches are set. 870 Established-on-Demand: 872 This is the same as the rerouting option. Here, a recovery path is 873 established after a failure on its working path has been detected and 874 notified to the PSL.
Here a pre- 891 established recovery path reserves required resources on all hops 892 along its route during its establishment. Although the reserved 893 resources (e.g., bandwidth and/or buffers) at each node cannot be 894 used to admit more working paths, they are available to be used by 895 all traffic that is present at the node before a failure occurs. 897 Reserved-on-Demand: 899 This option may apply either to rerouting or to protection switching. 900 Here a recovery path reserves the required resources after a failure 901 on the working path has been detected and notified to the PSL and 902 before the traffic on the working path is switched over to the 903 recovery path. 905 Note that under both the options above, depending on the amount of 906 resources reserved on the recovery path, it could either be an 907 equivalent recovery path or a limited recovery path. 909 4.4. Scope of Recovery 911 4.4.1 Topology 913 4.4.1.1 Local Repair 915 The intent of local repair is to protect against a link or neighbor 916 node fault and to minimize the amount of time required for failure 917 propagation. In local repair (also known as local recovery), the node 918 immediately upstream of the fault is the one to initiate recovery 919 (either rerouting or protection switching). Local repair can be of 920 two types: 922 Link Recovery/Restoration 924 In this case, the recovery path may be configured to route around a 925 certain link deemed to be unreliable. If protection switching is 926 used, several recovery paths may be configured for one working path, 927 depending on the specific faulty link that each protects against. 929 Alternatively, if rerouting is used, upon the occurrence of a fault 930 on the specified link, each path is rebuilt such that it detours 931 around the faulty link. 932 In this case, the recovery path need only be disjoint from its 933 working path at a particular link on the working path, and may have 934 overlapping segments with the working path. 
Traffic on the working 935 path is switched over to an alternate path at the upstream LSR that 936 connects to the failed link. This method is potentially the fastest 937 to perform the switchover, and can be effective in situations where 938 certain path components are much more unreliable than others. 940 Node Recovery/Restoration 942 In this case, the recovery path may be configured to route around a 943 neighbor node deemed to be unreliable. Thus the recovery path is 944 disjoint from the working path only at a particular node and at links 945 associated with the working path at that node. Once again, the 946 traffic on the primary path is switched over to the recovery path at 947 the upstream LSR that directly connects to the failed node, and the 948 recovery path shares overlapping portions with the working path. 950 4.4.1.2 Global Repair 952 The intent of global repair is to protect against any link or node 953 fault on a path or on a segment of a path, with the obvious exception 954 of the faults occurring at the ingress node of the protected path 955 segment. In global repair the PSL is usually distant from the failure 956 and needs to be notified by a FIS. 957 In global repair also, end-to-end path recovery/restoration applies. 958 In many cases, the recovery path can be made completely link and node 959 disjoint with its working path. This has the advantage of protecting 960 against all link and node fault(s) on the working path (end-to-end 961 path or path segment). 962 However, it may, in some cases, be slower than local repair since the 963 fault notification message must now travel to the PSL to trigger the 964 recovery action. 966 4.4.1.3 Alternate Egress Repair 968 It is possible to restore service without specifically recovering the 969 faulted path. 970 For example, for best effort IP service it is possible to select a 971 recovery path that has a different egress point from the working path 972 (i.e., there is no PML). 
The recovery path egress must simply be a 973 router that is acceptable for forwarding the FEC carried by the 974 working path (without creating looping). In an engineering context, 975 specific alternative FEC/LSP mappings with alternate egresses can be 976 formed. 978 This may simplify enhancing the reliability of implicitly constructed 979 MPLS topologies. A PSL may qualify LSP/FEC bindings as candidate 980 recovery paths simply by requiring that they be link and node disjoint 981 from the immediate downstream LSR of the working path. 983 4.4.1.4 Multi-Layer Repair 985 Multi-layer repair broadens the network designer's tool set for those 986 cases where multiple network layers can be managed together to 987 achieve overall network goals. Specific criteria for determining 988 when multi-layer repair is appropriate are beyond the scope of this 989 draft. 991 4.4.1.5 Concatenated Protection Domains 993 A given service may cross multiple networks and these may employ 994 different recovery mechanisms. It is possible to concatenate 995 protection domains so that service recovery can be provided end-to- 996 end. It is considered that the recovery mechanisms in different 997 domains may operate autonomously, and that multiple points of 998 attachment may be used between domains (to ensure there is no single 999 point of failure). Alternate egress repair requires management of 1000 concatenated domains in that an explicit MPLS point of failure (the 1001 PML) is by definition excluded. Details of concatenated protection 1002 domains are beyond the scope of this draft. 1004 4.4.2 Path Mapping 1006 Path mapping refers to the methods of mapping traffic from a faulty 1007 working path on to the recovery path. There are several options for 1008 this, as described below. Note that the options below should be 1009 viewed as atomic terms that only describe how the working and 1010 protection paths are mapped to each other.
The issues of resource 1011 reservation along these paths, and how switchover is actually 1012 performed lead to the more commonly used composite terms, such as 1+1 1013 and 1:1 protection, which were described in Section 2.1. 1015 1-to-1 Protection 1017 In 1-to-1 protection the working path has a designated recovery path 1018 that is only to be used to recover that specific working path. 1020 n-to-1 Protection 1022 In n-to-1 protection, up to n working paths are protected using only 1023 one recovery path. If the intent is to protect against any single 1024 fault on any of the working paths, the n working paths should be 1025 diversely routed between the same PSL and PML. In some cases, 1026 handshaking between PSL and PML may be required to complete the 1027 recovery, the details of which are beyond the scope of this draft. 1029 n-to-m Protection 1031 In n-to-m protection, up to n working paths are protected using m 1032 recovery paths. Once again, if the intent is to protect against any 1033 single fault on any of the n working paths, the n working paths and 1034 the m recovery paths should be diversely routed between the same PSL 1035 and PML. In some cases, handshaking between PSL and PML may be 1036 required to complete the recovery, the details of which are beyond 1037 the scope of this draft. n-to-m protection is for further study. 1039 Split Path Protection 1041 In split path protection, multiple recovery paths are allowed to 1042 carry the traffic of a working path based on a certain configurable 1043 load splitting ratio. This is especially useful when no single 1044 recovery path can be found that can carry the entire traffic of the 1045 working path in case of a fault. Split path protection may require 1046 handshaking between the PSL and the PML(s), and may require the 1047 PML(s) to correlate the traffic arriving on multiple recovery paths 1048 with the working path. 
Although this is an attractive option, the 1049 details of split path protection are beyond the scope of this draft, 1050 and are for further study. 1052 4.4.3 Bypass Tunnels 1054 It may be convenient, in some cases, to create a "bypass tunnel" for 1055 a PPG between a PSL and PML, thereby allowing multiple recovery paths 1056 to be transparent to intervening LSRs [2]. In this case, one LSP 1057 (the tunnel) is established between the PSL and PML following an 1058 acceptable route and a number of recovery paths are supported through 1059 the tunnel via label stacking. A bypass tunnel can be used with any 1060 of the path mapping options discussed in the previous section. 1062 As with recovery paths, the bypass tunnel may or may not have 1063 resource reservations sufficient to provide recovery without service 1064 degradation. It is possible that the bypass tunnel may have 1065 sufficient resources to recover some number of working paths, but not 1066 all at the same time. If the number of recovery paths carrying 1067 traffic in the tunnel at any given time is restricted, this is 1068 similar to the n-to-1 or n-to-m protection cases mentioned in Section 1069 4.4.2. 1071 4.4.4 Recovery Granularity 1073 Another dimension of recovery considers the amount of traffic 1074 requiring protection. This may range from a fraction of a path to a 1075 bundle of paths. 1077 4.4.4.1 Selective Traffic Recovery 1079 This option allows for the protection of a fraction of traffic within 1080 the same path. The portion of the traffic on an individual path that 1081 requires protection is called a protected traffic portion (PTP). A 1082 single path may carry different classes of traffic, with different 1083 protection requirements. The protected portion of this traffic may be 1084 identified by its class, as for example, via the EXP bits in the MPLS 1085 shim header or via the priority bit in the ATM header.
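To make the protected traffic portion (PTP) concrete, the sketch below classifies a packet as protected based on the EXP code point of its top label stack entry. The field layout follows the 32-bit MPLS shim header of RFC 3032 (label 20 bits, EXP 3 bits, bottom-of-stack 1 bit, TTL 8 bits); the particular set of protected code points is a hypothetical configuration, not something this framework defines:

```python
# MPLS shim header entry layout (RFC 3032):
#   label (20 bits) | EXP (3 bits) | S, bottom-of-stack (1 bit) | TTL (8 bits)
PROTECTED_EXP = {5, 6, 7}   # hypothetical code points marked for protection

def parse_shim_entry(entry: int):
    """Split a 32-bit label stack entry into its four fields."""
    label = (entry >> 12) & 0xFFFFF
    exp = (entry >> 9) & 0x7
    bottom = (entry >> 8) & 0x1
    ttl = entry & 0xFF
    return label, exp, bottom, ttl

def is_protected(entry: int) -> bool:
    """True if the packet belongs to the protected traffic portion (PTP),
    i.e., its EXP code point is configured as requiring protection."""
    _, exp, _, _ = parse_shim_entry(entry)
    return exp in PROTECTED_EXP
```

On a fault, a PSL applying this classifier would switch only the matching fraction of the path's traffic to the recovery path, leaving the remainder to default recovery.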
1087 4.4.4.2 Bundling 1089 Bundling is a technique used to group multiple working paths together 1090 in order to recover them simultaneously. The logical bundling of 1091 multiple working paths requiring protection, each of which is routed 1092 identically between a PSL and a PML, is called a protected path group 1093 (PPG). When a fault occurs on the working path carrying the PPG, the 1094 PPG as a whole can be protected either by being switched to a bypass 1095 tunnel or by being switched to a recovery path. 1097 4.4.5 Recovery Path Resource Use 1099 In the case of pre-reserved recovery paths, there is the question of 1100 what use these resources may be put to when the recovery path is not 1101 in use. There are two options: 1103 Dedicated-resource: 1104 If the recovery path resources are dedicated, they may not be used 1105 for anything except carrying the working traffic. For example, in 1106 the case of 1+1 protection, the working traffic is always carried on 1107 the recovery path. Even if the recovery path is not always carrying 1108 the working traffic, it may not be possible or desirable to allow 1109 other traffic to use these resources. 1111 Extra-traffic-allowed: 1112 If the recovery path only carries the working traffic when the 1113 working path fails, then it is possible to allow extra traffic to use 1114 the reserved resources at other times. Extra traffic is, by 1115 definition, traffic that can be displaced (without violating service 1116 agreements) whenever the recovery path resources are needed for 1117 carrying the working path traffic. 1119 Shared-resource: 1120 A shared recovery resource is dedicated for use by multiple primary 1121 resources that (according to SRLGs) are not expected to fail 1122 simultaneously. 1124 4.5. Fault Detection 1126 MPLS recovery is initiated after the detection of either a lower 1127 layer fault or a fault at the IP layer or in the operation of MPLS- 1128 based mechanisms. 
We consider four classes of impairments: Path 1129 Failure, Path Degraded, Link Failure, and Link Degraded. 1131 Path Failure (PF) is a fault that indicates to an MPLS-based recovery 1132 scheme that the connectivity of the path is lost. This may be 1133 detected by a path continuity test between the PSL and PML. Some, 1134 and perhaps the most common, path failures may be detected using a 1135 link probing mechanism between neighbor LSRs. An example of a probing 1136 mechanism is a liveness message that is exchanged periodically along 1137 the working path between peer LSRs [3]. For either a link probing 1138 mechanism or path continuity test to be effective, the test message 1139 must be guaranteed to follow the same route as the working or 1140 recovery path, over the segment being tested. In addition, the path 1141 continuity test must take the path merge points into consideration. 1142 In the case of a bi-directional link implemented as two 1143 unidirectional links, path failure could mean that either one or both 1144 unidirectional links are damaged. 1146 Path Degraded (PD) is a fault that indicates to MPLS-based recovery 1147 schemes/mechanisms that the path has connectivity, but that the 1148 quality of the connection is unacceptable. This may be detected by a 1149 path performance monitoring mechanism, or some other mechanism for 1150 determining the error rate on the path or some portion of the path. 1151 One such mechanism is local to the LSR and consists of detecting 1152 excessive discarding of packets at an interface, either due to label 1153 mismatch or due to TTL errors, for example. 1155 Link Failure (LF) is an indication from a lower layer that the link 1156 over which the path is carried has failed. If the lower layer 1157 supports detection and reporting of this fault (that is, any fault 1158 that indicates link failure, e.g., SONET LOS), this may be used by the 1159 MPLS recovery mechanism.
In some cases, using LF indications may 1160 provide faster fault detection than using only MPLS-based fault 1161 detection mechanisms. 1163 Link Degraded (LD) is an indication from a lower layer that the link 1164 over which the path is carried is performing below an acceptable 1165 level. If the lower layer supports detection and reporting of this 1166 fault, it may be used by the MPLS recovery mechanism. In some cases, 1167 using LD indications may provide faster fault detection than using 1168 only MPLS-based fault detection mechanisms. 1170 4.6. Fault Notification 1172 MPLS-based recovery relies on rapid and reliable notification of 1173 faults. Once a fault is detected, the node that detected the fault 1174 must determine if the fault is severe enough to require path 1175 recovery. If the node is not capable of initiating direct action 1176 (e.g., as a PSL) the node should send out a notification of the fault 1177 by transmitting a FIS to those of its upstream LSRs that were sending 1178 traffic on the working path that is affected by the fault. This 1179 notification is relayed hop-by-hop by each subsequent LSR to its 1180 upstream neighbor, until it eventually reaches a PSL. A PSL is the 1181 only LSR that can terminate the FIS and initiate a protection switch 1182 of the working path to a recovery path. 1184 Since the FIS is a control message, it should be transmitted with 1185 high priority to ensure that it propagates rapidly towards the 1186 affected PSL(s). Depending on how fault notification is configured in 1187 the LSRs of an MPLS domain, the FIS could be sent either as a Layer 2 1188 or Layer 3 packet [3]. The use of a Layer 2-based notification 1189 requires a direct Layer 2 path to the PSL. An example of a FIS could 1190 be the liveness message sent by a downstream LSR to its upstream 1191 neighbor, with an optional fault notification field set, or it could 1192 be implicitly denoted by a teardown message.
Alternatively, it could be 1193 a separate fault notification packet. The intermediate LSR should 1194 identify which of its incoming links (upstream LSRs) to propagate the 1195 FIS on. In the case of 1+1 protection, the FIS should also be sent 1196 downstream to the PML where the recovery action is taken. 1198 4.7. Switch-Over Operation 1200 4.7.1 Recovery Trigger 1202 The activation of an MPLS protection switch following the detection 1203 or notification of a fault requires a trigger mechanism at the PSL. 1204 MPLS protection switching may be initiated due to automatic inputs or 1205 external commands. The automatic activation of an MPLS protection 1206 switch results from a response to a defect or fault condition 1207 detected at the PSL or to fault notifications received at the PSL. It 1208 is possible that the fault detection and trigger mechanisms may be 1209 combined, as is the case when a PF, PD, LF, or LD is detected at a 1210 PSL and triggers a protection switch to the recovery path. In most 1211 cases, however, the detection and trigger mechanisms are distinct, 1212 involving the detection of a fault at some intermediate LSR followed 1213 by the propagation of a fault notification back to the PSL via the 1214 FIS, which serves as the protection switch trigger at the PSL. MPLS 1215 protection switching in response to external commands results when 1216 the operator initiates a protection switch by a command to a PSL (or 1217 alternatively by a configuration command to an intermediate LSR, 1218 which transmits the FIS towards the PSL). 1220 Note that the PF fault applies to hard failures (fiber cuts, 1221 transmitter failures, or LSR fabric failures), as does the LF fault, 1222 with the difference that the LF is a lower layer impairment that may 1223 be communicated to MPLS-based recovery mechanisms. The PD (or LD) 1224 fault, on the other hand, applies to soft defects (excessive errors 1225 due to noise on the link, for instance).
The PD (or LD) results in a 1226 fault declaration only when the percentage of lost packets exceeds a 1227 given threshold, which is provisioned and may be set based on the 1228 service level agreement(s) in effect between a service provider and a 1229 customer. 1231 4.7.2 Recovery Action 1233 After a fault is detected or a FIS is received by the PSL, the 1234 recovery action involves either a rerouting or a protection switching 1235 operation. In both scenarios, the next hop label forwarding entry for 1236 a recovery path is bound to the working path. 1238 4.8. Post Recovery Operation 1240 When traffic is flowing on the recovery path, a decision can be made 1241 either to let the traffic remain on the recovery path and treat it as 1242 a new working path, or to switch the traffic back to the old or to a 1243 new working path. This post recovery operation has two styles: one in 1244 which the protection counterparts, i.e., the working and recovery 1245 paths, are fixed or "pinned" to their routes, and one in which the 1246 PSL or other network entity with real-time knowledge of the failure 1247 dynamically performs re-establishment or controlled rearrangement of 1248 the paths comprising the protected service. 1250 4.8.1 Fixed Protection Counterparts 1252 For fixed protection counterparts, the PSL will be pre-configured 1253 with the appropriate behavior to take when the original fixed path is 1254 restored to service. The choices are revertive and non-revertive 1255 mode. The choice will typically depend on the relative costs of the 1256 working and protection paths, and the tolerance of the service to the 1257 effects of switching paths yet again. These protection modes indicate 1258 whether or not there is a preferred path for the protected traffic. 1260 4.8.1.1 Revertive Mode 1262 If the working path is always the preferred path, this path will be 1263 used whenever it is available.
Thus, in the event of a fault on this 1264 path, its unused resources will not be reclaimed by the 1265 network. If the working path has a fault, traffic is switched to the 1266 recovery path. In the revertive mode of operation, when the 1267 preferred path is restored, the traffic is automatically switched 1268 back to it. 1270 There are a number of implications to pinned working and recovery 1271 paths: 1272 - upon failure, once traffic is moved to the recovery path, it is 1273 unprotected until such time as the path defect in the original 1274 working path is repaired and that path is restored to service. 1275 - upon failure, once traffic is moved to the recovery path, the 1276 resources associated with the original path remain reserved. 1278 4.8.1.2 Non-revertive Mode 1280 In the non-revertive mode of operation, there is no preferred path, 1281 or it may be desirable to minimize further disruption of the service 1282 brought on by a revertive switching operation. A switch-back to the 1283 original working path is not desired or not possible, since the 1284 original path may no longer exist after the occurrence of a fault on 1285 that path. 1286 If there is a fault on the working path, traffic is switched to the 1287 recovery path. When or if the faulty path (the original working 1288 path) is restored, it may become the recovery path (either by 1289 configuration, or, if desired, by management actions). 1291 In the non-revertive mode of operation, the working traffic may or 1292 may not be restored to a new optimal working path or to the original 1293 working path.
This is because it might be useful, in some cases, to: (a)
administratively perform a protection switch back to the original
working path after gaining further assurances about the integrity
of the path, (b) simply continue operation on the recovery path, or
(c) move the traffic to a new optimal working path calculated based
on network topology and network policies.

4.8.2 Dynamic Protection Counterparts

With dynamic protection counterparts, when the traffic is switched
over to a recovery path, the association between the original
working path and the recovery path may no longer exist, since the
original path itself may no longer exist after the fault. Instead,
when the network reaches a stable state following routing
convergence, the traffic on the recovery path may be switched over
to a different preferred path, selected either by optimization
based on the new network topology and associated information or
based on pre-configured information.

Dynamic protection counterparts assume that, upon failure, the PSL
or another network entity will establish new working paths if a
further switch-over is to be performed.

4.8.3 Restoration and Notification

MPLS restoration deals with returning the working traffic from the
recovery path to the original or a new working path. Reversion is
performed by the PSL either upon receiving notification, via an
FRS, that the working path is repaired, or upon receiving
notification that a new working path has been established.

For fixed counterparts in revertive mode, the LSR that detected the
fault on the working path also detects the restoration of the
working path. If the working path had experienced an LF defect, the
LSR detects a return to normal operation via the receipt of a
liveness message from its peer.
If the working path had experienced an LD defect at an LSR
interface, the LSR could detect a return to normal operation via
the resumption of error-free packet reception on that interface.
Alternatively, a lower layer that no longer detects an LF defect
may inform the MPLS-based recovery mechanisms at the LSR that the
link to its peer LSR is operational. The LSR then transmits an FRS
to its upstream LSR(s) that were transmitting traffic on the
working path. When the PSL receives the FRS, it switches the
working traffic back to the original working path.

A similar scheme applies for dynamic counterparts, where, e.g., an
update of topology and/or network convergence may trigger the
installation or setup of new working paths, and a notification may
be sent to the PSL to perform a switch-over.

We note that if there is a way to transmit fault information back
along a recovery path towards a PSL, and if the recovery path is an
equivalent working path, it is possible for the working path and
its recovery path to exchange roles once the original working path
is repaired following a fault. This is because, in that case, the
recovery path effectively becomes the working path, and the
restored working path functions as a recovery path for the original
recovery path. This is important, since it affords the benefits of
non-revertive switch operation outlined in Section 4.8.1.2, without
leaving the recovery path unprotected.

4.8.4 Reverting to Preferred Path (or Controlled Rearrangement)

In the revertive mode, "make before break" restoration switching
can be used, which is less disruptive than performing protection
switching upon the occurrence of network impairments. This will
minimize both packet loss and packet reordering.
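As an illustrative sketch only (the class and method names below are
hypothetical, not defined by this framework), the make-before-break
reversion sequence can be modeled as: establish the preferred path,
verify it, move traffic in a single step, and only then release the
recovery path.

```python
# Hypothetical sketch of make-before-break reversion at a PSL;
# none of these names come from the MPLS recovery framework itself.

class PathSwitchLSR:
    """Minimal model of a PSL's choice of outgoing path."""

    def __init__(self, active_path):
        self.active = active_path          # path currently carrying traffic
        self.established = {active_path}   # paths with label bindings in place

    def establish(self, path):
        # Set up the new path while the old one still carries traffic
        # (the "make" happens before any "break").
        self.established.add(path)

    def verify(self, path):
        # Stand-in for a liveness/continuity check on the candidate path.
        return path in self.established

    def revert(self, preferred, recovery):
        # Only after the preferred path is established and verified is
        # traffic moved, and only then is the recovery path released --
        # minimizing loss and reordering during the switch-back.
        self.establish(preferred)
        if not self.verify(preferred):
            return self.active             # stay on the recovery path
        self.active = preferred
        self.established.discard(recovery)
        return self.active

psl = PathSwitchLSR(active_path="recovery-path")
restored = psl.revert("working-path", "recovery-path")
```

The essential ordering is that resources for the old path are released
last, so traffic is never without an installed forwarding entry.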
The controlled rearrangement of paths can also be used to satisfy
traffic engineering requirements for load balancing across an MPLS
domain.

4.9. Performance

Resource/performance requirements for recovery paths should be
specified in terms of the following attributes:

I. Resource class attribute:

Equivalent Recovery Class: The recovery path has the same resource
reservations and performance guarantees as the working path. In
other words, the recovery path meets the same SLAs as the working
path.

Limited Recovery Class: The recovery path does not have the same
resource reservations and performance guarantees as the working
path.

   A. Lower Class: The recovery path has lower resource requirements
   or less stringent performance requirements than the working path.

   B. Best Effort Class: The recovery path is best effort.

II. Priority Attribute: The recovery path has a priority attribute,
just like the working path (i.e., the priority attribute of the
associated traffic trunks). It can have the same priority as the
working path or a lower one.

III. Preemption Attribute: The recovery path can have the same
preemption attribute as the working path or a lower one.

5. MPLS Recovery Features

The following features are desirable from an operational point of
view:

I. It is desirable that MPLS recovery provide an option to identify
protection groups (PPGs) and protection portions (PTPs).

II. Each PSL should be capable of performing MPLS recovery upon the
detection of impairments or upon receipt of notification of
impairments.

III. An MPLS recovery method should not preclude manual protection
switching commands.
This implies that it should be possible, under administrative
commands, to transfer traffic from a working path to a recovery
path, or to transfer traffic from a recovery path to a working path
once the working path becomes operational following a fault.

IV. A PSL may be capable of performing either a switch-back to the
original working path after the fault is corrected, or a
switch-over to a new working path upon the discovery or
establishment of a more optimal working path.

V. The recovery model should take into consideration path merging
at intermediate LSRs. If a fault affects a merged segment, all the
paths sharing that merged segment should be able to recover.
Similarly, if a fault affects a non-merged segment, only the path
affected by the fault should be recovered.

6. Comparison Criteria

Possible criteria for the comparison of MPLS-based recovery schemes
are as follows:

Recovery Time

We define recovery time as the time required for a recovery path to
be activated (and traffic flowing) after a fault. Recovery time is
the sum of the fault detection time, hold-off time, notification
time, recovery operation time, and traffic restoration time. In
other words, it is the time between the failure of a node or link
in the network and the time at which a recovery path is installed
and traffic starts flowing on it.

Full Restoration Time

We define full restoration time as the time required for a
permanent restoration. This is the time required for traffic to be
routed onto links that are capable of, or have been engineered
sufficiently for, handling traffic in recovery scenarios. Note that
this time may or may not differ from the recovery time, depending
on whether equivalent or limited recovery paths are used.
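The recovery-time decomposition above is a simple sum of its
component delays. The sketch below illustrates it with made-up
millisecond figures; the component values are examples only, not
requirements of the framework.

```python
# Recovery Time = fault detection + hold-off + notification
#                 + recovery operation + traffic restoration.
# All figures below are hypothetical illustrations (milliseconds).

def recovery_time_ms(detection, hold_off, notification,
                     operation, restoration):
    """Total recovery time as the sum of its five components."""
    return detection + hold_off + notification + operation + restoration

total = recovery_time_ms(
    detection=10,     # time to detect the fault (e.g., missed liveness messages)
    hold_off=0,       # configured delay, e.g., to let lower layers recover first
    notification=5,   # FIS propagation back to the PSL
    operation=2,      # protection switch at the PSL
    restoration=3,    # until traffic is actually flowing on the recovery path
)
print(total)
```

A non-zero hold-off time trades a longer outage for the chance that a
lower-layer mechanism repairs the fault first, avoiding an MPLS-level
switch entirely.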
Setup Vulnerability

The amount of time that a working path, or a set of working paths,
is left unprotected during such tasks as recovery path computation
and recovery path setup may be used to compare schemes. The nature
of this vulnerability should be taken into account; e.g., in
end-to-end schemes the vulnerability is correlated with individual
working paths, local repair schemes have a topological correlation
that cuts across working paths, and network-plan approaches have a
correlation that impacts the entire network.

Backup Capacity

Recovery schemes may require differing amounts of "backup capacity"
in the event of a fault. This capacity will depend on the traffic
characteristics of the network. However, it may also depend on the
particular protection plan selection algorithms, as well as on the
signaling and re-routing methods.

Additive Latency

Recovery schemes may introduce additive latency to traffic. For
example, a recovery path may take many more hops than the working
path. This may depend on the recovery path selection algorithms.

Quality of Protection

Recovery schemes can be considered to encompass a spectrum of
"packet survivability", which may range from "relative" to
"absolute". Relative survivability may mean that the packet is on
an equal footing with other traffic of, for example, the same
diff-serv code point (DSCP) in contending for the resources of the
portion of the network that survives the failure. Absolute
survivability may mean that the survivability of the protected
traffic has explicit guarantees.

Re-ordering

Recovery schemes may introduce re-ordering of packets. The action
of putting traffic back on preferred paths might also cause packet
re-ordering.
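Several of the criteria above lend themselves to simple quantitative
comparison. As a toy illustration of additive latency (all per-hop
delay figures below are hypothetical), the extra one-way delay
incurred while traffic rides the recovery path is just the
difference of the two path delays:

```python
# Hypothetical per-hop one-way delays in milliseconds (made-up values).
working_path_hops = [2.0, 3.0, 2.5]             # 3-hop working path
recovery_path_hops = [2.0, 4.0, 3.5, 2.5, 3.0]  # 5-hop recovery detour

# Additive latency: extra delay experienced on the recovery path.
additive_latency = sum(recovery_path_hops) - sum(working_path_hops)
print(additive_latency)
```

A recovery path selection algorithm that bounds this difference keeps
the latency penalty of protection predictable for delay-sensitive
services.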
State Overhead

As the number of recovery paths in a protection plan grows, the
state required to maintain them also grows. Schemes may require
differing numbers of paths to maintain certain levels of coverage,
and the state required may also depend on the particular scheme
used to recover. In many cases, the state overhead will be in
proportion to the number of recovery paths.

Loss

Recovery schemes may introduce a certain amount of packet loss
during switchover to a recovery path. For schemes that introduce
loss during recovery, this loss can be estimated by evaluating
recovery times in proportion to the link speed. In the case of a
link or node failure, a certain amount of packet loss is
inevitable.

Coverage

Recovery schemes may offer various types of failover coverage. The
total coverage may be defined in terms of several metrics:

I. Fault types: Recovery schemes may account for link faults only,
for both node and link faults, or also for degraded service. For
example, a scheme may require more recovery paths to take node
faults into account.

II. Number of concurrent faults: Depending on the layout of the
recovery paths in the protection plan, multiple-fault scenarios may
be recoverable.

III. Number of recovery paths: For a given fault, there may be one
or more recovery paths.

IV. Percentage of coverage: Depending on a scheme and its
implementation, a certain percentage of faults may be covered. This
may be subdivided into the percentage of link faults and the
percentage of node faults.

V. The number of protected paths may affect how fast the total set
of paths affected by a fault can be recovered. The ratio of
protected paths is n/N, where n is the number of protected paths
and N is the total number of paths.

7. Security Considerations

The MPLS recovery specified herein does not raise any security
issues that are not already present in the MPLS architecture.

8. Intellectual Property Considerations

The IETF has been notified of intellectual property rights claimed
in regard to some or all of the specification contained in this
document. For more information, consult the online list of claimed
rights.

9. Acknowledgements

We would like to thank the members of the MPLS WG mailing list for
their suggestions on earlier versions of this draft, in particular
Bora Akyol, Dave Allan, Neil Harrison, and Dave Danenberg, whose
suggestions and comments were very helpful in revising the
document.

The editors would like to give very special thanks to Curtis
Villamizar for his careful and extremely thorough reading of the
document, and for taking the time to provide numerous suggestions,
which were very helpful in the last couple of revisions of the
document.

10. Editors' Addresses

Vishal Sharma                      Fiffi Hellstrand
Metanoia, Inc.                     Nortel Networks
305 Elan Village Ln., Unit 121     St Eriksgatan 115
San Jose, CA 95134                 PO Box 6701
Phone: (408) 955-0910              113 85 Stockholm, Sweden
v.sharma@ieee.org                  Phone: +46 8 5088 3687
                                   Fiffi@nortelnetworks.com

11. References

[1] Rosen, E., Viswanathan, A., and Callon, R., "Multiprotocol
    Label Switching Architecture", RFC 3031, January 2001.

[2] Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and
    McManus, J., "Requirements for Traffic Engineering Over MPLS",
    RFC 2702, September 1999.

[3] Huang, C., Sharma, V., Owens, K., and Makam, V., "Building
    Reliable MPLS Networks Using a Path Protection Mechanism",
    IEEE Commun. Mag., Vol. 40, Issue 3, March 2002, pp. 156-162.
[4] Braden, R., Zhang, L., Berson, S., and Herzog, S., "Resource
    ReSerVation Protocol (RSVP) -- Version 1 Functional
    Specification", RFC 2205, September 1997.

[5] Awduche, D., et al., "RSVP-TE: Extensions to RSVP for LSP
    Tunnels", RFC 3209, December 2001.

[6] Jamoussi, B., et al., "Constraint-Based LSP Setup using LDP",
    RFC 3212, January 2002.