Network Working Group                          X. Fu, M. Betts, Q. Wang
Internet Draft                                                      ZTE
Intended Status: Informational                                V. Manral
Expires: April 14, 2013                           Hewlett-Packard Corp.
                                                   D. McDysan, A. Malis
                                                                Verizon
                                                           S. Giacalone
                                                        Thomson Reuters
                                                               J. Drake
                                                       Juniper Networks

                                                       October 15, 2012

     Delay and Loss Traffic Engineering Problem Statement for MPLS

           draft-fuxh-mpls-delay-loss-te-problem-statement-00

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups. Note that other groups
may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

This Internet-Draft will expire on April 14, 2013.

Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the document
authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions
Relating to IETF Documents (http://trustee.ietf.org/license-info) in
effect on the date of publication of this document. Please review these
documents carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this document
must include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.

Abstract

Deployment and usage of cloud based applications and services that use
an underlying MPLS network are expanding, and an increasing number of
applications are extremely sensitive to delay and packet loss.
Furthermore, in cloud computing an additional decision problem arises of
simultaneously choosing the data center to host applications along with
MPLS network connectivity such that the overall performance of the
application is met. Mechanisms exist to measure and monitor MPLS path
performance parameters for packet loss and delay, but only after the
path has been set up. These cloud based and performance sensitive
applications would benefit from measurement of MPLS network and
potential path information being made available for use in the
computation and selection of LSPs.

This document provides a statement of the problems faced by these cloud
based and performance sensitive applications and describes requirements
to enable the efficient and accurate measurement of the MPLS network and
to allow new performance parameters to be reported and used in the
computation of MPLS services in support of these cloud based and
performance sensitive applications.

Table of Contents

1. Introduction
   1.1. Scope
2. Conventions used in this document
   2.1. Acronyms
   2.2. Terminology and Assumptions
      2.2.1. Latency
      2.2.2. Packet Loss
      2.2.3. Packet Delay Variation
3. Motivation and Background
   3.1. General Characteristics of Performance Parameters
   3.2. Use Cases for Performance Parameter Sensitive LSP Placement
4. Problem Statement
   4.1. End-end Measurement Insufficient for Performance Sensitive LSP
        Path Selection
   4.2. Lower Layer MPLS Networks Unable to Communicate Significant
        Performance Changes
   4.3. No Method to Communicate Significant Node/Link Performance
        Changes
   4.4. Routing Metrics Insufficient for Performance Sensitive Path
        Selection
   4.5. LSP Signaling Methods Insufficient for Performance Sensitive
        Path Selection
5. Functional Requirements
   5.1. Augment LSP Requestor Signaling with Performance Parameter
        Values
   5.2. Specify Criteria for Node and Link Performance Parameter
        Estimation, Measurement Methods
   5.3. Support Node Level Performance Information when Needed
   5.4. Augment Routing Information with Performance Parameter Estimates
   5.5. Augment Signaling Information with Concatenated Estimates
   5.6. Define Significant Performance Parameter Change Thresholds and
        Frequency
   5.7. Define Thresholds and Timers for Links with Unusable Performance
   5.8. Communicate Significant Performance Changes between Layers
   5.9. Support for Networks with Composite Link
   5.10. Restoration, Protection and Rerouting
   5.11. Management and Operational Requirements
6. IANA Considerations
7. Security Considerations
8. References
   8.1. Normative References
   8.2. Informative References
9. Acknowledgments

1. Introduction

This draft is one of two created from
draft-fuxh-mpls-delay-loss-te-framework-05 in response to comments from
an MPLS Review Team (RT). This draft focuses on a problem statement and
requirements, while the other focuses on a framework.

The intent of this document is to focus on stating the technical aspects
of the application oriented problems to be solved and the specific
requirements targeted to solve these problems.

It describes requirements and application needs for bounded values of
latency, packet loss and delay variation.

1.1. Scope

A (G)MPLS network may have multiple layers of packet, TDM and/or optical
network technology, and an important objective is to predict end-to-end
latency, loss and delay variation with acceptable accuracy, based upon
the current state of this network, before an LSP is established.

The (G)MPLS network may cover a single IGP area/level, may be a
hierarchical IGP under control of a single administrator, or may involve
multiple domains under control of multiple administrators.

2. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].

2.1. Acronyms

SLA   Service Level Agreement

SLS   Service Level Specification

NPO   Network Performance Objective

2.2. Terminology and Assumptions

A Service Level Agreement (SLA) is a contractual agreement that service
providers have with customers for services, comprising numerical values
for performance measures; for example, latency, loss and delay
variation.
Additionally, network operators may have a Service Level Specification
(SLS) that is for internal use by the operator. See [ITU-T.Y.1540],
[ITU-T.Y.1541], and Section 4.9 of RFC 3809 [RFC3809] for examples of
the form of such SLA and SLS specifications.

A Network Performance Objective (NPO) is defined in Section 5 of
[ITU-T.Y.1541] in terms of numerical values for performance measures,
principally latency, loss, and delay variation. The term NPO is used in
this document since the SLA and SLS measures have network operator and
service specific implications. Furthermore, the NPO measures are
sufficiently well defined to address other use cases and the stated
problems.

Of particular interest are the composition methods defined in Y.1541 for
estimating performance parameters of candidate LSP paths based upon the
performance parameter estimates/measurements of individual nodes and
links.

This document assumes that the evaluation interval for a performance
parameter is on the order of minutes, as stated in Section 5.3.2 of
[ITU-T.Y.1541], which is the same as that used in some commercial
networks.

2.2.1. Latency

Section 6.2.1 of [ITU-T.Y.1540] defines mean IP Packet Transfer Delay
(IPTD) as the arithmetic average of the one-way delay observed between
measurement points. IPTD is referred to as "latency" in this document.

Section 8.2.1 of [ITU-T.Y.1541] defines composition of the UNI-UNI IPTD
performance parameter: for the mean IP packet transfer delay (IPTD)
performance parameter, "the UNI-UNI performance is the sum of the means
contributed by network sections."

2.2.2. Packet Loss

Section 6.4 of [ITU-T.Y.1540] defines IP Packet Loss Ratio (IPLR) as the
"ratio of total lost IP packet outcomes to total transmitted IP packets
in a population of interest," which is referred to as "loss" in this
document.
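The Y.1541-style composition rules used in this section can be made
concrete with a short informal sketch. The helper functions and the
per-section values below are invented for illustration and are not part
of the Recommendations: mean delay composes additively, while loss ratio
composes by inverting the probability of successful packet transfer
across the sections.

```python
# Informal sketch of Y.1541-style composition of per-section
# performance estimates into a UNI-UNI (end-to-end) estimate.
# Function names and per-section values are invented for illustration.

def compose_iptd(section_means_ms):
    """Mean delay (IPTD) composes additively: the UNI-UNI mean is
    the sum of the means contributed by the network sections."""
    return sum(section_means_ms)

def compose_iplr(section_iplrs):
    """Loss (IPLR) composes by inverting the probability of
    successful packet transfer across the n network sections."""
    p_success = 1.0
    for iplr in section_iplrs:
        p_success *= 1.0 - iplr
    return 1.0 - p_success

# Three hypothetical network sections on a candidate LSP path.
section_means_ms = [12.0, 3.5, 20.0]   # per-section mean delay (ms)
section_iplrs = [1e-4, 5e-5, 2e-4]     # per-section loss ratios

print(compose_iptd(section_means_ms))  # 35.5 (ms, end-to-end mean)
print(compose_iplr(section_iplrs))     # roughly 3.5e-4
```

Delay variation deliberately has no such helper here: as discussed in
Section 2.2.3, IPDV is sub-additive and cannot be composed this simply.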
Section 8.2.2 of [ITU-T.Y.1541] defines composition of the UNI-UNI IPLR
performance parameter: it "may be estimated by inverting the probability
of successful packet transfer across n network sections."

2.2.3. Packet Delay Variation

Section 6.2.4.2 of [ITU-T.Y.1540] defines quantile-based limits on IP
Packet Delay Variation (IPDV), which is referred to as "delay variation"
in this document.

Section 8.2.4 of [ITU-T.Y.1541] notes that composition of the UNI-UNI
IPDV performance parameter must recognize its sub-additive nature, and
that IPDV is difficult to estimate accurately without considerable
information about the individual delay distributions. Appendix IV of
[ITU-T.Y.1541] gives several examples of IPDV estimate calculations.

3. Motivation and Background

3.1. General Characteristics of Performance Parameters

In general, both nodes and links may contribute to the performance
parameters.

For many applications, the latency NPO is very important. In networks
with wide geographic separation, propagation delay may dominate latency,
while in local or metro networks nodal latency may become important.

Some link technologies (e.g., wireless, Wi-Fi, satellite) may have
packet loss characteristics inherently different from those of other
link technologies (e.g., fiber optic, cable). Furthermore, the loading
of queues may also create packet loss.

Delay variation (sometimes also referred to as packet jitter) is
important to some applications, such as interactive voice, video and/or
multimedia communication, gaming, and simulations. If delay varies too
much, then a playback buffer for such applications may underflow or
overflow, resulting in a disruption to the application. Delay variation
is caused primarily by queuing within a node.

3.2. Use Cases for Performance Parameter Sensitive LSP Placement

In high-frequency trading in electronic financial markets, computers
make decisions based on the electronic data received, without human
intervention. These trades now account for a majority of trading volumes
and rely exclusively on ultra-low-latency direct market access. In
certain networks, such as financial information networks (e.g., stock
market data providers), network performance information (e.g., latency)
is critical to data path selection. In these networks, extremely large
amounts of money rest on the ability to access market data as quickly as
possible and to predictably make trades faster than the competition.
Using metrics such as hop count or link cost for routing may not always
meet this need. In such networks it would be beneficial to be able to
make path selection decisions based on performance data (such as
latency) in a cost-effective and scalable way.

In other networks, for example network-based VPNs, a Service Level
Agreement (SLA) is in place between a customer and a provider that
specifies performance objectives, such as latency, loss, and delay
variation. In some cases these performance objectives are defined
between specific customer locations. Furthermore, packets may be
associated with certain classes, as identified by packet header fields
(e.g., IP DSCP, IEEE P-bits, MPLS TC bits), that are associated with
different performance objectives. In these types of networks, the
objective is to provide service that is no worse than the performance
objective. A single SLA may support many customers of the same type.

There is also a need to support specific SLAs, typically for very large
customers who demand premium performance for which they are willing to
pay a premium price.
In emerging cloud-based services, an additional decision problem arises
in which the application may be placed in a choice of more than one data
center and the (G)MPLS network connectivity may also be chosen [CLO,
CSO]. In these types of applications, the objective is to meet the
overall performance of the application deployed in one or more data
centers. The intra-data-center performance component is out of scope of
this draft, but this overall cloud plus networking decision problem
would benefit from a prediction of MPLS network performance as part of
path establishment.

4. Problem Statement

With the use cases in the previous section as motivation, there are
several technical problems that currently standardized IETF protocols do
not adequately address:

o  End-end Measurement Insufficient for Performance Sensitive LSP Path
   Selection

o  Routing Metrics Insufficient for Performance Sensitive Path Selection

o  LSP Signaling Methods Insufficient for Performance Sensitive Path
   Selection

o  Lower Layer MPLS Networks Unable to Communicate Significant
   Performance Changes

o  No Method to Communicate Significant Node/Link Performance Changes

The following sections expand on each of these technical problem areas
in more detail. Although some of the problem statements are made in
terms of existing/proposed protocols, there is no intention to imply
that the solution requires a revision to these protocols.

4.1. End-end Measurement Insufficient for Performance Sensitive LSP Path
Selection

Methods exist to measure established LSP performance, e.g., [RFC6374]
for MPLS-TP, and are most useful in verifying support for an NPO. RFC
6374 specifies a mechanism to measure and monitor performance parameters
for packet loss, one-way and two-way latency, delay variation and
throughput.
However, if the measured performance of an LSP does not meet the
objective, there is no standardized method to aid an LSP originator or a
proxy (e.g., a PCE) in selecting a modified path that would meet the
performance objective.

Therefore, there is a need to enable path computation that has access to
up-to-date performance estimates.

4.2. Lower Layer MPLS Networks Unable to Communicate Significant
Performance Changes

Historically, when an IP/MPLS network was operated over a lower layer
circuit switched network (e.g., SONET rings), a change in latency caused
by the lower layer network (e.g., due to a maintenance action or
failure) was not known to the MPLS network. This resulted in latency
affecting end user experience, sometimes violating NPO, SLS and/or SLA
values and/or resulting in user complaints.

Using lower layer networks to provide restoration and grooming may be
more efficient than performing packet-only restoration, but the
inability to communicate performance parameters, in particular latency,
from the lower layer network to the higher layer network is an important
problem to be solved, not only in the composite link case [CL-REQ,
Section 4.2] but also in the case of single links connecting nodes.

In summary, multi-layer GMPLS networks do not have a means to
communicate a significant change in performance (e.g., latency) from one
layer to another.

4.3. No Method to Communicate Significant Node/Link Performance Changes

Performance characteristics of links and nodes may change dynamically in
response to a number of events. There is currently no way to
automatically indicate which nodes and/or links have had significant
performance changes to LSP originators or proxies so that they can
attempt to recompute and signal a path that would meet the LSP
performance objective.

4.4. Routing Metrics Insufficient for Performance Sensitive Path
Selection

Optimization on a single metric does not meet the needs of all cases of
performance sensitive path selection. In some cases, minimizing latency
relates directly to the best customer experience (e.g., in TCP closer is
faster, and in financial trading the absolute minimum latency possible
provides a competitive advantage). In other cases, user experience is
relatively insensitive to latency, up to a specific limit at which point
user perception of quality degrades significantly (e.g., interactive
human voice and multimedia conferencing). A number of NPOs have a bound
on point-point latency, and as long as this bound is met, the NPO is met
-- decreasing the latency further is not necessary. In some NPOs, if the
specified latency is not met, the user considers the service
unavailable. An unprotected LSP can be manually provisioned on a set of
links to meet this type of NPO, but this lowers availability, since an
alternate route that meets the latency NPO cannot be determined.

One operational approach is to provision IP/MPLS networks over
unprotected circuits and set the metric and/or TE-metric proportional to
latency. This results in traffic being directed over the least latency
path, even when this is not needed to meet an NPO or user experience
objectives, which reduces flexibility and increases cost for network
operators. However, the (TE) metric is often used to represent other
information, such as link speed, economic cost, or support of ECMP (as
described below), and may not be able to be set proportional to latency.
Furthermore, if performance metrics such as loss and delay variation are
to be supported in path selection, then proportional mapping is not
possible.
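The difference between minimizing latency outright and selecting the
cheapest path that merely satisfies a latency bound can be illustrated
with a toy sketch. The topology, costs and latencies below are invented,
and this is not a proposed path computation algorithm; it only shows
that the two policies can select different paths:

```python
# Toy illustration: least-latency path selection versus cheapest path
# meeting a latency bound. Topology and numbers are invented.

# graph[node] = list of (neighbor, te_cost, latency_ms)
graph = {
    "A": [("B", 1, 40.0), ("C", 5, 10.0)],
    "B": [("D", 1, 40.0)],
    "C": [("D", 5, 10.0)],
    "D": [],
}

def simple_paths(src, dst, path=None):
    """Enumerate all simple paths in the (small) toy graph."""
    path = path or [src]
    if src == dst:
        yield path
        return
    for nbr, _, _ in graph[src]:
        if nbr not in path:
            yield from simple_paths(nbr, dst, path + [nbr])

def path_metrics(path):
    """Return (total TE cost, total latency) along a path."""
    cost = lat = 0.0
    for u, v in zip(path, path[1:]):
        edge = next(e for e in graph[u] if e[0] == v)
        cost += edge[1]
        lat += edge[2]
    return cost, lat

paths = list(simple_paths("A", "D"))
# Policy 1: least latency regardless of cost (e.g., financial trading).
min_latency = min(paths, key=lambda p: path_metrics(p)[1])
# Policy 2: cheapest path that still meets a 100 ms NPO latency bound.
bound_ms = 100.0
feasible = [p for p in paths if path_metrics(p)[1] <= bound_ms]
min_cost_bounded = min(feasible, key=lambda p: path_metrics(p)[0])
```

With these numbers, the least-latency path is A-C-D (20 ms, cost 10),
while A-B-D (80 ms, cost 2) is far cheaper and still meets the 100 ms
bound; a single routing metric cannot express both policies at once.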
Link attributes and LSP resource affinities [RFC3209] can be used
operationally to encode some information regarding performance, for
example, indicating wired versus wireless, or satellite versus
terrestrial. However, these attributes/affinities are also used to
encode other attributes, and the 32 bit format is limiting in terms of
numerical representation of performance objective parameters.

Another operational approach is to set (TE) metrics to (nearly) the same
value so that LSPs are placed across multiple links using Equal Cost
Multi-Path (ECMP) path selection. However, these parallel links may have
markedly different performance characteristics (e.g., latency), and
choice of a link that meets the performance objective is needed [CL-REQ,
Section 4.3].

IGP link and TE metrics are not sufficient to support performance
sensitive path selection in a single IGP area/level [EXPRESS-PATH].

4.5. LSP Signaling Methods Insufficient for Performance Sensitive Path
Selection

Current signaling approaches do not support inter-area/level or
inter-domain performance sensitive path selection. There is no standard
for setting link attributes and LSP resource affinities [RFC3209]
between administrative domains, and since these have been used
differently within some domains, they are not a viable candidate to
solve the aforementioned problems in this context. Augmenting an IGP
with performance information does not solve the problem in these cases.

What is needed is a means for the originator/proxy of an LSP to confirm
whether the estimated performance of a computed LSP path will meet the
performance objective.

5. Functional Requirements

This section groups functional requirements intended to address the
problems stated in the previous section into related areas.

5.1. Augment LSP Requestor Signaling with Performance Parameter Values

The solution needs to provide a means for an LSP requestor to signal
performance parameter sensitive paths. The following requirements state
the types of requests that are required.

The solution MUST provide a means to indicate which performance
parameters are supported by the network area/level or domain.

The solution MUST provide a means for the LSP requestor to ask for the
minimum possible value for each supported performance parameter.

For example, an LSP requestor may ask for an LSP that has the minimum
possible value of latency.

The solution MUST provide a means for the LSP requestor to ask for a
range of acceptable values for each supported performance parameter.

For example, an LSP requestor may ask for an LSP whose performance lies
between minimum and maximum values of latency and packet loss.

5.2. Specify Criteria for Node and Link Performance Parameter
Estimation, Measurement Methods

The solution MUST provide a means to configure the one-way link and node
performance parameters for latency, loss and delay variation.

The solution SHOULD provide a means to dynamically measure and/or
estimate the one-way link and node performance parameters for latency,
loss and delay variation.

As stated in Section 2.2, the estimation interval for the performance
parameters is assumed to be on the order of minutes. The solution MUST
NOT impact stability nor significantly increase convergence time if
performance parameters change over a timeframe on the order of minutes.

5.3. Support Node Level Performance Information when Needed

There are several scenarios under which node-related performance
parameters (latency, loss, delay variation) have different levels of
importance:

1. The case of few nodes with large geographic separation (e.g.,
trans-oceanic), where link latency alone would be a good approximation.

2. The case of many nodes with small geographic separation (e.g.,
interconnected nearby data centers), where node/device latency is very
important but link latency may be negligible.

3. The case of some number of nodes with medium geographic separation,
where usage of both link and node latency may be desirable.

The intent in case 1 is to measure the predominant latency in
uncongested service provider networks, where geographic delay dominates
and is on the order of milliseconds or more. The argument in cases 2 and
3 for including node-level queuing performance parameters is that they
better represent the performance experienced by applications. The
argument against including queuing-related performance parameters is
that, if used in routing decisions, they can result in routing
instability. This tradeoff is discussed in detail in [CL-FW, Section
4.1.1].

The solution MUST define methods to include node level performance
estimate information in routing protocols.

The solution MUST define methods to include node level performance
estimate information in signaling protocols.

A specific deployment of the solution MAY choose not to use the node
level performance estimates.

5.4. Augment Routing Information with Performance Parameter Estimates

The solution MUST provide a means to communicate performance parameters
of both links and nodes as an estimate for use in performance sensitive
LSP path selection within nodes of a single IGP area/level.

The solution SHOULD provide a means to communicate latency, loss and
delay variation of links and nodes as a traffic engineering performance
parameter for use in performance sensitive LSP path selection across a
set of nodes in a hierarchy of IGP areas/levels.

5.5. Augment Signaling Information with Concatenated Estimates

The solution MUST provide a means to signal concatenated performance
parameter estimates for both links and nodes as an estimate for use in
performance sensitive LSP path selection traversing two or more separate
administrative domains. See the terminology section for references on
the concatenation method for specific performance parameters.

For example, the solution needs to support the capability to compute a
route with X amount of bandwidth, less than Y ms of latency, and less
than Z% loss across multiple domains.

The solution MUST support the means to concatenate performance parameter
estimates and report this for each traversed domain on the end-to-end
path.

The solution MUST interoperate with existing path selection and
signaling methods traversing multiple domains.

5.6. Define Significant Performance Parameter Change Thresholds and
Frequency

Latency, loss and delay variation measurements and/or estimates may be
time varying. The solution MUST provide a means to control the
advertisement rate of performance parameter estimates to avoid
instability.

Any automatic LSP routing and/or load balancing solutions MUST NOT
oscillate such that the performance observed by users changes enough
that an NPO is violated. Since oscillation may cause reordering, there
MUST be means to control the frequency of changing the path over which
an LSP is placed.

5.7. Define Thresholds and Timers for Links with Unusable Performance

The solution MUST provide a means to configure a performance parameter
threshold which defines placement of a node or link into an unusable
state. The solution MUST provide a means to configure a performance
parameter threshold which defines transition of a node or a link from an
unusable state to a usable state.
   The solution MUST provide a means to control the minimum transition
   time between these states.

   This unusable state is intended to operate on a per link/node
   capability basis and not on a global basis.  Since state transition
   conditions are locally configured, all routers within a domain should
   synchronize this configuration value.

   With current TE protocols, a refreshed LSP would use the most recent
   performance parameter estimates and may be rerouted based upon nodes
   or links being placed in an unusable performance state.  Section 5.11
   defines requirements for a desirable function where performance
   sensitive LSP re-routing would occur.

5.8. Communicate Significant Performance Changes between Layers

   In order to support network NPOs and provide an acceptable user
   experience, the solution MUST specify a protocol means to allow a
   lower layer server network to communicate performance parameters
   (e.g., latency, loss, delay variation) to the higher layer client
   network.

5.9. The above requirement applies to layering with different
   technologies (e.g., MPLS over OTN) or to different levels within the
   same technology (e.g., hierarchical LSPs).

5.10. Support for Networks with Composite Link

   An LSP may traverse a network with Composite Links [CL-REQ].  The
   solution's selection of performance sensitive paths SHOULD be
   compatible with the general availability, stability and transient
   response requirements of [CL-REQ, Section 4.1].

   When an LSP traverses a network with composite links whose component
   links are provided by lower layer networks, the solution MUST
   interoperate with the requirements of [CL-REQ, Section 4.2].

   When an LSP traverses a network with composite links that have
   parallel component links with different characteristics, the solution
   MUST interoperate with the requirements of [CL-REQ, Section 4.3].

5.11. Restoration, Protection and Rerouting

   The ability to re-route an LSP when one or more NPO objectives are
   not met is highly desirable.  The solution SHOULD support the
   capability to configure an LSP as capable of implementing performance
   sensitive re-routing, as detailed in the following conditional
   requirements.

   If performance sensitive re-routing is implemented, the solution MUST
   provide a means to configure performance parameter threshold crossing
   and time values.

   If performance sensitive re-routing is implemented, the solution MUST
   support a configuration option to move an end-to-end LSP away from
   any link or node whose performance violates the configured threshold.

   If performance sensitive re-routing is implemented, the solution MUST
   provide a means to control the frequency of LSP rerouting to avoid
   instability.

   If performance sensitive re-routing is implemented, and revertive
   behavior to a preferred LSP is supported, then the preferred LSP MUST
   NOT be released.  When the end-to-end performance of the preferred
   LSP becomes acceptable, the service is restored to this preferred
   LSP.

   The maximum acceptable latency difference between the primary path
   and the protection/restoration path of a pre-defined protection or
   dynamically re-routable LSP MUST be specifiable in the solution.  For
   example, [MPLS-TP-USE-CASE] defines a Relative Delay Time, which is
   the difference of the Absolute Delay between the primary and
   protection paths.

5.12. Management and Operational Requirements

   Existing management and diagnostic protocols MUST be able to operate
   over networks supporting performance sensitive LSP placement.

   If performance sensitive re-routing is implemented, and end-to-end
   measurements of the LSP performance are made, then the LSP requestor
   is able to request path placement for a performance sensitive LSP
   using the previously stated requirements.
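The revertive behavior and Relative Delay Time concept from Section 5.11 can be sketched as follows.  All class, function, and parameter names here are assumptions made for this illustration; the draft does not define them.

```python
# Illustrative sketch of Section 5.11 revertive behavior: traffic moves
# to the backup LSP when the preferred LSP's measured end-to-end latency
# violates a configured threshold, but the preferred LSP itself is never
# released, so service can be restored to it once its performance
# becomes acceptable again.

def relative_delay_ms(primary_delay_ms, protection_delay_ms):
    # [MPLS-TP-USE-CASE] defines a Relative Delay Time as the difference
    # of the Absolute Delay between the primary and protection paths.
    return abs(primary_delay_ms - protection_delay_ms)

class RevertiveService:
    def __init__(self, latency_threshold_ms):
        self.latency_threshold_ms = latency_threshold_ms
        self.active = "preferred"
        self.preferred_released = False   # MUST NOT become True

    def on_measurement(self, preferred_latency_ms):
        if preferred_latency_ms > self.latency_threshold_ms:
            self.active = "backup"      # move traffic; keep preferred LSP
        else:
            self.active = "preferred"   # revert: performance acceptable
        return self.active
```

A deployment would additionally rate-limit these transitions, per the requirement above to control re-routing frequency.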
   Since a threshold crossing of the end-to-end performance measurement
   may or may not correspond to a change in the concatenated performance
   parameter estimates, making any automatic decision on this basis is
   not recommended, since it could create instability.

6. IANA Considerations

   No new IANA considerations are raised by this document.

7. Security Considerations

   This document raises no new security issues.

8. References

8.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2. Informative References

   [CL-UC] Villamizar, C., et al., "Composite Link Use Cases and Design
           Considerations", draft-ietf-rtgwg-cl-use-cases-01, work in
           progress.

   [CL-REQ] Villamizar, C., et al., "Requirements for MPLS Over a
            Composite Link", draft-ietf-rtgwg-cl-requirement-08, work in
            progress.

   [CL-FW] Villamizar, C., et al., "Composite Link Framework in Multi
           Protocol Label Switching (MPLS)", work in progress.

   [ITU-T.Y.1540] ITU-T, "Internet protocol data communication service -
                  IP packet transfer and availability performance
                  parameters", 2011.

   [ITU-T.Y.1541] ITU-T, "Network performance objectives for IP-based
                  services", 2011.

   [RFC3809] Nagarajan, A., "Generic Requirements for Provider
             Provisioned Virtual Private Networks (PPVPN)", RFC 3809,
             June 2004.

   [CLO] Lee, Y., et al., "Problem Statement for Cross-Layer
         Optimization", work in progress.

   [CSO] Bernstein, G. and Y. Lee, "Cross Stratum Optimization Use-
         cases", work in progress.

   [EXPRESS-PATH] Atlas, A., "Performance-based Path Selection for
                  Explicitly Routed LSPs", work in progress.

   [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
             and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
             Tunnels", RFC 3209, December 2001.

   [MPLS-TP-USE-CASE] Fang, L., "MPLS-TP Applicability; Use Cases and
                      Design", draft-ietf-mpls-tp-use-cases-and-
                      design-01, work in progress.

9. Acknowledgments

   This document was prepared using 2-Word-v2.0.template.dot.

   The authors would like to thank the MPLS Review Team of Stewart
   Bryant, Daniel King and He Jia for their many helpful comments and
   suggestions in July 2012.

   Copyright (c) 2012 IETF Trust and the persons identified as authors
   of the code.  All rights reserved.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT
   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

   This code was derived from IETF RFC [insert RFC number].  Please
   reproduce this note if possible.

Authors' Addresses

   Xihua Fu
   ZTE
   Email: fu.xihua@zte.com.cn

   Vishwas Manral
   Hewlett-Packard Corp.
   19111 Pruneridge Ave.
   Cupertino, CA 95014
   US
   Phone: 408-447-1497
   Email: vishwas.manral@hp.com

   Dave McDysan
   Verizon
   Email: dave.mcdysan@verizon.com

   Andrew Malis
   Verizon
   Email: andrew.g.malis@verizon.com

   Spencer Giacalone
   Thomson Reuters
   195 Broadway
   New York, NY 10007
   US
   Phone: 646-822-3000
   Email: spencer.giacalone@thomsonreuters.com

   Malcolm Betts
   ZTE
   Email: malcolm.betts@zte.com.cn

   Qilei Wang
   ZTE
   Email: wang.qilei@zte.com.cn

   John Drake
   Juniper Networks
   Email: jdrake@juniper.net