2 RTGWG S. Ning 3 Internet-Draft Tata Communications 4 Intended status: Informational A. Malis 5 Expires: January 14, 2014 D. McDysan 6 Verizon 7 L. Yong 8 Huawei USA 9 C.
Villamizar 10 Outer Cape Cod Network 11 Consulting 12 July 13, 2013 14 Advanced Multipath Use Cases and Design Considerations 15 draft-ietf-rtgwg-cl-use-cases-04 17 Abstract 19 This document provides a set of use cases and design considerations 20 for Advanced Multipath. 22 Advanced Multipath is a formalization of multipath techniques 23 currently in use in IP and MPLS networks and a set of extensions to 24 existing multipath techniques. 26 Status of this Memo 28 This Internet-Draft is submitted in full conformance with the 29 provisions of BCP 78 and BCP 79. 31 Internet-Drafts are working documents of the Internet Engineering 32 Task Force (IETF). Note that other groups may also distribute 33 working documents as Internet-Drafts. The list of current Internet- 34 Drafts is at http://datatracker.ietf.org/drafts/current/. 36 Internet-Drafts are draft documents valid for a maximum of six months 37 and may be updated, replaced, or obsoleted by other documents at any 38 time. It is inappropriate to use Internet-Drafts as reference 39 material or to cite them other than as "work in progress." 41 This Internet-Draft will expire on January 14, 2014. 43 Copyright Notice 45 Copyright (c) 2013 IETF Trust and the persons identified as the 46 document authors. All rights reserved. 48 This document is subject to BCP 78 and the IETF Trust's Legal 49 Provisions Relating to IETF Documents 50 (http://trustee.ietf.org/license-info) in effect on the date of 51 publication of this document. Please review these documents 52 carefully, as they describe your rights and restrictions with respect 53 to this document. Code Components extracted from this document must 54 include Simplified BSD License text as described in Section 4.e of 55 the Trust Legal Provisions and are provided without warranty as 56 described in the Simplified BSD License. 58 Table of Contents 60 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 61 2. Assumptions . . . . . . . . . . . . . . . . .
. . . . . . 3 62 3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4 63 4. Multipath Foundation Use Cases . . . . . . . . . . . . . . . . 5 64 5. Delay Sensitive Applications . . . . . . . . . . . . . . . . . 8 65 6. Large Volume of IP and LDP Traffic . . . . . . . . . . . . . . 9 66 7. Multipath and Packet Ordering . . . . . . . . . . . . . . . . 9 67 7.1. MPLS-TP in network edges only . . . . . . . . . . . . . . 11 68 7.2. Multipath at core LSP ingress/egress . . . . . . . . . . . 12 69 7.3. MPLS-TP as an MPLS client . . . . . . . . . . . . . . . . 13 70 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13 71 9. Security Considerations . . . . . . . . . . . . . . . . . . . 14 72 10. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 14 73 11. Informative References . . . . . . . . . . . . . . . . . . . . 14 74 Appendix A. More Details on Existing Network Operator 75 Practices and Protocol Usage . . . . . . . . . . . . 17 76 Appendix B. Existing Multipath Standards and Techniques . . . . . 19 77 B.1. Common Multipath Load Splitting Techniques . . . . . . . . 19 78 B.2. Static and Dynamic Load Balancing Multipath . . . . . . . 21 79 B.3. Traffic Split over Parallel Links . . . . . . . . . . . . 21 80 B.4. Traffic Split over Multiple Paths . . . . . . . . . . . . 22 81 Appendix C. Characteristics of Transport in Core Networks . . . . 22 82 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 24 84 1. Introduction 86 Advanced Multipath requirements are specified in 87 [I-D.ietf-rtgwg-cl-requirement]. An Advanced Multipath framework is 88 defined in [I-D.ietf-rtgwg-cl-framework]. 90 Multipath techniques have been widely used in IP networks for over 91 two decades. The use of MPLS began more than a decade ago. 92 Multipath has been widely used in IP/MPLS networks for over a decade 93 with very little protocol support dedicated to effective use of 94 multipath.
96 The state of the art in multipath prior to Advanced Multipath is 97 documented in Appendix B. 99 Both Ethernet Link Aggregation [IEEE-802.1AX] and MPLS link bundling 100 [RFC4201] have been widely used in today's MPLS networks. Advanced 101 Multipath differs in the following characteristics. 103 1. Advanced Multipath allows bundling of non-homogeneous links 104 together as a single logical link. 106 2. Advanced Multipath provides more information in the TE-LSDB and 107 supports more explicit control over placement of LSP. 109 2. Assumptions 111 The supported services include, but are not limited to, pseudowire 112 (PW) based services ([RFC3985]), including Virtual Private Network (VPN) 113 services, Internet traffic encapsulated by at least one MPLS label 114 ([RFC3032]), and dynamically signaled MPLS ([RFC3209] or [RFC5036]) 115 or MPLS-TP Label Switched Paths (LSPs) ([RFC5921]). 117 The MPLS LSPs supporting these services may be point-to-point, point- 118 to-multipoint, or multipoint-to-multipoint. The MPLS LSPs may be 119 signaled using RSVP-TE [RFC3209] or LDP [RFC5036]. With RSVP-TE, 120 extensions to Interior Gateway Protocols (IGPs) may be used, 121 specifically OSPF-TE [RFC3630] or ISIS-TE [RFC5305]. 123 The locations in a network where these requirements apply are a Label 124 Edge Router (LER) or a Label Switch Router (LSR) as defined in 125 [RFC3031]. 127 The IP DSCP field [RFC2474] [RFC2475] cannot be used for flow 128 identification since L3VPN requires Diffserv transparency (see 129 Section 5.5.2 of RFC 4031 [RFC4031]), and in general network operators do not rely 130 on the DSCP of Internet packets. 132 3. Terminology 134 Terminology defined in [I-D.ietf-rtgwg-cl-requirement] is used in 135 this document. 137 In addition, the following terms are used: 139 classic multipath: 140 Classic multipath refers to the most common current practice in 141 implementation and deployment of multipath (see Appendix B).
The 142 most common current practice makes use of a hash on the MPLS 143 label stack and, if IPv4 or IPv6 is indicated below the label 144 stack, makes use of the IP source and destination addresses 145 [RFC4385] [RFC4928]. 147 classic link bundling: 148 Classic link bundling refers to the use of [RFC4201] where the 149 "all ones" component is not used. Where the "all ones" component 150 is used, link bundling behaves as classic multipath does. 151 Classic link bundling selects a single component link to carry 152 all of the traffic for a given LSP. 154 Among the important distinctions between classic multipath or classic 155 link bundling and Advanced Multipath are: 157 1. Classic multipath has no provision to retain packet order within 158 any specific LSP. Classic link bundling retains packet order 159 within any given LSP but as a result does a poor job of splitting 160 load among components and therefore is rarely (if ever) deployed. 161 Advanced Multipath allows per LSP control of load split 162 characteristics. 164 2. Classic multipath and classic link bundling do not provide a 165 means to put some LSP on component links with lower delay. 166 Advanced Multipath does. 168 3. Classic multipath will provide a load balance for IP and LDP 169 traffic. Classic link bundling will not. Neither classic 170 multipath nor classic link bundling will measure IP and LDP 171 traffic and reduce the advertised "Available Bandwidth" as a 172 result of that measurement. Advanced Multipath better supports 173 RSVP-TE used with significant traffic levels of native IP and 174 native LDP. 176 4. Classic link bundling cannot support an LSP that is greater in 177 capacity than any single component link. Classic multipath 178 supports this capability but may reorder traffic on such an LSP. 179 Advanced Multipath can retain packet order within an LSP that is carried 180 within an LSP that is greater in capacity than any single 181 component link if the contained LSP has such a requirement.
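The classic multipath flow placement described above (a hash on the label stack and, where present, the IP addresses below it) can be sketched as follows. This is an illustrative model only, not an algorithm defined by any of the referenced documents; the function name, the CRC-32 hash choice, and the sample labels and addresses are all assumptions of this sketch.

```python
import zlib

def select_component_link(label_stack, ip_src, ip_dst, component_links):
    """Classic multipath flow placement (illustrative sketch).

    Hashes the MPLS label stack and, when an IP header is found below
    the label stack, the IP source and destination addresses, then maps
    the hash onto one component link.  All packets of a microflow yield
    the same key, so no microflow is reordered, but the packets of one
    LSP may spread across several component links.
    """
    # MPLS labels are 20-bit values; pack each into 3 bytes.
    key = b"".join(label.to_bytes(3, "big") for label in label_stack)
    key += ip_src.encode() + ip_dst.encode()
    index = zlib.crc32(key) % len(component_links)
    return component_links[index]

links = ["component-1", "component-2", "component-3"]
# Two packets of the same microflow always select the same component link.
a = select_component_link([16001, 299776], "192.0.2.1", "198.51.100.7", links)
b = select_component_link([16001, 299776], "192.0.2.1", "198.51.100.7", links)
assert a == b
```

Because the selection is a pure function of the packet header fields, it preserves order within each microflow without any per-flow state, which is why classic multipath scales but cannot keep an entire LSP on one component link.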
183 None of these techniques, classic multipath, classic link bundling, 184 or Advanced Multipath, will reorder traffic among IP microflows. 185 None of these techniques will reorder traffic among PWs, if a PWE3 186 Control Word is used [RFC4385]. 188 4. Multipath Foundation Use Cases 190 A simple multipath composed entirely of physical links is illustrated 191 in Figure 1, where a multipath is configured between LSR1 and LSR2. 192 This multipath has three component links. Individual component links 193 in a multipath may be supported by different transport technologies 194 such as SONET, OTN, Ethernet, etc. Even if the transport technology 195 implementing the component links is identical, the characteristics 196 (e.g., bandwidth, latency) of the component links may differ. 198 The multipath in Figure 1 may carry LSP traffic flows and control 199 plane packets. Control plane packets may appear as IP packets or may 200 be carried within a generic associated channel (G-ACh) [RFC5586]. An 201 LSP may be established over the link using either the RSVP-TE [RFC3209] or 202 LDP [RFC5036] signaling protocol. All component links in a 203 multipath are summarized in the same forwarding adjacency LSP (FA- 204 LSP) routing advertisement [RFC3945]. The multipath is summarized as 205 one TE-Link advertised into the IGP by the multipath end points (the 206 LER if the multipath is MPLS based). This information is used in 207 path computation when a full MPLS control plane is in use. 209 If Advanced Multipath techniques are used, then the individual 210 component links or groups of component links may optionally be 211 advertised into the IGP as sub-TLVs of the multipath FA advertisement 212 to indicate capacity available with various characteristics, such as 213 a delay range.
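The per-characteristic advertisement just described can be modeled as a simple filter over the advertised component links: path placement considers only those component links whose delay range and available capacity satisfy the LSP's requirements. The sketch below is illustrative only; the class, field names, and sample values are assumptions of this sketch and do not correspond to any defined sub-TLV encoding.

```python
from dataclasses import dataclass

@dataclass
class ComponentLink:
    name: str
    delay_ms: float        # configured or measured delay range upper bound
    avail_bw_gbps: float   # advertised available capacity

def eligible_links(links, max_delay_ms, required_bw_gbps):
    """Return the component links able to carry an LSP with a delay bound.

    Models use of the optional per-component-link advertisement: only
    component links meeting the LSP's delay requirement and having
    sufficient available capacity are candidates for placement.
    """
    return [cl for cl in links
            if cl.delay_ms <= max_delay_ms
            and cl.avail_bw_gbps >= required_bw_gbps]

links = [ComponentLink("c1", 5.0, 40.0),
         ComponentLink("c2", 30.0, 100.0),
         ComponentLink("c3", 5.5, 10.0)]
# A 20 Gb/s LSP with a 10 ms delay bound can only be placed on c1:
# c2 exceeds the delay bound and c3 lacks the capacity.
assert [cl.name for cl in eligible_links(links, 10.0, 20.0)] == ["c1"]
```

Without such sub-TLVs, the IGP sees only the aggregate TE-Link, so a delay-sensitive LSP could be placed on a high-delay component despite low-delay capacity being available elsewhere in the same multipath.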
215 Management Plane 216 Configuration and Measurement <------------+ 217 ^ | 218 | | 219 +-------+-+ +-+-------+ 220 | | | | | | 221 CP Packets V | | V CP Packets 222 | V | | Component Link 1 | | ^ | 223 | | |=|===========================|=| | | 224 | +----| | Component Link 2 | |----+ | 225 | |=|===========================|=| | 226 Aggregated LSPs | | | | | 227 ~|~~~~~~>| | Component Link 3 | |~~~~>~~|~~ 228 | |=|===========================|=| | 229 | | | | | | 230 | LSR1 | | LSR2 | 231 +---------+ +---------+ 232 ! ! 233 ! ! 234 !<-------- Multipath ---------->! 236 Figure 1: A multipath constructed with multiple physical links 237 between two LSRs 239 [I-D.ietf-rtgwg-cl-requirement] specifies that component links may 240 themselves be multipath. This is true for most implementations even 241 prior to the Advanced Multipath work in 242 [I-D.ietf-rtgwg-cl-requirement]. For example, a component of a pre- 243 Advanced Multipath MPLS Link Bundle or ISIS or OSPF ECMP could be an 244 Ethernet LAG. In some implementations many other combinations or 245 even arbitrary combinations could be supported. Figure 2 shows 246 three forms of component links which may be deployed in a network. 248 +-------+ 1. Physical Link +-------+ 249 | |-|----------------------------------------------|-| | 250 | | | | | | 251 | | | +------+ +------+ | | | 252 | | | | MPLS | 2. Logical Link | MPLS | | | | 253 | |.|.... |......|.....................|......|....|.| | 254 | | |-----| LSR3 |---------------------| LSR4 |----| | | 255 | | | +------+ +------+ | | | 256 | | | | | | 257 | | | | | | 258 | | | +------+ +------+ | | | 259 | | | |GMPLS | 3. Logical Link |GMPLS | | | | 260 | |.|.
...|......|.....................|......|....|.| | 261 | | |-----| LSR5 |---------------------| LSR6 |----| | | 262 | | +------+ +------+ | | 263 | LSR1 | | LSR2 | 264 +-------+ +-------+ 265 |<---------------- Multipath --------------------->| 267 Figure 2: Illustration of Various Component Link Types 269 The three forms of component link shown in Figure 2 are: 271 1. The first component link is configured with direct physical media 272 plus a link layer protocol. This case also includes emulated 273 physical links, for example using pseudowire emulation. 275 2. The second component link is a TE tunnel that traverses LSR3 and 276 LSR4, where LSR3 and LSR4 are the nodes supporting MPLS, but 277 supporting few or no GMPLS extensions. 279 3. The third component link is formed by a lower layer network that 280 has GMPLS enabled. In this case, LSR5 and LSR6 are not 281 controlled by MPLS but provide the connectivity for the 282 component link. 284 A multipath forms one logical link between connected LSRs (LSR1 and 285 LSR2 in Figure 1 and Figure 2) and is used to carry aggregated 286 traffic. A multipath relies on its component links to carry the 287 traffic but must distribute or load balance the traffic. The 288 endpoints of the multipath map incoming traffic into the set of 289 component links. 291 For example, LSR1 in Figure 1 distributes the set of traffic flows 292 including control plane packets among the set of component links. 293 LSR2 in Figure 1 receives the packets from its component links and 294 sends them to the MPLS forwarding engine with no attempt to reorder 295 packets arriving on different component links. The traffic in the 296 opposite direction, from LSR2 to LSR1, is distributed across the set 297 of component links by LSR2. 299 These three forms of component link are a limited set of very simple 300 examples. Many other examples are possible. A component link may 301 itself be a multipath.
A segment of an LSP (single hop for that LSP) 302 may be a multipath. 304 5. Delay Sensitive Applications 306 Most applications benefit from lower delay. Some types of 307 applications are far more sensitive than others. For example, real 308 time bidirectional applications such as voice communication or two 309 way video conferencing are far more sensitive to delay than 310 unidirectional streaming audio or video. Non-interactive bulk 311 transfer is almost insensitive to delay if a large enough TCP window 312 is used. 314 Some applications are sensitive to delay but users of those 315 applications are unwilling to pay extra to ensure lower delay. For 316 example, many SIP end users are willing to accept the delay offered 317 to best effort services as long as call quality is good most of the 318 time. 320 Other applications are sensitive to delay and willing to pay extra to 321 ensure lower delay. For example, financial trading applications are 322 extremely sensitive to delay and with a lot at stake are willing to 323 go to great lengths to reduce delay. 325 Among the requirements of Advanced Multipath are requirements to 326 support non-homogeneous links. One solution in support of lower 327 delay links is to advertise capacity available within configured 328 ranges of delay within a given multipath and to support the ability 329 to place an LSP only on component links that meet that LSP's delay 330 requirements. 332 The Advanced Multipath requirements to accommodate delay sensitive 333 applications are analogous to Diffserv requirements to accommodate 334 applications requiring higher quality of service on the same 335 infrastructure as applications with less demanding requirements. The 336 ability to share capacity with less demanding applications, with best 337 effort applications being the least demanding, can greatly reduce the 338 cost of delivering service to the more demanding applications. 340 6.
Large Volume of IP and LDP Traffic 342 IP and LDP do not support traffic engineering. Both make use of a 343 shortest (lowest routing metric) path, with an option to use equal 344 cost multipath (ECMP). Note that though ECMP is prohibited in LDP 345 specifications, it is widely implemented. Where implemented for LDP, 346 ECMP is generally disabled by default for standards compliance, but 347 often enabled in LDP deployments. 349 Without traffic engineering capability, there must be sufficient 350 capacity to accommodate the IP and LDP traffic. If not, persistent 351 queuing delay and loss will occur. Unlike with RSVP-TE, a subset of the 352 traffic cannot be routed using constraint based routing to avoid a 353 congested portion of an infrastructure. 355 In existing networks which accommodate IP and/or LDP with RSVP-TE, 356 either the IP and LDP can be carried over RSVP-TE, or where the 357 traffic contribution of IP and LDP is small, IP and LDP can be 358 carried native and the effect on RSVP-TE can be ignored. Ignoring 359 the traffic contribution of IP is certainly valid on high capacity 360 networks where native IP is used primarily for control and network 361 management and customer IP is carried within RSVP-TE. 363 Where it is desirable to carry native IP and/or LDP, and the IP and/or LDP 364 traffic volumes are not negligible, RSVP-TE needs improvement. An 365 enhancement offered by Advanced Multipath is an ability to measure 366 the IP and LDP traffic, filter the measurements, and reduce the capacity 367 available to RSVP-TE to avoid congestion. The treatment given to the 368 IP or LDP traffic is similar to the treatment when using the "auto- 369 bandwidth" feature in some RSVP-TE implementations on that same 370 traffic, and giving a higher priority (numerically lower setup 371 priority and holding priority value) to the "auto-bandwidth" LSP. 372 The difference is that the measurement is made at each hop and the 373 reduction in advertised bandwidth is made more directly. 375 7.
Multipath and Packet Ordering 377 A strong motivation for multipath is the need to provide LSP capacity 378 in IP backbones that exceeds the capacity of single wavelengths 379 provided by transport equipment and exceeds the practical capacity 380 limits achievable through inverse multiplexing. Appendix C describes 381 characteristics and limitations of transport systems today. 382 Section 3 defines the terms "classic multipath" and "classic link 383 bundling" used in this section. 385 For purposes of discussion, consider two very large cities, city A and 386 city Z. For example, in the US high traffic cities might be New York 387 and Los Angeles and in Europe high traffic cities might be London and 388 Amsterdam. Two other high volume cities, city B and city Y, may share 389 common provider core network infrastructure. Using the same 390 examples, cities B and Y may be Washington DC and San Francisco or 391 Paris and Stockholm. In the US, the common infrastructure may span 392 Denver, Chicago, Detroit, and Cleveland. Other major traffic 393 contributors on either US coast include Boston and northern Virginia on 394 the east coast, and Seattle and San Diego on the west coast. The 395 capacities of IP/MPLS links within the shared infrastructure, for 396 example the city to city links in the Denver, Chicago, Detroit, and 397 Cleveland path in the US example, for most of the 398 2000s decade greatly exceeded the single circuits available in 399 transport networks. 401 For a case with four large traffic sources on either side of the 402 shared infrastructure, up to sixteen core city to core city traffic 403 flows in excess of transport circuit capacity may be accommodated on 404 the shared infrastructure. 406 Today the most common IP/MPLS core network design makes use of very 407 large links which consist of many smaller component links, but use 408 classic multipath techniques.
A component link typically corresponds 409 to the largest circuit that the transport system is capable of 410 providing (or the largest cost effective circuit). IP source and 411 destination address hashing is used to distribute flows across the 412 set of component links as described in Appendix B.3. 414 Classic multipath can handle large LSP up to the total capacity of 415 the multipath (within limits, see Appendix B.2). A disadvantage of 416 classic multipath is the reordering among traffic within a given core 417 city to core city LSP. While there is no reordering within any 418 microflow and therefore no customer visible issue, MPLS-TP cannot be 419 used across an infrastructure where classic multipath is in use, 420 except within pseudowires. 422 Capacity issues force the use of classic multipath today. Classic 423 multipath excludes a direct use of MPLS-TP. The desire for OAM, 424 offered by MPLS-TP, is in conflict with the use of classic multipath. 425 There are a number of alternatives that satisfy both requirements. 426 Some alternatives are described below. 428 MPLS-TP in network edges only 430 A simple approach which requires no change to the core is to 431 disallow MPLS-TP across the core unless carried within a 432 pseudowire (PW). MPLS-TP may be used within edge domains where 433 classic multipath is not used. PW may be signaled end to end 434 using single segment PW (SS-PW), or stitched across domains using 435 multisegment PW (MS-PW). The PW and anything carried within the 436 PW may use OAM as long as fat-PW [RFC6391] load splitting is not 437 used by the PW. 439 Advanced Multipath at core LSP ingress/egress 441 The interior of the core network may use classic link bundling, 442 with the limitation that no LSP can exceed the capacity of a 443 single circuit. Larger non-MPLS-TP LSP can be configured using 444 multiple ingress to egress component MPLS-TP LSP. 
This can be 445 accomplished using existing IP source and destination address 446 hashing configured at LSP ingress and egress. Each component 447 LSP, if constrained to be no larger than the capacity of a single 448 circuit, can make use of MPLS-TP and offer OAM for all top level 449 LSP across the core. 451 MPLS-TP as an MPLS client 453 A third approach involves making use of Entropy Labels [RFC6790] 454 on all MPLS-TP LSP such that the entire MPLS-TP LSP is treated as 455 a microflow by midpoint LSR, even if further encapsulated in very 456 large server layer MPLS LSP. 458 The above list of alternatives allows packet ordering within an LSP to 459 be maintained in some circumstances and allows very large LSP 460 capacities. Each of these alternatives is discussed further in the 461 following subsections. 463 7.1. MPLS-TP in network edges only 465 Classic MPLS link bundling is defined in [RFC4201] and has existed 466 since early in the 2000s decade. Classic MPLS link bundling places 467 any given LSP entirely on a single component link. Classic MPLS link 468 bundling is not in widespread use as the means to accommodate large 469 link capacities in core networks due to the simplicity, better 470 multiplexing gain, and therefore lower network cost of classic 471 multipath. 473 If MPLS-TP OAM capability in the IP/MPLS network core LSP is not 474 required, then there is no need to change existing network designs 475 which use classic multipath and both label stack and IP source and 476 destination address based hashing as a basis for load splitting. 478 If MPLS-TP is needed for a subset of LSP, then those LSP can be 479 carried within pseudowires. The pseudowires add a thin layer of 480 encapsulation and therefore a small overhead. If only a subset of 481 LSP need MPLS-TP OAM, then some LSP must make use of the pseudowires 482 and other LSP must avoid them. A straightforward way to accomplish this 483 is with administrative attributes [RFC3209]. 485 7.2.
Multipath at core LSP ingress/egress 487 Multipath can be configured for large LSP that are made of smaller 488 MPLS-TP component LSP. Some implementations already support this 489 capability, though until Advanced Multipath no IETF document required 490 it. This approach is capable of supporting MPLS-TP OAM over the 491 entire set of component link LSP and therefore the entire set of top 492 level LSP traversing the core. 494 There are two primary disadvantages of this approach. One is that the 495 number of top level LSP traversing the core can be dramatically 496 increased. The other disadvantage is the loss of multiplexing gain 497 that results from use of classic link bundling within the interior of 498 the core network. 500 If component LSP use MPLS-TP, then no component LSP can exceed the 501 capacity of a single circuit. For a given multipath LSP there can 502 either be a number of equal capacity component LSP or some number of 503 full capacity component LSP plus one LSP carrying the excess. For 504 example, a 350 Gb/s multipath LSP over a 100 Gb/s infrastructure may 505 use five 70 Gb/s component LSP or three 100 Gb/s LSP plus one 50 Gb/s 506 LSP. Classic MPLS link bundling is needed to support MPLS-TP and 507 suffers from a bin packing problem even if LSP traffic is completely 508 predictable, which it never is in practice. 510 The common means of setting very large LSP link bandwidth parameters 511 uses long term statistical measures. For example, at one time many 512 providers based their LSP bandwidth parameters on the 95th percentile 513 of carried traffic as measured over the prior one week period. It is 514 common to add 10-30% to the 95th percentile value measured over the 515 prior week and adjust bandwidth parameters of LSP weekly. It is also 516 possible to measure traffic flow at the LSR and adjust bandwidth 517 parameters somewhat more dynamically.
This is less common in 518 deployments and, where deployed, makes use of filtering to track very 519 long term trends in traffic levels. In either case, short term 520 variations of traffic levels relative to signaled LSP capacity are 521 common. Allowing a large over allocation of LSP bandwidth parameters 522 (i.e., adding 30% or more) avoids over utilization of any given LSP, 523 but increases unused network capacity and increases network cost. 524 Allowing a small over allocation of LSP bandwidth parameters (i.e., 525 10-20% or less) results in both underutilization and over utilization 526 but statistically results in a total utilization within the core that 527 is under capacity most or all of the time. 529 The classic multipath solution accommodates the situation in which 530 some very large LSP are under utilizing their signaled capacity and 531 others are over utilizing their capacity with the need for far less 532 unused network capacity to accommodate variation in actual traffic 533 levels. If the actual traffic levels of LSP can be described by a 534 probability distribution, the variation of the sum of LSP is less 535 than the variation of any given LSP for all but a constant traffic 536 level (where the variation of the sum and the variation of the 537 components are both zero). 539 Splitting very large LSP at the ingress and carrying those large LSP 540 within smaller MPLS-TP component LSP and then using classic link 541 bundling to carry the MPLS-TP LSP is a viable approach. However, this 542 approach loses the statistical gain discussed in the prior 543 paragraphs. Losing this statistical gain drives up network costs 544 necessary to achieve the same very low probability of only mild 545 congestion that is expected of provider networks. 547 There are two situations which can motivate the use of this approach. 548 This design is favored if the provider values MPLS-TP OAM across the 549 core more than efficiency (or is unaware of the efficiency issue).
550 This design can also make sense if transport equipment or very low 551 cost core LSR are available which support only classic link bundling 552 and, regardless of the loss of multiplexing gain, are more cost effective 553 at carrying transit traffic than using equipment which supports IP 554 source and destination address hashing. 556 7.3. MPLS-TP as an MPLS client 558 Accommodating MPLS-TP as an MPLS client requires the small change to 559 forwarding behavior necessary to support [RFC6790] and is therefore 560 most applicable to major network overbuilds or new deployments. This 561 approach is described in [I-D.ietf-mpls-multipath-use] and makes use 562 of Entropy Labels [RFC6790] to prevent reordering of MPLS-TP LSP or 563 any other LSP which requires that its traffic not be reordered for 564 OAM or other reasons. 566 The advantage of this approach is an ability to accommodate MPLS-TP 567 as a client LSP but retain the high multiplexing gain and therefore 568 efficiency and low network cost of a pure MPLS deployment. The 569 disadvantage is the need for a small change in forwarding to support 570 [RFC6790]. 572 8. IANA Considerations 574 This memo includes no request to IANA. 576 9. Security Considerations 578 This document is a use cases document. Existing protocols, such as MPLS, are 579 referenced. Existing techniques such as MPLS link 580 bundling and multipath techniques are referenced. These protocols 581 and techniques are documented elsewhere and contain security 582 considerations which are unchanged by this document. 584 This document also describes use cases for multipath and Advanced 585 Multipath. Advanced Multipath requirements are defined in 586 [I-D.ietf-rtgwg-cl-requirement]. [I-D.ietf-rtgwg-cl-framework] 587 defines a framework for Advanced Multipath. Advanced Multipath bears 588 many similarities to MPLS link bundling and multipath techniques used 589 with MPLS.
Additional security considerations, if any, beyond those 590 already identified for MPLS, MPLS link bundling and multipath 591 techniques, will be documented in the framework document if specific 592 to the overall framework of Advanced Multipath, or in protocol 593 extensions if specific to a given protocol extension defined later to 594 support Advanced Multipath. 596 10. Acknowledgments 598 In the interest of full disclosure of affiliation and in the interest 599 of acknowledging sponsorship, past affiliations of authors are noted. 600 Much of the work done by Ning So occurred while Ning was at Verizon. 601 Much of the work done by Curtis Villamizar occurred while at 602 Infinera. 604 11. Informative References 606 [I-D.ietf-mpls-multipath-use] 607 Villamizar, C., "Use of Multipath with MPLS-TP and MPLS", 608 draft-ietf-mpls-multipath-use-00 (work in progress), 609 February 2013. 611 [I-D.ietf-rtgwg-cl-framework] 612 Ning, S., McDysan, D., Osborne, E., Yong, L., and C. 613 Villamizar, "Composite Link Framework in Multi Protocol 614 Label Switching (MPLS)", draft-ietf-rtgwg-cl-framework-03 615 (work in progress), June 2013. 617 [I-D.ietf-rtgwg-cl-requirement] 618 Villamizar, C., McDysan, D., Ning, S., Malis, A., and L. 619 Yong, "Requirements for Advanced Multipath in MPLS 620 Networks", draft-ietf-rtgwg-cl-requirement-11 (work in 621 progress), July 2013. 623 [IEEE-802.1AX] 624 IEEE Standards Association, "IEEE Std 802.1AX-2008 IEEE 625 Standard for Local and Metropolitan Area Networks - Link 626 Aggregation", 2008. 629 [ITU-T.G.694.2] 630 ITU-T, "Spectral grids for WDM applications: CWDM 631 wavelength grid", 2003. 634 [RFC1717] Sklower, K., Lloyd, B., McGregor, G., and D. Carr, "The 635 PPP Multilink Protocol (MP)", RFC 1717, November 1994. 637 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black, 638 "Definition of the Differentiated Services Field (DS 639 Field) in the IPv4 and IPv6 Headers", RFC 2474, 640 December 1998.
642 [RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., 643 and W. Weiss, "An Architecture for Differentiated 644 Services", RFC 2475, December 1998. 646 [RFC2597] Heinanen, J., Baker, F., Weiss, W., and J. Wroclawski, 647 "Assured Forwarding PHB Group", RFC 2597, June 1999. 649 [RFC2615] Malis, A. and W. Simpson, "PPP over SONET/SDH", RFC 2615, 650 June 1999. 652 [RFC2991] Thaler, D. and C. Hopps, "Multipath Issues in Unicast and 653 Multicast Next-Hop Selection", RFC 2991, November 2000. 655 [RFC2992] Hopps, C., "Analysis of an Equal-Cost Multi-Path 656 Algorithm", RFC 2992, November 2000. 658 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol 659 Label Switching Architecture", RFC 3031, January 2001. 661 [RFC3032] Rosen, E., Tappan, D., Fedorkow, G., Rekhter, Y., 662 Farinacci, D., Li, T., and A. Conta, "MPLS Label Stack 663 Encoding", RFC 3032, January 2001. 665 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 666 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP 667 Tunnels", RFC 3209, December 2001. 669 [RFC3260] Grossman, D., "New Terminology and Clarifications for 670 Diffserv", RFC 3260, April 2002. 672 [RFC3270] Le Faucheur, F., Wu, L., Davie, B., Davari, S., Vaananen, 673 P., Krishnan, R., Cheval, P., and J. Heinanen, "Multi- 674 Protocol Label Switching (MPLS) Support of Differentiated 675 Services", RFC 3270, May 2002. 677 [RFC3630] Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering 678 (TE) Extensions to OSPF Version 2", RFC 3630, 679 September 2003. 681 [RFC3809] Nagarajan, A., "Generic Requirements for Provider 682 Provisioned Virtual Private Networks (PPVPN)", RFC 3809, 683 June 2004. 685 [RFC3945] Mannie, E., "Generalized Multi-Protocol Label Switching 686 (GMPLS) Architecture", RFC 3945, October 2004. 688 [RFC3985] Bryant, S. and P. Pate, "Pseudo Wire Emulation Edge-to- 689 Edge (PWE3) Architecture", RFC 3985, March 2005. 691 [RFC4031] Carugi, M. and D. 
McDysan, "Service Requirements for Layer 692 3 Provider Provisioned Virtual Private Networks (PPVPNs)", 693 RFC 4031, April 2005. 695 [RFC4124] Le Faucheur, F., "Protocol Extensions for Support of 696 Diffserv-aware MPLS Traffic Engineering", RFC 4124, 697 June 2005. 699 [RFC4201] Kompella, K., Rekhter, Y., and L. Berger, "Link Bundling 700 in MPLS Traffic Engineering (TE)", RFC 4201, October 2005. 702 [RFC4385] Bryant, S., Swallow, G., Martini, L., and D. McPherson, 703 "Pseudowire Emulation Edge-to-Edge (PWE3) Control Word for 704 Use over an MPLS PSN", RFC 4385, February 2006. 706 [RFC4928] Swallow, G., Bryant, S., and L. Andersson, "Avoiding Equal 707 Cost Multipath Treatment in MPLS Networks", BCP 128, 708 RFC 4928, June 2007. 710 [RFC5036] Andersson, L., Minei, I., and B. Thomas, "LDP 711 Specification", RFC 5036, October 2007. 713 [RFC5305] Li, T. and H. Smit, "IS-IS Extensions for Traffic 714 Engineering", RFC 5305, October 2008. 716 [RFC5586] Bocci, M., Vigoureux, M., and S. Bryant, "MPLS Generic 717 Associated Channel", RFC 5586, June 2009. 719 [RFC5921] Bocci, M., Bryant, S., Frost, D., Levrau, L., and L. 721 Berger, "A Framework for MPLS in Transport Networks", 722 RFC 5921, July 2010. 724 [RFC6391] Bryant, S., Filsfils, C., Drafz, U., Kompella, V., Regan, 725 J., and S. Amante, "Flow-Aware Transport of Pseudowires 726 over an MPLS Packet Switched Network", RFC 6391, 727 November 2011. 729 [RFC6790] Kompella, K., Drake, J., Amante, S., Henderickx, W., and 730 L. Yong, "The Use of Entropy Labels in MPLS Forwarding", 731 RFC 6790, November 2012. 733 Appendix A. More Details on Existing Network Operator Practices and 734 Protocol Usage 736 Often, network operators have a contractual Service Level Agreement 737 (SLA) with customers for services that are comprised of numerical 738 values for performance measures, principally availability, latency, 739 delay variation. 
Additionally, network operators may have performance objectives for internal use by the operator. See Section 4.9 of [RFC3809] for examples of the form of such SLA and performance objective specifications. In this document we use the term Performance Objective as defined in [I-D.ietf-rtgwg-cl-requirement]. Applications and acceptable user experience have an important relationship to these performance parameters.

Consider latency as an example. In some cases, minimizing latency relates directly to the best customer experience (for example, in interactive applications closer is faster). In other cases, user experience is relatively insensitive to latency, up to a specific limit at which point user perception of quality degrades significantly (e.g., interactive human voice and multimedia conferencing). A number of Performance Objectives have a bound on point-to-point latency, and as long as this bound is met, the Performance Objective is met -- decreasing the latency further is not necessary. In some Performance Objectives, if the specified latency is not met, the user considers the service unavailable. An unprotected LSP can be manually provisioned on a set of links to meet this type of Performance Objective, but this lowers availability since an alternate route that meets the latency Performance Objective cannot be determined.

Historically, when an IP/MPLS network was operated over a lower layer circuit switched network (e.g., SONET rings), a change in latency caused by the lower layer network (e.g., due to a maintenance action or failure) was not known to the MPLS network. This resulted in latency affecting end user experience, sometimes violating Performance Objectives or resulting in user complaints.
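To make the latency-bound form of Performance Objective concrete, the following sketch (illustrative only and not part of this draft; the function names and latency values are assumptions) sums per-link latencies along a provisioned path and checks the total against a bound:

```python
def path_latency_ms(link_latencies_ms: list[float]) -> float:
    """Total one-way path latency is the sum of the latencies of the
    links the LSP is provisioned over (propagation plus fixed
    per-hop delay)."""
    return sum(link_latencies_ms)

def meets_latency_po(link_latencies_ms: list[float], bound_ms: float) -> bool:
    """A latency-bound Performance Objective is met as long as the
    path latency does not exceed the bound; decreasing latency
    further below the bound gains nothing for this type of PO."""
    return path_latency_ms(link_latencies_ms) <= bound_ms
```

For example, a three-link path with latencies of 10, 12.5, and 7.5 ms meets a 40 ms bound, while a two-link path totaling 50 ms does not.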
A response to this problem was to provision IP/MPLS networks over unprotected circuits and set the metric and/or TE-metric proportional to latency. This resulted in traffic being directed over the least latency path, even when this was not needed to meet a Performance Objective or to meet user experience objectives, which reduces flexibility and increases cost for network operators. Some providers prefer to use lower layer networks to provide restoration and grooming, but the inability to communicate performance parameters, in particular latency, from the lower layer network to the higher layer network is an important problem to be solved before this can be done.

Latency Performance Objectives for point-to-point services are often tied closely to geographic locations, while latency for multipoint services may be based upon a worst case within a region.

The time frames for restoration (i.e., as implemented by predetermined protection, convergence of routing protocols, and/or signaling) for services range from on the order of 100 ms or less (e.g., for VPWS to emulate classical SDH/SONET protection switching) to several minutes (e.g., to allow BGP to reconverge for L3VPN), and may differ among the set of customers within a single service.

The presence of only three Traffic Class (TC) bits (previously known as EXP bits) in the MPLS shim header is limiting when a network operator needs to support QoS classes for multiple services (e.g., L2VPN VPWS, VPLS, L3VPN, and Internet), each of which has a set of QoS classes that need to be supported, and where the operator prefers to use only E-LSP [RFC3270]. In some cases one bit is used to indicate conformance to some ingress traffic classification, leaving only two bits for indicating the service QoS classes.
One approach that has been taken is to aggregate these QoS classes into similar sets on LER-LSR and LSR-LSR links and continue to use only E-LSP. Another approach is to use L-LSP as defined in [RFC3270], or to use the Class-Type as defined in [RFC4124], to support up to eight mappings of TC into Per-Hop Behavior (PHB).

The IP DSCP cannot be used for flow identification. The use of IP DSCP for flow identification is incompatible with Assured Forwarding services [RFC2597] or any other service which may use more than one DSCP code point to carry traffic for a given microflow. In general, network operators do not rely on the DSCP of Internet packets in core networks, but must preserve DSCP values for use closer to network edges.

A label is pushed onto Internet packets when they are carried along with L2/L3VPN packets on the same link or lower layer network; this provides a means to distinguish between the QoS classes for these packets.

Operating an MPLS-TE network involves a different paradigm from operating an IGP metric-based LDP signaled MPLS network. The multipoint-to-point LDP signaled MPLS LSPs occur automatically, and balancing across parallel links occurs if the IGP metrics are set "equally" (with equality a locally definable relation) and if ECMP is enabled for LDP, which large network operators generally do.

Traffic is typically comprised of large (some very large) flows and a much larger number of small flows. In some cases, separate LSPs are established for very large flows. Very large microflows can occur even if the IP header information is inspected by an LSR. For example, an IPsec tunnel that carries a large amount of traffic must be carried as a single large flow.
An important example of large flows is that of an L2/L3 VPN customer who has an access line bandwidth comparable to a client-client component link bandwidth -- there could be flows that are on the order of the access line bandwidth.

Appendix B. Existing Multipath Standards and Techniques

Today the requirement to handle large aggregations of traffic, much larger than a single component link, can be handled by a number of techniques which we will collectively call multipath. Multipath applied to parallel links between the same set of nodes includes Ethernet Link Aggregation [IEEE-802.1AX], link bundling [RFC4201], and other aggregation techniques, some of which may be vendor specific. Multipath applied to diverse paths rather than parallel links includes Equal Cost MultiPath (ECMP) as applied to OSPF, ISIS, LDP, or even BGP, and equal cost LSP, as described in Appendix B.4. Various multipath techniques have strengths and weaknesses.

Existing multipath techniques solve the problem of large aggregations of traffic without addressing the other requirements outlined in this document, particularly those described in Section 5 and Section 6.

B.1. Common Multipath Load Splitting Techniques

Identical load balancing techniques are used for multipath both over parallel links and over diverse paths.

Large aggregates of IP traffic do not provide explicit signaling to indicate the expected traffic loads. Large aggregates of MPLS traffic are carried in MPLS tunnels supported by MPLS LSP. LSP which are signaled using RSVP-TE extensions do provide explicit signaling which includes the expected traffic load for the aggregate. LSP which are signaled using LDP do not provide an expected traffic load.

MPLS LSP may contain other MPLS LSP arranged hierarchically.
When an MPLS LSR serves as a midpoint LSR in an LSP carrying client LSP as payload, there is no signaling associated with these client LSP. Therefore, even when using RSVP-TE signaling, there may be insufficient information provided by signaling to adequately distribute load based solely on signaling.

Generally a set of label stack entries that is unique across the ordered set of label numbers in the label stack can safely be assumed to contain a group of flows. The reordering of traffic can therefore be considered acceptable unless reordering occurs within traffic containing a common unique set of label stack entries. Existing load splitting techniques take advantage of this property, in addition to looking beyond the bottom of the label stack and determining if the payload is IPv4 or IPv6 to load balance traffic accordingly.

MPLS-TP OAM violates the assumption that it is safe to reorder traffic within an LSP. If MPLS-TP OAM is to be accommodated, then existing multipath techniques must be modified. [RFC6790] and [I-D.ietf-mpls-multipath-use] provide a solution but require a small forwarding change.

For example, a large aggregate of IP traffic may be subdivided into a large number of groups of flows using a hash on the IP source and destination addresses. This is as described in [RFC2475] and clarified in [RFC3260]. For MPLS traffic carrying IP, a similar hash can be performed on the set of labels in the label stack. These techniques are both examples of means to subdivide traffic into groups of flows for the purpose of load balancing traffic across aggregated link capacity. The means of identifying a group of flows should not be confused with the definition of a flow.
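As a minimal illustrative sketch (not part of this draft; the hash function, field selection, and number of groups are all assumptions), the grouping described above maps every packet of a given microflow to the same group, since all of its packets share the hashed fields:

```python
import hashlib
import struct

def flow_group(src_ip: bytes, dst_ip: bytes, num_groups: int) -> int:
    """Map an IP packet to one of num_groups flow groups by hashing
    its source and destination addresses.  All packets of any one
    microflow carry the same addresses, so they fall in the same
    group and are never reordered relative to one another."""
    digest = hashlib.sha256(src_ip + dst_ip).digest()
    return struct.unpack(">I", digest[:4])[0] % num_groups

def label_stack_group(labels: list[int], num_groups: int) -> int:
    """For MPLS traffic, hash the ordered set of labels in the
    label stack instead of the IP header fields."""
    data = b"".join(struct.pack(">I", label) for label in labels)
    digest = hashlib.sha256(data).digest()
    return struct.unpack(">I", digest[:4])[0] % num_groups
```

A keyed or hardware-specific hash would be used in practice; the point is only that the group index is a deterministic function of fields shared by all packets of a microflow.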
Discussion of whether a hash based approach provides a sufficiently even load balance using any particular hashing algorithm or method of distributing traffic across a set of component links is outside the scope of this document.

The current load balancing techniques are referenced in [RFC4385] and [RFC4928]. The use of three hash based approaches is described in [RFC2991] and [RFC2992]. A mechanism to identify flows within PW is described in [RFC6391]. The use of hash based approaches is mentioned as an example of an existing set of techniques to distribute traffic over a set of component links. Other techniques are not precluded.

B.2. Static and Dynamic Load Balancing Multipath

Static multipath generally relies on the mathematical probability that, given a very large number of small microflows, these microflows will tend to be distributed evenly across a hash space. Early static multipath implementations assumed that all component links were of equal capacity and performed a modulo operation across the hashed value. An alternate static multipath technique uses a table, generally with a power of two size, and distributes the table entries proportionally among component links according to the capacity of each component link.

Static load balancing works well if there are a very large number of small microflows (i.e., the microflow rate is much less than the component link capacity). However, the case where there are even a few large microflows is not handled well by static load balancing.

A dynamic load balancing multipath technique is one where the traffic bound to each component link is measured and the load split is adjusted accordingly. As long as the adjustment is done within a single network element, then no protocol extensions are required and there are no interoperability issues.
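The table-based static technique described above can be sketched as follows (an illustrative sketch only, not from this draft; the table size and rounding behavior are assumptions):

```python
def build_multipath_table(link_capacities: list[float],
                          table_size: int = 256) -> list[int]:
    """Build a load-splitting table (typically a power-of-two size)
    whose entries are component link indices, distributed in
    proportion to each link's capacity."""
    total = sum(link_capacities)
    table = []
    for link, capacity in enumerate(link_capacities):
        # each link gets a share of entries proportional to its capacity
        share = round(capacity / total * table_size)
        table.extend([link] * share)
    # trim or pad to exactly table_size in case of rounding drift
    table = table[:table_size]
    table.extend([len(link_capacities) - 1] * (table_size - len(table)))
    return table

def select_link(flow_hash: int, table: list[int]) -> int:
    """Select a component link by indexing the table with the hash
    computed over the flow group's fields."""
    return table[flow_hash % len(table)]
```

With component links of 10, 10, and 40 units of capacity, roughly two thirds of the table entries point at the large link, so flow groups land on it with proportional probability; a dynamic variant would then remeasure per-link load and rewrite table entries accordingly.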
Note that if the load balancing algorithm and/or its parameters are adjusted, then packets in some flows may be briefly delivered out of sequence; however, in practice such adjustments can be made very infrequently.

B.3. Traffic Split over Parallel Links

The load splitting techniques defined in Appendix B.1 and Appendix B.2 are both used in splitting traffic over parallel links between the same pair of nodes. The best known technique, though far from being the first, is Ethernet Link Aggregation [IEEE-802.1AX]. This same technique had been applied much earlier using OSPF or ISIS Equal Cost MultiPath (ECMP) over parallel links between the same nodes. Multilink PPP [RFC1717] uses a technique that provides inverse multiplexing; however, a number of vendors had provided proprietary extensions to PPP over SONET/SDH [RFC2615] that predated Ethernet Link Aggregation but are no longer used.

Link bundling [RFC4201] provides yet another means of handling parallel LSP. [RFC4201] explicitly allows a special value of all ones to indicate a split across all members of the bundle. This "all ones" component link is signaled in the MPLS RESV to indicate that the link bundle is making use of classic multipath techniques.

B.4. Traffic Split over Multiple Paths

OSPF or ISIS Equal Cost MultiPath (ECMP) is a well known form of traffic split over multiple paths that may traverse intermediate nodes. ECMP is often incorrectly equated to only this case, and multipath over multiple diverse paths is often incorrectly equated to ECMP.

Many implementations are able to create more than one LSP between a pair of nodes, where these LSP are routed diversely to better make use of available capacity. The load on these LSP can be distributed proportionally to the reserved bandwidth of the LSP.
These multiple LSP may be advertised as a single PSC FA, and any LSP making use of the FA may be split over these multiple LSP.

Link bundling [RFC4201] component links may themselves be LSP. When this technique is used, any LSP which specifies the link bundle may be split across the multiple paths of the component LSP that comprise the bundle.

Appendix C. Characteristics of Transport in Core Networks

The characteristics of primary interest are the capacity of a single circuit and the use of wave division multiplexing (WDM) to provide a large number of parallel circuits.

Wave division multiplexing (WDM) supports multiple independent channels (independent ignoring crosstalk noise) at slightly different wavelengths of light, multiplexed onto a single fiber. Typical in the early 2000s was 40 wavelengths of 10 Gb/s capacity per wavelength. These wavelengths are in the C-band range, which is about 1530-1565 nm, though some work has been done using the L-band, 1565-1625 nm.

The C-band has been carved up using a 100 GHz spacing from 191.7 THz to 196.1 THz by [ITU-T.G.694.2]. This yields 44 channels. If the outermost channels are not used, due to poorer transmission characteristics, then typically 40 are used. For practical reasons, a 50 GHz or 25 GHz spacing is used by more recent equipment, yielding 80 or 160 channels in practice.

The early optical modulation techniques used within a single channel yielded 2.5 Gb/s and 10 Gb/s capacity per channel. As modulation techniques have improved, 40 Gb/s and 100 Gb/s per channel have been achieved.

The 40 channels of 10 Gb/s common in the mid 2000s yield a total of 400 Gb/s. Tighter spacing and better modulations are yielding up to 8 Tb/s or more in more recent systems.

Over the optical modulation is an electrical encoding.
In the 1990s this was typically Synchronous Optical Networking (SONET) or Synchronous Digital Hierarchy (SDH), with a maximum defined circuit capacity of 40 Gb/s (OC-768), though the 10 Gb/s OC-192 is more common. More recently the low level electrical encoding has been Optical Transport Network (OTN) defined by ITU-T. OTN currently defines circuit capacities up to a nominal 100 Gb/s (ODU4). Both SONET/SDH and OTN make use of time division multiplexing (TDM), where a higher capacity circuit, such as a 100 Gb/s ODU4 in OTN, may be subdivided into lower fixed capacity circuits, such as ten 10 Gb/s ODU2.

In the 1990s, all IP and later IP/MPLS networks either used a fraction of maximum circuit capacity or, toward the end of the decade, at most the full circuit capacity, when full circuit capacity was 2.5 Gb/s or 10 Gb/s. Beyond 2000, the TDM circuit multiplexing capability of SONET/SDH or OTN was rarely used.

Early in the 2000s both transport equipment and core LSR offered 40 Gb/s SONET OC-768. However, 10 Gb/s transport equipment was predominantly deployed throughout the decade, partially because LSR 10GbE ports were far more cost effective than either OC-192 or OC-768, and 10GbE became practical in the second half of the decade.

Entering the 2010 decade, LSR 40GbE and 100GbE are expected to become widely available and cost effective. Slightly preceding this, transport equipment making use of 40 Gb/s and 100 Gb/s modulations is becoming available. This transport equipment is capable of carrying 40 Gb/s ODU3 and 100 Gb/s ODU4 circuits.

Early in the 2000s decade, IP/MPLS core networks were making use of single 10 Gb/s circuits. Capacity grew quickly in the first half of the decade, but most IP/MPLS core networks had only a small number of IP/MPLS links requiring 4-8 parallel 10 Gb/s circuits.
However, the use of multipath was necessary, was deemed the simplest and most cost effective alternative, and became thoroughly entrenched. By the end of the 2000s decade nearly all major IP/MPLS core service provider networks and a few content provider networks had IP/MPLS links which exceeded 100 Gb/s, long before 40GbE was available and before 40 Gb/s transport was in widespread use.

It is less clear when IP/MPLS LSP exceeded 10 Gb/s, 40 Gb/s, and 100 Gb/s. By 2010, many service providers had LSP in excess of 100 Gb/s, but few are willing to disclose how many LSP have reached this capacity.

By 2012, 40GbE and 100GbE LSR products had become available, but were mostly still being evaluated or in trial use by service providers and content providers. The cost of components required to deliver 100GbE products remained high, making these products less cost effective. This is expected to change within a few years.

The important point is that IP/MPLS core network links long ago exceeded 100 Gb/s, some may have already exceeded 1 Tb/s, and a small number of IP/MPLS LSP exceed 100 Gb/s. By the time 100 Gb/s circuits are widely deployed, many IP/MPLS core network links are likely to exceed 1 Tb/s and many IP/MPLS LSP capacities are likely to exceed 100 Gb/s. The growth in service provider traffic has consistently outpaced growth in DWDM channel capacities and the growth in capacity of single interfaces, and is expected to continue to do so. Therefore multipath techniques are likely here to stay.
1073 Authors' Addresses 1075 So Ning 1076 Tata Communications 1078 Email: ning.so@tatacommunications.com 1080 Andrew Malis 1081 Verizon 1082 60 Sylvan Road 1083 Waltham, MA 02451 1084 USA 1086 Phone: +1 781-466-2362 1087 Email: andrew.g.malis@verizon.com 1089 Dave McDysan 1090 Verizon 1091 22001 Loudoun County PKWY 1092 Ashburn, VA 20147 1093 USA 1095 Email: dave.mcdysan@verizon.com 1096 Lucy Yong 1097 Huawei USA 1098 5340 Legacy Dr. 1099 Plano, TX 75025 1100 USA 1102 Phone: +1 469-277-5837 1103 Email: lucy.yong@huawei.com 1105 Curtis Villamizar 1106 Outer Cape Cod Network Consulting 1108 Email: curtis@occnc.com