RTGWG                                                            S. Ning
Internet-Draft                                       Tata Communications
Intended status: Informational                                  A. Malis
Expires: November 15, 2014                                    Consultant
                                                              D. McDysan
                                                                 Verizon
                                                                 L. Yong
                                                              Huawei USA
                                                           C. Villamizar
                                       Outer Cape Cod Network Consulting
                                                            May 14, 2014

        Advanced Multipath Use Cases and Design Considerations
                   draft-ietf-rtgwg-cl-use-cases-06

Abstract

Advanced Multipath is a formalization of multipath techniques currently in use in IP and MPLS networks and a set of extensions to existing multipath techniques.
This document provides a set of use cases and design considerations for Advanced Multipath. Existing practices are described. Use cases made possible through Advanced Multipath extensions are described.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on November 15, 2014.

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . .  2
2. Assumptions  . . . . . . . . . . . . . . . . . . . . . . . .  3
3. Terminology  . . . . . . . . . . . . . . . . . . . . . . . .  3
4. Multipath Foundation Use Cases . . . . . . . . . . . . . . .  5
5. Advanced Multipath Use Cases . . . . . . . . . . . . . . . .  8
5.1.
Delay Sensitive Applications  . . . . . . . . . . . . . .  8
5.2. Large Volume of IP and LDP Traffic  . . . . . . . . . . .  9
5.3. Multipath and Packet Ordering . . . . . . . . . . . . . .  9
5.3.1. MPLS-TP in network edges only . . . . . . . . . . . . 11
5.3.2. Multipath at core LSP ingress/egress  . . . . . . . . 12
5.3.3. MPLS-TP as an MPLS client . . . . . . . . . . . . . . 13
6. IANA Considerations  . . . . . . . . . . . . . . . . . . . . 13
7. Security Considerations  . . . . . . . . . . . . . . . . . . 14
8. Acknowledgments  . . . . . . . . . . . . . . . . . . . . . . 14
9. Informative References . . . . . . . . . . . . . . . . . . . 14
Appendix A. Network Operator Practices and Protocol Usage  . . . 17
Appendix B. Existing Multipath Standards and Techniques  . . . . 19
B.1. Common Multipath Load Splitting Techniques . . . . . . . 19
B.2. Static and Dynamic Load Balancing Multipath  . . . . . . 20
B.3. Traffic Split over Parallel Links  . . . . . . . . . . . 21
B.4. Traffic Split over Multiple Paths  . . . . . . . . . . . 21
Appendix C. Characteristics of Transport in Core Networks  . . . 22
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 24

1. Introduction

Advanced Multipath requirements are specified in [RFC7226]. An Advanced Multipath framework is defined in [I-D.ietf-rtgwg-cl-framework].

Multipath techniques have been widely used in IP networks for over two decades. The use of MPLS began more than a decade ago. Multipath has been widely used in IP/MPLS networks for over a decade with very little protocol support dedicated to effective use of multipath.

The state of the art in multipath prior to Advanced Multipath is documented in Appendix B.

Both Ethernet Link Aggregation [IEEE-802.1AX] and MPLS link bundling [RFC4201] have been widely used in today's MPLS networks. Advanced Multipath differs in the following characteristics.

1.
Advanced Multipath allows bundling of non-homogeneous links together as a single logical link.

2. Advanced Multipath provides more information in the TE-LSDB and supports more explicit control over placement of LSP.

2. Assumptions

The supported services include, but are not limited to, pseudowire (PW) based services ([RFC3985]), including Virtual Private Network (VPN) services, Internet traffic encapsulated by at least one MPLS label ([RFC3032]), and dynamically signaled MPLS ([RFC3209] or [RFC5036]) or MPLS-TP Label Switched Paths (LSPs) ([RFC5921]).

The MPLS LSPs supporting these services may be point-to-point, point-to-multipoint, or multipoint-to-multipoint. The MPLS LSPs may be signaled using RSVP-TE [RFC3209] or LDP [RFC5036]. With RSVP-TE, extensions to Interior Gateway Protocols (IGPs) may be used, specifically OSPF-TE [RFC3630] or ISIS-TE [RFC5305].

The locations in a network where these requirements apply are a Label Edge Router (LER) or a Label Switch Router (LSR) as defined in [RFC3031].

The IP DSCP field [RFC2474] [RFC2475] cannot be used for flow identification, since L3VPN requires Diffserv transparency (see Section 5.5.2 of [RFC4031]), and in general network operators do not rely on the DSCP of Internet packets.

3. Terminology

Terminology defined in [RFC7226] and [RFC7190] is used in this document.

In addition, the following terms are used:

classic multipath:
Classic multipath refers to the most common current practice in implementation and deployment of multipath (see Appendix B). The most common current practice when applied to MPLS traffic makes use of a hash on the MPLS label stack and, if IPv4 or IPv6 is indicated under the label stack, makes use of the IP source and destination addresses [RFC4385] [RFC4928].
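The hash-based load split in this definition can be sketched as follows. This is an illustrative model only; real routers hash in hardware with vendor-specific functions, and the field handling here is a simplification:

```python
import hashlib

def select_component_link(label_stack, ip_src, ip_dst, n_links):
    # Classic multipath sketch: hash the MPLS label stack and, when an
    # IP header is visible below it, the IP source and destination
    # addresses.  Every packet of a flow produces the same hash, so
    # packet order within a flow is preserved while distinct flows
    # spread across the component links.
    key = ",".join(str(label) for label in label_stack)
    if ip_src and ip_dst:
        key += "," + ip_src + "," + ip_dst
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_links

# Packets of the same flow always map to the same component link.
a = select_component_link([16001, 24005], "192.0.2.1", "198.51.100.2", 3)
b = select_component_link([16001, 24005], "192.0.2.1", "198.51.100.2", 3)
assert a == b and 0 <= a < 3
```

Because the split is per flow rather than per LSP, two LSPs whose keys hash alike share a component link, which is the source of the uneven loading discussed in Appendix B.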
classic link bundling:
Classic link bundling refers to the use of [RFC4201] where the "all ones" component is not used. Where the "all ones" component is used, link bundling behaves as classic multipath does. Classic link bundling selects a single component link to carry all of the traffic for a given LSP.

Among the important distinctions between classic multipath or classic link bundling and Advanced Multipath are:

1. Classic multipath has no provision to retain packet order within any specific LSP. Classic link bundling retains packet order within any given LSP but as a result does a poor job of splitting load among components and therefore is rarely (if ever) deployed. Advanced Multipath allows per LSP control of load split characteristics.

2. Classic multipath and classic link bundling do not provide a means to put some LSP on component links with lower delay. Advanced Multipath does.

3. Classic multipath will provide a load balance for IP and LDP traffic. Classic link bundling will not. Neither classic multipath nor classic link bundling will measure IP and LDP traffic and reduce the RSVP-TE advertised "Available Bandwidth" as a result of that measurement. Advanced Multipath better supports RSVP-TE used with significant traffic levels of native IP and native LDP.

4. Classic link bundling cannot support an LSP that is greater in capacity than any single component link. Classic multipath supports this capability but may reorder traffic on such an LSP. Advanced Multipath can retain the order of an LSP that is carried within an LSP that is greater in capacity than any single component link, if the contained LSP has such a requirement.

None of these techniques, classic multipath, classic link bundling, or Advanced Multipath, will reorder traffic among IP microflows.
None of these techniques will reorder traffic among PW, if a PWE3 Control Word is used [RFC4385].

4. Multipath Foundation Use Cases

A simple multipath composed entirely of physical links is illustrated in Figure 1, where a multipath is configured between LSR1 and LSR2. This multipath has three component links. Individual component links in a multipath may be supported by different transport technologies such as SONET, OTN, Ethernet, etc. Even if the transport technology implementing the component links is identical, the characteristics (e.g., bandwidth, latency) of the component links may differ.

The multipath in Figure 1 may carry LSP traffic flows and control plane packets. Control plane packets may appear as IP packets or may be carried within a generic associated channel (G-ACh) [RFC5586]. An LSP may be established over the link by either the RSVP-TE [RFC3209] or LDP [RFC5036] signaling protocol. All component links in a multipath are summarized in the same forwarding adjacency LSP (FA-LSP) routing advertisement [RFC3945]. The multipath is summarized as one TE-Link advertised into the IGP by the multipath end points (the LER if the multipath is MPLS based). This information is used in path computation when a full MPLS control plane is in use.

If Advanced Multipath techniques are used, then the individual component links or groups of component links may optionally be advertised into the IGP as sub-TLVs of the multipath FA advertisement to indicate the capacity available with various characteristics, such as a delay range.
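The per-delay-range advertisement and placement just described can be sketched as follows. The attribute names are hypothetical illustrations, not an IGP sub-TLV format:

```python
def links_meeting_requirements(component_links, max_delay_ms, needed_gbps):
    # Advanced Multipath sketch: keep only component links whose
    # advertised delay satisfies the LSP's delay bound and whose
    # available capacity can hold the LSP.  Field names are
    # illustrative only.
    return [link["name"] for link in component_links
            if link["delay_ms"] <= max_delay_ms
            and link["avail_gbps"] >= needed_gbps]

links = [
    {"name": "link1", "delay_ms": 5.0,  "avail_gbps": 40.0},
    {"name": "link2", "delay_ms": 30.0, "avail_gbps": 100.0},
    {"name": "link3", "delay_ms": 5.5,  "avail_gbps": 10.0},
]
# Only link1 meets both a 10 ms delay bound and a 20 Gb/s demand.
assert links_meeting_requirements(links, 10.0, 20.0) == ["link1"]
```

An LSP with no delay requirement would simply be eligible for every component link with sufficient capacity.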
Management Plane
Configuration and Measurement <------------+
^ |
| |
+-------+-+ +-+-------+
| | | | | |
CP Packets V | | V CP Packets
| V | | Component Link 1 | | ^ |
| | |=|===========================|=| | |
| +----| | Component Link 2 | |----+ |
| |=|===========================|=| |
Aggregated LSPs | | | | |
~|~~~~~~>| | Component Link 3 | |~~~~>~~|~~
| |=|===========================|=| |
| | | | | |
| LSR1 | | LSR2 |
+---------+ +---------+
! !
! !
!<-------- Multipath ---------->!

Figure 1: A multipath constructed with multiple physical links between two LSR

[RFC7226] specifies that component links may themselves be multipath. This is true for most implementations even prior to the Advanced Multipath work in [RFC7226]. For example, a component of a pre-Advanced Multipath MPLS Link Bundle or ISIS or OSPF ECMP could be an Ethernet LAG. In some implementations many other combinations or even arbitrary combinations could be supported. Figure 2 shows three forms of component links which may be deployed in a network.

+-------+ 1. Physical Link +-------+
| |-|----------------------------------------------|-| |
| | | | | |
| | | +------+ +------+ | | |
| | | | MPLS | 2. Logical Link | MPLS | | | |
| |.|.... |......|.....................|......|....|.| |
| | |-----| LSR3 |---------------------| LSR4 |----| | |
| | | +------+ +------+ | | |
| | | | | |
| | | | | |
| | | +------+ +------+ | | |
| | | |GMPLS | 3. Logical Link |GMPLS | | | |
| |.|.
...|......|.....................|......|....|.| |
| | |-----| LSR5 |---------------------| LSR6 |----| | |
| | +------+ +------+ | |
| LSR1 | | LSR2 |
+-------+ +-------+
|<---------------- Multipath --------------------->|

Figure 2: Illustration of Various Component Link Types

The three forms of component link shown in Figure 2 are:

1. The first component link is configured with direct physical media plus a link layer protocol. This case also includes emulated physical links, for example using pseudowire emulation.

2. The second component link is a TE tunnel that traverses LSR3 and LSR4, where LSR3 and LSR4 are nodes that support MPLS but few or no GMPLS extensions.

3. The third component link is formed by a lower layer network that has GMPLS enabled. In this case, LSR5 and LSR6 are not controlled by MPLS but provide the connectivity for the component link.

A multipath forms one logical link between connected LSR (LSR1 and LSR2 in Figure 1 and Figure 2) and is used to carry aggregated traffic. Multipath relies on its component links to carry the traffic but must distribute or load balance the traffic. The endpoints of the multipath map incoming traffic onto the set of component links.

For example, LSR1 in Figure 1 distributes the set of traffic flows, including control plane packets, among the set of component links. LSR2 in Figure 1 receives the packets from its component links and sends them to the MPLS forwarding engine with no attempt to reorder packets arriving on different component links. The traffic in the opposite direction, from LSR2 to LSR1, is distributed across the set of component links by LSR2.

These three forms of component link are a limited set of very simple examples. Many other examples are possible. A component link may itself be a multipath.
A segment of an LSP (single hop for that LSP) may be a multipath.

5. Advanced Multipath Use Cases

The following subsections provide some uses of the Advanced Multipath extensions. These are not the only uses, simply a set of examples.

5.1. Delay Sensitive Applications

Most applications benefit from lower delay. Some types of applications are far more sensitive than others. For example, real time bidirectional applications such as voice communication or two way video conferencing are far more sensitive to delay than unidirectional streaming audio or video. Non-interactive bulk transfer is almost insensitive to delay if a large enough TCP window is used.

Some applications are sensitive to delay, but users of those applications are unwilling to pay extra to ensure lower delay. For example, many SIP end users are willing to accept the delay offered to best effort services as long as call quality is good most of the time.

Other applications are sensitive to delay and willing to pay extra to ensure lower delay. For example, financial trading applications are extremely sensitive to delay and, with a lot at stake, are willing to go to great lengths to reduce delay.

Among the requirements of Advanced Multipath are requirements to support non-homogeneous links. One solution in support of lower delay links is to advertise the capacity available within configured ranges of delay within a given multipath, and then support the ability to place an LSP only on component links that meet that LSP's delay requirements.

The Advanced Multipath requirements to accommodate delay sensitive applications are analogous to Diffserv requirements to accommodate applications requiring higher quality of service on the same infrastructure as applications with less demanding requirements.
The ability to share capacity with less demanding applications, with best effort applications generally being the least demanding, can greatly reduce the cost of delivering service to the more demanding applications.

5.2. Large Volume of IP and LDP Traffic

IP and LDP do not support traffic engineering. Both make use of a shortest (lowest routing metric) path, with an option to use equal cost multipath (ECMP). Note that though ECMP is prohibited in LDP specifications, it is widely implemented. Where implemented for LDP, ECMP is generally disabled by default for standards compliance, but often enabled in LDP deployments.

Without a traffic engineering capability, there must be sufficient capacity to accommodate the IP and LDP traffic. If not, persistent queuing delay and loss will occur. Unlike RSVP-TE, a subset of the traffic cannot be routed using constraint-based routing to avoid a congested portion of the infrastructure.

In existing networks which accommodate IP and/or LDP with RSVP-TE, either the IP and LDP can be carried over RSVP-TE, or, where the traffic contribution of IP and LDP is small, IP and LDP can be carried natively and the effect on RSVP-TE can be ignored. Ignoring the traffic contribution of IP is valid on high capacity networks where a very low volume of native IP is used primarily for control and network management and customer IP is carried within RSVP-TE.

Where it is desirable to carry native IP and/or LDP and the IP and/or LDP traffic volumes are not negligible, RSVP-TE needs improvement. An enhancement offered by Advanced Multipath is the ability to measure the IP and LDP traffic, filter the measurements, and reduce the capacity available to RSVP-TE to avoid congestion.
The treatment given to the IP or LDP traffic is similar to the treatment when using the "auto-bandwidth" feature in some RSVP-TE implementations on that same traffic and giving a higher priority (numerically lower setup priority and holding priority value) to the "auto-bandwidth" LSP. The difference is that the measurement is made at each hop and the reduction in advertised bandwidth is made more directly.

5.3. Multipath and Packet Ordering

A strong motivation for multipath is the need to provide LSP capacity in IP backbones that exceeds the capacity of single wavelengths provided by transport equipment and exceeds the practical capacity limits achievable through inverse multiplexing. Appendix C describes characteristics and limitations of transport systems today. Section 3 defines the terms "classic multipath" and "classic link bundling" used in this section.

For the purpose of discussion, consider two very large cities, city A and city Z. For example, in the US, high traffic cities might be New York and Los Angeles, and in Europe, high traffic cities might be London and Amsterdam. Two other high volume cities, city B and city Y, may share common provider core network infrastructure. Using the same examples, cities B and Y may be Washington DC and San Francisco, or Paris and Stockholm. In the US, the common infrastructure may span Denver, Chicago, Detroit, and Cleveland. Other major traffic contributors include Boston and northern Virginia on the east coast, and Seattle and San Diego on the west coast. The IP/MPLS links within the shared infrastructure, for example the city to city links in the Denver, Chicago, Detroit, and Cleveland path in the US example, had capacities for most of the 2000s decade that greatly exceeded the single circuits available in transport networks.
For a case with four large traffic sources on either side of the shared infrastructure, up to sixteen core city to core city traffic flows in excess of transport circuit capacity may be accommodated on the shared infrastructure.

Today the most common IP/MPLS core network design makes use of very large links which consist of many smaller component links, but use classic multipath techniques. A component link typically corresponds to the largest circuit that the transport system is capable of providing (or the largest cost effective circuit). IP source and destination address hashing is used to distribute flows across the set of component links as described in Appendix B.3.

Classic multipath can handle large LSP up to the total capacity of the multipath (within limits, see Appendix B.2). A disadvantage of classic multipath is the reordering among traffic within a given core city to core city LSP. While there is no reordering within any microflow and therefore no customer visible issue, MPLS-TP cannot be used across an infrastructure where classic multipath is in use, except within pseudowires.

Capacity issues force the use of classic multipath today. Classic multipath excludes a direct use of MPLS-TP. The desire for OAM, offered by MPLS-TP, is in conflict with the use of classic multipath. There are a number of alternatives that satisfy both requirements. Some alternatives are described below.

MPLS-TP in network edges only

A simple approach which requires no change to the core is to disallow MPLS-TP across the core unless carried within a pseudowire (PW). MPLS-TP may be used within edge domains where classic multipath is not used. PW may be signaled end to end using single segment PW (SS-PW), or stitched across domains using multisegment PW (MS-PW).
The PW and anything carried within the PW may use OAM as long as fat-PW [RFC6391] load splitting is not used by the PW.

Advanced Multipath at core LSP ingress/egress

The interior of the core network may use classic link bundling, with the limitation that no LSP can exceed the capacity of a single circuit. Larger non-MPLS-TP LSP can be configured using multiple ingress to egress component MPLS-TP LSP. This can be accomplished using existing IP source and destination address hashing configured at the LSP ingress and egress. Each component LSP, if constrained to be no larger than the capacity of a single circuit, can make use of MPLS-TP and offer OAM for all top level LSP across the core.

MPLS-TP as an MPLS client

A third approach involves making use of Entropy Labels [RFC6790] on all MPLS-TP LSP such that the entire MPLS-TP LSP is treated as a microflow by midpoint LSR, even if further encapsulated in very large server layer MPLS LSP.

The above list of alternatives allows packet ordering within an LSP to be maintained in some circumstances and allows very large LSP capacities. Each of these alternatives is discussed further in the following subsections.

5.3.1. MPLS-TP in network edges only

Classic MPLS link bundling is defined in [RFC4201] and has existed since early in the 2000s decade. Classic MPLS link bundling places any given LSP entirely on a single component link. Classic MPLS link bundling is not in widespread use as the means to accommodate large link capacities in core networks due to the simplicity, better multiplexing gain, and therefore lower network cost of classic multipath.
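The single-component-link constraint of classic link bundling can be sketched as a first-fit placement. This is an illustrative model, not how any particular implementation selects a component link:

```python
def place_lsp_classic_bundling(lsp_gbps, free_gbps):
    # Classic link bundling sketch: an LSP must fit entirely on one
    # component link.  First-fit by remaining capacity; returns the
    # chosen link index or None.  An LSP larger than every component
    # link cannot be placed no matter how much total capacity the
    # bundle has -- the bin packing weakness noted in the text.
    for i, free in enumerate(free_gbps):
        if free >= lsp_gbps:
            free_gbps[i] -= lsp_gbps
            return i
    return None

links = [100.0, 100.0, 100.0]   # free capacity per component link, Gb/s
assert place_lsp_classic_bundling(70.0, links) == 0
assert place_lsp_classic_bundling(150.0, links) is None  # exceeds any one link
```

Classic multipath avoids this failure mode by splitting flows across links, at the cost of per-LSP ordering, which is the trade-off this section explores.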
If MPLS-TP OAM capability in the IP/MPLS network core LSP is not required, then there is no need to change existing network designs which use classic multipath and both label stack and IP source and destination address based hashing as a basis for load splitting.

If MPLS-TP is needed for a subset of LSP, then those LSP can be carried within pseudowires. The pseudowire adds a thin layer of encapsulation and therefore a small overhead. If only a subset of LSP need MPLS-TP OAM, then some LSP must make use of the pseudowires and other LSP avoid them. A straightforward way to accomplish this is with administrative attributes [RFC3209].

5.3.2. Multipath at core LSP ingress/egress

Multipath can be configured for large LSP that are made up of smaller MPLS-TP component LSP. Some implementations already support this capability, though until Advanced Multipath no IETF document required it. This approach is capable of supporting MPLS-TP OAM over the entire set of component link LSP and therefore the entire set of top level LSP traversing the core.

There are two primary disadvantages of this approach. One is that the number of top level LSP traversing the core can be dramatically increased. The other disadvantage is the loss of multiplexing gain that results from the use of classic link bundling within the interior of the core network.

If component LSP use MPLS-TP, then no component LSP can exceed the capacity of a single circuit. For a given multipath LSP there can either be a number of equal capacity component LSP, or some number of full capacity component LSP plus one LSP carrying the excess. For example, a 350 Gb/s multipath LSP over a 100 Gb/s infrastructure may use five 70 Gb/s component LSP or three 100 Gb/s LSP plus one 50 Gb/s LSP.
Classic MPLS link bundling is needed to support MPLS-TP and suffers from a bin packing problem even if LSP traffic is completely predictable, which it never is in practice.

The common means of setting very large LSP link bandwidth parameters uses long term statistical measures. For example, at one time many providers based their LSP bandwidth parameters on the 95th percentile of carried traffic as measured over the prior one week period. It is common to add 10-30% to the 95th percentile value measured over the prior week and adjust the bandwidth parameters of LSP weekly. It is also possible to measure traffic flow at the LSR and adjust bandwidth parameters somewhat more dynamically. This is less common in deployments and, where deployed, makes use of filtering to track very long term trends in traffic levels. In either case, short term variation of traffic levels relative to signaled LSP capacity is common. Allowing a large over allocation of LSP bandwidth parameters (i.e., adding 30% or more) avoids over utilization of any given LSP, but increases unused network capacity and increases network cost. Allowing a small over allocation of LSP bandwidth parameters (i.e., 10-20% or less) results in both underutilization and over utilization, but statistically results in a total utilization within the core that is under capacity most or all of the time.

The classic multipath solution accommodates the situation in which some very large LSP are under utilizing their signaled capacity and others are over utilizing their capacity, with the need for far less unused network capacity to accommodate variation in actual traffic levels.
If the actual traffic levels of LSP can be described by a probability distribution, the variation of the sum of LSP is less than the variation of any given LSP for all but a constant traffic level (where the variation of the sum and the variation of the components are both zero).

Splitting very large LSP at the ingress, carrying those large LSP within smaller MPLS-TP component LSP, and then using classic link bundling to carry the MPLS-TP LSP is a viable approach. However, this approach loses the statistical gain discussed in the prior paragraphs. Losing this statistical gain drives up the network costs necessary to achieve the same very low probability of only mild congestion that is expected of provider networks.

There are two situations which can motivate the use of this approach. This design is favored if the provider values MPLS-TP OAM across the core more than efficiency (or is unaware of the efficiency issue). This design can also make sense if transport equipment or very low cost core LSR are available which support only classic link bundling and, regardless of the loss of multiplexing gain, are more cost effective at carrying transit traffic than equipment which supports IP source and destination address hashing.

5.3.3. MPLS-TP as an MPLS client

Accommodating MPLS-TP as an MPLS client requires the small change to forwarding behavior necessary to support [RFC6790] and is therefore most applicable to major network overbuilds or new deployments. This approach is described in [RFC7190] and makes use of Entropy Labels [RFC6790] to prevent reordering of MPLS-TP LSP or any other LSP which requires that its traffic not be reordered for OAM or other reasons.

The advantage of this approach is the ability to accommodate MPLS-TP as a client LSP while retaining the high multiplexing gain and therefore the efficiency and low network cost of a pure MPLS deployment.
The disadvantage is the need for a small change in forwarding to support [RFC6790].

6. IANA Considerations

This memo includes no request to IANA.

7. Security Considerations

This document is a use cases document. Existing protocols, such as MPLS, are referenced. Existing techniques, such as MPLS link bundling and multipath techniques, are referenced. These protocols and techniques are documented elsewhere and contain security considerations which are unchanged by this document.

This document also describes use cases for multipath and Advanced Multipath. Advanced Multipath requirements are defined in [RFC7226]. [I-D.ietf-rtgwg-cl-framework] defines a framework for Advanced Multipath. Advanced Multipath bears many similarities to MPLS link bundling and multipath techniques used with MPLS. Additional security considerations, if any, beyond those already identified for MPLS, MPLS link bundling, and multipath techniques will be documented in the framework document if specific to the overall framework of Advanced Multipath, or in protocol extensions if specific to a given protocol extension defined later to support Advanced Multipath.

8. Acknowledgments

In the interest of full disclosure of affiliation and in the interest of acknowledging sponsorship, past affiliations of authors are noted. Much of the work done by Ning So occurred while Ning was at Verizon. Much of the work done by Curtis Villamizar occurred while at Infinera. Much of the work done by Andy Malis occurred while Andy was at Verizon.

9. Informative References

[I-D.ietf-rtgwg-cl-framework]
Ning, S., McDysan, D., Osborne, E., Yong, L., and C. Villamizar, "Advanced Multipath Framework in MPLS", draft-ietf-rtgwg-cl-framework-04 (work in progress), July 2013.
[IEEE-802.1AX]
IEEE Standards Association, "IEEE Std 802.1AX-2008 IEEE Standard for Local and Metropolitan Area Networks - Link Aggregation", 2008.

[ITU-T.G.694.2]
ITU-T, "Spectral grids for WDM applications: CWDM wavelength grid", 2003.

[RFC1717] Sklower, K., Lloyd, B., McGregor, G., and D. Carr, "The PPP Multilink Protocol (MP)", RFC 1717, November 1994.

[RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers", RFC 2474, December 1998.

[RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., and W. Weiss, "An Architecture for Differentiated Services", RFC 2475, December 1998.

[RFC2597] Heinanen, J., Baker, F., Weiss, W., and J. Wroclawski, "Assured Forwarding PHB Group", RFC 2597, June 1999.

[RFC2615] Malis, A. and W. Simpson, "PPP over SONET/SDH", RFC 2615, June 1999.

[RFC2991] Thaler, D. and C. Hopps, "Multipath Issues in Unicast and Multicast Next-Hop Selection", RFC 2991, November 2000.

[RFC2992] Hopps, C., "Analysis of an Equal-Cost Multi-Path Algorithm", RFC 2992, November 2000.

[RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol Label Switching Architecture", RFC 3031, January 2001.

[RFC3032] Rosen, E., Tappan, D., Fedorkow, G., Rekhter, Y., Farinacci, D., Li, T., and A. Conta, "MPLS Label Stack Encoding", RFC 3032, January 2001.

[RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 3209, December 2001.

[RFC3260] Grossman, D., "New Terminology and Clarifications for Diffserv", RFC 3260, April 2002.

[RFC3270] Le Faucheur, F., Wu, L., Davie, B., Davari, S., Vaananen, P., Krishnan, R., Cheval, P., and J.
Heinanen, "Multi- 667 Protocol Label Switching (MPLS) Support of Differentiated 668 Services", RFC 3270, May 2002. 670 [RFC3630] Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering 671 (TE) Extensions to OSPF Version 2", RFC 3630, September 672 2003. 674 [RFC3809] Nagarajan, A., "Generic Requirements for Provider 675 Provisioned Virtual Private Networks (PPVPN)", RFC 3809, 676 June 2004. 678 [RFC3945] Mannie, E., "Generalized Multi-Protocol Label Switching 679 (GMPLS) Architecture", RFC 3945, October 2004. 681 [RFC3985] Bryant, S. and P. Pate, "Pseudo Wire Emulation Edge-to- 682 Edge (PWE3) Architecture", RFC 3985, March 2005. 684 [RFC4031] Carugi, M. and D. McDysan, "Service Requirements for Layer 685 3 Provider Provisioned Virtual Private Networks (PPVPNs)", 686 RFC 4031, April 2005. 688 [RFC4124] Le Faucheur, F., "Protocol Extensions for Support of 689 Diffserv-aware MPLS Traffic Engineering", RFC 4124, June 690 2005. 692 [RFC4201] Kompella, K., Rekhter, Y., and L. Berger, "Link Bundling 693 in MPLS Traffic Engineering (TE)", RFC 4201, October 2005. 695 [RFC4385] Bryant, S., Swallow, G., Martini, L., and D. McPherson, 696 "Pseudowire Emulation Edge-to-Edge (PWE3) Control Word for 697 Use over an MPLS PSN", RFC 4385, February 2006. 699 [RFC4928] Swallow, G., Bryant, S., and L. Andersson, "Avoiding Equal 700 Cost Multipath Treatment in MPLS Networks", BCP 128, RFC 701 4928, June 2007. 703 [RFC5036] Andersson, L., Minei, I., and B. Thomas, "LDP 704 Specification", RFC 5036, October 2007. 706 [RFC5305] Li, T. and H. Smit, "IS-IS Extensions for Traffic 707 Engineering", RFC 5305, October 2008. 709 [RFC5586] Bocci, M., Vigoureux, M., and S. Bryant, "MPLS Generic 710 Associated Channel", RFC 5586, June 2009. 712 [RFC5921] Bocci, M., Bryant, S., Frost, D., Levrau, L., and L. 713 Berger, "A Framework for MPLS in Transport Networks", RFC 714 5921, July 2010. 716 [RFC6391] Bryant, S., Filsfils, C., Drafz, U., Kompella, V., Regan, 717 J., and S. 
Amante, "Flow-Aware Transport of Pseudowires 718 over an MPLS Packet Switched Network", RFC 6391, November 719 2011. 721 [RFC6790] Kompella, K., Drake, J., Amante, S., Henderickx, W., and 722 L. Yong, "The Use of Entropy Labels in MPLS Forwarding", 723 RFC 6790, November 2012. 725 [RFC7190] Villamizar, C., "Use of Multipath with MPLS and MPLS 726 Transport Profile (MPLS-TP)", RFC 7190, March 2014. 728 [RFC7226] Villamizar, C., McDysan, D., Ning, S., Malis, A., and L. 729 Yong, "Requirements for Advanced Multipath in MPLS 730 Networks", RFC 7226, May 2014. 732 Appendix A. Network Operator Practices and Protocol Usage 734 Often, network operators have a contractual Service Level Agreement 735 (SLA) with customers for services that is composed of numerical 736 values for performance measures, principally availability, latency, 737 and delay variation. Additionally, network operators may have 738 performance objectives for internal use by the operator. See 739 RFC3809, Section 4.9 [RFC3809] for examples of the form of such SLA 740 and performance objective specifications. In this document we use 741 the term Performance Objective as defined in [RFC7226]. Applications 742 and acceptable user experience have an important relationship to 743 these performance parameters. 745 Consider latency as an example. In some cases, minimizing latency 746 relates directly to the best customer experience (for example, in 747 interactive applications closer is faster). In other cases, user 748 experience is relatively insensitive to latency, up to a specific 749 limit at which point user perception of quality degrades 750 significantly (e.g., interactive human voice and multimedia 751 conferencing). A number of Performance Objectives have a bound on 752 point-to-point latency and as long as this bound is met the 753 Performance Objective is met; decreasing the latency is not 754 necessary.
In some Performance Objectives, if the specified latency 755 is not met, the user considers the service as unavailable. An 756 unprotected LSP can be manually provisioned on a set of links to meet 757 this type of Performance Objective, but this lowers availability 758 since an alternate route that meets the latency Performance Objective 759 cannot be determined. 761 Historically, when an IP/MPLS network was operated over a lower layer 762 circuit switched network (e.g., SONET rings), a change in latency 763 caused by the lower layer network (e.g., due to a maintenance action 764 or failure) was not known to the MPLS network. This resulted in 765 latency affecting end user experience, sometimes violating 766 Performance Objectives or resulting in user complaints. 768 A response to this problem was to provision IP/MPLS networks over 769 unprotected circuits and set the metric and/or TE-metric proportional 770 to latency. This resulted in traffic being directed over the least 771 latency path, even if this was not needed to meet a Performance 772 Objective or meet user experience objectives. This resulted in 773 reduced flexibility and increased cost for network operators. Some 774 providers prefer to use lower layer networks to provide restoration 775 and grooming, but the inability to communicate performance 776 parameters, in particular latency, from the lower layer network to 777 the higher layer network is an important problem to be solved before 778 this can be done. 780 Latency Performance Objectives for point-to-point services are often 781 tied closely to geographic locations, while latency for multipoint 782 services may be based upon a worst case within a region.
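The metric-proportional-to-latency practice described above can be sketched with a shortest-path computation: when every link metric is its measured latency, SPF selects the least-latency route. The following is a minimal illustration only; the function name, topology encoding, and millisecond units are hypothetical assumptions, not taken from this document or any IGP specification.

```python
import heapq

def least_latency_path(links, src, dst):
    """Dijkstra SPF where each link metric is its latency in ms.

    Illustrates the practice described above: with the IGP metric
    set proportional to latency, traffic is directed onto the
    least-latency path.  'links' maps node -> [(neighbor, ms)].
    """
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, ms in links.get(u, []):
            nd = d + ms
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst != src and dst not in prev:
        raise ValueError("destination unreachable")
    # Walk predecessors back to the source to recover the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]
```

On a hypothetical topology where A reaches C either directly (20 ms) or via B (5 ms + 5 ms), the two-hop path wins; this is exactly the least-latency behavior described above, which occurs even when the direct link would already have met the Performance Objective.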
784 The time frames for restoration (i.e., as implemented by 785 predetermined protection, convergence of routing protocols and/or 786 signaling) for services range from on the order of 100 ms or less 787 (e.g., for VPWS to emulate classical SDH/SONET protection switching), 788 to several minutes (e.g., to allow BGP to reconverge for L3VPN) and 789 may differ among the set of customers within a single service. 791 The presence of only three Traffic Class (TC) bits (previously known 792 as EXP bits) in the MPLS shim header is limiting when a network 793 operator needs to support QoS classes for multiple services (e.g., 794 L2VPN VPWS, VPLS, L3VPN and Internet), each of which has a set of QoS 795 classes that need to be supported and where the operator prefers to 796 use only E-LSP [RFC3270]. In some cases one bit is used to indicate 797 conformance to some ingress traffic classification, leaving only two 798 bits for indicating the service QoS classes. One approach that has 799 been taken is to aggregate these QoS classes into similar sets on 800 LER-LSR and LSR-LSR links and continue to use only E-LSP. Another 801 approach is to use L-LSP as defined in [RFC3270] or use the Class- 802 Type as defined in [RFC4124] to support up to eight mappings of TC 803 into Per-Hop Behavior (PHB). 805 The IP DSCP cannot be used for flow identification. The use of IP 806 DSCP for flow identification is incompatible with Assured Forwarding 807 services [RFC2597] or any other service which may use more than one 808 DSCP code point to carry traffic for a given microflow. In general 809 network operators do not rely on the DSCP of Internet packets in core 810 networks but must preserve DSCP values for use closer to network 811 edges. 813 A label is pushed onto Internet packets when they are carried along 814 with L2VPN or L3VPN packets on the same link or lower layer network; 815 this provides a means to distinguish between the QoS classes for 816 these packets.
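The TC-bit squeeze described above can be made concrete with a small sketch. The bit layout here is a hypothetical operator encoding, not from any RFC: if one of the three TC bits marks conformance to ingress classification, only four service QoS classes remain expressible.

```python
def encode_tc(qos_class: int, conformant: bool) -> int:
    """Pack a 3-bit E-LSP Traffic Class field (illustrative only).

    The MPLS shim header carries exactly three TC bits.  If one bit
    is spent indicating conformance to an ingress traffic
    classification, only two bits (four codepoints) remain for
    service QoS classes - the limitation described above.
    """
    if not 0 <= qos_class <= 3:
        raise ValueError("only 4 QoS classes fit in the 2 remaining bits")
    return (int(conformant) << 2) | qos_class

def decode_tc(tc: int) -> tuple[int, bool]:
    """Recover (qos_class, conformant) from the 3-bit TC value."""
    return tc & 0b011, bool(tc >> 2)
```

An operator needing more than four classes under this scheme must aggregate them into similar sets, or turn to L-LSP [RFC3270] or Class-Type [RFC4124] mappings as noted above.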
818 Operating an MPLS-TE network involves a different paradigm from 819 operating an IGP metric-based LDP signaled MPLS network. The 820 multipoint-to-point LDP signaled MPLS LSPs occur automatically, and 821 balancing across parallel links occurs if the IGP metrics are set 822 "equally" (with equality a locally definable relation) and if ECMP is 823 enabled for LDP, which network operators generally do in large 824 networks. 826 Traffic is typically comprised of large (some very large) flows and a 827 much larger number of small flows. In some cases, separate LSPs are 828 established for very large flows. Very large microflows can occur 829 even if the IP header information is inspected by an LSR. For example, 830 an IPsec tunnel that carries a large amount of traffic must be 831 carried as a single large flow. An important example of large flows 832 is that of an L2VPN or L3VPN customer who has an access line bandwidth 833 comparable to a client-client component link bandwidth -- there could 834 be flows that are on the order of the access line bandwidth. 836 Appendix B. Existing Multipath Standards and Techniques 838 Today, aggregations of traffic much 839 larger than a single component link can be handled by a number of 840 techniques which we will collectively call multipath. Multipath 841 applied to parallel links between the same set of nodes includes 842 Ethernet Link Aggregation [IEEE-802.1AX], link bundling [RFC4201], or 843 other aggregation techniques some of which may be vendor specific. 844 Multipath applied to diverse paths rather than parallel links 845 includes Equal Cost MultiPath (ECMP) as applied to OSPF, ISIS, LDP, 846 or even BGP, and equal cost LSP, as described in Appendix B.4. 847 Various multipath techniques have strengths and weaknesses.
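A rough sketch of the load splitting these techniques share (described further in Appendix B.1 and Appendix B.2): a stable hash of the flow identifiers selects a slot in a table whose entries are assigned to component links in proportion to capacity. The table size, hash function, and key encoding below are illustrative assumptions, not taken from any standard.

```python
import hashlib

def pick_component_link(flow_key: bytes, link_capacities: list[int]) -> int:
    """Map a group of flows onto one component link.

    A weighted-table variant of hash-based load splitting: table
    slots are assigned to links in proportion to capacity, and a
    stable hash of the flow identifiers (for IP, the source and
    destination addresses; for MPLS, the label stack) selects a
    slot.  All packets of a flow hash identically, so packet order
    within a flow is preserved.
    """
    table_size = 256  # power-of-two slot table
    total = sum(link_capacities)
    # Fill the slot table proportionally to each link's capacity.
    table = []
    for link, cap in enumerate(link_capacities):
        table.extend([link] * (table_size * cap // total))
    while len(table) < table_size:
        table.append(len(link_capacities) - 1)  # remainder slots
    digest = hashlib.sha256(flow_key).digest()
    slot = int.from_bytes(digest[:4], "big") % table_size
    return table[slot]
```

With equal-capacity links this degenerates to the simple hash-modulo scheme; because the result is a pure function of the flow key, the same flow always lands on the same component link.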
849 Existing multipath techniques solve the problem of large aggregations 850 of traffic, without addressing the other requirements outlined in 851 this document, particularly those described in Section 5. 853 B.1. Common Multipath Load Splitting Techniques 855 Identical load balancing techniques are used for multipath both over 856 parallel links and over diverse paths. 858 Large aggregates of IP traffic do not provide explicit signaling to 859 indicate the expected traffic loads. Large aggregates of MPLS 860 traffic are carried in MPLS tunnels supported by MPLS LSP. LSP which 861 are signaled using RSVP-TE extensions do provide explicit signaling 862 which includes the expected traffic load for the aggregate. LSP 863 which are signaled using LDP do not provide an expected traffic load. 865 MPLS LSP may contain other MPLS LSP arranged hierarchically. When an 866 MPLS LSR serves as a midpoint LSR in an LSP carrying client LSP as 867 payload, there is no signaling associated with these client LSP. 868 Therefore even when using RSVP-TE signaling there may be insufficient 869 information provided by signaling to adequately distribute load based 870 solely on signaling. 872 Generally a set of label stack entries that is unique across the 873 ordered set of label numbers in the label stack can safely be assumed 874 to contain a group of flows. The reordering of traffic can therefore 875 be considered to be acceptable unless reordering occurs within 876 traffic containing a common unique set of label stack entries. 877 Existing load splitting techniques take advantage of this property in 878 addition to looking beyond the bottom of the label stack and 879 determining if the payload is IPv4 or IPv6 to load balance traffic 880 accordingly. 882 MPLS-TP OAM violates the assumption that it is safe to reorder 883 traffic within an LSP. If MPLS-TP OAM is to be accommodated, then 884 existing multipath techniques must be modified.
[RFC6790] and 885 [RFC7190] provide a solution but require a small forwarding change. 887 For example, a large aggregate of IP traffic may be subdivided into a 888 large number of groups of flows using a hash on the IP source and 889 destination addresses. This is as described in [RFC2475] and 890 clarified in [RFC3260]. For MPLS traffic carrying IP, a similar hash 891 can be performed on the set of labels in the label stack. These 892 techniques are both examples of means to subdivide traffic into 893 groups of flows for the purpose of load balancing traffic across 894 aggregated link capacity. The means of identifying a group of flows 895 should not be confused with the definition of a flow. 897 Discussion of whether a hash based approach provides a sufficiently 898 even load balance using any particular hashing algorithm or method of 899 distributing traffic across a set of component links is outside of 900 the scope of this document. 902 The current load balancing techniques are referenced in [RFC4385] and 903 [RFC4928]. The use of three hash based approaches is described in 904 [RFC2991] and [RFC2992]. A mechanism to identify flows within PW is 905 described in [RFC6391]. The use of hash based approaches is 906 mentioned as an example of an existing set of techniques to 907 distribute traffic over a set of component links. Other techniques 908 are not precluded. 910 B.2. Static and Dynamic Load Balancing Multipath 912 Static multipath generally relies on the mathematical probability 913 that given a very large number of small microflows, these microflows 914 will tend to be distributed evenly across a hash space. Early very 915 static multipath implementations assumed that all component links are 916 of equal capacity and performed a modulo operation across the hashed 917 value.
An alternate static multipath technique uses a table 918 generally with a power of two size, and distributes the table entries 919 proportionally among component links according to the capacity of 920 each component link. 922 Static load balancing works well if there are a very large number of 923 small microflows (i.e., microflow rate is much less than component 924 link capacity). However, the case where there are even a few large 925 microflows is not handled well by static load balancing. 927 A dynamic load balancing multipath technique is one where the traffic 928 bound to each component link is measured and the load split is 929 adjusted accordingly. As long as the adjustment is done within a 930 single network element, then no protocol extensions are required and 931 there are no interoperability issues. 933 Note that if the load balancing algorithm and/or its parameters are 934 adjusted, then packets in some flows may be briefly delivered out of 935 sequence; however, in practice such adjustments can be made very 936 infrequently. 938 B.3. Traffic Split over Parallel Links 940 The load splitting techniques defined in Appendix B.1 and 941 Appendix B.2 are both used in splitting traffic over parallel links 942 between the same pair of nodes. The best known technique, though far 943 from being the first, is Ethernet Link Aggregation [IEEE-802.1AX]. 944 This same technique had been applied much earlier using OSPF or ISIS 945 Equal Cost MultiPath (ECMP) over parallel links between the same 946 nodes. Multilink PPP [RFC1717] uses a technique that provides 947 inverse multiplexing; however, a number of vendors had provided 948 proprietary extensions to PPP over SONET/SDH [RFC2615] that predated 949 Ethernet Link Aggregation but are no longer used. 951 Link bundling [RFC4201] provides yet another means of handling 952 parallel LSP. RFC4201 explicitly allows a special value of all ones 953 to indicate a split across all members of the bundle.
This "all 954 ones" component link is signaled in the MPLS RESV to indicate that 955 the link bundle is making use of classic multipath techniques. 957 B.4. Traffic Split over Multiple Paths 959 OSPF or ISIS Equal Cost MultiPath (ECMP) is a well known form of 960 traffic split over multiple paths that may traverse intermediate 961 nodes. ECMP is often incorrectly equated to only this case, and 962 multipath over multiple diverse paths is often incorrectly equated to 963 ECMP. 965 Many implementations are able to create more than one LSP between a 966 pair of nodes, where these LSP are routed diversely to better make 967 use of available capacity. The load on these LSP can be distributed 968 proportionally to the reserved bandwidth of the LSP. These multiple 969 LSP may be advertised as a single PSC FA and any LSP making use of 970 the FA may be split over these multiple LSP. 972 Link bundling [RFC4201] component links may themselves be LSP. When 973 this technique is used, any LSP which specifies the link bundle may 974 be split across the multiple paths of the component LSP that comprise 975 the bundle. 977 Appendix C. Characteristics of Transport in Core Networks 979 The characteristics of primary interest are the capacity of a single 980 circuit and the use of wave division multiplexing (WDM) to provide a 981 large number of parallel circuits. 983 Wave division multiplexing (WDM) supports multiple independent 984 channels (independent ignoring crosstalk noise) at slightly different 985 wavelengths of light, multiplexed onto a single fiber. Typical in 986 the early 2000s was 40 wavelengths of 10 Gb/s capacity per 987 wavelength. These wavelengths are in the C-band range, which is 988 about 1530-1565 nm, though some work has been done using the L-band 989 1565-1625 nm. 991 The C-band has been carved up using a 100 GHz spacing from 191.7 THz 992 to 196.1 THz by [ITU-T.G.694.2]. This yields 44 channels. 
If the 993 outermost channels are not used, due to poorer transmission 994 characteristics, then typically 40 are used. For practical reasons, 995 a 50 GHz or 25 GHz spacing is used by more recent equipment, 996 yielding 80 or 160 channels in practice. 998 The early optical modulation techniques used within a single channel 999 yielded 2.5 Gb/s and 10 Gb/s capacity per channel. As modulation 1000 techniques have improved, 40 Gb/s and 100 Gb/s per channel have been 1001 achieved. 1003 The 40 channels of 10 Gb/s common in the mid 2000s yields a total of 1004 400 Gb/s. Tighter spacing and better modulations are yielding up to 1005 8 Tb/s or more in more recent systems. 1007 Over the optical modulation is an electrical encoding. In the 1990s 1008 this was typically Synchronous Optical Networking (SONET) or 1009 Synchronous Digital Hierarchy (SDH), with a maximum defined circuit 1010 capacity of 40 Gb/s (OC-768), though the 10 Gb/s OC-192 is more 1011 common. More recently the low level electrical encoding has been 1012 Optical Transport Network (OTN) defined by ITU-T. OTN currently 1013 defines circuit capacities up to a nominal 100 Gb/s (ODU4). Both 1014 SONET/SDH and OTN make use of time division multiplexing (TDM) where 1015 a higher capacity circuit such as a 100 Gb/s ODU4 in OTN may be 1016 subdivided into lower fixed capacity circuits such as ten 10 Gb/s 1017 ODU2. 1019 In the 1990s, all IP and later IP/MPLS networks either used a 1020 fraction of maximum circuit capacity, or at most the full circuit 1021 capacity toward the end of the decade, when full circuit capacity was 1022 2.5 Gb/s or 10 Gb/s. Beyond 2000, the TDM circuit multiplexing 1023 capability of SONET/SDH or OTN was rarely used. 1025 Early in the 2000s both transport equipment and core LSR offered 40 1026 Gb/s SONET OC-768.
However, 10 Gb/s transport equipment was 1027 predominantly deployed throughout the decade, partially because LSR 1028 10GbE ports were far more cost effective than either OC-192 or OC-768 1029 and 10GbE became practical in the second half of the decade. 1031 Entering the 2010 decade, LSR 40GbE and 100GbE are expected to become 1032 widely available and cost effective. Slightly preceding this, 1033 transport equipment making use of 40 Gb/s and 100 Gb/s modulations 1034 is becoming available. This transport equipment is capable of 1035 carrying 40 Gb/s ODU3 and 100 Gb/s ODU4 circuits. 1037 Early in the 2000s decade IP/MPLS core networks were making use of 1038 single 10 Gb/s circuits. Capacity grew quickly in the first half of 1039 the decade but most IP/MPLS core networks had only a small number of 1040 IP/MPLS links requiring 4-8 parallel 10 Gb/s circuits. However, the 1041 use of multipath was necessary, was deemed the simplest and most cost 1042 effective alternative, and became thoroughly entrenched. By the end 1043 of the 2000s decade nearly all major IP/MPLS core service provider 1044 networks and a few content provider networks had IP/MPLS links which 1045 exceeded 100 Gb/s, long before 40GbE was available and 40 Gb/s 1046 transport in widespread use. 1048 It is less clear when IP/MPLS LSP exceeded 10 Gb/s, 40 Gb/s, and 100 1049 Gb/s. By 2010, many service providers have LSP in excess of 100 Gb/ 1050 s, but few are willing to disclose how many LSP have reached this 1051 capacity. 1053 By 2012 40GbE and 100GbE LSR products had become available, but were 1054 mostly still being evaluated or in trial use by service providers and 1055 content providers. The cost of components required to deliver 100GbE 1056 products remained high, making these products less cost effective. 1057 This is expected to change within a few years.
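The DWDM channel counts quoted earlier in this appendix follow from simple arithmetic over the [ITU-T.G.694.2] C-band figures (191.7 to 196.1 THz). The helper below merely restates that arithmetic; the function name and defaults are illustrative, not from any standard.

```python
def grid_channels(spacing_ghz: float,
                  f_min_thz: float = 191.7,
                  f_max_thz: float = 196.1) -> int:
    """Number of fixed-grid DWDM channels in a band.

    Counts spacing-wide slots across the C-band figures quoted
    above.  In practice the outermost channels are often skipped
    due to poorer transmission characteristics, so deployed counts
    (e.g., 40, 80, 160) are somewhat lower than the raw result.
    """
    return int(round((f_max_thz - f_min_thz) * 1000 / spacing_ghz))
```

A 100 GHz grid gives 44 raw channels (typically 40 used), and the 50 GHz or 25 GHz spacings of more recent equipment give 88 or 176 raw channels, deployed as 80 or 160 in practice.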
1059 The important point is that IP/MPLS core network links have long ago 1060 exceeded 100 Gb/s and some may have already exceeded a Tb/s and a 1061 small number of IP/MPLS LSP exceed 100 Gb/s. By the time 100 Gb/s 1062 circuits are widely deployed, many IP/MPLS core network links are 1063 likely to exceed 1 Tb/s and many IP/MPLS LSP capacities are likely to 1064 exceed 100 Gb/s. The growth in service provider traffic has 1065 consistently outpaced growth in DWDM channel capacities and the 1066 growth in capacity of single interfaces and is expected to continue 1067 to do so. Therefore multipath techniques are likely here to stay. 1069 Authors' Addresses 1071 So Ning 1072 Tata Communications 1074 Email: ning.so@tatacommunications.com 1076 Andrew Malis 1077 Consultant 1079 Email: agmalis@gmail.com 1081 Dave McDysan 1082 Verizon 1083 22001 Loudoun County PKWY 1084 Ashburn, VA 20147 1085 USA 1087 Email: dave.mcdysan@verizon.com 1089 Lucy Yong 1090 Huawei USA 1091 5340 Legacy Dr. 1092 Plano, TX 75025 1093 USA 1095 Phone: +1 469-277-5837 1096 Email: lucy.yong@huawei.com 1098 Curtis Villamizar 1099 Outer Cape Cod Network Consulting 1101 Email: curtis@occnc.com