1 Routing Working Group N. So 2 Internet Draft A. Malis 3 Intended Status: Informational D. McDysan 4 Expires: Verizon 5 L. Yong 6 Huawei 7 F. Jounay 8 France Telecom 9 Y. Kamite 10 NTT 11 July 9, 2009 13 Framework and Requirements for MPLS Over Composite Link 14 draft-so-yong-mpls-ctg-framework-requirement-02 16 Status of this Memo 18 This Internet-Draft is submitted to IETF in full conformance with the 19 provisions of BCP 78 and BCP 79.
31 Internet-Drafts are working documents of the Internet Engineering Task 32 Force (IETF), its areas, and its working groups. Note that other groups 33 may also distribute working documents as Internet-Drafts. 35 Internet-Drafts are draft documents valid for a maximum of six months 36 and may be updated, replaced, or obsoleted by other documents at any 37 time. It is inappropriate to use Internet-Drafts as reference material 38 or to cite them other than as "work in progress." 40 The list of current Internet-Drafts can be accessed at 41 http://www.ietf.org/ietf/1id-abstracts.txt 43 The list of Internet-Draft Shadow Directories can be accessed at 44 http://www.ietf.org/shadow.html 46 This Internet-Draft will expire on December 17, 2009. 48 Copyright Notice 50 Copyright (c) 2009 IETF Trust and the persons identified as the document 51 authors. All rights reserved. 53 This document is subject to BCP 78 and the IETF Trust's Legal Provisions 54 Relating to IETF Documents in effect on the date of publication of this 55 document (http://trustee.ietf.org/license-info). Please review these 56 documents carefully, as they describe your rights and restrictions with 57 respect to this document. 59 Abstract 61 This document states a traffic distribution problem in today's IP/MPLS 62 network when multiple links are configured between two routers. The 63 document presents motivation, a framework and requirements.
It defines a 64 composite link as a group of parallel links that can be considered as a 65 single traffic engineering link or as an IP link, and used for MPLS. 66 The document primarily focuses on MPLS traffic controlled through 67 control plane protocols, the advertisement of composite link parameters 68 in routing protocols, and the use of composite links in the RSVP-TE and 69 LDP signaling protocols. Interactions with the data and management planes 70 are also addressed. Applicability can be between a single pair of MPLS- 71 capable nodes, a sequence of MPLS-capable nodes, or a multi-layer 72 network connecting MPLS-capable nodes. 74 Table of Contents 76 1. Introduction...................................................3 77 2. Conventions used in this document..............................4 78 2.1. Acronyms..................................................4 79 2.2. Terminology...............................................4 80 3. Motivation and Summary Problem Statement.......................5 81 3.1. Motivation................................................5 82 3.2. Summary of Problems Requiring Solution....................6 83 4. Framework......................................................7 84 4.1. Single Routing Instance...................................7 85 4.1.1. Summary Block Diagram View...........................7 86 4.1.2. CTG Interior Functions...............................8 87 4.1.3. CTG Exterior Functions...............................8 88 4.1.4. Multi-Layer Network Context..........................8 89 4.2. Multiple Routing Instances...............................10 90 5. CTG Requirements for a Single Routing Instance................11 91 5.1. Management and Measurement of CTG Interior Functions.....11 92 5.1.1. Configuration as a Routable Virtual Interface.......11 93 5.1.2. Traffic Flow and CTG Mapping........................12 94 5.1.2.1. Using Control Plane TE Information.............12 95 5.1.2.2.
When no TE Information is Available (i.e., LDP)12 96 5.1.2.3. Handling Bandwidth Shortage Events.............13 97 5.1.3. Management of Other Operational Aspects.............13 98 5.1.3.1. Resilience.....................................13 99 5.1.3.2. Flow/Connection Mapping Change Frequency.......14 100 5.1.3.3. OAM Messaging Support..........................14 101 5.2. CTG Exterior Functions...................................15 102 5.2.1. Signaling Protocol Extensions.......................15 103 5.2.2. Routing Advertisement Extensions....................16 104 5.2.3. Multi-Layer Networking Aspects......................16 105 6. CTG Requirements for Multiple Routing Instances...............16 106 6.1. Management and Measurement of CTG Interior Functions.....16 107 6.1.1. Appearance as Multiple Routable Virtual Interfaces..16 108 6.1.2. Control of Resource Allocation......................16 109 6.1.3. Configuration of Prioritization and Preemption......16 111 6.2. CTG Exterior Functions...................................16 112 6.2.1. CTG Operation as a Higher-Level Routing Instance....16 113 7. Security Considerations.......................................17 114 8. IANA Considerations...........................................17 115 9. References....................................................17 116 9.1. Normative References.....................................17 117 9.2. Informative References...................................17 118 10. Acknowledgments..............................................18 120 1. Introduction 122 IP/MPLS network traffic growth forces carriers to deploy multiple 123 parallel physical/logical links between adjacent routers as the total 124 capacity of all aggregated traffic flows exceeds the capacity of a single 125 link. The network is expected to carry aggregated traffic flows, some of 126 which approach the capacity of any single link, and also some flows that 127 may be very small compared to the capacity of a single link.
129 Operating an MPLS network with multiple parallel links between all 130 adjacent routers causes scaling problems in the routing protocols. This 131 issue is addressed in [RFC4201], which defines the notion of a Link 132 Bundle -- a set of identical parallel traffic engineered (TE) links 133 (called component links) that are grouped together and advertised as a 134 single TE link within the routing protocol. 136 The Link Bundle concept is somewhat limited because of the requirement 137 that all component links must have identical capabilities, and because 138 it applies only to TE links. This document sets out a more generic set 139 of requirements for grouping together a set of parallel data links that 140 may have different characteristics, and for advertising and operating 141 them as a single TE or non-TE link called a Composite Link. 143 This document also describes a framework for selecting members of a 144 Composite Link, operating the Composite Link in signaling and routing, 145 and distributing data flows, through local decisions, across the 146 component members of a Composite Link to achieve maximal data throughput 147 and enable link-level protection schemes. 149 Applicability of the work within this document is focused on MPLS 150 traffic as controlled through control plane protocols. Thus, this 151 document describes the routing protocols that advertise link parameters 152 and the Resource Reservation Protocol (RSVP-TE) and the Label 153 Distribution Protocol (LDP) signaling protocols that distribute MPLS 154 labels and establish Label Switched Paths (LSPs). Interactions between 155 the control plane and the data and management planes are also addressed. 156 The focus of this document is on MPLS traffic signaled by either RSVP-TE 157 or LDP. IP traffic over multiple parallel links is handled relatively 158 well by ECMP or LAG/hashing methods.
The handling of IP control plane 159 traffic is within the scope of the framework and requirements of this 160 document. 162 The transport functions for TE and non-TE traffic delivery over a 163 Composite Link are termed a Composite Transport Group (CTG). In other 164 words, the objective of CTG is to solve the traffic sharing problem at a 165 composite link level by mapping labeled traffic flows to component 166 links: 168 1. using TE information from the control plane attached to the virtual 169 interface when available, or 171 2. using traffic measurements when it is not. 173 Specific protocol solutions are outside the scope of this document. 175 2. Conventions used in this document 177 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 178 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 179 document are to be interpreted as described in [RFC2119]. 181 2.1. Acronyms 183 BW: Bandwidth 185 CTG: Composite Transport Group 187 ECMP: Equal Cost Multi-Path 189 FRR: Fast Re-Route 191 LAG: Link Aggregation Group 193 LDP: Label Distribution Protocol 195 LSP: Label Switched Path 197 MPLS: Multi-Protocol Label Switching 199 OAM: Operation, Administration, and Management 201 PDU: Protocol Data Unit 203 PE: Provider Edge device 205 RSVP: Resource ReSerVation Protocol 207 RTD: Real Time Delay 209 TE: Traffic Engineering 211 VRF: Virtual Routing and Forwarding 213 2.2. Terminology 215 Composite Link or Composite Transport Group (CTG): a group of component 216 links, which can be considered as a single MPLS TE link or as a single 217 IP link used for MPLS. 219 Component Link: a physical link (e.g., Lambda, Ethernet PHY, SONET/ SDH, 220 OTN, etc.) with packet transport capability, or a logical link (e.g., 221 MPLS LSP, Ethernet VLAN, MPLS-TP LSP, etc.)
223 CTG Connection: An aggregation of traffic flows which are treated 224 together as a single unit by the CTG Interior Function for the purpose 225 of routing onto a specific component link and measuring traffic volume. 227 CTG Interior Functions: Actions performed by the MPLS routers directly 228 connected by a composite link. This includes the determination of the 229 connection and component link on which a traffic flow is placed. 230 Although a local implementation matter, the configuration control of 231 certain aspects of these interior functions is an important operational 232 requirement. 234 CTG Exterior Functions: These are performed by an MPLS router that makes 235 a composite link useable by the network via control protocols, or by an 236 MPLS router that interacts with other routers to dynamically control a 237 component link as part of a composite link. These functions are those that 238 interact via routing and/or signaling protocols with other routers in 239 the same layer network or other layer networks. 241 Traffic Flow: A set of packets with common identifier 242 characteristics that the CTG is able to use to aggregate traffic into 243 CTG Connections. Identifiers can be an MPLS label stack or any 244 combination of IP addresses and protocol types. 246 Virtual Interface: Composite link characteristics advertised in the IGP. 248 3. Motivation and Summary Problem Statement 250 3.1. Motivation 252 There are several established approaches to using multiple parallel 253 links between a pair of routers. These have limitations as summarized 254 below. 256 o ECMP/Hashing/LAG: IP traffic composed of a large number of flows with 257 bandwidth that is small with respect to the individual link capacity 258 can be handled relatively well using ECMP/LAG approaches. However, 259 these approaches do not make use of MPLS control plane information 260 or traffic volume information.
Distribution techniques applied only 261 within the data plane can result in less than ideal load balancing 262 across component links of a composite link. 264 o Advertisement of each component link into the IGP. Although this 265 would address the problem, it has a scaling impact on IGP routing, 266 and this scaling impact was an important motivation for the specification of link 267 bundling [RFC4201]. However, link bundling does not support a set of 268 component links with different characteristics (e.g., bandwidth, 269 latency) and only supports RSVP-TE. 271 o Planning Tool LSP Assignment: Although theoretically optimal, an 272 external system that participates in the IGP, measures traffic and 273 assigns TE LSPs and/or adjusts IGP metrics has a potentially large 274 response time to certain failure scenarios. Furthermore, such a 275 system could make use of more information than is provided by link 276 bundling IGP advertisements and could make use of mechanisms that 277 would allow pinning MPLS traffic to a particular component link in a 278 CTG. 280 o In a multi-layer network, the characteristics of a component link can 281 be altered by a lower layer network and this can create significant 282 operational impact in some cases. For example, if a lower layer 283 network performs restoration and markedly increases the latency of a 284 link in a link bundle, the traffic placed on this longer latency link 285 may generate user complaints and/or exceed the parameters of a 286 Service Level Agreement (SLA). 288 o In the case where multiple routing instances could share a composite 289 link, inefficiency can result if either 1) specific component links 290 are assigned to an individual routing instance, or 2) capacity is 291 statically assigned to a logical/sub-interface on each 292 component link of a CTG for each routing instance. In other words, 293 the issue is that unused capacity in one routing instance cannot be 294 used by another in either of these cases. 296 3.2.
Summary of Problems Requiring Solution 298 The following bullets highlight aspects of a CTG-related solution for 299 which detailed requirements are stated in Section 5. 301 o Ensure the ability to transport both RSVP-TE and LDP signaled non-TE 302 LSPs on the same composite link (i.e., a single set of component 303 links) while maintaining acceptable service quality for both RSVP-TE 304 and LDP signaled LSPs. 306 o Extend a link bundling type function to scenarios with groups of 307 links having different characteristics (e.g., bandwidth, latency). 309 o When an end-to-end LSP signaled by RSVP-TE uses a composite link, the 310 CTG must select a component link that meets the end-to-end 311 requirements for the LSP. To perform this function, the CTG must be 312 made aware of the required, desired, and acceptable link 313 characteristics (e.g., latency, optimization frequency) for each CTG 314 hop in the path. 316 o Support sets of component links between routers across intermediate 317 nodes at the same and/or lower layers where the characteristics 318 (e.g., latency) of these links may change dynamically. The solution 319 should support the case where the changes in characteristics of these 320 links are not communicated by the IGP (e.g., a link in a lower layer 321 network has a change in latency or QoS due to a restoration action). 323 o In the case where multiple routing instances could share a composite 324 link, a means to reduce or manage the potential inefficiency is 325 highly desirable. A local implementation by the same router type at 326 each end of a CTG could address this issue. However, in the case of 327 different routers at each end of a CTG there is a need to specify the 328 operational configuration commands and measurements to ensure 329 interoperability. Alternatively, the case of multiple routing 330 instances sharing a CTG could be viewed as an instance of multi-layer 331 routing.
In this case, some lower-layer 332 instance of routing associated with the CTG can be viewed as a server. This CTG server 333 controls the composite link and arbitrates between the signaled 334 requests and measured load offered by the higher level, client 335 instances of routing (i.e., users of the CTG). The CTG server assigns 336 resources on component links to these client level routing instances 337 and communicates this via routing messages into each of the client 338 instances, which then communicate this to their peers in the domain 339 of each routing instance. This server level function is a way to meet 340 operational requirements where flows from one routing instance need 341 to preempt flows from another routing instance, as detailed in the 342 requirements in section 6.1.3. 344 4. Framework 346 4.1. Single Routing Instance 348 4.1.1. Summary Block Diagram View 350 The CTG framework for a single routing instance is illustrated in Figure 351 1, where a composite link is configured between routers R1 and R2. In 352 this example, the composite link has three component links. A composite 353 link is defined in ITU-T [ITU-T G.800] as a single link bundle that 354 comprises multiple parallel component links between the two routers. 355 Each component link in a composite link is supported by a separate 356 server layer trail. A component link can be implemented by different 357 transport technologies such as wavelength, SONET/SDH, OTN, Ethernet PHY, 358 Ethernet VLAN, or can be a logical link [LSP Hierarchy], for example an 359 MPLS or MPLS-TP LSP. Even if the transport technology implementing the 360 component links is identical, the characteristics (e.g., bandwidth, 361 latency) of the component links may differ. 363 An important framework concept is that of a CTG connection shown in 364 Figure 1.
Instead of simply mapping the incoming traffic flows directly 365 to the component links, aggregating multiple flows into a connection 366 makes the measurement of actual bandwidth usage more scalable and 367 manageable. Then the CTG can place connections in a 1:1 manner onto the 368 component links. Although the mapping of flows to connections and then 369 to a component link is a local implementation matter, the management 370 plane configuration and measurement of this mapping is an important 371 external operational interface necessary for interoperability. Note that 372 a special case of this model is where a single flow is mapped to a 373 single connection. 375 Management Plane 376 Configuration and Measurement <----------------+ 377 ^ | 378 | | 379 | | 380 v v 381 +---------+ +---------+ 382 Control | R1 | | R2 | Control 383 Plane ====> | | ====> Plane 384 | +---+ Component Link 1 +---+ | 385 | | |===========================| | | 386 | | |~~~~~~ CTG Connections ~~~~| | | 387 ~~|~~>~~| |===========================| |~~>~~|~~ 388 ~~|~~>~~| C | Component Link 2 | C |~~>~~|~~ 389 Traffic ~~|~~>~~| |===========================| |~~>~~|~~ Traffic 390 Flows ~~|~~>~~| T |~~~~~~ CTG Connections ~~~~| T |~~>~~|~~ Flows 391 ~~|~~>~~| |===========================| |~~>~~|~~ 392 ~~|~~>~~| G | Component Link 3 | G |~~>~~|~~ 393 ~~|~~>~~| |===========================| |~~>~~|~~ 394 | | |~~~~~~ CTG connections ~~~~| | | 395 | | |===========================| | | 396 | +---+ +---+ | 397 +---------+ +---------+ 398 ! ! ! ! 399 ! !<---- Component Links ---->! ! 400 !<------ Composite Link ------->! 402 Figure 1: Composite Transport Group Architecture Model 404 CTG functions can be grouped into two major categories, as described in 405 the following subsections. 407 4.1.2. CTG Interior Functions 409 CTG Interior Functions: implemented within the interior of MPLS routers 410 connected via a composite link. 
This includes the local data plane 411 functions of determining the component link on which a traffic flow is 412 placed. Management configuration for some aspects of these interior 413 functions is important to achieve operational consistency, and this is 414 the focus of requirements in this document for interior functions. 416 4.1.3. CTG Exterior Functions 418 CTG Exterior Functions have aspects that are applicable exterior to the 419 CTG connected MPLS routers. In other words, these are functions used by 420 other routers, such as routing advertisements and signaling messages 421 related to specific characteristics of a composite link. 423 4.1.4. Multi-Layer Network Context 425 The model of Figure 1 applies to at least the scenarios illustrated in 426 Figure 2. The component links may be physical or logical, and the 427 composite link may be made up of a mixture of physical and logical links 428 supported by different technologies. Figure 2 and the following 429 description provide a contextual framework for the multi-layer 430 networking related problem described in section 3.2. In the first 431 scenario, a set of physical links connect adjacent (P) routers (R1/R2). 433 In the second scenario, a set of logical links connect adjacent (P or 434 PE) routers over other equipment (i.e., R3/R4) that may implement RSVP- 435 TE signaled MPLS tunnels which may be in the same IGP as R1/R2 or in a 436 different IGP. When R3 and R4 are not part of R1/R2's IGP (e.g., they 437 may implement MPLS-TP) R3/R4 can have a signaling but not a routing 438 interface with R1/R2. In other words, R3/R4 offers connectivity to R1/R2 439 in an overlay model. Another case is where R3/R4 provide a TE-LSP 440 segment of a TE-LSP between R1 and R2. 442 +----+---+ 1. Physical Link +---+----+ 443 | | |----------------------------------------------| | | 444 | | | | | | 445 | | | +------+ +------+ | C | | 446 | | C | | MPLS | 2. Logical Link | MPLS | | | | 447 | | |....
|......|.....................|......|....| | | 448 | | |-----| R3 |---------------------| R4 |----| | | 449 | | T | +------+ +------+ | T | | 450 | | | | | | 451 | | | | | | 452 | | G | +------+ +------+ | G | | 453 | | | |GMPLS | 3. Lower Layer Link |GMPLS | | | | 454 | | |. ...|......|.....................|......|....| | | 455 | | |-----| R5 |---------------------| R6 |----| | | 456 | | | +------+ +------+ | | | 457 | R1 | | | | R2 | 458 +----+---+ +---+----+ 459 |<---------- Composite Link ----------------->| 461 Figure 2: Illustration of Component Link Types 463 In the third scenario, the component links are GMPLS lower layer LSPs (e.g., Fiber, 464 Wavelength, TDM) as determined by a lower layer network in a multi-layer network 465 deployment, as illustrated by R5/R6. In this case, R5 and R6 would 466 usually not be part of the same IGP as R1/R2 and may have a static 467 interface, or may have a signaling but not a routing association with R1 468 and R2. Note that in scenarios 2 and 3 when the intermediate routers are 469 not part of the same IGP as R1/R2 (i.e., can be viewed as operating at a 470 lower layer) the characteristics of these links (e.g., latency) may 471 change dynamically, and there is an operational desire to handle this 472 type of situation in a more automated fashion than is currently possible 473 with existing protocols. Note that this problem currently occurs with a 474 single lower-layer link in existing networks and it would be desirable 475 for the solution to handle the case of a single lower-layer component 476 link as well. Note that the interfaces at R1 and R2 associated with 477 these different component links can be configured with IP addresses or 478 use unnumbered links as an interior, local function, since the individual 479 component links are not advertised as the CTG virtual interface. 481 4.2.
Multiple Routing Instances 483 In the case where the routers connected via a CTG support multiple 484 routing instances there is additional context as described in this 485 section. In general, each routing instance can have its own instances of 486 control plane, IGP, and/or routing/signaling protocols. In general, 487 they need not be aware of the existence of the other routing instances. 488 However, it is operationally desirable for efficiency reasons for these 489 routing instances to share the resources of a composite link and have 490 the capability for a higher level of control logic to allocate resources 491 amongst the instances based upon configured policy and the current state 492 of at least the local composite link, but potentially that of other 493 composite links in the network. Figure 3 shows the model where a 494 composite link appears as a routable virtual interface to each routing 495 instance. 497 +-----+---+ Component Link1 +---+-----+ 498 | | |----------------------------------------| | | 499 |RIA.1| | | |RIA.2| 500 | | C | Virtual Interface | C | | 501 |IGPA====================================================IGPA| 502 |_____| T | Component Link2 | T |_____| 503 | | |----------------------------------------| | | 504 |RIB.1| G | | G |RIB.2| 505 | | | Component Link3 | | | 506 | | |----------------------------------------| | | 507 |IGPB====================================================IGPB| 508 +-----+---+ Virtual Interface +---+-----+ 509 | | 510 |<------------- Composite Link ----------------->| 511 Figure 3: Routing Instances Sharing Composite Link 513 In Figure 3, the router on the left side is configured with two routing 514 instances (RI) RIA.1 and RIB.1. Another router on the right side is 515 configured with two routing instances RIA.2 and RIB.2. Routing instance 516 A belongs to IGPA network and routing instance B belongs to IGPB 517 network. In this example the composite link contains three component 518 links. 
IGPA and IGPB can be TE and/or non-TE enabled. In this case, 519 there are additional CTG related functions related to the dynamic 520 allocation of resources in the component links to each of the multiple 521 routing instances. Furthermore, there are operational scenarios where, 522 in response to certain failure scenarios and/or load conditions, the 523 multi-routing instance CTG function may preempt certain LSPs and/or 524 cause changes in the routing information communicated by the IGPs, as 525 detailed in the section on multi-instance CTG exterior function 526 requirements. 528 The multiple routing instance case of CTG appears to have a number of 529 requirements and context in common with the single routing instance of 530 CTG, and hence it is retained within the same document in this version. 531 The structure of this framework section, as well as the following 532 requirements section, is to place the multiple routing instance CTG 533 requirements at the end and to only describe aspects unique to the 534 multiple routing instance case. 536 The larger view of CTG as a higher level instance in the context of 537 multiple lower level routing instances may be sufficiently different and 538 broad enough in scope to justify elaboration in a separate document. 539 However, an objective should be to reuse the framework and as many common 540 requirements from the single routing instance CTG framework and 541 requirements as possible. 543 5. CTG Requirements for a Single Routing Instance 545 5.1. Management and Measurement of CTG Interior Functions 547 5.1.1. Configuration as a Routable Virtual Interface 549 The operator SHALL be able to configure a "virtual interface" 550 corresponding to a composite link and component link characteristics as 551 a TE link or an IP link in an IP/MPLS network.
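As an illustration only (not part of the requirements), one possible local policy for deriving the advertised virtual-interface TE parameters from the configured component link parameters can be sketched as follows. The specific policy choices here (summing unreserved bandwidth, advertising the largest single component as the maximum LSP bandwidth, and advertising worst-case latency) are assumptions of this sketch, not behavior mandated by this document.

```python
# Hypothetical sketch: deriving advertised composite-link (virtual
# interface) parameters from component link parameters. The policy
# choices below are illustrative assumptions, not requirements.

from dataclasses import dataclass


@dataclass
class ComponentLink:
    available_bw: float   # unreserved bandwidth, Mb/s
    max_bw: float         # link capacity, Mb/s
    latency_ms: float


def derive_virtual_interface(components, policy="conservative"):
    """Derive advertised TE parameters for the composite link.

    'conservative' advertises worst-case latency, so LSPs that need
    low latency are not attracted by a best-case figure that only
    some component links can actually deliver.
    """
    if not components:
        raise ValueError("a composite link needs at least one component")
    return {
        # Bandwidth pools across all component links.
        "available_bw": sum(c.available_bw for c in components),
        "max_bw": sum(c.max_bw for c in components),
        # A single LSP must fit on one component link (no inverse mux),
        # so the largest component bounds the admissible LSP size.
        "max_lsp_bw": max(c.available_bw for c in components),
        "latency_ms": (max if policy == "conservative" else min)(
            c.latency_ms for c in components),
    }


# Three heterogeneous component links, as permitted by this framework.
links = [ComponentLink(4000, 10000, 5.0),
         ComponentLink(9000, 10000, 12.0),
         ComponentLink(2500, 2500, 5.0)]
adv = derive_virtual_interface(links)
```

Note how the derived maximum LSP bandwidth (9000) differs from the total available bandwidth (15500); advertising only the sum would invite RSVP-TE requests that no single component link can carry.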
553 The solution SHALL allow configuration of virtual interface parameters 554 for a TE link (e.g., available bandwidth, maximum bandwidth, maximum 555 allowable LSP bandwidth, TE metric, and resource classes (i.e., 556 administrative groups) or link colors). 558 The solution SHALL allow configuration of virtual interface parameters 559 for an IP link used for MPLS (e.g., administrative cost or weight). 561 The solution SHALL support configuration of a composite link composed of 562 a set of component links that may be logical or physical, with each 563 component link potentially having at least the following characteristics, 564 which may differ: 566 o Logical/Physical 568 o Bandwidth 570 o Latency 572 o QoS characteristics (e.g., jitter, error rate) 574 The "virtual interface" SHALL appear as a fully-featured routing 575 adjacency in each routing instance, not just as an FA [RFC4206]. In 576 particular, it needs to work with at least the following IP/MPLS 577 control protocols: OSPF/IS-IS, LDP, OSPF-TE/ISIS-TE, and RSVP-TE. 579 CTG SHALL accept a new component link or remove an existing component 580 link by operator provisioning or in response to signaling at a lower 581 layer (e.g., using GMPLS). 583 The solution SHALL support derivation of the advertised interface 584 parameters from configured component link parameters based on operator 585 policy. 587 A composite link SHALL be configurable as a numbered or unnumbered link 588 (virtual interface in IP/MPLS). 590 A component link SHALL be configurable as a numbered link or unnumbered 591 link. A component link SHOULD NOT be advertised in the IGP. 593 5.1.2. Traffic Flow and CTG Mapping 595 CTG SHALL support operator assignment of traffic flows to specific 596 connections. 598 CTG SHALL support operator assignment of connections to specific 599 component links.
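The interior placement step described above (each CTG connection routed onto exactly one component link) can be sketched as follows, for illustration only. The best-fit selection rule and the optional latency bound are assumptions of this sketch; the actual selection algorithm is a local implementation matter per Section 4.1.2.

```python
# Hypothetical sketch of CTG interior placement: each CTG connection
# is placed 1:1 onto a component link that satisfies its bandwidth
# demand and (optionally) a latency bound. Best fit by residual
# bandwidth is an illustrative local policy, not a mandate.

def place_connection(conn_bw, components, max_latency_ms=None):
    """Return the index of the chosen component link, or None.

    components: list of dicts with 'free_bw' and 'latency_ms' keys.
    A None result models a bandwidth shortage, to be resolved by the
    preemption policy of Section 5.1.2.3.
    """
    candidates = [
        (i, c) for i, c in enumerate(components)
        if c["free_bw"] >= conn_bw
        and (max_latency_ms is None or c["latency_ms"] <= max_latency_ms)
    ]
    if not candidates:
        return None  # no feasible component link
    # Best fit: the link left with the least residual free bandwidth.
    i, best = min(candidates, key=lambda ic: ic[1]["free_bw"] - conn_bw)
    best["free_bw"] -= conn_bw
    return i


# Two component links with differing free bandwidth and latency.
comps = [{"free_bw": 1000, "latency_ms": 5.0},
         {"free_bw": 400, "latency_ms": 20.0}]
# A 300 Mb/s connection with a 10 ms bound can only use the first link.
idx = place_connection(300, comps, max_latency_ms=10.0)
```

Because a connection is pinned to a single component link, the latency bound can be honored end to end, unlike per-packet hashing across links of unequal latency.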
601 CTG SHALL support separation of resources for traffic flows mapped to 602 connections that have access to TE information (e.g., RSVP-TE signaled 603 flows) from those that do not have access to TE information (e.g., LDP- 604 signaled flows). 606 The solution SHALL support transport of IP packets across a composite link 607 for control plane (signaling, routing) and management plane functions. 609 In order to prevent packet loss, CTG MUST employ make-before-break when 610 the mapping of a CTG connection to a component link has to 611 change. 613 5.1.2.1. Using Control Plane TE Information 615 The following requirements apply to the case of RSVP-TE signaled LSPs. 617 The solution SHALL support admission control for RSVP-TE LSPs 618 signaled from the routers outside the CTG. Note that RSVP-TE signaling 619 need not specify the actual component link, because the selection of the 620 component link is a local matter of the two adjacent routers, based upon 621 signaled and locally configured information. 623 CTG SHALL be able to receive, interpret, and act upon at least the 624 following RSVP-TE signaled parameters: bandwidth, setup priority, 625 holding priority [RFC 3209, RFC 2215], preemption priority, and traffic 626 class [RFC 4124], and apply them to the CTG connections to which the LSP is 627 mapped. 629 CTG SHALL support configuration of at least the following parameters on 630 a per composite link basis: 632 o Local Bandwidth Oversubscription factor 634 5.1.2.2. When no TE Information is Available (i.e., LDP) 636 The following requirements apply to the case of LDP signaled LSPs when 637 no signaled TE information is available. 639 CTG SHALL map LDP-assigned labeled packets based upon local 640 configuration (e.g., label stack depth) to define a CTG connection that 641 is mapped to one of the component links by the CTG. 643 The solution SHALL map LDP-assigned labeled packets based on the FEC 644 identified by the outer label.
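The LDP mapping requirements above can be sketched as a stable hash over the top of the label stack. The function names and the use of SHA-256 are illustrative assumptions, not part of this document; the point is only that packets sharing the same top labels (up to the locally configured stack depth) land on the same CTG connection, which keeps a flow's packets in order.

```python
import hashlib

def ldp_flow_key(label_stack, depth):
    """Flow key: the top `depth` labels of an LDP-labeled packet.
    Packets sharing a key belong to the same CTG connection."""
    return tuple(label_stack[:depth])

def map_to_connection(label_stack, depth, num_connections):
    """Stable hash of the flow key onto one of `num_connections`
    CTG connections; stability avoids reordering within a flow."""
    key = ldp_flow_key(label_stack, depth)
    digest = hashlib.sha256(repr(key).encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_connections
```

With a configured depth of 2, two packets differing only in a deeper label hash to the same connection; an entropy label pushed within the inspected depth would instead spread such packets across connections.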
646 The solution SHALL support entropy labels [Entropy Label] to map more 647 granular flows to connections. 649 The solution SHALL be able to measure the bandwidth actually used by a 650 particular connection and derive proper local TE information for 651 the connection. 653 When the connection bandwidth exceeds the component link capacity, the 654 solution SHALL be able to redistribute the traffic flows across several 655 connections. 657 The solution SHALL support management plane controlled parameters that 658 define at least a minimum bandwidth, maximum bandwidth, preemption 659 priority, and holding priority for each connection without TE 660 information (i.e., LDP signaled flows). 662 5.1.2.3. Handling Bandwidth Shortage Events 664 The following requirements apply to a virtual interface that supports 665 traffic flows both with and without TE information, in response to a 666 bandwidth shortage event. A "bandwidth shortage" can arise in CTG if the 667 total bandwidth of the connections with provisioned/signaled TE 668 information and those signaled without TE information (but with measured 669 bandwidth) exceeds the bandwidth of the composite link that carries the 670 CTG connections. 672 CTG SHALL support a policy-based preemption capability such that, in the 673 event of such a "bandwidth shortage", the signaled or configured 674 preemption and holding parameters can be used to apply the following 675 treatments to the connections: 677 o For a connection that has RSVP-TE LSPs, signal the router that the 678 LSP has been preempted. CTG SHALL support soft preemption (i.e., 679 notify the preempted LSP source prior to preemption).
[Soft 680 Preemption]. 682 o For a connection that carries LDP-signaled flows, where the CTG is 683 aware of the LDP signaling down to the preempted label stack depth, signal 684 release of the label to the router. 686 o For a connection that has non-re-routable RSVP-TE LSPs or non- 687 releasable LDP labels, signal the router or operator that the LSP or 688 LDP label has been lost. 690 5.1.3. Management of Other Operational Aspects 692 5.1.3.1. Resilience 694 Component links in a composite link may fail independently. The failure 695 of a component link may impact some CTG connections. The impacted CTG 696 connections SHALL be transferred to other active component links using 697 the same rules as for the original assignment of CTG connections to 698 component links. 700 The component link recovery scheme SHALL perform equal to or better than 701 existing local recovery methods. A short service disruption may occur 702 during the recovery period. 704 Fast ReRoute (FRR) SHALL be configurable for a composite link. 706 5.1.3.2. Flow/Connection Mapping Change Frequency 708 The solution requires methods to dampen the frequency of flow-to- 709 connection mapping changes, connection bandwidth changes, and/or 710 connection-to-component-link mapping changes (e.g., for re- 711 optimization). Operator-imposed control policy SHALL be supported. 713 The solution SHALL support latency- and delay-variation-sensitive traffic, 714 limit mapping changes for these flows, and place them on 715 component links that have lower latency.
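The resilience and placement behavior described above can be sketched as follows. This is a simplified model under assumed dictionary keys (`mbps`, `latency_ms`, `sensitive` are illustrative names, not from this document): latency-sensitive connections are placed on the lowest-latency component link with capacity, and on a component link failure only the impacted connections are re-placed using the same rules as the original assignment.

```python
def place(connections, links, used=None):
    """Place latency-sensitive connections on the lowest-latency
    component links that still have capacity; other connections go to
    the least-loaded link that fits."""
    if used is None:
        used = dict.fromkeys(links, 0.0)
    placement = {}
    # Latency-sensitive connections are placed first.
    for conn in sorted(connections, key=lambda c: not c["sensitive"]):
        if conn["sensitive"]:
            order = sorted(links, key=lambda n: links[n]["latency_ms"])
        else:
            order = sorted(links, key=lambda n: used[n])
        for name in order:
            if used[name] + conn["mbps"] <= links[name]["mbps"]:
                used[name] += conn["mbps"]
                placement[conn["id"]] = name
                break
    return placement

def reroute_after_failure(placement, connections, links, failed):
    """After a component link failure, re-place only the impacted
    connections on the surviving links, using the same placement
    rules as the original assignment."""
    survivors = {n: l for n, l in links.items() if n != failed}
    by_id = {c["id"]: c for c in connections}
    kept = {cid: n for cid, n in placement.items() if n != failed}
    used = dict.fromkeys(survivors, 0.0)
    for cid, name in kept.items():
        used[name] += by_id[cid]["mbps"]
    impacted = [by_id[cid] for cid in placement if placement[cid] == failed]
    kept.update(place(impacted, survivors, used))
    return kept
```

Note that unaffected connections keep their mapping, which limits the mapping-change frequency in line with the dampening requirement above.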
717 Latency-sensitive traffic SHALL be identified by 718 any of the following methods: 720 o Use of a pre-defined local policy setting at the composite link ingress 722 o A manually configured setting at the composite link ingress 724 o MPLS traffic class in an RSVP-TE signaling message (i.e., Diffserv-TE 725 traffic class [RFC 4124]) 727 Latency-sensitive traffic SHOULD also be identified (if 728 possible) by any of the following methods: 730 o Pre-set bits in the payload (e.g., DSCP bits for IP, or Ethernet user 731 priority for an Ethernet payload), which are typically assigned by the 732 end-user 734 o The MPLS Traffic-Class field (a.k.a. EXP), which is typically set by the 735 LER/LSR and whose value can be used to differentiate the 736 latency-sensitive traffic of end-users 738 5.1.3.3. OAM Messaging Support 740 Fault management requirements 742 There are two aspects of fault management in the solution: one concerns the 743 composite link between two adjacent routers; the other concerns 744 the individual component links. 746 OAM protocols for fault management from the outside routers (e.g., LSP- 747 Ping/Trace, IP-Ping/Trace) SHALL be treated transparently. 749 For example, an LSP-ping/trace message is expected to be able to 750 diagnose composite link status and its associated virtual interface 751 information; however, it is not required to directly address individual 752 component links and CTG connections, because these are a local matter of the two 753 routers.
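Before moving on, the traffic identification methods listed in Section 5.1.3.2 above can be combined into a simple classifier. The code-point sets below are hypothetical operator policy (which DSCP or Traffic-Class values mark latency-sensitive traffic is configured per deployment, not fixed by this document); the local policy or manual setting at the composite link ingress takes precedence.

```python
# Hypothetical operator policy: which code points mark latency-sensitive
# traffic is configured per deployment.
LOW_LATENCY_DSCP = {46}   # e.g., EF (Expedited Forwarding)
LOW_LATENCY_TC = {5}      # e.g., an MPLS Traffic-Class value reserved for it

def is_latency_sensitive(local_policy=False, dscp=None, mpls_tc=None):
    """Combine the identification methods of Section 5.1.3.2: a local
    policy or manual setting at the composite link ingress wins
    outright; otherwise the payload DSCP or the MPLS Traffic-Class
    field is checked against the operator-configured code-point sets."""
    if local_policy:
        return True
    if dscp is not None and dscp in LOW_LATENCY_DSCP:
        return True
    if mpls_tc is not None and mpls_tc in LOW_LATENCY_TC:
        return True
    return False
```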
755 The solution SHALL support a fault notification mechanism (e.g., syslog or 756 SNMP trap to the management system/operators) at the granularity 757 of the affected part, as detailed below: 759 o Data plane, at the component link level 761 o Data plane, at the composite link level (as a whole) 763 o Control plane, at the virtual interface level (i.e., routing/signaling 764 on it) 766 A CTG that believes that the underlying server layer might not 767 efficiently report failures can run Bidirectional Forwarding 768 Detection (BFD) over a component link. 770 CTG SHALL support configuration of timers so that lower layer methods 771 have time to detect/restore faults before a CTG function is 772 invoked. 774 The solution SHALL allow the operator or control plane to query which 775 component link an LSP is assigned to. 777 5.2. CTG Exterior Functions 779 5.2.1. Signaling Protocol Extensions 781 The solution SHALL support signaling a composite link between two 782 routers (e.g., P, P/PE, or PE). 784 The solution SHALL support signaling a component link as part of a 785 composite link. 787 The solution SHALL support signaling a composite link and automatically 788 injecting it into the IGP (per [LSP Hierarchy]) or treating it as a private 789 link between the two connected routers. 791 The solution SHALL support signaling of at least the following 792 additional parameters for a component link: 794 o Minimum and maximum (estimated or measured) latency 796 o Bandwidth of the highest- and lowest-speed component links 798 The solution SHOULD support signaling of at least the following 799 additional parameters for a component link: 801 o Delay variation 803 o Loss rate 805 5.2.2. Routing Advertisement Extensions 807 It SHALL be possible to represent multiple values, or a range of values, 808 for the composite link interface parameters in order to communicate 809 information about differences in the constituent component links in an 810 exterior function route advertisement.
For example, a range of latencies 811 for the component links that comprise the composite link could be 812 advertised. 814 Multi-Layer Networking Aspects 816 The solution SHALL support derivation of the advertised interface 817 parameters from component link parameters signaled from a lower layer 818 (e.g., latency), based on operator policy. 820 6. CTG Requirements for Multiple Routing Instances 822 This section covers requirements that apply when the 823 solution supports multiple routing instances. Unless otherwise stated, 824 all requirements for a single routing instance from section 5 apply 825 individually to each of the multiple routing instances. 827 6.1. Management and Measurement of CTG Interior Functions 829 6.1.1. Appearance as Multiple Routable Virtual Interfaces 831 CTG SHALL support multiple routing instances that each see a separate 832 "virtual interface" to a shared composite link composed of parallel 833 physical/logical component links between a pair of routers. 835 6.1.2. Control of Resource Allocation 837 The operator SHALL be able to statically assign resources (e.g., a 838 component link, or bandwidth on a sub/logical interface) to each routing 839 instance's virtual interface. 841 6.1.3. Configuration of Prioritization and Preemption 843 The solution SHALL support a policy-based preemption capability, local to 844 the CTG, across all routing instances, and a set of requirements 845 similar to those listed in section 5.1.2.3. Note that this requirement 846 applies across the multiple routing instances. 848 6.2. CTG Exterior Functions 850 6.2.1. CTG Operation as a Higher-Level Routing Instance 852 The following requirements apply to the case where CTG exterior 853 functions supporting multiple routing instances communicate with each 854 other.
856 CTG exterior functions SHALL be able to advertise parameters such as 857 reserved capacity, measured capacity usage, and available resources for 858 the CTGs for which they perform CTG interior functions. 860 CTG exterior functions SHALL be able to signal and respond to requests 861 for a change in the allocation of CTG interior function resources. 863 7. Security Considerations 865 The solution is a local function on the router to support traffic 866 engineering management over multiple parallel links. It does not 867 introduce a security risk for the control plane or data plane. 869 The solution could change the frequency of routing update messages and 870 therefore could change routing convergence time. The solution MUST 871 provide controls to dampen the frequency of such changes so as not to 872 destabilize routing protocols. 874 8. IANA Considerations 876 IANA actions to provide solutions are for further study. 878 9. References 880 9.1. Normative References 882 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 883 Requirement Levels", BCP 14, RFC 2119, March 1997. 885 [RFC2215] Shenker, S. and J. Wroclawski, "General Characterization 886 Parameters for Integrated Service Network Elements", RFC 2215, 887 September 1997. 889 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., and 890 G. Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 3209, December 2001. 892 [RFC3477] Kompella, K., "Signalling Unnumbered Links in Resource 893 ReSerVation Protocol - Traffic Engineering (RSVP-TE)", RFC 3477, January 894 2003. 896 [RFC4206] Kompella, K. and Y. Rekhter, "Label Switched Paths (LSP) 897 Hierarchy with Generalized Multi-Protocol Label Switching (GMPLS) 898 Traffic Engineering (TE)", RFC 4206, October 2005. 900 [RFC4090] Pan, P., "Fast Reroute Extensions to RSVP-TE for LSP 901 Tunnels", RFC 4090, May 2005. 903 [RFC4124] Le Faucheur, F., Ed., "Protocol Extensions for Support of 904 Diffserv-aware MPLS Traffic Engineering", RFC 4124,
June 2005. 906 [RFC4201] Kompella, K., Rekhter, Y., and L. Berger, "Link Bundling in 907 MPLS Traffic Engineering (TE)", RFC 4201, October 2005. 909 9.2. Informative References 911 [Entropy Label] Kompella, K. and S. Amante, "The Use of Entropy Labels 912 in MPLS Forwarding", November 2008, Work in Progress. 914 [LSP Hierarchy] Shiomoto, K. and A. Farrel, "Procedures for Dynamically 915 Signaled Hierarchical Label Switched Paths", November 2008, Work in 916 Progress. 918 [Soft Preemption] Meyer, M. and J. Vasseur, "MPLS Traffic Engineering 919 Soft Preemption", February 2009, Work in Progress. 921 10. Acknowledgments 923 The authors would like to thank Adrian Farrel from Old Dog Consulting for his 924 extensive comments and suggestions, Ron Bonica from Juniper, Nabil Bitar from 925 Verizon, Eric Gray from Ericsson, Lou Berger from LabN, and Kireeti 926 Kompella from Juniper, for their reviews and great suggestions. 928 This document was prepared using 2-Word-v2.0.template.dot.
948 Authors' Addresses 950 So Ning 951 Verizon 952 2400 N. Glem Ave., 953 Richardson, TX 75082 954 Phone: +1 972-729-7905 955 Email: ning.so@verizonbusiness.com 957 Andrew Malis 958 Verizon 959 117 West St. 960 Waltham, MA 02451 961 Phone: +1 781-466-2362 962 Email: andrew.g.malis@verizon.com 964 Dave McDysan 965 Verizon 966 22001 Loudoun County PKWY 967 Ashburn, VA 20147 968 Email: dave.mcdysan@verizon.com 970 Lucy Yong 971 Huawei USA 972 1700 Alma Dr. Suite 500 973 Plano, TX 75075 974 Phone: +1 469-229-5387 975 Email: lucyyong@huawei.com 977 Frederic Jounay 978 France Telecom 979 2, avenue Pierre-Marzin 980 22307 Lannion Cedex, 981 FRANCE 982 Email: frederic.jounay@orange-ftgroup.com 984 Yuji Kamite 985 NTT Communications Corporation 986 Granpark Tower 987 3-4-1 Shibaura, Minato-ku 988 Tokyo 108-8118 989 Japan 990 Email: y.kamite@ntt.com