Network Working Group                                              N. So
Internet-Draft                                                  A. Malis
Intended status: Standards Track                              D. McDysan
Expires: August 18, 2009                                         Verizon
                                                                 L. Yong
                                                              Huawei USA
                                                               F. Jounay
                                                          France Telecom
                                                       February 14, 2009

     Framework and Requirements for Composite Transport Group (CTG)
              draft-so-yong-mpls-ctg-framework-requirement-01

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on August 18, 2009.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.

Abstract

   This document describes a traffic distribution problem that arises in
   today's IP/MPLS networks when multiple physical or logical links are
   configured between two routers.
   The document presents the Composite Transport Group (CTG) framework
   as a TE transport methodology over a composite link to address these
   problems, and specifies a set of requirements for CTG.

Table of Contents

   1.  Introduction
   2.  Conventions used in this document
     2.1.  Acronyms
     2.2.  Terminology
   3.  Problem Statements
     3.1.  Incomplete/Inefficient Utilization
     3.2.  Inefficiency/Inflexibility of Logical Interface Bandwidth
           Allocation
   4.  Composite Transport Group Framework
     4.1.  CTG Framework
     4.2.  CTG Performance
     4.3.  Differences between CTG and a Link Bundle
       4.3.1.  Virtual Routable Link vs. TE Link
       4.3.2.  Component Link Parameter Independence
   5.  Composite Transport Group Requirements
     5.1.  Composite Link Appearance as a Routable Virtual Interface
     5.2.  CTG Mapping of Traffic Flows to Component Links
       5.2.1.  Mapping Using Router TE Information
       5.2.2.  Mapping When No Router TE Information is Available
     5.3.  Bandwidth Control for Connections with and without TE
           Information
     5.4.  CTG Transport Resilience
     5.5.  CTG Operational and Performance Requirements
   6.  Security Considerations
   7.  IANA Considerations
   8.
       Acknowledgements
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Authors' Addresses

1.  Introduction

   IP/MPLS network traffic growth forces carriers to deploy multiple
   parallel physical or logical links between pairs of routers.  Such a
   network is also expected to carry some flows at rates that can
   approach the capacity of a single link, and other flows that are very
   small compared to a single link's capacity.  No existing technology
   allows carriers to efficiently utilize all parallel transport
   resources in a complex IP/MPLS network environment.  Composite
   Transport Group (CTG) provides local traffic engineering management
   and transport over multiple parallel links, which solves this problem
   in MPLS networks.

   The primary function of CTG is to efficiently transport aggregated
   traffic flows over multiple parallel links.  CTG can take flow TE
   information into account when distributing the flows over individual
   links, providing local traffic engineering management and link
   failure protection.  Because all links have the same ingress and
   egress points, CTG does not need to perform route computation and
   forwarding based on traffic unit endpoint information, which allows
   for a unique local transport traffic engineering scheme.  CTG can
   transport both TE flows and non-TE flows.  It maps the flows to CTG
   connections that have assigned TE information, based either on flow
   TE information or on automatic bandwidth measurement of the
   connections.  The CTG distribution function uses CTG connection TE
   information to select the component links that CTG connections
   traverse.
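   As an informal illustration of the ingress behaviour just described
   (not part of this specification; all names and values are invented),
   flows with signaled TE bandwidth keep it, while flows without TE
   information fall back to an auto-measured rate when their CTG
   connections are created:

```python
# Hypothetical sketch: assigning bandwidth to CTG connections.  A flow
# with signaled TE bandwidth uses it; a flow without TE information
# falls back to the auto-measured rate at the CTG ingress.

def connection_bandwidth(flow_te_bw, measured_bw):
    """Return the bandwidth to assign to the CTG connection for a flow."""
    return flow_te_bw if flow_te_bw is not None else measured_bw

flows = [
    {"name": "rsvp-lsp-1", "te_bw": 400, "measured": 380},  # TE flow
    {"name": "ldp-flow-7", "te_bw": None, "measured": 35},  # non-TE flow
]
connections = {
    f["name"]: connection_bandwidth(f["te_bw"], f["measured"]) for f in flows
}
# connections == {"rsvp-lsp-1": 400, "ldp-flow-7": 35}
```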
   This document contains the problem statements, the framework, and a
   set of requirements for a TE transport methodology over a composite
   link.  The necessity for protocol extensions to provide solutions is
   for future study.

2.  Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

2.1.  Acronyms

   BW:    Bandwidth
   CTG:   Composite Transport Group
   ECMP:  Equal Cost Multi-Path
   FRR:   Fast Re-Route
   LAG:   Link Aggregation Group
   LDP:   Label Distribution Protocol
   LR:    Logical Router
   LSP:   Label Switched Path
   MPLS:  Multi-Protocol Label Switching
   OAM:   Operation, Administration, and Maintenance
   PDU:   Protocol Data Unit
   PE:    Provider Edge device
   RSVP:  Resource ReSerVation Protocol
   RTD:   Round Trip Delay
   TE:    Traffic Engineering
   VRF:   Virtual Routing and Forwarding

2.2.  Terminology

   Composite Link:  a group of component links that acts as a single
      routable interface.

   Component Link:  a physical link (e.g., lambda, Ethernet PHY) or a
      logical link (e.g., an LSP).

   Composite Transport Group (CTG):  the traffic-engineered transport
      function entity over a composite link.

   CTG Connection:  a connection used in the data plane.

3.  Problem Statements

   Two applications are described here that encounter problems when
   multiple parallel links are deployed between two routers in today's
   IP/MPLS networks.

3.1.  Incomplete/Inefficient Utilization

   An MPLS-TE network is deployed to carry traffic on RSVP-TE LSPs,
   i.e., traffic engineered flows.  When traffic volume exceeds the
   capacity of a single physical link, multiple physical links are
   deployed between two routers as a single backbone trunk.
   Assigning LSP traffic over multiple links while maintaining this
   backbone trunk as a higher-capacity, higher-availability trunk than a
   single physical link is an extremely difficult task for carriers
   today.  Three methods available today are described here.

   1.  A hashing method is common practice for traffic distribution over
       multiple paths.  Equal Cost Multi-Path (ECMP) for IP services and
       the IEEE-defined Link Aggregation Group (LAG) for Ethernet
       traffic are two widely deployed hashing-based technologies.
       However, two common occurrences in carrier networks often prevent
       hashing from being used efficiently.  First, in MPLS networks
       carrying mostly Virtual Private Network (VPN) traffic, the
       incoming traffic is usually highly encrypted, so the hashing
       depth is severely limited.  Second, the traffic in an MPLS-TE
       network typically contains a number of traffic flows with vast
       differences in bandwidth requirements.  Furthermore, the links
       may be of different speeds.  In those cases hashing can cause
       some links to be congested while others are only partially
       filled, because hashing can only distinguish the flows, not the
       flow rates.  A TE-based solution is better suited to these cases.
       The IETF has always had two technology tracks for traffic
       distribution: TE-based and non-TE-based.  A TE-based solution
       provides a natural complement to non-TE-based hashing methods.

   2.  Assigning individual LSPs to each link through constrained
       routing.  A planning tool can track the utilization of each link
       and the assignment of LSPs to the links.  To gain high
       availability, FRR [RFC4090] is used to create a bypass tunnel on
       one link to protect traffic on another link, or to create a
       detour LSP to protect another LSP.
       If bandwidth is reserved for the bypass tunnels or detour LSPs,
       the network reserves a large amount of capacity for failure
       recovery, which reduces the capacity available to carry other
       traffic.  If bandwidth is not reserved for the bypass tunnels and
       detour LSPs, the planning tool cannot assign LSPs properly to
       avoid congestion during link failure when there are more than two
       parallel links.  This is because during a link failure, the
       impacted traffic is simply put onto a bypass tunnel or detour LSP
       that does not have enough reserved bandwidth to carry the extra
       traffic during the failure recovery phase.

   3.  Facility protection, also called 1:1 protection.  One link is
       dedicated to protect another link, and traffic is assigned to
       only one link under normal conditions.  When the working link
       fails, traffic is switched to the protection link.  This requires
       50% of capacity for failure recovery, and it works only when
       there are two links.  With multiple parallel links, it causes
       inefficient use of network capacity because there is no sharing
       of protection capacity.  In addition, due to traffic burstiness,
       having one link fully loaded and another link idle increases
       transport latency and packet loss, which lowers the transport
       performance quality of the link.

   None of these methods satisfies carrier requirements, because of
   either poor link utilization or poor performance.  This forces
   carriers toward the solution of deploying a single higher-capacity
   link.  However, a higher-capacity link can be expensive compared with
   parallel lower-capacity links of equivalent aggregate capacity; a
   high-capacity link cannot be deployed in some circumstances due to
   physical impairments; and the highest-capacity link available may not
   be large enough for some carriers.
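   The hashing limitation described in method 1 above can be shown with
   a toy example (purely illustrative; the hash function and rates are
   invented): a hash balances flow counts across links, but it is blind
   to flow rates, so one link congests while another stays nearly idle.

```python
# Toy illustration of why hashing alone fails when flow rates differ
# widely: the hash balances flow *counts* but cannot see flow *rates*.

def toy_hash(flow_id: str, n_links: int) -> int:
    # Deterministic stand-in for a real header hash (not a real ECMP/LAG
    # hash; Python's built-in hash() is salted per run, so avoid it here).
    return sum(flow_id.encode()) % n_links

flows = {"flow-a": 10000, "flow-b": 10, "flow-c": 10, "flow-d": 10}  # Mbps
loads = [0, 0]
counts = [0, 0]
for fid, rate in flows.items():
    link = toy_hash(fid, 2)
    loads[link] += rate
    counts[link] += 1

# Flow counts are perfectly balanced across the two links, yet almost
# all of the traffic lands on one link:
# counts == [2, 2], loads == [10010, 20]
```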
   An LDP network can encounter the same issue as an MPLS-TE enabled
   network when multiple parallel links are deployed as a backbone
   trunk.  An LDP network can have large variance in flow rates: for
   example, small flows may carry stock tickers at a few kbps per flow,
   while large flows carrying machine-to-machine and server-to-server
   traffic from individual customers can approach 10 Gbps per flow.
   Such large traffic flows often cannot be broken into micro-flows, so
   hashing does not work well for networks carrying them.  Without per-
   flow TE information, this type of network has even more difficulty
   using multiple parallel links while keeping link utilization high.

3.2.  Inefficiency/Inflexibility of Logical Interface Bandwidth
      Allocation

   Logically-separate routing instances in some implementations further
   complicate the situation.  Dedicating separate physical backbone
   links, or, in the case of a shared common link, dedicating a portion
   of the link to each routing instance, is not efficient.  For example,
   if there are 2 routing instances and 3 parallel links, and half of
   each link's bandwidth is assigned to each routing instance, then
   neither routing instance can support an LSP with bandwidth greater
   than half the link bandwidth.  The same problem is also present when
   a single common link is shared using the dedicated logical interface
   and link bandwidth method.  An alternative for dealing with multiple
   parallel links is to assign a logical interface and bandwidth on each
   of the parallel physical links to each routing instance, which
   improves efficiency compared to dedicating physical links to each
   routing instance.

   Note that the traffic flows and LSPs from these different routing
   instances effectively operate in a Ships-in-the-Night mode, where
   they are unaware of each other.
   Inflexibility results when multiple sets of LSPs (e.g., from
   different routing instances) share one link or a set of parallel
   links and at least one set of LSPs can preempt the others; more
   efficient sharing of the link set between the routing instances is
   highly desirable in this case.

4.  Composite Transport Group Framework

4.1.  CTG Framework

   Composite Transport Group (CTG) is the TE method for transporting
   aggregated traffic over a composite link.  A composite link, as
   defined in ITU-T [ITU-T G.800], is a single link that bundles
   multiple parallel links between the same two subnetworks.  Each
   component link in a composite link is independent in the sense that
   each component link is supported by a separate server layer trail,
   which can be implemented by different transport technologies such as
   wavelength, Ethernet PHY, or MPLS(-TP).  The composite link conveys
   communication information using different server layer trails; thus
   the sequence of symbols across this link may not be preserved.

   Composite Transport Group (CTG) is primarily a local traffic
   engineering and transport framework over multiple parallel links or
   multiple paths.  The objective is for a composite link to appear as a
   virtual interface to the connected routers.  The router provisions
   incoming traffic over the virtual interface.  CTG creates CTG
   connections and maps incoming traffic to CTG connections.  CTG
   connections are transported over parallel links, i.e., the component
   links in a composite link.  The CTG distribution function can locally
   determine which component link each CTG connection traverses.  The
   CTG framework is illustrated in Figure 1 below.
       +-----------+                                +-----------+
       |       +---+                                +---+       |
       |       |   |================================|   |       |
   LSP,LDP,IGP |   |                                |   |       |
    ~~~|~~>~~~~| C |~~~~~~~ 5 CTG Connections ~~~~~~| C |~~~>~~~|~~~
    ~~~|~~>~~~~| T |================================| T |~~~>~~~|~~~
    ~~~|~~>~~~~|   |~~~~~~~ 3 CTG Connections ~~~~~~|   |~~~>~~~|~~~
       |       | G |================================| G |       |
       |       |   |~~~~~~~ 9 CTG Connections ~~~~~~|   |       |
       |       |   |================================|   |       |
       | R1    +---+                                +---+    R2 |
       +-----------+                                +-----------+
       !       !                                    !       !
       !       !<--------- Component Links -------->!       !
       !<---------------- Composite Link -------------------->!

        Figure 1: Composite Transport Group Architecture Model

   In Figure 1, a composite link is configured between routers R1 and
   R2.  The composite link has three component links.  To transport LSP
   traffic, CTG first creates a CTG connection for the LSP and then
   selects a component link to carry the connection (the same applies to
   LDP and IGP traffic).  A CTG connection exists only within the scope
   of a composite link.  The traffic in a CTG connection is transported
   over a single component link.

   The model in Figure 1 applies to two basic scenarios, but is not
   limited to them.  First, a set of physical links connects adjacent
   (P) routers.  Second, a set of logical links connects adjacent (P or
   PE) routers over other equipment that may implement RSVP-TE signaled
   MPLS tunnels or MPLS-TP tunnels.

   A CTG connection is a point-to-point logical connection over a
   composite link.  Connections ride on component links in a one-to-one
   or many-to-one relationship.  LSPs map to CTG connections in a one-
   to-one or many-to-one relationship.
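   One plausible placement rule for putting an entire CTG connection on
   a single component link is a least-loaded fit.  This is only a sketch
   under that assumption; the draft does not mandate any particular
   selection algorithm:

```python
# Hypothetical sketch: place a CTG connection on the least-loaded
# component link that has enough spare capacity, keeping the whole
# connection on one link (as the framework requires).

def place_connection(conn_bw, links):
    """links: list of dicts with 'capacity' and 'load'.
    Returns the chosen link index, or None if nothing fits."""
    candidates = [
        (links[i]["load"], i)
        for i in range(len(links))
        if links[i]["capacity"] - links[i]["load"] >= conn_bw
    ]
    if not candidates:
        return None  # would trigger preemption or rejection
    _, best = min(candidates)   # least-loaded link that fits
    links[best]["load"] += conn_bw
    return best

links = [
    {"capacity": 10, "load": 7},
    {"capacity": 10, "load": 2},
    {"capacity": 40, "load": 5},
]
assert place_connection(4, links) == 1  # least-loaded link with room
```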
   A connection can have the following traffic engineering parameters:

   o  bandwidth

   o  over-subscription factor

   o  placement priority

   o  holding priority

   CTG connection TE parameters can be mapped directly from the LSP
   parameters signaled in RSVP-TE, or can be set at the CTG management
   interface (CTG Logical Port).  The connection bandwidth shall be set.
   If an LSP has no bandwidth information, the bandwidth is calculated
   at the CTG ingress using the automatic bandwidth measurement
   function.

   LDP LSPs can be mapped onto connections per LDP label.  Both the
   outer label (PE-PE label) and the inner label (VRF label) can be used
   for the connection mapping.  CTG connection bandwidth shall be set
   through the auto-bandwidth measurement function at the CTG ingress.
   When the connection bandwidth approaches the component link capacity,
   CTG is able to reassign the flows in one connection into several
   connections and assign other component links to those connections
   without traffic disruption.

   A CTG component link can be a physical link or a logical link (LSP
   tunnel [LSP Hierarchy]) between two routers.  When component links
   are physical links, there is no restriction on component link type,
   bandwidth, or performance objectives (e.g., RTD and jitter).  Each
   component link maintains its own OAM.  CTG is able to get component
   link status from each link and take action upon component link status
   changes.

   Each component link can have its own Component Link Cost and
   Component Link Bandwidth as associated engineered parameters.  CTG
   uses component link parameters in the assignment of CTG connections
   to component links.

   CTG provides local traffic engineering management over parallel links
   based on CTG connection TE information and component link parameters.
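   The auto-bandwidth and connection-splitting behaviour described above
   can be sketched as follows.  The peak-of-samples measurement, the 90%
   threshold, and the greedy grouping are all invented for illustration;
   the draft does not specify them:

```python
# Hedged sketch: measure a connection's rate from samples, and when it
# approaches the component link capacity, split its flows into several
# connections that can then be placed on different component links.

def measured_bw(samples):
    # e.g., take the peak of recent rate samples as the connection bandwidth
    return max(samples)

def maybe_split(flow_rates, link_capacity, threshold=0.9):
    """Greedily group flows so each group stays under threshold*capacity.

    flow_rates: dict of flow id -> rate.  Returns a list of flow-id
    groups; each group becomes one CTG connection."""
    limit = threshold * link_capacity
    groups, current, total = [], [], 0
    for fid, rate in sorted(flow_rates.items(), key=lambda kv: -kv[1]):
        if current and total + rate > limit:
            groups.append(current)      # close the full group
            current, total = [], 0
        current.append(fid)
        total += rate
    groups.append(current)
    return groups

# A 12-unit aggregate near a 10-unit link splits into two connections:
# maybe_split({"a": 6, "b": 4, "c": 2}, 10) == [["a"], ["b", "c"]]
```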
   Component link selection for CTG connections is determined locally
   and may change without reconfiguring the traffic flows.  Changing the
   selection may be triggered by a component link condition change, by
   configuration of a new traffic flow or modification of an existing
   one, or by an operator-requested optimization process.  The
   assignment of CTG connections to component links enables TE-based
   traffic distribution and link failure recovery with much less link
   capacity than the current methods mentioned in the problem statements
   section.

   CTG connections are created for traffic management purposes on a
   composite link.  They do not change the forwarding schema.  The
   forwarding engine still forwards based on the LSP label created per
   traffic LSP.  Therefore, there is no change to forwarding.

   CTG techniques apply when the rate of each distinct traffic flow is
   not higher than the capacity of any component link in the composite
   link.

4.2.  CTG Performance

   Packet re-ordering when moving a CTG connection from one component
   link to another can occur when the new path is shorter than the
   previous path and the interval between packet transmissions is less
   than the difference in latency between the previous and the new
   paths.  If the new path is longer than the previous path, re-ordering
   will not occur, but the inter-packet delay variation will increase
   for those packets sent before and after the change from the previous
   to the new path.  Requirements are stated in this draft to allow an
   operator to control the frequency of CTG path changes, and thereby
   the rate of occurrence of these re-ordering or inter-packet delay
   variation events.

   In order to prevent packet loss, CTG must employ make-before-break
   when a connection-to-component-link mapping change has to occur.
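   The re-ordering condition stated at the start of Section 4.2 can be
   written as a small predicate (an illustration only, not normative
   text): two back-to-back packets can be re-ordered only if the new
   path is shorter and the gap between their transmissions is smaller
   than the latency difference between the paths.

```python
# Illustrative predicate for the Section 4.2 re-ordering condition.

def may_reorder(old_latency_ms, new_latency_ms, inter_packet_gap_ms):
    """True if moving a connection from the old path to the new path can
    re-order two packets sent inter_packet_gap_ms apart."""
    return (new_latency_ms < old_latency_ms
            and inter_packet_gap_ms < old_latency_ms - new_latency_ms)

assert may_reorder(12.0, 10.0, 1.0) is True   # 1 ms gap < 2 ms difference
assert may_reorder(12.0, 10.0, 3.0) is False  # gap exceeds the difference
assert may_reorder(10.0, 12.0, 0.1) is False  # longer path: no re-ordering
```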
   When CTG determines that the current component link for a connection
   is no longer sufficient for the connection's bandwidth requirement,
   the CTG ingress establishes a new connection with increased bandwidth
   on an alternative component link, and switches the traffic onto the
   new connection before the old connection is torn down.  If the new
   connection is placed on a link with equal or longer latency than the
   previous link, the packet re-ordering problem does not occur, but
   inter-packet delay variation will increase for a pair of packets.
   When a component link fails, CTG may also move some impacted CTG
   connections to other component links.  In this case, a short service
   disruption may occur, similar to that caused by other local
   protection methods.

   Time-sensitive traffic can be supported by CTG.  For example, when
   traffic that is very sensitive to latency (as indicated by pre-set
   priority bits, i.e., DSCP or Ethernet user priority) is carried over
   a CTG whose component links cannot all support the traffic latency
   requirement, the traffic flows with strict latency requirements can
   be mapped onto suitable component links manually or by using a pre-
   defined policy setting at the CTG ingress.

4.3.  Differences between CTG and a Link Bundle

4.3.1.  Virtual Routable Link vs. TE Link

   CTG is a data plane transport function over a composite link.  A
   composite link contains multiple component links that can carry
   traffic independently.  CTG is the method for transporting aggregated
   traffic over a composite link.  The composite link appears as a
   single routable virtual interface between the connected routers.  The
   component links in a composite link are not IGP links in OSPF/IS-IS.
   The network maps LSP or LDP traffic only to the composite link, i.e.,
   not to individual component links.
   The CTG ingress selects a component link for each individual LSP and
   LDP flow, and they are merged at the composite link egress.  The CTG
   ingress does not need to inform the CTG egress which component link
   each CTG connection traverses.

   A link bundle [RFC4201] is a collection of TE links.  It is a logical
   construct that represents a way to group/map the information about
   certain physical resources that interconnect routers.  The purpose of
   a link bundle is to improve routing scalability by reducing the
   amount of information that has to be handled by OSPF/IS-IS.  Each
   physical link in a link bundle is an IGP link in OSPF/IS-IS.  A link
   bundle has significance only to the router control plane.  The
   mapping of an LSP to a component link in a bundle is determined at
   LSP setup time, and this mapping does not change due to new
   configurations of LSP/LDP traffic.  A link bundle applies only to
   RSVP-TE signaled traffic, while CTG applies to RSVP/RSVP-TE/LDP
   signaled traffic.

4.3.2.  Component Link Parameter Independence

   CTG allows component links to have different costs, traffic
   engineering metrics, and resource classes.  CTG can derive the
   virtual interface cost from the component link costs based on
   operator policy.  CTG can derive the traffic engineering parameters
   for a virtual interface from its component links' traffic engineering
   parameters.

   A link bundle requires all component links in a bundle to have the
   same traffic engineering metric and the same set of resource classes.

5.  Composite Transport Group Requirements

   Composite Transport Group (CTG) is a method for transporting
   aggregated traffic over multiple parallel links.  CTG can address the
   problems existing in today's IP/MPLS networks.  The CTG requirements
   are as follows.

5.1.
 Composite Link Appearance as a Routable Virtual Interface

   The carrier needs a solution where multiple routing instances each
   see a separate "virtual interface" to a shared composite link
   composed of parallel physical/logical links between a pair of
   routers.

   CTG would communicate parameters (e.g., admin cost, available
   bandwidth, maximum bandwidth, allowable bandwidth) for the "virtual
   interface" associated with each routing instance.

   The "virtual interface" shall appear as a fully-featured routing
   adjacency in each routing instance, not just an FA [RFC3477].  In
   particular, it needs to work with at least the following IP/MPLS
   control protocols: OSPF/IS-IS, LDP, IGP-TE, and RSVP-TE.

   CTG SHALL accept a new component link or remove an existing component
   link by operator provisioning or in response to signaling at a lower
   layer (e.g., using GMPLS).

   CTG SHALL be able to derive the admin cost and TE metric of the
   "virtual interface" from the admin cost and TE metric of the
   individual component links.

   A component link in CTG SHALL be supportable as a numbered or
   unnumbered link in the IGP.

5.2.  CTG Mapping of Traffic Flows to Component Links

   The objective of CTG is to solve the traffic sharing problem at the
   virtual interface level by mapping LSP traffic to component links
   (not by hashing):

   1.  using TE information from the control planes of the routing
       instances attached to the virtual interface when available, or

   2.  using traffic measurements when it is not.

   CTG SHALL map traffic flows to CTG connections and place an entire
   connection onto a single component link.

   CTG SHALL support operator assignment of a traffic flow to a
   component link.

5.2.1.  Mapping Using Router TE Information

   CTG SHALL use the RSVP-TE bandwidth signaled by a routing instance to
   explicitly assign TE information to the CTG connection that the LSP
   is mapped to.
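   As a purely illustrative sketch of copying router-signaled parameters
   onto a CTG connection record (the field names and the rejection
   behaviour are invented, not drawn from this draft):

```python
# Hypothetical sketch: build CTG connection TE information from the
# parameters signaled for an LSP, ignoring anything else in the message.

SIGNALED_FIELDS = ("min_bw", "max_bw", "preemption_priority",
                   "holding_priority")

def te_info_for_connection(signaled: dict) -> dict:
    """Copy the signaled TE parameters onto the connection's TE record."""
    missing = [f for f in SIGNALED_FIELDS if f not in signaled]
    if missing:
        raise ValueError(f"missing signaled parameters: {missing}")
    return {f: signaled[f] for f in SIGNALED_FIELDS}

conn_te = te_info_for_connection(
    {"min_bw": 100, "max_bw": 250, "preemption_priority": 3,
     "holding_priority": 2, "other": "ignored"}
)
# conn_te keeps only the four TE fields; "other" is dropped.
```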
   CTG SHALL be able to receive, interpret, and act upon at least the
   following router-signaled parameters: minimum bandwidth, maximum
   bandwidth, preemption priority, and holding priority, and apply them
   to the CTG connections to which the LSP is mapped.

5.2.2.  Mapping When No Router TE Information is Available

   CTG SHALL map LDP-assigned labeled packets, based upon local
   configuration (e.g., label stack depth), to a CTG connection that is
   mapped to one of the component links in the CTG.

   CTG SHALL map LDP-assigned labeled packets that identify the source-
   destination LER pair to a CTG connection.

   CTG SHOULD support entropy labels [Entropy Label] to map more
   granular flows to CTG connections.

   In either mapping case, the CTG SHALL be able to measure the
   bandwidth actually used by a particular connection and derive proper
   TE information for the connection.

   CTG SHALL support parameters that define at least a minimum
   bandwidth, maximum bandwidth, preemption priority, and holding
   priority for connections without TE information.

5.3.  Bandwidth Control for Connections with and without TE Information

   The following requirements apply to a virtual interface with CTG
   capability that supports both traffic flows with TE information and
   flows without TE information.

   A "bandwidth shortage" can arise in CTG if the total bandwidth of the
   connections with provisioned TE information and those with auto-
   measured TE information exceeds the bandwidth of the composite link.

   CTG SHALL support a policy-based preemption capability such that, in
   the event of such a "bandwidth shortage", the signaled or configured
   preemption and holding parameters can be applied to give the
   following treatments to the connections:

   o  For a connection that carries RSVP-TE LSP(s), signal the router
      that the LSP has been preempted.
      CTG SHALL support soft preemption (i.e., notify the preempted LSP
      source prior to preemption) [Soft Preemption].

   o  For a connection that carries LDP LSP(s), where the CTG is aware
      of the LDP signaling involved down to the preempted label stack
      depth, signal release of the label to the router.

   o  For a connection that carries non-re-routable RSVP-TE LSP(s) or
      non-releasable LDP LSP(s), signal the router or operator that the
      LSP or LDP has been lost.

5.4.  CTG Transport Resilience

   Component links in a CTG may fail independently.  The failure of a
   component link may impact some CTG connections.  The impacted CTG
   connections SHALL be re-placed onto other active component links,
   using the same rules as the initial assignment of CTG connections to
   component links.

   The CTG component link recovery scheme SHALL perform equal to or
   better than existing local recovery methods.  A short service
   disruption may occur during the recovery period.

5.5.  CTG Operational and Performance Requirements

   CTG requires methods to dampen the frequency of connection bandwidth
   changes and/or connection-to-component-link mapping changes (e.g.,
   for re-optimization).  Operator-imposed control policy SHALL be
   allowed.

   CTG SHALL support latency-sensitive traffic.

   Latency-sensitive traffic SHALL be identifiable by any of the
   following methods:

   o  use of a pre-defined local policy setting at the CTG ingress

   o  a manually configured setting at the CTG ingress

   o  the MPLS traffic class in an RSVP-TE signaling message

   Latency-sensitive traffic SHOULD additionally be identifiable (if
   possible) by the following method:

   o  pre-set bits in the payload (e.g., DSCP bits for IP payloads or
      Ethernet user priority for Ethernet payloads)

6.  Security Considerations

   CTG is a local function on the router that supports traffic
   engineering management over multiple parallel links.
   It does not introduce a security risk for the control plane or the
   data plane.

7.  IANA Considerations

   IANA actions to provide solutions are for further study.

8.  Acknowledgements

   The authors would like to thank Adrian Farrel of Old Dog Consulting,
   Ron Bonica of Juniper, Nabil Bitar of Verizon, and Eric Gray of
   Ericsson for their reviews and great suggestions.

9.  References

9.1.  Normative References

   [ITU-T G.800]
              ITU-T Q12, "Unified Functional Architecture of Transport
              Networks", ITU-T G.800, February 2008.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", RFC 2119, March 1997.

   [RFC3477]  Kompella, K., "Signalling Unnumbered Links in Resource
              ReSerVation Protocol - Traffic Engineering (RSVP-TE)",
              RFC 3477, January 2003.

   [RFC4090]  Pan, P., "Fast Reroute Extensions to RSVP-TE for LSP
              Tunnels", RFC 4090, May 2005.

   [RFC4201]  Kompella, K., "Link Bundling in MPLS Traffic Engineering
              (TE)", RFC 4201, October 2005.

9.2.  Informative References

   [Entropy Label]
              Kompella, K. and S. Amante, "The Use of Entropy Labels in
              MPLS Forwarding", Work in Progress, November 2008.

   [LSP Hierarchy]
              Shiomoto, K. and A. Farrel, "Procedures for Dynamically
              Signaled Hierarchical Label Switched Paths", Work in
              Progress, November 2008.

   [Soft Preemption]
              Meyer, M. and J. Vasseur, "MPLS Traffic Engineering Soft
              Preemption", Work in Progress, February 2009.

Authors' Addresses

   So Ning
   Verizon
   2400 N. Glem Ave.
   Richardson, TX 75082

   Phone: +1 972-729-7905
   Email: ning.so@verizonbusiness.com

   Andrew Malis
   Verizon
   117 West St.
   Waltham, MA 02451

   Phone: +1 781-466-2362
   Email: andrew.g.malis@verizon.com

   Dave McDysan
   Verizon
   22001 Loudoun County PKWY
   Ashburn, VA 20147

   Phone: +1 707-886-1891
   Email: dave.mcdysan@verizon.com

   Lucy Yong
   Huawei USA
   1700 Alma Dr.
   Suite 500
   Plano, TX 75075

   Phone: +1 469-229-5387
   Email: lucyyong@huawei.com

   Frederic Jounay
   France Telecom
   2, avenue Pierre-Marzin
   22307 Lannion Cedex
   FRANCE

   Phone:
   Email: frederic.jounay@orange-ftgroup.com