idnits 2.17.00 (12 Aug 2021) /tmp/idnits36350/draft-ietf-rtgwg-cl-requirement-10.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (March 22, 2013) is 3347 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Outdated reference: A later version (-04) exists of draft-ietf-rtgwg-cl-framework-01 == Outdated reference: A later version (-06) exists of draft-ietf-rtgwg-cl-use-cases-01 Summary: 0 errors (**), 0 flaws (~~), 3 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 RTGWG C. Villamizar, Ed. 3 Internet-Draft OCCNC, LLC 4 Intended status: Informational D. McDysan, Ed. 5 Expires: September 23, 2013 Verizon 6 S. Ning 7 Tata Communications 8 A. Malis 9 Verizon 10 L. Yong 11 Huawei USA 12 March 22, 2013 14 Requirements for Composite Links in MPLS Networks 15 draft-ietf-rtgwg-cl-requirement-10 17 Abstract 19 There is often a need to provide large aggregates of bandwidth that 20 are best provided using parallel links between routers or MPLS LSR. 
21 In core networks there is often no alternative since the aggregate 22 capacities of core networks today far exceed the capacity of a single 23 physical link or single packet processing element. 25 The presence of parallel links, with each link potentially comprised 26 of multiple layers, has resulted in additional requirements. Certain 27 services may benefit from being restricted to a subset of the 28 component links or a specific component link, where component link 29 characteristics, such as latency, differ. Certain services require 30 that an LSP be treated as atomic and avoid reordering. Other 31 services will continue to require only that reordering not occur 32 within a microflow as is current practice. 34 Status of this Memo 36 This Internet-Draft is submitted in full conformance with the 37 provisions of BCP 78 and BCP 79. 39 Internet-Drafts are working documents of the Internet Engineering 40 Task Force (IETF). Note that other groups may also distribute 41 working documents as Internet-Drafts. The list of current 42 Internet-Drafts is at http://datatracker.ietf.org/drafts/current/. 44 Internet-Drafts are draft documents valid for a maximum of six months 45 and may be updated, replaced, or obsoleted by other documents at any 46 time. It is inappropriate to use Internet-Drafts as reference 47 material or to cite them other than as "work in progress." 48 This Internet-Draft will expire on September 23, 2013. 50 Copyright Notice 52 Copyright (c) 2013 IETF Trust and the persons identified as the 53 document authors. All rights reserved. 55 This document is subject to BCP 78 and the IETF Trust's Legal 56 Provisions Relating to IETF Documents 57 (http://trustee.ietf.org/license-info) in effect on the date of 58 publication of this document. Please review these documents 59 carefully, as they describe your rights and restrictions with respect 60 to this document.
Code Components extracted from this document must 61 include Simplified BSD License text as described in Section 4.e of 62 the Trust Legal Provisions and are provided without warranty as 63 described in the Simplified BSD License. 65 Table of Contents 67 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 68 1.1. Requirements Language . . . . . . . . . . . . . . . . . . 4 69 2. Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . 4 70 3. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 4 71 4. Network Operator Functional Requirements . . . . . . . . . . . 5 72 4.1. Availability, Stability and Transient Response . . . . . . 5 73 4.2. Component Links Provided by Lower Layer Networks . . . . . 6 74 4.3. Parallel Component Links with Different Characteristics . 8 75 5. Derived Requirements . . . . . . . . . . . . . . . . . . . . . 10 76 6. Management Requirements . . . . . . . . . . . . . . . . . . . 11 77 7. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 12 78 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 12 79 9. Security Considerations . . . . . . . . . . . . . . . . . . . 12 80 10. References . . . . . . . . . . . . . . . . . . . . . . . . . . 13 81 10.1. Normative References . . . . . . . . . . . . . . . . . . . 13 82 10.2. Informative References . . . . . . . . . . . . . . . . . . 13 83 Appendix A. ITU-T G.800 Composite Link Definitions and 84 Terminology . . . . . . . . . . . . . . . . . . . . . 14 85 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 15 87 1. Introduction 89 The purpose of this document is to describe why network operators 90 require certain functions in order to solve certain business problems 91 (Section 2). The intent is to first describe why things need to be 92 done in terms of functional requirements that are as independent as 93 possible of protocol specifications (Section 4). 
For certain 94 functional requirements this document describes a set of derived 95 protocol requirements (Section 5). Appendix A provides a summary of 96 G.800 terminology used to define a composite link. 98 1.1. Requirements Language 100 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 101 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 102 document are to be interpreted as described in RFC 2119 [RFC2119]. 104 2. Assumptions 106 The services supported include pseudowire based services (RFC 3985 107 [RFC3985]), including VPN services, Internet traffic encapsulated by 108 at least one MPLS label (RFC 3032 [RFC3032]), and dynamically 109 signaled MPLS (RFC 3209 [RFC3209] or RFC 5036 [RFC5036]) or MPLS-TP 110 LSPs (RFC 5921 [RFC5921]). The MPLS LSPs supporting these services 111 may be point-to-point, point-to-multipoint, or 112 multipoint-to-multipoint. 114 The locations in a network where these requirements apply are a Label 115 Edge Router (LER) or a Label Switch Router (LSR) as defined in RFC 116 3031 [RFC3031]. 118 The IP DSCP cannot be used for flow identification since L3VPN 119 requires Diffserv transparency (see Section 5.5.2 of RFC 4031 [RFC4031]), and in 120 general network operators do not rely on the DSCP of Internet 121 packets. 123 3. Definitions 125 ITU-T G.800 Based Composite and Component Link Definitions: 126 Section 6.9.2 of ITU-T-G.800 [ITU-T.G.800] defines composite and 127 component links as summarized in Appendix A. The following 128 definitions for composite and component links are derived from 129 and intended to be consistent with the cited ITU-T G.800 130 terminology. 132 Composite Link: A composite link is a logical link composed of a 133 set of parallel point-to-point component links, where all 134 links in the set share the same endpoints. A composite link 135 may itself be a component of another composite link, but only 136 a strict hierarchy of links is allowed.
138 Component Link: A point-to-point physical link (including one or 139 more link layers) or a logical link that preserves ordering in 140 the steady state. A component link may have transient out of 141 order events, but such events must not exceed the network's 142 specific NPO. Examples of a physical link are: any set of 143 link layers over a WDM wavelength or any supportable 144 combination of Ethernet PHY, PPP, SONET or OTN over a 145 physical link. Examples of a logical link are: MPLS LSP, 146 Ethernet VLAN, MPLS-TP LSP. A set of link layers supported 147 over pseudowire is a logical link that appears to the client 148 to be a physical link. 150 Flow: A sequence of packets that must be transferred in order on one 151 component link. 153 Flow identification: The label stack and other information that 154 uniquely identifies a flow. Other information in flow 155 identification may include an IP header, PW control word, 156 Ethernet MAC address, etc. Note that an LSP may contain one or 157 more Flows or an LSP may be equivalent to a Flow. Flow 158 identification is used to locally select a component link, or a 159 path through the network toward the destination. 161 Network Performance Objective (NPO): Numerical values for 162 performance measures, principally availability, latency, and 163 delay variation. See [I-D.ietf-rtgwg-cl-use-cases] for more 164 details. 166 4. Network Operator Functional Requirements 168 The Functional Requirements in this section are grouped in 169 subsections starting with the highest priority. 171 4.1. Availability, Stability and Transient Response 173 Limiting the period of unavailability in response to failures or 174 transient events is extremely important, as is maintaining 175 stability. The transient period between some service disrupting 176 event and the convergence of the routing and/or signaling protocols 177 MUST occur within a time frame specified by NPO values.
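As an aside on the Flow and Flow identification definitions in Section 3, the idea can be made concrete with a minimal sketch (Python; the key fields and the choice of CRC-32 are illustrative assumptions, not part of any requirement): a key built from the label stack plus other identifying information is hashed to locally select one component link, so every packet of the flow crosses the same link and ordering within the flow is preserved.

```python
import zlib

def flow_key(label_stack, other_fields=()):
    # Flow identification: the label stack plus other information
    # (e.g., fields of an IP header or a PW control word).
    parts = [str(label) for label in label_stack]
    parts += [str(field) for field in other_fields]
    return "|".join(parts)

def select_component_link(key, component_links):
    # A stable hash of the flow identification picks one component
    # link, so every packet of the flow crosses the same link and
    # ordering within the flow is preserved.
    index = zlib.crc32(key.encode("utf-8")) % len(component_links)
    return component_links[index]
```

Because the hash is deterministic, repeated lookups for the same flow return the same component link; changing the set of component links can remap flows, which is one reason the requirements below constrain how often a flow may be moved.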
178 [I-D.ietf-rtgwg-cl-use-cases] provides references and a summary of 179 service types requiring a range of restoration times. 181 FR#1 The solution SHALL provide a means to summarize some routing 182 advertisements regarding the characteristics of a composite 183 link such that the routing protocol converges within the 184 timeframe needed to meet the network performance objective. A 185 composite link MAY be announced in conjunction with detailed 186 parameters about its component links, such as bandwidth and 187 latency. The composite link SHALL behave as a single IGP 188 adjacency. 190 FR#2 The solution SHALL ensure that all possible restoration 191 operations happen within the timeframe needed to meet the NPO. 192 The solution may need to specify a means for aggregating 193 signaling to meet this requirement. 195 FR#3 The solution SHALL provide a mechanism to select a path for a 196 flow across a network that contains a number of paths comprised 197 of pairs of nodes connected by composite links in such a way as 198 to automatically distribute the load over the network nodes 199 connected by composite links while meeting all of the other 200 mandatory requirements stated above. The solution SHOULD work 201 in a manner similar to that of current networks without any 202 composite link protocol enhancements when the characteristics 203 of the individual component links are advertised. 205 FR#4 If extensions to existing protocols are specified and/or new 206 protocols are defined, then the solution SHOULD provide a means 207 for a network operator to migrate an existing deployment in a 208 minimally disruptive manner. 210 FR#5 Any automatic LSP routing and/or load balancing solutions MUST 211 NOT oscillate such that the performance observed by users 212 violates an NPO. Since oscillation may cause 213 reordering, there MUST be means to control the frequency of 214 changing the component link over which a flow is placed.
216 FR#6 Management and diagnostic protocols MUST be able to operate 217 over composite links. 219 Existing scaling techniques used in MPLS networks apply to MPLS 220 networks that support Composite Links. Scalability and stability 221 are covered in more detail in [I-D.ietf-rtgwg-cl-framework]. 223 4.2. Component Links Provided by Lower Layer Networks 225 Case 3 as defined in [ITU-T.G.800] involves a component link 226 supporting an MPLS layer network over another lower layer network 227 (e.g., circuit switched or another MPLS network (e.g., MPLS-TP)). 228 The lower layer network may change the latency (and/or other 229 performance parameters) seen by the MPLS layer network. Network 230 Operators have NPOs of which some components are based on performance 231 parameters. Currently, there is no protocol for the lower layer 232 network to inform the higher layer network of a change in a 233 performance parameter. Communication of the latency performance 234 parameter is a very important requirement. Communication of other 235 performance parameters (e.g., delay variation) is desirable. 237 FR#7 In order to support network NPOs and provide acceptable user 238 experience, the solution SHALL specify a protocol means to 239 allow a lower layer server network to communicate latency to 240 the higher layer client network. 242 FR#8 The precision of latency reporting SHOULD be configurable. A 243 reasonable default SHOULD be provided. Implementations SHOULD 244 support a precision of at least 10% of the one-way latency for 245 latencies of 1 ms or more. 247 FR#9 The solution SHALL provide a means to limit the latency on a 248 per LSP basis between nodes within a network to meet an NPO 249 target when the path between these nodes contains one or more 250 pairs of nodes connected via a composite link.
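The precision requirement in FR#8 can be made concrete with a small check (a Python sketch; the function name and argument conventions are invented for illustration): a reported one-way latency of 1 ms or more should fall within the configured fraction, 10% by default, of the measured value.

```python
def reported_latency_ok(measured_ms, reported_ms, precision=0.10):
    # FR#8 sketch: a reported one-way latency is acceptable if it is
    # within the configured fraction (default 10%) of the measured
    # value; the 10% figure is required for latencies of 1 ms or more.
    if measured_ms < 1.0:
        return True  # below 1 ms the 10% precision is not required
    return abs(reported_ms - measured_ms) <= precision * measured_ms
```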
252 The NPOs differ across the services, and some services have 253 different NPOs for different QoS classes, for example, one QoS 254 class may have a much larger latency bound than another. 255 Overload can occur, which would violate an NPO parameter (e.g., 256 loss), and some remedy to handle this case for a composite link 257 is required. 259 FR#10 If the total demand offered by traffic flows exceeds the 260 capacity of the composite link, the solution SHOULD define a 261 means to cause the LSPs for some traffic flows to move to some 262 other point in the network that is not congested. These 263 "preempted LSPs" may not be restored if there is no 264 uncongested path in the network. 266 The intent is to measure the predominant latency in uncongested 267 service provider networks, where geographic delay dominates and is on 268 the order of milliseconds or more. The argument for including 269 queuing delay is that it reflects the delay experienced by 270 applications. The argument against including queuing delay is that, 271 if used in routing decisions, it can result in routing instability. 272 This tradeoff is discussed in detail in 273 [I-D.ietf-rtgwg-cl-framework]. 275 4.3. Parallel Component Links with Different Characteristics 277 Corresponding to Case 1 of [ITU-T.G.800], as one means to provide 278 high availability, network operators deploy a topology in the MPLS 279 network using lower layer networks that have a certain degree of 280 diversity at the lower layer(s). Many techniques have been developed 281 to balance the distribution of flows across component links that 282 connect the same pair of nodes. When the path for a flow can be 283 chosen from a set of candidate nodes connected via composite links, 284 other techniques have been developed. Refer to the Appendices in 285 [I-D.ietf-rtgwg-cl-use-cases] for a description of existing 286 techniques and a set of references.
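One technique of the kind referred to above can be sketched as a greedy placement (Python; the per-link 'capacity'/'load'/'latency_ms' schema is an invented illustration, not a specified data model): among the component links whose latency satisfies a flow's bound and whose spare capacity fits the flow, pick the least utilized one.

```python
def place_flow(flow_rate, links, max_latency_ms=None):
    # Greedy placement in the spirit of the requirements that follow:
    # consider only component links whose latency satisfies the flow's
    # bound (if any) and whose spare capacity fits the flow, then pick
    # the one with the lowest utilization.  Each link is a dict with
    # 'capacity', 'load', and 'latency_ms' keys (illustrative schema).
    candidates = [
        link for link in links
        if (max_latency_ms is None or link["latency_ms"] <= max_latency_ms)
        and link["load"] + flow_rate <= link["capacity"]
    ]
    if not candidates:
        return None  # no component link can carry the flow
    best = min(candidates, key=lambda link: link["load"] / link["capacity"])
    best["load"] += flow_rate  # the flow stays on this link
    return best
```

Keeping the whole flow on one component link preserves ordering; only the placement decision, not individual packets, is balanced.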
288 FR#11 The solution SHALL measure traffic on a labeled traffic flow 289 and dynamically select the component link on which to place 290 this flow in order to balance the load so that no component 291 link in the composite link between a pair of nodes is 292 overloaded. 294 FR#12 When a traffic flow is moved from one component link to 295 another in the same composite link between a set of nodes (or 296 sites), it MUST be done in a minimally disruptive manner. 298 FR#13 Load balancing MAY be used during sustained low traffic 299 periods to reduce the number of active component links for the 300 purpose of power reduction. 302 FR#14 The solution SHALL provide a means to identify flows whose 303 rearrangement frequency needs to be bounded by a configured 304 value. 306 FR#15 The solution SHALL provide a means that communicates whether 307 the flows within an LSP can be split across multiple component 308 links. The solution SHOULD provide a means to indicate the 309 flow identification field(s) which can be used along the flow 310 path to perform this function. 312 FR#16 The solution SHALL provide a means to indicate that a traffic 313 flow shall select a component link with the minimum latency 314 value. 316 FR#17 The solution SHALL provide a means to indicate that a traffic 317 flow shall select a component link with a maximum acceptable 318 latency value as specified by protocol. 320 FR#18 The solution SHALL provide a means to indicate that a traffic 321 flow shall select a component link with a maximum acceptable 322 delay variation value as specified by protocol. 324 FR#19 The solution SHALL provide a means local to a node that 325 automatically distributes flows across the component links in 326 the composite link such that NPOs are met.
328 FR#20 The solution SHALL provide a means to distribute flows from a 329 single LSP across multiple component links to handle at least 330 the case where the traffic carried in an LSP exceeds that of 331 any component link in the composite link. As defined in 332 section 3, a flow is a sequence of packets that must be 333 transferred in order on one component link. 335 FR#21 The solution SHOULD support the use case where a composite 336 link itself is a component link for a higher order composite 337 link. For example, a composite link comprised of MPLS-TP 338 bi-directional tunnels viewed as logical links could then be used 339 as a component link in yet another composite link that 340 connects MPLS routers. 342 FR#22 The solution MUST support an optional means for LSP signaling 343 to bind an LSP to a particular component link within a 344 composite link. If this option is not exercised, then an LSP 345 that is bound to a composite link may be bound to any 346 component link matching all other signaled requirements, and 347 different directions of a bidirectional LSP can be bound to 348 different component links. 350 FR#23 The solution MUST support a means to indicate that both 351 directions of a co-routed bidirectional LSP MUST be bound to the 352 same component link. 354 A minimally disruptive change implies that as little disruption as is 355 practical occurs. Such a change can be achieved with zero packet 356 loss. A delay discontinuity may occur, which is considered to be a 357 minimally disruptive event for most services if this type of event is 358 sufficiently rare. A delay discontinuity is an example of a 359 minimally disruptive behavior corresponding to current techniques. 361 A delay discontinuity is an isolated event which may greatly exceed 362 the normal delay variation (jitter). A delay discontinuity has the 363 following effect. When a flow is moved from a current link to a 364 target link with lower latency, reordering can occur.
When a flow is 365 moved from a current link to a target link with a higher latency, a 366 time gap can occur. Some flows (e.g., timing distribution, PW 367 circuit emulation) are quite sensitive to these effects. A delay 368 discontinuity can also cause a jitter buffer underrun or overrun 369 affecting user experience in real time voice services (causing an 370 audible click). These sensitivities may be specified in an NPO. 372 As with any load balancing change, a change initiated for the purpose 373 of power reduction may be minimally disruptive. Typically the 374 disruption is limited to a change in delay characteristics and the 375 potential for a very brief period with traffic reordering. The 376 network operator, when configuring a network for power reduction, 377 should weigh the benefit of power reduction against the disadvantage 378 of a minimal disruption. 380 5. Derived Requirements 382 This section takes the next step and derives high-level requirements 383 on protocol specification from the functional requirements. 385 DR#1 The solution SHOULD attempt to extend existing protocols 386 wherever possible, developing a new protocol only if this adds 387 a significant set of capabilities. 389 DR#2 A solution SHOULD extend LDP capabilities to meet functional 390 requirements (without using TE methods as decided in 391 [RFC3468]). 393 DR#3 Coexistence of LDP and RSVP-TE signaled LSPs MUST be supported 394 on a composite link. Other functional requirements should be 395 supported as independently of signaling protocol as possible. 397 DR#4 When the nodes connected via a composite link are in the same 398 MPLS network topology, the solution MAY define extensions to 399 the IGP. 401 DR#5 When the nodes connected via a composite link are in 402 different MPLS network topologies, the solution SHALL NOT rely 403 on extensions to the IGP.
405 DR#6 The solution SHOULD support composite link IGP advertisement 406 that results in convergence time better than that of 407 advertising the individual component links. The solution SHALL 408 be designed so that it represents the range of capabilities of 409 the individual component links such that functional 410 requirements are met, and also minimizes the frequency of 411 advertisement updates which may cause IGP convergence to occur. 413 Examples of advertisement update triggering events to be 414 considered include: LSP establishment/release, changes in 415 component link characteristics (e.g., latency, up/down state), 416 and/or bandwidth utilization. 418 DR#7 When a worst case failure scenario occurs, the number of 419 RSVP-TE LSPs to be resignaled will cause a period of 420 unavailability as perceived by users. The resignaling time of 421 the solution MUST meet the NPO objective for the duration of 422 unavailability. The resignaling time of the solution MUST NOT 423 increase significantly as compared with current methods. 425 6. Management Requirements 427 MR#1 Management Plane MUST support polling of the status and 428 configuration of a composite link and its individual component 429 links and support notification of status change. 431 MR#2 Management Plane MUST be able to activate or de-activate any 432 component link in a composite link in order to facilitate 433 operation and maintenance tasks. The routers at each end of a 434 composite link MUST redistribute traffic to move traffic from 435 a de-activated link to other component links based on the 436 traffic flow TE criteria. 438 MR#3 Management Plane MUST be able to configure an LSP over a 439 composite link and be able to select a component link for the 440 LSP. 442 MR#4 Management Plane MUST be able to trace which component link an 443 LSP is assigned to and monitor individual component link and 444 composite link performance.
446 MR#5 Management Plane MUST be able to verify connectivity over each 447 individual component link within a composite link. 449 MR#6 Component link fault notification MUST be sent to the 450 management plane. 452 MR#7 Composite link fault notification MUST be sent to the 453 management plane and distributed via link state messages in the IGP. 455 MR#8 Management Plane SHOULD provide the means for an operator to 456 initiate an optimization process. 458 MR#9 An operator initiated optimization MUST be performed in a 459 minimally disruptive manner as described in Section 4.3. 461 MR#10 Any statement which requires the solution to support some new 462 functionality, through use of the words "new functionality", 463 SHOULD be interpreted as follows. The implementation either 464 MUST or SHOULD support the new functionality, depending on the 465 use of either MUST or SHOULD in the requirements statement. 466 The implementation SHOULD in most or all cases allow any new 467 functionality to be individually enabled or disabled through 468 configuration. 470 7. Acknowledgements 472 Frederic Jounay of France Telecom and Yuji Kamite of NTT 473 Communications Corporation co-authored a version of this document. 475 A rewrite of this document occurred after the IETF77 meeting. 476 Dimitri Papadimitriou, Lou Berger, Tony Li, the former WG chairs John 477 Scudder and Alex Zinin, the current WG chair Alia Atlas, and others 478 provided valuable guidance prior to and at the IETF77 RTGWG meeting. 480 Tony Li and John Drake have made numerous valuable comments on the 481 RTGWG mailing list that are reflected in versions following the 482 IETF77 meeting. 484 Iftekhar Hussain and Kireeti Kompella made comments on the RTGWG 485 mailing list after IETF82 that identified a new requirement. 486 Iftekhar Hussain made numerous valuable comments on the RTGWG mailing 487 list that resulted in improvements to document clarity.
489 In the interest of full disclosure of affiliation and in the interest 490 of acknowledging sponsorship, past affiliations of authors are noted. 491 Much of the work done by Ning So occurred while Ning was at Verizon. 492 Much of the work done by Curtis Villamizar occurred while at 493 Infinera. Infinera continues to sponsor this work on a consulting 494 basis. 496 8. IANA Considerations 498 This memo includes no request to IANA. 500 9. Security Considerations 502 This document specifies a set of requirements. The requirements 503 themselves do not pose a security threat. If these requirements are 504 met using MPLS signaling as commonly practiced today with 505 authenticated but unencrypted OSPF-TE, ISIS-TE, and RSVP-TE or LDP, 506 then the requirement to provide additional information in this 507 communication presents additional information that could conceivably 508 be gathered in a man-in-the-middle confidentiality breach. Such an 509 attack would require a capability to monitor this signaling either 510 through a provider breach or access to provider physical transmission 511 infrastructure. A provider breach already poses a threat of numerous 512 types of attacks which are of far more serious consequence. Encryption 513 of the signaling can prevent or render more difficult any 514 confidentiality breach that otherwise might occur by means of access 515 to provider physical transmission infrastructure. 517 10. References 519 10.1. Normative References 521 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 522 Requirement Levels", BCP 14, RFC 2119, March 1997. 524 10.2. Informative References 526 [I-D.ietf-rtgwg-cl-framework] 527 Ning, S., McDysan, D., Osborne, E., Yong, L., and C. 528 Villamizar, "Composite Link Framework in Multi Protocol 529 Label Switching (MPLS)", draft-ietf-rtgwg-cl-framework-01 530 (work in progress), August 2012. 532 [I-D.ietf-rtgwg-cl-use-cases] 533 Ning, S., Malis, A., McDysan, D., Yong, L., and C.
534 Villamizar, "Composite Link Use Cases and Design 535 Considerations", draft-ietf-rtgwg-cl-use-cases-01 (work in 536 progress), August 2012. 538 [ITU-T.G.800] 539 ITU-T, "Unified functional architecture of transport 540 networks", 2007. 543 [RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol 544 Label Switching Architecture", RFC 3031, January 2001. 546 [RFC3032] Rosen, E., Tappan, D., Fedorkow, G., Rekhter, Y., 547 Farinacci, D., Li, T., and A. Conta, "MPLS Label Stack 548 Encoding", RFC 3032, January 2001. 550 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 551 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP 552 Tunnels", RFC 3209, December 2001. 554 [RFC3468] Andersson, L. and G. Swallow, "The Multiprotocol Label 555 Switching (MPLS) Working Group decision on MPLS signaling 556 protocols", RFC 3468, February 2003. 558 [RFC3985] Bryant, S. and P. Pate, "Pseudo Wire Emulation Edge-to-Edge 559 (PWE3) Architecture", RFC 3985, March 2005. 561 [RFC4031] Carugi, M. and D. McDysan, "Service Requirements for Layer 562 3 Provider Provisioned Virtual Private Networks (PPVPNs)", 563 RFC 4031, April 2005. 565 [RFC5036] Andersson, L., Minei, I., and B. Thomas, "LDP 566 Specification", RFC 5036, October 2007. 568 [RFC5921] Bocci, M., Bryant, S., Frost, D., Levrau, L., and L. 569 Berger, "A Framework for MPLS in Transport Networks", 570 RFC 5921, July 2010. 572 Appendix A. ITU-T G.800 Composite Link Definitions and Terminology 574 Composite Link: 575 Section 6.9.2 of ITU-T-G.800 [ITU-T.G.800] defines composite link 576 in terms of three cases, of which the following two are relevant 577 (the one describing inverse (TDM) multiplexing does not apply). 578 Note that these case definitions are taken verbatim from section 579 6.9, "Layer Relationships". 581 Case 1: "Multiple parallel links between the same subnetworks 582 can be bundled together into a single composite link. 
Each 583 component of the composite link is independent in the sense 584 that each component link is supported by a separate server 585 layer trail. The composite link conveys communication 586 information using different server layer trails thus the 587 sequence of symbols crossing this link may not be preserved. 588 This is illustrated in Figure 14." 590 Case 3: "A link can also be constructed by a concatenation of 591 component links and configured channel forwarding 592 relationships. The forwarding relationships must have a 1:1 593 correspondence to the link connections that will be provided 594 by the client link. In this case, it is not possible to 595 fully infer the status of the link by observing the server 596 layer trails visible at the ends of the link. This is 597 illustrated in Figure 16." 599 Subnetwork: A set of one or more nodes (i.e., LER or LSR) and links. 600 As a special case it can represent a site comprised of multiple 601 nodes. 603 Forwarding Relationship: Configured forwarding between ports on a 604 subnetwork. It may be connectionless (e.g., IP, not considered 605 in this draft), or connection oriented (e.g., MPLS signaled or 606 configured). 608 Component Link: A topological relationship between subnetworks 609 (i.e., a connection between nodes), which may be a wavelength, 610 circuit, virtual circuit or an MPLS LSP. 612 Authors' Addresses 614 Curtis Villamizar (editor) 615 OCCNC, LLC 617 Email: curtis@occnc.com 619 Dave McDysan (editor) 620 Verizon 621 22001 Loudoun County PKWY 622 Ashburn, VA 20147 624 Email: dave.mcdysan@verizon.com 626 So Ning 627 Tata Communications 629 Email: ning.so@tatacommunications.com 631 Andrew Malis 632 Verizon 633 60 Sylvan Road 634 Waltham, MA 02451 636 Phone: +1 781-466-2362 637 Email: andrew.g.malis@verizon.com 638 Lucy Yong 639 Huawei USA 640 5340 Legacy Dr. 641 Plano, TX 75025 643 Phone: +1 469-277-5837 644 Email: lucy.yong@huawei.com