Network Working Group                                              X. Fu
Internet-Draft                                                       ZTE
Intended status: Standards Track                               V. Manral
Expires: September 10, 2012                        Hewlett-Packard Corp.
                                                              D. McDysan
                                                                A. Malis
                                                                 Verizon
                                                            S. Giacalone
                                                         Thomson Reuters
                                                                M. Betts
                                                                 Q. Wang
                                                                     ZTE
                                                                J. Drake
                                                        Juniper Networks
                                                           March 9, 2012

          Loss and Delay Traffic Engineering Framework for MPLS
                draft-fuxh-mpls-delay-loss-te-framework-04

Abstract

   With more and more enterprises using cloud based services, the
   distances between the user and the applications are growing.  Many
   current applications are designed to work across LANs and carry
   various inherent assumptions.  For applications such as High
   Performance Computing and electronic financial markets, response
   times and packet loss are critical, while other applications require
   more throughput.

   RFC 3031 describes the architecture of MPLS based networks.  This
   draft extends the MPLS architecture to allow for latency, loss and
   jitter as properties.  It describes requirements and control plane
   implications for using latency and packet loss as traffic
   engineering performance metrics in today's networks, which
   potentially consist of multiple layers of packet transport network
   and optical transport network, in order to make an accurate end-to-
   end latency and loss prediction before a path is established.

   Note that the MPLS architecture for multicast will be taken up in a
   future version of this draft.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 10, 2012.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Architecture requirements overview
     2.1.  Communicate Latency and Loss as TE Metric
     2.2.  Requirement for Composite Link
     2.3.  Requirement for Hierarchy LSP
     2.4.  Latency Accumulation and Verification
     2.5.  Restoration, Protection and Rerouting
   3.  End-to-End Latency
   4.  End-to-End Jitter
   5.  End-to-End Loss
   6.  Protocol Considerations
   7.  Control Plane Implication
     7.1.  Implications for Routing
     7.2.  Implications for Signaling
   8.  IANA Considerations
   9.  Security Considerations
   10. Acknowledgements
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Introduction

   In High Frequency Trading for electronic financial markets,
   computers make decisions based on the electronic data received,
   without human intervention.  These trades now account for a majority
   of trading volumes and rely exclusively on ultra-low-latency direct
   market access.

   Extremely low latency measurements for MPLS LSP tunnels are defined
   in [ietf-mpls-loss-delay].  They provide a mechanism to measure and
   monitor performance metrics for packet loss, and one-way and two-
   way delay, as well as related metrics such as delay variation and
   channel throughput.

   The measurements are, however, effective only after the LSP is
   created and cannot be used by an MPLS path computation engine to
   define paths that have the lowest latency.  This draft defines the
   architecture used, so that end-to-end tunnels can be set up based on
   latency, loss or jitter characteristics.

   End-to-end service optimization based on latency and packet loss is
   a key requirement for service providers.  This type of function will
   be adopted by their "premium" service customers, who are willing to
   pay for such a "premium" service.  Latency and loss information at
   the route level will help carriers' customers make their provider
   selection decisions.

2.  Architecture requirements overview

2.1.  Communicate Latency and Loss as TE Metric

   The solution MUST provide a means to communicate latency, latency
   variation and packet loss of links and nodes as traffic engineering
   performance metrics into the IGP.

   Latency, latency variation and packet loss may be unstable; for
   example, if queueing latency were included, the IGP could become
   unstable.  The solution MUST provide a means to control the latency
   and loss IGP message advertisement rate and avoid instability when
   the latency, latency variation and packet loss values change
   frequently.

   In the case where it is known that either the changes are too
   frequent or there is a backup which is preferred, the solution shall
   put the node or the link in an unusable state for services requiring
   a particular service capability.  This unusable state is on a per-
   capability basis and not a global basis.  The condition for entering
   the state is locally configured, and all routers in a domain should
   have this criterion synchronized.

   The path computation entity MUST have the capability to compute an
   end-to-end path with latency and packet loss constraints.  For
   example, it must be able to compute a route with X amount of
   bandwidth, less than Y ms of latency and less than Z% packet loss,
   based on the latency and packet loss traffic engineering database.
   It MUST also support path computation with a combination of routing
   constraints with pre-defined priorities, e.g., SRLG diversity,
   latency, loss, jitter and cost.  If the performance of a link
   exceeds its configured maximum threshold, the path computation
   entity may decline to select such a link even though the end-to-end
   performance would still be met.

2.2.  Requirement for Composite Link

   One end-to-end LSP may traverse some Composite Links [CL-REQ].
   Even if the transport technology (e.g., OTN) component links are
   identical, the latency and packet loss characteristics of the
   component links may differ due to factors such as fiber distance
   and/or fiber characteristics.

   The solution MUST provide a means to indicate that a traffic flow
   should select a component link with minimum latency and/or packet
   loss, a maximum acceptable latency and/or packet loss value, and a
   maximum acceptable delay variation value, as specified by protocol.
   The endpoints of a Composite Link will take these parameters into
   account for component link selection or creation.  Details of how
   transient response is handled are specified in Section 4.1 of
   [CL-REQ].  The exact details for component links will be taken up
   separately and are not part of this document.

2.3.  Requirement for Hierarchy LSP

   Hierarchical LSPs may traverse server layer LSPs.  For such LSPs
   there may be latency and packet loss constraint requirements for the
   segment in the server layer.

   The solution MUST provide a means to indicate FA selection or FA-LSP
   creation with minimum latency and/or packet loss, a maximum
   acceptable latency and/or packet loss value, and a maximum
   acceptable delay variation value.  The boundary nodes of the FA-LSP
   will take these parameters into account for FA selection or FA-LSP
   creation.

2.4.  Latency Accumulation and Verification

   The solution SHOULD provide a means to accumulate (e.g., sum) the
   latency information of the links and nodes along the path that an
   LSP traverses (e.g., Inter-AS, Inter-Area or Multi-Layer), so that
   the source node can validate whether the desired maximum latency
   constraint can be satisfied for a packet traversing the LSP.
   [Y.1541] provides details of how the latency value is accumulated.

   Both one-way and round-trip latency collection along the LSP by the
   signaling protocol, and latency verification at the end of the LSP,
   should be supported.
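   The accumulation described above can be pictured with a short
   sketch.  This is purely illustrative and not part of any protocol
   specification; the per-hop figures, including the "worst case"
   dynamic allowances, are invented inputs that would in practice come
   from measurement and operator policy.

```python
# Sum per-hop latency along an LSP and verify it against a maximum
# latency constraint (Section 2.4).  Each hop contributes a static
# component (fiber/processing delay) and a policy-derived worst-case
# dynamic (queuing) allowance, all in microseconds.

def accumulate_latency(hops, max_latency_us):
    """hops: list of (static_us, dynamic_worst_case_us) tuples."""
    static_total = sum(static for static, _ in hops)
    dynamic_total = sum(dyn for _, dyn in hops)
    total = static_total + dynamic_total
    return total, total <= max_latency_us

# Three hops with invented figures: total is 2150 us, within a
# 2500 us constraint.
hops = [(500, 50), (1200, 80), (300, 20)]
total, ok = accumulate_latency(hops, max_latency_us=2500)
```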
   The accumulation of the delay is "simple" for the static component,
   i.e., it is a linear addition; the dynamic/network loading component
   is more interesting and would involve some estimate of the "worst
   case".  However, the method of deriving this worst case appears to
   be more in the scope of network operator policy than standards,
   i.e., the operator needs to decide, based on the SLAs offered, the
   required confidence level.

2.5.  Restoration, Protection and Rerouting

   Some customers may insist on having the ability to re-route if the
   latency and loss SLA is not being met.  If a "provisioned" end-to-
   end LSP's latency and/or loss could not meet the latency and loss
   agreement between the operator and its user, the solution SHOULD
   support pre-defined or dynamic re-routing (e.g., make-before-break)
   to handle this case based on local policy.  If revertive behaviour
   is supported, the original LSP must not be released and is monitored
   by the control plane.  When the end-to-end performance is repaired,
   the service is restored to the original LSP.

   The solution SHOULD support moving an end-to-end LSP away from any
   link whose performance violates the configured threshold.

   End-to-end measurements of the LSP also need to be performed in
   addition to the link-by-link measurements.  A threshold violation of
   the end-to-end criteria, as measured by the head end node, should
   cause rerouting of the LSP.

   The anomalous LSP can be switched to a protection path or rerouted
   to a new path when the end-to-end performance requirement can no
   longer be met.

   If a "provisioned" end-to-end LSP's latency and/or loss performance
   improves (i.e., beyond a configurable minimum value), the solution
   SHOULD support re-routing to optimize the end-to-end latency and/or
   loss cost.

   The latency performance of a pre-defined protection or dynamically
   re-routed LSP MUST meet the latency SLA parameter.
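   The revertive behaviour described above can be sketched as a small
   state machine.  The class and threshold below are invented for
   illustration; the draft only requires that the original LSP be kept
   and monitored while traffic uses the alternate path.

```python
# Revertive re-routing (Section 2.5): when the measured end-to-end
# latency of the original LSP violates the SLA, traffic moves to an
# alternate path, but the original LSP is not released; when its
# performance is repaired, traffic reverts.

class RevertiveLsp:
    def __init__(self, sla_latency_us):
        self.sla = sla_latency_us
        self.on_original = True  # which path currently carries traffic

    def update_measurement(self, original_latency_us):
        if self.on_original and original_latency_us > self.sla:
            self.on_original = False   # SLA violated: switch away
        elif not self.on_original and original_latency_us <= self.sla:
            self.on_original = True    # performance repaired: revert
        return "original" if self.on_original else "alternate"

lsp = RevertiveLsp(sla_latency_us=2000)
```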
   Due to flapping conditions the latency and loss of an LSP may
   change; this may cause the LSP to be frequently switched to a new
   path.  In order to avoid churn, the solution SHOULD specify the
   switchover of the LSP according to a maximum acceptable change rate.

3.  End-to-End Latency

   Procedures to measure latency and loss have been provided in ITU-T
   [Y.1731], [G.709] and [ietf-mpls-loss-delay].  The control plane can
   be independent of the mechanism used, and different mechanisms based
   on different standards can be used for measurement.

   Latency on a path has two sources: node latency, caused by the
   processing time in each node, and link latency, resulting from the
   packet/frame transit time between two neighbouring nodes or across a
   FA-LSP/Composite Link [CL-REQ].

   Latency, or one-way delay, is the time it takes for a packet within
   a stream to go from measurement point 1 to measurement point 2, as
   defined in [Y.1540].

   The architecture uses the assumption that the sum of the latencies
   of the individual components approximately adds up to the average
   latency of an LSP.  Though using the sum may not be perfect, it
   gives a good approximation that can be used for Traffic Engineering
   (TE) purposes.

   The total measured latency of an LSP consists of the sum of the
   latencies of the LSP hops, as well as the average latency of
   switching on a device, which may vary based on queuing and
   buffering.

   Hop latency can be measured by taking the latency measurement from
   the egress of one MPLS LSR to the ingress of the next-hop LSR.  This
   value may be constant for the most part, unless there is protection
   switching or other similar changes at a lower layer.

   The switching latency on a device can be measured internally, and
   multiple mechanisms and data structures to do so have been defined.
   [Add references to papers by Verghese, Kompella, Duffield].
   We also looked at other measurement granularities before deciding on
   an interface based measurement.  An approximation of flow based
   measurement is a per-DSCP measurement from the ingress of one port
   to the egress of every other port in the device.

   Another approximation that can be used is a per-interface, per-DSCP
   measurement, which can be an aggregate of the average measurements
   per interface.  The average can itself be calculated in several
   ways, so as to provide a closer approximation.

   For the purpose of this draft it is assumed that the node latency is
   a small fraction of the total latency in the networks where this
   solution is deployed.  The node latency is hence ignored for the
   benefit of simplicity in this solution.

   The average link delay over a configurable interval should be
   reported by the data plane in microseconds.

4.  End-to-End Jitter

   Jitter, or Packet Delay Variation, of a packet within a stream of
   packets is defined for a selected pair of packets in the stream
   going from measurement point 1 to measurement point 2.

   This architecture uses the assumptions of [Y.1540] to approximately
   calculate the accumulated jitter from the individual components.
   Though this may not be perfect, it gives a good approximation that
   can be used for Traffic Engineering (TE) purposes.

   The buffering and queuing within a device will lead to jitter.  Just
   like latency measurements, jitter measurements can be approximated
   either per DSCP per port pair (ingress and egress) or per DSCP per
   egress port; however, such measurements have been left out for the
   sake of simplicity of the solution.

   For the purpose of this draft it is assumed that the node latency is
   a small fraction of the total latency in the networks where this
   solution is deployed.  The node latency is hence ignored for the
   benefit of simplicity.
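   As a side note on how the individual components might combine: if
   per-segment delay variations were independent, their variances would
   add, so an end-to-end jitter estimate can be approximated by a root-
   sum-of-squares of per-segment values.  This is a common engineering
   approximation, not a rule taken from [Y.1540] or mandated by this
   draft, and is shown only to illustrate why simple linear addition of
   per-segment jitter tends to overestimate the end-to-end value.

```python
import math

# Root-sum-of-squares combination of per-segment jitter values (in
# microseconds), assuming independent per-segment delay variation.
def accumulate_jitter_us(segment_jitters_us):
    return math.sqrt(sum(j * j for j in segment_jitters_us))

# Two segments of 30 us and 40 us combine to 50 us, not 70 us.
estimate_us = accumulate_jitter_us([30.0, 40.0])
```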
   The jitter is measured in microseconds.

5.  End-to-End Loss

   Loss, or packet drop probability, of a packet within a stream of
   packets is defined as the number of packets dropped within a given
   interval.

   This architecture uses the assumptions of [Y.1540] to approximately
   calculate the accumulated loss from the individual components.
   Though using the accumulated metrics may not be perfect, it gives a
   good approximation that can be used for Traffic Engineering (TE)
   purposes.

   The buffering and queuing mechanisms within a device will decide
   which packet is to be dropped.  Just like latency and jitter
   measurements, the loss can best be approximated either per DSCP per
   port pair (ingress and egress) or per DSCP per egress port.
   However, such mechanisms are not used in this solution in order to
   keep the solution simple.

   The loss is measured in terms of the number of packets lost per
   million packets.

6.  Protocol Considerations

   The metrics above can be sent in IGP protocol packets as defined in
   [RFC3630].  They can then be used by the source node or the path
   computation engine to decide on paths with the desired path
   properties.

   As link-state IGP information is flooded throughout an area,
   frequent changes can cause a lot of control traffic.  To prevent
   such flooding, data should only be flooded when it crosses a certain
   configured maximum.

   A separate measurement should be done for an LSP when it is up.
   Also, an LSP's path should only be recalculated when the end-to-end
   metric changes in such a way that it exceeds the desired value.

7.  Control Plane Implication

7.1.  Implications for Routing

   The latency and packet loss performance metrics MUST be advertised
   to the path computation entity by the IGP (OSPF-TE, OSPFv3-TE or
   IS-IS-TE) to perform route computation and network planning based on
   the latency and packet loss SLA target.
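   Section 2.1 requires a means to control the advertisement rate when
   metric values change frequently.  A minimal sketch of such damping
   follows; the function name, threshold and interval values are
   invented for illustration, and a real implementation would follow
   whatever threshold and rate-of-change limits the operator
   configures.

```python
# Flood a new latency value into the IGP only when it differs from the
# last advertised value by at least a configured threshold AND the
# configured minimum re-advertisement interval has elapsed.

def should_advertise(last_advertised_us, measured_us,
                     now_s, last_advert_time_s,
                     threshold_us=100, min_interval_s=30):
    changed_enough = abs(measured_us - last_advertised_us) >= threshold_us
    rate_ok = (now_s - last_advert_time_s) >= min_interval_s
    return changed_enough and rate_ok
```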
   Latency, latency variation and packet loss values MUST be reported
   as average values calculated from data plane measurements.

   Latency and packet loss characteristics of links and nodes may
   change dynamically.  In order to control IGP messaging and avoid
   instability when the latency, latency variation and packet loss
   values change, a threshold and a limit on the rate of change MUST be
   configured in the IGP control plane.

   Latency and packet loss value changes need to be updated and flooded
   in the IGP control messages only when there is a significant change
   in the value.  When the head-end node determines that an IGP update
   affects an LSP for which it is the ingress, it recalculates the LSP.

   A target value MUST be configured in the control plane for each
   link.  If the link performance improves beyond the configurable
   target value, it must be re-advertised.  The receiving node
   determines whether a "provisioned" end-to-end LSP's latency and/or
   loss performance has improved.

   It is sometimes important for paths that desire low latency to avoid
   nodes that contribute significantly to latency.  The control plane
   should report two components of the delay, "static" and "dynamic".
   The dynamic component is caused by traffic loading and queuing; the
   "dynamic" portion SHOULD be reported as an approximate value.  The
   static component should be the fixed latency through the node
   without any queuing.  The link latency attribute should also take
   into account the latency of the node, i.e., the latency between the
   incoming port and the outgoing port of a network element.  Half of
   the fixed node latency can be added to each link.

   When a Composite Link [CL-REQ] is advertised into the IGP, the
   following considerations apply.
   o  One option is that the latency and packet loss of a composite
      link may be the range (e.g., at least the minimum and maximum) of
      the latency values of all component links.  It may also be the
      maximum or average latency value of all component links.  In both
      cases, only partial information is transmitted in the IGP, so the
      path computation entity has insufficient information to determine
      whether a particular path can support its latency and packet loss
      requirements.  This leads to signaling crankback.

   o  Another option is that the latency and packet loss of each
      component link within one Composite Link could be advertised
      while having only one IGP adjacency.

   One end-to-end LSP (e.g., in an IP/MPLS or MPLS-TP network) may
   traverse a FA-LSP of a server layer (e.g., OTN rings).  The boundary
   nodes of the FA-LSP SHOULD be aware of the latency and packet loss
   information of this FA-LSP.

   If the FA-LSP is able to form a routing adjacency and/or act as a TE
   link in the client network, the total latency and packet loss value
   of the FA-LSP can serve as input to a transformation that results in
   a FA traffic engineering metric, which is advertised into the client
   layer routing instances.  Note that this metric will include the
   latency and packet loss of the links and nodes that the trail
   traverses.

   If the total latency and packet loss information of the FA-LSP
   changes (e.g., due to a maintenance action or failure in the OTN
   rings), the boundary node of the FA-LSP will receive the TE link
   information advertisement including the changed latency and packet
   loss values; if the change exceeds the threshold and the limit on
   the rate of change, the boundary node will recompute the total
   latency and packet loss value of the FA-LSP.  If the total latency
   and packet loss value of the FA-LSP changes, the client layer MUST
   also be notified about the latest value of the FA.
   The client layer can then decide whether it will accept the
   increased latency and packet loss or request a new path that meets
   the latency and packet loss requirements.

7.2.  Implications for Signaling

   In order to assign an LSP to one of several component links with
   different latency and loss characteristics, the RSVP-TE message
   needs to carry an indication of the requested minimum latency and/or
   packet loss, the maximum acceptable latency and/or packet loss
   value, and the maximum acceptable delay variation value for
   component link selection or creation.  The composite link will take
   these parameters into account when assigning the traffic of the LSP
   to a component link.

   One end-to-end LSP (e.g., in an IP/MPLS or MPLS-TP network) may
   traverse a FA-LSP of a server layer (e.g., OTN rings).  There will
   be latency and packet loss constraint requirements for the segment
   routed in the server layer.  Therefore the RSVP-TE message needs to
   carry an indication of the requested minimum latency and/or packet
   loss, the maximum acceptable latency and/or packet loss value, and
   the maximum acceptable delay variation value.  The boundary nodes of
   the FA-LSP will take these parameters into account for FA selection
   or FA-LSP creation.

   RSVP-TE needs to be extended to accumulate (e.g., sum) the latency
   information of links and nodes along one LSP across multiple domains
   (e.g., Inter-AS, Inter-Area or Multi-Layer) so that a latency
   verification can be made at the end points.  One-way and round-trip
   latency collection along the LSP by the signaling protocol can be
   supported, so the end points of the LSP can verify whether the total
   amount of latency meets the latency agreement between the operator
   and its user.  When RSVP-TE signaling is used, the source can
   determine whether the latency requirement is met much more rapidly
   than by performing an actual end-to-end latency measurement.
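   The hop-by-hop accumulation could be pictured as follows.  The
   message layout here is entirely hypothetical; the actual encoding
   would be defined by a separate RSVP-TE protocol extension, not by
   this framework document.

```python
# As signaling proceeds hop by hop, each node adds the latency of the
# link it forwards over into an accumulated value carried with the
# message; the LSP end point then verifies the total against the
# requested maximum before accepting the path (Section 7.2).

def signal_path(link_latencies_us, max_latency_us):
    msg = {"accumulated_latency_us": 0}
    for hop_latency_us in link_latencies_us:
        msg["accumulated_latency_us"] += hop_latency_us
    return msg["accumulated_latency_us"] <= max_latency_us

# 400 + 900 + 700 = 2000 us, within a 2500 us request.
ok = signal_path([400, 900, 700], max_latency_us=2500)
```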
   Restoration, protection and equipment variations can impact
   "provisioned" latency and packet loss (e.g., latency and packet loss
   increase).  For example, a restoration/provisioning action in the
   transport network may increase the latency seen by the packet
   network, which is observable by customers and possibly violates
   SLAs.  A change in an end-to-end LSP's latency and packet loss
   performance MUST be made known to the source and/or sink node, so
   that it can inform the higher layer network of a latency and packet
   loss change.  The latency or packet loss change of links and nodes
   will affect an end-to-end LSP's total amount of latency or packet
   loss.  Applications can fail beyond an application-specific
   threshold.  Some remedy mechanism could be used.

   Pre-defined protection or dynamic re-routing could be triggered to
   handle this case.  In the case of pre-defined protection, large
   amounts of redundant capacity may have a significant negative impact
   on the overall network cost.  Service providers may have many layers
   of pre-defined restoration for this transfer, but they have to
   duplicate restoration resources at significant cost.  The solution
   should provide mechanisms to avoid duplicated restoration and reduce
   the network cost.  Dynamic re-routing also faces the risk of
   resource limitation.  The choice of mechanism MUST therefore be
   based on SLA or policy.  In the case where the latency SLA cannot be
   met after a re-route is attempted, the control plane should report
   an alarm to the management plane.  It could also retry restoration a
   configurable number of times.

8.  IANA Considerations

   No new IANA considerations are raised by this document.

9.  Security Considerations

   This document raises no new security issues.

10.  Acknowledgements

   TBD.

11.  References

11.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
              and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
              Tunnels", RFC 3209, December 2001.

   [RFC3473]  Berger, L., "Generalized Multi-Protocol Label Switching
              (GMPLS) Signaling Resource ReserVation Protocol-Traffic
              Engineering (RSVP-TE) Extensions", RFC 3473,
              January 2003.

   [RFC3477]  Kompella, K. and Y. Rekhter, "Signalling Unnumbered Links
              in Resource ReSerVation Protocol - Traffic Engineering
              (RSVP-TE)", RFC 3477, January 2003.

   [RFC3630]  Katz, D., Kompella, K., and D. Yeung, "Traffic
              Engineering (TE) Extensions to OSPF Version 2", RFC 3630,
              September 2003.

   [RFC4203]  Kompella, K. and Y. Rekhter, "OSPF Extensions in Support
              of Generalized Multi-Protocol Label Switching (GMPLS)",
              RFC 4203, October 2005.

11.2.  Informative References

   [CL-REQ]   Villamizar, C., "Requirements for MPLS Over a Composite
              Link", draft-ietf-rtgwg-cl-requirement-04.

   [EXPRESS-PATH]
              Giacalone, S., "OSPF Traffic Engineering (TE) Express
              Path", draft-giacalone-ospf-te-express-path-01.

   [G.709]    ITU-T Recommendation G.709, "Interfaces for the Optical
              Transport Network (OTN)", December 2009.

   [Y.1540]   ITU-T Recommendation Y.1540, "Internet protocol data
              communication service - IP packet transfer and
              availability performance parameters".

   [Y.1541]   ITU-T Recommendation Y.1541, "Network performance
              objectives for IP-based services".

   [Y.1731]   ITU-T Recommendation Y.1731, "OAM functions and
              mechanisms for Ethernet based networks", February 2008.

   [ietf-mpls-loss-delay]
              Frost, D., "Packet Loss and Delay Measurement for MPLS
              Networks", draft-ietf-mpls-loss-delay-03.

Authors' Addresses

   Xihua Fu
   ZTE

   Email: fu.xihua@zte.com.cn

   Vishwas Manral
   Hewlett-Packard Corp.
   191111 Pruneridge Ave.
   Cupertino, CA 95014
   US

   Phone: 408-447-1497
   Email: vishwas.manral@hp.com

   Dave McDysan
   Verizon

   Email: dave.mcdysan@verizon.com

   Andrew Malis
   Verizon

   Email: andrew.g.malis@verizon.com

   Spencer Giacalone
   Thomson Reuters
   195 Broadway
   New York, NY 10007
   US

   Phone: 646-822-3000
   Email: spencer.giacalone@thomsonreuters.com

   Malcolm Betts
   ZTE

   Email: malcolm.betts@zte.com.cn

   Qilei Wang
   ZTE

   Email: wang.qilei@zte.com.cn

   John Drake
   Juniper Networks

   Email: jdrake@juniper.net