1 Network Working Group X. Fu 2 Internet-Draft ZTE 3 Intended status: Standards Track V. Manral 4 Expires: May 17, 2012 Hewlett-Packard Corp. 5 D. McDysan 6 A. Malis 7 Verizon 8 S. Giacalone 9 Thomson Reuters 10 M. Betts 11 Q. Wang 12 ZTE 13 J.
Drake 14 Juniper Networks 15 November 14, 2011 17 Traffic Engineering architecture for services aware MPLS 18 draft-fuxh-mpls-delay-loss-te-framework-03 20 Abstract 22 With more and more enterprises using cloud based services, the 23 distances between users and applications are growing. Many current 24 applications are designed to work across LANs and 25 have various inherent assumptions. For applications such as 26 High Performance Computing and Electronic Financial markets, 27 response times and packet loss are critical, while other 28 applications require more throughput. 30 RFC 3031 describes the architecture of MPLS based networks. This 31 draft extends the MPLS architecture to allow for latency, loss and 32 jitter as properties. It describes requirements and control plane 33 implications for latency and packet loss as traffic engineering 34 performance metrics in today's networks, which may consist of 35 multiple layers of packet transport and optical 36 transport networks, in order to make an accurate end-to-end latency and 37 loss prediction before a path is established. 39 Note: the MPLS architecture for multicast will be taken up in a future 40 version of this draft. 42 Requirements Language 44 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 45 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 46 document are to be interpreted as described in [RFC2119]. 48 Status of this Memo 50 This Internet-Draft is submitted in full conformance with the 51 provisions of BCP 78 and BCP 79. 53 Internet-Drafts are working documents of the Internet Engineering 54 Task Force (IETF). Note that other groups may also distribute 55 working documents as Internet-Drafts. The list of current Internet- 56 Drafts is at http://datatracker.ietf.org/drafts/current/.
58 Internet-Drafts are draft documents valid for a maximum of six months 59 and may be updated, replaced, or obsoleted by other documents at any 60 time. It is inappropriate to use Internet-Drafts as reference 61 material or to cite them other than as "work in progress." 63 This Internet-Draft will expire on May 17, 2012. 65 Copyright Notice 67 Copyright (c) 2011 IETF Trust and the persons identified as the 68 document authors. All rights reserved. 70 This document is subject to BCP 78 and the IETF Trust's Legal 71 Provisions Relating to IETF Documents 72 (http://trustee.ietf.org/license-info) in effect on the date of 73 publication of this document. Please review these documents 74 carefully, as they describe your rights and restrictions with respect 75 to this document. Code Components extracted from this document must 76 include Simplified BSD License text as described in Section 4.e of 77 the Trust Legal Provisions and are provided without warranty as 78 described in the Simplified BSD License. 80 Table of Contents 82 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 83 2. Architecture requirements overview . . . . . . . . . . . . . . 4 84 2.1. Communicate Latency and Loss as TE Metric . . . . . . . . 4 85 2.2. Requirement for Composite Link . . . . . . . . . . . . . . 5 86 2.3. Requirement for Hierarchy LSP . . . . . . . . . . . . . . 5 87 2.4. Latency Accumulation and Verification . . . . . . . . . . 5 88 2.5. Restoration, Protection and Rerouting . . . . . . . . . . 6 89 3. End-to-End Latency . . . . . . . . . . . . . . . . . . . . . . 6 90 4. End-to-End Jitter . . . . . . . . . . . . . . . . . . . . . . 8 91 5. End-to-End Loss . . . . . . . . . . . . . . . . . . . . . . . 8 92 6. Protocol Considerations . . . . . . . . . . . . . . . . . . . 9 93 7. Control Plane Implication . . . . . . . . . . . . . . . . . . 9 94 7.1. Implications for Routing . . . . . . . . . . . . . . . . . 9 95 7.2. Implications for Signaling . . . . . . . . . . . . . 
. . . 11 96 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 12 97 9. Security Considerations . . . . . . . . . . . . . . . . . . . 12 98 10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 12 99 11. References . . . . . . . . . . . . . . . . . . . . . . . . . . 12 100 11.1. Normative References . . . . . . . . . . . . . . . . . . . 12 101 11.2. Informative References . . . . . . . . . . . . . . . . . . 13 102 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 13 104 1. Introduction 106 In High Frequency Trading for Electronic Financial markets, computers 107 make decisions based on the electronic data received, without human 108 intervention. These trades now account for a majority of trading 109 volumes and rely exclusively on ultra-low-latency direct market 110 access. 112 Low-latency measurements for MPLS LSP tunnels are defined 113 in [ietf-mpls-loss-delay]. They provide a mechanism to measure 114 and monitor performance metrics for packet loss, and one-way and two- 115 way delay, as well as related metrics like delay variation and 116 channel throughput. 118 The measurements are, however, effective only after the LSP is created 119 and cannot be used by an MPLS path computation engine to define paths 120 that have the lowest latency. This draft defines the architecture 121 used, so that end-to-end tunnels can be set up based on latency, loss 122 or jitter characteristics. 124 End-to-end service optimization based on latency and packet loss is a 125 key requirement for service providers. This type of function will be 126 used by their "premium" service customers, who are willing to pay 127 for it. Latency and loss at the route level will also 128 help carriers' customers make their provider selection decisions. 130 2. Architecture requirements overview 132 2.1.
Communicate Latency and Loss as TE Metric 134 The solution MUST provide a means to communicate latency, latency 135 variation and packet loss of links and nodes as traffic engineering 136 performance metrics into the IGP. 138 Latency, latency variation and packet loss may be unstable; for 139 example, if queueing latency were included, the IGP could become 140 unstable. The solution MUST provide a means to control the advertisement of 141 latency and loss in IGP messages and avoid instability when the latency, 142 latency variation and packet loss values change. 144 In the case where it is known that either the changes are too 145 frequent or there is a backup which is preferred, the node 146 or the link can be put in an unusable state for services requiring a particular 147 service capability. This unusable state is on a per-capability basis and 148 not a global basis. 150 The path computation entity MUST have the capability to compute an end- 151 to-end path with latency and packet loss constraints. For example, it 152 must be able to compute a route with X amount of bandwidth, with 153 less than Y ms of latency and less than Z% packet loss, based on 154 the latency and packet loss traffic engineering database. It MUST 155 also support path computation with combinations of routing constraints 156 with pre-defined priorities, e.g., SRLG diversity, 157 latency, loss, jitter and cost. If the performance of a link exceeds 158 its configured maximum threshold, the path computation entity may not 159 select that link even though end-to-end performance would still be 160 met. 162 2.2. Requirement for Composite Link 164 One end-to-end LSP may traverse some Composite Links [CL-REQ]. Even 165 if the transport technology (e.g., OTN) of the component links is 166 identical, the latency and packet loss characteristics of the 167 component links may differ.
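The constrained computation required in Section 2.1 (X amount of bandwidth, less than Y ms of latency, less than Z% packet loss) can be illustrated with a small sketch. This is a hypothetical, simplified model, not part of the specification: the graph encoding, function name, and the additive treatment of loss are illustrative assumptions.

```python
import heapq

def constrained_path(graph, src, dst, bw, max_latency_ms, max_loss_pct):
    """Find a least-cost path meeting bandwidth, latency and loss bounds.

    graph: {node: [(neighbor, cost, bw, latency_ms, loss_pct), ...]}
    Links lacking the requested bandwidth are pruned up front; partial
    paths whose accumulated latency or loss exceed the bound are dropped.
    """
    # Heap entries: (cost, latency, loss, node, path-so-far)
    heap = [(0, 0.0, 0.0, src, [src])]
    while heap:
        cost, lat, loss, node, path = heapq.heappop(heap)
        if node == dst:
            return path, cost, lat, loss
        for nbr, c, b, l, p in graph.get(node, []):
            if b < bw or nbr in path:      # insufficient bandwidth, or a loop
                continue
            nlat, nloss = lat + l, loss + p  # additive approximation
            if nlat > max_latency_ms or nloss > max_loss_pct:
                continue
            heapq.heappush(heap, (cost + c, nlat, nloss, nbr, path + [nbr]))
    return None  # no path satisfies the constraints
```

A real path computation entity would use a non-dominated label-setting algorithm rather than this naive enumeration, but the pruning against the latency and loss bounds is the same idea.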
169 The solution MUST provide a means to indicate that a traffic flow 170 should select a component link with minimum latency and/or packet 171 loss, a maximum acceptable latency and/or packet loss value and a maximum 172 acceptable delay variation value as specified by protocol. The 173 endpoints of the Composite Link will take these parameters into account 174 for component link selection or creation. The exact details for 175 component links will be taken up separately and are not part of this 176 document. 178 2.3. Requirement for Hierarchy LSP 180 One end-to-end LSP may traverse a server layer. There will be some 181 latency and packet loss constraint requirements for the segment of the route 182 in the server layer. 184 The solution MUST provide a means to indicate FA selection or FA-LSP 185 creation with minimum latency and/or packet loss, a maximum acceptable 186 latency and/or packet loss value and a maximum acceptable delay 187 variation value. The boundary nodes of the FA-LSP will take these 188 parameters into account for FA selection or FA-LSP creation. 190 2.4. Latency Accumulation and Verification 192 The solution SHOULD provide a means to accumulate (e.g., sum) the 193 latency information of links and nodes along one LSP across multiple 194 domains (e.g., Inter-AS, Inter-Area or Multi-Layer) so that a latency 195 validation decision can be made at the source node. One-way and 196 round-trip latency collection along the LSP by the signaling protocol and 197 latency verification at the end of the LSP should be supported. 199 The accumulation of the delay is "simple" for the static component, 200 i.e., it is a linear addition; the dynamic/network loading component is 201 more interesting and would involve some estimate of the "worst case". 202 However, the method of deriving this worst case appears to be more in the 203 scope of network operator policy than standards, i.e., the operator 204 needs to decide, based on the SLAs offered, the required confidence 205 level. 207 2.5.
Restoration, Protection and Rerouting 209 Some customers may insist on having the ability to re-route if the 210 latency and loss SLA is not being met. If a "provisioned" end-to-end 211 LSP's latency and/or loss cannot meet the latency and loss agreement 212 between the operator and its user, the solution SHOULD support pre- 213 defined or dynamic re-routing (e.g., make-before-break) to handle 214 this case based on local policy. If revertive behaviour is 215 supported, the original LSP must not be released and is monitored by the 216 control plane. When the end-to-end performance is repaired, the 217 service is restored to the original LSP. 219 The solution should support moving an end-to-end path away from any 220 link whose performance exceeds the configured maximum threshold. The 221 anomalous path can be switched to a protection path or rerouted to a new 222 path when the end-to-end performance requirements can no longer be met. 224 If a "provisioned" end-to-end LSP's latency and/or loss performance is 225 improved (i.e., beyond a configurable minimum value) because some 226 segment's performance improved, the solution SHOULD support re- 227 routing to optimize the end-to-end latency and/or loss cost. 229 The latency performance of a pre-defined protection or dynamically re- 230 routed LSP MUST meet the latency SLA parameter. The difference in 231 latency value between the primary and protection/restoration path SHOULD 232 be zero. 234 As a result of changes of latency and loss in the LSP, the current LSP 235 may be frequently switched to a new LSP with an appropriate latency 236 and packet loss value. In order to avoid this, the solution SHOULD 237 gate the switchover of the LSP according to a maximum acceptable 238 change in latency and packet loss values. 240 3. End-to-End Latency 242 Procedures to measure latency and loss have been provided in ITU-T 243 [Y.1731], [G.709] and [ietf-mpls-loss-delay].
The control plane can 244 be independent of the mechanism used, and different mechanisms can be 245 used for measurement based on different standards. 247 Latency on a path has two sources: node latency, which is caused by 248 processing time in each node, and link latency, 249 which results from packet/frame transit time between two neighbouring 250 nodes or across a FA-LSP/Composite Link [CL-REQ]. 252 Latency, or one-way delay, is the time it takes for a packet within a 253 stream to go from measurement point 1 to measurement point 2. 255 The architecture assumes that the sum of the latencies of the 256 individual components approximates the average latency of 257 an LSP. Though using the sum may not be perfect, it gives a 258 good approximation that can be used for Traffic Engineering (TE) 259 purposes. 261 The total latency of an LSP consists of the sum of the latencies of the 262 LSP hops, as well as the average latency of switching on a device, 263 which may vary based on queuing and buffering. 265 Hop latency can be measured by taking the latency measurement 266 between the egress of one MPLS LSR and the ingress of the next-hop LSR. 267 This value may be constant for the most part, unless there is protection 268 switching or other similar changes at a lower layer. 270 The switching latency on a device can be measured internally, and 271 multiple mechanisms and data structures to do so have been 272 defined. (References to the measurement papers by Verghese, Kompella and 273 Duffield are to be added.) Though these mechanisms define how to do flow- 274 based measurements, the amount of information gathered in such a case may 275 become too cumbersome for the path computation element to use effectively. 277 An approximation of flow-based measurement is per-DSCP 278 measurement from the ingress of one port to the egress of every other 279 port in the device.
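The per-DSCP, port-pair approximation described above can be sketched as a simple aggregation of raw latency samples into averages. The function name and the sample encoding are hypothetical, shown only to make the approximation concrete:

```python
from collections import defaultdict

def dscp_port_pair_averages(samples):
    """Aggregate raw latency samples into per-(DSCP, in-port, out-port)
    averages, a coarse approximation of per-flow node latency.

    samples: iterable of (dscp, in_port, out_port, latency_us) tuples.
    Returns {(dscp, in_port, out_port): mean latency in microseconds}.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for dscp, pin, pout, lat in samples:
        key = (dscp, pin, pout)
        sums[key] += lat
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}
```

The even coarser per-egress-port variant mentioned next simply drops the ingress port from the key, trading accuracy for a smaller amount of state to advertise.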
281 Another approximation that can be used is per-interface, per-DSCP 282 measurement, which can be an aggregate of the average measurements 283 per interface. The average can itself be calculated in several ways, so as 284 to provide a closer approximation. 286 For the purpose of this draft it is assumed that the node latency is 287 a small fraction of the total latency in the networks where this 288 solution is deployed. The node latency is hence ignored for the 289 benefit of simplicity. 291 The average link delay over a configurable interval should be 292 reported by the data plane in microseconds. 294 4. End-to-End Jitter 296 Jitter, or Packet Delay Variation, of a packet within a stream of 297 packets is defined for a selected pair of packets in the stream going 298 from measurement point 1 to measurement point 2. 300 The architecture assumes that the sum of the jitter of the 301 individual components approximates the average jitter of 302 an LSP. Though using the sum may not be perfect, it gives a 303 good approximation that can be used for Traffic Engineering (TE) 304 purposes. 306 There is typically very little jitter on a per-link basis. 308 Buffering and queuing within a device are the main sources of jitter. 309 Just like latency measurements, jitter measurements can be 310 approximated as either per DSCP per port pair (ingress and egress) 311 or per DSCP per egress port. 313 For the purpose of this draft it is assumed that the node jitter is 314 a small fraction of the total jitter in the networks where this 315 solution is deployed. The node jitter is hence ignored for the 316 benefit of simplicity. 318 Jitter is measured in terms of tens of nanoseconds. 320 5. End-to-End Loss 322 Loss, or packet drop probability, of a packet within a stream of 323 packets is defined as the number of packets dropped within a given 324 interval.
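The additive approximation that this architecture applies to latency, jitter and loss can be sketched as follows. The segment encoding and function name are illustrative assumptions; the sketch also computes the exact combined drop probability to show why the additive loss figure is a safe (slightly conservative) bound:

```python
def end_to_end_estimate(segments):
    """Estimate end-to-end latency, jitter and loss for an LSP by
    summing per-segment values, as the architecture assumes.

    segments: list of dicts with 'latency_us', 'jitter_ns' and
    'loss_ppm' (packets lost per million) keys.
    """
    latency = sum(s['latency_us'] for s in segments)
    jitter = sum(s['jitter_ns'] for s in segments)
    loss_ppm = sum(s['loss_ppm'] for s in segments)
    # Exact combined drop probability 1 - prod(1 - p_i), for comparison;
    # the additive figure above slightly overstates it.
    survive = 1.0
    for s in segments:
        survive *= 1.0 - s['loss_ppm'] / 1e6
    exact_ppm = (1.0 - survive) * 1e6
    return latency, jitter, loss_ppm, exact_ppm
```

For the small per-segment loss probabilities typical of carrier links, the additive and exact loss figures differ only in the cross terms, which is why the sum is an acceptable TE approximation.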
326 The architecture assumes that the sum of the loss of the 327 individual components approximates the average loss of an 328 LSP. Though using the sum may not be perfect, it gives a 329 good approximation that can be used for Traffic Engineering (TE) 330 purposes. 332 There is typically very little loss on a per-link basis, except in the case of 333 physical link issues. 335 The buffering and queuing mechanisms within a device will decide 336 which packet is to be dropped. Just like latency and jitter 337 measurements, the loss can best be approximated as either per DSCP 338 per port pair (ingress and egress) or per DSCP per egress port. 340 The loss is measured in terms of the number of packets dropped per million 341 packets. 343 6. Protocol Considerations 345 The metrics above can be carried in IGP protocol packets 346 [RFC3630]. They can then be used by the path computation engine to decide 347 paths with the desired path properties. 349 As link-state IGP information is flooded throughout an area, frequent 350 changes can cause a lot of control traffic. To prevent such 351 flooding, data should only be flooded when it crosses a 352 configured maximum. 354 A separate measurement should be done for an LSP when it is up. Also, an 355 LSP's path should only be recalculated when the end-to-end metrics 356 degrade beyond the desired bounds. 358 7. Control Plane Implication 360 7.1. Implications for Routing 362 The latency and packet loss performance metrics MUST be advertised 363 to the path computation entity by an IGP (e.g., OSPF-TE or IS-IS-TE) to 364 perform route computation and network planning based on latency and 365 packet loss SLA targets. 367 Latency, latency variation and packet loss values MUST be reported as 368 average values calculated by the data plane. 370 Latency and packet loss characteristics of these links and nodes may 371 change dynamically.
In order to control IGP messaging and avoid 372 instability when the latency, latency variation and packet loss 373 values change, a threshold and a limit on the rate of change MUST be 374 configured in the control plane. 376 If any latency or packet loss value changes by more than the 377 threshold and the limit on the rate of change, then the latency and loss 378 change of the link MUST be advertised to the IGP again. The receiving node 379 determines whether the link affects any of the LSPs for which it is the 380 ingress. If so, it must determine whether those LSPs still 381 meet end-to-end performance objectives. 383 A minimum value MUST be configured in the control plane. If the link 384 performance improves beyond this configurable minimum value, it must be 385 re-advertised. The receiving node determines whether a "provisioned" 386 end-to-end LSP's latency and/or loss performance is improved because 387 some segment's performance improved. 389 It is sometimes important for paths that require low latency to 390 avoid nodes that contribute significantly to latency. The control 391 plane should report two components of the delay, "static" and 392 "dynamic". The dynamic component is caused by traffic loading 393 and queuing and SHOULD be reported as an 394 approximate value; the static component is the fixed latency through the node 395 without any queuing. The link latency attribute should also take into 396 account the latency of the node, i.e., the latency between the incoming 397 port and the outgoing port of a network element. Half of the fixed 398 node latency can be added to each link. 400 When a Composite Link [CL-REQ] is advertised into the IGP, the 401 following considerations apply. 403 o One option is that the latency and packet loss of the composite link 404 may be the range (i.e., the minimum and maximum) of the latency 405 values of all component links. It may also be the maximum or 406 average latency value of all component links.
In both cases, only 407 partial information is transmitted in the IGP, so the path 408 computation entity has insufficient information to determine 409 whether a particular path can support its latency and packet loss 410 requirements. This leads to signaling crankback. 412 o Another option is that the latency and packet loss of each component 413 link within one Composite Link could be advertised while maintaining only 414 one IGP adjacency. 416 One end-to-end LSP (e.g., in an IP/MPLS or MPLS-TP network) may traverse 417 a FA-LSP of a server layer (e.g., OTN rings). The boundary nodes of 418 the FA-LSP SHOULD be aware of the latency and packet loss information 419 of this FA-LSP. 421 If the FA-LSP is able to form a routing adjacency and/or act as a TE link 422 in the client network, the total latency and packet loss value of the 423 FA-LSP can serve as an input to a transformation that results in a FA 424 traffic engineering metric advertised into the client layer 425 routing instances. Note that this metric will include the latency 426 and packet loss of the links and nodes that the trail traverses. 428 If the total latency and packet loss information of the FA-LSP changes 429 (e.g., due to a maintenance action or failure in OTN rings), the 430 boundary node of the FA-LSP will receive the TE link information 431 advertisement including the changed latency and packet loss values; 432 if the change exceeds the threshold and the limit on the rate of 433 change, it will recompute the total latency and packet loss value of 434 the FA-LSP. If the total latency and packet loss value of the FA- 435 LSP changes, the client layer MUST also be notified about the latest 436 value of the FA. The client layer can then decide if it will accept the 437 increased latency and packet loss or request a new path that meets 438 the latency and packet loss requirements. 440 7.2.
Implications for Signaling 442 In order to assign the LSP to one of the component links with different 443 latency and loss characteristics, the RSVP-TE message needs to carry an 444 indication of the requested minimum latency and/or packet loss, the maximum 445 acceptable latency and/or packet loss value and the maximum acceptable 446 delay variation value for component link selection or creation. 447 The composite link will take these parameters into account when 448 assigning the traffic of the LSP to a component link. 450 One end-to-end LSP (e.g., in an IP/MPLS or MPLS-TP network) may traverse 451 a FA-LSP of a server layer (e.g., OTN rings). There will be some 452 latency and packet loss constraint requirements for the segment of the route 453 in the server layer. So the RSVP-TE message needs to carry an indication of 454 the requested minimum latency and/or packet loss, the maximum acceptable 455 latency and/or packet loss value and the maximum acceptable delay 456 variation value. The boundary nodes of the FA-LSP will take these 457 parameters into account for FA selection or FA-LSP creation. 459 RSVP-TE needs to be extended to accumulate (e.g., sum) the latency 460 information of links and nodes along one LSP across multiple domains 461 (e.g., Inter-AS, Inter-Area or Multi-Layer) so that a latency 462 verification can be made at the end points. One-way and round-trip 463 latency collection along the LSP by the signaling protocol can be 464 supported, so the end points of this LSP can verify whether the 465 total amount of latency meets the latency agreement between the 466 operator and the user. When RSVP-TE signaling is used, the source 467 can determine whether the latency requirement is met much more rapidly 468 than by performing an actual end-to-end latency measurement. 470 Restoration, protection and equipment variations can impact 471 "provisioned" latency and packet loss (e.g., latency and packet loss 472 may increase).
For example, a restoration or provisioning action in the transport 473 network may increase the latency seen by the packet network, observable by 474 customers and possibly violating SLAs. Any change in an end-to-end LSP's 475 latency and packet loss performance MUST be made known to the source and/or 476 sink node, so that it can inform the higher layer network of the latency 477 and packet loss change. A latency or packet loss change of links 478 and nodes will affect an end-to-end LSP's total amount of latency or 479 packet loss. Applications can fail beyond an application-specific 480 threshold, so some remedial mechanism is needed. 482 Pre-defined protection or dynamic re-routing could be triggered to 483 handle this case. In the case of pre-defined protection, large 484 amounts of redundant capacity may have a significant negative impact 485 on overall network cost. Service providers may have many layers 486 of pre-defined restoration for this purpose, but they then have to 487 duplicate restoration resources at significant cost. The solution should 488 provide mechanisms to avoid duplicate restoration and 489 reduce network cost. Dynamic re-routing also faces the 490 risk of resource limitations. So the choice of mechanism MUST be 491 based on SLA or policy. In the case where the latency SLA cannot be 492 met after a re-route is attempted, the control plane should report an 493 alarm to the management plane. It could also retry restoration a 494 configurable number of times. 496 8. IANA Considerations 498 No new IANA considerations are raised by this document. 500 9. Security Considerations 502 This document raises no new security issues. 504 10. Acknowledgements 506 TBD. 508 11. References 510 11.1. Normative References 512 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 513 Requirement Levels", BCP 14, RFC 2119, March 1997. 515 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 516 and G.
Swallow, "RSVP-TE: Extensions to RSVP for LSP 517 Tunnels", RFC 3209, December 2001. 519 [RFC3473] Berger, L., "Generalized Multi-Protocol Label Switching 520 (GMPLS) Signaling Resource ReserVation Protocol-Traffic 521 Engineering (RSVP-TE) Extensions", RFC 3473, January 2003. 523 [RFC3477] Kompella, K. and Y. Rekhter, "Signalling Unnumbered Links 524 in Resource ReSerVation Protocol - Traffic Engineering 525 (RSVP-TE)", RFC 3477, January 2003. 527 [RFC3630] Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering 528 (TE) Extensions to OSPF Version 2", RFC 3630, 529 September 2003. 531 [RFC4203] Kompella, K. and Y. Rekhter, "OSPF Extensions in Support 532 of Generalized Multi-Protocol Label Switching (GMPLS)", 533 RFC 4203, October 2005. 535 11.2. Informative References 537 [CL-REQ] C. Villamizar, "Requirements for MPLS Over a Composite 538 Link", draft-ietf-rtgwg-cl-requirement-04. 540 [EXPRESS-PATH] 541 S. Giacalone, "OSPF Traffic Engineering (TE) Express 542 Path", draft-giacalone-ospf-te-express-path-01. 544 [G.709] ITU-T Recommendation G.709, "Interfaces for the Optical 545 Transport Network (OTN)", December 2009. 547 [Y.1731] ITU-T Recommendation Y.1731, "OAM functions and mechanisms 548 for Ethernet based networks", February 2008. 550 [ietf-mpls-loss-delay] 551 D. Frost, "Packet Loss and Delay Measurement for MPLS 552 Networks", draft-ietf-mpls-loss-delay-03. 554 Authors' Addresses 556 Xihua Fu 557 ZTE 559 Email: fu.xihua@zte.com.cn 561 Vishwas Manral 562 Hewlett-Packard Corp. 563 191111 Pruneridge Ave.
564 Cupertino, CA 95014 565 US 567 Phone: 408-447-1497 568 Email: vishwas.manral@hp.com 569 URI: 571 Dave McDysan 572 Verizon 574 Email: dave.mcdysan@verizon.com 576 Andrew Malis 577 Verizon 579 Email: andrew.g.malis@verizon.com 581 Spencer Giacalone 582 Thomson Reuters 583 195 Broadway 584 New York, NY 10007 585 US 587 Phone: 646-822-3000 588 Email: spencer.giacalone@thomsonreuters.com 589 URI: 591 Malcolm Betts 592 ZTE 594 Email: malcolm.betts@zte.com.cn 596 Qilei Wang 597 ZTE 599 Email: wang.qilei@zte.com.cn 601 John Drake 602 Juniper Networks 604 Email: jdrake@juniper.net