Network Working Group                                              X. Fu
Internet-Draft                                                  M. Betts
Intended status: Standards Track                                 Q. Wang
Expires: January 27, 2012                                            ZTE
                                                              D. McDysan
                                                                A. Malis
                                                                 Verizon
                                                            S. Giacalone
                                                         Thomson Reuters
                                                                J. Drake
                                                        Juniper Networks
                                                           July 26, 2011

     Framework for Latency and Loss Traffic Engineering Application
               draft-fuxh-mpls-delay-loss-te-framework-00

Abstract

   Latency and packet loss are requirements that must be met according
   to the Service Level Agreement (SLA) / Network Performance Objective
   (NPO) between customers and service providers.  Latency and packet
   loss can be associated with different service levels, and a user may
   select a private line provider based on its ability to meet a
   latency and loss SLA.

   A key driver for latency and loss requirements is stock/commodity
   trading applications that use database mirroring, where a few
   milliseconds of delay or a small amount of packet loss can impact a
   transaction.  Financial and trading companies are very focused on
   end-to-end private line latency optimizations that improve latency
   by 2-3 ms.  Latency/loss and the associated SLA are among the key
   parameters that these "high value" customers use to select a private
   line provider.  Other key applications such as video gaming,
   conferencing, and storage area networks also have stringent latency,
   loss, and bandwidth requirements.

   This document describes the requirements and control plane
   implications of using latency and packet loss as traffic engineering
   performance metrics in today's networks, which may consist of
   multiple layers of packet transport and optical transport networks,
   in order to meet the latency/loss SLA between a service provider and
   its customers.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 27, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Conventions Used in This Document
   2.  Latency and Loss Report
   3.  Requirements Identification
   4.  Control Plane Implication
   5.  Security Considerations
   6.  IANA Considerations
   7.  References
     7.1.  Normative References
     7.2.  Informative References
   Authors' Addresses

1.  Introduction

   The current operation and maintenance model for latency and packet
   loss measurement is high in cost and low in efficiency.  Latency and
   packet loss can only be measured after a connection has been
   established; if the measurement indicates that the latency SLA is
   not met, another path is computed, set up, and measured.  This
   "trial and error" process is very inefficient.  To avoid this
   problem, a means of making an accurate prediction of latency and
   packet loss before a path is established is required.

   This document describes the requirements and control plane
   implications of communicating latency and packet loss as traffic
   engineering performance metrics in today's networks, which may
   consist of multiple layers of packet transport and optical transport
   networks, in order to meet the latency and packet loss SLA between a
   service provider and its customers.

1.1.  Conventions Used in This Document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

2.  Latency and Loss Report

   This section does not specify how latency or packet loss is
   measured; measurement methods are provided in ITU-T [Y.1731],
   [G.709], and [ietf-mpls-loss-delay].  Its purpose is to define these
   metrics sufficiently clearly that mechanisms could be defined to
   measure them, and that independent implementations will report the
   same thing.  If the control plane is to report latency and packet
   loss, it must be clear about what is being reported.
   Packet/frame loss probability is expressed as the number of service
   packets/frames not delivered divided by the total number of service
   packets/frames sent during a time interval T.  Loss is measured by
   sending a measurement packet or frame from a measurement point to a
   receiving point, which sends back a response upon reception.

   Link latency is the time interval between the transmission of a
   signal and its reception.  Latency is measured by sending a
   measurement packet or frame from a measurement point to its
   reception point.  In some usages, latency is measured by sending a
   packet/frame that is returned to the sender, and the round-trip time
   is considered the latency of a bidirectional co-routed or associated
   LSP; the one-way time is considered the latency of a unidirectional
   LSP.  Note that the one-way latency may not be half of the
   round-trip latency when the transmit and receive directions of the
   path are of unequal lengths.

   The control plane should report two components of the delay, a
   "static" component and a "dynamic" component.  The dynamic component
   is caused by traffic loading, and what is reported for the dynamic
   portion is an approximation.

   Latency on a connection has two sources: node latency, caused by
   processing time in each node, and link latency, resulting from the
   packet/frame transit time between two neighbouring nodes or across a
   FA-LSP/Composite Link [CL-REQ].  The average latency of a node
   should be reported.  It is simpler to add node latency to the link
   delay than to carry a separate parameter, and doing so does not hide
   any important information.  Latency variation is a parameter that
   indicates the variation range of the latency value.  Latency and
   latency variation must be reported as average values calculated by
   the data plane.
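   The definitions above can be illustrated with a short sketch.  This
   is illustrative only; the function names are hypothetical and are
   not protocol elements defined by this document.

```python
# Illustrative sketch of the loss and latency definitions above.
# All names are hypothetical; this is not a protocol element.

def loss_probability(frames_sent, frames_delivered):
    """Loss: service frames not delivered divided by the total
    frames sent during the measurement interval T."""
    if frames_sent == 0:
        return 0.0
    return (frames_sent - frames_delivered) / frames_sent

def round_trip_latency_ms(forward_ms, reverse_ms):
    """Round-trip time of a bidirectional path.  When the two
    directions have unequal lengths, one-way latency is NOT
    simply half of the round-trip value."""
    return forward_ms + reverse_ms

# Asymmetric path: 12 ms forward, 8 ms reverse.
rtt = round_trip_latency_ms(12.0, 8.0)
print(rtt)        # 20.0 ms round trip
print(rtt / 2)    # 10.0 ms -- misestimates the 12.0 ms forward direction
print(loss_probability(10000, 9950))  # 0.005, i.e. 0.5% loss
```

   The last two lines show why halving the round-trip time is only
   valid for paths whose two directions have equal length.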
3.  Requirements Identification

   End-to-end service optimization based on latency and packet loss is
   a key requirement for service providers.  This type of function will
   be adopted by their "premium" service customers, who are willing to
   pay for such a "premium" service.  Latency and loss information at
   the route level will help carriers' customers make their provider
   selection decisions.  The following key requirements associated with
   latency and loss are identified.

   o  REQ #1: The solution MUST provide a means to communicate latency,
      latency variation, and packet loss of links and nodes as traffic
      engineering performance metrics in the IGP.

   o  REQ #2: Latency, latency variation, and packet loss may be
      unstable; for example, if queueing latency were included, the IGP
      could become unstable.  The solution MUST provide a means to
      control the advertisement of latency and loss IGP messages and to
      avoid instability when the latency, latency variation, and packet
      loss values change.

   o  REQ #3: The path computation entity MUST have the capability to
      compute an end-to-end path subject to latency and packet loss
      constraints.  For example, it must be able to compute a route
      with X amount of bandwidth, less than Y ms of latency, and at
      most Z% packet loss, based on the latency and packet loss traffic
      engineering database.  It MUST also support path computation with
      a combination of routing constraints with pre-defined priorities,
      e.g., SRLG diversity, latency, loss, and cost.

   o  REQ #4: An end-to-end LSP may traverse Composite Links [CL-REQ].
      Even if the transport technology (e.g., OTN) implementing the
      component links is identical, the latency and packet loss
      characteristics of the component links may differ.
      In order to assign the LSP to one of the component links with
      different latency and packet loss characteristics, the solution
      SHOULD provide a means to indicate that a traffic flow should
      select a component link with minimum latency and/or packet loss,
      a maximum acceptable latency and/or packet loss value, and a
      maximum acceptable delay variation value, as specified by the
      protocol.  The endpoints of the Composite Link will take these
      parameters into account for component link selection or creation.

   o  REQ #5: An end-to-end LSP may traverse a server layer, and there
      may be latency and packet loss constraints on the segment route
      in the server layer.  The solution SHALL provide a means to
      indicate FA selection or FA-LSP creation with minimum latency
      and/or packet loss, a maximum acceptable latency and/or packet
      loss value, and a maximum acceptable delay variation value.  The
      boundary nodes of the FA-LSP will take these parameters into
      account for FA selection or FA-LSP creation.

   o  REQ #6: The solution SHOULD provide a means to accumulate (e.g.,
      sum) the latency information of links and nodes along an LSP
      across multiple domains (e.g., Inter-AS, Inter-Area, or
      Multi-Layer) so that a latency validation decision can be made at
      the source node.  One-way and round-trip latency collection along
      the LSP by the signaling protocol, and latency verification at
      the end of the LSP, should be supported.  The accumulation of the
      delay is "simple" for the static component, i.e., it is a linear
      addition; the dynamic/network loading component is more
      interesting and would involve some estimate of the "worst case".
      However, the method of deriving this worst case appears to be
      more in the scope of network operator policy than of standards,
      i.e., the operator needs to decide, based on the SLAs offered,
      the required confidence level.
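   The accumulation in REQ #6 can be sketched as follows.  This is an
   illustrative model, not a protocol definition: the linear addition
   of static latency is as described above, while the multiplicative
   composition of per-hop loss assumes independent loss on each hop, a
   modelling assumption not made by this document.

```python
# Sketch of end-to-end accumulation along an LSP (cf. REQ #6).
# Static latency adds linearly.  For loss we assume (modelling
# assumption) independent per-hop loss, so end-to-end delivery
# probability is the product of per-hop delivery probabilities.

def accumulate_static_latency_ms(hop_latencies_ms):
    """Linear addition of the static latency of each link/node."""
    return sum(hop_latencies_ms)

def accumulate_loss(hop_loss_probs):
    """End-to-end loss = 1 - product of per-hop delivery probs."""
    delivered = 1.0
    for p in hop_loss_probs:
        delivered *= 1.0 - p
    return 1.0 - delivered

# Three hops: static link/node latencies and per-hop loss values.
print(accumulate_static_latency_ms([2.0, 5.5, 1.5]))      # 9.0 ms
print(round(accumulate_loss([0.001, 0.002, 0.001]), 6))   # ~0.003995
```

   Note that for small per-hop loss values the end-to-end loss is
   close to, but slightly less than, the simple sum of the per-hop
   values.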
   o  REQ #7: Some customers may insist on having the ability to
      re-route if the latency and loss SLA is not being met.  If a
      "provisioned" end-to-end LSP's latency and/or loss does not meet
      the latency and loss agreement between the operator and its user,
      the solution SHOULD support pre-defined or dynamic re-routing to
      handle this case based on local policy.  The latency performance
      of the pre-defined protection or dynamically re-routed LSP MUST
      meet the latency SLA parameter.

   o  REQ #8: If a "provisioned" end-to-end LSP's latency and/or loss
      performance improves because the performance of some segment
      improves, the solution SHOULD support re-routing to optimize the
      end-to-end latency and/or loss cost.

   o  REQ #9: As a result of changes of latency and loss on the LSP,
      the current LSP may be frequently switched to a new LSP with an
      appropriate latency and packet loss value.  In order to avoid
      this, the solution SHOULD gate the switchover of the LSP on a
      maximum acceptable change in the latency and packet loss values.

4.  Control Plane Implication

   o  The latency and packet loss performance metrics MUST be
      advertised to the path computation entity by the IGP (e.g.,
      OSPF-TE or IS-IS-TE) to perform route computation and network
      planning based on the latency and packet loss SLA targets.
      Latency, latency variation, and packet loss values MUST be
      reported as average values calculated by the data plane.  The
      latency and packet loss characteristics of these links and nodes
      may change dynamically.  In order to control IGP messaging and
      avoid instability when the latency, latency variation, and packet
      loss values change, a threshold and a limit on the rate of change
      MUST be configured in the control plane.  If a latency or packet
      loss value changes by more than the threshold, and the limit on
      the rate of change permits, the change MUST be advertised to the
      IGP again.
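   A minimal sketch of the damping described in the bullet above,
   assuming a hypothetical advertiser that floods a new value only
   when it differs from the last advertised value by more than a
   configured threshold, and no more often than a configured minimum
   interval (the "limit on rate of change"):

```python
# Hypothetical sketch of threshold + rate-of-change control for
# latency/loss advertisement.  Not from any standard; names and
# policy are illustrative assumptions.

class LatencyAdvertiser:
    def __init__(self, threshold_ms, min_interval_s):
        self.threshold_ms = threshold_ms      # minimum significant change
        self.min_interval_s = min_interval_s  # rate-of-change limit
        self.last_value = None
        self.last_time = None

    def observe(self, value_ms, now_s):
        """Return True if the new measurement should be flooded."""
        if self.last_value is None:
            advertise = True                  # first value: always flood
        elif abs(value_ms - self.last_value) < self.threshold_ms:
            advertise = False                 # change too small: suppress
        elif now_s - self.last_time < self.min_interval_s:
            advertise = False                 # too soon: rate-limit
        else:
            advertise = True
        if advertise:
            self.last_value, self.last_time = value_ms, now_s
        return advertise

adv = LatencyAdvertiser(threshold_ms=1.0, min_interval_s=30)
print(adv.observe(10.0, 0))    # True: first measurement is flooded
print(adv.observe(10.5, 10))   # False: below threshold, suppressed
print(adv.observe(12.0, 20))   # False: over threshold but rate-limited
print(adv.observe(12.0, 40))   # True: over threshold, interval elapsed
```

   The design choice here is that suppression is evaluated against the
   last *advertised* value rather than the last measurement, so a slow
   drift eventually crosses the threshold and is re-advertised.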
   o  The link latency attribute may also take into account the latency
      of a network element (node), i.e., the latency between the
      incoming port and the outgoing port of the network element.  If
      the link attribute includes both node latency and link latency,
      then when the latency calculation is done for paths traversing
      links on the same node, the node latency can be subtracted out.

   o  When a Composite Link [CL-REQ] is advertised into the IGP, the
      following considerations apply.

      *  The advertised latency and packet loss of the composite link
         may be the range (e.g., at least the minimum and maximum) of
         the latency values of all component links, or the maximum
         latency value of all component links.  In either case, only
         partial information is carried in the IGP, so the path
         computation entity has insufficient information to determine
         whether a particular path can support its latency and packet
         loss requirements; this leads to signaling crankback.  The IGP
         may therefore be extended to advertise the latency and packet
         loss of each component link within a Composite Link having an
         IGP adjacency.

   o  An end-to-end LSP (e.g., in an IP/MPLS or MPLS-TP network) may
      traverse a FA-LSP of a server layer (e.g., OTN rings).  The
      boundary nodes of the FA-LSP SHOULD be aware of the latency and
      packet loss information of this FA-LSP.

      *  If the FA-LSP is able to form a routing adjacency and/or act
         as a TE link in the client network, the total latency and
         packet loss values of the FA-LSP can be used as input to a
         transformation that results in a FA traffic engineering metric
         advertised into the client layer routing instances.  Note that
         this metric will include the latency and packet loss of the
         links and nodes that the trail traverses.
      *  If the total latency and packet loss information of the FA-LSP
         changes (e.g., due to a maintenance action or a failure in the
         OTN rings), the boundary node of the FA-LSP will receive the
         TE link information advertisement including the changed
         latency and packet loss values; if the change exceeds the
         threshold and the limit on the rate of change permits, it will
         recompute the total latency and packet loss values of the
         FA-LSP.  If the total latency and packet loss values of the
         FA-LSP change, the client layer MUST also be notified of the
         latest values of the FA.  The client layer can then decide
         whether it will accept the increased latency and packet loss
         or request a new path that meets the latency and packet loss
         requirements.

   o  Restoration, protection, and equipment variations can impact the
      "provisioned" latency and packet loss (e.g., latency and packet
      loss may increase).  A change in an end-to-end LSP's latency and
      packet loss performance MUST be known by the source and/or sink
      node, so that it can inform the higher layer network of the
      latency and packet loss change.  A latency or packet loss change
      on links and nodes will affect an end-to-end LSP's total latency
      or packet loss, and applications can fail beyond an
      application-specific threshold.  Some remedy mechanism could be
      used.

      *  Pre-defined protection or dynamic re-routing could be
         triggered to handle this case.  In the case of pre-defined
         protection, large amounts of redundant capacity may have a
         significant negative impact on the overall network cost.  A
         service provider may have many layers of pre-defined
         restoration for this purpose, but would then have to duplicate
         restoration resources at significant cost; the solution should
         provide mechanisms to avoid duplicate restoration and reduce
         the network cost.  Dynamic re-routing also faces the risk of
         resource limitations.
         So the choice of mechanism MUST be based on SLA or policy.  In
         the case where the latency SLA cannot be met after a re-route
         is attempted, the control plane should report an alarm to the
         management plane.  It could also retry restoration a
         configurable number of times.

5.  Security Considerations

   The use of control plane protocols for signaling, routing, and path
   computation of latency and loss opens security threats through
   attacks on those protocols.  The control plane may be secured using
   the mechanisms defined for the protocols discussed.  For further
   details of the specific security measures, refer to the documents
   that define the protocols ([RFC3473], [RFC4203], [RFC4205],
   [RFC4204], and [RFC5440]).  [GMPLS-SEC] provides an overview of
   security vulnerabilities and protection mechanisms for the GMPLS
   control plane.

6.  IANA Considerations

   This document makes no requests for IANA action.

7.  References

7.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
              and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
              Tunnels", RFC 3209, December 2001.

   [RFC3473]  Berger, L., "Generalized Multi-Protocol Label Switching
              (GMPLS) Signaling Resource ReserVation Protocol-Traffic
              Engineering (RSVP-TE) Extensions", RFC 3473, January
              2003.

   [RFC3477]  Kompella, K. and Y. Rekhter, "Signalling Unnumbered Links
              in Resource ReSerVation Protocol - Traffic Engineering
              (RSVP-TE)", RFC 3477, January 2003.

   [RFC3630]  Katz, D., Kompella, K., and D. Yeung, "Traffic
              Engineering (TE) Extensions to OSPF Version 2", RFC 3630,
              September 2003.

   [RFC4203]  Kompella, K. and Y. Rekhter, "OSPF Extensions in Support
              of Generalized Multi-Protocol Label Switching (GMPLS)",
              RFC 4203, October 2005.

7.2.  Informative References

   [CL-REQ]   Villamizar, C., "Requirements for MPLS Over a Composite
              Link", draft-ietf-rtgwg-cl-requirement-02.

   [G.709]    ITU-T Recommendation G.709, "Interfaces for the Optical
              Transport Network (OTN)", December 2009.

   [Y.1731]   ITU-T Recommendation Y.1731, "OAM functions and
              mechanisms for Ethernet based networks", February 2008.

   [ietf-mpls-loss-delay]
              Frost, D., "Packet Loss and Delay Measurement for MPLS
              Networks", draft-ietf-mpls-loss-delay-03.

Authors' Addresses

   Xihua Fu
   ZTE

   Email: fu.xihua@zte.com.cn

   Malcolm Betts
   ZTE

   Email: malcolm.betts@zte.com.cn

   Qilei Wang
   ZTE

   Email: wang.qilei@zte.com.cn

   Dave McDysan
   Verizon

   Email: dave.mcdysan@verizon.com

   Andrew Malis
   Verizon

   Email: andrew.g.malis@verizon.com

   Spencer Giacalone
   Thomson Reuters

   Email: spencer.giacalone@thomsonreuters.com

   John Drake
   Juniper Networks

   Email: jdrake@juniper.net