Network Working Group                                              X. Fu
Internet-Draft                                                  M. Betts
Intended status: Standards Track                                 Q. Wang
Expires: January 5, 2012                                             ZTE
                                                              D. McDysan
                                                                A. Malis
                                                                 Verizon
                                                            July 4, 2011

    Framework for latency and loss traffic engineering application
              draft-fuxh-ccamp-delay-loss-te-framework-00

Abstract

   Latency and packet loss are requirements that must be met according
   to the Service Level Agreement (SLA) / Network Performance Objective
   (NPO) between customers and service providers.  Latency and packet
   loss can be associated with different service levels.  A user may
   select a private line provider based on its ability to meet a
   latency and loss SLA.

   A key driver for latency and loss requirements is stock/commodity
   trading applications that use database mirroring.  A few
   milliseconds of delay, or a small amount of packet loss, can impact
   a transaction.  Financial and trading companies are very focused on
   end-to-end private line latency optimizations that improve latency
   by 2-3 ms.  Latency/loss and the associated SLA are among the key
   parameters that these "high value" customers use to select a
   private line provider.  Other key applications, such as video
   gaming, conferencing, and storage area networks, have stringent
   latency, loss, and bandwidth requirements.

   This document describes requirements for communicating latency and
   packet loss as traffic engineering performance metrics in today's
   networks, which potentially consist of multiple layers of packet
   transport and optical transport networks, in order to meet the
   latency/loss SLA between a service provider and its customers.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 5, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Conventions Used in This Document
   2.  Latency and Loss Report
   3.  Requirements Identification
   4.  Control Plane Implication
   5.  Security Considerations
   6.  IANA Considerations
   7.  References
     7.1.  Normative References
     7.2.  Informative References
   Authors' Addresses

1.  Introduction

   The current operation and maintenance mode for latency and packet
   loss measurement is high in cost and low in efficiency.  Latency
   and packet loss can only be measured after a connection has been
   established; if the measurement indicates that the latency SLA is
   not met, then another path is computed, set up, and measured.  This
   "trial and error" process is very inefficient.  To avoid this
   problem, a means of making an accurate prediction of latency and
   packet loss before a path is established is required.

   This document describes the requirements and control plane
   implications of communicating latency and packet loss as traffic
   engineering performance metrics in today's networks, which
   potentially consist of multiple layers of packet transport and
   optical transport networks, in order to meet the latency and packet
   loss SLA between a service provider and its customers.

1.1.  Conventions Used in This Document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

2.  Latency and Loss Report

   This section does not specify how latency or packet loss is
   measured; measurement methods are provided in ITU-T [Y.1731],
   [G.709], and [ietf-mpls-loss-delay].  Its purpose is to define what
   is reported sufficiently clearly that mechanisms could be defined
   to measure it, and so that independent implementations will report
   the same thing.  If the control plane is to report latency and
   packet loss, it must be clear about what it is reporting.

   Packet/frame loss probability is expressed as a percentage: the
   number of service packets/frames not delivered divided by the total
   number of service frames sent during a time interval T.
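   As an illustration only (not a protocol mechanism), the loss
   probability above can be computed from frame counts as in the
   following Python sketch; the function name and inputs are
   assumptions for illustration:

```python
def loss_probability(frames_sent: int, frames_delivered: int) -> float:
    """Loss probability over a time interval T, as a percentage:
    service frames not delivered divided by total frames sent."""
    if frames_sent == 0:
        return 0.0  # no traffic in the interval; nothing to report
    return 100.0 * (frames_sent - frames_delivered) / frames_sent
```

   For example, 990 of 1000 frames delivered during the interval
   yields a reported loss probability of 1.0 percent.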
   Loss is measured by sending a measurement packet or frame from a
   measurement point to a reception point, with the reception point
   sending back a response.

   The latency of a link is the time interval between the transmission
   of a signal and its reception.  Latency is measured by sending a
   measurement packet or frame from a measurement point to a reception
   point.  In some usages, latency is measured by sending a
   packet/frame that is returned to the sender; the round-trip time is
   considered the latency of a bidirectional co-routed or associated
   LSP.  The one-way time is considered the latency of a
   unidirectional LSP.  The one-way latency may not be half of the
   round-trip latency in the case that the transmit and receive
   directions of the path are of unequal lengths.

   The control plane should report two components of the delay,
   "static" and "dynamic".  The dynamic component is caused by traffic
   loading.  What is reported for the "dynamic" portion is an
   approximation.

   Latency on a connection has two sources: node latency, which is
   caused by processing time in each node, and link latency, which
   results from packet/frame transit time between two neighbouring
   nodes or across an FA-LSP/Composite Link [CL-REQ].  The average
   node latency should be reported.  It is simpler to add node latency
   to the link delay than to carry a separate parameter, and doing so
   does not hide any important information.  Latency variation is a
   parameter that indicates the variation range of the latency value.
   Latency and latency variation values must be reported as average
   values calculated by the data plane.

3.  Requirements Identification

   End-to-end service optimization based on latency and packet loss is
   a key requirement for service providers.  This type of function
   will be adopted by their "premium" service customers, who are
   willing to pay for this "premium" service.
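   The averaged latency reporting described in Section 2 can be
   sketched as follows.  This is an illustration only; the
   static/dynamic split rule (taking the minimum sample as the
   load-independent floor) and all names are assumptions, not defined
   behaviour:

```python
from statistics import mean

def report_latency(one_way_samples_ms: list[float]) -> dict:
    """Report averaged latency figures as a data plane might:
    the average latency, a latency-variation range, and a rough
    static/dynamic split, where the minimum sample stands in for the
    static (propagation/processing) floor and the remainder is
    attributed to traffic loading."""
    avg = mean(one_way_samples_ms)
    static = min(one_way_samples_ms)  # assumed load-independent floor
    return {
        "average_ms": avg,
        "variation_ms": max(one_way_samples_ms) - static,
        "static_ms": static,
        "dynamic_ms": avg - static,   # approximation, per Section 2
    }
```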
   Latency and loss information at the route level will help carriers'
   customers make their provider selection decisions.  The following
   key requirements associated with latency and loss are identified.

   o  REQ #1: The solution MUST provide a means to communicate the
      latency, latency variation, and packet loss of links and nodes
      as traffic engineering performance metrics in the IGP.

   o  REQ #2: Latency, latency variation, and packet loss may be
      unstable; for example, if queueing latency were included, the
      IGP could become unstable.  The solution MUST provide a means to
      control the advertisement of latency and loss IGP messages and
      to avoid instability when the latency, latency variation, and
      packet loss values change.

   o  REQ #3: The path computation entity MUST have the capability to
      compute an end-to-end path with latency and packet loss
      constraints.  For example, it must be able to compute a route
      with X amount of bandwidth, less than Y ms of latency, and at
      most Z% packet loss, based on the latency and packet loss
      traffic engineering database.  It MUST also support path
      computation with a combination of routing constraints with
      pre-defined priorities, e.g., SRLG diversity, latency, loss,
      and cost.

   o  REQ #4: An end-to-end LSP may traverse some Composite Links
      [CL-REQ].  Even if the transport technology (e.g., OTN)
      implementing the component links is identical, the latency and
      packet loss characteristics of the component links may differ.
      In order to assign the LSP to one of the component links with
      different latency and packet loss characteristics, the solution
      SHOULD provide a means to indicate that a traffic flow should
      select a component link with minimum latency and/or packet
      loss, a maximum acceptable latency and/or packet loss value,
      and a maximum acceptable delay variation value, as specified by
      the protocol.
      The endpoints of the Composite Link will take these parameters
      into account for component link selection or creation.

   o  REQ #5: An end-to-end LSP may traverse a server layer.  There
      may be latency and packet loss constraint requirements for the
      segment routed in the server layer.  The solution SHALL provide
      a means to indicate FA selection or FA-LSP creation with
      minimum latency and/or packet loss, a maximum acceptable
      latency and/or packet loss value, and a maximum acceptable
      delay variation value.  The boundary nodes of the FA-LSP will
      take these parameters into account for FA selection or FA-LSP
      creation.

   o  REQ #6: The solution SHOULD provide a means to accumulate
      (e.g., sum) the latency information of links and nodes along an
      LSP across multiple domains (e.g., Inter-AS, Inter-Area, or
      Multi-Layer) so that a latency validation decision can be made
      at the source node.  Collection of one-way and round-trip
      latency along the LSP by the signaling protocol, and latency
      verification at the end of the LSP, should be supported.  The
      accumulation of the delay is "simple" for the static component,
      i.e., it is a linear addition; the dynamic/network loading
      component is more interesting and would involve some estimate
      of the "worst case".  However, the method of deriving this
      worst case appears to be more in the scope of network operator
      policy than standards, i.e., the operator needs to decide,
      based on the SLAs offered, the required confidence level.

   o  REQ #7: Some customers may insist on having the ability to
      re-route if the latency and loss SLA is not being met.  If a
      "provisioned" end-to-end LSP's latency and/or loss cannot meet
      the latency and loss agreement between the operator and its
      user, the solution SHOULD support pre-defined or dynamic
      re-routing to handle this case based on local policy.
      The latency performance of the pre-defined protection or
      dynamically re-routed LSP MUST meet the latency SLA parameter.

   o  REQ #8: If a "provisioned" end-to-end LSP's latency and/or loss
      performance improves because the performance of some segment
      improves, the solution SHOULD support re-routing to optimize
      the end-to-end latency and/or loss cost.

   o  REQ #9: As a result of changes in latency and loss along the
      LSP, the current LSP might be frequently switched to a new LSP
      with an appropriate latency and packet loss value.  In order to
      avoid this, the solution SHOULD condition switchover of the LSP
      on a maximum acceptable change in the latency and packet loss
      values.

4.  Control Plane Implication

   o  The latency and packet loss performance metrics MUST be
      advertised to the path computation entity by the IGP (e.g.,
      OSPF-TE or IS-IS-TE) to perform route computation and network
      planning based on the latency and packet loss SLA targets.
      Latency, latency variation, and packet loss values MUST be
      reported as average values calculated by the data plane.  The
      latency and packet loss characteristics of links and nodes may
      change dynamically.  In order to control IGP messaging and
      avoid instability when the latency, latency variation, and
      packet loss values change, a threshold and a limit on the rate
      of change MUST be configured in the control plane.  If any
      latency or packet loss value changes by more than the
      threshold, subject to the limit on the rate of change, then the
      change MUST be advertised to the IGP again.

   o  The link latency attribute may also take into account the
      latency of a network element (node), i.e., the latency between
      the incoming port and the outgoing port of the network element.
      If the link attribute includes both node latency and link
      latency, then when the latency calculation is done for paths
      traversing links on the same node, the node latency can be
      subtracted out.
   o  When a Composite Link [CL-REQ] is advertised into the IGP, the
      following considerations apply.

      *  The latency and packet loss of a composite link may be
         advertised as a range (e.g., at least the minimum and
         maximum) of the latency values of all component links.  It
         may also be the maximum latency value of all component
         links.  In these cases, only partial information is
         transmitted in the IGP, so the path computation entity has
         insufficient information to determine whether a particular
         path can support its latency and packet loss requirements.
         This leads to signaling crankback.  Therefore, the IGP may
         be extended to advertise the latency and packet loss of each
         component link within a Composite Link having an IGP
         adjacency.

   o  An end-to-end LSP (e.g., in an IP/MPLS or MPLS-TP network) may
      traverse an FA-LSP in a server layer (e.g., OTN rings).  The
      boundary nodes of the FA-LSP SHOULD be aware of the latency and
      packet loss information of this FA-LSP.

      *  If the FA-LSP is able to form a routing adjacency and/or act
         as a TE link in the client network, the total latency and
         packet loss value of the FA-LSP can be used as input to a
         transformation that results in an FA traffic engineering
         metric advertised into the client layer routing instances.
         Note that this metric will include the latency and packet
         loss of the links and nodes that the trail traverses.

      *  If the total latency and packet loss information of the
         FA-LSP changes (e.g., due to a maintenance action or a
         failure in the OTN rings), the boundary node of the FA-LSP
         will receive the TE link information advertisement carrying
         the changed latency and packet loss value; if the change
         exceeds the threshold or the limit on the rate of change, it
         will recompute the total latency and packet loss value of
         the FA-LSP.  If the total latency and packet loss value of
         the FA-LSP changes, the client layer MUST also be notified
         of the latest value of the FA.
         The client layer can then decide whether it will accept the
         increased latency and packet loss or request a new path that
         meets its latency and packet loss requirements.

   o  Restoration, protection, and equipment variations can impact
      the "provisioned" latency and packet loss (e.g., latency and
      packet loss may increase).  A change in an end-to-end LSP's
      latency and packet loss performance MUST be known by the source
      and/or sink node, so that it can inform the higher layer
      network of the latency and packet loss change.  A latency or
      packet loss change on links and nodes will affect an end-to-end
      LSP's total latency or packet loss, and applications can fail
      beyond an application-specific threshold, so some remedy
      mechanism could be used.

      *  Pre-defined protection or dynamic re-routing could be
         triggered to handle this case.  In the case of pre-defined
         protection, large amounts of redundant capacity may have a
         significant negative impact on the overall network cost.  A
         service provider may have many layers of pre-defined
         restoration for this purpose, but would then have to
         duplicate restoration resources at significant cost.  The
         solution should provide mechanisms to avoid duplicated
         restoration and reduce network cost.  Dynamic re-routing
         also faces the risk of resource limitation.  The choice of
         mechanism MUST therefore be based on the SLA or policy.  In
         the case where the latency SLA cannot be met after a
         re-route is attempted, the control plane should report an
         alarm to the management plane.  It could also retry
         restoration a configurable number of times.

5.  Security Considerations

   The use of control plane protocols for signaling, routing, and
   path computation of latency and loss opens security threats
   through attacks on those protocols.  The control plane may be
   secured using the mechanisms defined for those protocols.
   For further details of the specific security measures, refer to
   the documents that define the protocols ([RFC3473], [RFC4203],
   [RFC4205], [RFC4204], and [RFC5440]).  [GMPLS-SEC] provides an
   overview of security vulnerabilities and protection mechanisms for
   the GMPLS control plane.

6.  IANA Considerations

   This document makes no requests for IANA action.

7.  References

7.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan,
              V., and G. Swallow, "RSVP-TE: Extensions to RSVP for
              LSP Tunnels", RFC 3209, December 2001.

   [RFC3473]  Berger, L., "Generalized Multi-Protocol Label Switching
              (GMPLS) Signaling Resource ReserVation Protocol-Traffic
              Engineering (RSVP-TE) Extensions", RFC 3473, January
              2003.

   [RFC3477]  Kompella, K. and Y. Rekhter, "Signalling Unnumbered
              Links in Resource ReSerVation Protocol - Traffic
              Engineering (RSVP-TE)", RFC 3477, January 2003.

   [RFC3630]  Katz, D., Kompella, K., and D. Yeung, "Traffic
              Engineering (TE) Extensions to OSPF Version 2",
              RFC 3630, September 2003.

   [RFC4203]  Kompella, K. and Y. Rekhter, "OSPF Extensions in
              Support of Generalized Multi-Protocol Label Switching
              (GMPLS)", RFC 4203, October 2005.

7.2.  Informative References

   [CL-REQ]   Villamizar, C., "Requirements for MPLS Over a Composite
              Link", draft-ietf-rtgwg-cl-requirement-02.

   [G.709]    ITU-T Recommendation G.709, "Interfaces for the Optical
              Transport Network (OTN)", December 2009.

   [Y.1731]   ITU-T Recommendation Y.1731, "OAM functions and
              mechanisms for Ethernet based networks", February 2008.

   [ietf-mpls-loss-delay]
              Frost, D., "Packet Loss and Delay Measurement for MPLS
              Networks", draft-ietf-mpls-loss-delay-03.
Authors' Addresses

   Xihua Fu
   ZTE

   Email: fu.xihua@zte.com.cn

   Malcolm Betts
   ZTE

   Email: malcolm.betts@zte.com.cn

   Qilei Wang
   ZTE

   Email: wang.qilei@zte.com.cn

   Dave McDysan
   Verizon

   Email: dave.mcdysan@verizon.com

   Andrew Malis
   Verizon

   Email: andrew.g.malis@verizon.com