IPPM Working Group                                         B. M. Gaonkar
Internet-Draft                                                  S. Jacob
Intended status: Standards Track                                 Juniper
Expires: December 26, 2017                                   G. Fioccola
                                                          Telecom Italia
                                                                   Q. Wu
                                                                  Huawei
                                                      P. Ananthasankaran
                                                                   Nokia
                                                           June 24, 2017

                     Performance Measurement Models
                       draft-bhaprasud-ippm-pm-03

Abstract

   This document defines performance measurement models for service
   level packets that can be implemented in different kinds of network
   scenarios.  Based on the resulting performance matrix, analytics
   data can be pulled from a live network, which is not possible at
   present.  These models can be used for self-evolving networks.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 26, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Conventions used in this document
   3.  Traffic Management Architecture
     3.1.  Selection Process
     3.2.  Metering Process
   4.  Performance Measurement Models
     4.1.  Complete data measurement (Monitoring all the traffic)
     4.2.  Color based data measurement
     4.3.  CoS based Data measurement
     4.4.  CoS and Color based Data measurement
   5.  Active and Passive performance measurements
   6.  Use Cases
   7.  Acknowledgements
   8.  Security Considerations
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Authors' Addresses

1.  Introduction

   Today, monitoring the performance experienced by customer traffic is
   a key technology for strengthening service offerings, verifying the
   service level agreements between customers and service providers,
   and troubleshooting.
   The lack of adequate monitoring tools that can detect an interesting
   subset of a packet stream, as identified by a particular packet
   attribute (e.g., commit rate or DSCP), and measure its packet loss
   drives an effort to design a new method for the performance
   monitoring of live traffic that is easy to implement and deploy.
   This draft aims to provide fine-granularity loss, delay, and delay
   variation measurement and to define a performance measurement model
   for customer traffic based on a set of constraints associated with
   the service level agreement, such as the CoS attribute and the color
   attribute.  Each customer's traffic corresponds to an interesting
   subset of the same packet stream.  The customer, or an interesting
   packet stream, can be identified by a list of source or destination
   prefixes, or by ingress or egress interfaces, combined with packet
   attributes such as DSCP or commit rate.  Unlike the Color and CoS
   identification specified in MEF 23.1, this draft does not define a
   new Color and CoS identification mechanism; instead, it sticks to
   the color definitions in [RFC2697] and [RFC2698] and the CoS
   definition in [RFC2474].

   The network would be provisioned with multiple services (e.g., real
   time service, interactive service) having different network
   performance criteria (e.g., a bandwidth constraint or a packet loss
   constraint for the end-to-end path) based on the customers'
   requirements.  These models aim at performing loss, delay, and
   delay variation measurement for these services (belonging to the
   same customer) independently for each defined network performance
   criterion.

   The class-of-service and packet color classification defined in the
   network is a key factor to classify network traffic and drive the
   traffic management mechanisms that achieve the corresponding
   network performance criteria for each service.
   This draft uses the class-of-service model and the color-based model
   for any given network to define performance measurement for various
   services with different network performance criteria.

   The proposed models are suitable mainly for passive performance
   measurements but can be considered for active and hybrid performance
   measurements as well.

   This solution models loss, delay, and delay variation measurement in
   different kinds of network scenarios.  The different models
   explained here help to analyse performance patterns, analyze network
   congestion, and model the network in a better way.  For instance,
   loss measurement is carried out between two endpoints.  The
   underlying technology could be an active loss measurement or a
   passive loss measurement.

   Any loss measurement will require two counters:

   o  the number of packets transmitted from one endpoint;

   o  the number of packets received at the other endpoint.

   This draft explains the different ways to model the above data and
   get meaningful results for the loss, delay, and delay variation
   measurement.  The underlying technology could be an MPLS-based
   performance measurement or an IP-based performance measurement.

2.  Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

   Observation Point:  An Observation Point is a location in the
      network where data packets can be observed.  Examples include a
      line to which a probe is attached, a shared medium such as an
      Ethernet-based LAN, a single port of a router, or a set of
      interfaces (physical or logical) of a router.

   Persistence Data Store:  The Persistence Data Store is a scalable
      data store that collects time-based data, such as streaming data
      or time series data, for network analytics.

   Time Series Data:  Time Series Data is a sequence of data points
      with time stamps.  The data points are limited to loss, delay,
      and delay variation measurement results in this document.

   Packet Stream:  A Packet Stream denotes a set of packets from the
      Observed Packet Stream that flows past some specified point
      within the Metering Process.  An example of a Packet Stream is
      the output of the Selection Process.

   Packet Content:  The Packet Content denotes the union of the packet
      header (which includes link layer, network layer, and other
      encapsulation headers) and the packet payload.

   Color Identifier:  The Color Identifier identifies the color that
      applies to the data packet.  A color identifier can be assigned
      to a service level packet based on the commit rate and excess
      rate set for the traffic.  For example, the service level packet
      is set to "green" if its rate is less than the "committed" rate,
      to "yellow" if it exceeds the "committed" rate but is less than
      the "excess" rate, and to "red" if it exceeds both the
      "committed" and "excess" rates.

   CoS Identifier:  The CoS Identifier identifies the CoS that applies
      to the data packet.  A CoS identifier can be assigned based on
      the dot1p value in the C-tag or the DSCP in the IP header.

   Complete data measurement:  Complete data measurement is a data
      measurement method that monitors every packet and condenses a
      large amount of information about packet arrivals into a small
      number of statistics.  The aim of "monitoring every packet" is
      to ensure that the information reported is not dependent on the
      application.

   Color based data measurement:  Color based data measurement is a
      data measurement method that monitors the data packets with the
      same color identifier.  The color identifier can be "green",
      "yellow", or "red".

   CoS based data measurement:  CoS based data measurement is a data
      measurement method that monitors the data packets with the same
      CoS identifier.  The CoS identifier can be the C-Tag Priority
      Code Point (PCP) or the DSCP.

   CoS and Color based data measurement:  CoS and Color based data
      measurement is a data measurement method that monitors the data
      packets with a specific CoS Identifier and a specific Color
      Identifier as constraints.  The measurement results with CoS
      Identifier and Color Identifier constraints constitute a network
      performance matrix.

3.  Traffic Management Architecture

   A stream of packets is observed at an Observation Point at each of
   the source and destination endpoints.  Two Observation Points can
   also be placed at the same endpoint for node monitoring
   [I-D.ietf-ippm-alt-mark], i.e., one at the ingress interface of the
   endpoint and the other at its egress interface.  A Selection
   Process inspects each packet to determine whether or not it is to
   be selected for data analytics.  The Selection Process is part of
   the Metering Process, which constructs a report stream on the
   selected packets as output, using the Packet Content and possibly
   other information such as the arrival timestamp.  The report stream
   on the selected packets is stored in the Persistence Data Store for
   real-time data analysis or time sequence data analysis.

   The following figure shows the sequence of the three processes
   (Selection, Metering, and Storing).

                           +-----------+                       +-----------+
                           |Persistence|                       |Persistence|
                           |Data Store |                       |Data Store |
 Src Endpoint              +-----^-----+         Dst Endpoint  +-----^-----+
           +------------------+  |             +------------------+  |
           | Metering Process |  |             | Metering Process |  |
 Observed  |  +-----------+   |  |             |  +-----------+   |  |
 Packet--->|  | Selection |------+  Observed   |  | Selection |   |  |
 Stream    |  | Process   |----------Packet--->|  | Process   |------+
           |  +-----------+   |      Stream    |  +-----------+   |
           +------------------+                +------------------+

3.1.  Selection Process

   This section defines the Selection Process and related objects.

   Selection Process:  A Selection Process takes the Observed Packet
      Stream as its input and selects a subset of that stream as its
      output.

   Selection State:  A Selection Process may maintain state information
      for its own use.  At a given time, the Selection State may depend
      on packets observed at and before that time, and on other
      variables.  Examples include sequence numbers of packets at the
      input of Selectors, the timestamp of observation of the packet at
      the Observation Point, and indicators of whether the packet was
      selected by a given Selector.

   Selector:  A Selector defines the action of a Selection Process on a
      single packet of its input.  If selected, the packet becomes an
      element of the output Packet Stream.

      The Selector can make use of the following information in
      determining whether a packet is selected:

      *  the CoS Identifier in the Packet Content;

      *  a traffic attribute such as the Color Identifier;

      *  a combination of the CoS Identifier and the Color Identifier.

3.2.  Metering Process

   A Metering Process selects packets from the Observed Packet Stream
   using a Selection Process and produces as output a Report Stream
   concerning the selected packets.

4.  Performance Measurement Models

4.1.  Complete data measurement (Monitoring all the traffic)

   This model uses the complete data traffic between the two endpoints
   to compute loss, delay, and delay variation.  This results in the
   computation of loss, delay, and delay variation measurements for
   the entire traffic in the network in one direction.  It is
   primarily used for backbone traffic, where traffic from different
   services is aggregated and sent into the core network.  Because all
   packets are counted, this gives the overall measurement from one
   endpoint to the other.

4.2.  Color based data measurement

   This is the same as the "complete data measurement" model above,
   with one difference: only the data packets with a specific color
   identifier are monitored.

   In this model the packets are counted in the following way: count
   specific data traffic with different color identifiers between the
   two endpoints for loss, delay, and delay variation measurement.
   One example of color based data measurement is to count two types
   of colored traffic:

   o  Count all committed traffic between the two endpoints for loss
      measurement.

   o  Count all excess traffic, which is beyond the committed traffic,
      for the specific network.

   o  The probe carries the time stamps, which can later be used for
      calculating the service outage.

   o  This method can be used for mapping the overall customer traffic
      along with the EIR; based on the EIR, the provider can increase
      the bandwidth and charge the customer accordingly.

   When both of these are combined, this becomes the model for
   complete traffic described in the previous section.

   In practice, the coloring of traffic can use any mechanism based on
   the network encapsulation.  As long as the packets can be treated
   differently based on the underlying encapsulation, this mechanism
   can be used.
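
   As an illustration, the per-color counting described above can be
   sketched as follows.  This sketch is not part of the specification:
   the rate thresholds and the names used are illustrative, and a real
   meter would implement the token-bucket behavior of [RFC2697] or
   [RFC2698] rather than compare a precomputed per-packet rate.

```python
# Illustrative sketch of color-based counters (Sections 4.1 and 4.2).
# CIR/EIR values and the per-packet "measured rate" input are example
# assumptions; an actual meter uses token buckets per RFC 2697/2698.
from collections import Counter

CIR = 1_000_000  # committed information rate, bits/s (example value)
EIR = 2_000_000  # excess information rate, bits/s (example value)

def color_of(measured_rate_bps):
    """Assign a Color Identifier from the measured rate."""
    if measured_rate_bps <= CIR:
        return "green"   # within the committed rate
    if measured_rate_bps <= EIR:
        return "yellow"  # exceeds committed, within excess
    return "red"         # exceeds both committed and excess rates

class ColorCounters:
    """Per-color tx/rx counters kept at the two Observation Points."""

    def __init__(self):
        self.tx = Counter()  # packets transmitted, keyed by color
        self.rx = Counter()  # packets received, keyed by color

    def record_tx(self, measured_rate_bps):
        self.tx[color_of(measured_rate_bps)] += 1

    def record_rx(self, measured_rate_bps):
        self.rx[color_of(measured_rate_bps)] += 1

    def loss(self, color):
        """Packets lost for one color: transmitted minus received."""
        return self.tx[color] - self.rx[color]
```

   Summing the per-color counters reproduces the complete data
   measurement of Section 4.1, matching the observation above that
   combining the committed and excess counts yields the model for
   complete traffic.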

   This can be used for measuring the whole traffic of a customer who
   does not want CoS-level measurement.  It is ideally suited to a
   provider who extends bandwidth to small providers, point-to-point
   services, and so on.

4.3.  CoS based Data measurement

   This model uses the data traffic in the network that is flowing in
   a specific CoS to measure the loss, delay, and delay variation in
   the network.  Based on the class of the traffic, the transmitted
   and received packets are counted to calculate the packets
   transferred per service level.  The time stamp is captured along
   with the packet count to measure the service down time.  This model
   measures the performance per service level.  The data can be stored
   on the routers and used to plot live analytics.

   The primary use of this kind of measurement is to measure packet
   loss, delay, and delay variation for a specific service that needs
   to meet network performance requirements.  The service could be a
   point-to-point Layer 2 service or an MPLS-based service.

4.4.  CoS and Color based Data measurement

   This model uses a combination of the color based data measurement
   and the CoS based data measurement.  Packets are counted for a
   specific CoS with a specific color.  This can count both in-profile
   packets, which are green, and out-of-profile packets, which are
   yellow.  It does not count the red packets, which do not meet the
   network performance requirements.  The packets are counted per
   service level with CIR and EIR, along with time stamps, to find the
   service outage and loss.  The per-service-level counting by CoS and
   color gives more granular data for plotting service graphs, and if
   some service is continuously exceeding its bandwidth, this data can
   be used to charge the end customer for the extra bandwidth usage or
   to increase the bandwidth on a usage basis.

5.  Active and Passive performance measurements

   This model reinforces the use of well-known methodologies for
   passive performance measurements.  A very simple, flexible, and
   straightforward mechanism is presented in [I-D.ietf-ippm-alt-mark].
   The basic idea is to virtually split traffic flows into consecutive
   batches of packets: each batch represents a measurable entity that
   is unambiguously recognizable thanks to the alternate marking.
   This approach, called the Alternate Marking method, is efficient
   both for passive performance monitoring and for active performance
   monitoring.  Most applications require passive packet loss
   measurement for better accuracy.  In some cases, instead, active
   delay measurements alone (e.g., TWAMP or OWAMP) are sufficient.

6.  Use Cases

   Consider a provider running a point-to-point service between
   routers A and B for its customer "X".  Customer "X" has voice
   traffic, which requires special treatment, and also requires
   attention for its database traffic.  Customer "X" has an SLA with
   the provider.  The challenge faced by the provider is how to
   measure the traffic of customer "X" for each class and calculate
   the bandwidth; moreover, the provider has to see whether "X" is
   sending traffic that exceeds the agreed level, so that it can set
   the tariff accordingly.  This problem is solved by the above
   models, which can measure the packets for each class of traffic and
   tabulate the data.  At a later point in time, this data can be
   pulled for evaluation.

   +-------+                         +-------+
   |       |                         |       |
   |       +-------------------------+       |
   |       |       P2P service       |       |
   +-------+                         +-------+
   Router A                          Router B

                          Figure 1: P2P

   The same considerations apply in a multipoint-to-multipoint
   scenario (e.g., VPN or Data Center interconnections).  In this case
   Customer "X" has multiple ingress endpoints and multiple egress
   endpoints.
   The proposed matrix model is composed of the number of flows of "X"
   in the multipoint scenario and of the class-of-service and color
   classification.  The SLA matrix is thus a reference for the
   analysis and evaluation phase.

   +--+                              +--+
   |  |                              |  |
   +--+                              +--+
   Router A1                         Router B1
   +--+                              +--+
   |  |        MP2MP service         |  |
   +--+                              +--+
   Router A2                         Router B2
    .                                 .
    .                                 .
    .                                 .
   +--+                              +--+
   |  |                              |  |
   +--+                              +--+
   Router An                         Router Bn

                         Figure 2: MP2MP

7.  Acknowledgements

   We would like to thank Brian Trammell for giving us the opportunity
   to present our draft.  We would like to thank Greg Mirsky for his
   comments.

8.  Security Considerations

   This document does not introduce security issues beyond those
   discussed in [I-D.ietf-ippm-alt-mark].

9.  References

9.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

9.2.  Informative References

   [I-D.ietf-ippm-alt-mark]
              Fioccola, G., Capello, A., Cociglio, M., Castaldelli,
              L., Chen, M., Zheng, L., Mirsky, G., and T. Mizrahi,
              "Alternate Marking method for passive performance
              monitoring", draft-ietf-ippm-alt-mark-04 (work in
              progress), March 2017.

   [RFC2474]  Nichols, K., Blake, S., Baker, F., and D. Black,
              "Definition of the Differentiated Services Field (DS
              Field) in the IPv4 and IPv6 Headers", RFC 2474,
              December 1998.

   [RFC2697]  Heinanen, J. and R. Guerin, "A Single Rate Three Color
              Marker", RFC 2697, September 1999.

   [RFC2698]  Heinanen, J. and R. Guerin, "A Two Rate Three Color
              Marker", RFC 2698, September 1999.

Authors' Addresses

   Bharat M Gaonkar
   Juniper Networks
   1133 Innovation Way
   Sunnyvale, California 94089
   USA

   Email: gbharat@juniper.net

   Sudhin Jacob
   Juniper Networks
   1133 Innovation Way
   Sunnyvale, California 94089
   USA

   Email: gbharat@juniper.net

   Giuseppe Fioccola
   Telecom Italia
   Via Reiss Romoli, 274
   Torino 10148
   Italy

   Email: giuseppe.fioccola@telecomitalia.it

   Qin Wu
   Huawei
   101 Software Avenue, Yuhua District
   Nanjing, Jiangsu 210012
   China

   Email: bill.wu@huawei.com

   Praveen Ananthasankaran
   Nokia
   Manyata Embassy Tech Park, Silver Oak (Wing A),
   Outer Ring Road, Nagawara
   Bangalore 560045
   India

   Email: praveen.ananthasankaran@nokia.com