Network Working Group                                         M. Bagnulo
Internet-Draft                                                      UC3M
Intended status: Best Current Practice                         B. Claise
Expires: January 7, 2016                             Cisco Systems, Inc.
                                                              P. Eardley
                                                                      BT
                                                                      A.
                                                                  Morton
                                                               AT&T Labs
                                                               A. Akhter
                                                              Consultant
                                                            July 6, 2015

                    Registry for Performance Metrics
                   draft-ietf-ippm-metric-registry-03

Abstract

   This document defines the IANA Registry for Performance Metrics.
   This document also gives a set of guidelines for Registered
   Performance Metric requesters and reviewers.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 7, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Open Issues . . . . . . . . . . . . . . . . . . . . . . . . .   3
   2.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   4
   3.  Terminology . . . . . . . . . . . . . . . . . . . . . . . . .   5
   4.  Scope . . . . . . . . . . . .
. . . . . . . . . . . . . . . .   6
   5.  Motivation for a Performance Metrics Registry . . . . . . . .   7
     5.1.  Interoperability  . . . . . . . . . . . . . . . . . . . .   7
     5.2.  Single point of reference for Performance Metrics . . . .   8
     5.3.  Side benefits . . . . . . . . . . . . . . . . . . . . . .   8
   6.  Criteria for Performance Metrics Registration . . . . . . . .   8
   7.  Performance Metric Registry: Prior attempt  . . . . . . . . .   9
     7.1.  Why this Attempt Will Succeed . . . . . . . . . . . . . .  10
   8.  Definition of the Performance Metric Registry . . . . . . . .  10
     8.1.  Summary Category  . . . . . . . . . . . . . . . . . . . .  11
       8.1.1.  Identifier  . . . . . . . . . . . . . . . . . . . . .  11
       8.1.2.  Name  . . . . . . . . . . . . . . . . . . . . . . . .  12
       8.1.3.  URI . . . . . . . . . . . . . . . . . . . . . . . . .  13
       8.1.4.  Description . . . . . . . . . . . . . . . . . . . . .  13
     8.2.  Metric Definition Category  . . . . . . . . . . . . . . .  13
       8.2.1.  Reference Definition  . . . . . . . . . . . . . . . .  13
       8.2.2.  Fixed Parameters  . . . . . . . . . . . . . . . . . .  13
     8.3.  Method of Measurement Category  . . . . . . . . . . . . .  14
       8.3.1.  Reference Method  . . . . . . . . . . . . . . . . . .  14
       8.3.2.  Packet Generation Stream  . . . . . . . . . . . . . .  14
       8.3.3.  Traffic Filter  . . . . . . . . . . . . . . . . . . .  15
       8.3.4.  Sampling Distribution . . . . . . . . . . . . . . . .  15
       8.3.5.  Run-time Parameters . . . . . . . . . . . . . . . . .  16
       8.3.6.  Role  . . . . . . . . . . . . . . . . . . . . . . . .  16
     8.4.  Output Category . . . . . . . . . . . . . . . . . . . . .  16
       8.4.1.  Type  . . . . . . . . . . . . . . . . . . . . . . . .  16
       8.4.2.  Reference Definition  . . . . . . . . . . . . . . . .  17
       8.4.3.  Metric Units  . . . . . . . . . . . . . . . . . . . .  17
     8.5.  Administrative information  . . . . . . . . . . . . . . .  17
       8.5.1.  Status  . . . . . . . . . . . . . . . . . . . . . . .  17
       8.5.2.
Requester . . . . . . . . . . . . . . . . . . . . . .  17
       8.5.3.  Revision  . . . . . . . . . . . . . . . . . . . . . .  17
       8.5.4.  Revision Date . . . . . . . . . . . . . . . . . . . .  17
     8.6.  Comments and Remarks  . . . . . . . . . . . . . . . . . .  17
   9.  The Life-Cycle of Registered Metrics  . . . . . . . . . . . .  18
     9.1.  Adding new Performance Metrics to the Registry  . . . . .  18
     9.2.  Revising Registered Performance Metrics . . . . . . . . .  19
     9.3.  Deprecating Registered Performance Metrics  . . . . . . .  20
   10. Security considerations . . . . . . . . . . . . . . . . . . .  21
   11. IANA Considerations . . . . . . . . . . . . . . . . . . . . .  21
   12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . .  22
   13. References  . . . . . . . . . . . . . . . . . . . . . . . . .  22
     13.1.  Normative References . . . . . . . . . . . . . . . . . .  22
     13.2.  Informative References . . . . . . . . . . . . . . . . .  23
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . .  24

1.  Open Issues

   1.  Define the Filter column subcolumns, i.e., how filters are
       expressed.

   2.  Need to include an example for a passive metric.

   3.  Shall we remove the definitions of active and passive?  If we
       remove them, shall we keep all the related comments in the draft?

   4.  URL: should we include in each registry entry a URL, specific to
       the entry, that links to a separate text page containing all the
       details of the registry entry, as in
       http://www.iana.org/assignments/xml-registry/xml-
       registry.xhtml#ns

   5.  As discussed between Marcelo and Benoit, modify "defines" in the
       Parameter definition.  Reasoning: the distinction between a new
       performance metric and a parameter is not clear.  If it's defined
       as a variable, is it a new performance metric?  "All Parameters
       must be known to measure using a metric": well, if it's a new
       performance metric, we don't have the problem.
And state what the parameter is in the example.

   6.  As discussed between Marcelo and Benoit, can we find a Parameter
       for passive monitoring?  The sampling distribution is a fixed
       Parameter, right?  Because it's needed "to interpret the
       results", as mentioned in the Parameter definition.

   7.  We are missing a new Parameter section that explains the link
       between Parameters, Fixed Parameters, Run-time Parameters, and
       potentially stream parameters.  We must also add in this section
       that "differences in values for a Fixed Parameter imply new
       registry entries".

   8.  The double definitions are annoying: Registered Performance
       Metric = Registered Metric, and Performance Metrics Registry =
       Registry.  I (Benoit) am in favor of keeping only a single
       definition (the longest one), and being consistent.

2.  Introduction

   The IETF specifies and uses Performance Metrics of protocols and
   applications transported over its protocols.  Performance Metrics are
   such an important part of the operations of IETF protocols that
   [RFC6390] specifies guidelines for their development.

   The definition and use of Performance Metrics in the IETF happens in
   various working groups (WGs), most notably:

   The "IP Performance Metrics" (IPPM) WG is the WG primarily
   focusing on Performance Metric definition at the IETF.

   The "Metric Blocks for use with RTCP's Extended Report Framework"
   (XRBLOCK) WG recently specified many Performance Metrics related
   to "RTP Control Protocol Extended Reports (RTCP XR)" [RFC3611],
   which establishes a framework to allow new information to be
   conveyed in RTCP, supplementing the original report blocks defined
   in "RTP: A Transport Protocol for Real-Time Applications"
   [RFC3550].

   The "Benchmarking Methodology" WG (BMWG) defined many Performance
   Metrics for use in laboratory benchmarking of inter-networking
   technologies.
   The concluded "IP Flow Information eXport" (IPFIX) WG specified an
   IANA process for new Information Elements.  Performance Metric-
   related Information Elements are proposed on a regular basis.

   The concluded "Performance Metrics for Other Layers" (PMOL) WG
   defined some Performance Metrics related to Session Initiation
   Protocol (SIP) voice quality [RFC6035].

   It is expected that more Performance Metrics will be defined in the
   future, not only IP-based metrics, but also metrics which are
   protocol-specific and application-specific.

   However, despite the importance of Performance Metrics, there are two
   related problems for the industry.  First, how to ensure that when
   one party asks another party to measure (or report or in some way
   act on) a particular Performance Metric, both parties have exactly
   the same understanding of which Performance Metric is being referred
   to.  Second, how to discover which Performance Metrics have already
   been specified, so as to avoid developing a new Performance Metric
   that is very similar but not quite interoperable.  These problems can
   be addressed by creating a registry of Performance Metrics.  The
   usual way in which the IETF organizes namespaces is with Internet
   Assigned Numbers Authority (IANA) registries, and there is currently
   no Performance Metrics Registry maintained by IANA.

   This document therefore creates an IANA-maintained Performance
   Metrics Registry.  It also provides best practices on how to specify
   new entries in, or update existing entries in, the Performance
   Metrics Registry.

3.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   [RFC2119].
   Performance Metric:  A Performance Metric is a quantitative measure
      of performance, targeted to an IETF-specified protocol or to an
      application transported over an IETF-specified protocol.
      Examples of Performance Metrics are the FTP response time for a
      complete file download, the DNS response time to resolve an IP
      address, a database logging time, etc.  This definition is
      consistent with the definition of metric in [RFC2330] and broader
      than the definition of performance metric in [RFC6390].

   Registered Performance Metric:  A Registered Performance Metric (or
      Registered Metric) is a Performance Metric expressed as an entry
      in the Performance Metrics Registry, administered by IANA.  Such a
      Performance Metric has met all the registry review criteria
      defined in this document in order to be included in the registry.

   Performance Metrics Registry:  The IANA registry containing
      Registered Performance Metrics.  In this document, it is also
      called simply "Registry".

   Proprietary Registry:  A set of metrics that are registered in a
      proprietary registry, as opposed to the Performance Metrics
      Registry.

   Performance Metrics Experts:  The Performance Metrics Experts are a
      group of designated experts [RFC5226] selected by the IESG to
      validate Performance Metrics before updates to the Performance
      Metrics Registry.  The Performance Metrics Experts work closely
      with IANA.

   Parameter:  An input factor defined as a variable in the definition
      of a Performance Metric; a numerical or other specified factor
      forming one of a set that defines a metric or sets the conditions
      of its operation.  All Parameters must be known in order to
      measure using a metric and to interpret the results.  Although
      Parameters do not change the fundamental nature of the Performance
      Metric's definition, some have substantial influence on the
      network property being assessed and the interpretation of the
      results.
      Note: Consider the case of packet loss in the following two
      Active Measurement Method cases.  The first case is packet loss
      as background loss, where the parameter set includes a very
      sparse Poisson stream, and the metric only characterizes the
      times when packets were lost.  Actual user streams likely see
      much higher loss at these times, due to tail drop or radio
      errors.  The second case is packet loss as the inverse of
      throughput, where the parameter set includes a very dense,
      bursty stream, and the metric characterizes the loss experienced
      by a stream that approximates a user stream.  These are both
      "loss metrics", but the interpretation of the results is highly
      dependent on the Parameters (at least), to the extreme where we
      are actually using loss to infer its complement: delivered
      throughput.

   Active Measurement Method:  Methods of Measurement conducted on
      traffic which serves only the purpose of measurement and is
      generated for that reason alone, and whose traffic characteristics
      are known a priori.  Examples of Active Measurement Methods are
      the measurement methods for the one-way delay metric defined in
      [RFC2679] and the one for round-trip delay defined in [RFC2681].

   Passive Measurement Method:  Methods of Measurement conducted on
      network traffic, generated either by end users or by network
      elements.  One characteristic of Passive Measurement Methods is
      that sensitive information may be observed and, as a consequence,
      stored in the measurement system.

4.  Scope

   This document is meant for two different audiences.
   For those defining new Registered Performance Metrics, it provides
   specifications and best practices to be used in deciding which
   Registered Metrics are useful for a measurement study, instructions
   for writing the text for each column of the Registered Metrics, and
   information on the supporting documentation required for the new
   Registry entry (up to and including the publication of one or more
   RFCs or I-Ds describing it).  For the appointed Performance Metrics
   Experts and for IANA personnel administering the new IANA Performance
   Metrics Registry, it defines a set of acceptance criteria against
   which proposed Registered Performance Metrics should be evaluated.

   This Performance Metrics Registry is applicable to Performance
   Metrics derived from Active Measurement, Passive Measurement, and any
   other form of Performance Metric.  The registry is designed to
   encompass Performance Metrics developed throughout the IETF,
   especially for the technologies specified in the following working
   groups: IPPM, XRBLOCK, IPFIX, and BMWG.  This document analyzes a
   prior attempt to set up a Performance Metrics Registry, and the
   reasons why that design was inadequate [RFC6248].  Finally, this
   document gives a set of guidelines for requesters and expert
   reviewers of candidate Registered Performance Metrics.

   This document makes no attempt to populate the Registry with initial
   entries.  It does provide a few examples that are merely
   illustrations and should not be included in the registry at this
   point in time.

   Based on [RFC5226] Section 4.3, this document is processed as Best
   Current Practice (BCP) [RFC2026].

5.  Motivation for a Performance Metrics Registry

   In this section, we detail several motivations for the Performance
   Metrics Registry.

5.1.  Interoperability

   As with any IETF registry, the primary use for a registry is to
   manage a namespace for use within one or more protocols.  In the
   particular case of the Performance Metrics Registry, there are two
   types of protocols that will use the Performance Metrics in the
   Registry during their operation (by referring to the Index values):

   o  Control protocol: this type of protocol is used to allow one
      entity to request that another entity perform a measurement using
      a specific metric defined in the Registry.  One particular example
      is the LMAP framework [I-D.ietf-lmap-framework].  Using the LMAP
      terminology, the Registry is used in the LMAP Control protocol to
      allow a Controller to request a measurement task from one or more
      Measurement Agents.  In order to enable this use case, the entries
      of the Performance Metrics Registry must be well enough defined to
      allow a Measurement Agent implementation to trigger a specific
      measurement task upon the reception of a control protocol message.
      This requirement heavily constrains the type of entries that are
      acceptable for the Performance Metrics Registry.

   o  Report protocol: this type of protocol is used to allow an entity
      to report measurement results to another entity.  By referencing a
      specific Performance Metrics Registry entry, it is possible to
      properly characterize the measurement result data being reported.
      Using the LMAP terminology, the Registry is used in the Report
      protocol to allow a Measurement Agent to report measurement
      results to a Collector.

5.2.  Single point of reference for Performance Metrics

   A Performance Metrics Registry serves as a single point of reference
   for Performance Metrics defined in different working groups in the
   IETF.  As mentioned earlier, several WGs define Performance Metrics
   in the IETF, and it is hard to keep track of all of them.
   This results in multiple definitions of similar Performance Metrics
   that attempt to measure the same phenomena but in slightly different
   (and incompatible) ways.  Having a Registry allows both the IETF
   community and external people to have a single list of relevant
   Performance Metrics defined by the IETF (and others, where
   appropriate).  The single list is also an essential aspect of
   communication about Performance Metrics, where different entities
   that request measurements, execute measurements, and report the
   results can benefit from a common understanding of the referenced
   Performance Metric.

5.3.  Side benefits

   There are a couple of side benefits of having such a Registry.
   First, the Registry could serve as an inventory of useful and used
   Performance Metrics that are normally supported by different
   implementations of measurement agents.  Second, the results of
   measurements using the Performance Metrics would be comparable even
   if they are performed by different implementations and in different
   networks, as the Performance Metric is properly defined.  BCP 176
   [RFC6576] examines whether the results produced by independent
   implementations are equivalent in the context of evaluating the
   completeness and clarity of metric specifications.  This BCP defines
   the standards-track advancement testing for (active) IPPM metrics,
   and the same process will likely suffice to determine whether
   Registered Performance Metrics are sufficiently well specified to
   result in comparable (or equivalent) results.  Registered Performance
   Metrics which have undergone such testing SHOULD be noted, with a
   reference to the test results.

6.  Criteria for Performance Metrics Registration

   It is neither possible nor desirable to populate the Registry with
   all combinations of Parameters of all Performance Metrics.  The
   Registered Performance Metrics should be:

   1.  interpretable by the user,

   2.  implementable by the software designer,

   3.  deployable by network operators,

   4.  accurate, for interoperability and deployment across vendors,

   5.  operationally useful, so that they have significant industry
       interest and/or have seen deployment, and

   6.  sufficiently tightly defined, so that different values for the
       Run-time Parameters do not change the fundamental nature of the
       measurement, nor change the practicality of its implementation.

   In essence, there needs to be evidence that a candidate Registered
   Performance Metric has significant industry interest, or has seen
   deployment, and there is agreement that the candidate Registered
   Performance Metric serves its intended purpose.

7.  Performance Metric Registry: Prior attempt

   There was a previous attempt to define a metric registry, RFC 4148
   [RFC4148].  However, it was obsoleted by RFC 6248 [RFC6248] because
   it was "found to be insufficiently detailed to uniquely identify IPPM
   metrics... [there was too much] variability possible when
   characterizing a metric exactly", which led to the RFC 4148 registry
   having "very few users, if any".

   A couple of additional quotes from RFC 6248 might help explain the
   issues related to that registry.

   1.  "It is not believed to be feasible or even useful to register
       every possible combination of Type P, metric parameters, and
       Stream parameters using the current structure of the IPPM Metrics
       Registry."

   2.  "The registry structure has been found to be insufficiently
       detailed to uniquely identify IPPM metrics."

   3.  "Despite apparent efforts to find current or even future users,
       no one responded to the call for interest in the RFC 4148
       registry during the second half of 2010."
   The current approach learns from this by tightly defining each
   Registered Performance Metric, with only a few variable (Run-time)
   Parameters to be specified by the measurement designer, if any.  The
   idea is that entries in the Registry stem from different measurement
   methods which require input (Run-time) parameters to set factors like
   source and destination addresses (which do not change the fundamental
   nature of the measurement).  The downside of this approach is that it
   could result in a large number of entries in the Registry.  There is
   agreement that less is more in this context - it is better to have a
   reduced set of useful metrics than a large set of metrics, some of
   questionable usefulness.

7.1.  Why this Attempt Will Succeed

   As mentioned in the previous section, one of the main issues with the
   previous registry was that the metrics it contained were too generic
   to be useful.  This document specifies stricter criteria for
   Performance Metric registration (see Section 6) and establishes a
   group of Performance Metrics Experts who will provide guidelines to
   assess whether a Performance Metric is properly specified.

   Another key difference between this attempt and the previous one is
   that in this case there is at least one clear user for the Registry:
   the LMAP framework and protocol.  Because the LMAP protocol will use
   the Registry values in its operation, this actually helps to
   determine whether a metric is properly defined.  In particular, since
   we expect that the LMAP Control protocol will enable a Controller to
   request that a Measurement Agent perform a measurement using a given
   metric by embedding the Performance Metrics Registry value in the
   protocol, a metric is properly specified if it is defined well enough
   that it is possible (and practical) to implement the metric in the
   Measurement Agent.  This was the failure of the previous attempt: a
   registry entry with an undefined Type-P (Section 13 of RFC 2330
   [RFC2330]) allows implementations to be ambiguous.

8.  Definition of the Performance Metric Registry

   In this section we define the columns of the Performance Metrics
   Registry.  This Registry is applicable to Performance Metrics derived
   from Active Measurement, Passive Measurement, and any other form of
   Performance Metric.  Because of that, some of the columns defined may
   not be applicable to a given type of metric.  If this is the case,
   the column(s) SHOULD be populated with the value "NA" (Not
   Applicable).  However, the "NA" value MUST NOT be used by any metric
   in the following columns: Identifier, Name, URI, Status, Requester,
   Revision, Revision Date, Description.  In addition, it is possible
   that, in the future, a new type of metric will require additional
   columns.  Should that be the case, new columns can be added to the
   registry.  The specification defining the new column(s) must define
   how to populate the new column(s) for existing entries.

   The columns of the Performance Metrics Registry are defined next.
   The columns are grouped into "Categories" to facilitate the use of
   the registry.  Categories are described at the 8.x heading level, and
   columns are at the 8.x.y heading level.  The Figure below illustrates
   this organization.  An entry (row) therefore gives a complete
   description of a Registered Metric.

   Each column serves as a check-list item and helps to avoid omissions
   during registration and expert review.
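For illustration only, an entry (row) with these categories and columns can be sketched as a nested data structure.  The Python sketch below uses invented example values (it is not a proposed registration; only the metric name reuses the example given later in this document) and checks the rule that certain columns must never be "NA":

```python
# Sketch of a single Registry entry (row), grouped by Category.
# All values are invented for illustration; this is NOT a proposed
# registration.  "NA" marks a column that does not apply (here: an
# active metric has no Traffic Filter or Sampling Distribution).
entry = {
    "Summary": {
        "Identifier": 1,
        "Name": "Act_UDP_Latency_Poisson_99mean",
        "URI": "urn:ietf:params:ippm:metric:Act_UDP_Latency_Poisson_99mean",
        "Description": "99th percentile mean of one-way UDP latency",
    },
    "Metric Definition": {
        "Reference Definition": "pointer to the defining document",
        "Fixed Parameters": {"transport": "UDP", "payload_length": 64},
    },
    "Method of Measurement": {
        "Reference Method": "pointer to method text / pseudocode",
        "Packet Generation Stream": "Poisson",
        "Traffic Filter": "NA",
        "Sampling Distribution": "NA",
        "Run-time Parameters": ["source address", "destination address"],
        "Role": "sender",
    },
    "Output": {
        "Type": "99th percentile mean",
        "Reference Definition": "pointer to output definition",
        "Units": "milliseconds",
    },
    "Administrative Information": {
        "Status": "current",
        "Requester": "(requester)",
        "Revision": "1.0",
        "Revision Date": "2015-07-06",
    },
    "Comments and Remarks": "",
}

# Columns in which the "NA" value MUST NOT appear, per the text above.
NEVER_NA = {"Identifier", "Name", "URI", "Status", "Requester",
            "Revision", "Revision Date", "Description"}

def check_never_na(entry):
    """Return the mandatory columns that were wrongly populated with 'NA'."""
    bad = []
    for category in entry.values():
        if not isinstance(category, dict):
            continue  # "Comments and Remarks" is free-form text
        for column, value in category.items():
            if column in NEVER_NA and value == "NA":
                bad.append(column)
    return bad
```

Such a per-column check mirrors the check-list role of the columns described above.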
               Registry Categories and Columns, shown as

   Category
   ------------------
   Column | Column |

   Summary
   -------------------------------
   Identifier | Name | URI | Description |

   Metric Definition
   -----------------------------------------
   Reference Definition | Fixed Parameters |

   Method of Measurement
   ---------------------------------------------------------------------
   Reference | Packet     | Traffic | Sampling     | Run-time   | Role |
   Method    | Generation | Filter  | Distribution | Parameters |      |
             | Stream     |         |              |            |      |

   Output
   -----------------------------
   | Type | Reference  | Units |
   |      | Definition |       |

   Administrative Information
   ----------------------------------
   Status |Request | Rev | Rev.Date |

   Comments and Remarks
   --------------------

8.1.  Summary Category

8.1.1.  Identifier

   A numeric identifier for the Registered Performance Metric.  This
   identifier MUST be unique within the Performance Metrics Registry.

   The Registered Performance Metric unique identifier is a 16-bit
   integer (range 0 to 65535).  When adding newly Registered Performance
   Metrics to the Performance Metrics Registry, IANA should assign the
   lowest available identifier to the next Registered Performance
   Metric.

8.1.2.  Name

   As the name of a Registered Performance Metric is the first thing a
   potential implementor will use when determining whether it is
   suitable for a given application, it is important to be as precise
   and descriptive as possible.

   New names of Registered Performance Metrics:

   1.  "MUST be chosen carefully to describe the Registered Performance
       Metric and the context in which it will be used."

   2.  "MUST be unique within the Performance Metrics Registry."

   3.  "MUST use capital letters for the first letter of each component.
       All other letters MUST be lowercase, even for acronyms.
548 Exceptions are made for acronyms containing a mixture of 549 lowercase and capital letters, such as 'IPv4' and 'IPv6'." 551 4. MUST use '_' between each component of the Registered Performance 552 Metric name. 554 5. MUST start with prefix Act_ for active measurement Registered 555 Performance Metric. 557 6. MUST start with prefix Pas_ for passive monitoring Registered 558 Performance Metric. 560 7. Other types of Performance Metric should define a proper prefix 561 for identifying the type. 563 8. Some examples of names of passive metrics might be: 564 Pas_L3_L4_Octets (Layer 3 and 4 level accounting of bytes 565 observed), Pas_DNS_RTT (Round Trip Time of in DNS query response 566 of observed traffic), and Pas_L3_TCP_RTT (Passively observed 567 round trip time in TCP handshake organized with L3 addresses) 569 9. The remaining rules for naming are left for the Performance 570 Metric Experts to determine as they gather experience, so this is 571 an area of planned update by a future RFC 573 An example is "Act_UDP_Latency_Poisson_99mean" for a active 574 monitoring UDP latency metric using a Poisson stream of packets and 575 producing the 99th percentile mean as output. 577 8.1.3. URI 579 The URI column MUST contain a URI [RFC3986] that uniquely identified 580 the Registered Performance Metric. The URI is a URN [RFC2141]. The 581 URI is automatically generated by prepending the prefix 582 urn:ietf:params:ippm:metric: to the metric name. The resulting URI 583 is globally unique. 585 8.1.4. Description 587 A Registered Performance Metric description is a written 588 representation of a particular Registry entry. It supplements the 589 Registered Performance Metric name to help Registry users select 590 relevant Registered Performance Metrics. 592 8.2. 
Metric Definition Category 594 This category includes columns to prompt all necessary details 595 related to the metric definition, including the RFC reference and 596 values of input factors, called fixed parameters, which are left open 597 in the RFC but have a particular value defined by the performance 598 metric. 600 8.2.1. Reference Definition 602 This entry provides a reference (or references) to the relevant 603 section(s) of the document(s) that define the metric, as well as any 604 supplemental information needed to ensure an unambiguous definition 605 for implementations. The reference needs to be an immutable 606 document, such as an RFC; for other standards bodies, it is likely to 607 be necessary to reference a specific, dated version of a 608 specification. 610 8.2.2. Fixed Parameters 612 Fixed Parameters are Paremeters whose value must be specified in the 613 Registry. The measurement system uses these values. 615 Where referenced metrics supply a list of Parameters as part of their 616 descriptive template, a sub-set of the Parameters will be designated 617 as Fixed Parameters. For example, for active metrics, Fixed 618 Parameters determine most or all of the IPPM Framework convention 619 "packets of Type-P" as described in [RFC2330], such as transport 620 protocol, payload length, TTL, etc. An example for passive metrics 621 is for RTP packet loss calculation that relies on the validation of a 622 packet as RTP which is a multi-packet validation controlled by 623 MIN_SEQUENTIAL as defined by [RFC3550]. Varying MIN_SEQUENTIAL 624 values can alter the loss report and this value could be set as a 625 Fixed Parameter 627 A Parameter which is a Fixed Parameter for one Registry entry may be 628 designated as a Run-time Parameter for another Registry entry. 630 8.3. 
Method of Measurement Category

This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.

8.3.1. Reference Method

This entry provides references to relevant sections of the RFC(s) describing the method of measurement, as well as any supplemental information needed to ensure unambiguous interpretation for implementations referring to the RFC text.

Specifically, this section should include pointers to pseudocode or actual code that could be used for an unambiguous implementation.

8.3.2. Packet Generation Stream

This column applies to Performance Metrics that generate traffic as part of their Measurement Method, including but not necessarily limited to Active metrics. The generated traffic is referred to as a stream, and this column describes its characteristics.

Each entry for this column contains the following information:

o Value: The name of the packet stream scheduling discipline.

o Stream Parameters: The values and formats of input factors for each type of stream. For example, the average packet rate and distribution truncation value for streams with Poisson-distributed inter-packet sending times.

o Reference: The specification where the stream is defined.

The simplest example of stream specification is Singleton scheduling (see [RFC2330]), where a single atomic measurement is conducted. Each atomic measurement could consist of sending a single packet (such as a DNS request) or sending several packets (for example, to request a webpage). Other streams support a series of atomic measurements in a "sample", with a schedule defining the timing between each transmitted packet and subsequent measurement.
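As a purely illustrative sketch (the function name and parameters below are hypothetical, not part of the registry template), a Poisson stream schedule characterized by the two Stream Parameters mentioned above, an average packet rate and a distribution truncation value, could be generated like this:

```python
import random

def poisson_stream_schedule(avg_rate, truncation, duration, rng=random):
    """Sketch of a Poisson stream schedule (in the spirit of [RFC2330]):
    inter-packet gaps are exponentially distributed with mean 1/avg_rate
    (in seconds), and each gap is truncated at 'truncation' seconds."""
    send_times, t = [], 0.0
    while True:
        gap = min(rng.expovariate(avg_rate), truncation)
        t += gap
        if t >= duration:
            break
        send_times.append(t)
    return send_times
```

A registry entry would record the scheduling-discipline name, the values of avg_rate and truncation, and a reference to the stream specification, rather than any particular implementation.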
Principally, two different streams are used in IPPM metrics: Poisson-distributed as described in [RFC2330], and Periodic as described in [RFC3432]. Both Poisson and Periodic have their own unique parameters, and the relevant set of values is specified in this column.

8.3.3. Traffic Filter

This column applies to Performance Metrics that observe packets flowing through (the device with) the measurement agent, i.e., traffic that is not necessarily addressed to the measurement agent. This includes but is not limited to Passive Metrics. The filter specifies the traffic that is measured. This includes protocol field values/ranges, such as address ranges, and flow or session identifiers.

The traffic filter itself depends on the needs of the metric and a balance between the operator's measurement needs and the user's need for privacy. Mechanisms for conveying the filter criteria might be BPF (Berkeley Packet Filter) or PSAMP [RFC5475] Property Match Filtering, which reuses IPFIX [RFC7012]. An example BPF string for matching TCP/80 traffic to remote destination net 192.0.2.0/24 would be "dst net 192.0.2.0/24 and tcp dst port 80". More complex filter engines might be supported by implementations that allow matching using Deep Packet Inspection (DPI) technology.

8.3.4. Sampling Distribution

The sampling distribution defines which of the packets that match the traffic filter are actually used for the measurement. One possibility is "all", which implies that all packets matching the Traffic Filter are considered, but there may be other sampling strategies. It includes the following information:

Value: the name of the sampling distribution.

Parameters: if any.

Reference definition: pointer to the specification where the sampling distribution is properly defined.
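As an illustration only (not a normative part of the column definition), one simple sampling strategy beyond "all" is systematic count-based sampling of the kind documented in PSAMP, which selects one packet every N packets from those that matched the Traffic Filter; the function below is a hypothetical sketch:

```python
def systematic_count_sample(filtered_packets, interval, offset=0):
    """Sketch of systematic count-based sampling (in the spirit of
    [RFC5475]): of the packets that matched the Traffic Filter,
    select one packet every 'interval' packets, starting at 'offset'."""
    return [pkt for i, pkt in enumerate(filtered_packets)
            if i >= offset and (i - offset) % interval == 0]
```

For such a distribution, the registry entry would carry the distribution name as the Value, the interval (and any offset) as Parameters, and a pointer to the defining specification.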
Sampling and Filtering Techniques for IP Packet Selection are documented in PSAMP (Packet Sampling) [RFC5475], while the Framework for Packet Selection and Reporting [RFC5474] provides more background information. The sampling distribution parameters might be expressed in terms of the Information Model for Packet Sampling Exports [RFC5477] and the Flow Selection Techniques [RFC7014].

8.3.5. Run-time Parameters

Run-time Parameters are Parameters that must be determined, configured into the measurement system, and reported with the results for the context to be complete. However, the values of these Parameters are not specified in the Registry (unlike the Fixed Parameters); rather, these Parameters are listed as an aid to the measurement system implementer or user (they must be left as variables and supplied on execution).

Where metrics supply a list of Parameters as part of their descriptive template, a subset of the Parameters will be designated as Run-time Parameters.

Examples of Run-time Parameters include IP addresses, measurement point designations, start times and end times for measurement, and other information essential to the method of measurement.

8.3.6. Role

In some methods of measurement, there may be several roles defined; e.g., in an active one-way packet delay measurement, one measurement agent generates the packets and another receives them. This column contains the name of the role for this particular entry. In the previous example, there should be two entries in the registry, one for each role, so that when a measurement agent is instructed to perform the one-way delay source metric, it knows that it is supposed to generate packets. The values for this field are defined in the reference method of measurement.

8.4.
Output Category

For entries which involve a stream and many singleton measurements, a statistic may be specified in this column to summarize the results to a single value. If the complete set of measured singletons is output, this will be specified here.

Some metrics embed one specific statistic in the reference metric definition, while others allow several output types or statistics.

8.4.1. Type

This column contains the name of the output type. The output type defines the type of result that the metric produces. It can be the raw results, or it can be some form of statistic. The specification of the output type must define the format of the output. In some systems, format specifications will simplify both measurement implementation and collection/storage tasks. Note that if two different statistics are required from a single measurement (for example, both "Xth percentile mean" and "Raw"), then a new output type must be defined ("Xth percentile mean AND Raw").

8.4.2. Reference Definition

This column contains a pointer to the specification where the output type is defined.

8.4.3. Metric Units

The measured results must be expressed using some standard dimension or units of measure. This column provides the units.

When a sample of singletons (see [RFC2330] for definitions of these terms) is collected, this entry will specify the units for each measured value.

8.5. Administrative Information

8.5.1. Status

The status of the specification of this Registered Performance Metric. Allowed values are 'current' and 'deprecated'. All newly defined Registered Performance Metrics have 'current' status.

8.5.2. Requester

The requester for the Registered Performance Metric. The requester MAY be a document, such as an RFC, or a person.

8.5.3.
Revision

The revision number of a Registered Performance Metric, starting at 0 for Registered Performance Metrics at the time of definition and incremented by one for each revision.

8.5.4. Revision Date

The date of acceptance, or of the most recent revision, of the Registered Performance Metric.

8.6. Comments and Remarks

Besides providing additional details which do not appear in other categories, this open Category (single column) allows unforeseen issues to be addressed by simply updating this informational entry.

9. The Life-Cycle of Registered Metrics

Once a Performance Metric or set of Performance Metrics has been identified for a given application, candidate Registry entry specifications in accordance with Section 8 are submitted to IANA to follow the process for review by the Performance Metric Experts, as defined below. This process is also used for other changes to the Performance Metric Registry, such as deprecation or revision, as described later in this section.

It is also desirable that the author(s) of a candidate Registry entry seek review in the relevant IETF working group, or offer the opportunity for review on the WG mailing list.

9.1. Adding New Performance Metrics to the Registry

Requests to change Registered Metrics in the Performance Metric Registry are submitted to IANA, which forwards the request to a designated group of experts (Performance Metric Experts) appointed by the IESG; these are the reviewers called for by the Expert Review [RFC5226] policy defined for the Performance Metric Registry. The Performance Metric Experts review the request for such things as compliance with this document, compliance with other applicable Performance Metric-related RFCs, and consistency with the currently defined set of Registered Performance Metrics.
Authors are expected to check their submissions for compliance with the specifications in this document before sending them to IANA.

The Performance Metric Experts should endeavor to complete referred reviews in a timely manner. If the request is acceptable, the Performance Metric Experts signify their approval to IANA, which updates the Performance Metric Registry. If the request is not acceptable, the Performance Metric Experts can coordinate with the requester to change the request to be compliant. The Performance Metric Experts may also choose, in exceptional circumstances, to reject clearly frivolous or inappropriate change requests outright.

This process should not in any way be construed as allowing the Performance Metric Experts to overrule IETF consensus. Specifically, any Registered Metrics that were added with IETF consensus require IETF consensus for revision or deprecation.

Decisions by the Performance Metric Experts may be appealed as in Section 7 of [RFC5226].

9.2. Revising Registered Performance Metrics

A request for Revision is only permissible when the changes maintain backward compatibility with implementations of the prior Registry entry describing a Registered Metric (entries with lower revision numbers, but the same Identifier and Name).

The purpose of the Status field in the Performance Metric Registry is to indicate whether the entry for a Registered Metric is 'current' or 'deprecated'.

In addition, no policy has previously been defined for revising IANA Performance Metric entries or addressing errors therein. To be clear, changes and deprecations within the Performance Metric Registry are not encouraged, and should be avoided to the extent possible. However, in recognition that change is inevitable, the provisions of this section address the need for revisions.
Revisions are initiated by sending a candidate Registered Performance Metric definition to IANA, as in Section 8, identifying the existing Registry entry.

The primary requirement in the definition of a policy for managing changes to existing Registered Performance Metrics is the avoidance of interoperability problems; Performance Metric Experts must work to maintain interoperability above all else. Changes to Registered Performance Metrics may only be done in an interoperable way; necessary changes that cannot be done in a way that allows interoperability with unchanged implementations must result in the creation of a new Registered Metric and possibly the deprecation of the earlier metric.

A change to a Registered Performance Metric is held to be backward-compatible only when:

1. "it involves the correction of an error that is obviously only editorial; or"

2. "it corrects an ambiguity in the Registered Performance Metric's definition, which itself leads to issues severe enough to prevent the Registered Performance Metric's usage as originally defined; or"

3. "it corrects missing information in the metric definition without changing its meaning (e.g., the explicit definition of 'quantity' semantics for numeric fields without a Data Type Semantics value); or"

4. "it harmonizes with an external reference that was itself corrected."

If a Performance Metric revision is deemed permissible by the Performance Metric Experts, according to the rules in this document, IANA makes the change in the Performance Metric Registry. The requester of the change is appended to the requester in the Registry.

Each Registered Performance Metric in the Registry has a revision number, starting at zero. Each change to a Registered Performance Metric following this process increments the revision number by one.
When a revised Registered Performance Metric is accepted into the Performance Metric Registry, the date of acceptance of the most recent revision is placed into the Revision Date column of the Registry for that Registered Performance Metric.

Where applicable, additions to Registered Performance Metrics in the form of text Comments or Remarks should include the date, but such additions may not constitute a revision according to this process.

Older version(s) of the updated metric entries are kept in the Registry for archival purposes. The older entries are kept with all fields unmodified (version, revision date) except for the Status field, which is changed to "Deprecated".

9.3. Deprecating Registered Performance Metrics

Changes that are not permissible by the above criteria for a Registered Metric's revision may only be handled by deprecation. A Registered Performance Metric MAY be deprecated and replaced when:

1. "the Registered Performance Metric definition has an error or shortcoming that cannot be permissibly changed as in Section 9.2 (Revising Registered Performance Metrics); or"

2. "the deprecation harmonizes with an external reference that was itself deprecated through that reference's accepted deprecation method."

A request for deprecation is sent to IANA, which passes it to the Performance Metric Experts for review. When deprecating a Performance Metric, the Performance Metric description in the Performance Metric Registry must be updated to explain the deprecation, as well as to refer to any new Performance Metrics created to replace the deprecated Performance Metric.

The revision number of a Registered Performance Metric is incremented upon deprecation, and the Revision Date updated, as with any revision.

The use of deprecated Registered Metrics should result in a log entry or human-readable warning by the respective application.
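The bookkeeping described in these two sections, where older entries are archived with Status "Deprecated" and otherwise unmodified, and revision numbers start at zero and increment by one, can be sketched as follows; the class and function names here are illustrative only and not part of the registry specification:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class RegistryEntry:
    identifier: int          # Metric ID; must never be reused
    name: str                # Registered Performance Metric name; never reused
    revision: int = 0        # starts at zero, incremented by one per revision
    status: str = "current"  # 'current' or 'deprecated'

def revise(registry, identifier):
    """Archive the current entry as 'deprecated' (all other fields
    unmodified) and add the revised entry with revision + 1."""
    old = next(e for e in registry
               if e.identifier == identifier and e.status == "current")
    registry.remove(old)
    registry.append(replace(old, status="deprecated"))
    registry.append(replace(old, revision=old.revision + 1))
```

At any time, at most one entry per Identifier is 'current'; the archived entries preserve the full revision history.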
Names and Metric IDs of deprecated Registered Metrics must not be reused.

The deprecated entries are kept with all fields unmodified, except the version, revision date, and the status field (changed to "Deprecated").

10. Security Considerations

This document does not introduce any new security considerations for the Internet. However, the definition of Performance Metrics may introduce some security concerns, and should be reviewed with security in mind.

11. IANA Considerations

This document specifies the procedure for the Performance Metrics Registry setup. IANA is requested to create a new Registry for Performance Metrics called "Registered Performance Metrics" with the columns defined in Section 8.

New assignments for the Performance Metric Registry will be administered by IANA through Expert Review [RFC5226], i.e., review by one of a group of experts, the Performance Metric Experts, appointed by the IESG upon recommendation of the Transport Area Directors. The experts can initially be drawn from the Working Group Chairs and document editors of the Performance Metrics Directorate, among other sources of experts.

The Identifier values from 64512 to 65536 are reserved for private use. Names starting with the prefix Priv- are reserved for private use.

This document requests the allocation of the URI prefix urn:ietf:params:ippm:metric for the purpose of generating URIs for registered metrics.

12. Acknowledgments

Thanks to Brian Trammell and Bill Cerveny, IPPM chairs, for leading some brainstorming sessions on this topic.

13. References

13.1. Normative References

[RFC2026] Bradner, S., "The Internet Standards Process -- Revision 3", BCP 9, RFC 2026, October 1996.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC2141] Moats, R., "URN Syntax", RFC 2141, May 1997.

[RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, "Framework for IP Performance Metrics", RFC 2330, May 1998.

[RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform Resource Identifier (URI): Generic Syntax", STD 66, RFC 3986, January 2005.

[RFC4148] Stephan, E., "IP Performance Metrics (IPPM) Metrics Registry", BCP 108, RFC 4148, August 2005.

[RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 5226, May 2008.

[RFC6248] Morton, A., "RFC 4148 and the IP Performance Metrics (IPPM) Registry of Metrics Are Obsolete", RFC 6248, April 2011.

[RFC6390] Clark, A. and B. Claise, "Guidelines for Considering New Performance Metric Development", BCP 170, RFC 6390, October 2011.

[RFC6576] Geib, R., Morton, A., Fardid, R., and A. Steinmitz, "IP Performance Metrics (IPPM) Standard Advancement Testing", BCP 176, RFC 6576, March 2012.

13.2. Informative References

[RFC2679] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way Delay Metric for IPPM", RFC 2679, September 1999.

[RFC2681] Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip Delay Metric for IPPM", RFC 2681, September 1999.

[RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation Metric for IP Performance Metrics (IPPM)", RFC 3393, November 2002.

[RFC3432] Raisanen, V., Grotefeld, G., and A. Morton, "Network performance measurement with periodic streams", RFC 3432, November 2002.

[RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. Jacobson, "RTP: A Transport Protocol for Real-Time Applications", STD 64, RFC 3550, July 2003.

[RFC3611] Friedman, T., Caceres, R., and A. Clark, "RTP Control Protocol Extended Reports (RTCP XR)", RFC 3611, November 2003.
[RFC4566] Handley, M., Jacobson, V., and C. Perkins, "SDP: Session Description Protocol", RFC 4566, July 2006.

[RFC5474] Duffield, N., Chiou, D., Claise, B., Greenberg, A., Grossglauser, M., and J. Rexford, "A Framework for Packet Selection and Reporting", RFC 5474, March 2009.

[RFC5475] Zseby, T., Molina, M., Duffield, N., Niccolini, S., and F. Raspall, "Sampling and Filtering Techniques for IP Packet Selection", RFC 5475, March 2009.

[RFC5477] Dietz, T., Claise, B., Aitken, P., Dressler, F., and G. Carle, "Information Model for Packet Sampling Exports", RFC 5477, March 2009.

[RFC5481] Morton, A. and B. Claise, "Packet Delay Variation Applicability Statement", RFC 5481, March 2009.

[RFC5905] Mills, D., Martin, J., Burbank, J., and W. Kasch, "Network Time Protocol Version 4: Protocol and Algorithms Specification", RFC 5905, June 2010.

[RFC6035] Pendleton, A., Clark, A., Johnston, A., and H. Sinnreich, "Session Initiation Protocol Event Package for Voice Quality Reporting", RFC 6035, November 2010.

[RFC6776] Clark, A. and Q. Wu, "Measurement Identity and Information Reporting Using a Source Description (SDES) Item and an RTCP Extended Report (XR) Block", RFC 6776, October 2012.

[RFC6792] Wu, Q., Hunt, G., and P. Arden, "Guidelines for Use of the RTP Monitoring Framework", RFC 6792, November 2012.

[RFC7003] Clark, A., Huang, R., and Q. Wu, "RTP Control Protocol (RTCP) Extended Report (XR) Block for Burst/Gap Discard Metric Reporting", RFC 7003, September 2013.

[RFC7012] Claise, B. and B. Trammell, "Information Model for IP Flow Information Export (IPFIX)", RFC 7012, September 2013.

[RFC7014] D'Antonio, S., Zseby, T., Henke, C., and L. Peluso, "Flow Selection Techniques", RFC 7014, September 2013.

[I-D.ietf-lmap-framework] Eardley, P., Morton, A., Bagnulo, M., Burbridge, T., Aitken, P., and A.
Akhter, "A framework for Large-Scale Measurement of Broadband Performance (LMAP)", draft-ietf-lmap-framework-14 (work in progress), April 2015.

Authors' Addresses

Marcelo Bagnulo
Universidad Carlos III de Madrid
Av. Universidad 30
Leganes, Madrid 28911
SPAIN

Phone: 34 91 6249500
Email: marcelo@it.uc3m.es
URI: http://www.it.uc3m.es

Benoit Claise
Cisco Systems, Inc.
De Kleetlaan 6a b1
1831 Diegem
Belgium

Email: bclaise@cisco.com

Philip Eardley
BT
Adastral Park, Martlesham Heath
Ipswich
ENGLAND

Email: philip.eardley@bt.com

Al Morton
AT&T Labs
200 Laurel Avenue South
Middletown, NJ
USA

Email: acmorton@att.com

Aamer Akhter
Consultant
118 Timber Hitch
Cary, NC
USA

Email: aakhter@gmail.com