2 Network Working Group M. Bagnulo 3 Internet-Draft UC3M 4 Intended status: Best Current Practice B. Claise 5 Expires: January 4, 2015 Cisco Systems, Inc. 6 P. Eardley 7 BT 8 A. Morton 9 AT&T Labs 10 July 3, 2014 12 Registry for Performance Metrics 13 draft-ietf-ippm-metric-registry-00 15 Abstract 17 This document specifies the common aspects of the IANA Registry for 18 Performance Metrics, both active and passive categories. This 19 document also gives a set of guidelines for Registered Performance 20 Metric requesters and reviewers. 22 Status of This Memo 24 This Internet-Draft is submitted in full conformance with the 25 provisions of BCP 78 and BCP 79. 27 Internet-Drafts are working documents of the Internet Engineering 28 Task Force (IETF). Note that other groups may also distribute 29 working documents as Internet-Drafts. The list of current Internet- 30 Drafts is at http://datatracker.ietf.org/drafts/current/. 32 Internet-Drafts are draft documents valid for a maximum of six months 33 and may be updated, replaced, or obsoleted by other documents at any 34 time. It is inappropriate to use Internet-Drafts as reference 35 material or to cite them other than as "work in progress." 37 This Internet-Draft will expire on January 4, 2015. 39 Copyright Notice 41 Copyright (c) 2014 IETF Trust and the persons identified as the 42 document authors. All rights reserved. 44 This document is subject to BCP 78 and the IETF Trust's Legal 45 Provisions Relating to IETF Documents 46 (http://trustee.ietf.org/license-info) in effect on the date of 47 publication of this document.
Please review these documents 48 carefully, as they describe your rights and restrictions with respect 49 to this document. Code Components extracted from this document must 50 include Simplified BSD License text as described in Section 4.e of 51 the Trust Legal Provisions and are provided without warranty as 52 described in the Simplified BSD License. 54 Table of Contents 56 1. Open Issues . . . . . . . . . . . . . . . . . . . . . . . . . 2 57 2. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 58 3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 5 59 4. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 60 5. Design Considerations for the Registry and Registered Metrics 7 61 5.1. Interoperability . . . . . . . . . . . . . . . . . . . . 8 62 5.2. Criteria for Registered Performance Metrics . . . . . . . 8 63 5.3. Single point of reference for Performance metrics . . . . 9 64 5.4. Side benefits . . . . . . . . . . . . . . . . . . . . . . 9 65 6. Performance Metric Registry: Prior attempt . . . . . . . . . 9 66 6.1. Why this Attempt Will Succeed? . . . . . . . . . . . . . 10 67 7. Common Columns of the Performance Metric Registry . . . . . . 11 68 7.1. Identifier . . . . . . . . . . . . . . . . . . . . . . . 11 69 7.2. Name . . . . . . . . . . . . . . . . . . . . . . . . . . 11 70 7.3. URI . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 71 7.4. Status . . . . . . . . . . . . . . . . . . . . . . . . . 12 72 7.5. Requester . . . . . . . . . . . . . . . . . . . . . . . . 13 73 7.6. Revision . . . . . . . . . . . . . . . . . . . . . . . . 13 74 7.7. Revision Date . . . . . . . . . . . . . . . . . . . . . . 13 75 7.8. Description . . . . . . . . . . . . . . . . . . . . . . . 13 76 7.9. Reference Specification(s) . . . . . . . . . . . . . . . 13 77 8. The Life-Cycle of Registered Metrics . . . . . . . . . . . . 13 78 8.1. The Process for Review by the Performance Metric Experts 13 79 8.2. 
Revising Registered Performance Metrics . . . . . . . . . 14 80 8.3. Deprecating Registered Performance Metrics . . . . . . . 16 81 9. Performance Metric Registry and other Registries . . . . . . 16 82 10. Security considerations . . . . . . . . . . . . . . . . . . . 17 83 11. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 17 84 12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 17 85 13. References . . . . . . . . . . . . . . . . . . . . . . . . . 17 86 13.1. Normative References . . . . . . . . . . . . . . . . . . 17 87 13.2. Informative References . . . . . . . . . . . . . . . . . 18 88 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 19 90 1. Open Issues 92 1. Many aspects of the Naming convention are TBD, and need 93 discussion. For example, we have distinguished RTCP-XR metrics 94 as End-Point (neither active nor passive in the traditional 95 sense, so not Act_ or Pas_). Even though we may not cast all 96 naming conventions in stone at the start, it will be helpful to 97 look at several examples of passive metric names now. 99 2. We should expand on the different roles and responsibilities of 100 the Performance Metrics Experts versus the Performance Metrics 101 Directorate. At a minimum, the role of the Performance Metrics 102 Directorate should be expanded. --- (v7) If these are different entities, 103 our only concern is the role of the "PM Experts". 105 3. RTCP-XR metrics are currently referred to as "end-point", and 106 have aspects that are similar to active (the measured stream 107 characteristics are known a priori and measurement commonly 108 takes place at the end-points of the path) and passive (there is 109 no additional traffic dedicated to measurement, with the 110 exception of the RTCP report packets themselves). We have one 111 example expressing an end-point metric in the active sub- 112 registry memo. 114 4. Revised Registry Entries: Keep for history (deprecated) or 115 Delete? 117 5.
Need to include an example of a name for a passive metric 119 6. Definition of Parameter needs more work? 121 7. Whether the name of the metric should contain the version of the 122 metric 124 8. Suppression Flag for the metrics, does it belong to the 125 registry? If yes, is it part of the core or the active one? 127 9. Endpoint metric: I think we need either to remove it from the 128 draft or to properly define it. Currently in the draft we 129 treat it as equal to passive and active but it is not defined, which 130 seems incoherent. 132 10. URL: should we include a URL link in each registry entry with a 133 URL specific to the entry that links to a different text page 134 that contains all the details of the registry entry as in 135 http://www.iana.org/assignments/xml-registry/xml- 136 registry.xhtml#ns 138 2. Introduction 140 The IETF specifies and uses Performance Metrics of protocols and 141 applications transported over its protocols. Performance Metrics are 142 such an important part of the operations of IETF protocols that 143 [RFC6390] specifies guidelines for their development. 145 The definition and use of Performance Metrics in the IETF happens in 146 various working groups (WG), most notably: 148 The "IP Performance Metrics" (IPPM) WG is the WG primarily 149 focusing on Performance Metrics definition at the IETF. 151 The "Metric Blocks for use with RTCP's Extended Report Framework" 152 (XRBLOCK) WG recently specified many Performance Metrics related 153 to "RTP Control Protocol Extended Reports (RTCP XR)" [RFC3611], 154 which establishes a framework to allow new information to be 155 conveyed in RTCP, supplementing the original report blocks defined 156 in "RTP: A Transport Protocol for Real-Time Applications", 157 [RFC3550]. 159 The "Benchmarking Methodology" WG (BMWG) defined many Performance 160 Metrics for use in laboratory benchmarking of inter-networking 161 technologies.
163 In the "IP Flow Information eXport" (IPFIX) WG, Information Elements 164 related to Performance Metrics are currently proposed. 166 The concluded "Performance Metrics for Other Layers" (PMOL) WG 167 defined some Performance Metrics related to Session Initiation 168 Protocol (SIP) voice quality [RFC6035]. 170 It is expected that more Performance Metrics will be defined in the 171 future, not only IP-based metrics, but also metrics which are 172 protocol-specific and application-specific. 174 However, despite the importance of Performance Metrics, there are two 175 related problems for the industry. First, how to ensure that when 176 one party requests another party to measure (or report or in some way 177 act on) a particular Performance Metric, then both parties have 178 exactly the same understanding of what Performance Metric is being 179 referred to. Second, how to discover which Performance Metrics have 180 been specified, so as to avoid developing a new Performance Metric that 181 is very similar. Both problems can be addressed by creating a 182 registry of Performance Metrics. The usual way in which the IETF 183 organizes namespaces is with Internet Assigned Numbers Authority 184 (IANA) registries, and there is currently no Performance Metrics 185 Registry maintained by the IANA. 187 This document therefore proposes the creation of a Performance 188 Metrics Registry. It also provides best practices on how to define 189 new or updated entries in the Performance Metrics Registry. 191 3. Terminology 193 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 194 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 195 "OPTIONAL" in this document are to be interpreted as described in 196 [RFC2119]. 198 The terms Performance Metric and Performance Metrics Directorate are 199 defined in [RFC6390], and copied over in this document for the 200 reader's convenience.
202 Performance Metric: A Performance Metric is a quantitative measure 203 of performance, specific to an IETF-specified protocol or specific 204 to an application transported over an IETF-specified protocol. 205 Examples of Performance Metrics are the FTP response time for a 206 complete file download, the DNS response time to resolve the IP 207 address, a database logging time, etc. 209 Registered Performance Metric: A Registered Performance Metric (or 210 Registered Metric) is a Performance Metric expressed as an entry 211 in the Performance Metric Registry, comprising a 212 specifically named metric which has met all the registry review 213 criteria, is under the curation of the IETF Performance Metrics 214 Experts, and whose changes are controlled by IANA. 216 Performance Metrics Registry: The IANA registry containing 217 Registered Performance Metrics. In this document, it is also 218 called simply "Registry". 220 Proprietary Registry: A set of metrics that are registered in a 221 proprietary registry, as opposed to the Performance Metrics Registry. 223 Performance Metrics Experts: The Performance Metrics Experts are a 224 group of experts selected by the IESG to validate the Performance 225 Metrics before updating the Performance Metrics Registry. The 226 Performance Metrics Experts work closely with IANA. 228 Performance Metrics Directorate: The Performance Metrics Directorate 229 is a directorate that provides guidance for Performance Metrics 230 development in the IETF. The Performance Metrics Directorate 231 should be composed of experts in the performance community, 232 potentially selected from the IP Performance Metrics (IPPM), 233 Benchmarking Methodology (BMWG), and Performance Metrics for Other 234 Layers (PMOL) WGs. 236 Parameter: An input factor defined as a variable in the definition 237 of a metric: a numerical or other specified factor forming one of 238 a set that defines a metric or sets the conditions of its 239 operation.
All Input Parameters must be known to measure using a 240 metric and interpret the results. Although Input Parameters do 241 not change the fundamental nature of the metric's definition, some 242 have substantial influence on the network property being assessed 243 and interpretation of the results. 245 Consider packet loss in the following two cases. 246 The first case is packet loss as background loss, where the 247 parameter set includes a very sparse Poisson stream, and only 248 characterizes the times when packets were lost. Actual user 249 streams likely see much higher loss at these times, due to tail 250 drop or radio errors. The second case is packet loss as the 251 inverse of Throughput, where the parameter set includes a very 252 dense, bursty stream, and characterizes the loss experienced by 253 a stream that approximates a user stream. These are both "loss 254 metrics", but the difference in interpretation of the results 255 is highly dependent on the Parameters (at least), to the 256 extreme where we are actually using loss to infer its 257 complement: delivered throughput. 259 Active Measurement Method: Methods of Measurement conducted on 260 traffic which serves only the purpose of measurement and is 261 generated for that reason alone, and whose traffic characteristics 262 are known a priori. An Internet user's host can generate active 263 measurement traffic (virtually all typical user-generated traffic 264 is not dedicated to active measurement, but it can produce such 265 traffic with the necessary application operating). 267 Passive Measurement Method: Methods of Measurement conducted on 268 network traffic, generated either from the end users or from 269 network elements. One characteristic of Passive Measurement 270 Methods is that sensitive information may be observed, and as a 271 consequence, stored in the measurement system.
273 Hybrid Measurement Method: Methods of Measurement which use a 274 combination of Active Measurement and Passive Measurement methods. 276 4. Scope 278 The intended audience of this document includes those who prepare and 279 submit a request for a Registered Performance Metric, and the 280 Performance Metric Experts who review a request. 282 This document specifies a Performance Metrics Registry in IANA. This 283 Performance Metric Registry is applicable to Performance Metrics 284 derived from Active Measurement, Passive Measurement, or from end- 285 point calculation. This registry is designed to encompass 286 performance metrics developed throughout the IETF and especially for 287 the following existing working groups: IPPM, XRBLOCK, IPFIX, and 288 BMWG. This document analyzes a prior attempt to set up a 289 Performance Metric Registry, and the reasons why this design was 290 inadequate [RFC6248]. Finally, this document gives a set of 291 guidelines for requesters and expert reviewers of candidate 292 Registered Performance Metrics. 294 This document serves as the foundation for further work. It 295 specifies the set of columns describing common aspects necessary for 296 all entries in the Performance Metrics Registry. 298 Two documents describing sub-registries will be developed separately: 299 one for Active Registered Metrics and another one for the Passive 300 Registered Metrics. Indeed, Active and Passive Performance Metrics 301 appear to have different characteristics that must be documented in 302 their respective sub-registries. For example, Active Measurement 303 Methods must specify the packet stream characteristics they generate 304 and measure, so it is essential to include the stream specifications 305 in the Registry entry. In the case of Passive Performance Metrics, 306 there is a need to specify the sampling distribution in the Registry.
307 While it would be possible to force the definition of the Registry 308 field to include both types of distributions in the same Registry 309 column, we believe it is cleaner and clearer to have separate sub- 310 registries with different columns that have a narrow definition. 312 It is possible that future Performance Metrics use Hybrid Measurement 313 methods, and it may be possible to register hybrid metrics in one of 314 the two planned sub-registries (active or passive), or it may be 315 more efficient to define a third sub-registry with unique columns. The 316 current design with sub-registries allows for growth, and this is a 317 recognized option for extension. 319 This document makes no attempt to populate the Registry with initial 320 entries. 322 Based on [RFC5226] Section 4.3, this document is processed as Best 323 Current Practice (BCP) [RFC2026]. 325 5. Design Considerations for the Registry and Registered Metrics 327 In this section, we detail several design considerations that are 328 relevant for understanding the motivations and expected use of the 329 Performance Metric Registry. 331 5.1. Interoperability 333 As with any IETF registry, the primary use of a registry is to manage a 334 namespace for its use within one or more protocols. In this 335 particular case of the Performance Metric Registry, there are two 336 types of protocols that will use the values defined in the Registry 337 for their operation: 339 o Control protocol: this type of protocol is used to allow one 340 entity to request another entity to perform a measurement using a 341 specific metric defined by the Registry. One particular example 342 is the LMAP framework [I-D.ietf-lmap-framework]. Using the LMAP 343 terminology, the Registry is used in the LMAP Control protocol to 344 allow a Controller to request a measurement task from one or more 345 Measurement Agents.
In order to enable this use case, the entries 346 of the Performance Metric Registry must be well enough defined to 347 allow a Measurement Agent implementation to trigger a specific 348 measurement task upon the reception of a control protocol message. 349 This requirement heavily constrains the type of entries that are 350 acceptable for the Performance Metric Registry. 352 o Report protocol: This type of protocol is used to allow an entity 353 to report measurement results to another entity. By referencing 354 a specific Performance Metric Registry entry, it is possible to 355 properly characterize the measurement result data being 356 transferred. Using the LMAP terminology, the Registry is used in 357 the Report protocol to allow a Measurement Agent to report 358 measurement results to a Collector. 360 5.2. Criteria for Registered Performance Metrics 362 It is neither possible nor desirable to populate the Registry with 363 all combinations of input parameters of all Performance Metrics. The 364 Registered Performance Metrics should be: 366 1. interpretable by the user, 368 2. implementable by the software designer, 370 3. deployable by network operators, without major impact on the 371 networks, 373 4. accurate, for interoperability and deployment across vendors, 375 5. operationally useful, so that they have significant industry interest 376 and/or have seen deployment, 378 6. sufficiently tightly defined, so that changing Parameters does 379 not change the fundamental nature of the measurement, nor change 380 the practicality of its implementation. 382 In essence, there needs to be evidence that a candidate Registry 383 entry has significant industry interest, or has seen deployment, and 384 there is agreement that the candidate Registered Metric serves its 385 intended purpose. 387 5.3.
Single point of reference for Performance metrics 389 A Registry for Performance metrics serves as a single point of 390 reference for Performance Metrics defined in different working groups 391 in the IETF. As we mentioned earlier, there are several WGs that 392 define Performance Metrics in the IETF and it is hard to keep track 393 of all of them. This results in multiple definitions of similar metrics 394 that attempt to measure the same phenomena but in slightly different 395 (and incompatible) ways. Having a Registry would allow both the IETF 396 community and external people to have a single list of relevant 397 Performance Metrics defined by the IETF (and others, where 398 appropriate). The single list is also an essential aspect of 399 communication about metrics, where different entities that request 400 measurements, execute measurements, and report the results can 401 benefit from a common understanding of the referenced metric. 403 5.4. Side benefits 405 There are a couple of side benefits of having such a Registry. 406 First, the Registry could serve as an inventory of useful and used 407 metrics that are normally supported by different implementations of 408 measurement agents. Second, the results of the metrics would be 409 comparable even if they are performed by different implementations 410 and in different networks, as the metric is properly defined. BCP 411 176 [RFC6576] examines whether the results produced by independent 412 implementations are equivalent in the context of evaluating the 413 completeness and clarity of metric specifications. This BCP defines 414 the standards track advancement testing for (active) IPPM metrics, 415 and the same process will likely suffice to determine whether 416 Registry entries are sufficiently well specified to result in 417 comparable (or equivalent) results. Registry entries which have 418 undergone such testing SHOULD be noted, with a reference to the test 419 results. 421 6.
Performance Metric Registry: Prior attempt 423 There was a previous attempt to define a metric registry, in RFC 4148 424 [RFC4148]. However, it was obsoleted by RFC 6248 [RFC6248] because 425 it was "found to be insufficiently detailed to uniquely identify IPPM 426 metrics... [there was too much] variability possible when 427 characterizing a metric exactly" which led to the RFC4148 registry 428 having "very few users, if any". 430 A couple of additional quotes from RFC 6248 may help to 431 understand the issues related to that registry. 433 1. "It is not believed to be feasible or even useful to register 434 every possible combination of Type P, metric parameters, and 435 Stream parameters using the current structure of the IPPM Metrics 436 Registry." 438 2. "The registry structure has been found to be insufficiently 439 detailed to uniquely identify IPPM metrics." 441 3. "Despite apparent efforts to find current or even future users, 442 no one responded to the call for interest in the RFC 4148 443 registry during the second half of 2010." 445 The current approach learns from this by tightly defining each entry 446 in the registry with only a few variable Parameters to be specified 447 by the measurement designer, if any. The idea is that entries in the 448 Registry represent different measurement methods which require input 449 parameters to set factors like source and destination addresses 450 (which do not change the fundamental nature of the measurement). The 451 downside of this approach is that it could result in a large number 452 of entries in the Registry. We believe that less is more in this 453 context - it is better to have a reduced set of useful metrics rather 454 than a large set of metrics with questionable usefulness. Therefore, 455 this document requires that the Registry include only metrics that 456 are well defined and that have proven to be operationally useful.
In 457 order to guarantee these two characteristics, we require that a set of 458 experts review the allocation request to verify that the metric is 459 well defined and operationally useful. 461 6.1. Why this Attempt Will Succeed? 463 The Registry defined in this document addresses the main issues 464 identified in the previous attempt. As we mentioned in the previous 465 section, one of the main issues with the previous registry was that 466 the metrics contained in the registry were too generic to be useful. 467 In this Registry, the Registry requests are evaluated by an expert 468 group, the Performance Metrics Experts, who will make sure that the 469 metric is properly defined. This document provides guidelines to 470 assess if a metric is properly defined. 472 Another key difference between this attempt and the previous one is 473 that in this case there is at least one clear user for the Registry: 475 the LMAP framework and protocol. Because the LMAP protocol will use 476 the Registry values in its operation, this actually helps to 477 determine if a metric is properly defined. In particular, since we 478 expect that the LMAP control protocol will enable a controller to 479 request a measurement agent to perform a measurement using a given 480 metric by embedding the Performance Metric Registry value in the 481 protocol, a metric is properly specified if it is defined well enough 482 that it is possible (and practical) to implement the metric in the 483 measurement agent. This was clearly not the case for the previous 484 attempt: defining a metric with an undefined P-Type makes its 485 implementation impractical. 487 7. Common Columns of the Performance Metric Registry 489 The Performance Metric Registry is composed of two sub-registries: 490 the registry for Active Performance Metrics and the registry for 491 Passive Performance Metrics.
The rationale for having two sub- 492 registries (as opposed to having a single registry for all metrics) 493 is that the set of registry columns must support unambiguous 494 registry entries, and there are fundamental differences in the 495 methods to collect active and passive metrics and the required input 496 parameters. Forcing them into a single, generalized registry would 497 result in a less meaningful structure for some entries in the 498 registry. Nevertheless, it is desirable that the two sub-registries 499 share the same structure as much as possible. In particular, both 500 registries will share the following columns: the identifier, the 501 name, the requester, the revision, the revision date, and the 502 description. All these fields are described below. The design of 503 these two sub-registries is work-in-progress. 505 7.1. Identifier 507 A numeric identifier for the Registered Performance Metric. This 508 identifier MUST be unique within the Performance Metric Registry and 509 sub-registries. 511 The Registered Performance Metric unique identifier is a 16-bit 512 integer (range 0 to 65535). When adding newly Registered Performance 513 Metrics to the Performance Metric Registry, IANA SHOULD assign the 514 lowest available identifier to the next active monitoring Registered 515 Performance Metric, and the highest available identifier to the next 516 passive monitoring Registered Performance Metric. 518 7.2. Name 520 As the name of a Registered Performance Metric is the first thing a 521 potential implementor will use when determining whether it is 522 suitable for a given application, it is important to be as precise 523 and descriptive as possible. New names of Registered Performance 524 Metrics: 526 1. "MUST be chosen carefully to describe the Registered Performance 527 Metric and the context in which it will be used." 529 2. "MUST be unique within the Performance Metric Registry (including 530 sub-registries)." 532 3.
"MUST use capital letters for the first letter of each 533 component. All other letters MUST be lowercase, even for acronyms. 534 Exceptions are made for acronyms containing a mixture of 535 lowercase and capital letters, such as 'IPv4' and 'IPv6'." 537 4. "MUST use '_' between each component composing the Registered 538 Performance Metric name." 540 5. "MUST start with the prefix Act_ for an active measurement Registered 541 Performance Metric." 543 6. "MUST start with the prefix Pass_ for a passive monitoring Registered 544 Performance Metric." AL COMMENTS: how about just 3 letters for 545 consistency: "Pas_" 547 7. The remaining rules for naming are left to the Performance 548 Metrics Experts to determine as they gather experience, so this is an 549 area of planned update by a future RFC. 551 An example is "Act_UDP_Latency_Poisson_99mean" for an active 552 monitoring UDP latency metric using a Poisson stream of packets and 553 producing the 99th percentile mean as output. 555 >>>> NEED passive naming examples. 557 7.3. URI 559 The URI column MUST contain a URI [RFC 3986] that uniquely identifies 560 the metric. The URI is a URN [RFC 2141]. The URI is automatically 561 generated by prepending the prefix urn:ietf:params:ippm:metric: to 562 the metric name. The resulting URI is globally unique. 564 7.4. Status 566 The status of the specification of this Registered Performance 567 Metric. Allowed values are 'current' and 'deprecated'. All newly 568 defined Registered Performance Metrics have 'current' status. 570 7.5. Requester 572 The requester for the Registered Performance Metric. The requester 573 MAY be a document, such as an RFC, or a person. 575 7.6. Revision 577 The revision number of a Registered Performance Metric, starting at 0 578 for Registered Performance Metrics at the time of definition and 579 incremented by one for each revision. 581 7.7. Revision Date 583 The date of acceptance or of the most recent revision of the Registered 584 Performance Metric. 586 7.8.
Description 588 A Registered Performance Metric Description is a written 589 representation of a particular Registry entry. It supplements the 590 metric name to help Registry users select relevant Registered 591 Performance Metrics. 593 7.9. Reference Specification(s) 595 Following the common columns, Registry entries must provide the 596 reference specification(s) on which the Registered Performance Metric 597 is based. 599 8. The Life-Cycle of Registered Metrics 601 Once a Performance Metric or set of Performance Metrics has been 602 identified for a given application, candidate Registry entry 603 specifications in accordance with Section X are submitted to IANA to 604 follow the process for review by the Performance Metric Experts, as 605 defined below. This process is also used for other changes to the 606 Performance Metric Registry, such as deprecation or revision, as 607 described later in this section. 609 It is also desirable that the author(s) of a candidate Registry entry 610 seek review in the relevant IETF working group, or offer the 611 opportunity for review on the WG mailing list. 613 8.1. The Process for Review by the Performance Metric Experts 615 Requests to change Registered Metrics in the Performance Metric 616 Registry or a linked sub-registry are submitted to IANA, which 617 forwards the request to a designated group of experts (Performance 618 Metric Experts) appointed by the IESG; these are the reviewers called 619 for by the Expert Review policy [RFC5226] defined for the Performance 620 Metric Registry. The Performance Metric Experts review the request 621 for such things as compliance with this document, compliance with 622 other applicable Performance Metric-related RFCs, and consistency 623 with the currently defined set of Registered Performance Metrics. 625 Authors are expected to review compliance with the specifications in 626 this document to check their submissions before sending them to IANA.
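As an illustration of such a self-check, the naming rules of Section 7.2 and the URN construction of Section 7.3 might be sketched as follows. This is a non-normative sketch: the function names are illustrative, and the handling of all-caps acronyms such as "UDP" and of numeric statistic components such as "99mean" is an assumption drawn from the draft's own example name, since rule 7 leaves the remaining naming rules to the Performance Metrics Experts.

```python
import re

# Mixed-case acronyms explicitly permitted by Section 7.2, rule 3.
# "UDP" is added here only because the draft's own example name
# "Act_UDP_Latency_Poisson_99mean" uses it; this is an assumption.
ALLOWED_ACRONYMS = {"IPv4", "IPv6", "UDP"}

def is_valid_component(component: str) -> bool:
    """Rule 3: first letter capital, remaining letters lowercase.
    Components beginning with a digit (e.g. '99mean' in the draft's
    example) are assumed to be permitted statistic suffixes."""
    if component in ALLOWED_ACRONYMS:
        return True
    return re.fullmatch(r"[A-Z][a-z0-9]*|[0-9][a-z0-9]*", component) is not None

def is_valid_metric_name(name: str) -> bool:
    """Rules 4-6: components joined by '_', starting with the Act_
    (active) or Pass_ (passive) prefix."""
    components = name.split("_")
    if len(components) < 2 or components[0] not in ("Act", "Pass"):
        return False
    return all(is_valid_component(c) for c in components[1:])

def metric_urn(name: str) -> str:
    """Section 7.3: the URN is the fixed prefix plus the metric name."""
    return "urn:ietf:params:ippm:metric:" + name
```

For instance, is_valid_metric_name("Act_UDP_Latency_Poisson_99mean") accepts the example name of Section 7.2, and metric_urn() yields urn:ietf:params:ippm:metric:Act_UDP_Latency_Poisson_99mean for it.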
628 The Performance Metric Experts should endeavor to complete referred 629 reviews in a timely manner. If the request is acceptable, the 630 Performance Metric Experts signify their approval to IANA, which 631 changes the Performance Metric Registry. If the request is not 632 acceptable, the Performance Metric Experts can coordinate with the 633 requester to change the request to be compliant. The Performance 634 Metric Experts may also choose in exceptional circumstances to reject 635 clearly frivolous or inappropriate change requests outright. 637 This process should not in any way be construed as allowing the 638 Performance Metric Experts to overrule IETF consensus. Specifically, 639 any Registered Metrics that were added with IETF consensus require 640 IETF consensus for revision or deprecation. 642 Decisions by the Performance Metric Experts may be appealed as in 643 Section 7 of RFC5226. 645 8.2. Revising Registered Performance Metrics 647 A request for Revision is ONLY permissible when the changes maintain 648 backward-compatibility with implementations of the prior Registry 649 entry describing a Registered Metric (entries with lower revision 650 numbers, but the same Identifier and Name). 652 The purpose of the Status field in the Performance Metric Registry is 653 to indicate whether the entry for a Registered Metric is 'current' or 654 'deprecated'. 656 Until now, no policy has been defined for revising IANA Performance 657 Metric entries or addressing errors in them. To be clear, changes 658 and deprecations within the Performance Metric Registry are not 659 encouraged, and should be avoided to the extent possible. However, 660 in recognition that change is inevitable, the provisions of this 661 section address the need for revisions. 663 Revisions are initiated by sending a candidate Registered Performance 664 Metric definition to IANA, as in Section X, identifying the existing 665 Registry entry.
667 The primary requirement in the definition of a policy for managing 668 changes to existing Registered Performance Metrics is avoidance of 669 interoperability problems; Performance Metric Experts must work to 670 maintain interoperability above all else. Changes to Registered 671 Performance Metrics may only be made in an interoperable way; 672 necessary changes that cannot be made in a way that allows 673 interoperability with unchanged implementations must result in the 674 creation of a new Registered Metric and possibly the deprecation of 675 the earlier metric. 677 A change to a Registered Performance Metric is held to be backward- 678 compatible only when: 680 1. "it involves the correction of an error that is obviously only 681 editorial; or" 683 2. "it corrects an ambiguity in the Registered Performance Metric's 684 definition, which itself leads to issues severe enough to prevent 685 the Registered Performance Metric's usage as originally defined; 686 or" 688 3. "it corrects missing information in the metric definition without 689 changing its meaning (e.g., the explicit definition of 'quantity' 690 semantics for numeric fields without a Data Type Semantics 691 value); or" 693 4. "it harmonizes with an external reference that was itself 694 corrected." 696 5. "BENOIT: NOTE THAT THERE ARE MORE RULES IN RFC 7013 SECTION 5 BUT 697 THEY WOULD ONLY APPLY TO THE ACTIVE/PASSIVE DRAFTS. TO BE 698 DISCUSSED." 700 If a change is deemed permissible by the Performance Metric Experts, 701 IANA makes the change in the Performance Metric Registry. The 702 requester of the change is appended to the Requester field in the Registry. 704 Each Registered Performance Metric in the Registry has a revision 705 number, starting at zero. Each change to a Registered Performance 706 Metric following this process increments the revision number by one.
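The revision bookkeeping described above — a revision number starting at zero and incremented by one per accepted change, with the requester of each change appended — can be sketched as follows. This is an illustrative model only; the field names are assumptions based on the common columns of Section 7, and the example requester names are hypothetical.

```python
from dataclasses import dataclass

# Illustrative, non-normative model of a Registry entry's revision
# bookkeeping; field names follow the common columns of Section 7.
@dataclass
class RegistryEntry:
    identifier: int
    name: str
    status: str = "current"      # 'current' or 'deprecated'
    revision: int = 0            # starts at 0 at time of definition
    revision_date: str = ""      # date of acceptance of latest revision
    requester: str = ""

def apply_revision(entry: RegistryEntry, date: str, requester: str) -> None:
    """Record an approved, backward-compatible revision."""
    entry.revision += 1                  # incremented by one per revision
    entry.revision_date = date           # date of acceptance
    entry.requester += ", " + requester  # requester of change is appended

# Hypothetical example entry and revision request.
entry = RegistryEntry(1, "Act_UDP_Latency_Poisson_99mean",
                      requester="example-document")
apply_revision(entry, "2014-07-03", "example-requester")
print(entry.revision)  # 1
```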
708 COMMENT: Al (and Phil) think we should keep old/revised entries as- 709 is, marked as deprecated >>>> Since any revision must be inter- 710 operable according to the criteria above, there is no need for the 711 Performance Metric Registry to store information about old revisions. 713 When a revised Registered Performance Metric is accepted into the 714 Performance Metric Registry, the date of acceptance of the most 715 recent revision is placed into the Revision Date column of the 716 Registry for that Registered Performance Metric. 718 Where applicable, additions to Registry entries in the form of text 719 Comments or Remarks should include the date, but such additions do 720 not constitute a revision according to this process. 722 8.3. Deprecating Registered Performance Metrics 724 Changes that are not permissible by the above criteria for revision 725 of a Registered Metric may only be handled by deprecation. A Registered 726 Performance Metric MAY be deprecated and replaced when: 728 1. "the Registered Performance Metric definition has an error or 729 shortcoming that cannot be permissibly changed as in 730 Section Revising Registered Performance Metrics; or" 732 2. "the deprecation harmonizes with an external reference that was 733 itself deprecated through that reference's accepted deprecation 734 method." 736 A request for deprecation is sent to IANA, which passes it to the 737 Performance Metric Experts for review, as in Section 'The Process for 738 Review by the Performance Metric Experts'. When deprecating a 739 Performance Metric, the Performance Metric description in the 740 Performance Metric Registry must be updated to explain the 741 deprecation, as well as to refer to any new Performance Metrics 742 created to replace the deprecated Performance Metric. 744 The revision number of a Registered Performance Metric is incremented 745 upon deprecation, and the Revision Date updated, as with any 746 revision.
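The deprecation steps above — mark the entry 'deprecated', amend the description to explain the deprecation and refer to any replacement metric, and increment the revision number and Revision Date as with any revision — can be sketched as follows. This is an illustrative, self-contained model; the dictionary layout and the replacement metric name are assumptions, not part of the Registry specification.

```python
# Illustrative, non-normative sketch of deprecating a Registered
# Performance Metric; the entry layout is an assumption modeled on
# the common columns of Section 7.
def deprecate(entry, date, replacement=None):
    """Mark an entry deprecated, following the steps described above."""
    entry["status"] = "deprecated"
    entry["revision"] += 1           # incremented as with any revision
    entry["revision_date"] = date    # Revision Date updated
    note = "Deprecated " + date      # explain the deprecation...
    if replacement:                  # ...and refer to any replacement
        note += "; replaced by " + replacement
    entry["description"] = (entry.get("description", "") + " " + note).strip()

# Hypothetical entry and replacement metric name.
entry = {"name": "Act_UDP_Latency_Poisson_99mean", "status": "current",
         "revision": 2, "revision_date": "2014-01-01",
         "description": "UDP one-way latency."}
deprecate(entry, "2014-07-03", replacement="Act_UDP_Latency_Poisson_95mean")
print(entry["status"], entry["revision"])  # deprecated 3
```

Note that, per the following section, the name and Metric ID of a deprecated entry would remain reserved rather than being deleted or reused.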
748 The use of deprecated Registered Metrics should result in a log entry 749 or human-readable warning by the respective application. 751 Names and Metric IDs of deprecated Registered Metrics must not be 752 reused. 754 9. Performance Metric Registry and other Registries 756 BENOIT: TBD. 758 THE BASIC IDEA IS THAT PEOPLE COULD DIRECTLY DEFINE PERF. METRICS IN 759 OTHER EXISTING REGISTRIES, FOR SPECIFIC PROTOCOL/ENCODING. EXAMPLE: 760 IPFIX. IDEALLY, ALL PERF. METRICS SHOULD BE DEFINED IN THIS 761 REGISTRY AND REFERRED TO FROM OTHER REGISTRIES. 763 10. Security considerations 765 This document does not introduce any new security considerations for 766 the Internet. However, the definition of Performance Metrics may 767 introduce some security concerns, and should be reviewed with 768 security in mind. 770 11. IANA Considerations 772 This document specifies the procedure for Performance Metrics 773 Registry setup. IANA is requested to create a new Registry for 774 Performance Metrics called "Registered Performance Metrics". 776 This Performance Metrics Registry contains two sub-registries: one 777 for Active and another for Passive Performance Metrics. These 778 sub-registries are not defined in this document. However, these two 779 sub-registries MUST contain the common columns defined in Section 7. 781 New assignments for the Performance Metric Registry will be 782 administered by IANA through Expert Review [RFC5226], i.e., review by 783 one of a group of experts, the Performance Metric Experts, appointed 784 by the IESG upon recommendation of the Transport Area Directors. The 785 experts will initially be drawn from the Working Group Chairs and 786 document editors of the Performance Metrics Directorate [performance- 787 metrics-directorate]. 789 This document requests the allocation of the URI prefix 790 urn:ietf:params:ippm:metric for the purpose of generating URIs for 791 registered metrics. 793 12.
Acknowledgments 795 Thanks to Brian Trammell and Bill Cerveny, IPPM chairs, for leading 796 some brainstorming sessions on this topic. 798 13. References 800 13.1. Normative References 802 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 803 Requirement Levels", BCP 14, RFC 2119, March 1997. 805 [RFC2026] Bradner, S., "The Internet Standards Process -- Revision 806 3", BCP 9, RFC 2026, October 1996. 808 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, 809 "Framework for IP Performance Metrics", RFC 2330, May 810 1998. 812 [RFC4148] Stephan, E., "IP Performance Metrics (IPPM) Metrics 813 Registry", BCP 108, RFC 4148, August 2005. 815 [RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an 816 IANA Considerations Section in RFCs", BCP 26, RFC 5226, 817 May 2008. 819 [RFC6248] Morton, A., "RFC 4148 and the IP Performance Metrics 820 (IPPM) Registry of Metrics Are Obsolete", RFC 6248, April 821 2011. 823 [RFC6390] Clark, A. and B. Claise, "Guidelines for Considering New 824 Performance Metric Development", BCP 170, RFC 6390, 825 October 2011. 827 [RFC6576] Geib, R., Morton, A., Fardid, R., and A. Steinmitz, "IP 828 Performance Metrics (IPPM) Standard Advancement Testing", 829 BCP 176, RFC 6576, March 2012. 831 [RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform 832 Resource Identifier (URI): Generic Syntax", STD 66, RFC 833 3986, January 2005. 835 [RFC2141] Moats, R., "URN Syntax", RFC 2141, May 1997. 837 13.2. Informative References 839 [RFC3611] Friedman, T., Caceres, R., and A. Clark, "RTP Control 840 Protocol Extended Reports (RTCP XR)", RFC 3611, November 841 2003. 843 [RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. 844 Jacobson, "RTP: A Transport Protocol for Real-Time 845 Applications", STD 64, RFC 3550, July 2003. 847 [RFC6035] Pendleton, A., Clark, A., Johnston, A., and H. Sinnreich, 848 "Session Initiation Protocol Event Package for Voice 849 Quality Reporting", RFC 6035, November 2010. 
851 [I-D.ietf-lmap-framework] 852 Eardley, P., Morton, A., Bagnulo, M., Burbridge, T., 853 Aitken, P., and A. Akhter, "A framework for large-scale 854 measurement platforms (LMAP)", draft-ietf-lmap- 855 framework-07 (work in progress), June 2014. 857 Authors' Addresses 859 Marcelo Bagnulo 860 Universidad Carlos III de Madrid 861 Av. Universidad 30 862 Leganes, Madrid 28911 863 SPAIN 865 Phone: 34 91 6249500 866 Email: marcelo@it.uc3m.es 867 URI: http://www.it.uc3m.es 869 Benoit Claise 870 Cisco Systems, Inc. 871 De Kleetlaan 6a b1 872 1831 Diegem 873 Belgium 875 Email: bclaise@cisco.com 877 Philip Eardley 878 British Telecom 879 Adastral Park, Martlesham Heath 880 Ipswich 881 ENGLAND 883 Email: philip.eardley@bt.com 885 Al Morton 886 AT&T Labs 887 200 Laurel Avenue South 888 Middletown, NJ 889 USA 891 Email: acmorton@att.com