idnits 2.17.00 (12 Aug 2021) /tmp/idnits47294/draft-ietf-ippm-metric-registry-01.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** There are 5 instances of too long lines in the document, the longest one being 17 characters in excess of 72. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (September 10, 2014) is 2809 days in the past. Is this intentional? 
Checking references for intended status: Best Current Practice ---------------------------------------------------------------------------- (See RFCs 3967 and 4897 for information about using normative references to lower-maturity documents in RFCs) == Missing Reference: 'RFC 3986' is mentioned on line 584, but not defined == Missing Reference: 'RFC 2141' is mentioned on line 585, but not defined ** Obsolete undefined reference: RFC 2141 (Obsoleted by RFC 8141) == Unused Reference: 'RFC3986' is defined on line 1043, but no explicit reference was found in the text == Unused Reference: 'RFC2141' is defined on line 1047, but no explicit reference was found in the text == Unused Reference: 'RFC5477' is defined on line 1069, but no explicit reference was found in the text == Unused Reference: 'RFC5102' is defined on line 1073, but no explicit reference was found in the text == Unused Reference: 'RFC6792' is defined on line 1077, but no explicit reference was found in the text == Unused Reference: 'RFC5905' is defined on line 1080, but no explicit reference was found in the text == Unused Reference: 'RFC3393' is defined on line 1084, but no explicit reference was found in the text == Unused Reference: 'RFC6776' is defined on line 1088, but no explicit reference was found in the text == Unused Reference: 'RFC7003' is defined on line 1092, but no explicit reference was found in the text == Unused Reference: 'RFC4566' is defined on line 1100, but no explicit reference was found in the text == Unused Reference: 'RFC5481' is defined on line 1103, but no explicit reference was found in the text ** Downref: Normative reference to an Informational RFC: RFC 2330 ** Obsolete normative reference: RFC 4148 (Obsoleted by RFC 6248) ** Obsolete normative reference: RFC 5226 (Obsoleted by RFC 8126) ** Downref: Normative reference to an Informational RFC: RFC 6248 ** Obsolete normative reference: RFC 2141 (Obsoleted by RFC 8141) == Outdated reference: draft-ietf-lmap-framework has been 
published as RFC 7594 -- Obsolete informational reference (is this intentional?): RFC 5102 (Obsoleted by RFC 7012) -- Obsolete informational reference (is this intentional?): RFC 4566 (Obsoleted by RFC 8866) Summary: 7 errors (**), 0 flaws (~~), 15 warnings (==), 3 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Network Working Group M. Bagnulo 3 Internet-Draft UC3M 4 Intended status: Best Current Practice B. Claise 5 Expires: March 14, 2015 Cisco Systems, Inc. 6 P. Eardley 7 BT 8 A. Morton 9 AT&T Labs 10 A. Akhter 11 Cisco Systems, Inc. 12 September 10, 2014 14 Registry for Performance Metrics 15 draft-ietf-ippm-metric-registry-01 17 Abstract 19 This document defines the IANA Registry for Performance Metrics. 20 This document also gives a set of guidelines for Registered 21 Performance Metric requesters and reviewers. 23 Status of This Memo 25 This Internet-Draft is submitted in full conformance with the 26 provisions of BCP 78 and BCP 79. 28 Internet-Drafts are working documents of the Internet Engineering 29 Task Force (IETF). Note that other groups may also distribute 30 working documents as Internet-Drafts. The list of current Internet- 31 Drafts is at http://datatracker.ietf.org/drafts/current/. 33 Internet-Drafts are draft documents valid for a maximum of six months 34 and may be updated, replaced, or obsoleted by other documents at any 35 time. It is inappropriate to use Internet-Drafts as reference 36 material or to cite them other than as "work in progress." 38 This Internet-Draft will expire on March 14, 2015. 40 Copyright Notice 42 Copyright (c) 2014 IETF Trust and the persons identified as the 43 document authors. All rights reserved. 
45 This document is subject to BCP 78 and the IETF Trust's Legal 46 Provisions Relating to IETF Documents 47 (http://trustee.ietf.org/license-info) in effect on the date of 48 publication of this document. Please review these documents 49 carefully, as they describe your rights and restrictions with respect 50 to this document. Code Components extracted from this document must 51 include Simplified BSD License text as described in Section 4.e of 52 the Trust Legal Provisions and are provided without warranty as 53 described in the Simplified BSD License. 55 Table of Contents 57 1. Open Issues . . . . . . . . . . . . . . . . . . . . . . . . . 3 58 2. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4 59 3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 5 60 4. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 61 5. Design Considerations for the Registry and Registered Metrics 7 62 5.1. Interoperability . . . . . . . . . . . . . . . . . . . . 7 63 5.2. Criteria for Registered Performance Metrics . . . . . . . 8 64 5.3. Single point of reference for Performance metrics . . . . 8 65 5.4. Side benefits . . . . . . . . . . . . . . . . . . . . . . 9 66 6. Performance Metric Registry: Prior attempt . . . . . . . . . 9 67 6.1. Why This Attempt Will Succeed . . . . . . . . . . . . . 10 68 7. Definition of the Performance Metric Registry . . . . . . . . 10 69 7.1. Summary Category . . . . . . . . . . . . . . . . . . . . 12 70 7.1.1. Identifier . . . . . . . . . . . . . . . . . . . . . 12 71 7.1.2. Name . . . . . . . . . . . . . . . . . . . . . . . . 13 72 7.1.3. URI . . . . . . . . . . . . . . . . . . . . . . . . . 13 73 7.1.4. Description . . . . . . . . . . . . . . . . . . . . . 14 74 7.2. Metric Definition Category . . . . . . . . . . . . . . . 14 75 7.2.1. Reference Definition . . . . . . . . . . . . . . . . 14 76 7.2.2. Fixed Parameters . . . . . . . . . . . . . . . . . . 14 77 7.3. Method of Measurement Category . . . . . . . . .
. . . . 15 78 7.3.1. Reference Method . . . . . . . . . . . . . . . . . . 15 79 7.3.2. Packet Generation Stream . . . . . . . . . . . . . . 15 80 7.3.3. Traffic Filter . . . . . . . . . . . . . . . . . . . 16 81 7.3.4. Sampling distribution . . . . . . . . . . . . . . . . 16 82 7.3.5. Run-time Parameters . . . . . . . . . . . . . . . . . 16 83 7.3.6. Role . . . . . . . . . . . . . . . . . . . . . . . . 17 84 7.4. Output Category . . . . . . . . . . . . . . . . . . . . . 17 85 7.4.1. Value . . . . . . . . . . . . . . . . . . . . . . . . 17 86 7.4.2. Data Format . . . . . . . . . . . . . . . . . . . . . 17 87 7.4.3. Reference . . . . . . . . . . . . . . . . . . . . . . 18 88 7.4.4. Metric Units . . . . . . . . . . . . . . . . . . . . 18 89 7.5. Administrative information . . . . . . . . . . . . . . . 18 90 7.5.1. Status . . . . . . . . . . . . . . . . . . . . . . . 18 91 7.5.2. Requester . . . . . . . . . . . . . . . . . . . . . . 18 92 7.5.3. Revision . . . . . . . . . . . . . . . . . . . . . . 18 93 7.5.4. Revision Date . . . . . . . . . . . . . . . . . . . . 18 94 7.6. Comments and Remarks . . . . . . . . . . . . . . . . . . 18 95 8. The Life-Cycle of Registered Metrics . . . . . . . . . . . . 19 96 8.1. Adding new Performance Metrics to the Registry . . . . . 19 97 8.2. Revising Registered Performance Metrics . . . . . . . . . 20 98 8.3. Deprecating Registered Performance Metrics . . . . . . . 21 99 9. Performance Metric Registry and other Registries . . . . . . 22 100 10. Security considerations . . . . . . . . . . . . . . . . . . . 22 101 11. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22 102 12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 23 103 13. References . . . . . . . . . . . . . . . . . . . . . . . . . 23 104 13.1. Normative References . . . . . . . . . . . . . . . . . . 23 105 13.2. Informative References . . . . . . . . . . . . . . . . . 24 106 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . .
25 108 1. Open Issues 110 1. Many aspects of the Naming convention are TBD, and need 111 discussion. For example, we have distinguished RTCP-XR metrics 112 as End-Point (neither active nor passive in the traditional 113 sense, so not Act_ or Pas_). Even though we may not cast all 114 naming conventions in stone at the start, it will be helpful to 115 look at several examples of passive metric names now. 117 2. We should expand on the different roles and responsibilities of 118 the Performance Metrics Experts versus the Performance Metric 119 Directorate. At least, the role of the Performance Metrics Directorate 120 should be expanded. --- (v7) If these are different entities, our 121 only concern is the role of the "PM Experts". 123 3. Revised Registry Entries: Keep for history (deprecated) or 124 Delete? 126 4. Need to include an example of a name for a passive metric 128 5. Definition of Parameter needs more work? 130 6. Whether the name of the metric should contain the version of the 131 metric 133 7. reserve some values for examples and private use? 135 8. should we define a "type" column with the possible values 136 "active" "passive" "hybrid" "endpoint"? if we go for all 4 of 137 them, we should define the corresponding prefixes for the metric 138 name (at this point only the pas and act are defined) 140 9. URL: should we include a URL link in each registry entry with a 141 URL specific to the entry that links to a different text page 142 that contains all the details of the registry entry as in 143 http://www.iana.org/assignments/xml-registry/xml-registry.xhtml#ns 146 2. Introduction 148 The IETF specifies and uses Performance Metrics of protocols and 149 applications transported over its protocols. Performance metrics are 150 such an important part of the operations of IETF protocols that 151 [RFC6390] specifies guidelines for their development.
153 The definition and use of Performance Metrics in the IETF happens in 154 various working groups (WGs), most notably: 156 The "IP Performance Metrics" (IPPM) WG is the WG primarily 157 focusing on Performance Metrics definition at the IETF. 159 The "Metric Blocks for use with RTCP's Extended Report Framework" 160 (XRBLOCK) WG recently specified many Performance Metrics related 161 to "RTP Control Protocol Extended Reports (RTCP XR)" [RFC3611], 162 which establishes a framework to allow new information to be 163 conveyed in RTCP, supplementing the original report blocks defined 164 in "RTP: A Transport Protocol for Real-Time Applications", 165 [RFC3550]. 167 The "Benchmarking Methodology" WG (BMWG) defined many Performance 168 Metrics for use in laboratory benchmarking of inter-networking 169 technologies. 171 In the "IP Flow Information eXport" (IPFIX) WG, Information Elements 172 related to Performance Metrics are currently being proposed. 174 The concluded "Performance Metrics for Other Layers" (PMOL) WG 175 defined some Performance Metrics related to Session Initiation 176 Protocol (SIP) voice quality [RFC6035]. 178 It is expected that more Performance Metrics will be defined in the 179 future, not only IP-based metrics, but also metrics which are 180 protocol-specific and application-specific. 182 However, despite the importance of Performance Metrics, there are two 183 related problems for the industry. First, how to ensure that when 184 one party requests another party to measure (or report or in some way 185 act on) a particular Performance Metric, then both parties have 186 exactly the same understanding of what Performance Metric is being 187 referred to. Second, how to discover which Performance Metrics have 188 been specified, so as to avoid developing a new Performance Metric that 189 is very similar to an existing one. These problems can be addressed by creating a 190 registry of performance metrics.
The usual way in which the IETF 191 organizes namespaces is with Internet Assigned Numbers Authority 192 (IANA) registries, and there is currently no Performance Metrics 193 Registry maintained by the IANA. 195 This document therefore creates a Performance Metrics Registry. It 196 also provides best practices on how to define new or updated entries 197 in the Performance Metrics Registry. 199 3. Terminology 201 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 202 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 203 "OPTIONAL" in this document are to be interpreted as described in 204 [RFC2119]. 206 The terms Performance Metric and Performance Metrics Directorate are 207 defined in [RFC6390], and copied over in this document for the 208 reader's convenience. 210 Performance Metric: A Performance Metric is a quantitative measure 211 of performance, specific to an IETF-specified protocol or specific 212 to an application transported over an IETF-specified protocol. 213 Examples of Performance Metrics are the FTP response time for a 214 complete file download, the DNS response time to resolve the IP 215 address, a database logging time, etc. 217 Registered Performance Metric: A Registered Performance Metric (or 218 Registered Metric) is a Performance Metric expressed as an entry 219 in the Performance Metric Registry, and composed of a 220 specifically named metric which has met all the registry review 221 criteria, is under the curation of IETF Performance Metrics 222 Experts, and whose changes are controlled by IANA. 224 Performance Metrics Registry: The IANA registry containing 225 Registered Performance Metrics. In this document, it is also 226 called simply "Registry". 228 Proprietary Registry: A set of metrics that are registered in a 229 proprietary registry, as opposed to the Performance Metrics Registry.
231 Performance Metrics Experts: The Performance Metrics Experts are a 232 group of experts selected by the IESG to validate the Performance 233 Metrics before updating the Performance Metrics Registry. The 234 Performance Metrics Experts work closely with IANA. 236 Performance Metrics Directorate: The Performance Metrics Directorate 237 is a directorate that provides guidance for Performance Metrics 238 development in the IETF. The Performance Metrics Directorate 239 should be composed of experts in the performance community, 240 potentially selected from the IP Performance Metrics (IPPM), 241 Benchmarking Methodology (BMWG), and Performance Metrics for Other 242 Layers (PMOL) WGs. 244 Parameter: An input factor defined as a variable in the definition 245 of a metric. A numerical or other specified factor forming one of 246 a set that defines a metric or sets the conditions of its 247 operation. All Input Parameters must be known to measure using a 248 metric and interpret the results. Although Input Parameters do 249 not change the fundamental nature of the metric's definition, some 250 have substantial influence on the network property being assessed 251 and interpretation of the results. 253 Consider packet loss in the following two cases. 254 The first case is packet loss as background loss where the 255 parameter set includes a very sparse Poisson stream, and only 256 characterizes the times when packets were lost. Actual user 257 streams likely see much higher loss at these times, due to tail 258 drop or radio errors. The second case is packet loss as the 259 inverse of Throughput where the parameter set includes a very 260 dense, bursty stream, and characterizes the loss experienced by 261 a stream that approximates a user stream.
These are both "loss 262 metrics", but the difference in interpretation of the results 263 is highly dependent on the Parameters (at least), to the 264 extreme where we are actually using loss to infer its 265 complement: delivered throughput. 267 Active Measurement Method: Methods of Measurement conducted on 268 traffic which serves only the purpose of measurement and is 269 generated for that reason alone, and whose traffic characteristics 270 are known a priori. An Internet user's host can generate active 271 measurement traffic (virtually all typical user-generated traffic 272 is not dedicated to active measurement, but it can produce such 273 traffic with the necessary application operating). 275 Passive Measurement Method: Methods of Measurement conducted on 276 network traffic, generated either from the end users or from 277 network elements. One characteristic of Passive Measurement 278 Methods is that sensitive information may be observed, and as a 279 consequence, stored in the measurement system. 281 Hybrid Measurement Method: Methods of Measurement which use a 282 combination of Active Measurement and Passive Measurement methods. 284 4. Scope 286 The intended audience of this document includes those who prepare and 287 submit a request for a Registered Performance Metric, and the 288 Performance Metric Experts who review such requests. 290 This document specifies a Performance Metrics Registry in IANA. This 291 Performance Metric Registry is applicable to Performance Metrics 292 resulting from Active Measurement, Passive Measurement, end-point 293 calculation, or any other form of Performance Metric. This registry 294 is designed to encompass Performance Metrics developed throughout the 295 IETF and especially for the following existing working groups: IPPM, 296 XRBLOCK, IPFIX, and BMWG. This document analyzes a prior attempt to 297 set up a Performance Metric Registry, and the reasons why this design 298 was inadequate [RFC6248].
Finally, this document gives a set of 299 guidelines for requesters and expert reviewers of candidate 300 Registered Performance Metrics. 302 This document makes no attempt to populate the Registry with initial 303 entries. It does provide a few examples that are merely 304 illustrations and should not be included in the registry at this 305 point in time. 307 Based on [RFC5226] Section 4.3, this document is processed as Best 308 Current Practice (BCP) [RFC2026]. 310 5. Design Considerations for the Registry and Registered Metrics 312 In this section, we detail several design considerations that are 313 relevant for understanding the motivations and expected use of the 314 Performance Metric Registry. 316 5.1. Interoperability 318 As with any IETF registry, the primary use for a registry is to manage a 319 namespace for its use within one or more protocols. In this 320 particular case of the Performance Metric Registry, there are two 321 types of protocols that will use the values defined in the Registry 322 for their operation: 324 o Control protocol: this type of protocol is used to allow one 325 entity to request another entity to perform a measurement using a 326 specific metric defined by the Registry. One particular example 327 is the LMAP framework [I-D.ietf-lmap-framework]. Using the LMAP 328 terminology, the Registry is used in the LMAP Control protocol to 329 allow a Controller to request a measurement task from one or more 330 Measurement Agents. In order to enable this use case, the entries 331 of the Performance Metric Registry must be well enough defined to 332 allow a Measurement Agent implementation to trigger a specific 333 measurement task upon the reception of a control protocol message. 334 This requirement heavily constrains the type of entries that are 335 acceptable for the Performance Metric Registry. 337 o Report protocol: This type of protocol is used to allow an entity 338 to report measurement results to another entity.
By referencing 339 a specific Performance Metric Registry entry, it is possible to 340 properly characterize the measurement result data being 341 transferred. Using the LMAP terminology, the Registry is used in 342 the Report protocol to allow a Measurement Agent to report 343 measurement results to a Collector. 345 5.2. Criteria for Registered Performance Metrics 347 It is neither possible nor desirable to populate the Registry with 348 all combinations of input parameters of all Performance Metrics. The 349 Registered Performance Metrics should be: 351 1. interpretable by the user, 353 2. implementable by the software designer, 355 3. deployable by network operators, without major impact on the 356 networks, 358 4. accurate, for interoperability and deployment across vendors, 360 5. operationally useful, so that it has significant industry interest 361 and/or has seen deployment, 363 6. sufficiently tightly defined, so that changing Parameters does 364 not change the fundamental nature of the measurement, nor change 365 the practicality of its implementation. 367 In essence, there needs to be evidence that a candidate Registry 368 entry has significant industry interest, or has seen deployment, and 369 there is agreement that the candidate Registered Metric serves its 370 intended purpose. 372 5.3. Single point of reference for Performance metrics 374 A Registry for Performance metrics serves as a single point of 375 reference for Performance Metrics defined in different working groups 376 in the IETF. As we mentioned earlier, there are several WGs that 377 define Performance Metrics in the IETF and it is hard to keep track 378 of all of them. This results in multiple definitions of similar metrics 379 that attempt to measure the same phenomena but in slightly different 380 (and incompatible) ways.
Having a Registry would allow both the IETF 381 community and external people to have a single list of relevant 382 Performance Metrics defined by the IETF (and others, where 383 appropriate). The single list is also an essential aspect of 384 communication about metrics, where different entities that request 385 measurements, execute measurements, and report the results can 386 benefit from a common understanding of the referenced metric. 388 5.4. Side benefits 390 There are a couple of side benefits of having such a Registry. 391 First, the Registry could serve as an inventory of useful and used 392 metrics that are normally supported by different implementations of 393 measurement agents. Second, the results of the metrics would be 394 comparable even if they are performed by different implementations 395 and in different networks, as the metric is properly defined. BCP 396 176 [RFC6576] examines whether the results produced by independent 397 implementations are equivalent in the context of evaluating the 398 completeness and clarity of metric specifications. This BCP defines 399 the standards track advancement testing for (active) IPPM metrics, 400 and the same process will likely suffice to determine whether 401 Registry entries are sufficiently well specified to result in 402 comparable (or equivalent) results. Registry entries which have 403 undergone such testing SHOULD be noted, with a reference to the test 404 results. 406 6. Performance Metric Registry: Prior attempt 408 There was a previous attempt to define a metric registry, in RFC 4148 409 [RFC4148]. However, it was obsoleted by RFC 6248 [RFC6248] because 410 it was "found to be insufficiently detailed to uniquely identify IPPM 411 metrics... [there was too much] variability possible when 412 characterizing a metric exactly" which led to the RFC4148 registry 413 having "very few users, if any".
415 A couple of interesting additional quotes from RFC 6248 might help 416 understand the issues related to that registry. 418 1. "It is not believed to be feasible or even useful to register 419 every possible combination of Type P, metric parameters, and 420 Stream parameters using the current structure of the IPPM Metrics 421 Registry." 423 2. "The registry structure has been found to be insufficiently 424 detailed to uniquely identify IPPM metrics." 426 3. "Despite apparent efforts to find current or even future users, 427 no one responded to the call for interest in the RFC 4148 428 registry during the second half of 2010." 430 The current approach learns from this by tightly defining each entry 431 in the registry with only a few variable Parameters to be specified 432 by the measurement designer, if any. The idea is that entries in the 433 Registry represent different measurement methods which require input 434 parameters to set factors like source and destination addresses 435 (which do not change the fundamental nature of the measurement). The 436 downside of this approach is that it could result in a large number 437 of entries in the Registry. We believe that less is more in this 438 context - it is better to have a reduced set of useful metrics rather 439 than a large set of metrics with questionable usefulness. Therefore, 440 this document requires that the Registry only include metrics that 441 are well defined and that have proven to be operationally useful. In 442 order to guarantee these two characteristics, we require that a set of 443 experts review the allocation request to verify that the metric is 444 well defined and operationally useful. 446 6.1. Why This Attempt Will Succeed 448 The Registry defined in this document addresses the main issues 449 identified in the previous attempt.
As we mentioned in the previous 450 section, one of the main issues with the previous registry was that 451 the metrics contained in the registry were too generic to be useful. 452 In this Registry, registration requests are evaluated by an expert 453 group, the Performance Metrics Experts, who will make sure that the 454 metric is properly defined. This document provides guidelines to 455 assess if a metric is properly defined. 457 Another key difference between this attempt and the previous one is 458 that in this case there is at least one clear user for the Registry: 459 the LMAP framework and protocol. Because the LMAP protocol will use 460 the Registry values in its operation, this actually helps to 461 determine if a metric is properly defined. In particular, since we 462 expect that the LMAP control protocol will enable a controller to 463 request a measurement agent to perform a measurement using a given 464 metric by embedding the Performance Metric Registry value in the 465 protocol, a metric is properly specified if it is defined well enough 466 that it is possible (and practical) to implement the metric in the 467 measurement agent. This was clearly not the case for the previous 468 attempt: defining a metric with an undefined P-Type makes its 469 implementation impractical. 471 7. Definition of the Performance Metric Registry 473 In this section we define the columns of the Performance Metric 474 Registry. This registry will contain all Registered Performance 475 Metrics, including active, passive, hybrid, and endpoint metrics and any 476 other type of performance metric that can be envisioned. Because of 477 that, it may be the case that some of the columns defined are not 478 applicable for a given type of metric. If this is the case, the 479 column(s) SHOULD be populated with the "NA" value (Not Applicable).
480 However, the "NA" value MUST NOT be used for any metric in the 481 following columns: Identifier, Name, URI, Status, Requester, 482 Revision, Revision Date, Description, and Reference Specification. 483 In addition, it is possible that in the future a new 484 type of metric will require additional columns. Should that be the case, 485 it is possible to add new columns to the registry. The specification 486 defining the new column(s) MUST define how to populate the new 487 column(s) for existing entries. 489 The columns of the Performance Metric Registry are defined next. The 490 columns are grouped into "Categories" to facilitate the use of the 491 registry. Categories are described at the 7.x heading level, and 492 columns are at the 7.x.y heading level. The Figure below illustrates 493 this organization. An entry (row) therefore gives a complete 494 description of a Registered Metric. 496 Each column serves as a check-list item and helps to avoid omissions 497 during registration and expert review. In some cases an entry (row) 498 may have some columns without specific entries, marked Not Applicable 499 (NA). 501 Registry Categories and Columns, shown as 502 Category 503 ------------------ 504 Column | Column | 506 Summary 507 ------------------------------- 508 ID | Name | URI | Description | 510 Metric Definition 511 ----------------------------------------- 512 Reference Definition | Fixed Parameters | 514 Method of Measurement 515 --------------------------------------------------------------------------------- 516 Reference Method | Packet Generation | Traffic | Sampling | Run-time | Role | 517 | Stream | Filter | distribution | Param | | 519 Output 520 ----------------------------------------- 521 | Type | Reference | Data | Units | 522 | | Definition | Format | | 524 Administrative information 525 ---------------------------------- 526 Status |Request | Rev | Rev.Date | 528 Comments and Remarks 529 -------------------- 531 7.1.
Summary Category 533 7.1.1. Identifier 535 A numeric identifier for the Registered Performance Metric. This 536 identifier MUST be unique within the Performance Metric Registry. 538 The Registered Performance Metric unique identifier is a 16-bit 539 integer (range 0 to 65535). When adding newly Registered Performance 540 Metrics to the Performance Metric Registry, IANA SHOULD assign the 541 lowest available identifier to the next Registered Performance 542 Metric. 544 7.1.2. Name 546 As the name of a Registered Performance Metric is the first thing a 547 potential implementor will use when determining whether it is 548 suitable for a given application, it is important to be as precise 549 and descriptive as possible. New names of Registered Performance 550 Metrics: 552 1. "MUST be chosen carefully to describe the Registered Performance 553 Metric and the context in which it will be used." 555 2. "MUST be unique within the Performance Metric Registry." 557 3. "MUST use capital letters for the first letter of each component. 558 All other letters MUST be lowercase, even for acronyms. 559 Exceptions are made for acronyms containing a mixture of 560 lowercase and capital letters, such as 'IPv4' and 'IPv6'." 562 4. "MUST use '_' between each component composing the Registered 563 Performance Metric name." 565 5. "MUST start with prefix Act_ for active measurement Registered 566 Performance Metrics." 568 6. "MUST start with prefix Pas_ for passive monitoring Registered 569 Performance Metrics." 571 7. Other types of metrics should define a proper prefix for 572 identifying the type. 574 8. The remaining rules for naming are left to the Performance 575 Metrics Experts to determine as they gather experience, so this is an 576 area of planned update by a future RFC. 578 An example is "Act_UDP_Latency_Poisson_99mean" for an active 579 monitoring UDP latency metric using a Poisson stream of packets and 580 producing the 99th percentile mean as output. 582 7.1.3.
URI 584 The URI column MUST contain a URI [RFC 3986] that uniquely identifies 585 the metric. The URI is a URN [RFC 2141]. The URI is automatically 586 generated by prepending the prefix urn:ietf:params:ippm:metric: to 587 the metric name. The resulting URI is globally unique. 589 7.1.4. Description 591 A Registered Performance Metric Description is a written 592 representation of a particular Registry entry. It supplements the 593 metric name to help Registry users select relevant Registered 594 Performance Metrics. 596 7.2. Metric Definition Category 598 This category includes columns to prompt all necessary details 599 related to the metric definition, including the RFC reference and 600 values of input factors, called fixed parameters, which are left open 601 in the RFC but have a particular value defined by the performance 602 metric. 604 7.2.1. Reference Definition 606 This entry provides a reference (or references) to the relevant 607 section(s) of the document(s) that define the metric, as well as any 608 supplemental information needed to ensure an unambiguous definition 609 for implementations. The reference needs to be an immutable 610 document, such as an RFC; for other standards bodies, it is likely to 611 be necessary to reference a specific, dated version of a 612 specification. 614 7.2.2. Fixed Parameters 616 Fixed Parameters are input factors whose value must be specified in 617 the Registry. The measurement system uses these values. 619 Where referenced metrics supply a list of Parameters as part of their 620 descriptive template, a sub-set of the Parameters will be designated 621 as Fixed Parameters. For example, for active metrics, Fixed 622 Parameters determine most or all of the IPPM Framework convention 623 "packets of Type-P" as described in [RFC2330], such as transport 624 protocol, payload length, TTL, etc.
An example for passive metrics 625 is RTP packet loss calculation, which relies on the validation of a 626 packet as RTP; this is a multi-packet validation controlled by 627 MIN_SEQUENTIAL as defined by [RFC3550]. Varying MIN_SEQUENTIAL 628 values can alter the loss report, and this value could be set as a 629 Fixed Parameter. 631 A Parameter which is Fixed for one Registry entry may be designated 632 as a Run-time Parameter for another Registry entry. 634 7.3. Method of Measurement Category 636 This category includes columns for references to relevant sections of 637 the RFC(s) and any supplemental information needed to ensure an 638 unambiguous method for implementations. 640 7.3.1. Reference Method 642 This entry provides references to relevant sections of the RFC(s) 643 describing the method of measurement, as well as any supplemental 644 information needed to ensure unambiguous interpretation for 645 implementations referring to the RFC text. 647 Specifically, this section should include pointers to pseudocode or 648 actual code that could be used for an unambiguous implementation. 650 7.3.2. Packet Generation Stream 652 This column applies to metrics that generate traffic for measurement 653 purposes, including but not necessarily limited to Active metrics. 654 The generated traffic is referred to as a stream, and this column 655 describes its characteristics. Principally, two different streams are 656 used in IPPM metrics: Poisson distributed as described in [RFC2330] and 657 Periodic as described in [RFC3432]. Both Poisson and Periodic have 658 their own unique parameters, and the relevant set of values is 659 specified in this column. 661 Each entry for this column contains the following information: 663 o Value: The name of the packet stream scheduling discipline 665 o Stream Parameters: The values and formats of input factors for 666 each type of stream.
For example, the average packet rate and 667 distribution truncation value for streams with Poisson-distributed 668 inter-packet sending times. 670 o Reference: the specification where the stream is defined 672 The simplest example of stream specification is Singleton scheduling, 673 where a single atomic measurement is conducted. Each atomic 674 measurement could consist of sending a single packet (such as a DNS 675 request) or sending several packets (for example, to request a 676 webpage). Other streams support a series of atomic measurements in a 677 "sample", with a schedule defining the timing between each 678 transmitted packet and subsequent measurement. 680 7.3.3. Traffic Filter 682 This column applies to metrics that observe packets flowing on the 683 wire, i.e., traffic that is not specifically addressed to the measurement 684 agent. This includes but is not limited to Passive Metrics. The 685 filter specifies the traffic constraints for which the passive measurement 686 method is valid (or invalid). This includes valid packet 687 sampling ranges and the width of valid traffic matches (e.g., all traffic on an 688 interface, or UDP packets in a flow such as a single RTP session). 690 It is possible that the measurement method itself has no specific 691 limitation, but that a specific Registry entry, with its 692 combination of Fixed Parameters, implies restrictions. These 693 restrictions would be listed in this field. 695 7.3.4. Sampling distribution 697 The sampling distribution defines which of the packets that match 698 the traffic filter are actually used for the 699 measurement. One possibility is "all", which implies that all packets 700 matching the Traffic Filter are considered, but there may be other 701 sampling strategies. It includes the following information: 703 Value: the name of the sampling distribution 705 Parameters: if any.
707 Reference definition: pointer to the specification where the 708 sampling distribution is properly defined. 710 7.3.5. Run-time Parameters 712 Run-time Parameters are input factors that must be determined, 713 configured into the measurement system, and reported with the results 714 for the context to be complete. However, the values of these 715 parameters are not specified in the Registry; rather, these parameters 716 are listed as an aid to the measurement system implementor or user 717 (they must be left as variables and supplied on execution). 719 Where metrics supply a list of Parameters as part of their 720 descriptive template, a sub-set of the Parameters will be designated 721 as Run-time Parameters. 723 The Data Format of each Run-time Parameter SHALL be specified in this 724 column, to simplify the control and implementation of measurement 725 devices. 727 Examples of Run-time Parameters include IP addresses, measurement 728 point designations, start times and end times for measurement, and 729 other information essential to the method of measurement. 731 7.3.6. Role 733 In some methods of measurement, there may be several roles defined; 734 e.g., in a one-way packet delay active measurement, one 735 measurement agent generates the packets and another 736 receives them. This column contains the name of the role for 737 this particular entry. In the previous example, there should be two 738 entries in the registry, one for each role, so that when a 739 measurement agent is instructed to perform the one-way delay source 740 metric, it knows that it is supposed to generate packets. The values for 741 this field are defined in the reference method of measurement. 743 7.4. Output Category 745 For entries which involve a stream and many singleton measurements, a 746 statistic may be specified in this column to summarize the results to 747 a single value.
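As an illustration of such a summarizing statistic, the sketch below reduces a sample of singleton delay results to a single percentile value. The nearest-rank percentile method and the sample values are assumptions chosen for clarity, not a definition mandated by any Registry entry.

```python
# Hypothetical sketch: summarizing a sample of singleton one-way
# delay results (in milliseconds) into a single statistic, as an
# Output column might specify.  The nearest-rank percentile is one
# illustrative choice of statistic.
def percentile(sample, p):
    """Return the p-th percentile of the sample using nearest-rank."""
    ordered = sorted(sample)
    rank = max(1, -(-len(ordered) * p // 100))  # ceil(n * p / 100)
    return ordered[rank - 1]

delays_ms = [10.2, 11.0, 9.8, 50.3, 10.5, 10.9, 11.2, 10.1, 10.4, 10.7]
print(percentile(delays_ms, 50))  # prints 10.5
print(percentile(delays_ms, 99))  # prints 50.3
```

If an entry instead specifies "Raw" output, the complete ordered sample would be reported rather than a single value.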
If the complete set of measured singletons is 748 output, this will be specified here. 750 Some metrics embed one specific statistic in the reference metric 751 definition, while others allow several output types or statistics. 753 7.4.1. Value 755 This column contains the name of the output type. The output type 756 defines the type of result that the metric produces. It can be the 757 raw results, or it can be some form of statistic. The specification 758 of the output type must define the format of the output. In some 759 systems, format specifications will simplify both measurement 760 implementation and collection/storage tasks. Note that if two 761 different statistics are required from a single measurement (for 762 example, both "Xth percentile mean" and "Raw"), then a new output 763 type must be defined ("Xth percentile mean AND Raw"). 765 7.4.2. Data Format 767 This column provides the data format for the output. It is provided 768 to simplify the communication with collection systems and the 769 implementation of measurement devices. 771 7.4.3. Reference 773 This column contains a pointer to the specification where the output 774 type is defined. 776 7.4.4. Metric Units 778 The measured results must be expressed using some standard dimension 779 or units of measure. This column provides the units. 781 When a sample of singletons (see [RFC2330] for definitions of these 782 terms) is collected, this entry will specify the units for each 783 measured value. 785 7.5. Administrative Information 787 7.5.1. Status 789 The status of the specification of this Registered Performance 790 Metric. Allowed values are 'current' and 'deprecated'. All newly 791 defined Registered Performance Metrics have 'current' status. 793 7.5.2. Requester 795 The requester for the Registered Performance Metric. The requester 796 MAY be a document, such as an RFC, or a person. 798 7.5.3.
Revision 800 The revision number of a Registered Performance Metric, starting at 0 801 for Registered Performance Metrics at the time of definition and 802 incremented by one for each revision. 804 7.5.4. Revision Date 806 The date of acceptance or the most recent revision for the Registered 807 Performance Metric. 809 7.6. Comments and Remarks 811 Besides providing additional details which do not appear in other 812 categories, this open Category (single column) allows for unforeseen 813 issues to be addressed by simply updating this Informational entry. 815 8. The Life-Cycle of Registered Metrics 817 Once a Performance Metric or set of Performance Metrics has been 818 identified for a given application, candidate Registry entry 819 specifications in accordance with Section 7 are submitted to IANA to 820 follow the process for review by the Performance Metric Experts, as 821 defined below. This process is also used for other changes to the 822 Performance Metric Registry, such as deprecation or revision, as 823 described later in this section. 825 It is also desirable that the author(s) of a candidate Registry entry 826 seek review in the relevant IETF working group, or offer the 827 opportunity for review on the WG mailing list. 829 8.1. Adding new Performance Metrics to the Registry 831 Requests to add Registered Metrics to the Performance Metric 832 Registry are submitted to IANA, which forwards the request to a 833 designated group of experts (Performance Metric Experts) appointed by 834 the IESG; these are the reviewers called for by the Expert Review 835 [RFC5226] policy defined for the Performance Metric Registry. The 836 Performance Metric Experts review the request for such things as 837 compliance with this document, compliance with other applicable 838 Performance Metric-related RFCs, and consistency with the currently 839 defined set of Registered Performance Metrics.
841 Authors are expected to review compliance with the specifications in 842 this document to check their submissions before sending them to IANA. 844 The Performance Metric Experts should endeavor to complete referred 845 reviews in a timely manner. If the request is acceptable, the 846 Performance Metric Experts signify their approval to IANA, which 847 changes the Performance Metric Registry. If the request is not 848 acceptable, the Performance Metric Experts can coordinate with the 849 requester to change the request to be compliant. The Performance 850 Metric Experts may also choose in exceptional circumstances to reject 851 clearly frivolous or inappropriate change requests outright. 853 This process should not in any way be construed as allowing the 854 Performance Metric Experts to overrule IETF consensus. Specifically, 855 any Registered Metrics that were added with IETF consensus require 856 IETF consensus for revision or deprecation. 858 Decisions by the Performance Metric Experts may be appealed as in 859 Section 7 of RFC5226. 861 8.2. Revising Registered Performance Metrics 863 A request for Revision is ONLY permissible when the changes maintain 864 backward-compatibility with implementations of the prior Registry 865 entry describing a Registered Metric (entries with lower revision 866 numbers, but the same Identifier and Name). 868 The purpose of the Status field in the Performance Metric Registry is 869 to indicate whether the entry for a Registered Metric is 'current' or 870 'deprecated'. 872 Beyond this section, no policy is defined for revising 873 Performance Metric Registry entries or addressing errors therein. To be 874 clear, changes and deprecations within the Performance Metric Registry are not 875 encouraged, and should be avoided to the extent possible. However, 876 in recognition that change is inevitable, the provisions of this 877 section address the need for revisions.
879 Revisions are initiated by sending a candidate Registered Performance 880 Metric definition to IANA, as in Section X, identifying the existing 881 Registry entry. 883 The primary requirement in the definition of a policy for managing 884 changes to existing Registered Performance Metrics is avoidance of 885 interoperability problems; Performance Metric Experts must work to 886 maintain interoperability above all else. Changes to Registered 887 Performance Metrics may only be made in an interoperable way; 888 necessary changes that cannot be made in a way that allows 889 interoperability with unchanged implementations must result in the 890 creation of a new Registered Metric and possibly the deprecation of 891 the earlier metric. 893 A change to a Registered Performance Metric is held to be 894 backward-compatible only when: 896 1. "it involves the correction of an error that is obviously only 897 editorial; or" 899 2. "it corrects an ambiguity in the Registered Performance Metric's 900 definition, which itself leads to issues severe enough to prevent 901 the Registered Performance Metric's usage as originally defined; 902 or" 904 3. "it corrects missing information in the metric definition without 905 changing its meaning (e.g., the explicit definition of 'quantity' 906 semantics for numeric fields without a Data Type Semantics 907 value); or" 909 4. "it harmonizes with an external reference that was itself 910 corrected." 912 5. "BENOIT: NOTE THAT THERE ARE MORE RULES IN RFC 7013 SECTION 5 BUT 913 THEY WOULD ONLY APPLY TO THE ACTIVE/PASSIVE DRAFTS. TO BE 914 DISCUSSED." 916 If a change is deemed permissible by the Performance Metric Experts, 917 IANA makes the change in the Performance Metric Registry. The 918 requester of the change is appended to the requester in the Registry. 920 Each Registered Performance Metric in the Registry has a revision 921 number, starting at zero.
Each change to a Registered Performance 922 Metric following this process increments the revision number by one. 924 COMMENT: Al (and Phil) think we should keep old/revised entries as-is, 925 marked as deprecated >>>> Since any revision must be 926 interoperable according to the criteria above, there is no need for the 927 Performance Metric Registry to store information about old revisions. 929 When a revised Registered Performance Metric is accepted into the 930 Performance Metric Registry, the date of acceptance of the most 931 recent revision is placed into the Revision Date column of the 932 Registry for that Registered Performance Metric. 934 Where applicable, additions to Registry entries in the form of text 935 Comments or Remarks should include the date, but such additions need 936 not constitute a revision according to this process. 938 8.3. Deprecating Registered Performance Metrics 940 Changes that are not permissible under the above criteria for revising 941 Registered Metrics may only be handled by deprecation. A Registered 942 Performance Metric MAY be deprecated and replaced when: 944 1. "the Registered Performance Metric definition has an error or 945 shortcoming that cannot be permissibly changed as in 946 Section 'Revising Registered Performance Metrics'; or" 948 2. "the deprecation harmonizes with an external reference that was 949 itself deprecated through that reference's accepted deprecation 950 method." 952 A request for deprecation is sent to IANA, which passes it to the 953 Performance Metric Experts for review, as in Section 'The Process for 954 Review by the Performance Metric Experts'. When deprecating a 955 Performance Metric, the Performance Metric description in the 956 Performance Metric Registry must be updated to explain the 957 deprecation, as well as to refer to any new Performance Metrics 958 created to replace the deprecated Performance Metric.
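An application that resolves metric names against the Registry can surface this deprecation information to users. The sketch below is a hypothetical illustration: the in-memory dictionary layout, the entry name, and the lookup function are assumptions, not part of the Registry specification; only the Status values ('current' and 'deprecated') come from this document.

```python
# Hypothetical sketch: warn when a deprecated Registry entry is used.
# The structure and entry name are illustrative assumptions; the
# Status values ('current' / 'deprecated') are from the draft.
import warnings

REGISTRY = {
    "Act_Example_Metric": {
        "status": "deprecated",
        "description": "Deprecated; see the replacement entry noted "
                       "in the Registry description.",
    },
}

def lookup_metric(name):
    """Resolve a metric name, warning if the entry is deprecated."""
    entry = REGISTRY[name]
    if entry["status"] == "deprecated":
        # Produce the human-readable warning (or log entry) that the
        # draft expects on use of a deprecated Registered Metric.
        warnings.warn("Registered Metric %r is deprecated: %s"
                      % (name, entry["description"]))
    return entry
```

The lookup still returns the entry, so existing measurements keep working while the deprecation is made visible.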
960 The revision number of a Registered Performance Metric is incremented 961 upon deprecation, and the Revision Date updated, as with any 962 revision. 964 The use of deprecated Registered Metrics should result in a log entry 965 or human-readable warning by the respective application. 967 Names and Metric IDs of deprecated Registered Metrics must not be 968 reused. 970 9. Performance Metric Registry and other Registries 972 BENOIT: TBD. 974 THE BASIC IDEA IS THAT PEOPLE COULD DIRECTLY DEFINE PERF. METRICS IN 975 OTHER EXISTING REGISTRIES, FOR SPECIFIC PROTOCOL/ENCODING. EXAMPLE: 976 IPFIX. IDEALLY, ALL PERF. METRICS SHOULD BE DEFINED IN THIS 977 REGISTRY AND REFERRED TO FROM OTHER REGISTRIES. 979 10. Security Considerations 981 This draft does not introduce any new security considerations for the 982 Internet. However, the definition of Performance Metrics may 983 introduce some security concerns, and should be reviewed with 984 security in mind. 986 11. IANA Considerations 988 This document specifies the procedure for Performance Metrics 989 Registry setup. IANA is requested to create a new Registry for 990 Performance Metrics called "Registered Performance Metrics" with the 991 columns defined in Section 7. 993 New assignments for the Performance Metric Registry will be administered 994 by IANA through Expert Review [RFC5226], i.e., review by one of a 995 group of experts, the Performance Metric Experts, appointed by the 996 IESG upon recommendation of the Transport Area Directors. The 997 experts will initially be drawn from the Working Group Chairs and 998 document editors of the Performance Metrics Directorate [performance- 999 metrics-directorate]. 1001 This document requests the allocation of the URI prefix 1002 urn:ietf:params:ippm:metric for the purpose of generating URIs for 1003 registered metrics. 1005 12. Acknowledgments 1007 Thanks to Brian Trammell and Bill Cerveny, IPPM chairs, for leading 1008 some brainstorming sessions on this topic. 1010 13.
References 1012 13.1. Normative References 1014 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1015 Requirement Levels", BCP 14, RFC 2119, March 1997. 1017 [RFC2026] Bradner, S., "The Internet Standards Process -- Revision 1018 3", BCP 9, RFC 2026, October 1996. 1020 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, 1021 "Framework for IP Performance Metrics", RFC 2330, May 1022 1998. 1024 [RFC4148] Stephan, E., "IP Performance Metrics (IPPM) Metrics 1025 Registry", BCP 108, RFC 4148, August 2005. 1027 [RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an 1028 IANA Considerations Section in RFCs", BCP 26, RFC 5226, 1029 May 2008. 1031 [RFC6248] Morton, A., "RFC 4148 and the IP Performance Metrics 1032 (IPPM) Registry of Metrics Are Obsolete", RFC 6248, April 1033 2011. 1035 [RFC6390] Clark, A. and B. Claise, "Guidelines for Considering New 1036 Performance Metric Development", BCP 170, RFC 6390, 1037 October 2011. 1039 [RFC6576] Geib, R., Morton, A., Fardid, R., and A. Steinmitz, "IP 1040 Performance Metrics (IPPM) Standard Advancement Testing", 1041 BCP 176, RFC 6576, March 2012. 1043 [RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform 1044 Resource Identifier (URI): Generic Syntax", STD 66, RFC 1045 3986, January 2005. 1047 [RFC2141] Moats, R., "URN Syntax", RFC 2141, May 1997. 1049 13.2. Informative References 1051 [RFC3611] Friedman, T., Caceres, R., and A. Clark, "RTP Control 1052 Protocol Extended Reports (RTCP XR)", RFC 3611, November 1053 2003. 1055 [RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. 1056 Jacobson, "RTP: A Transport Protocol for Real-Time 1057 Applications", STD 64, RFC 3550, July 2003. 1059 [RFC6035] Pendleton, A., Clark, A., Johnston, A., and H. Sinnreich, 1060 "Session Initiation Protocol Event Package for Voice 1061 Quality Reporting", RFC 6035, November 2010. 1063 [I-D.ietf-lmap-framework] 1064 Eardley, P., Morton, A., Bagnulo, M., Burbridge, T., 1065 Aitken, P., and A. 
Akhter, "A framework for large-scale 1066 measurement platforms (LMAP)", draft-ietf-lmap- 1067 framework-08 (work in progress), August 2014. 1069 [RFC5477] Dietz, T., Claise, B., Aitken, P., Dressler, F., and G. 1070 Carle, "Information Model for Packet Sampling Exports", 1071 RFC 5477, March 2009. 1073 [RFC5102] Quittek, J., Bryant, S., Claise, B., Aitken, P., and J. 1074 Meyer, "Information Model for IP Flow Information Export", 1075 RFC 5102, January 2008. 1077 [RFC6792] Wu, Q., Hunt, G., and P. Arden, "Guidelines for Use of the 1078 RTP Monitoring Framework", RFC 6792, November 2012. 1080 [RFC5905] Mills, D., Martin, J., Burbank, J., and W. Kasch, "Network 1081 Time Protocol Version 4: Protocol and Algorithms 1082 Specification", RFC 5905, June 2010. 1084 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation 1085 Metric for IP Performance Metrics (IPPM)", RFC 3393, 1086 November 2002. 1088 [RFC6776] Clark, A. and Q. Wu, "Measurement Identity and Information 1089 Reporting Using a Source Description (SDES) Item and an 1090 RTCP Extended Report (XR) Block", RFC 6776, October 2012. 1092 [RFC7003] Clark, A., Huang, R., and Q. Wu, "RTP Control Protocol 1093 (RTCP) Extended Report (XR) Block for Burst/Gap Discard 1094 Metric Reporting", RFC 7003, September 2013. 1096 [RFC3432] Raisanen, V., Grotefeld, G., and A. Morton, "Network 1097 performance measurement with periodic streams", RFC 3432, 1098 November 2002. 1100 [RFC4566] Handley, M., Jacobson, V., and C. Perkins, "SDP: Session 1101 Description Protocol", RFC 4566, July 2006. 1103 [RFC5481] Morton, A. and B. Claise, "Packet Delay Variation 1104 Applicability Statement", RFC 5481, March 2009. 1106 Authors' Addresses 1108 Marcelo Bagnulo 1109 Universidad Carlos III de Madrid 1110 Av. Universidad 30 1111 Leganes, Madrid 28911 1112 SPAIN 1114 Phone: 34 91 6249500 1115 Email: marcelo@it.uc3m.es 1116 URI: http://www.it.uc3m.es 1118 Benoit Claise 1119 Cisco Systems, Inc. 
1120 De Kleetlaan 6a b1 1121 1831 Diegem 1122 Belgium 1124 Email: bclaise@cisco.com 1126 Philip Eardley 1127 BT 1128 Adastral Park, Martlesham Heath 1129 Ipswich 1130 ENGLAND 1132 Email: philip.eardley@bt.com 1134 Al Morton 1135 AT&T Labs 1136 200 Laurel Avenue South 1137 Middletown, NJ 1138 USA 1140 Email: acmorton@att.com 1141 Aamer Akhter 1142 Cisco Systems, Inc. 1143 7025 Kit Creek Road 1144 RTP, NC 27709 1145 USA 1147 Email: aakhter@cisco.com