2 Network Working Group M. Bagnulo 3 Internet-Draft UC3M 4 Intended status: Best Current Practice B. Claise 5 Expires: August 20, 2015 Cisco Systems, Inc. 6 P. Eardley 7 BT 8 A. Morton 9 AT&T Labs 10 A. Akhter 11 Cisco Systems, Inc. 12 February 16, 2015 14 Registry for Performance Metrics 15 draft-ietf-ippm-metric-registry-02 17 Abstract 19 This document defines the IANA Registry for Performance Metrics. 20 This document also gives a set of guidelines for Registered 21 Performance Metric requesters and reviewers.
23 Status of This Memo 25 This Internet-Draft is submitted in full conformance with the 26 provisions of BCP 78 and BCP 79. 28 Internet-Drafts are working documents of the Internet Engineering 29 Task Force (IETF). Note that other groups may also distribute 30 working documents as Internet-Drafts. The list of current Internet- 31 Drafts is at http://datatracker.ietf.org/drafts/current/. 33 Internet-Drafts are draft documents valid for a maximum of six months 34 and may be updated, replaced, or obsoleted by other documents at any 35 time. It is inappropriate to use Internet-Drafts as reference 36 material or to cite them other than as "work in progress." 38 This Internet-Draft will expire on August 20, 2015. 40 Copyright Notice 42 Copyright (c) 2015 IETF Trust and the persons identified as the 43 document authors. All rights reserved. 45 This document is subject to BCP 78 and the IETF Trust's Legal 46 Provisions Relating to IETF Documents 47 (http://trustee.ietf.org/license-info) in effect on the date of 48 publication of this document. Please review these documents 49 carefully, as they describe your rights and restrictions with respect 50 to this document. Code Components extracted from this document must 51 include Simplified BSD License text as described in Section 4.e of 52 the Trust Legal Provisions and are provided without warranty as 53 described in the Simplified BSD License. 55 Table of Contents 57 1. Open Issues . . . . . . . . . . . . . . . . . . . . . . . . . 3 58 2. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 59 3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4 60 4. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 61 5. Motivation for a Performance Metrics Registry . . . . . . . . 6 62 5.1. Interoperability . . . . . . . . . . . . . . . . . . . . 7 63 5.2. Single point of reference for Performance Metrics . . . . 7 64 5.3. Side benefits . . . . . . . . . . . . . . . . . . . . . . 8 65 6. 
Criteria for Performance Metrics Registration . . . . . . . . 8 66 7. Performance Metric Registry: Prior attempt . . . . . . . . . 9 67 7.1. Why this Attempt Will Succeed . . . . . . . . . . . . . . 9 68 8. Definition of the Performance Metric Registry . . . . . . . . 10 69 8.1. Summary Category . . . . . . . . . . . . . . . . . . . . 11 70 8.1.1. Identifier . . . . . . . . . . . . . . . 11 71 8.1.2. Name . . . . . . . . . . . . . . . . . . . . 12 72 8.1.3. URI . . . . . . . . . . . . . . . . . . . . . 12 73 8.1.4. Description . . . . . . . . . . . . . . . . . 13 74 8.2. Metric Definition Category . . . . . . . . . . . . . . . 13 75 8.2.1. Reference Definition . . . . . . . . . . . . . . . . 13 76 8.2.2. Fixed Parameters . . . . . . . . . . . . . . . . . . 13 77 8.3. Method of Measurement Category . . . . . . . . . . . . . 14 78 8.3.1. Reference Method . . . . . . . . . . . . . . . . . . 14 79 8.3.2. Packet Generation Stream . . . . . . . . . . . . . . 14 80 8.3.3. Traffic Filter . . . . . . . . . . . . . . . . . . . 15 81 8.3.4. Sampling distribution . . . . . . . . . . . . . . . . 15 82 8.3.5. Run-time Parameters . . . . . . . . . . . . . . . . . 15 83 8.3.6. Role . . . . . . . . . . . . . . . . . . . . 15 84 8.4. Output Category . . . . . . . . . . . . . . . . . . . . . 16 85 8.4.1. Value . . . . . . . . . . . . . . . . . . . . . 16 86 8.4.2. Reference . . . . . . . . . . . . . . . . . . . . . . 16 87 8.4.3. Metric Units . . . . . . . . . . . . . . . . . . . . 16 88 8.5. Administrative information . . . . . . . . . . . . . . . 16 89 8.5.1. Status . . . . . . . . . . . . . . . . . . . . . 16 90 8.5.2. Requester . . . . . . . . . . . . . . . . . . . . . . 17 91 8.5.3. Revision . . . . . . . . . . . . . . . . . . . . 17 92 8.5.4. Revision Date . . . . . . . . . . . . . . . . . . . . 17 93 8.6. Comments and Remarks . . . . . . . . . . . . . . . . . . 17 94 9. The Life-Cycle of Registered Metrics . . . . . . 
. . . . . 17 95 9.1. Adding new Performance Metrics to the Registry . . . . . 17 96 9.2. Revising Registered Performance Metrics . . . . . . . . . 18 97 9.3. Deprecating Registered Performance Metrics . . . . . . . 20 98 10. Security considerations . . . . . . . . . . . . . . . . . . . 20 99 11. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 21 100 12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 21 101 13. References . . . . . . . . . . . . . . . . . . . . . . . . . 21 102 13.1. Normative References . . . . . . . . . . . . . . . . . . 21 103 13.2. Informative References . . . . . . . . . . . . . . . . . 22 104 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 23 106 1. Open Issues 108 1. Define the Filter column subcolumns, i.e. how filters are 109 expressed. 111 2. Need to include an example for a name for a passive metric. 113 3. Shall we remove the definitions of active and passive? If we 114 remove them, shall we keep all the related comments in the draft? 116 4. URL: should we include a URL link in each registry entry with a 117 URL specific to the entry that links to a different text page 118 that contains all the details of the registry entry, as in 119 http://www.iana.org/assignments/xml-registry/xml- 120 registry.xhtml#ns 122 2. Introduction 124 The IETF specifies and uses Performance Metrics of protocols and 125 applications transported over its protocols. Performance Metrics are 126 such an important part of the operations of IETF protocols that 127 [RFC6390] specifies guidelines for their development. 129 The definition and use of Performance Metrics in the IETF happen in 130 various working groups (WG), most notably: 132 The "IP Performance Metrics" (IPPM) WG is the WG primarily 133 focusing on Performance Metrics definition at the IETF. 
135 The "Metric Blocks for use with RTCP's Extended Report Framework" 136 (XRBLOCK) WG recently specified many Performance Metrics related 137 to "RTP Control Protocol Extended Reports (RTCP XR)" [RFC3611], 138 which establishes a framework to allow new information to be 139 conveyed in RTCP, supplementing the original report blocks defined 140 in "RTP: A Transport Protocol for Real-Time Applications", 141 [RFC3550]. 143 The "Benchmarking Methodology" WG (BMWG) defined many Performance 144 Metrics for use in laboratory benchmarking of inter-networking 145 technologies. 147 In the "IP Flow Information eXport" (IPFIX) WG, Information Elements 148 related to Performance Metrics are currently proposed. 150 The concluded "Performance Metrics for Other Layers" (PMOL) WG 151 defined some Performance Metrics related to Session Initiation 152 Protocol (SIP) voice quality [RFC6035]. 154 It is expected that more Performance Metrics will be defined in the 155 future, not only IP-based metrics, but also metrics which are 156 protocol-specific and application-specific. 158 However, despite the importance of Performance Metrics, there are two 159 related problems for the industry. First, how to ensure that when 160 one party requests another party to measure (or report or in some way 161 act on) a particular Performance Metric, then both parties have 162 exactly the same understanding of what Performance Metric is being 163 referred to. Second, how to discover which Performance Metrics have 164 been specified, so as to avoid developing a new Performance Metric 165 that is very similar. The problems can be addressed by creating a 166 registry of performance metrics. The usual way in which the IETF 167 organizes namespaces is with Internet Assigned Numbers Authority 168 (IANA) registries, and there is currently no Performance Metrics 169 Registry maintained by the IANA. 171 This document therefore creates a Performance Metrics Registry. 
It 172 also provides best practices on how to specify new entries in the 173 Performance Metrics Registry or update existing ones. 175 3. Terminology 177 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 178 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 179 "OPTIONAL" in this document are to be interpreted as described in 180 [RFC2119]. 182 Performance Metric: A Performance Metric is a quantitative measure 183 of performance, targeted to an IETF-specified protocol or targeted 184 to an application transported over an IETF-specified protocol. 185 Examples of Performance Metrics are the FTP response time for a 186 complete file download, the DNS response time to resolve the IP 187 address, a database logging time, etc. This definition is 188 consistent with the definition of metric in [RFC2330] and broader 189 than the definition of performance metric in [RFC6390]. 191 Registered Performance Metric: A Registered Performance Metric (or 192 Registered Metric) is a Performance Metric expressed as an entry 193 in the Performance Metric Registry, administered by IANA. Such a 194 performance metric has met all the registry review criteria 195 defined in this document in order to be included in the registry. 197 Performance Metrics Registry: The IANA registry containing 198 Registered Performance Metrics. In this document, it is also 199 called simply "Registry". 201 Proprietary Registry: A set of metrics that are registered in a 202 proprietary registry, as opposed to the Performance Metrics Registry. 204 Performance Metrics Experts: The Performance Metrics Experts are a 205 group of experts selected by the IESG to validate the Performance 206 Metrics before updating the Performance Metrics Registry. The 207 Performance Metrics Experts work closely with IANA. 209 Parameter: An input factor defined as a variable in the definition 210 of a metric. 
A numerical or other specified factor forming one of 211 a set that defines a metric or sets the conditions of its 212 operation. All Parameters must be known in order to measure using a 213 metric and to interpret the results. Although Parameters do not change the 214 fundamental nature of the metric's definition, some have 215 substantial influence on the network property being assessed and 216 interpretation of the results. 218 Consider packet loss in the following two active 219 measurement cases. The first case is packet loss as background 220 loss where the parameter set includes a very sparse Poisson 221 stream, and only characterizes the times when packets were 222 lost. Actual user streams likely see much higher loss at these 223 times, due to tail drop or radio errors. The second case is 224 packet loss as inverse of Throughput where the parameter set 225 includes a very dense, bursty stream, and characterizes the 226 loss experienced by a stream that approximates a user stream. 227 These are both "loss metrics", but the difference in 228 interpretation of the results is highly dependent on the 229 Parameters (at least), to the extreme where we are actually 230 using loss to infer its complement: delivered throughput. 232 Active Measurement Method: Methods of Measurement conducted on 233 traffic which serves only the purpose of measurement and is 234 generated for that reason alone, and whose traffic characteristics 235 are known a priori. Examples of Active Measurement Methods are 236 the measurement methods for the One-way Delay metric defined 237 in [RFC2679] and the one for round-trip delay defined in 238 [RFC2681]. 240 Passive Measurement Method: Methods of Measurement conducted on 241 network traffic, generated either from the end users or from 242 network elements. One characteristic of Passive Measurement 243 Methods is that sensitive information may be observed, and as a 244 consequence, stored in the measurement system. 
246 Hybrid Measurement Method: Methods of Measurement which use a 247 combination of Active Measurement and Passive Measurement methods. 249 4. Scope 251 This document is meant for two different audiences. For those 252 defining new Registered Performance Metrics, it provides 253 specifications and best practices to be used in deciding which 254 Registered Metrics are useful for a measurement study, instructions 255 for writing the text for each column of the Registered Metrics, and 256 information on the supporting documentation required for the new 257 Registry entry (up to and including the publication of one or more 258 RFCs or I-Ds describing it). For the appointed Performance Metrics 259 Experts and for IANA personnel administering the new IANA Performance 260 Metric Registry, it defines a set of acceptance criteria against 261 which these proposed Registry Entries should be evaluated. 263 This document specifies a Performance Metrics Registry in IANA. This 264 Performance Metric Registry is applicable to Performance Metrics 265 derived from Active Measurement, Passive Measurement, end-point 266 calculation, or any other form of Performance Metric. This registry 267 is designed to encompass Performance Metrics developed throughout the 268 IETF and especially for the following existing working groups: IPPM, 269 XRBLOCK, IPFIX, and BMWG. This document analyzes a prior attempt to 270 set up a Performance Metric Registry, and the reasons why this design 271 was inadequate [RFC6248]. Finally, this document gives a set of 272 guidelines for requesters and expert reviewers of candidate 273 Registered Performance Metrics. 275 This document makes no attempt to populate the Registry with initial 276 entries. It does provide a few examples that are merely 277 illustrations and should not be included in the registry at this 278 point in time. 280 Based on [RFC5226] Section 4.3, this document is processed as Best 281 Current Practice (BCP) [RFC2026]. 283 5. 
Motivation for a Performance Metrics Registry 285 In this section, we detail several motivations for the Performance 286 Metric Registry. 288 5.1. Interoperability 290 As with any IETF registry, the primary use for a registry is to manage a 291 namespace for its use within one or more protocols. In the 292 particular case of the Performance Metric Registry, there are two 293 types of protocols that will use the Performance Metrics in the 294 Registry during their operation (by referring to the Index values): 296 o Control protocol: this type of protocol is used to allow one 297 entity to request another entity to perform a measurement using a 298 specific metric defined by the Registry. One particular example 299 is the LMAP framework [I-D.ietf-lmap-framework]. Using the LMAP 300 terminology, the Registry is used in the LMAP Control protocol to 301 allow a Controller to request a measurement task from one or more 302 Measurement Agents. In order to enable this use case, the entries 303 of the Performance Metric Registry must be well enough defined to 304 allow a Measurement Agent implementation to trigger a specific 305 measurement task upon the reception of a control protocol message. 306 This requirement heavily constrains the type of entries that are 307 acceptable for the Performance Metric Registry. 309 o Report protocol: this type of protocol is used to allow an entity 310 to report measurement results to another entity. By referencing 311 a specific Performance Metric Registry entry, it is possible to 312 properly characterize the measurement result data being 313 transferred. Using the LMAP terminology, the Registry is used in 314 the Report protocol to allow a Measurement Agent to report 315 measurement results to a Collector. 317 5.2. Single point of reference for Performance Metrics 319 A Registry for Performance Metrics serves as a single point of 320 reference for Performance Metrics defined in different working groups 321 in the IETF. 
As we mentioned earlier, there are several WGs that 322 define Performance Metrics in the IETF and it is hard to keep track 323 of all of them. This results in multiple definitions of similar metrics 324 that attempt to measure the same phenomena but in slightly different 325 (and incompatible) ways. Having a Registry would allow both the IETF 326 community and external people to have a single list of relevant 327 Performance Metrics defined by the IETF (and others, where 328 appropriate). The single list is also an essential aspect of 329 communication about metrics, where different entities that request 330 measurements, execute measurements, and report the results can 331 benefit from a common understanding of the referenced metric. 333 5.3. Side benefits 335 There are a couple of side benefits of having such a Registry. 336 First, the Registry could serve as an inventory of useful and used 337 metrics, which are normally supported by different implementations of 338 measurement agents. Second, the results of measurements using the 339 metrics would be comparable even if they are performed by different 340 implementations and in different networks, as the metric is properly 341 defined. BCP 176 [RFC6576] examines whether the results produced by 342 independent implementations are equivalent in the context of 343 evaluating the completeness and clarity of metric specifications. 344 This BCP defines the standards track advancement testing for (active) 345 IPPM metrics, and the same process will likely suffice to determine 346 whether Registry entries are sufficiently well specified to result in 347 comparable (or equivalent) results. Registry entries which have 348 undergone such testing SHOULD be noted, with a reference to the test 349 results. 351 6. Criteria for Performance Metrics Registration 353 It is neither possible nor desirable to populate the Registry with 354 all combinations of input parameters of all Performance Metrics. 
The 355 Registered Performance Metrics should be: 357 1. interpretable by the user, 359 2. implementable by the software designer, 361 3. deployable by network operators, without major impact on the 362 networks, 364 4. accurate, for interoperability and deployment across vendors, 366 5. operationally useful, so that it has significant industry 367 interest and/or has seen deployment, 369 6. sufficiently tightly defined, so that different values for the 370 Run-time Parameters do not change the fundamental nature of the 371 measurement, nor change the practicality of its implementation. 373 In essence, there needs to be evidence that a candidate Registry 374 entry has significant industry interest, or has seen deployment, and 375 there is agreement that the candidate Registered Metric serves its 376 intended purpose. 378 7. Performance Metric Registry: Prior attempt 380 There was a previous attempt to define a metric registry, RFC 4148 381 [RFC4148]. However, it was obsoleted by RFC 6248 [RFC6248] because 382 it was "found to be insufficiently detailed to uniquely identify IPPM 383 metrics... [there was too much] variability possible when 384 characterizing a metric exactly", which led to the RFC 4148 registry 385 having "very few users, if any". 387 A couple of interesting additional quotes from RFC 6248 might help 388 understand the issues related to that registry. 390 1. "It is not believed to be feasible or even useful to register 391 every possible combination of Type P, metric parameters, and 392 Stream parameters using the current structure of the IPPM Metrics 393 Registry." 395 2. "The registry structure has been found to be insufficiently 396 detailed to uniquely identify IPPM metrics." 398 3. "Despite apparent efforts to find current or even future users, 399 no one responded to the call for interest in the RFC 4148 400 registry during the second half of 2010." 
402 The current approach learns from this by tightly defining each entry 403 in the registry with only a few variable (Run-time) Parameters to be 404 specified by the measurement designer, if any. The idea is that 405 entries in the Registry stem from different measurement methods which 406 require input (Run-time) parameters to set factors like source and 407 destination addresses (which do not change the fundamental nature of 408 the measurement). The downside of this approach is that it could 409 result in a large number of entries in the Registry. There is 410 agreement that less is more in this context - it is better to have a 411 reduced set of useful metrics rather than a large set of metrics, 412 some with questionable usefulness. Therefore, this document 413 specifies that the Registry only includes metrics that are well defined 414 and that have proven to be operationally useful. In order to assure 415 these two characteristics, a set of experts is required to review 416 the allocation request to verify that the metric is well defined and 417 operationally useful. 419 7.1. Why this Attempt Will Succeed 421 The Registry defined in this document addresses the main issues 422 identified in the previous attempt. As mentioned in the previous 423 section, one of the main issues with the previous registry was that 424 the metrics contained in the registry were too generic to be useful. 425 In this Registry, registration requests are evaluated by an expert 426 group, the Performance Metrics Experts, who will make sure that the 427 metric is properly defined. This document provides guidelines to 428 assess if a metric is properly defined. 430 Another key difference between this attempt and the previous one is 431 that in this case there is at least one clear user for the Registry: 432 the LMAP framework and protocol. 
Because the LMAP protocol will use 433 the Registry values in its operation, this actually helps to 434 determine if a metric is properly defined. In particular, since we 435 expect that the LMAP control protocol will enable a controller to 436 request a measurement agent to perform a measurement using a given 437 metric by embedding the Performance Metric Registry value in the 438 protocol, a metric is properly specified if it is defined well enough 439 that it is possible (and practical) to implement the metric in the 440 measurement agent. This was the failure of the previous attempt: a 441 registry entry with an undefined Type-P (section 13 of RFC 2330 442 [RFC2330]) allows implementations to be ambiguous. 444 8. Definition of the Performance Metric Registry 446 In this section we define the columns of the Performance Metric 447 Registry. This registry will contain all Registered Performance 448 Metrics, including active, passive, hybrid, and endpoint metrics and any 449 other type of performance metric that can be envisioned. Because of 450 that, it may be the case that some of the columns defined are not 451 applicable for a given type of metric. If this is the case, the 452 column(s) SHOULD be populated with the "NA" (Not Applicable) value. 453 However, the "NA" value MUST NOT be used by any metric in the 454 following columns: Identifier, Name, URI, Status, Requester, 455 Revision, Revision Date, Description and Reference Specification. 456 In addition, it may be possible that in the future, a new 457 type of metric requires additional columns. Should that be the case, 458 it is possible to add new columns to the registry. The specification 459 defining the new column(s) must define how to populate the new 460 column(s) for existing entries. 462 The columns of the Performance Metric Registry are defined next. The 463 columns are grouped into "Categories" to facilitate the use of the 464 registry. 
Categories are described at the 8.x heading level, and 465 columns are at the 8.x.y heading level. The figure below illustrates 466 this organization. An entry (row) therefore gives a complete 467 description of a Registered Metric. 469 Each column serves as a check-list item and helps to avoid omissions 470 during registration and expert review.

472 Registry Categories and Columns, shown as
473     Category
474     ------------------
475     Column | Column |

477     Summary
478     -------------------------------
479     ID | Name | URI | Description |

481     Metric Definition
482     -----------------------------------------
483     Reference Definition | Fixed Parameters |

485     Method of Measurement
486     ---------------------------------------------------------------
487     Reference | Packet     | Traffic | Sampling | Run-time | Role |
488     Method    | Generation | Filter  | dist.    | Param    |      |
489               | Stream     |

490     Output
491     -----------------------------
492     | Type | Reference  | Units |
493     |      | Definition |       |

495     Administrative information
496     ----------------------------------
497     Status |Request | Rev | Rev.Date |

499     Comments and Remarks
500     --------------------

502 8.1. Summary Category 504 8.1.1. Identifier 506 A numeric identifier for the Registered Performance Metric. This 507 identifier MUST be unique within the Performance Metric Registry. 509 The Registered Performance Metric unique identifier is a 16-bit 510 integer (range 0 to 65535). When adding new Registered Performance 511 Metrics to the Performance Metric Registry, IANA should assign the 512 lowest available identifier to the next Registered Performance 513 Metric. 515 8.1.2. Name 517 As the name of a Registered Performance Metric is the first thing a 518 potential implementor will use when determining whether it is 519 suitable for a given application, it is important to be as precise 520 and descriptive as possible. 522 New names of Registered Performance Metrics: 524 1. 
"MUST be chosen carefully to describe the Registered Performance 525 Metric and the context in which it will be used." 527 2. "MUST be unique within the Performance Metric Registry." 529 3. "MUST use capital letters for the first letter of each 530 component. All other letters MUST be lowercase, even for acronyms. 531 Exceptions are made for acronyms containing a mixture of 532 lowercase and capital letters, such as 'IPv4' and 'IPv6'." 534 4. MUST use '_' between each component of the Registered Performance 535 Metric name. 537 5. MUST start with the prefix Act_ for an active measurement Registered 538 Performance Metric. 540 6. MUST start with the prefix Pas_ for a passive monitoring Registered 541 Performance Metric. 543 7. Other types of metrics should define a proper prefix for 544 identifying the type. 546 8. The remaining rules for naming are left for the Performance 547 Metrics Experts to determine as they gather experience, so this is an 548 area of planned update by a future RFC. 550 An example is "Act_UDP_Latency_Poisson_99mean" for an active 551 monitoring UDP latency metric using a Poisson stream of packets and 552 producing the 99th percentile mean as output. 554 8.1.3. URI 556 The URI column MUST contain a URI [RFC 3986] that uniquely identifies 557 the metric. The URI is a URN [RFC 2141]. The URI is automatically 558 generated by prepending the prefix urn:ietf:params:ippm:metric: to 559 the metric name. The resulting URI is globally unique. 561 8.1.4. Description 563 A Registered Performance Metric Description is a written 564 representation of a particular Registry entry. It supplements the 565 metric name to help Registry users select relevant Registered 566 Performance Metrics. 568 8.2. 
Metric Definition Category 570 This category includes columns to prompt all necessary details 571 related to the metric definition, including the RFC reference and 572 values of input factors, called fixed parameters, which are left open 573 in the RFC but have a particular value defined by the performance 574 metric. 576 8.2.1. Reference Definition 578 This entry provides a reference (or references) to the relevant 579 section(s) of the document(s) that define the metric, as well as any 580 supplemental information needed to ensure an unambiguous definition 581 for implementations. The reference needs to be an immutable 582 document, such as an RFC; for other standards bodies, it is likely to 583 be necessary to reference a specific, dated version of a 584 specification. 586 8.2.2. Fixed Parameters 588 Fixed Parameters are input factors whose value must be specified in 589 the Registry. The measurement system uses these values. 591 Where referenced metrics supply a list of Parameters as part of their 592 descriptive template, a subset of the Parameters will be designated 593 as Fixed Parameters. For example, for active metrics, Fixed 594 Parameters determine most or all of the IPPM Framework convention 595 "packets of Type-P" as described in [RFC2330], such as transport 596 protocol, payload length, TTL, etc. An example for passive metrics 597 is RTP packet loss calculation, which relies on the validation of a 598 packet as RTP, a multi-packet validation controlled by 599 MIN_SEQUENTIAL as defined by [RFC3550]. Varying MIN_SEQUENTIAL 600 values can alter the loss report, and this value could be set as a 601 Fixed Parameter. 603 A Parameter which is a Fixed Parameter for one Registry entry may be 604 designated as a Run-time Parameter for another Registry entry. 606 8.3. 
Method of Measurement Category 608 This category includes columns for references to relevant sections of 609 the RFC(s) and any supplemental information needed to ensure an 610 unambiguous method for implementations. 612 8.3.1. Reference Method 614 This entry provides references to relevant sections of the RFC(s) 615 describing the method of measurement, as well as any supplemental 616 information needed to ensure unambiguous interpretation for 617 implementations referring to the RFC text. 619 Specifically, this section should include pointers to pseudocode or 620 actual code that could be used for an unambigious implementation. 622 8.3.2. Packet Generation Stream 624 This column applies to metrics that generate traffic for a part of 625 their Measurement Method purposes including but not necessarily 626 limited to Active metrics. The generated traffic is referred as 627 stream and this columns describe its characteristics. 629 Each entry for this column contains the following information: 631 o Value: The name of the packet stream scheduling discipline 633 o Stream Parameters: The values and formats of input factors for 634 each type of stream. For example, the average packet rate and 635 distribution truncation value for streams with Poisson-distributed 636 inter-packet sending times. 638 o Reference: the specification where the stream is defined 640 The simplest example of stream specification is Singleton scheduling 641 (see [RFC2330]), where a single atomic measurement is conducted. 642 Each atomic measurement could consist of sending a single packet 643 (such as a DNS request) or sending several packets (for example, to 644 request a webpage). Other streams support a series of atomic 645 measurements in a "sample", with a schedule defining the timing 646 between each transmitted packet and subsequent measurement. 
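   A schedule of the kind described above can be sketched in a few lines
   of Python for the Poisson case of [RFC2330].  This is an illustration
   only: the function name and its parameters (rate, truncation,
   n_packets) are assumptions chosen to mirror the Stream Parameters
   mentioned above (average packet rate and distribution truncation
   value), not part of any specification.

```python
import random

def poisson_schedule(rate, truncation, n_packets, seed=None):
    """Return packet send times (in seconds) for a Poisson stream.

    Inter-packet intervals are exponentially distributed with an
    average of 'rate' packets per second, with each interval capped
    at the 'truncation' value, as a stream entry might specify.
    """
    rng = random.Random(seed)
    times, now = [], 0.0
    for _ in range(n_packets):
        # Draw an exponential gap, then apply the truncation value.
        gap = min(rng.expovariate(rate), truncation)
        now += gap
        times.append(now)
    return times

# Example: a 5-packet sample at an average of 10 packets/second,
# with no inter-packet gap longer than 1 second.
schedule = poisson_schedule(rate=10.0, truncation=1.0, n_packets=5, seed=1)
```

   A Periodic stream [RFC3432] would replace the exponential draw with a
   constant inter-packet interval; the point is that the column records
   which discipline applies and the parameter values it needs.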
   Principally, two different streams are used in IPPM metrics: Poisson
   distributed, as described in [RFC2330], and Periodic, as described in
   [RFC3432].  Both Poisson and Periodic have their own unique
   parameters, and the relevant set of values is specified in this
   column.

8.3.3.  Traffic Filter

   This column applies to metrics that observe packets flowing through
   (the device hosting) the measurement agent, i.e., traffic that is not
   necessarily addressed to the measurement agent.  This includes, but
   is not limited to, Passive Metrics.  The filter specifies the traffic
   that is measured.  This includes protocol field values/ranges, such
   as address ranges, and flow or session identifiers.

8.3.4.  Sampling Distribution

   The sampling distribution defines which of the packets matching the
   Traffic Filter are actually used for the measurement.  One
   possibility is "all", which implies that every packet matching the
   Traffic Filter is considered, but there may be other sampling
   strategies.  It includes the following information:

   Value: the name of the sampling distribution.

   Parameters: if any.

   Reference definition: a pointer to the specification where the
   sampling distribution is properly defined.

8.3.5.  Run-time Parameters

   Run-Time Parameters are input factors that must be determined,
   configured into the measurement system, and reported with the results
   for the context to be complete.  However, the values of these
   parameters are not specified in the Registry; rather, these
   parameters are listed as an aid to the measurement system implementor
   or user (they must be left as variables and supplied on execution).

   Where metrics supply a list of Parameters as part of their
   descriptive template, a sub-set of the Parameters will be designated
   as Run-Time Parameters.
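   One way to picture the split between Fixed Parameters (values
   recorded in the Registry) and Run-time Parameters (only named in the
   Registry, supplied on execution) is the following minimal sketch.
   All class, field, and parameter names here are invented for
   illustration and are not part of the Registry specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegistryEntry:
    """Illustrative Registry entry: Fixed Parameter values are stored;
    Run-time Parameters are listed by name only."""
    name: str
    fixed_params: dict            # values recorded in the Registry entry
    runtime_param_names: tuple    # names only; values supplied at run time

    def bind(self, **runtime_values):
        """Build a complete measurement context from the Fixed
        Parameters plus caller-supplied Run-time Parameter values."""
        missing = [p for p in self.runtime_param_names
                   if p not in runtime_values]
        if missing:
            raise ValueError("missing Run-time Parameters: %s" % missing)
        return {**self.fixed_params, **runtime_values}

# A hypothetical entry; the same input factor (e.g. payload length)
# could instead appear among runtime_param_names in another entry.
entry = RegistryEntry(
    name="Act_UDP_Latency_Poisson_99mean",
    fixed_params={"transport": "UDP", "payload_length": 160, "ttl": 255},
    runtime_param_names=("src_ip", "dst_ip", "start_time"),
)
```

   The context is complete, and results are reportable, only once every
   Run-time Parameter named by the entry has been supplied.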
   Examples of Run-time Parameters include IP addresses, measurement
   point designations, start times and end times for measurement, and
   other information essential to the method of measurement.

8.3.6.  Role

   In some methods of measurement, there may be several roles defined;
   e.g., in a one-way packet delay active measurement, one measurement
   agent generates the packets and another receives them.  This column
   contains the name of the role for this particular entry.  In the
   previous example, there would be two entries in the registry, one for
   each role, so that a measurement agent instructed to perform the one-
   way delay source metric knows that it is supposed to generate
   packets.  The values for this field are defined in the reference
   method of measurement.

8.4.  Output Category

   For entries which involve a stream and many singleton measurements, a
   statistic may be specified in this column to summarize the results to
   a single value.  If the complete set of measured singletons is
   output, this will be specified here.

   Some metrics embed one specific statistic in the reference metric
   definition, while others allow several output types or statistics.

8.4.1.  Value

   This column contains the name of the output type.  The output type
   defines the type of result that the metric produces: it can be the
   raw results, or it can be some form of statistic.  The specification
   of the output type must define the format of the output.  In some
   systems, format specifications will simplify both measurement
   implementation and collection/storage tasks.  Note that if two
   different statistics are required from a single measurement (for
   example, both "Xth percentile mean" and "Raw"), then a new output
   type must be defined ("Xth percentile mean AND Raw").

8.4.2.  Reference

   This column contains a pointer to the specification where the output
   type is defined.

8.4.3.  Metric Units

   The measured results must be expressed using some standard dimension
   or units of measure.  This column provides the units.

   When a sample of singletons (see [RFC2330] for definitions of these
   terms) is collected, this entry will specify the units for each
   measured value.

8.5.  Administrative Information

8.5.1.  Status

   The status of the specification of this Registered Performance
   Metric.  Allowed values are 'current' and 'deprecated'.  All newly
   defined Registered Performance Metrics have 'current' status.

8.5.2.  Requester

   The requester for the Registered Performance Metric.  The requester
   MAY be a document, such as an RFC, or a person.

8.5.3.  Revision

   The revision number of a Registered Performance Metric, starting at 0
   for Registered Performance Metrics at the time of definition and
   incremented by one for each revision.

8.5.4.  Revision Date

   The date of acceptance of the most recent revision for the Registered
   Performance Metric.

8.6.  Comments and Remarks

   Besides providing additional details which do not appear in other
   categories, this open Category (single column) allows for unforeseen
   issues to be addressed by simply updating this informational entry.

9.  The Life-Cycle of Registered Metrics

   Once a Performance Metric or set of Performance Metrics has been
   identified for a given application, candidate Registry entry
   specifications in accordance with Section 8 are submitted to IANA to
   follow the process for review by the Performance Metric Experts, as
   defined below.  This process is also used for other changes to the
   Performance Metric Registry, such as deprecation or revision, as
   described later in this section.
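   The administrative columns of Section 8.5 and the life-cycle rules of
   this section (revision numbers start at zero and increment by one;
   deprecation is itself a revision that flips the status field) can be
   modeled with a minimal sketch.  The class and method names below are
   assumptions for illustration only, not defined by the Registry.

```python
from dataclasses import dataclass

@dataclass
class AdminInfo:
    """Illustrative administrative record for one Registry entry."""
    requester: str
    status: str = "current"      # allowed values: 'current' / 'deprecated'
    revision: int = 0            # starts at 0 at time of definition
    revision_date: str = ""

    def revise(self, requester, date):
        # A backward-compatible change: revision +1, date of acceptance
        # recorded, and the requester of the change appended.
        self.revision += 1
        self.revision_date = date
        self.requester = self.requester + ", " + requester

    def deprecate(self, date):
        # Deprecation increments the revision and updates the date,
        # as with any revision; the status changes to 'deprecated'.
        self.status = "deprecated"
        self.revision += 1
        self.revision_date = date
```

   A newly accepted entry starts with status 'current' and revision 0;
   each accepted change or deprecation advances the record as above.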
   It is also desirable that the author(s) of a candidate Registry entry
   seek review in the relevant IETF working group, or offer the
   opportunity for review on the WG mailing list.

9.1.  Adding new Performance Metrics to the Registry

   Requests to change Registered Metrics in the Performance Metric
   Registry are submitted to IANA, which forwards the request to a
   designated group of experts (the Performance Metric Experts)
   appointed by the IESG; these are the reviewers called for by the
   Expert Review [RFC5226] policy defined for the Performance Metric
   Registry.  The Performance Metric Experts review the request for such
   things as compliance with this document, compliance with other
   applicable Performance Metric-related RFCs, and consistency with the
   currently defined set of Registered Performance Metrics.

   Authors are expected to review compliance with the specifications in
   this document to check their submissions before sending them to IANA.

   The Performance Metric Experts should endeavor to complete referred
   reviews in a timely manner.  If the request is acceptable, the
   Performance Metric Experts signify their approval to IANA, which
   updates the Performance Metric Registry.  If the request is not
   acceptable, the Performance Metric Experts can coordinate with the
   requester to change the request to be compliant.  The Performance
   Metric Experts may also choose in exceptional circumstances to reject
   clearly frivolous or inappropriate change requests outright.

   This process should not in any way be construed as allowing the
   Performance Metric Experts to overrule IETF consensus.  Specifically,
   any Registered Metrics that were added with IETF consensus require
   IETF consensus for revision or deprecation.

   Decisions by the Performance Metric Experts may be appealed as in
   Section 7 of [RFC5226].

9.2.  Revising Registered Performance Metrics

   A request for Revision is only permissible when the changes maintain
   backward compatibility with implementations of the prior Registry
   entry describing a Registered Metric (entries with lower revision
   numbers, but the same Identifier and Name).

   The purpose of the Status field in the Performance Metric Registry is
   to indicate whether the entry for a Registered Metric is 'current' or
   'deprecated'.

   In addition, no policy is defined for revising IANA Performance
   Metric entries or addressing errors therein.  To be certain, changes
   and deprecations within the Performance Metric Registry are not
   encouraged, and should be avoided to the extent possible.  However,
   in recognition that change is inevitable, the provisions of this
   section address the need for revisions.

   Revisions are initiated by sending a candidate Registered Performance
   Metric definition to IANA, as in Section X, identifying the existing
   Registry entry.

   The primary requirement in the definition of a policy for managing
   changes to existing Registered Performance Metrics is the avoidance
   of interoperability problems; Performance Metric Experts must work to
   maintain interoperability above all else.  Changes to Registered
   Performance Metrics may only be done in an interoperable way;
   necessary changes that cannot be done in a way that allows
   interoperability with unchanged implementations must result in the
   creation of a new Registered Metric and possibly the deprecation of
   the earlier metric.

   A change to a Registered Performance Metric is held to be backward-
   compatible only when:

   1.  "it involves the correction of an error that is obviously only
       editorial; or"

   2.  "it corrects an ambiguity in the Registered Performance Metric's
       definition, which itself leads to issues severe enough to prevent
       the Registered Performance Metric's usage as originally defined;
       or"

   3.  "it corrects missing information in the metric definition without
       changing its meaning (e.g., the explicit definition of 'quantity'
       semantics for numeric fields without a Data Type Semantics
       value); or"

   4.  "it harmonizes with an external reference that was itself
       corrected."

   If a Performance Metric revision is deemed permissible by the
   Performance Metric Experts, according to the rules in this document,
   IANA makes the change in the Performance Metric Registry.  The
   requester of the change is appended to the requester in the Registry.

   Each Registered Performance Metric in the Registry has a revision
   number, starting at zero.  Each change to a Registered Performance
   Metric following this process increments the revision number by one.

   COMMENT: Al (and Phil) think we should keep old/revised entries as-
   is, marked as deprecated >>>> Since any revision must be
   interoperable according to the criteria above, there is no need for
   the Performance Metric Registry to store information about old
   revisions.

   When a revised Registered Performance Metric is accepted into the
   Performance Metric Registry, the date of acceptance of the most
   recent revision is placed into the Revision Date column of the
   Registry for that Registered Performance Metric.

   Where applicable, additions to Registry entries in the form of text
   Comments or Remarks should include the date, but such additions may
   not constitute a revision according to this process.

   Older version(s) of the updated metric entries are kept in the
   registry for archival purposes.  The older entries are kept with all
   fields unmodified (version, revision date) except for the status
   field, which is changed to "Deprecated".

9.3.  Deprecating Registered Performance Metrics

   Changes that are not permissible by the above criteria for a
   Registered Metric's revision may only be handled by deprecation.  A
   Registered Performance Metric MAY be deprecated and replaced when:

   1.  "the Registered Performance Metric definition has an error or
       shortcoming that cannot be permissibly changed as in Section
       'Revising Registered Performance Metrics'; or"

   2.  "the deprecation harmonizes with an external reference that was
       itself deprecated through that reference's accepted deprecation
       method."

   A request for deprecation is sent to IANA, which passes it to the
   Performance Metric Experts for review, as in Section 'The Process for
   Review by the Performance Metric Experts'.  When deprecating a
   Performance Metric, the Performance Metric description in the
   Performance Metric Registry must be updated to explain the
   deprecation, as well as to refer to any new Performance Metrics
   created to replace the deprecated Performance Metric.

   The revision number of a Registered Performance Metric is incremented
   upon deprecation, and the Revision Date updated, as with any
   revision.

   The use of deprecated Registered Metrics should result in a log entry
   or human-readable warning by the respective application.

   Names and Metric IDs of deprecated Registered Metrics must not be
   reused.

   Deprecated metric entries are kept in the registry for archival
   purposes.  The deprecated entries are kept with all fields unmodified
   (version, revision date) except for the status field, which is
   changed to "Deprecated".

10.  Security Considerations

   This draft doesn't introduce any new security considerations for the
   Internet.
   However, the definition of Performance Metrics may introduce some
   security concerns, and should be reviewed with security in mind.

11.  IANA Considerations

   This document specifies the procedure for Performance Metrics
   Registry setup.  IANA is requested to create a new Registry for
   Performance Metrics called "Registered Performance Metrics", with the
   columns defined in Section 8.

   New assignments for the Performance Metric Registry will be
   administered by IANA through Expert Review [RFC5226], i.e., review by
   one of a group of experts, the Performance Metric Experts, appointed
   by the IESG upon recommendation of the Transport Area Directors.  The
   experts can initially be drawn from the Working Group Chairs and
   document editors of the Performance Metrics Directorate, among other
   sources of experts.

   The Identifier values from 64512 to 65536 are reserved for private
   use.  Names starting with the prefix Priv- are reserved for private
   use.

   This document requests the allocation of the URI prefix
   urn:ietf:params:ippm:metric for the purpose of generating URIs for
   registered metrics.

12.  Acknowledgments

   Thanks to Brian Trammell and Bill Cerveny, IPPM chairs, for leading
   some brainstorming sessions on this topic.

13.  References

13.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2026]  Bradner, S., "The Internet Standards Process -- Revision
              3", BCP 9, RFC 2026, October 1996.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330, May
              1998.

   [RFC4148]  Stephan, E., "IP Performance Metrics (IPPM) Metrics
              Registry", BCP 108, RFC 4148, August 2005.

   [RFC5226]  Narten, T. and H. Alvestrand, "Guidelines for Writing an
              IANA Considerations Section in RFCs", BCP 26, RFC 5226,
              May 2008.

   [RFC6248]  Morton, A., "RFC 4148 and the IP Performance Metrics
              (IPPM) Registry of Metrics Are Obsolete", RFC 6248, April
              2011.

   [RFC6390]  Clark, A. and B. Claise, "Guidelines for Considering New
              Performance Metric Development", BCP 170, RFC 6390,
              October 2011.

   [RFC6576]  Geib, R., Morton, A., Fardid, R., and A. Steinmitz, "IP
              Performance Metrics (IPPM) Standard Advancement Testing",
              BCP 176, RFC 6576, March 2012.

   [RFC3986]  Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
              Resource Identifier (URI): Generic Syntax", STD 66, RFC
              3986, January 2005.

   [RFC2141]  Moats, R., "URN Syntax", RFC 2141, May 1997.

13.2.  Informative References

   [RFC3611]  Friedman, T., Caceres, R., and A. Clark, "RTP Control
              Protocol Extended Reports (RTCP XR)", RFC 3611, November
              2003.

   [RFC3550]  Schulzrinne, H., Casner, S., Frederick, R., and V.
              Jacobson, "RTP: A Transport Protocol for Real-Time
              Applications", STD 64, RFC 3550, July 2003.

   [RFC6035]  Pendleton, A., Clark, A., Johnston, A., and H. Sinnreich,
              "Session Initiation Protocol Event Package for Voice
              Quality Reporting", RFC 6035, November 2010.

   [I-D.ietf-lmap-framework]
              Eardley, P., Morton, A., Bagnulo, M., Burbridge, T.,
              Aitken, P., and A. Akhter, "A framework for large-scale
              measurement platforms (LMAP)", draft-ietf-lmap-
              framework-10 (work in progress), January 2015.

   [RFC5477]  Dietz, T., Claise, B., Aitken, P., Dressler, F., and G.
              Carle, "Information Model for Packet Sampling Exports",
              RFC 5477, March 2009.

   [RFC5102]  Quittek, J., Bryant, S., Claise, B., Aitken, P., and J.
              Meyer, "Information Model for IP Flow Information Export",
              RFC 5102, January 2008.

   [RFC6792]  Wu, Q., Hunt, G., and P. Arden, "Guidelines for Use of the
              RTP Monitoring Framework", RFC 6792, November 2012.

   [RFC5905]  Mills, D., Martin, J., Burbank, J., and W. Kasch, "Network
              Time Protocol Version 4: Protocol and Algorithms
              Specification", RFC 5905, June 2010.

   [RFC3393]  Demichelis, C. and P. Chimento, "IP Packet Delay Variation
              Metric for IP Performance Metrics (IPPM)", RFC 3393,
              November 2002.

   [RFC6776]  Clark, A. and Q. Wu, "Measurement Identity and Information
              Reporting Using a Source Description (SDES) Item and an
              RTCP Extended Report (XR) Block", RFC 6776, October 2012.

   [RFC7003]  Clark, A., Huang, R., and Q. Wu, "RTP Control Protocol
              (RTCP) Extended Report (XR) Block for Burst/Gap Discard
              Metric Reporting", RFC 7003, September 2013.

   [RFC3432]  Raisanen, V., Grotefeld, G., and A. Morton, "Network
              performance measurement with periodic streams", RFC 3432,
              November 2002.

   [RFC4566]  Handley, M., Jacobson, V., and C. Perkins, "SDP: Session
              Description Protocol", RFC 4566, July 2006.

   [RFC5481]  Morton, A. and B. Claise, "Packet Delay Variation
              Applicability Statement", RFC 5481, March 2009.

   [RFC2679]  Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
              Delay Metric for IPPM", RFC 2679, September 1999.

   [RFC2681]  Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip
              Delay Metric for IPPM", RFC 2681, September 1999.

Authors' Addresses

   Marcelo Bagnulo
   Universidad Carlos III de Madrid
   Av. Universidad 30
   Leganes, Madrid  28911
   SPAIN

   Phone: 34 91 6249500
   Email: marcelo@it.uc3m.es
   URI:   http://www.it.uc3m.es

   Benoit Claise
   Cisco Systems, Inc.
   De Kleetlaan 6a b1
   1831 Diegem
   Belgium

   Email: bclaise@cisco.com

   Philip Eardley
   BT
   Adastral Park, Martlesham Heath
   Ipswich
   ENGLAND

   Email: philip.eardley@bt.com

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ
   USA

   Email: acmorton@att.com

   Aamer Akhter
   Cisco Systems, Inc.
   7025 Kit Creek Road
   RTP, NC  27709
   USA

   Email: aakhter@cisco.com