INTERNET-DRAFT                                               Marc Linsner
Intended Status: Informational                              Cisco Systems
Expires: May 14, 2015                                       Philip Eardley
                                                         Trevor Burbridge
                                                                        BT
                                                            Frode Sorensen
                                                                       NPT
                                                         November 10, 2014

              Large-Scale Broadband Measurement Use Cases
                      draft-ietf-lmap-use-cases-05

Abstract

Measuring broadband performance on a large scale is important for network diagnostics by providers and users, as well as for public policy. Understanding the various scenarios and users of measuring broadband performance is essential to development of the Large-scale Measurement of Broadband Performance (LMAP) framework, information model and protocol. This document details two use cases that can assist in developing that framework. The details of the measurement metrics themselves are beyond the scope of this document.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

Copyright and License Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1 Introduction
2 Use Cases
  2.1 Internet Service Provider (ISP) Use Case
  2.2 Regulator Use Case
3 Details of ISP Use Case
  3.1 Understanding the quality experienced by customers
  3.2 Understanding the impact and operation of new devices and technology
  3.3 Design and planning
  3.4 Monitoring Service Level Agreements
  3.5 Identifying, isolating and fixing network problems
4 Details of Regulator Use Case
  4.1 Promoting competition through transparency
  4.2 Promoting broadband deployment
  4.3 Monitoring "net neutrality"
5 Implementation Options
6 Conclusions
7 Security Considerations
8 IANA Considerations
Contributors
Informative References
Authors' Addresses

1 Introduction

This document describes two use cases for the Large-scale Measurement of Broadband Performance (LMAP). The use cases contained in this document are (1) the Internet Service Provider Use Case and (2) the Regulator Use Case. In the first, a network operator wants to understand the performance of the network and the quality experienced by customers, whilst in the second, a regulator wants to provide information on the performance of the ISPs in their jurisdiction. There are other use cases that are not the focus of the initial LMAP work; for example, end users would like to use measurements to help identify problems in their home network and to monitor the performance of their broadband provider. It is expected that the same mechanisms are applicable.

2 Use Cases

From the LMAP perspective, there is no difference between fixed service and mobile (cellular) service used for Internet access. Hence, similar measurements will take place on both fixed and mobile networks. Fixed services include technologies like Digital Subscriber Line (DSL), Cable, and Carrier Ethernet. Mobile services include all those advertised as 2G, 3G, 4G, and Long-Term Evolution (LTE). A metric defined to measure end-to-end services will execute similarly on all access technologies. Other metrics may be access technology specific. The LMAP architecture covers both IPv4 and IPv6 networks.

2.1 Internet Service Provider (ISP) Use Case

A network operator needs to understand the performance of their networks, the performance of their suppliers (downstream and upstream networks), the performance of Internet access services, and the impact that such performance has on the experience of their customers.
Broadly, the processes that ISPs operate (which are based on network measurement) include:

o Identifying, isolating and fixing problems, which may be in the network, with the service provider, or in the end user equipment. Such problems may be common to a point in the network topology (e.g. a single exchange), common to a vendor or equipment type (e.g. line card or home gateway) or unique to a single user line (e.g. copper access). Part of this process may also be helping users understand whether the problem exists in their home network or with a third party application service instead of with their broadband (BB) product.

o Design and planning. Through monitoring the end user experience, the ISP can design and plan their network to ensure specified levels of user experience. Services may be moved closer to end users, services upgraded, the impact of QoS assessed or more capacity deployed at certain locations. Service Level Agreements (SLAs) may be defined at network or product boundaries.

o Understanding the quality experienced by customers. Alongside benchmarking competitors, the ISP gains better insight into the user's service through a sample panel of the operator's own customers. The ISP requires an end-to-end view of performance, which includes: home/enterprise networks; peering points; Content Delivery Networks (CDNs); etc.

o Understanding the impact and operation of new devices and technology. As a new product is deployed, or a new technology introduced into the network, it is essential that its operation and its impact are measured. This also helps to quantify the advantage that the new technology brings and support the business case for larger roll-out.
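All of the processes above rest on simple active measurement tasks run from probes. As a minimal sketch (not part of any LMAP specification; the target host shown in the comment is hypothetical), a probe might estimate round-trip latency from TCP connection setup time:

```python
import socket
import time

def tcp_connect_rtt(host: str, port: int, timeout: float = 3.0):
    """Estimate round-trip latency from TCP three-way-handshake time.

    Returns the elapsed time in seconds, or None if the connection
    fails (a failure is itself a useful measurement result).
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

# Example: probe a (hypothetical) measurement server.
# rtt = tcp_connect_rtt("measurement.example.net", 80)
```

A real measurement agent would repeat such tasks on a schedule and report the results to a collector; this sketch only illustrates the shape of an individual task.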
2.2 Regulator Use Case

Regulators in jurisdictions around the world are responding to consumers' adoption of Internet access services for traditional telecommunications and media services by promoting competition among providers of electronic communications, to ensure that users derive maximum benefit in terms of choice, price, and quality.

Competition is more effective with better information, so some regulators have developed large-scale measurement programs. For example, programs such as the U.S. Federal Communications Commission's (FCC) Measuring Broadband America (MBA), the European Commission's Quality of Broadband Services in the EU reports, and a growing list of other programs employ a diverse set of operational and technical approaches to gathering data to perform analysis and reporting on diverse aspects of broadband performance.

While each jurisdiction responds to distinct consumer, industry, and regulatory concerns, much commonality exists in the need to produce datasets that can be used to compare multiple Internet access service providers, diverse technical solutions, geographic and regional distributions, and marketed and provisioned levels and combinations of broadband Internet access services. In some jurisdictions, the role of measuring is provided by a measurement provider.

Measurement providers measure network performance from users towards multiple content and application providers, including dedicated test measurement servers, to show the performance of the actual Internet access service provided by different ISPs. Users need to know the performance that they are achieving from their own ISP. In addition, they need to know the performance of other ISPs at the same location as background information for selecting their ISP.
Measurement providers will provide measurement results together with the associated measurement methods and measurement metrics.

From a consumer perspective, the differentiation between fixed and mobile (cellular) Internet access services is blurring as the applications used are very similar. Hence, regulators are measuring both fixed and mobile Internet access services.

A regulator's role in the development and enforcement of broadband Internet access service policies also requires that the measurement approaches meet a high level of verifiability, accuracy and provider-independence to support valid and meaningful comparisons of Internet access service performance.

LMAP standards could answer regulators' shared needs by providing scalable, cost-effective, scientifically robust solutions to the measurement and collection of broadband Internet access service performance information.

3 Details of ISP Use Case

3.1 Understanding the quality experienced by customers

Operators want to understand the quality of experience (QoE) of their broadband customers. The understanding can be gained through a "panel", i.e. measurement probes deployed to a few hundred or a few thousand customers. The panel needs to include a representative sample for each of the operator's technologies (fiber, Hybrid Fibre-Coaxial (HFC), DSL...) and broadband speeds (80Mb/s, 20Mb/s, basic...). For reasonable statistical validity, approximately 100 probes are needed for each ISP product. The operator would like the end-to-end view of the service, rather than (say) just the access portion. So as well as simple network statistics like speed and loss rates, they want to understand what the service feels like to the customer.
This involves relating the pure network parameters to something like a 'mean opinion score', which will be service dependent (for instance, web browsing QoE is largely determined by latency once speed exceeds a few Mb/s).

An operator will also want compound metrics such as "reliability", which might involve packet loss, DNS failures, re-training of the line, video streaming under-runs, etc.

The operator really wants to understand the end-to-end service experience. However, the home network (Ethernet, WiFi, powerline) is highly variable and outside its control. To date, operators (and regulators) have instead measured performance from the home gateway. However, mobile operators clearly must include the wireless link in the measurement.

Active measurements are the most obvious approach, i.e., special measurement traffic is sent by - and to - the probe. In order not to degrade the service of the customer, the measurement traffic should only be sent when the user is silent, and it shouldn't reduce the customer's data allowance. The other approach is passive measurements on the customer's ordinary traffic; the advantage is that it measures what the customer actually does, but it creates extra variability (different traffic mixes give different results) and especially it raises privacy concerns. [RFC6973] discusses privacy considerations for Internet protocols in general, whilst [framework] discusses them specifically for large-scale measurement systems.

From an operator's viewpoint, understanding customer experience enables it to offer better services. Also, simple metrics can be more easily understood by senior managers who make investment decisions and by sales and marketing.

3.2 Understanding the impact and operation of new devices and technology

Another type of measurement is to test new capabilities before they are rolled out.
For example, the operator may want to:

o Check whether a customer can be upgraded to a new broadband option;

o Understand the impact of IPv6 before making it available to customers (will v6 packets get through, what will the latency be to major websites, which transition mechanisms will be most appropriate?);

o Check whether a new capability can be signaled using TCP options (how often will it be blocked by a middlebox? - along the lines of the experiments described in [ExtendTCP]);

o Investigate a quality of service mechanism (e.g. checking whether Diffserv markings are respected on some path); and so on.

3.3 Design and planning

Operators can use large scale measurements to help with their network planning - proactive activities to improve the network.

For example, by probing from several different vantage points, the operator can see that a particular group of customers has performance below that expected during peak hours, which should help capacity planning. Naturally, operators already have tools to help with this - a network element reports its individual utilization (and perhaps other parameters). However, making measurements across a path rather than at a point may make it easier to understand the network. There may also be parameters like bufferbloat that aren't currently reported by equipment and/or that are intrinsically path metrics.

With information gained from measurement results, capacity planning and network design can be more effective. Such planning typically uses simulations to emulate the measured performance of the current network and understand the likely impact of new capacity and potential changes to the topology. Simulations, informed by data from a limited panel of probes, can help quantify the advantage that a new technology brings and support the business case for larger roll-out.
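The peak-hour analysis described above reduces to grouping probe results by topology point and comparing busy-hour performance with a quiet-hour baseline. A minimal sketch (the data layout - results tagged with an exchange identifier and an hour of day - and the 0.8 degradation threshold are illustrative assumptions, not anything defined by LMAP):

```python
from collections import defaultdict
from statistics import median

def flag_peak_hour_degradation(results, peak_hours=range(18, 23),
                               threshold=0.8):
    """Flag topology points whose peak-hour median speed drops below
    `threshold` times their off-peak median.

    `results` is an iterable of (exchange_id, hour_of_day, speed_mbps)
    tuples, e.g. collected from a panel of probes.
    """
    peak = defaultdict(list)
    off_peak = defaultdict(list)
    for exchange, hour, speed in results:
        (peak if hour in peak_hours else off_peak)[exchange].append(speed)

    flagged = []
    for exchange in peak:
        if exchange in off_peak:
            ratio = median(peak[exchange]) / median(off_peak[exchange])
            if ratio < threshold:
                flagged.append((exchange, round(ratio, 2)))
    return flagged
```

Exchanges flagged this way would then be candidates for the capacity upgrades or topology changes evaluated in the planning simulations.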
It may also be possible to use probes to run stress tests for risk analysis. For example, an operator could run a carefully controlled and limited experiment in which probing is used to assess the potential impact if some new application becomes popular.

3.4 Monitoring Service Level Agreements

Another example is that the operator may want to monitor performance where there is a service level agreement (SLA). This could be with its own customers; enterprise customers, in particular, may have an SLA. The operator can proactively spot when the service is degrading near to the SLA limit, and get information that will enable more informed conversations with the customer at contract renewal.

An operator may also want to monitor the performance of its suppliers, to check whether they meet their SLA or to compare two suppliers if it is dual-sourcing. This could include its transit operator, CDNs, peering, video source, or local network provider (for a global operator in countries where it doesn't have its own network), or even the whole network for a virtual operator.

Through a better understanding of its own network and its suppliers, the operator should be able to focus investment more effectively - in the right place at the right time with the right technology.

3.5 Identifying, isolating and fixing network problems

Operators can use large scale measurements to help identify a fault more rapidly and decide how to solve it.

Operators already have Test and Diagnostic tools, where a network element reports some problem or failure to a management system. However, many issues are not caused by a point failure but by something wider, and so will trigger too many alarms, whilst other issues cause degradation rather than failure and so do not trigger any alarm.
Large-scale measurements can help provide a more nuanced view that helps network management to identify and fix problems more rapidly and accurately. The network management tools may use simulations to emulate the network and so help identify a fault and assess possible solutions.

An operator can obtain useful information without measuring the performance on every broadband line. By measuring a subset, the operator can identify problems that affect a group of customers. For example, the issue could be at a shared point in the network topology (such as an exchange), or common to a vendor or equipment type; for instance, [IETF85-Plenary] describes a case where a particular home gateway upgrade had caused a (mistaken!) drop in line rate.

A more extensive deployment of the measurement capability to every broadband line would enable an operator to identify issues unique to a single customer. Overall, large-scale measurements can help an operator fix a fault more rapidly and/or allow the affected customers to be informed about what is happening. More accurate information enables the operator to reassure customers and take more rapid and effective action to cure the problem.

Often customers experience poor broadband due to problems in the home network - the ISP's network is fine. For example, they may have moved too far away from their wireless access point. Perhaps 80% of customer calls about fixed BB problems are due to in-home wireless issues. These issues are expensive and frustrating for an operator, as they are extremely hard to diagnose and solve. The operator would like to narrow down whether the problem is in the home (with the home network, edge device or home gateway), in the operator's network, or with an application service. The operator would like two capabilities.
Firstly, self-help tools that customers use to improve their own service or understand its performance better, for example to re-position their devices for better WiFi coverage. Secondly, on-demand tests that the operator can run instantly - so the call center person answering the phone (or e-chat) could trigger a test and get the result whilst the customer is still in an on-line session.

4 Details of Regulator Use Case

4.1 Promoting competition through transparency

Competition plays a vital role in regulation of the electronic communications markets. For competition to successfully discipline operators' behavior in the interests of their customers, end users must be fully aware of the characteristics of the ISPs' access offers. In some jurisdictions, regulators mandate that transparent information is made available about service offers.

End users need effective transparency to be able to make informed choices throughout the different stages of their relationship with ISPs: when selecting Internet access service offers, and when considering switching service offer within an ISP or to an alternative ISP. Quality information about service offers could include speed, delay, and jitter. Regulators can publish such information to facilitate end users' choice of service provider and offer. It may also encourage ISPs to use the same metrics in their service level contracts, which would further help end users to choose an ISP. Finally, transparency may help content, application, service and device providers develop their Internet offerings.

The published information needs to be:

o Accurate - the measurement results must be correct and not influenced by errors or side effects. The results should be reproducible and consistent over time.
o Comparable - common metrics should be used across different ISPs and service offerings so that measurement results can be compared.

o Meaningful - the metrics used for measurements need to reflect what end users value about their broadband Internet access service.

o Reliable - the number and distribution of measurement agents, and the statistical processing of the raw measurement data, need to be appropriate.

A set of measurement parameters and associated measurement methods are used over time, e.g. speed, delay, and jitter. The raw measurement data are then collected and go through statistical post-processing before the results can be published in an Internet access service quality index to facilitate end users' choice of service provider and offer.

The regulator can also promote competition through transparency by encouraging end users to monitor the performance of their own broadband Internet access service. They might use this information to check that the performance meets that specified in their contract or to understand whether their current subscription is the most appropriate.

4.2 Promoting broadband deployment

Governments sometimes set strategic goals for high-speed broadband penetration as an important component of the economic, cultural and social development of society. To evaluate the effect of the stimulated growth over time, broadband Internet access take-up and penetration of high-speed access can be monitored through measurement campaigns.

An example of such an initiative is the "Digital Agenda for Europe", adopted in 2010 to achieve universal broadband access. The goal is to achieve, by 2020, access for all Europeans to Internet access speeds of 30 Mbps or above, and 50% or more of European households subscribing to Internet connections above 100 Mbps.
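Monitoring progress towards a target like the one above typically reduces to estimating, from a panel, the share of lines meeting a speed threshold, together with an uncertainty bound. A minimal sketch (the normal-approximation confidence interval is one common choice of statistical processing, not something mandated by any measurement program):

```python
from math import sqrt

def share_meeting_target(speeds_mbps, target_mbps, z=1.96):
    """Estimate the proportion of panel lines meeting a speed target,
    with a normal-approximation 95% confidence interval.

    Returns (point_estimate, lower_bound, upper_bound).
    """
    n = len(speeds_mbps)
    p = sum(s >= target_mbps for s in speeds_mbps) / n
    margin = z * sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)
```

The width of the interval shrinks roughly as one over the square root of the panel size, which is why the number and distribution of measurement agents matter for the "Reliable" criterion above.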
To monitor actual broadband Internet access performance in a specific country or region, extensive measurement campaigns are needed. A panel can be built based on the operators and packages in the market, spread over urban, suburban and rural areas. Probes can then be distributed to the participants of the campaign.

Periodic tests running on the probes can, for example, measure actual speed at peak and off-peak hours, as well as other detailed quality metrics like delay and jitter. The collected data then goes through statistical analysis, deriving estimates for the whole population, which can be presented and published regularly.

Using a harmonized or standardized measurement methodology, or even a common quality measurement platform, measurement results could also be used for benchmarking of providers and/or countries.

4.3 Monitoring "net neutrality"

Regulatory approaches related to net neutrality and the open Internet have been introduced in some jurisdictions. Examples of such efforts are the Internet policy as outlined by the Body of European Regulators for Electronic Communications Guidelines for quality of service [BEREC Guidelines] and the US FCC Preserving the Open Internet Report and Order [FCC R&O]. Although legal challenges can change the status of policy, such as the court action negating the FCC R&O, the take-away for LMAP purposes is that policy-makers are looking for measurement solutions to assist them in discovering biased treatment of traffic flows. The exact definitions and requirements vary from one jurisdiction to another; the comments below provide some hints about the potential role of measurements.

Net neutrality regulations do not necessarily require every packet to be treated equally.
Typically they allow "reasonable" traffic management (for example, if there is exceptional congestion) and allow "specialized services" in parallel to, but separate from, ordinary Internet access (for example, for facilities-based IPTV). A regulator may want to monitor such practices as input to the regulatory evaluation. However, these concepts are evolving and differ across jurisdictions, so measurement results should be assessed with caution.

A regulator could monitor departures from application agnosticism, such as blocking or throttling of traffic from specific applications, and preferential treatment of specific applications. A measurement system could send, or passively monitor, application-specific traffic and then measure in detail the transfer of the different packets. Whilst it is relatively easy to measure port blocking, it is a research topic how to detect other types of differentiated treatment. The paper "Glasnost: Enabling End Users to Detect Traffic Differentiation" [M-Labs NSDI 2010] and the follow-on tool "Glasnost" [Glasnost] are examples of work in this area.

A regulator could also monitor the performance of the broadband service over time, to try to detect whether the specialized service is provided at the expense of the Internet access service. Comparison between ISPs or between different countries may also be relevant for this kind of evaluation.

5 Implementation Options

There are several ways of implementing a measurement system. The choice may be influenced by the details of the particular use case and what the most important criteria are for the regulator, ISP or third party operating the measurement system.

One way involves a special hardware device that is connected directly to the home gateway. The devices are deployed to a carefully selected panel of end users and they perform measurements according to a defined schedule.
The schedule can run throughout the day, to allow continuous assessment of the network. Careful design ensures that measurements do not detrimentally impact the home user experience or corrupt the results by testing when the user is also using the broadband line. The system is therefore tightly controlled by the operator of the measurement system. One advantage of this approach is that it is possible to get reliable benchmarks for the performance of a network with only a few devices. One disadvantage is that it would be expensive to deploy hardware devices on a mass scale sufficient to understand the performance of the network at the granularity of a single broadband user.

Another approach involves implementing the measurement capability as a webpage or an "app" that end users are encouraged to download onto their mobile phone or computing device. Measurements are triggered by the end user; for example, the user interface may have a button to "test my broadband now". One advantage of this approach is that the performance is measured to the end user, rather than to the home gateway, and so includes the home network. Another difference is that the system is much more loosely controlled, as the panel of end users and the schedule of tests are determined by the end users themselves rather than by the measurement system. It would be easier to achieve large scale; however, it is harder to get comparable benchmarks, as the measurements are affected by the home network, and the population is also self-selecting and so potentially biased towards those who think they have a problem. This could be alleviated by stimulating widespread downloading of the app and careful post-processing of the results to reduce biases.

There are several other possibilities.
For example, as a variant on the first approach, the measurement capability could be implemented as software embedded in the home gateway, which would make it more viable to have the capability on every user line. As a variant on the second approach, the end user could initiate measurements in response to a request from the measurement system.

The operator of the measurement system should be careful to ensure that measurements do not detrimentally impact users. Potential issues include:

* Measurement traffic generated on a particular user's line may impact that end user's quality of experience. The danger is greater for measurements that generate a lot of traffic over a lengthy period.

* The measurement traffic may impact that particular user's bill or traffic cap.

* The measurement traffic from several end users may, in combination, congest a shared link.

* The traffic associated with the control and reporting of measurements may overload the network. The danger is greater where the traffic associated with many end users is synchronized.

6 Conclusions

Large-scale measurements of broadband performance are useful for both network operators and regulators. Network operators would like to use measurements to help them better understand the quality experienced by their customers, identify problems in the network and design network improvements. Regulators would like to use measurements to help promote competition between network operators, stimulate the growth of broadband access and monitor 'net neutrality'. There are other use cases that are not the focus of the initial LMAP charter (although it is expected that the mechanisms developed would be readily applied); for example, end users would like to use measurements to help identify problems in their home network and to monitor the performance of their broadband provider.
From consideration of the various use cases, several common themes
emerge, whilst there are also some detailed differences. These
characteristics guide the development of LMAP's framework,
information model and protocol.

A measurement capability is needed across a wide range of
heterogeneous environments. Tests may be needed in the home network,
in the ISP's network or beyond; they may measure a fixed or wireless
network; they may measure just the access network or span several
networks, at least some of which are not operated by the measurement
provider.

There is a role for both standardized and non-standardized
measurements. For example, a regulator would like to publish
standardized performance metrics for all network operators, whilst
an ISP may need its own tests to understand some feature specific to
its network. Most use cases need active measurements, which create
and measure specific test traffic, but some need passive
measurements of the end user's traffic.

Whatever tests are run, there needs to be a way to demand or
schedule them. Most use cases need a regular schedule of
measurements, but sometimes ad hoc testing is needed, for example
for troubleshooting. The system needs to ensure that measurements do
not affect the user experience and are not affected by user traffic
(unless that is desired). In addition, there needs to be a common
way to collect the results. Standardization of this control and
reporting functionality allows the operator of a measurement system
to buy the various components from different vendors.

After the measurement results are collected, they need to be
understood and analyzed. Often it is sufficient to measure only a
small subset of end users, but per-line fault diagnosis requires the
ability to test every individual line.
Analysis requires an accurate
definition and understanding of where the test points are, as well
as contextual information about the topology, line, product and the
subscriber's contract. The actual analysis of results is beyond the
scope of LMAP, as is the key challenge of how to integrate the
measurement system into a network operator's existing tools for
diagnostics and network planning.

Finally, the test data, along with any associated network, product
or subscriber contract data, is commercial or private information
and needs to be protected.

7 Security Considerations

Large-scale measurements raise several potential security, privacy
(data protection) and business sensitivity issues. Both the network
operator and regulator use cases potentially raise the following
issues:

   1. a malicious party that gains control of Measurement Agents to
      launch DoS attacks at a target, or to alter (perhaps subtly)
      Measurement Tasks in order to compromise the end user's
      privacy, the business confidentiality of the network, or the
      accuracy of the measurement system.

   2. a malicious party that gains control of Measurement Agents to
      create a platform for pervasive monitoring [RFC7258], in order
      to attack the privacy of Internet users and organisations.

   3. a malicious party that intercepts or corrupts the Measurement
      Results and/or other information about the Subscriber, for
      similar nefarious purposes.

   4. a malicious party that uses fingerprinting techniques to
      identify individual end users, even from anonymized data.

   5. a measurement system that does not obtain the end user's
      informed consent, or fails to specify a specific purpose in
      the consent, or uses the collected information for secondary
      uses beyond those specified.

   6.
a measurement system that is vague about who is responsible
      for privacy (data protection); this role is often termed the
      "data controller".

In addition, the regulator use case has the following potential
issue:

   7. a malicious network operator could try to identify the
      broadband lines that the regulator was measuring and
      prioritise that traffic ("game the system").

The [framework] also considers some potential mitigations of these
issues. They will need to be considered by an LMAP protocol and,
more generally, by any measurement system.

8 IANA Considerations

None.

Contributors

The information in this document is partially derived from text
written by the following contributors:

   James Miller   jamesmilleresquire@gmail.com

   Rachel Huang   rachel.huang@huawei.com

Informative References

   [IETF85-Plenary]
              Crawford, S., "Large-Scale Active Measurement of
              Broadband Networks",
              http://www.ietf.org/proceedings/85/slides/slides-85-iesg-opsandtech-7.pdf,
              'example' from slide 18.

   [ExtendTCP]
              Honda, M., Nishida, Y., Raiciu, C., Greenhalgh, A.,
              Handley, M., and H. Tokuda, "Is it Still Possible to
              Extend TCP?", Proc. ACM Internet Measurement Conference
              (IMC), November 2011, Berlin, Germany.
              http://www.ietf.org/proceedings/82/slides/IRTF-1.pdf

   [framework]
              Eardley, P., Morton, A., Bagnulo, M., Burbridge, T.,
              Aitken, P., and A. Akhter, "A framework for large-scale
              measurement platforms (LMAP)",
              http://datatracker.ietf.org/doc/draft-ietf-lmap-framework/

   [RFC6973]  Cooper, A., Tschofenig, H., Aboba, B., Peterson, J.,
              Morris, J., Hansen, M., and R. Smith, "Privacy
              Considerations for Internet Protocols", RFC 6973, July
              2013.

   [RFC7258]  Farrell, S. and H. Tschofenig, "Pervasive Monitoring Is
              an Attack", RFC 7258, May 2014.
   [FCC R&O]  United States Federal Communications Commission,
              FCC 10-201, "Preserving the Open Internet, Broadband
              Industry Practices, Report and Order",
              http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-10-201A1.pdf

   [BEREC Guidelines]
              Body of European Regulators for Electronic
              Communications, "BEREC Guidelines for quality of
              service in the scope of net neutrality",
              http://berec.europa.eu/eng/document_register/subject_matter/berec/download/0/1101-berec-guidelines-for-quality-of-service-_0.pdf

   [M-Labs NSDI 2010]
              M-Lab, "Glasnost: Enabling End Users to Detect Traffic
              Differentiation",
              http://www.measurementlab.net/download/AMIfv945ljiJXzG-fgUrZSTu2hs1xRl5Oh-rpGQMWL305BNQh-BSq5oBoYU4a7zqXOvrztpJhK9gwk5unOe-fOzj4X-vOQz_HRrnYU-aFd0rv332RDReRfOYkJuagysstN3GZ__lQHTS8_UHJTWkrwyqIUjffVeDxQ/

   [Glasnost] M-Lab tool "Glasnost",
              http://mlab-live.appspot.com/tools/glasnost

   [P.800]    ITU-T, "SERIES P: TELEPHONE TRANSMISSION QUALITY,
              Methods for objective and subjective assessment of
              quality",
              https://www.itu.int/rec/dologin_pub.asp?lang=e&id=T-REC-P.800-199608-I!!PDF-E&type=items

Authors' Addresses

   Marc Linsner
   Cisco Systems, Inc.
   Marco Island, FL
   USA

   Email: mlinsner@cisco.com

   Philip Eardley
   BT
   B54 Room 77, Adastral Park, Martlesham
   Ipswich, IP5 3RE
   UK

   Email: philip.eardley@bt.com

   Trevor Burbridge
   BT
   B54 Room 77, Adastral Park, Martlesham
   Ipswich, IP5 3RE
   UK

   Email: trevor.burbridge@bt.com

   Frode Sorensen
   Norwegian Post and Telecommunications Authority (NPT)
   Lillesand
   Norway

   Email: frode.sorensen@npt.no