INTERNET-DRAFT                                              Marc Linsner
Intended Status: Informational                             Cisco Systems
Expires: June 7, 2014                                     Philip Eardley
                                                        Trevor Burbridge
                                                                      BT
                                                          Frode Sorensen
                                                                     NPT
                                                        December 4, 2013

             Large-Scale Broadband Measurement Use Cases
                     draft-ietf-lmap-use-cases-01

Abstract

   Measuring broadband performance on a large scale is important for
   network diagnostics by providers and users, as well as for public
   policy.  To conduct such measurements, user networks gather data,
   either on their own initiative or as instructed by a measurement
   controller, and then upload the measurement results to a designated
   measurement server.  Understanding the various scenarios and users
   of broadband performance measurement is essential to developing the
   system requirements.  The details of the measurement metrics
   themselves are beyond the scope of this document.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

Copyright and License Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1  Introduction
      1.1  Terminology
   2  Use Cases
      2.1  Internet Service Provider (ISP) Use Case
      2.2  Regulators
   3  Details of ISP Use Case
      3.1  Existing Capabilities and Shortcomings
      3.2  Understanding the quality experienced by customers
      3.3  Understanding the impact and operation of new devices and
           technology
      3.4  Design and planning
      3.5  Identifying, isolating and fixing network problems
      3.6  Conclusions
   4  Details of Regulator Use Case
      4.1  Promoting competition through transparency
      4.2  Promoting broadband deployment
      4.3  Monitoring "net neutrality"
   5  Security Considerations
   6  IANA Considerations
   Contributors
   Normative References
   Authors' Addresses

1  Introduction

   Large-scale Measurement of Broadband Performance (LMAP) includes use
   cases to be considered in deriving the requirements used in
   developing the solution.  This document attempts to describe those
   use cases in further detail and to include additional use cases.

1.1  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

2  Use Cases

   The LMAP architecture utilizes metrics as instructions on how to
   execute a particular measurement.  Although layer 2 specific
   metrics can and will be defined, from the LMAP perspective there is
   no difference between fixed service and mobile (cellular) service
   used for Internet access.  Hence, similar measurements will take
   place on both fixed and mobile networks.  Fixed services, commonly
   known as "last mile" services, include technologies like DSL,
   Cable, and Carrier Ethernet.  Mobile services include all those
   advertised as 2G, 3G, 4G, and LTE.  A metric defined to measure
   over-the-top services will execute similarly on all layer 2
   technologies.  The LMAP architecture covers networks utilizing both
   IPv4 and IPv6.

2.1  Internet Service Provider (ISP) Use Case

   An ISP, or indeed another network operator, needs to understand the
   performance of its networks, the performance of its suppliers
   (downstream and upstream networks), the performance of services,
   and the impact that such performance has on the experience of its
   customers.  In addition, it may also desire visibility of its
   competitors' networks and services in order to be able to benchmark
   and improve its own offerings.
   The main measurement-based processes that ISPs operate include:

   o  Identifying, isolating and fixing problems in the network, in
      services, or with CPE and end user equipment.  Such problems may
      be common to a point in the network topology (e.g. a single
      exchange), common to a vendor or equipment type (e.g. line card
      or home gateway) or unique to a single user line (e.g. copper
      access).  Part of this process may also be helping users
      understand whether the problem exists in their home network or
      with an over-the-top service instead of with their broadband
      product.

   o  Design and planning.  By identifying the end user experience,
      the ISP can design and plan its network to ensure specified
      levels of user experience.  Services may be moved closer to end
      users, services upgraded, the impact of QoS assessed, or more
      capacity deployed at certain locations.  SLAs may be defined at
      network or product boundaries.

   o  Understanding the quality experienced by customers.  Alongside
      benchmarking competitors, this means gaining better insight into
      the user's service through a sample panel of the operator's own
      customers.  The end-to-end perspective matters, across home and
      enterprise networks, peering points, CDNs, etc.

   o  Understanding the impact and operation of new devices and
      technology.  As a new product is deployed, or a new technology
      introduced into the network, it is essential that its operation
      and impact on other services is measured.  This also helps to
      quantify the advantage that the new technology brings and to
      support the business case for a larger roll-out.
2.2  Regulators

   Regulators in jurisdictions around the world are responding to
   consumers' adoption of Internet access services for traditional
   telecommunications and media services by promoting competition
   among providers of electronic communications, to ensure that users
   derive maximum benefit in terms of choice, price, and quality.

   Some jurisdictions have responded to a need for greater information
   about Internet access service performance in the development of
   regulatory policies and approaches for broadband technologies by
   developing large-scale measurement programs.  Programs such as the
   U.S. Federal Communications Commission's Measuring Broadband
   America, the European Commission's Quality of Broadband Services in
   the EU reports, and a growing list of other programs employ a
   diverse set of operational and technical approaches to gathering
   data to perform analysis and reporting on diverse aspects of
   broadband performance.

   While each jurisdiction responds to distinct consumer, industry,
   and regulatory concerns, much commonality exists in the need to
   produce datasets that can compare multiple Internet access service
   providers, diverse technical solutions, geographic and regional
   distributions, and marketed and provisioned levels and combinations
   of broadband Internet access services.  In some jurisdictions, the
   role of measuring is filled by a measurement provider.

   Measurement providers measure network performance from users
   towards multiple content and application providers, including
   dedicated test measurement servers, to show the performance of the
   actual Internet access service provided by different ISPs.  Users
   need to know the performance they are achieving from their own ISP.
   In addition, they need to know the performance of other ISPs at the
   same location as background information for selecting their ISP.
   Measurement providers will provide measurement results together
   with the associated measurement methods and metrics.

   From a consumer perspective, the differentiation between fixed and
   mobile (cellular) Internet access services is blurring, as the
   applications used are very similar.  Hence, regulators are
   measuring both fixed and mobile Internet access services.

   Regulators' role in the development and enforcement of broadband
   Internet access service policies also requires that the measurement
   approaches meet a high level of verifiability, accuracy and
   provider-independence, to support valid and meaningful comparisons
   of Internet access service performance.

   LMAP standards could answer regulators' shared needs by providing
   scalable, cost-effective, scientifically robust solutions for the
   measurement and collection of broadband Internet access service
   performance information.

3  Details of ISP Use Case

3.1  Existing Capabilities and Shortcomings

   In order to get reliable benchmarks, some ISPs use vendor-provided
   hardware measurement platforms that connect directly to the home
   gateway.  These devices typically perform a continuous test
   schedule, allowing the operation of the network to be continually
   assessed throughout the day.  Careful design ensures that they do
   not detrimentally impact the home user experience or corrupt the
   test results by testing when the user is also using the broadband
   line.  While the test capabilities of such probes are good, they
   are simply too expensive to deploy at mass scale to enable a
   detailed understanding of network performance (e.g. to the
   granularity of a single backhaul or single user line).  In
   addition, there is no easy way to operate similar tests on other
   devices (e.g. set-top boxes) or to manage application-level tests
   (such as IPTV) using the same control and reporting framework.
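   The scheduling behaviour described above - a continuous test
   schedule that defers to the subscriber's own traffic - can be
   sketched roughly as follows.  This is purely illustrative and not
   part of any LMAP specification; the test names, intervals and the
   line_busy() check are all hypothetical:

   ```python
   import time
   from dataclasses import dataclass
   from typing import Callable, List

   @dataclass
   class ScheduledTest:
       """A periodic measurement task on a probe."""
       name: str
       interval_s: int        # how often the test should run
       last_run: float = 0.0  # monotonic timestamp of the last run

   def run_due_tests(tests: List[ScheduledTest],
                     line_busy: Callable[[], bool],
                     now: Callable[[], float] = time.monotonic) -> List[str]:
       """Run every test whose interval has elapsed, skipping all of
       them while the subscriber's line carries user traffic, so the
       probe neither degrades the user experience nor corrupts its own
       results."""
       executed: List[str] = []
       if line_busy():        # never compete with the customer's own use
           return executed
       t = now()
       for test in tests:
           if t - test.last_run >= test.interval_s:
               test.last_run = t
               executed.append(test.name)  # a real probe would fire the metric here
       return executed
   ```

   A real probe would, in addition, jitter the start times so that a
   large deployment does not synchronise its measurement traffic.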
   ISPs also use speed and other diagnostic tests from user-owned
   devices (such as PCs, tablets or smartphones).  These often use
   browser-related technology to conduct tests to servers in the ISP
   network to confirm the operation of the user's Internet access
   line.  These tests can be helpful for a user to understand whether
   their Internet access line has a problem, and for dialogue with a
   helpdesk.  However, they are not able to perform continuous
   testing, and the uncontrolled device and home network mean that
   results are not comparable.  Producing statistics across such tests
   is very dangerous, as the population is self-selecting (e.g. those
   who think they have a problem).

   Faced with a gap in current vendor offerings, some ISPs have taken
   the approach of placing proprietary test capabilities on their home
   gateway and other consumer device offerings (such as set-top
   boxes).  This also means that different device platforms may have
   different and largely incomparable tests, developed by different
   company sub-divisions and managed by different systems.

3.2  Understanding the quality experienced by customers

   Operators want to understand the quality of experience (QoE) of
   their broadband customers.  This understanding can be gained
   through a "panel", i.e., a measurement probe deployed to a few
   hundred or thousand of an operator's customers.  The panel needs to
   be a representative sample for each of the operator's technologies
   (FTTP, FTTC, ADSL...) and broadband options (80Mb/s, 20Mb/s,
   basic...), with roughly 100 probes for each.  The operator would
   like the end-to-end view of the service, rather than (say) just the
   access portion.  So as well as simple network statistics like speed
   and loss rates, it wants to understand what the service feels like
   to the customer.
   This involves relating the pure network parameters to something
   like a 'mean opinion score', which will be service dependent (for
   instance, web browsing QoE is largely determined by latency once
   speed exceeds a few Mb/s).

   An operator will also want compound metrics such as "reliability",
   which might involve packet loss, DNS failures, re-training of the
   line, video streaming under-runs, etc.

   The operator really wants to understand the end-to-end service
   experience.  However, the home network (Ethernet, wifi, powerline)
   is highly variable and outside its control.  To date, operators
   (and regulators) have instead measured performance from the home
   gateway.  However, mobile operators clearly must include the
   wireless link in the measurement.

   Active measurements are the most obvious approach, i.e., special
   measurement traffic is sent by - and to - the probe.  In order not
   to degrade the service of the customer, the measurement data should
   only be sent when the user is silent, and it shouldn't reduce the
   customer's data allowance.  The other approach is passive
   measurement of the customer's ordinary traffic; the advantage is
   that it measures what the customer actually does, but it creates
   extra variability (different traffic mixes give different results)
   and, especially, it raises privacy concerns.

   From an operator's viewpoint, understanding customers better
   enables it to offer better services.  Also, simple metrics can be
   more easily understood by the senior managers who make investment
   decisions and by sales and marketing.

   The characteristics of large scale measurements that emerge from
   these examples are:

   1. Averaged data (over, say, 1 month) is generally OK.

   2. A panel (subset) of only a few customers is OK.

   3. Both active and passive measurements are possible, though the
      former seems easier.

   4.
      Regularly scheduled tests are fine (providing active tests back
      off if the customer is using the line).  Scheduling can be done
      some time ahead ('starting tomorrow, run the following test
      every day').

   5. The operator needs to devise metrics and compound measures that
      represent the QoE.

   6. The end-to-end service matters, and not (just) the access link
      performance.

3.3  Understanding the impact and operation of new devices and
     technology

   Another type of measurement is to test new capabilities and
   services before they are rolled out.  For example, the operator may
   want to: check whether a customer can be upgraded to a new
   broadband option; understand the impact of IPv6 before making it
   available to its customers (will v6 packets get through, what will
   the latency be to major websites, which transition mechanisms will
   be most appropriate?); check whether a new capability can be
   signaled using TCP options (how often will it be blocked by a
   middlebox? - along the lines of some existing experiments [Extend
   TCP]); investigate a quality of service mechanism (e.g. checking
   whether Diffserv markings are respected on some path); and so on.

   The characteristics of large scale measurements that emerge from
   these examples are:

   1. New tests need to be devised that test a prospective capability.

   2. Most of the tests are probably simply "send one packet and
      record what happens", so an occasional one-off test is
      sufficient.

   3. A panel (subset) of only a few customers is probably OK to gain
      an understanding of the impact of a new technology, but it may
      be necessary to check an individual line where the roll-out is
      per customer.

   4. An active measurement is needed.

3.4  Design and planning

   Operators can use large scale measurements to help with their
   network planning - proactive activities to improve the network.
   For example, by probing from several different vantage points the
   operator can see that a particular group of customers has
   performance below that expected during peak hours, which should
   help capacity planning.  Naturally, operators already have tools to
   help with this - a network element reports its individual
   utilisation (and perhaps other parameters).  However, making
   measurements across a path rather than at a point may make it
   easier to understand the network.  There may also be parameters
   like bufferbloat that aren't currently reported by equipment and/or
   that are intrinsically path metrics.

   With better information, capacity planning and network design can
   be more effective.  Such planning typically uses simulations to
   emulate the measured performance of the current network and
   understand the likely impact of new capacity and potential changes
   to the topology.  It may also be possible to run stress tests for
   risk analysis, for example 'if this whizzy new application (or
   device) becomes popular, which parts of my network would struggle,
   what would be the impact on other services and how many customers
   would be affected?'.  What-if simulations could help quantify the
   advantage that a new technology brings and support the business
   case for a larger roll-out.  This approach should allow good
   results with measurements from a limited panel of customers.

   Another example is that the operator may want to monitor
   performance where there is a service level agreement.  This could
   be with its own customers; enterprise customers especially may have
   an SLA.  The operator can proactively spot when the service is
   degrading close to the SLA limit, and get information that will
   enable more informed conversations with the customer at contract
   renewal.
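   Proactively spotting degradation near an SLA limit is, in essence,
   a threshold check over recent measurement results.  A minimal
   sketch follows; the 10% warning margin and the throughput-floor
   style of SLA are illustrative assumptions, not taken from any
   particular operator's contract:

   ```python
   from statistics import mean
   from typing import Sequence

   def sla_status(samples_mbps: Sequence[float],
                  sla_floor_mbps: float,
                  warn_margin: float = 0.1) -> str:
       """Classify a line's recent throughput measurements against an
       SLA floor.  Returns 'breach' if the average is below the floor,
       'warning' if it is within warn_margin (10% by default) of the
       floor, and 'ok' otherwise."""
       avg = mean(samples_mbps)
       if avg < sla_floor_mbps:
           return "breach"
       if avg < sla_floor_mbps * (1 + warn_margin):
           return "warning"   # degrading towards the limit: investigate
       return "ok"
   ```

   The 'warning' state is the interesting one for this use case: it
   is what lets the operator act before the SLA is actually breached.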
   An operator may also want to monitor the performance of its
   suppliers, to check whether they meet their SLA or to compare two
   suppliers if it is dual-sourcing.  This could include its transit
   operator, CDNs, peering, video sources, a local network provider
   (for a global operator in countries where it doesn't have its own
   network), or even the whole network for a virtual operator.

   Through a better understanding of its own network and its
   suppliers, the operator should be able to focus investment more
   effectively - in the right place at the right time with the right
   technology.

   The characteristics of large scale measurements emerging from these
   examples are:

   1. A key challenge is how to integrate results from measurements
      into existing network planning and management tools.

   2. New tests may need to be devised for the what-if and risk
      analysis scenarios.

   3. Capacity constraints first reveal themselves during atypical
      events (early warning).  So averaging of measurements should be
      over a much shorter time than in the sub-use case discussed
      above.

   4. A panel (subset) of only a few customers is OK for most of the
      examples, but it should probably be larger than for the QoE use
      case (#1), and the operator may also want to regularly change
      who is in the subset, in order to sample the revealing outliers.

   5. Measurements over a segment of the network ("end-to-middle") are
      needed, in order to refine understanding, as well as end-to-end
      measurements.

   6. The primary interest is in measuring specific network
      performance parameters rather than QoE.

   7. Regularly scheduled tests are fine.

   8. Active measurements are needed; passive ones probably aren't.

3.5  Identifying, isolating and fixing network problems

   Operators can use large scale measurements to help identify a fault
   more rapidly and decide how to solve it.
   Operators already have test and diagnostic tools, where a network
   element reports some problem or failure to a management system.
   However, many issues are not caused by a point failure but by
   something wider, and so will trigger too many alarms, whilst other
   issues will cause degradation rather than failure and so not
   trigger any alarm.  Large scale measurements can help provide a
   more nuanced view that helps network management to identify and fix
   problems more rapidly and accurately.  The network management tools
   may use simulations to emulate the network and so help identify a
   fault and assess possible solutions.

   One example was described in [IETF85-Plenary].  The operator was
   running a measurement panel for the reasons discussed in sub-use
   case #1.  It was noticed that the performance of some lines had
   unexpectedly degraded.  This led to a detailed (off-line)
   investigation, which discovered that a particular home gateway
   upgrade had caused a (mistaken!) drop in line rate.

   Another example is that occasionally some internal network
   management event (like re-routing) can be customer-affecting (of
   course this is unusual).  This affects a whole group of customers,
   for instance those on the same DSLAM.  Understanding this will help
   an operator fix the fault more rapidly, and/or allow the affected
   customers to be informed about what's happening, and/or allow them
   to be asked to re-set their home hub (required to cure some
   conditions).  More accurate information enables the operator to
   reassure customers and take more rapid and effective action to cure
   the problem.

   There may also be problems unique to a single user line (e.g.
   copper access) that need to be identified.

   Often customers experience poor broadband due to problems in the
   home network - the ISP's network is fine.  For example, they may
   have moved too far away from their wireless access point.
   Perhaps 80% of customer calls about fixed broadband problems are
   due to in-home wireless issues.  These issues are expensive and
   frustrating for an operator, as they are extremely hard to diagnose
   and solve.  The operator would like to narrow down whether the
   problem is in the home (with the home network, edge device or home
   gateway), in the operator's network, or with an over-the-top
   service.  The operator would like two capabilities.  Firstly,
   self-help tools that customers use to improve their own service or
   understand its performance better, for example to re-position their
   devices for better wifi coverage.  Secondly, on-demand tests that
   the operator can run instantly - so the call centre person
   answering the phone (or e-chat) could trigger a test and get the
   result whilst the customer is still on the line.

   The characteristics of large scale measurements emerging from these
   examples are:

   1. A key challenge is how to integrate results from measurements
      into the operator's existing test and diagnostics system.

   2. Results from the tests shouldn't be averaged.

   3. Tests are generally run on an ad hoc basis, i.e. as specific
      requests for immediate action.

   4. "End-to-middle" measurements, i.e. across a specific network
      segment, are very relevant.

   5. The primary interest is in measuring specific network
      performance parameters and not QoE.

   6. New tests are needed, for example to check the home network
      (i.e. the connection from the home hub to the set-top boxes or
      to a tablet on wifi).

   7. Active measurements are critical.  Passive ones may be useful to
      help understand exactly what the customer is experiencing.

   8. Ideally the measurement functionality should be at every
      customer (not just a subset), in order to allow per-line fault
      diagnosis.
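   The "narrowing down" step that the on-demand, end-to-middle tests
   enable can be sketched as a simple localization pass over
   per-segment results.  The segment names and the 2% loss threshold
   below are illustrative assumptions only:

   ```python
   from typing import Dict, Optional

   def localize_fault(segment_loss_pct: Dict[str, float],
                      loss_threshold: float = 2.0) -> Optional[str]:
       """Given packet-loss results for each segment of the path,
       ordered from the customer outwards (e.g. home-wifi, access-line,
       core), return the first segment whose loss exceeds the
       threshold, or None if every segment looks healthy.

       Python dicts preserve insertion order, so iteration follows the
       customer-outwards ordering of the input."""
       for segment, loss in segment_loss_pct.items():
           if loss > loss_threshold:
               return segment
       return None
   ```

   A call-centre tool built on such a check could immediately tell the
   agent whether to talk the customer through re-positioning their
   wifi devices or to raise a network fault instead.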
3.6  Conclusions

   There is a clear need, from an ISP point of view, to deploy a
   single coherent measurement capability across a wide number of
   heterogeneous devices, both in the ISP's own networks and in the
   home environment.  These tests need to be able to operate from a
   wide number of locations to a set of interoperable test points in
   the ISP's own network, as well as spanning supplier and competitor
   networks.

   Regardless of the tests being operated, there needs to be a way to
   demand or schedule the tests and, critically, to ensure that such
   tests do not affect each other, are not affected by user traffic
   (unless desired) and do not affect the user experience.  In
   addition, there needs to be a common way to collect and understand
   the results of such tests across different devices, to enable
   correlation and comparison between any network or service
   parameters.

   Since network and service performance needs to be understood and
   analysed in the presence of topology, line, product or contract
   information, it is critical that the test points are accurately
   defined and authenticated.

   Finally, the test data, along with any associated network, product
   or contract data, is commercial or private information and needs to
   be protected.

4  Details of Regulator Use Case

4.1  Promoting competition through transparency

   Competition plays a vital role in the regulation of the electronic
   communications markets.  For competition to successfully discipline
   operators' behaviour in the interests of their customers, end users
   must be fully aware of the characteristics of the ISPs' access
   offers.  In some jurisdictions, regulators mandate that transparent
   information be made available about service offers.
   End users need effective transparency to be able to make informed
   choices throughout the different stages of their relationship with
   ISPs: when selecting Internet access service offers, and when
   considering switching service offer within an ISP or to an
   alternative ISP.  Quality information about service offers could
   include speed, delay, and jitter.  Regulators can publish such
   information to facilitate end users' choice of service provider and
   offer.  It may also help content, application, service and device
   providers develop their Internet offerings.

   The published information needs to be:

   o  Accurate - the measurement results must be correct and not
      influenced by errors or side effects.  The results should be
      reproducible and consistent over time.

   o  Comparable - common metrics should be used across different ISPs
      and service offerings so that measurement results can be
      compared.

   o  Meaningful - the metrics used for measurements need to reflect
      what end users value about their broadband Internet access
      service.

   o  Reliable - the number and distribution of measurement agents,
      and the statistical processing of the raw measurement data, need
      to be appropriate.

   A set of measurement parameters and associated measurement methods
   are used over time, e.g. speed, delay, and jitter.  The raw
   measurement data are then collected and go through statistical
   post-processing before the results can be published in an Internet
   access service quality index, to facilitate end users' choice of
   service provider and offer.

   A measurement system that monitors Internet access services and
   collects quality information typically consists of a number of
   measurement probes and one or more test servers located at peering
   points.  The system can be operated by a regulator or a measurement
   provider.
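   The statistical post-processing step might, for instance, reduce
   raw per-probe speed samples to robust summary figures.  This sketch
   is illustrative only; the choice of the median and the 10th/90th
   percentiles is an assumption, not a figure mandated by any
   regulator:

   ```python
   from statistics import median, quantiles
   from typing import Dict, Sequence

   def summarize_speeds(samples_mbps: Sequence[float]) -> Dict[str, float]:
       """Reduce raw download-speed samples to the kind of summary a
       regulator might publish: the median plus the 10th and 90th
       percentiles, which bound the typical spread of results."""
       deciles = quantiles(samples_mbps, n=10)  # 9 interior cut points
       return {
           "median_mbps": median(samples_mbps),
           "p10_mbps": deciles[0],    # 10th percentile
           "p90_mbps": deciles[-1],   # 90th percentile
       }
   ```

   Percentile-based summaries are generally preferred over plain
   averages here because a few very fast or very slow lines would
   otherwise distort the published index.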
   The number and distribution of probes follow specific requirements
   depending on the scope and the desired statistical reliability of
   the measurement campaign.

   Further, the regulator may consider making measurement tools
   available to end users, so that they can monitor the performance of
   their own broadband Internet access service.  They might use this
   information to check that the performance meets that specified in
   their contract, or to understand whether their current subscription
   is the most appropriate.  Such end user scenarios are not the focus
   of the initial LMAP charter, although it is expected that the
   mechanisms developed could be readily applied to them.

4.2  Promoting broadband deployment

   Governments sometimes set strategic goals for high-speed broadband
   penetration as an important component of the economic, cultural and
   social development of society.  To evaluate the effect of the
   stimulated growth over time, broadband Internet access take-up and
   the penetration of high-speed access can be monitored through
   measurement campaigns.

   An example of such an initiative is the "Digital Agenda for
   Europe", adopted in 2010 to achieve universal broadband access.
   The goal is to achieve, by 2020, access for all Europeans to
   Internet access speeds of 30 Mbps or above, with 50% or more of
   European households subscribing to Internet connections above 100
   Mbps.

   To monitor actual broadband Internet access performance in a
   specific country or region, extensive measurement campaigns are
   needed.  A panel can be built based on the operators and packages
   in the market, spread over urban, suburban and rural areas.  Probes
   can then be distributed to the participants of the campaign.

   Periodic tests running on the probes can, for example, measure
   actual speed at peak and off-peak hours, but also other detailed
   quality metrics like delay and jitter.
The collected data then goes through 604 statistical analysis, deriving estimates for the whole population, 605 which can be presented and published regularly. 607 Using a harmonized or standardised measurement methodology, or even a 608 common quality measurement platform, measurement results could also 609 be used for benchmarking of providers and/or countries. 611 4.3 Monitoring "net neutrality" 613 Regulatory approaches related to net neutrality and the open Internet 614 have been introduced in some jurisdictions. Examples include the 615 Internet policy outlined in the FCC's Preserving the Open Internet 616 Report and Order [FCC R&O] and the Body of European Regulators for 617 Electronic Communications' Guidelines for quality of service [BEREC 618 Guidelines]. The exact definitions and requirements vary from one 619 jurisdiction to another; the comments below provide some hints about 620 the potential role of measurements. 622 Net neutrality regulations do not necessarily require every packet to 623 be treated equally. Typically they allow "reasonable" traffic 624 management (for example, if there is exceptional congestion) and allow 625 "specialized services" in parallel to, but separate from, ordinary 626 Internet access (for example, facilities-based IPTV). A regulator 627 may want to monitor such practices as input to the regulatory 628 evaluation. However, these concepts are evolving and differ across 629 jurisdictions, so measurement results should be assessed with 630 caution. 632 A regulator could monitor departures from application agnosticism, 633 such as blocking or throttling of traffic from specific applications, 634 or preferential treatment of specific applications. A measurement 635 system could send, or passively monitor, application-specific traffic 636 and then measure in detail the transfer of the different packets.
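The simplest form of such an active test is a TCP reachability check. The sketch below is a minimal, hypothetical illustration: it tests whether a given port accepts connections, demonstrated here against a local listener only. A real measurement agent would test remote, application-specific endpoints and compare results across ISPs and paths before drawing any conclusion about blocking.

```python
import socket

def port_open(host, port, timeout=2.0):
    """Attempt a TCP connection; return True if the port accepts it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstration against a local listener, so no external traffic is sent.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

open_result = port_open(host, port)    # listener present -> reachable
server.close()
closed_result = port_open(host, port)  # listener gone -> refused

print(open_result, closed_result)
```

Note that a single failed connection is not evidence of deliberate blocking; repeated tests over time and comparison against a control path are needed to separate policy from transient failure.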
637 Whilst it is relatively easy to measure port blocking, how to 638 detect other types of differentiated treatment remains a research 639 topic. The paper "Glasnost: Enabling End Users to Detect Traffic 640 Differentiation" [M-Labs NSDI 2010] and the follow-on tool "Glasnost" 641 [Glasnost] are examples of work in this area. 643 A regulator could also monitor the performance of the broadband 644 service over time, to try to detect whether the specialized service is 645 provided at the expense of the Internet access service. Comparison 646 between ISPs or between different countries may also be relevant for 647 this kind of evaluation. 649 5 Security Considerations 651 This informational document provides an overview of the use cases for 652 LMAP and so does not, in itself, raise any security issues. 654 The framework document [framework] discusses the potential security, 655 privacy (data protection) and business sensitivity issues that LMAP 656 raises. The main threats are: 658 1. a malicious party that gains control of Measurement Agents to 659 launch DoS attacks at a target, or to alter (perhaps subtly) 660 Measurement Tasks in order to compromise the end user's privacy, 661 the business confidentiality of the network, or the accuracy of 662 the measurement system. 664 2. a malicious party that intercepts or corrupts the Measurement 665 Results and/or other information about the Subscriber, for similar 666 nefarious purposes. 668 3. a malicious party that uses fingerprinting techniques to 669 identify individual end users, even from anonymized data. 671 4. a measurement system that does not obtain the end user's 672 informed consent, or fails to specify a specific purpose in the 673 consent, or uses the collected information for secondary uses 674 beyond those specified. 676 5. a measurement system that is vague about who the "data 677 controller" is: the party legally responsible for privacy (data 678 protection).
680 The framework document [framework] also considers some potential 681 mitigations of these issues; they will need to be addressed by an 682 LMAP protocol and, more generally, by any measurement system. 684 6 IANA Considerations 686 None. 688 Contributors 690 The information in this document is partially derived from text 691 written by the following contributors: 693 James Miller jamesmilleresquire@gmail.com 695 Rachel Huang rachel.huang@huawei.com 697 Normative References 699 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 700 Requirement Levels", BCP 14, RFC 2119, March 1997. 702 [IETF85-Plenary] Crawford, S., "Large-Scale Active Measurement of 703 Broadband Networks", 704 http://www.ietf.org/proceedings/85/slides/slides-85-iesg- 705 opsandtech-7.pdf (example from slide 18) 707 [Extend TCP] Michio Honda, Yoshifumi Nishida, Costin Raiciu, Adam 708 Greenhalgh, Mark Handley and Hideyuki Tokuda, "Is It Still 709 Possible to Extend TCP?", Proc. ACM Internet Measurement 710 Conference (IMC), November 2011, Berlin, Germany. 711 http://www.ietf.org/proceedings/82/slides/IRTF-1.pdf 713 [framework] Eardley, P., Morton, A., Bagnulo, M., Burbridge, T., 714 Aitken, P., Akhter, A.,
"A framework for large-scale 715 measurement platforms (LMAP)", 716 http://datatracker.ietf.org/doc/draft-ietf-lmap-framework/ 718 [FCC R&O] United States Federal Communications Commission, 10-201, 719 "Preserving the Open Internet, Broadband Industries 720 Practices, Report and Order", 721 http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-10- 722 201A1.pdf 724 [BEREC Guidelines] Body of European Regulators for Electronic 725 Communications, "BEREC Guidelines for quality of service 726 in the scope of net neutrality", 727 http://berec.europa.eu/eng/document_register/ 728 subject_matter/berec/download/0/1101-berec-guidelines-for- 729 quality-of-service-_0.pdf 731 [M-Labs NSDI 2010] M-Lab, "Glasnost: Enabling End Users to Detect 732 Traffic Differentiation", 733 http://www.measurementlab.net/download/AMIfv945ljiJXzG- 734 fgUrZSTu2hs1xRl5Oh-rpGQMWL305BNQh-BSq5oBoYU4a7zqXOvrztpJh 735 K9gwk5unOe-fOzj4X-vOQz_HRrnYU-aFd0rv332RDReRfOYkJuagysst 736 N3GZ__ lQHTS8_UHJTWkrwyqIUjffVeDxQ/ 738 [Glosnast] M-Lab tool "Glasnost", http://mlab-live.appspot.com/tools/ 739 glasnost 741 Authors' Addresses 743 Marc Linsner 744 Cisco Systems, Inc. 745 Marco Island, FL 746 USA 748 EMail: mlinsner@cisco.com 750 Philip Eardley 751 BT 752 B54 Room 77, Adastral Park, Martlesham 753 Ipswich, IP5 3RE 754 UK 756 Email: philip.eardley@bt.com 758 Trevor Burbridge 759 BT 760 B54 Room 77, Adastral Park, Martlesham 761 Ipswich, IP5 3RE 762 UK 764 Email: trevor.burbridge@bt.com 766 Frode Sorensen 767 Norwegian Post and Telecommunications Authority (NPT) 768 Lillesand 769 Norway 771 Email: frode.sorensen@npt.no