Network Working Group                                          J. Morris
Internet-Draft                                                       CDT
Intended status: Informational                             H. Tschofenig
Expires: April 21, 2011                           Nokia Siemens Networks
                                                                B. Aboba
                                                   Microsoft Corporation
                                                             J. Peterson
                                                           NeuStar, Inc.
                                                        October 18, 2010


             Policy Considerations for Internet Protocols
                    draft-morris-policy-cons-00.txt

Abstract

   Without doubt, the Internet infrastructure has developed far beyond
   the expectations of the original funding agencies, architects,
   developers, and early users.  Society's current use of and
   expectations for the Internet often make it necessary to take into
   consideration the economic and political context in which technology
   is deployed.

   This document aims to make protocol designers aware of the public
   policy-related questions that may impact standards development.  It
   contains questions to be considered, as opposed to guidelines or
   strict rules that should be followed in all cases.  The document
   provides a framework for identifying and discussing questions of
   public policy concern and serves as an umbrella for related policy
   documents.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 21, 2011.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Scope
   3.  Terminology
   4.  Potential Public Policy Concerns
     4.1.  General Comments
     4.2.  Content Censorship and Control
       4.2.1.  Government Censorship
       4.2.2.  Private Control of Content
     4.3.  Discrimination Among Users and Content
     4.4.  Competition and Choice
     4.5.  User Consent
     4.6.  Internationalization
     4.7.  Accessibility
     4.8.  Personal Privacy
     4.9.  Privacy vis-a-vis the Government
   5.  Questions about Technical Characteristics or Functionality
     5.1.  Bottlenecks, Choke-Points and Access Control
     5.2.  Alteration or Replacement of Content
     5.3.  Monitoring or Tracking of Usage
     5.4.  Retention, Collection, or Exposure of Data
     5.5.  Persistent Identifiers and Anonymity
     5.6.  Access by Third Parties
     5.7.  Discrimination among Users, or among Types of Traffic
     5.8.  Internationalization and Accessibility
     5.9.  Innovation, Competition, and End User Choice and Control
   6.  Security Considerations
   7.  IANA Considerations
   8.  Acknowledgments
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Authors' Addresses

1.  Introduction

   This document suggests public policy questions that the authors
   believe should be considered, and possibly addressed, within the
   IETF when it works on new or revised standards or protocols.  The
   document offers questions to be considered rather than guidelines to
   be followed.  These questions are somewhat similar to the "Security
   Considerations" section required in IETF documents.

   This document is inspired by, and directly modeled on, RFC 3426
   [RFC3426], entitled "General Architectural and Policy
   Considerations" and published by the Internet Architecture Board
   (IAB) in November 2002.  In RFC 3426, the IAB raises architectural
   questions that should be considered in design decisions, without
   asserting that there are clear guidelines that should be followed in
   all cases.  This document attempts to follow in the spirit of RFC
   3426 by raising questions to be considered without asserting that
   any particular answers must be followed.

   This document is motivated by the recognition that technical design
   decisions made within the IETF and other standards bodies can have
   significant impacts on public policy concerns.
One well-known, and by
   now historical, example of this impact can be found in the
   standardization efforts around IPv6 on Ethernet networks.
   [RFC2464], published in December 1998, specified that the interface
   identifier of an IPv6 address was constructed from the unique MAC
   address associated with the Ethernet interface adapter.  After the
   publication of RFC 2464, a significant policy concern arose because
   the use of the unique and unchangeable MAC address would
   significantly reduce a user's ability to conduct private and/or
   anonymous communications using IPv6.  The IETF responded to those
   concerns by publishing RFC 3041 [RFC3041], entitled "Privacy
   Extensions for Stateless Address Autoconfiguration in IPv6", in
   January 2001; RFC 3041 has since been obsoleted by RFC 4941.
   Privacy concerns related to this aspect of IPv6 persist today.

   The goal of this document is that potential public policy impacts of
   technical design decisions be identified and considered during the
   initial design process.  Some would refer to this approach as
   "privacy by design".  This type of policy consideration already
   happens in many cases within the IETF, but not in any systematic way
   or with any assurance that public policy concerns will be identified
   in most cases.  We provide some examples throughout this document.

   The goal of the document is not to suggest that the IETF should "do"
   policy in the sense of intentionally conducting extensive debates on
   public policy issues.  However, many of the actions taken within the
   IETF have an impact on public policy concerns.  This document seeks
   to encourage the IETF to acknowledge those times when a design
   decision might affect a policy concern, so that the community can
   make a reasoned decision on whether and how to address the concern
   in the particular situation.
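
   As an illustration of the mechanics at issue, the following sketch
   (illustrative Python, not part of any IETF specification; the
   function names are our own) contrasts an RFC 2464-style interface
   identifier, which embeds the interface's MAC address, with a
   randomized identifier in the spirit of RFC 3041:

```python
import secrets

def eui64_interface_id(mac: str) -> str:
    """Modified EUI-64 interface identifier in the style of RFC 2464:
    the unique MAC address is embedded directly in the IPv6 address."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local ("u") bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:04x}" for i in range(0, 8, 2))

def temporary_interface_id() -> str:
    """Randomized interface identifier in the spirit of RFC 3041/4941:
    it carries no stable link back to the hardware address."""
    octets = bytearray(secrets.token_bytes(8))
    octets[0] &= 0xFD  # clear the "u" bit: not derived from a global identifier
    return ":".join(f"{(octets[i] << 8) | octets[i + 1]:04x}" for i in range(0, 8, 2))

# The EUI-64 form is trivially reversible to the MAC address and is
# identical on every network the host visits; the temporary form is not.
print(eui64_interface_id("00:11:22:33:44:55"))  # 0211:22ff:fe33:4455
print(temporary_interface_id())
```

   Because the EUI-64 identifier is stable across networks and over
   time, it acts as a persistent tracking token; the randomized form
   avoids that at the cost of address stability.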

   Public policy concerns often cannot be avoided: some beneficial
   technologies might have secondary harmful impacts, and the benefits
   may outweigh the harms.  More generally, some technologies (such as
   those that facilitate government surveillance) might intentionally
   compromise a public policy concern such as privacy.  Similarly, the
   inherent goal of some technologies (such as those that discriminate
   among traffic to provide assured levels of quality of service) might
   simultaneously be viewed by some as beneficial and by others as
   harmful.

   In all of these cases, there may well be good reasons to develop the
   technology notwithstanding the asserted harms to a policy concern.
   The main goal of this document is simply to suggest that impacts on
   a public policy concern should not happen without clear recognition
   of the impacts, and without appropriate consideration of whether it
   is possible to minimize harmful impacts while still meeting the
   design requirements.

2.  Scope

   This document cannot possibly predict and identify all possible
   societal impacts of future IETF protocol and architectural design
   decisions.  It does try, however, to identify a broad range of
   possible public policy impacts that experience suggests are most
   likely to arise.

   There are two broad categories of public policy impacts that this
   document does not seek to cover with any thoroughness.  First, this
   document does not articulate the full range of concerns raised by
   traditional security problems in the network.  The IETF is already
   appropriately focused on security issues, and those in the Security
   Area are well able to identify and articulate the types of technical
   design decisions that can lead to security problems.  Many of the
   privacy concerns highlighted in this document raise related security
   concerns.

   Second, this document does not attempt to identify the enormous
   range of positive societal impacts that flow from network
   technology.  The vast majority of the work of the IETF -- from the
   introduction of an entirely new method of Internet use to the
   fine-tuning of an existing routing protocol -- yields concrete and
   important social benefits.  This document does not discuss these
   positive benefits, but takes as a given that technology proposals
   will not advance within the IETF unless at least some portion of the
   community views the proposals as beneficial.

   This document is by no means an exhaustive list of public policy
   concerns that relate to the Internet.  It instead focuses on policy
   issues that the authors believe are most likely to arise in the IETF
   context.  In addition, views on public policy vary to a certain
   degree among countries and cultures.

3.  Terminology

   This document uses a limited number of defined terms, which
   admittedly will not be precisely applicable in all situations:

   TECHNOLOGY shall refer to a technical standard or innovation being
   considered within the IETF, whether it is a "new" technology or
   standard or a modification to an "old" technology or standard.

   END USER shall refer to the user at one or the other end of a
   network communication, or an automated or intelligent proxy for a
   user located at the end of the communication.  Thus, a concern over,
   for example, the privacy of the End User would be applicable in
   cases where a client-side application communicates on behalf of an
   End User.  In some contexts, a corporation or other organized
   collection of human users might stand in the role of an End User.
   In some but not all contexts, a communication might be from one End
   User to another End User; in other contexts, a communication might
   be between a Service Provider (defined below) and an End User.

   ACCESS PROVIDER shall refer to the entity that most directly
   provides network access to an End User or Service Provider.  In the
   case of End Users on the public Internet, an Access Provider will
   often be an Internet Service Provider that provides dedicated or
   dial-up network access.  In other cases an Access Provider might be
   a company providing access to its employees, or a university
   providing access to its students and faculty.

   SERVICE PROVIDER shall refer to an entity (human, corporate, or
   institutional) that provides or offers services or content to End
   Users over the network (regardless of whether charges are sought for
   such services or content).  Thus, for example, a web site would be
   viewed as a Service Provider.

   A given entity (such as a company offering content on the web) might
   be viewed as an Access Provider (vis-a-vis its employees), as an End
   User (vis-a-vis the ISP from which it obtains network access), and
   as a Service Provider (vis-a-vis End Users elsewhere on the
   Internet).

   TRANSIT PROVIDERS shall refer to one or more entities that transport
   communications between the Access Providers at either end of a
   communication.  Transit Providers are often thought to transport
   packets of communications without regard to their content (other
   than, of course, their destination), but increasingly some Transit
   Providers may handle traffic differently depending on the type of
   traffic.

   THIRD PARTY shall refer to any individual or entity other than End
   Users, Access Providers, Service Providers, and Transit Providers.
   For a given communication, Third Parties could include, for example,
   governments seeking to execute lawful interceptions, hackers seeking
   to interfere with or intercept communications, or, in some
   situations, entities that provide content or functionality to a
   Service Provider under contract (such as, for example, an entity
   that serves advertisements for insertion in a web page).

   In some cases the distinction between a Transit Provider and a Third
   Party may blur, if the Transit Provider manipulates or discriminates
   among traffic based on characteristics such as its content, sender,
   or receiver.  Similarly, the line between a Service Provider and a
   Third Party may blur as more service functions are contracted out.

4.  Potential Public Policy Concerns

   Below are brief discussions of common categories of public policy
   concern that might be raised by technologies developed by the IETF.
   The discussions are not intended to present comprehensive analyses
   of the policy concerns, but are intended to assist in identifying
   situations in which a concern is implicated and should be
   considered.

4.1.  General Comments

   The fundamental design principles of the Internet, including
   openness, interoperability, and the end-to-end principle, have
   themselves been critical contributors to the value of the Internet
   from a public policy perspective.  Thus, as a first rule of
   promoting healthy public policy impacts, the IETF should continue to
   maintain and promote the architectural goals that it has
   historically pursued.

   Because of this congruence between architectural values and public
   policy values, many of the design considerations in RFC 3426
   [RFC3426], "General Architectural and Policy Considerations",
   directly promote an Internet that is supportive of good public
   policy values.
   As one of many examples, Section 12.1 of [RFC3426] discusses the
   value of user choice, and quotes [CWSB02] to say that "user
   empowerment is a basic building block, and should be embedded into
   all mechanism whenever possible."  User choice is a fundamental
   public policy concern, discussed more below.

   [CWSB02], titled "Tussle in Cyberspace: Defining Tomorrow's
   Internet," is itself a valuable exploration of the intersection
   between technology design and public policy concerns.  A key premise
   of [CWSB02] is that "different stakeholders that are part of the
   Internet milieu have interests that may be adverse to each other,
   and these parties each vie to favor their particular interests."
   Many of the "tussles" that [CWSB02] analyzes are situations in which
   public policy considerations should be assessed in making design
   decisions.  More broadly, [CWSB02] provides technology designers
   with a conceptual framework that recognizes the existence of
   "tussles" and seeks to accommodate them constructively within a
   design.

4.2.  Content Censorship and Control

   As used here, the concept of censorship can encompass both
   governmental and private actions.

4.2.1.  Government Censorship

   [Editor's Note: Add more references in an upcoming version of the
   draft.]

   "Censorship" is most commonly thought of as government-imposed
   control or blocking of access to content.  Many believe that, as a
   matter of public policy, censorship should be minimized or avoided.
   For example, in May 2003 the Council of Europe stated in its
   "Declaration on freedom of communication on the Internet" that
   "Public authorities should not, through general blocking or
   filtering measures, deny access by the public to information and
   other communication on the Internet, regardless of frontiers."
   [COE03].  But not all censorship is viewed by all as contrary to
   public policy.
   In November 2002, in [COE02], the same Council of Europe
   specifically endorsed government regulation of "hate speech" on the
   Internet.

   Harder to identify are technologies not intended for content control
   but which can be used to censor or restrict access to content.  Any
   technology that creates or permits bottlenecks or choke-points in
   the network, through which significant traffic must pass, increases
   the risk of censorship.  Governments seeking to censor content or
   restrict access to the Internet will exploit network bottlenecks
   (albeit often bottlenecks created by network topology rather than by
   technology standards).

4.2.2.  Private Control of Content

   Governments are not the only entities that attempt to restrict the
   content to which Internet users have access.  In some cases Access
   Providers (commonly Internet Service Providers) seek to control the
   content available to their customers.  Some do so with the full
   knowledge and consent of their customers (to provide, for example, a
   "family friendly" online experience).  Others, however, favor
   certain content (for example, that of contractual business partners)
   over competing content, and do so without the clear understanding of
   their customers.

   Whether such private content control is contrary to public policy
   will turn on a host of specific considerations (including notice and
   alternative choice), but undeniably such content control raises
   policy concerns.  These policy concerns are commonly phrased in
   terms of discrimination among content, and are discussed more fully
   in the next section.

4.3.  Discrimination Among Users and Content

   In a simplistic conception of the early Internet, all traffic of any
   kind was broken into packets and all packets were treated equally
   within the network.
This idea has promoted a broad and strong
   perception of equality within the Internet -- one class of traffic
   will not take priority over other classes, and a lone individual's
   packets will be treated the same as a large corporation's packets.

   Any technology that moves away from this notion of equality -- even
   a technology that is clearly beneficial -- raises significant public
   policy questions, including "who controls the preferential
   treatment," "who qualifies for it," "will it require additional
   expenditure to obtain it," and "how great a disparity will it
   create."

   Thus, for example, quality of service and content distribution
   networks both raise questions about what and who will be favored,
   whether the rough equality of the Internet will be lost, and whether
   the financially strong will come to dominate the Internet and make
   it less useful for the less well off.

   The concern over discrimination addresses both discrimination based
   on the identity of the user and discrimination based on the type of
   traffic.  Content distribution networks enable, for example,
   individual web sites that can afford CDN services to deliver content
   more quickly than competing web sites that cannot afford such
   services.  In contrast, a core focus of quality of service efforts
   is on the need to provide enhanced levels of service to some types
   of traffic (e.g., Internet telephony).

   Concern about discrimination does not suggest that technologies that
   handle certain categories of traffic more efficiently should never
   be pursued.  The concern, however, may in some cases suggest that an
   efficiency enhancement be structured so as to be available to the
   broadest classes of traffic or users.

4.4.  Competition and Choice

   Critical elements of the Internet's development and success have
   been the ability to create new and innovative uses of the network,
   the relative ease of creating and offering competitive services,
   products, and methods, and the ability of Internet users to choose
   from a range of providers and methods.  Anything that reduces
   innovation, competition, or user choice raises significant public
   policy concerns.

   Indeed, the need for competition and user choice is perhaps greater
   now than in the earlier days of the Internet.  There is a greater
   divergence today in the interests and agendas of users and service
   providers than in the past, and that divergence makes it more
   important that users be able to choose among service providers (in
   part to seek out the providers that they trust the most).

   [CWSB02] extensively addresses the important need for competition
   and user choice, and provides detailed suggestions and guidelines
   for Internet designers to consider.

4.5.  User Consent

   A familiar public policy concern over user consent focuses on the
   use of personal data (as discussed more fully below under
   "Privacy").  The usage here, however, has a broader meaning: the
   consent (or lack of consent) of a user regarding an action or
   function executed by or within the network.

   Many actions performed using IETF protocols require specific
   initiation by a user, and the user's consent can fairly be assumed.
   Thus, if a user transmits a request using SIP, the Session
   Initiation Protocol, it is safe to assume that the user consents to
   the normal handling and execution of the SIP request.

   Other actions performed using IETF protocols are not initiated by a
   user, but are so inherently a part of normal network operations that
   consent can be assumed.
For example, if in the middle of the
   network certain packets are slowed by congestion, it is safe to
   assume sufficient consent for congestion control mechanisms and
   rerouting of the packets.

   Uncertainty about consent arises, however, in areas where IETF
   protocols can be viewed as deviating from some conception of
   "normal."  A simple example relates to the evolution of caching: as
   caching of various types of data became the norm, there emerged a
   need to be able to set flags to prevent caching, which in a sense
   can be thought of as a form of negative consent.

   Middle boxes and other functions that deviate from the historic
   "norm" -- the end-to-end principle -- can also raise issues of
   consent.  For example, Section 3 of [RFC3238], titled "IAB
   Architectural and Policy Considerations for Open Pluggable Edge
   Services," explores a range of consent and data integrity issues
   raised by the OPES protocol proposals.  As that analysis makes
   clear, the consent issue is not necessarily confined to the consent
   of the client in a client/server transaction, but may also involve
   the consent of the server to an action undertaken on the request of
   the client.

4.6.  Internationalization

   [RFC3426] calls on protocol designers to ask the key question about
   "Internationalization":

      "Where protocols require elements in text format, have the
      possibly conflicting requirements of global comprehensibility and
      the ability to represent local text content been properly weighed
      against each other?"

   [RFC3426] explores the significant challenges raised by the need to
   balance these conflicting goals, and raises the possibility that the
   historic preference for the use of case-independent ASCII characters
   in protocols may need to change to accommodate a broader set of
   international languages.

4.7.  Accessibility

   The concept of "accessibility" addresses the ability of persons with
   disabilities to use the Internet in general, and in particular the
   full range of applications and network functions that are commonly
   available to persons without disabilities.

   The W3C Web Accessibility Initiative (WAI) Technical Activity
   illustrates the concern and explains that a focus on accessibility
   is needed "to ensure that the full range of core technologies of the
   Web are accessible . . . .  Barriers exist when these technologies
   lack features needed by users with visual, hearing, physical,
   cognitive or neurological disabilities, or when the accessibility
   potential in the technology is not carried through into the Web
   application or Web content.  For instance, in order for a multimedia
   presentation to be accessible to someone who is blind, the markup
   language for the presentation must support text equivalents for
   images and video; the multimedia player used must support access to
   the text equivalents; and the content author must make appropriate
   text equivalents available.  These text equivalents can then be
   rendered as speech or braille output, enabling access to the content
   regardless of disability or device constraints."

   Many policy concerns about accessibility relate specifically to the
   user interfaces used by applications, and as such these concerns
   generally fall outside of the province of the IETF.  But in the
   Applications Area, and to a lesser extent elsewhere within the IETF,
   some design decisions could ultimately constrain the accessibility
   of applications based on IETF protocols.

   The W3C's WAI initiative reflects a very well developed and
   comprehensive analysis of the technical and design issues raised by
   accessibility concerns.

   [Editor's Note: A future version of this document will add text
   about multi-media emergency services support here.]

4.8.  Personal Privacy

   Individual privacy concerns are often divided into two components.
   First, "consumer privacy" (also termed "data protection") commonly
   addresses the right of individuals to control information about
   themselves generated or collected in the course of commercial
   interactions.  Second, "privacy rights vis-a-vis the government"
   addresses individuals' protection against unreasonable government
   intrusions on privacy, including the interception of communications.

   In the IETF context, a third category of privacy concern -- privacy
   against private interception of, or attacks on, data or
   communications -- is largely covered by the IETF's focus on security
   considerations.  Although security considerations are crucial to
   privacy considerations, "consumer privacy" and "privacy vis-a-vis
   the government" raise significantly different issues than
   traditional security considerations.  With security considerations,
   a key focus is on maintaining the privacy of information against
   unauthorized attack.  Other forms of privacy, however, focus not on
   unauthorized access to information, but on the "secondary use" of
   information for which access was (at least temporarily) authorized.
   The question often is not "how can I keep you from seeing my
   information" but "how can I give you my information for one purpose
   and keep you from using it for another."

   The questions raised in Section 5 below do not differentiate between
   the different categories of privacy, because for most purposes
   within the IETF, technologies that create risk for one type of
   privacy likely also create risk for other types of privacy.  Once a
   potential privacy concern is identified, however, the different
   types of privacy concern may present different public policy
   considerations.
   Indeed, the policy considerations may well be in tension -- a
   technology that permits lawful governmental interception of a
   communication may also increase the risk of unlawful private
   interception.

   Privacy considerations are too numerous and multifaceted to be
   adequately addressed in this document.  For a more detailed
   treatment please refer to [I-D.morris-privacy-considerations].

4.9.  Privacy vis-a-vis the Government

   Although privacy is internationally recognized as a human right,
   most governments claim the authority to invade privacy through the
   following means, among others:

   o  interception of communications in real-time;

   o  interception of traffic data (routing information) in real-time;

   o  access to data stored by service providers, including traffic
      data being stored for billing purposes; and

   o  access to data stored by users.

   These means of access to communications and stored data should be
   narrowly defined and subject to independent controls under strict
   standards.  Real-time interception of communications should take
   place only with prior approval by the judicial system, issued under
   standards at least as strict as those for police searches of private
   homes.

   In 1999, in the "Raven" discussions, the IETF considered whether it
   should take action to build wiretapping capability into the
   Internet.  Ultimately, as detailed in [RFC2804], the community
   decided that an effort to build wiretapping capability into the
   Internet would create significant and unacceptable security risks.

5.  Questions about Technical Characteristics or Functionality

   In this section we list questions to ask when designing protocols.
   The issues raised by the questions are discussed in more depth in
   Section 4.  We are not suggesting that each of these questions
   requires an explicit answer -- some questions will be more relevant
   to one design decision than to another.
581 There is not a one-to-one correspondence between the questions listed 582 in this section and the discussions in Section 4. Instead, for each 583 group of questions listed below, there are one or more references to 584 the relevant substantive discussions. 586 Some of the questions will be easy to answer for a given technology. 587 Others will require creative thinking to assess whether a proposed 588 technology might be misused to achieve a result not intended by the 589 technology proponents. 591 This document addresses the most common and well-known areas of 592 public policy concern, focusing on areas most likely to arise in the 593 IETF context. 595 5.1. Bottlenecks, Choke-Points and Access Control 597 o Would the Technology facilitate any bottlenecks or choke-points in 598 the network through which significant amounts of particular types 599 of traffic must flow? 601 o Would the Technology permit a Third Party (including a government) 602 to exert control over End Users' use of the Internet as a whole? 604 o Would the Technology permit a Transit Provider or Third Party 605 (including a government) to exert control over the use of 606 particular content, functionality, or resources? 608 o Would the Technology permit an Access Provider or Service Provider 609 to exert control over particular content, functionality, or 610 resources, other than that known by the End User to be controlled 611 by the Access Provider or Service Provider? 613 o Would the Technology permit a Third Party (including a government) 614 to require that particular content or functionality be confined 615 (or "zoned") into, or excluded from, any particular subpart of the 616 Internet (such as a particular generic Top Level Domain)? 618 See discussions of "Content Censorship and Control", "Personal 619 Privacy", "Discrimination Among Users and Content", 620 "Competition and Choice", and "User Consent." 622 5.2.
Alteration or Replacement of Content 624 o Would the Technology permit a Third Party to alter any of the 625 content of a communication without (a) the express instruction or 626 consent of the Service Provider and the End User, or (b) the 627 knowledge of the Service Provider or the End User? 629 See discussions of "Content Censorship and Control" and "User 630 Consent." 632 5.3. Monitoring or Tracking of Usage 634 o Would the Technology permit the monitoring or tracking by a Third 635 Party of the use of particular content, functionality, or 636 resources? 638 See discussion of "Personal Privacy." 640 5.4. Retention, Collection, or Exposure of Data 642 o Would the Technology require or permit the retention of any 643 information about individual packets or communications, or 644 individual End Users, either (a) beyond the conclusion of the 645 immediate network or communications event, or (b) for longer than 646 a reasonably brief period of time in which a communications 647 "session" can be concluded? 649 o Would the Technology permit the reading or writing of any file on 650 an End User's computer without the explicit knowledge of the End 651 User? 653 o Would the Technology permit or require that information other than 654 location and routing information (such as, for example, personal 655 information or search terms) be made a part of a URL or URI used 656 for a communication? 658 o Would the Technology permit or require that personal or 659 confidential information be made available to any Third Party, 660 Transit Provider, or Access Provider? 662 See discussion of "Personal Privacy." 664 5.5. Persistent Identifiers and Anonymity 666 o Would the Technology require or permit the association of a 667 persistent identifier with a particular End User, or a computer 668 used by one or more End Users? 670 o Would the Technology reduce the ability of a content provider to 671 provide content anonymously? 
673 o Would the Technology reduce the ability of an End User to access 674 content or utilize functionality anonymously? 676 See discussion of "Personal Privacy." 678 5.6. Access by Third Parties 680 o Would the Technology permit any Third Party to have access to 681 packets to and from End Users without the explicit consent of the 682 End Users? 684 o Would the Technology permit or require any Third Party to store 685 any information about an End User, or an End User's communications 686 (even with the knowledge and consent of the End User)? 688 See discussions of "Personal Privacy" and "User Consent." 690 5.7. Discrimination among Users, or among Types of Traffic 692 o Would the Technology require or permit an Access Provider or 693 Transit Provider to provide differing levels of service or 694 functionality based on (a) the identity or characteristic of the 695 End User, or (b) the type of traffic being handled? 697 o Would the Technology likely lead to a significant increase in cost 698 for basic or widely-used categories of communications? 700 o Would the likely implementations of a new mode of communication 701 require such a financial or resource investment that the mode 702 would effectively not be available to individuals, or small or 703 non-profit entities? 705 See discussion of "Discrimination Among Users and Content." 707 5.8. Internationalization and Accessibility 709 o Would the Technology function with the same level of quality, ease 710 of use, etc., across a broad range of languages and character 711 sets? 713 o Would the likely implementations of the Technology be as useful to 714 mainstream End Users as to non-mainstream End Users (such as, for 715 example, End Users with disabilities)? 717 o Would the Technology likely reduce the ability of non-mainstream 718 End Users (such as, for example, End Users with disabilities) to 719 utilize any common application or network functions? 721 See discussions of "Internationalization" and "Accessibility."
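The internationalization questions above have concrete protocol consequences: a protocol whose identifiers are restricted to ASCII, such as DNS host names, forces non-ASCII names through an encoding step before they can appear on the wire, and End Users writing in many languages see only the encoded form unless applications translate it back. The sketch below is illustrative only (it is not drawn from this draft); the helper name is hypothetical, and the encoding shown uses Python's built-in "idna" codec (IDNA 2003).

```python
# Illustrative sketch: a non-ASCII domain name must be transformed into
# an ASCII ("punycode") form before it can traverse an ASCII-only
# protocol such as DNS.  Uses Python's built-in "idna" codec.

def to_wire_form(domain: str) -> str:
    """Encode an internationalized domain name, label by label, into
    the ASCII-compatible form carried on the wire."""
    return domain.encode("idna").decode("ascii")

print(to_wire_form("bücher.example"))  # xn--bcher-kva.example
```

A protocol designer asking the questions in Section 5.8 would consider whether such transformations degrade usability for End Users whose languages require them.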
723 5.9. Innovation, Competition, and End User Choice and Control 725 o Would the Technology reduce the ability of future designers to 726 create new and innovative uses of the Internet, or new methods to 727 accomplish common network functions? 729 o Would the Technology likely reduce the number of viable 730 competitive providers of any common application or network 731 functions? 733 o Would the Technology likely reduce the ability of small or poorly- 734 funded providers to compete in the provision of any common 735 application or network functions? 737 o Would the Technology likely reduce the number or variety of 738 methods available to the End User to accomplish common application 739 or network functions? 741 o Would the Technology likely reduce the level of control the End 742 User can exercise over common application or network functions? 744 See discussion of "Competition and Choice." 746 6. Security Considerations 748 This document does not propose any new protocols or changes to 749 existing protocols, and therefore does not involve any security 750 considerations in that sense. Many of the privacy issues discussed 751 here also raise security issues, but this document is not intended to 752 be a comprehensive treatment of security. 754 7. IANA Considerations 756 This document does not require actions by IANA. 758 8. Acknowledgments 760 We would like to thank Alan B. Davidson for his work on a prior 761 version of this document. 763 9. References 765 9.1. Normative References 767 [RFC2316] Bellovin, S., "Report of the IAB Security Architecture 768 Workshop", RFC 2316, April 1998. 770 [RFC4101] Rescorla, E. and IAB, "Writing Protocol Models", RFC 4101, 771 June 2005. 773 [RFC3426] Floyd, S., "General Architectural and Policy 774 Considerations", RFC 3426, November 2002. 776 [RFC2464] Crawford, M., "Transmission of IPv6 Packets over Ethernet 777 Networks", RFC 2464, December 1998. 779 [RFC3041] Narten, T. and R.
Draves, "Privacy Extensions for 780 Stateless Address Autoconfiguration in IPv6", RFC 3041, 781 January 2001. 783 [RFC3238] Floyd, S. and L. Daigle, "IAB Architectural and Policy 784 Considerations for Open Pluggable Edge Services", 785 RFC 3238, January 2002. 787 [RFC2804] IAB and IESG, "IETF Policy on Wiretapping", RFC 2804, 788 May 2000. 790 9.2. Informative References 792 [I-D.morris-privacy-considerations] 793 Aboba, B., Morris, J., Peterson, J., and H. Tschofenig, 794 "Privacy Considerations for Internet Protocols", 795 draft-morris-privacy-considerations-00 (work in progress), 796 October 2010. 798 [CWSB02] Clark, D., Wroclawski, J., Sollins, K., and R. Braden, 799 "Tussle in Cyberspace: Defining Tomorrow's Internet", In 800 Proc. ACM SIGCOMM, 801 http://www.acm.org/sigcomm/sigcomm2002/papers/tussle.html, 802 2002. 804 [OECD] Organization for Economic Co-operation and Development, 805 "OECD Guidelines on the Protection of Privacy and 806 Transborder Flows of Personal Data", available at 807 (September 2010), http://www.oecd.org/EN/document/ 808 0,,EN-document-0-nodirectorate-no-24-10255-0,00.html, 809 1980. 811 [COE02] Council of Europe, "Additional Protocol to the Convention 812 on Cybercrime concerning the criminalisation of acts of a 813 racist and xenophobic nature committed through computer 814 systems", available at (October 2010), http:// 815 www.coe.int/T/E/Legal_affairs/Legal_co-operation/ 816 Combating_economic_crime/Cybercrime/Racism_on_internet/ 817 PC-RX(2002)24E-1.pdf, November 2002. 819 [COE03] Council of Europe, "Declaration on freedom of 820 communication on the Internet", available at (October 821 2010), http://cm.coe.int/stat/E/Public/2003/ 822 adopted_texts/declarations/dec-28052003.htm, May 2003. 824 Authors' Addresses 826 John B. Morris, Jr.
827 Center for Democracy and Technology 828 1634 I Street NW, Suite 1100 829 Washington, DC 20006 830 USA 832 Email: jmorris@cdt.org 833 URI: http://www.cdt.org 835 Hannes Tschofenig 836 Nokia Siemens Networks 837 Linnoitustie 6 838 Espoo 02600 839 Finland 841 Phone: +358 (50) 4871445 842 Email: Hannes.Tschofenig@gmx.net 843 URI: http://www.tschofenig.priv.at 845 Bernard Aboba 846 Microsoft Corporation 847 One Microsoft Way 848 Redmond, WA 98052 849 US 851 Email: bernarda@microsoft.com 853 Jon Peterson 854 NeuStar, Inc. 855 1800 Sutter St Suite 570 856 Concord, CA 94520 857 US 859 Email: jon.peterson@neustar.biz