idnits 2.17.00 (12 Aug 2021)

/tmp/idnits52856/draft-morris-privacy-considerations-01.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------
  No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------
  No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------
  No issues found here.

  Miscellaneous warnings:
  ----------------------------------------------------------------------
  == The copyright year in the IETF Trust and authors Copyright Line does
     not match the current year
  -- The document date (October 25, 2010) is 4225 days in the past.  Is
     this intentional?

  Checking references for intended status: Informational
  ----------------------------------------------------------------------
  == Outdated reference: A later version (-03) exists of
     draft-hansen-privacy-terminology-01
  == Outdated reference: draft-ietf-ecrit-framework has been published as
     RFC 6443
  == Outdated reference: draft-ietf-geopriv-arch has been published as
     RFC 6280
  == Outdated reference: draft-ietf-geopriv-policy has been published as
     RFC 6772
  -- Obsolete informational reference (is this intentional?): RFC 3265
     (Obsoleted by RFC 6665)
  -- Obsolete informational reference (is this intentional?): RFC 4282
     (Obsoleted by RFC 7542)

  Summary: 0 errors (**), 0 flaws (~~), 5 warnings (==), 3 comments (--).

  Run idnits with the --verbose option for more detailed information about
  the items above.

--------------------------------------------------------------------------------

Network Working Group                                           B. Aboba
Internet-Draft                                     Microsoft Corporation
Intended status: Informational                                 J. Morris
Expires: April 28, 2011                                              CDT
                                                             J. Peterson
                                                           NeuStar, Inc.
                                                           H. Tschofenig
                                                  Nokia Siemens Networks
                                                        October 25, 2010

            Privacy Considerations for Internet Protocols
             draft-morris-privacy-considerations-01.txt

Abstract

   This document aims to make protocol designers aware of privacy-
   related design choices and offers guidance for developing privacy
   considerations for IETF documents.  While specifications cannot
   police the implementation community, protocol architects must
   nonetheless play a role in the improvement of privacy, both by
   making a conscious decision to design for privacy and by documenting
   privacy risks in protocol designs.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 28, 2011.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Historical Background
   3.  Scope
   4.  Threat Model
   5.  Guidelines
   6.  Example
     6.1.  Presence
     6.2.  AAA for Network Access
   7.  Security Considerations
   8.  IANA Considerations
   9.  Acknowledgements
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses

1.  Introduction

   The IETF produces specifications that aim to make the Internet
   better.  Those specifications fall into a number of different
   categories, including protocol specifications, best current practice
   descriptions, and architectural documentation.  While IETF documents
   are typically implementation-agnostic, they are often, if not
   always, impacted by fundamental architectural design decisions.
   These design decisions in turn hinge on technical aspects,
   predictions about deployment incentives, operational considerations,
   legal concerns, security frameworks, and so on.
   This document aims to make protocol designers aware of privacy-
   related design choices and offers guidance for developing privacy
   considerations for IETF documents.  While specifications cannot
   police the implementation community, protocol architects must
   nonetheless play a role in the improvement of privacy, both by
   making a conscious decision to design for privacy and by documenting
   privacy risks in protocol designs.  While we discuss the limitations
   of standards activities in Section 3, we maintain that the IETF
   community, in its mandate to "make the Internet better", has a role
   to play in making its specifications, and the Internet, more privacy
   friendly.  This must spring from awareness of how design decisions
   impact privacy, and must be reflected both in protocol design and in
   the documentation of potential privacy challenges in the deployment
   of a single protocol or an entire suite of protocols.

   From the activities in the industry, one can observe three schools
   of thought in the work on privacy, namely:

   Privacy by Technology:

      This approach considers the assurance of privacy in the design of
      a protocol as a technical problem.  For example, the design of a
      specific application may heighten privacy by sharing fewer data
      items with other parties (i.e., data minimization).  Limiting
      data sharing also avoids the need to evaluate how data-related
      consent is obtained, to define policies around how to protect
      data, etc.  Ultimately, different architectural designs will lead
      to different results with respect to privacy.

      Examples in this area of location privacy can be found in
      [EFF-Privacy].  These solutions often make heavy use of
      cryptographic techniques, such as threshold cryptography and
      secret sharing schemes.

   Privacy by Policy:

      In this approach, privacy protection happens through establishing
      the consent of the user to a set of privacy policies.
      Hence, protection of the user's privacy is largely the
      responsibility of the company collecting, processing, and storing
      personal data.  Notices and choices are offered to the customer
      and backed up by an appropriate legal framework.

      An example of this approach for the privacy of location-based
      services is the recent publication by CTIA [CTIA].

   Policy/Technology Hybrid:

      This approach targets a middle ground where some privacy-
      enhancing features can be provided by technology, and made
      attractive to implementers (via explicit best current practices
      for implementation, configuration and deployment, or by raising
      awareness implicitly via a discussion about privacy in technical
      specifications), but other aspects can only be provided and
      enforced by the parties who control the deployment.  Deployments
      often base their decisions on the existence of a plausible legal
      framework.

   The authors believe that the policy/technology hybrid approach is
   the most practical one, and therefore propose that privacy
   considerations within the IETF follow its principles.

   The remainder of this document is structured as follows: First, we
   provide a brief introduction to the concept of privacy in Section 2.
   In Section 3, we illustrate what is in scope for the IETF and where
   the responsibility of the IETF ends.  In Section 4, we discuss the
   main threat model for privacy considerations.  In Section 5, we
   propose guidelines for documenting privacy within IETF
   specifications, and in Section 6 we examine the privacy
   characteristics of a few exemplary IETF protocols and explain what
   privacy features have been provided to date.

2.  Historical Background

   The "right to be let alone" is a phrase coined by Warren and
   Brandeis in their seminal Harvard Law Review article on privacy
   [Warren].
   They were the first scholars to recognize that a right to privacy
   had evolved in the 19th century to embrace not only physical privacy
   but also a potential "injury of the feelings", which could, for
   example, result from the public disclosure of embarrassing private
   facts.

   In 1967, Westin [Westin] described privacy as a "personal adjustment
   process" in which individuals balance "the desire for privacy with
   the desire for disclosure and communication" in the context of
   social norms and their environment.  Privacy thus requires that an
   individual has a means to exercise selective control of access to
   the self and is aware of the potential consequences of exercising
   that control [Altman].

   Efforts to define and analyze the privacy concept evolved
   considerably in the 20th century.  In 1975, Altman conceptualized
   privacy as a "boundary regulation process whereby people optimize
   their accessibility along a spectrum of 'openness' and 'closedness'
   depending on context" [Altman].  "Privacy is the claim of
   individuals, groups, or institutions to determine for themselves
   when, how, and to what extent information about them is communicated
   to others.  Viewed in terms of the relation of the individual to
   social participation, privacy is the voluntary and temporary
   withdrawal of a person from the general society through physical or
   psychological means, either in a state of solitude or small-group
   intimacy or, when among larger groups, in a condition of anonymity
   or reserve." [Westin]

   Note: Altman and Westin were referring to nonelectronic
   environments, where privacy intrusion was typically based on fresh
   information, referring to one particular person only, and stemming
   from traceable human sources.  The scope of possible privacy
   breaches was therefore rather limited.
   Today, in contrast, details about an individual's activities are
   typically stored over a longer period of time and collected from
   many different sources, and information about almost every activity
   in life is available electronically.

   In 1980, the Organization for Economic Co-operation and Development
   (OECD) published eight Guidelines on the Protection of Privacy and
   Trans-Border Flows of Personal Data [OECD], which are often referred
   to as Fair Information Practices (FIPs).  Fair information practices
   include the following principles:

   Notice and Consent:  Before the collection of data, the data subject
      should be provided notice of what information is being collected
      and for what purpose, and an opportunity to choose whether to
      accept the data collection and use.  In Europe, data collection
      cannot proceed unless the data subject has unambiguously given
      his consent (with exceptions).

   Collection Limitation:  Data should be collected for specified,
      explicit and legitimate purposes.  The data collected should be
      adequate, relevant and not excessive in relation to the purposes
      for which they are collected.

   Use/Disclosure Limitation:  Data should be used only for the purpose
      for which it was collected and should not be used or disclosed in
      any way incompatible with those purposes.

   Retention Limitation:  Data should be kept in a form that permits
      identification of the data subject no longer than is necessary
      for the purposes for which the data were collected.

   Accuracy:  The party collecting and storing data is obligated to
      ensure its accuracy and, where necessary, keep it up to date;
      every reasonable step must be taken to ensure that data which are
      inaccurate or incomplete are corrected or deleted.

   Access:  A data subject should have access to data about himself, in
      order to verify its accuracy and to determine how it is being
      used.
   Security:  Those holding data about others must take steps to
      protect its confidentiality.

   The OECD guidelines, as well as more recent ones like the Madrid
   resolution [Madrid] or the Granada Charter of Privacy in a Digital
   World [Granada], provide a useful understanding of how to provide
   privacy protection, but these guidelines quite naturally stay at a
   higher level.  As such, they do not aim to evaluate the tradeoffs in
   addressing privacy protection in the different stages of the
   development process, as illustrated in Figure 1.

   US regulatory and self-regulatory efforts supported by the Federal
   Trade Commission (FTC) have focused on a subset of these principles,
   namely notice, choice, access, and security, rather than on
   minimizing data collection or limiting use.  Hence, they are
   sometimes labeled the "notice and choice" approach to privacy.  From
   a practical point of view it became evident that companies are
   reluctant to stop collecting and using data, but individuals expect
   to remain in control of its usage.  Today, the effectiveness of the
   "notice and choice" approach in dealing with privacy violations is
   heavily criticized [limits].

   Among these considerations (although often implicit) are assumptions
   about how information is exchanged between different parties; for
   certain protocols this information may help to identify entities,
   and potentially the humans behind them.  Without doubt, not all
   exchanged information is equally sensitive.  The terms 'personal
   data' [DPD95] and Personally Identifiable Information (PII)
   [SP800-122] have become common language in the vocabulary of privacy
   experts.  It therefore seems understandable that regulators around
   the globe have focused on the type of data being exchanged and have
   provided laws according to the level of sensitivity.  Medical data
   is treated differently in many jurisdictions than blog comments.
   For an initial investigation it is intuitive and helpful to
   determine whether a specific protocol or application may be privacy
   sensitive.  The ever-increasing ability of parties on the Internet
   to collect, aggregate, and reason about information collected from a
   wide range of sources requires further thinking about other
   potentially privacy-sensitive items.  The recent example of browser
   fingerprinting [browser-fingerprinting] shows how many information
   items combined can lead to a privacy threat.

   The following list contains examples of information that may be
   considered personal data:

   o  Name

   o  Address information

   o  Phone numbers, email addresses, SIP/XMPP URIs, other identifiers

   o  IP and MAC addresses or other host-specific persistent
      identifiers that consistently link to a particular person or a
      small, well-defined group of people

   o  Information identifying personally owned property, such as a
      vehicle registration number

   Data minimization means that, first of all, the possibility of
   collecting personal data about others should be minimized.  Next,
   within the remaining possibilities, the collection of personal data
   should be minimized.  Finally, the time for which collected personal
   data is stored should be minimized.

   As stated in [I-D.hansen-privacy-terminology], "If we exclude
   providing misinformation (inaccurate or erroneous information,
   provided usually without conscious effort at misleading, deceiving,
   or persuading one way or another) or disinformation (deliberately
   false or distorted information given out in order to mislead or
   deceive), data minimization is the only generic strategy to enable
   anonymity, since all correct personal data help to identify."

   Early papers from the 1980s on privacy by data minimization already
   deal with anonymity, unlinkability, unobservability, and
   pseudonymity.
   [I-D.hansen-privacy-terminology] provides a compilation of terms.

3.  Scope

   The IETF at large produces specifications that typically fall into
   the following categories:

   o  Process specifications (e.g., the WG shepherding guidelines
      described in RFC 4858 [RFC4858]).  These documents aim to
      document and to improve the work style within the IETF.

   o  Building blocks (e.g., cryptographic algorithms, MIME type
      registrations).  These specifications are meant to be used with
      other protocols in one or several communication paradigms.

   o  Architectural descriptions (for example, on IP-based emergency
      services [I-D.ietf-ecrit-framework] or Internet Mail [RFC5598]).

   o  Best current practices (e.g., Guidance for Authentication,
      Authorization, and Accounting (AAA) Key Management [RFC4962]).

   o  Policy statements (e.g., the IETF Policy on Wiretapping
      [RFC2804]).

   Often, the architectural description is compiled only after
   deployment has long been ongoing, and therefore those who implement
   and those who deploy have to make their own determination of which
   protocols they would like to glue together into a complete system.
   This work style has the advantage that protocol designers are
   encouraged to write their specifications in a flexible way, so that
   they can be used in multiple contexts with different deployment
   scenarios without a huge amount of interdependency between the
   components.  [Tussle] highlights the importance of such an approach
   and [I-D.morris-policy-cons] offers a more detailed discussion.

   This work style has an important consequence for the scope of
   privacy work in the IETF, namely:

   o  the standardization work focuses on those parts where
      interoperability is really essential, rather than describing a
      specific instantiation of an architecture, thereby leaving a lot
      of choices for deployments;
   o  application-internal functionality, such as APIs, and details
      about databases are outside the scope of the IETF;

   o  regulatory requirements of different jurisdictions are not part
      of the IETF work either.

   Here is an example that aims to illustrate the boundaries of the
   IETF work: Imagine a social networking site that allows user
   registration, requires user authentication prior to usage, and
   offers its functionality to Web browser users via HTTP, real-time
   messaging functionality via XMPP, and email notifications.
   Additionally, support for data sharing with other Internet service
   providers is provided by OAuth.

   While HTTP, XMPP, email, and OAuth are IETF specifications, they
   only define what the protocol behavior on the wire looks like.  They
   certainly have an architectural spirit that has enormous impact on
   the protocol mechanisms and the set of specifications that are
   required.  However, IETF specifications would not go into the
   details of how the user has to register, what type of data he has to
   provide to this social networking site, how long transaction data is
   kept, how requirements for lawful intercept are met, how
   authorization policies are designed to let users know more about
   data they share with other Internet services, how the user's data is
   secured against unauthorized access, whether the HTTP communication
   exchange between the browser and the social networking site uses TLS
   or not, what data is uploaded by the user, what the privacy policy
   of the social networking site should look like, etc.

   Another example is the usage of HTTP for the Web.  HTTP is published
   in RFC 2616 and was designed to allow the exchange of arbitrary
   data.  An analysis of potential privacy problems would consider what
   type of data is exchanged, and how this data is stored and
   processed.
   Hence, the analysis for a company's static webpage would differ from
   that for the usage of HTTP to exchange health records.  For a
   protocol designer working on HTTP extensions (such as WebDAV), it
   would therefore be difficult to describe all possible privacy
   considerations, given that the space of possible usages is
   essentially unlimited.

   +--------+
   |Building|-------+
   |Blocks  |       |
   +--------+       |
            +-------v----+
            |            |----+
            |Architecture|    |
            +------------+    |
                          +---v--+
                          |System|--------+
                          |Design|        |
                          +------+        |
                                  +-------v------+
                                  |              |------+
                                  |Implementation|      |
                                  +--------------+      |
                                                   +----v-----+
                                                   |          |
                                                   |Deployment|
                                                   +----------+

                     Figure 1: Development Process

   Figure 1 shows a typical development process.  IETF work often
   starts with identifying building blocks that can then be used in
   different architectural variants useful for a wide range of usage
   scenarios.  Before implementation activities start, a software
   architect needs to evaluate which components to integrate, how to
   provide proper performance characteristics, etc.  Finally, the
   implemented work needs to be deployed.  Privacy considerations play
   a role along the entire process.

   To pick an example from the security field, consider the NIST
   Framework for Designing Cryptographic Key Management Systems, NIST
   SP 800-130 [SP800-130].  SP 800-130 provides a number of
   recommendations that can be addressed largely during the system
   design phase as well as in the implementation phase of product
   development.  The cryptographic building blocks and the underlying
   architecture are assumed to be sound.  Even with well-designed
   cryptographic components there are plenty of possibilities to
   introduce security vulnerabilities in the later stages of the
   development cycle.

   Similar to the work on security, the impact of work in standards
   developing organizations is limited.
   Nevertheless, discussing potential privacy problems and considering
   privacy in the design of an IETF protocol can offer system
   architects and those deploying systems additional insights.  The
   rest of this document focuses on illustrating how protocol designers
   can consider privacy in their design decisions, as they do factors
   like security, congestion control, scalability, operations and
   management, etc.

4.  Threat Model

   To consider privacy in protocol design, it is useful to think about
   the overall communication architecture and what the different actors
   could do.  This analysis is similar to a threat analysis found in
   the security considerations sections of IETF documents.  See also
   RFC 4101 [RFC4101] for an illustration of how to write protocol
   models.  In Figure 2 we show a communication model found in many of
   today's protocols, where a sender wants to establish communication
   with some recipient and thereby uses some form of intermediary
   (referred to as a relay in Figure 2).  In some cases this
   intermediary stays in the communication path for the entire duration
   of the communication, and sometimes it is only used for
   communication establishment, for either inbound or outbound
   communication.  In rare cases there may even be a series of relays
   that are traversed.

                                          +-----------+
                                          |           |
                                      _-> | Recipient |
                                    ,'    |           |
                                  ,'      +-----------+
   +--------+      )-------(    ,'        +-----------+
   |        |      |       | <-'          |           |
   | Sender |<---->| Relay |<------------>| Recipient |
   |        |      |       | <-.          |           |
   +--------+      )-------(    `.        +-----------+
       ^                          `.      +-----------+
       :                            `.    |           |
       :                              `-> | Recipient |
       :.................................>|           |
                                          +-----------+

   Legend:

   <....>  End-to-End Communication
   <---->  Hop-by-Hop Communication

        Figure 2: Example Instantiation of Involved Entities

   We can distinguish between three types of adversaries:

   Eavesdropper:  RFC 4949 describes the act of 'eavesdropping' as

         "Passive wiretapping done secretly, i.e., without the
         knowledge of the originator or the intended recipients of the
         communication."

      Eavesdropping is often considered by IETF protocols in the
      context of a security analysis to deal with a range of attacks by
      offering confidentiality protection.

      RFC 3552 provides guidance on how to write security
      considerations for IETF documents and already demands that the
      confidentiality security service be considered.  While IETF
      protocols offer guidance on how to secure communication against
      eavesdroppers, deployments sometimes choose not to enable such
      protection.

   Middleman:  Many protocols developed today show a more complex
      communication pattern than just client-server communication, as
      motivated in Figure 2.  Store-and-forward protocols are examples
      where entities participate in the message delivery even though
      they are not the final recipients.  Often, these intermediaries
      only need to see the small amount of information necessary for
      message routing; security and/or protocol mechanisms should
      ensure that end-to-end information is made inaccessible to these
      entities.  Unfortunately, the difficulty of deploying end-to-end
      security procedures, the additional messaging, the computational
      overhead, and other business or legal requirements often slow
      down or prevent the deployment of these end-to-end security
      mechanisms, giving these intermediaries more exposure to
      communication patterns and communication payloads than necessary.
   Recipient:  It may seem strange to list the recipient as an
      adversary, since the entire purpose of the communication
      interaction is to provide information to it.  However, the degree
      of familiarity and the type of information that needs to be
      shared with such an entity may vary from context to context and
      between application scenarios.  Often enough, the sender has no
      strong familiarity with the other communication endpoint.  While
      it seems advisable to utilize access control before disclosing
      information to such an entity, the reality of Internet
      communication is not so simple.  As such, a sender may still want
      to limit the amount of information disclosed to the recipient,
      and some mutual understanding of how this data is treated may
      need to be created, e.g., how long it is kept (retention) and
      whether re-distribution is permitted.

5.  Guidelines

   A pre-condition for reasoning about the impact of a protocol or an
   architecture is to look at the high-level protocol model, as
   described in [RFC4101].  This step helps to identify the actors and
   their relationships.  The protocol specification (or the set of
   specifications) then allows a deep dive into the data that is
   exchanged.

   The answers to these questions provide insight into the potential
   privacy impact:

   1.  What entities collect and use data?

       1.a:  How many entities collect and use data?

             Note that this question aims to identify what it is
             possible for various entities to inspect (or potentially
             modify).  In architectures with intermediaries, the
             question can be stated as "What data is exposed to
             intermediaries that they do not need to know to do their
             job?"

       1.b:  For each entity, what type of entity is it?
             +  The first-party site or application

             +  Other sites or applications whose data collection and
                use is in some way controlled by the first party

             +  Third parties that may use the data they collect for
                other purposes

   2.  For each entity, think about the relationship between the entity
       and the user.

       2.a:  What is the user's familiarity or degree of relationship
             with the entity in other contexts?

       2.b:  What is the user's reasonable expectation of the entity's
             involvement?

   3.  What data about the user is likely to be collected?

   4.  What is the identification level of the data?  (identified,
       pseudonymous, anonymous; see [I-D.hansen-privacy-terminology])

6.  Example

   This section illustrates how privacy has been dealt with in certain
   IETF protocols.  We describe presence and AAA for network access,
   and will expand the description to other protocols in a future
   version of this draft.

6.1.  Presence

   A presence service, as defined in the abstract in RFC 2778
   [RFC2778], allows users of a communications service to monitor one
   another's availability and disposition in order to make decisions
   about communicating.  Presence information is highly dynamic, and
   generally characterizes whether a user is online or offline, busy or
   idle, away from communications devices or nearby, and the like.
   Necessarily, this information has certain privacy implications, and
   from the start the IETF approached this work with the aim of
   providing users with the controls to determine how their presence
   information would be shared.  The Common Profile for Presence (CPP)
   [RFC3859] defines a set of logical operations for delivery of
   presence information.  This abstract model is applicable to multiple
   presence systems.
   The SIP-based SIMPLE presence system [RFC3261] uses CPP as its
   baseline architecture, and the presence operations in the Extensible
   Messaging and Presence Protocol (XMPP) have also been mapped to CPP
   [RFC3922].

   SIMPLE [RFC3261], the application of the Session Initiation Protocol
   (SIP) to instant messaging and presence, has native support for
   subscriptions and notifications (with its event framework [RFC3265])
   and has added an event package [RFC3856] for presence in order to
   satisfy the requirements of CPP.  Other event packages were defined
   later to allow additional information to be exchanged.  With the
   help of the PUBLISH method [RFC3903], clients are able to install
   presence information on a server, so that the server can apply
   access-control policies before sharing presence information with
   other entities.  The integration of an explicit authorization
   mechanism into the presence architecture has been a major
   improvement in terms of involving the end users in the decision-
   making process before sharing information.  Nearly all presence
   systems deployed today provide such a mechanism, typically through a
   reciprocal authorization system by which a pair of users, when they
   agree to be "buddies," consent to divulge their presence information
   to one another.

   One important extension for presence was to enable the support for
   location sharing.  With the desire to standardize protocols for
   systems sharing geolocation, IETF work was started in the GEOPRIV
   working group.  During the initial requirements and privacy threat
   analysis in the process of chartering the working group, it became
   clear that the system would require an underlying communication
   mechanism supporting user consent to share location information.
   The resemblance of these requirements to the presence framework was
   quickly recognized, and this design decision was documented in RFC
   4079 [RFC4079].
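   To give a sense of the kind of data at stake, the following is a
   minimal, non-normative presence document in the Presence Information
   Data Format (PIDF, RFC 3863); the user identity and tuple id are
   hypothetical:

   ```xml
   <?xml version="1.0" encoding="UTF-8"?>
   <!-- Hypothetical example: the user is online and reachable via SIP -->
   <presence xmlns="urn:ietf:params:xml:ns:pidf"
             entity="pres:alice@example.com">
     <tuple id="sg89ae">
       <status>
         <basic>open</basic>
       </status>
       <contact>sip:alice@example.com</contact>
     </tuple>
   </presence>
   ```

   Even this small document reveals an identity, availability, and a
   contact address, which is why the authorization mechanisms described
   above matter.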
   While presence systems exerted influence on location privacy, the
   location privacy work also influenced ongoing IETF work on
   presence by triggering the standardization of a general access
   control policy language called the Common Policy framework
   (defined in RFC 4745 [RFC4745]).  This language allows one to
   express ways to control the distribution of information as simple
   conditions, actions, and transformation rules expressed in an XML
   format.  Common Policy itself is an abstract format which needs to
   be instantiated; two examples can be found in the Presence
   Authorization Rules [RFC5025] and the Geolocation Policy
   [I-D.ietf-geopriv-policy].  The former provides additional
   expressiveness for presence-based systems, while the latter
   defines syntax and semantics for location-based conditions and
   transformations.

   As a component of the prior work on the presence architecture, a
   format for presence information, called the Presence Information
   Data Format (PIDF), had been developed.  For the purpose of
   conveying location information, an extension was developed, the
   PIDF Location Object (PIDF-LO).  With the aim of meeting the
   privacy requirements defined in RFC 2779 [RFC2779], a set of usage
   indications (such as whether retransmission is allowed or when the
   retention period expires) was added in the form of policies that
   always travel with the location information itself.  We believe
   that the standardization of these meta-rules that travel with
   location information has been a unique contribution to privacy on
   the Internet, recognizing the need for users to express their
   preferences when information travels through the Internet, from
   website to website.  This approach very much follows the spirit of
   Creative Commons [CC], namely the usage of a limited number of
   conditions (such as 'Share Alike' [CC-SA]).
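   A small sketch may help illustrate how a receiving entity could
   honor such meta-rules.  This is a hypothetical illustration, not
   the normative PIDF-LO XML schema; the field names
   (`retransmission_allowed`, `retention_expires`) merely mirror the
   kinds of usage indications described above.

```python
# Hypothetical sketch: a receiver checks the usage rules that travel
# with a location object before retransmitting it.  Field names are
# illustrative, not the normative PIDF-LO element names.
from datetime import datetime, timezone

def may_retransmit(location_object, now=None):
    """Return True only if the sender permitted retransmission and
    the retention period has not yet expired."""
    now = now or datetime.now(timezone.utc)
    rules = location_object["usage_rules"]
    if not rules["retransmission_allowed"]:
        return False
    return now < rules["retention_expires"]

# Example location object carrying its own meta-rules:
loc = {
    "coordinates": (48.137, 11.575),
    "usage_rules": {
        "retransmission_allowed": True,
        "retention_expires": datetime(2011, 1, 1, tzinfo=timezone.utc),
    },
}
```

   The key design point is that the rules are part of the object
   itself, so every hop that receives the location also receives the
   user's stated preferences.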
   Unlike Creative Commons, however, the GEOPRIV working group did
   not initiate work to produce legal language or to design graphical
   icons, since this would fall outside the scope of the IETF.  In
   particular, the GEOPRIV rules state a preference on the retention
   and retransmission of location information; while GEOPRIV cannot
   force any entity receiving a PIDF-LO object to abide by those
   preferences, if users lack the ability to express them at all, it
   is guaranteed that their preferences will not be honored.

   While these retention and retransmission meta-data elements could
   have been devised to accompany information elements in other IETF
   protocols, the decision was made to introduce these elements for
   geolocation initially because of the sensitivity of location
   information.

   The GEOPRIV working group has since decided to clarify the
   architecture, making it more accessible to those outside the IETF
   and providing a more generic description applicable beyond the
   context of presence; [I-D.ietf-geopriv-arch] is the
   work-in-progress writeup.

6.2.  AAA for Network Access

   At a high level, AAA for network access uses the communication
   model shown in Figure 3.  When an end host requests access to the
   network, it has to interact with a Network Access Server (NAS)
   using some front-end protocol (often at the link layer, such as
   IEEE 802.1X).  When asked by the NAS, the end host presents a
   Network Access Identifier (NAI), an email-like identifier that
   consists of a username and a domain part.  This NAI is then used
   to discover the AAA server authorized for the user's domain, and
   an initial access request is forwarded to it.
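   The routing step can be sketched as follows (illustrative only;
   RFC 4282 defines the actual NAI syntax, which has more structure
   than shown here).  The point is that intermediaries only need the
   realm portion to route the request; the username is irrelevant to
   routing:

```python
# Hypothetical sketch: splitting a "username@realm" NAI so that only
# the realm is used for AAA routing.  Real NAI syntax (RFC 4282) is
# richer than this simple split.

def split_nai(nai):
    """Return (username, realm), splitting at the last '@'."""
    username, sep, realm = nai.rpartition("@")
    if not sep or not username or not realm:
        raise ValueError("not a username@realm NAI: %r" % nai)
    return username, realm

def route_key(nai):
    """What an AAA proxy actually needs to see: the realm alone."""
    return split_nai(nai)[1]
```

   This separation is what makes the identity-hiding techniques
   discussed below possible at all: routing works even if the
   username part carries no identifying information.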
   To deal with various security, accounting, and fraud prevention
   aspects, an end-to-end authentication procedure is run between the
   end host (the peer) and a separate component within the AAA server
   (the server) using the Extensible Authentication Protocol (EAP).
   After a successful authentication protocol exchange, the user may
   be authorized to access the network, and keying material is
   provided to the NAS to enable link layer security over the air
   interface.

   From a privacy point of view, the entities participating in this
   ecosystem are the user, an end host, the NAS, a range of different
   intermediaries, and the AAA server.  The user will most likely
   have some form of contractual relationship with the entity
   operating the AAA server, since credential provisioning had to
   happen at some point, but in certain deployments, like coffee
   shops, this is not guaranteed.  In many deployments, during this
   initial registration process the subscriber is provided with
   credentials after showing some form of identification (e.g., a
   passport), and consequently the NAI together with the credentials
   can be linked to a specific subscriber, often a single person.

   The username part of the NAI is data that the end host provides
   during network access authentication but that intermediaries do
   not need in order to fulfill their role in AAA message routing.
   Hiding the user's identity is, as discussed in RFC 4282 [RFC4282],
   possible only when NAIs are used together with a separate
   authentication method that can transfer the username in a secure
   manner.  Such EAP methods have been designed, and requirements for
   offering such functionality have become recommended design
   criteria; see [RFC4017].

   More than just identity information is exchanged during network
   access authentication.
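   The identity-hiding technique mentioned above, in which a tunneled
   EAP method carries the real username only inside a protected
   channel, can be sketched as follows.  This is a hypothetical
   helper, not a normative mechanism; using "anonymous" as the outer
   username is one common convention, not a requirement:

```python
# Hypothetical sketch: constructing an anonymous outer identity.
# Intermediaries see only "anonymous@realm"; the real username is
# revealed only inside the protected tunnel to the home AAA server.

def outer_identity(real_nai):
    """Keep the realm (needed for routing toward the home AAA
    server), drop the username (not needed by intermediaries)."""
    realm = real_nai.rpartition("@")[2]
    return "anonymous@" + realm

# What the access network and AAA proxies observe:
exposed = outer_identity("alice@example.net")
```

   The NAS and the AAA intermediaries can still route on the realm,
   while only the home server, at the far end of the authenticated
   exchange, learns which subscriber is attaching.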
   The NAS provides information about the user's point of attachment
   to the AAA server, and the AAA server in response returns data
   related to the authorization decision.  While the need to exchange
   data is motivated by the service usage itself, there are still a
   number of questions that could be asked, such as:

   o  What mechanisms can be utilized to offer users ways to
      authorize sharing of information (considering that the ability
      for protocol interaction is limited without successful network
      access connectivity)?

   o  What are the best current practices for privacy-sensitive
      operation of intermediaries?  Since end hosts do not interact
      with intermediaries explicitly, and users have no relationship
      with those who operate them, it is quite likely that their
      practices are less widely known.

   o  Are there alternative approaches to trust establishment between
      the NAS and the AAA server so that the involvement of
      intermediaries can be limited or avoided?

                         +--------------+
                         |  AAA Server  |
                         +-^----------^-+
                          *  EAP     |  RADIUS/
                         *           |  Diameter
                       --v-----------v--
                    ///                 \\\
                  //    AAA Proxies,      \\     ***
                 |      Relays, and         |    back-
                 |      Redirect Agents     |    end
                  \\                      //     ***
                    \\\                 ///
                       --^-----------^--
              *  EAP                 |  RADIUS/
             *                       |  Diameter
   +----------+        Data        +-v-----------v--+
   |          |<------------------>|                |
   | End Host |   EAP/EAP Method   | Network Access |
   |          |<******************>|     Server     |
   +----------+                    +----------------+
                  ***  front-end  ***

   Legend:

     <****>: End-to-end exchange
     <---->: Hop-by-hop exchange

        Figure 3: Network Access Authentication Architecture

7.  Security Considerations

   This document describes aspects that a protocol designer should
   consider in the area of privacy, in addition to the regular
   security analysis.

8.  IANA Considerations

   This document does not require actions by IANA.

9.  Acknowledgements

   Add your name here.

10.  References

10.1.  Normative References

   [I-D.hansen-privacy-terminology]
              Pfitzmann, A., Hansen, M., and H. Tschofenig,
              "Terminology for Talking about Privacy by Data
              Minimization: Anonymity, Unlinkability,
              Undetectability, Unobservability, Pseudonymity, and
              Identity Management",
              draft-hansen-privacy-terminology-01 (work in progress),
              August 2010.

   [OECD]     Organization for Economic Co-operation and Development,
              "OECD Guidelines on the Protection of Privacy and
              Transborder Flows of Personal Data", 1980.  Available
              (September 2010) at http://www.oecd.org/EN/document/
              0,,EN-document-0-nodirectorate-no-24-10255-0,00.html.

10.2.  Informative References

   [Altman]   Altman, I., "The Environment and Social Behavior:
              Privacy, Personal Space, Territory, Crowding",
              Brooks/Cole, 1975.

   [CC]       "Creative Commons", June 2010.

   [CC-SA]    "Creative Commons - Licenses", June 2010.

   [CTIA]     CTIA, "Best Practices and Guidelines for Location-Based
              Services", March 2010.

   [DPD95]    European Commission, "Directive 95/46/EC of the
              European Parliament and of the Council of 24 October
              1995 on the protection of individuals with regard to
              the processing of personal data and on the free
              movement of such data", Official Journal L 281,
              23/11/1995, P. 0031 - 0050, November 1995.

   [EFF-Privacy]
              Blumberg, A. and P. Eckersley, "On Locational Privacy,
              and How to Avoid Losing it Forever", August 2009.

   [Granada]  International Working Group on Data Protection in
              Telecommunications, "The Granada Charter of Privacy in
              a Digital World", Granada (Spain), April 2010.

   [I-D.ietf-ecrit-framework]
              Rosen, B., Schulzrinne, H., Polk, J., and A. Newton,
              "Framework for Emergency Calling using Internet
              Multimedia", draft-ietf-ecrit-framework-11 (work in
              progress), July 2010.

   [I-D.ietf-geopriv-arch]
              Barnes, R., Lepinski, M., Cooper, A., Morris, J.,
              Tschofenig, H., and H.
              Schulzrinne, "An Architecture for Location and Location
              Privacy in Internet Applications",
              draft-ietf-geopriv-arch-03 (work in progress),
              October 2010.

   [I-D.ietf-geopriv-policy]
              Schulzrinne, H., Tschofenig, H., Morris, J., Cuellar,
              J., and J. Polk, "Geolocation Policy: A Document Format
              for Expressing Privacy Preferences for Location
              Information", draft-ietf-geopriv-policy-21 (work in
              progress), January 2010.

   [I-D.morris-policy-cons]
              Morris, J., Aboba, B., Peterson, J., and H. Tschofenig,
              "Public Policy Considerations for Internet Protocols",
              draft-morris-policy-cons-00 (work in progress),
              October 2010.

   [Madrid]   Data Protection Authorities and Privacy Regulators,
              "The Madrid Resolution: International Standards on the
              Protection of Personal Data and Privacy", 31st
              International Conference of Data Protection and Privacy
              Commissioners, November 2009.

   [RFC2778]  Day, M., Rosenberg, J., and H. Sugano, "A Model for
              Presence and Instant Messaging", RFC 2778,
              February 2000.

   [RFC2779]  Day, M., Aggarwal, S., Mohr, G., and J. Vincent,
              "Instant Messaging / Presence Protocol Requirements",
              RFC 2779, February 2000.

   [RFC2804]  IAB and IESG, "IETF Policy on Wiretapping", RFC 2804,
              May 2000.

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G.,
              Johnston, A., Peterson, J., Sparks, R., Handley, M.,
              and E. Schooler, "SIP: Session Initiation Protocol",
              RFC 3261, June 2002.

   [RFC3265]  Roach, A., "Session Initiation Protocol (SIP)-Specific
              Event Notification", RFC 3265, June 2002.

   [RFC3856]  Rosenberg, J., "A Presence Event Package for the
              Session Initiation Protocol (SIP)", RFC 3856,
              August 2004.

   [RFC3859]  Peterson, J., "Common Profile for Presence (CPP)",
              RFC 3859, August 2004.

   [RFC3903]  Niemi, A., "Session Initiation Protocol (SIP) Extension
              for Event State Publication", RFC 3903, October 2004.
   [RFC3922]  Saint-Andre, P., "Mapping the Extensible Messaging and
              Presence Protocol (XMPP) to Common Presence and Instant
              Messaging (CPIM)", RFC 3922, October 2004.

   [RFC4017]  Stanley, D., Walker, J., and B. Aboba, "Extensible
              Authentication Protocol (EAP) Method Requirements for
              Wireless LANs", RFC 4017, March 2005.

   [RFC4079]  Peterson, J., "A Presence Architecture for the
              Distribution of GEOPRIV Location Objects", RFC 4079,
              July 2005.

   [RFC4101]  Rescorla, E. and IAB, "Writing Protocol Models",
              RFC 4101, June 2005.

   [RFC4282]  Aboba, B., Beadles, M., Arkko, J., and P. Eronen, "The
              Network Access Identifier", RFC 4282, December 2005.

   [RFC4745]  Schulzrinne, H., Tschofenig, H., Morris, J., Cuellar,
              J., Polk, J., and J. Rosenberg, "Common Policy: A
              Document Format for Expressing Privacy Preferences",
              RFC 4745, February 2007.

   [RFC4858]  Levkowetz, H., Meyer, D., Eggert, L., and A. Mankin,
              "Document Shepherding from Working Group Last Call to
              Publication", RFC 4858, May 2007.

   [RFC4962]  Housley, R. and B. Aboba, "Guidance for Authentication,
              Authorization, and Accounting (AAA) Key Management",
              BCP 132, RFC 4962, July 2007.

   [RFC5025]  Rosenberg, J., "Presence Authorization Rules",
              RFC 5025, December 2007.

   [RFC5598]  Crocker, D., "Internet Mail Architecture", RFC 5598,
              July 2009.

   [SP800-122]
              McCallister, E., Grance, T., and K. Scarfone, "Guide to
              Protecting the Confidentiality of Personally
              Identifiable Information (PII)", NIST Special
              Publication (SP) 800-122, April 2010.

   [SP800-130]
              Barker, E., Branstad, D., Chokhani, S., and M. Smid,
              "DRAFT: A Framework for Designing Cryptographic Key
              Management Systems", NIST Special Publication (SP)
              800-130, June 2010.

   [Tussle]   Clark, D., Wroclawski, J., Sollins, K., and R. Braden,
              "Tussle in Cyberspace: Defining Tomorrow's Internet",
              In Proc.
              ACM SIGCOMM,
              http://www.acm.org/sigcomm/sigcomm2002/papers/
              tussle.html, 2002.

   [Warren]   Warren, S. and L. Brandeis, "The Right to Privacy",
              Harvard Law Review, Vol. 4, No. 5, 1890.

   [Westin]   Westin, A., "Privacy and Freedom", Atheneum, New York,
              1967.

   [browser-fingerprinting]
              Eckersley, P., "How Unique Is Your Browser?", Privacy
              Enhancing Technologies Symposium (PETS 2010), Springer
              Lecture Notes in Computer Science, 2010.

   [limits]   Cate, F., "The Limits of Notice and Choice", IEEE
              Security and Privacy, pp. 59-62, November 2005.

Authors' Addresses

   Bernard Aboba
   Microsoft Corporation
   One Microsoft Way
   Redmond, WA 98052
   US

   Email: bernarda@microsoft.com

   John B. Morris, Jr.
   Center for Democracy and Technology
   1634 I Street NW, Suite 1100
   Washington, DC 20006
   USA

   Email: jmorris@cdt.org
   URI:   http://www.cdt.org

   Jon Peterson
   NeuStar, Inc.
   1800 Sutter St Suite 570
   Concord, CA 94520
   US

   Email: jon.peterson@neustar.biz

   Hannes Tschofenig
   Nokia Siemens Networks
   Linnoitustie 6
   Espoo 02600
   Finland

   Phone: +358 (50) 4871445
   Email: Hannes.Tschofenig@gmx.net
   URI:   http://www.tschofenig.priv.at