Network Working Group                                           B. Aboba
Internet-Draft                                     Microsoft Corporation
Intended status: Informational                                 J. Morris
Expires: May 12, 2011                                                CDT
                                                             J. Peterson
                                                           NeuStar, Inc.
                                                           H. Tschofenig
                                                  Nokia Siemens Networks
                                                        November 8, 2010

             Privacy Considerations for Internet Protocols
                draft-morris-privacy-considerations-02.txt

Abstract

   This document aims to make protocol designers aware of privacy-
   related design choices and offers guidance for developing privacy
   considerations for IETF documents.  While specifications cannot
   police the implementation community, protocol architects nonetheless
   must play a role in the improvement of privacy, both by making a
   conscious decision to design for privacy and by documenting privacy
   risks in protocol designs.

   This document is discussed on the Internet Privacy Discussion mailing
   list (see https://www.ietf.org/mailman/listinfo/ietf-privacy).

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 12, 2011.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Historical Background
   3.  Scope
   4.  Threat Model
   5.  Guidelines
   6.  Example
     6.1.  Presence
     6.2.  AAA for Network Access
   7.  Security Considerations
   8.  IANA Considerations
   9.  Acknowledgements
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses

1.  Introduction

   The IETF produces specifications that aim to make the Internet
   better.  Those specifications fall into a number of different
   categories, including protocol specifications, best current practice
   descriptions, and architectural documentation.  While IETF documents
   are typically implementation-agnostic, they are often, if not always,
   impacted by fundamental architectural design decisions.  These design
   decisions in turn hinge on technical aspects, predictions about
   deployment incentives, operational considerations, legal concerns,
   security frameworks, and so on.
   This document aims to make protocol designers aware of privacy-
   related design choices and offers guidance for developing privacy
   considerations for IETF documents.  While specifications cannot
   police the implementation community, protocol architects nonetheless
   must play a role in the improvement of privacy, both by making a
   conscious decision to design for privacy and by documenting privacy
   risks in protocol designs.  While we discuss the limitations of
   standards activities in Section 3, we maintain that the IETF
   community, in its mandate to "make the Internet better", has a role
   to play in making its specifications, and the Internet, more privacy
   friendly.  This must spring from awareness of how design decisions
   impact privacy, and must be reflected in both protocol design and in
   the documentation of potential privacy challenges in the deployment
   of a single protocol or an entire suite of protocols.

   From the activities in the industry, one can observe three schools of
   thought in the work on privacy, namely:

   Privacy by Technology:

      This approach considers the assurance of privacy in the design of
      a protocol as a technical problem.  For example, the design of a
      specific application may heighten privacy by sharing fewer data
      items with other parties (i.e., data minimization).  Limiting data
      sharing also avoids the need to evaluate how data-related consent
      is obtained, to define policies around how to protect data, etc.
      Ultimately, different architectural designs will lead to different
      results with respect to privacy.

      Examples in this area for location privacy can be found in
      [EFF-Privacy].  These solutions often make heavy use of
      cryptographic techniques, such as threshold cryptography and
      secret sharing schemes.

   Privacy by Policy:

      In this approach, privacy protection happens through establishing
      the consent of the user to a set of privacy policies.
      Hence, protection of the user's privacy is largely the
      responsibility of the company collecting, processing, and storing
      personal data.  Notices and choices are offered to the customer
      and backed up by an appropriate legal framework.

      An example of this approach for the privacy of location-based
      services is the recent publication by CTIA [CTIA].

   Policy/Technology Hybrid:

      This approach targets a middle ground where some privacy-enhancing
      features can be provided by technology, and made attractive to
      implementers (via explicit best current practices for
      implementation, configuration and deployment, or by raising
      awareness implicitly via a discussion about privacy in technical
      specifications), but other aspects can only be provided and
      enforced by the parties who control the deployment.  Deployments
      often base their decisions on the existence of a plausible legal
      framework.

   The authors believe that the policy/technology hybrid approach is the
   most practical one, and therefore propose that privacy considerations
   within the IETF follow its principles.

   The remainder of this document is structured as follows: First, we
   provide a brief introduction to the concept of privacy in Section 2.
   In Section 3, we illustrate what is in scope for the IETF and where
   the responsibility of the IETF ends.  In Section 4, we discuss the
   main threat model for privacy considerations.  In Section 5, we
   propose guidelines for documenting privacy within IETF
   specifications, and in Section 6 we examine the privacy
   characteristics of a few exemplary IETF protocols and explain what
   privacy features have been provided to date.

2.  Historical Background

   The "right to be let alone" is a phrase coined by Warren and Brandeis
   in their seminal Harvard Law Review article on privacy [Warren].
   They were the first scholars to recognize that a right to privacy had
   evolved in the 19th century to embrace not only physical privacy but
   also a potential "injury of the feelings", which could, for example,
   result from the public disclosure of embarrassing private facts.

   In 1967 Westin [Westin] described privacy as a "personal adjustment
   process" in which individuals balance "the desire for privacy with
   the desire for disclosure and communication" in the context of social
   norms and their environment.  Privacy thus requires that an
   individual has a means to exercise selective control of access to the
   self and is aware of the potential consequences of exercising that
   control [Altman].

   Efforts to define and analyze the privacy concept evolved
   considerably in the 20th century.  In 1975, Altman conceptualized
   privacy as a "boundary regulation process whereby people optimize
   their accessibility along a spectrum of 'openness' and 'closedness'
   depending on context" [Altman].  "Privacy is the claim of
   individuals, groups, or institutions to determine for themselves
   when, how, and to what extent information about them is communicated
   to others.  Viewed in terms of the relation of the individual to
   social participation, privacy is the voluntary and temporary
   withdrawal of a person from the general society through physical or
   psychological means, either in a state of solitude or small-group
   intimacy or, when among larger groups, in a condition of anonymity or
   reserve." [Westin].

   Note: Altman and Westin were referring to nonelectronic environments,
   where privacy intrusion was typically based on fresh information,
   referring to one particular person only, and stemming from traceable
   human sources.  The scope of possible privacy breaches was therefore
   rather limited.
   Today, in contrast, details about an individual's activities are
   typically stored over a longer period of time, collected from many
   different sources, and information about almost every activity in
   life is available electronically.

   In 1980, the Organization for Economic Co-operation and Development
   (OECD) published eight Guidelines on the Protection of Privacy and
   Trans-Border Flows of Personal Data [OECD], which are often referred
   to as Fair Information Practices (FIPs).  Fair information practices
   include the following principles:

   Notice and Consent:  Before the collection of data, the data subject
      should be provided notice of what information is being collected
      and for what purpose, and an opportunity to choose whether to
      accept the data collection and use.  In Europe, data collection
      cannot proceed unless the data subject has unambiguously given his
      consent (with exceptions).

   Collection Limitation:  Data should be collected for specified,
      explicit and legitimate purposes.  The data collected should be
      adequate, relevant and not excessive in relation to the purposes
      for which they are collected.

   Use/Disclosure Limitation:  Data should be used only for the purpose
      for which it was collected and should not be used or disclosed in
      any way incompatible with those purposes.

   Retention Limitation:  Data should be kept in a form that permits
      identification of the data subject no longer than is necessary for
      the purposes for which the data were collected.

   Accuracy:  The party collecting and storing data is obligated to
      ensure its accuracy and, where necessary, keep it up to date;
      every reasonable step must be taken to ensure that data which are
      inaccurate or incomplete are corrected or deleted.

   Access:  A data subject should have access to data about himself, in
      order to verify its accuracy and to determine how it is being
      used.
   Security:  Those holding data about others must take steps to protect
      its confidentiality.

   The OECD guidelines, and also more recent ones like the Madrid
   resolution [Madrid] or the Granada Charter of Privacy in a Digital
   World [Granada], provide a useful understanding of how to provide
   privacy protection, but these guidelines quite naturally stay at a
   high level.  As such, they do not aim to evaluate the tradeoffs in
   addressing privacy protection in the different stages of the
   development process, as illustrated in Figure 1.

   US regulatory and self-regulatory efforts supported by the Federal
   Trade Commission (FTC) have focused on a subset of these principles,
   namely notice, choice, access, and security, rather than minimizing
   data collection or limiting use.  Hence, they are sometimes labeled
   as the "notice and choice" approach to privacy.  From a practical
   point of view it became evident that companies are reluctant to stop
   collecting and using data, but individuals expect to remain in
   control of its usage.  Today, the effectiveness of the "notice and
   choice" approach in dealing with privacy violations is heavily
   criticized [limits].

   Among these considerations (although often implicit) are assumptions
   about how information is exchanged between different parties, and for
   certain protocols this information may help to identify entities, and
   potentially the humans behind them.  Without doubt, the information
   exchanged is not always equal.  The terms 'personal data' [DPD95] and
   Personally Identifiable Information (PII) [SP800-122] have become
   common language in the vocabulary of privacy experts.  It seems
   therefore understandable that regulators around the globe have
   focused on the type of data being exchanged and have provided laws
   according to the level of sensitivity.  Medical data is treated
   differently in many jurisdictions than blog comments.
   For an initial investigation it is intuitive and helpful to determine
   whether a specific protocol or application may be privacy sensitive.
   The ever-increasing ability of parties on the Internet to collect,
   aggregate, and reason about information gathered from a wide range of
   sources calls for further thinking about other potentially privacy-
   sensitive items.  The recent example of browser fingerprinting
   [browser-fingerprinting] shows how a combination of many information
   items can lead to a privacy threat.

   The following list contains examples of information that may be
   considered personal data:

   o  Name

   o  Address information

   o  Phone numbers, email addresses, SIP/XMPP URIs, other identifiers

   o  IP and MAC addresses or other host-specific persistent identifiers
      that consistently link to a particular person or small, well-
      defined group of people

   o  Information identifying personally owned property, such as a
      vehicle registration number

   Data minimization means that, first of all, the possibility to
   collect personal data about others should be minimized.  Next, within
   the remaining possibilities, collecting personal data should be
   minimized.  Finally, the time for which collected personal data is
   stored should be minimized.

   As stated in [I-D.hansen-privacy-terminology], "If we exclude
   providing misinformation (inaccurate or erroneous information,
   provided usually without conscious effort at misleading, deceiving,
   or persuading one way or another) or disinformation (deliberately
   false or distorted information given out in order to mislead or
   deceive), data minimization is the only generic strategy to enable
   anonymity, since all correct personal data help to identify."

   Early papers from the 1980s about privacy by data minimization
   already deal with anonymity, unlinkability, unobservability, and
   pseudonymity.
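   The three-step data-minimization principle described above can be
   sketched in code.  This is an illustrative sketch only; the purpose
   names, field whitelist, and retention periods are invented for the
   example and do not come from any IETF specification:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical per-purpose whitelists and retention periods; the field
# and purpose names are illustrative, not from any specification.
ALLOWED_FIELDS = {
    "shipping": {"name", "address"},
    "newsletter": {"email"},
}
RETENTION = {"shipping": timedelta(days=90),
             "newsletter": timedelta(days=365)}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the personal-data items needed for the stated purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS[purpose]}

def expired(collected_at: datetime, purpose: str) -> bool:
    """Storage-time minimization: data past its retention period must go."""
    return datetime.now(timezone.utc) - collected_at > RETENTION[purpose]

record = {"name": "A. User", "address": "...", "email": "a@example.com",
          "mac": "00:11:22:33:44:55"}
print(minimize(record, "shipping"))  # only name and address survive
```

   Note that the MAC address never leaves the host in this sketch: the
   first minimization step is simply not to make the data collectible.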
   [I-D.hansen-privacy-terminology] provides a compilation of terms.

3.  Scope

   The IETF at large produces specifications that typically fall into
   the following categories:

   o  Process specifications (e.g., the WG shepherding guidelines
      described in RFC 4858 [RFC4858]).  These documents aim to document
      and to improve the work style within the IETF.

   o  Building blocks (e.g., cryptographic algorithms, MIME type
      registrations).  These specifications are meant to be used with
      other protocols in one or several communication paradigms.

   o  Architectural descriptions (for example, on IP-based emergency
      services [I-D.ietf-ecrit-framework] or Internet Mail [RFC5598]).

   o  Best current practices (e.g., Guidance for Authentication,
      Authorization, and Accounting (AAA) Key Management [RFC4962]).

   o  Policy statements (e.g., the IETF Policy on Wiretapping
      [RFC2804]).

   Often, the architectural description is compiled some time after
   deployment has long been ongoing, and therefore those who implement
   and those who deploy have to make their own determination of which
   protocols they would like to glue together into a complete system.
   This work style has the advantage that protocol designers are
   encouraged to write their specifications in a flexible way, so that
   they can be used in multiple contexts with different deployment
   scenarios without a huge amount of interdependency between the
   components.  [Tussle] highlights the importance of such an approach
   and [I-D.morris-policy-cons] offers a more detailed discussion.

   This work style has an important consequence for the scope of privacy
   work in the IETF, namely:

   o  the standardization work focuses on those parts where
      interoperability is really essential, rather than describing a
      specific instantiation of an architecture, thereby leaving a lot
      of choices to deployments;
   o  application-internal functionality, such as APIs, and details
      about databases are outside the scope of the IETF;

   o  regulatory requirements of different jurisdictions are not part of
      the IETF work either.

   Here is an example that aims to illustrate the boundaries of the IETF
   work: Imagine a social networking site that allows user registration,
   requires user authentication prior to usage, and offers its
   functionality to Web browser users via HTTP, real-time messaging
   functionality via XMPP, and email notifications.  Additionally,
   support for data sharing with other Internet service providers is
   provided by OAuth.

   While HTTP, XMPP, email, and OAuth are IETF specifications, they only
   define what the protocol behavior on the wire looks like.  They
   certainly have an architectural spirit that has enormous impact on
   the protocol mechanisms and the set of specifications that are
   required.  However, IETF specifications would not go into details of
   how the user has to register, what type of data he has to provide to
   this social networking site, how long transaction data is kept, how
   requirements for lawful intercept are met, how authorization policies
   are designed to let users know more about data they share with other
   Internet services, how the user's data is secured against
   unauthorized access, whether the HTTP communication exchange between
   the browser and the social networking site uses TLS or not, what data
   is uploaded by the user, how the privacy policy of the social
   networking site should look, etc.

   Another example is the usage of HTTP for the Web.  HTTP is published
   in RFC 2616 and was designed to allow the exchange of arbitrary data.
   An analysis of potential privacy problems would consider what type of
   data is exchanged and how this data is stored and processed.
   Hence, the analysis for a static webpage offered by a company would
   differ from the usage of HTTP for exchanging health records.  For a
   protocol designer working on HTTP extensions (such as WebDAV), it
   would therefore be difficult to describe all possible privacy
   considerations, given that the space of possible usage is essentially
   unlimited.

      +--------+
      |Building|-------+
      |Blocks  |       |
      +--------+       |
                +------v-----+
                |            |----+
                |Architecture|    |
                +------------+    |
                               +---v--+
                               |System|--------+
                               |Design|        |
                               +------+        |
                                        +-------v------+
                                        |              |------+
                                        |Implementation|      |
                                        +--------------+      |
                                                        +-----v----+
                                                        |          |
                                                        |Deployment|
                                                        +----------+

                      Figure 1: Development Process

   Figure 1 shows a typical development process.  IETF work often starts
   with identifying building blocks that can then be used in different
   architectural variants useful for a wide range of usage scenarios.
   Before implementation activities start, a software architect needs to
   evaluate which components to integrate, how to provide proper
   performance characteristics, etc.  Finally, the implemented work
   needs to be deployed.  Privacy considerations play a role along the
   entire process.

   To pick an example from the security field, consider the NIST
   Framework for Designing Cryptographic Key Management Systems
   [SP800-130], NIST SP 800-130.  SP 800-130 provides a number of
   recommendations that can be addressed largely during the system
   design phase as well as in the implementation phase of product
   development.  The cryptographic building blocks and the underlying
   architecture are assumed to be sound.  Even with well-designed
   cryptographic components there are plenty of possibilities to
   introduce security vulnerabilities in the later stages of the
   development cycle.

   Similar to the work on security, the impact of work in standards
   developing organizations is limited.
   Nevertheless, discussing potential privacy problems and considering
   privacy in the design of an IETF protocol can offer system architects
   and those deploying systems additional insights.  The rest of this
   document is focused on illustrating how protocol designers can
   consider privacy in their design decisions, as they do factors like
   security, congestion control, scalability, operations and management,
   etc.

4.  Threat Model

   To consider privacy in protocol design it is useful to think about
   the overall communication architecture and what the different actors
   could do.  This analysis is similar to a threat analysis found in the
   security considerations sections of IETF documents.  See also RFC
   4101 [RFC4101] for an illustration of how to write protocol models.
   In Figure 2 we show a communication model found in many of today's
   protocols, where a sender wants to establish communication with some
   recipient and thereby uses some form of intermediary (referred to as
   a relay in Figure 2).  In some cases this intermediary stays in the
   communication path for the entire duration of the communication, and
   sometimes it is only used for communication establishment, for either
   inbound or outbound communication.  In rare cases there may even be a
   series of relays that are traversed.

                                          +-----------+
                                          |           |
                                         >| Recipient |
                                        / |           |
                                      ,'  +-----------+
     +--------+      )-------(      ,'    +-----------+
     |        |      |       |     -      |           |
     | Sender |<---->| Relay |<---------->| Recipient |
     |        |      |       |`.          |           |
     +--------+      )-------(  \         +-----------+
         ^                       `.       +-----------+
         :                         \      |           |
         :                          `--->>| Recipient |
         :...............................>|           |
                                          +-----------+

   Legend:

   <....>  End-to-End Communication
   <---->  Hop-by-Hop Communication

          Figure 2: Example Instantiation of Involved Entities

   We can distinguish between three types of adversaries:

   Eavesdropper:  RFC 4949 describes the act of 'eavesdropping' as

         "Passive wiretapping done secretly, i.e., without the knowledge
         of the originator or the intended recipients of the
         communication."

      Eavesdropping is often considered by IETF protocols in the context
      of a security analysis in order to deal with a range of attacks by
      offering confidentiality protection.

      RFC 3552 provides guidance on how to write security considerations
      for IETF documents and already demands that the confidentiality
      security service be considered.  While IETF protocols offer
      guidance on how to secure communication against eavesdroppers,
      deployments sometimes choose not to enable it.

   Middleman:  Many protocols developed today show a more complex
      communication pattern than just client-server communication, as
      motivated by Figure 2.  Store-and-forward protocols are examples
      where entities participate in the message delivery even though
      they are not the final recipients.  Often, these intermediaries
      only need to see the small amount of information necessary for
      message routing, and security and/or protocol mechanisms should
      ensure that end-to-end information is made inaccessible to these
      entities.  Unfortunately, the difficulty of deploying end-to-end
      security procedures, the additional messaging, the computational
      overhead, and other business / legal requirements often slow down
      or prevent the deployment of these end-to-end security mechanisms,
      giving these intermediaries more exposure to communication
      patterns and communication payloads than necessary.
   Recipient:  It may seem strange to list the recipient as an adversary
      since the entire purpose of the communication interaction is to
      provide information to it.  However, the degree of familiarity and
      the type of information that needs to be shared with such an
      entity may vary from context to context and also between
      application scenarios.  Often enough, the sender has no strong
      familiarity with the other communication endpoint.  While it seems
      advisable to utilize access control before disclosing information
      to such an entity, the reality of Internet communication is not so
      simple.  As such, a sender may still want to limit the amount of
      information disclosed to the recipient, and some mutual
      understanding of how this data is treated may need to be created,
      e.g., how long it is kept (retention) and whether re-distribution
      is permitted.

5.  Guidelines

   A precondition for reasoning about the privacy impact of a protocol
   or an architecture is to look at the high-level protocol model, as
   described in [RFC4101].  This step helps to identify actors and their
   relationships.  The protocol specification (or the set of
   specifications) then allows a deep dive into the data that is
   exchanged.

   The answers to these questions provide insight into the potential
   privacy impact:

   1.  What entities collect and use data?

       1.a:  How many entities collect and use data?

             Note that this aims to raise the question of what it is
             possible for various entities to inspect (or potentially
             modify).  In architectures with intermediaries, the
             question can be stated as "What data is exposed to
             intermediaries that they do not need to know to do their
             job?".

       1.b:  For each entity, what type of entity is it?
             +  The first-party site or application

             +  Other sites or applications whose data collection and
                use is in some way controlled by the first party

             +  Third parties that may use the data they collect for
                other purposes

   2.  For each entity, think about the relationship between the entity
       and the user.

       2.a:  What is the user's familiarity or degree of relationship
             with the entity in other contexts?

       2.b:  What is the user's reasonable expectation of the entity's
             involvement?

   3.  What data about the user will likely need to be collected?

   4.  What is the identification level of the data?  (identified,
       pseudonymous, anonymous; see [I-D.hansen-privacy-terminology])

   The questions in this section are based on the "Threshold Analysis
   for Online Advertising Practices" published by CDT [CDT].

6.  Example

   This section illustrates how privacy was dealt with in certain IETF
   protocols.  We will start the description with AAA for network access
   and expand it to other protocols in a future version of this draft.

6.1.  Presence

   A presence service, as defined in the abstract in RFC 2778 [RFC2778],
   allows users of a communications service to monitor one another's
   availability and disposition in order to make decisions about
   communicating.  Presence information is highly dynamic, and generally
   characterizes whether a user is online or offline, busy or idle, away
   from communications devices or nearby, and the like.  Necessarily,
   this information has certain privacy implications, and from the start
   the IETF approached this work with the aim of providing users with
   the controls to determine how their presence information would be
   shared.  The Common Profile for Presence (CPP) [RFC3859] defines a
   set of logical operations for delivery of presence information.  This
   abstract model is applicable to multiple presence systems.
   The SIP-based SIMPLE presence system [RFC3261] uses CPP as its
   baseline architecture, and the presence operations in the Extensible
   Messaging and Presence Protocol (XMPP) have also been mapped to CPP
   [RFC3922].

   SIMPLE [RFC3261], the application of the Session Initiation Protocol
   (SIP) to instant messaging and presence, has native support for
   subscriptions and notifications (with its event framework [RFC3265])
   and has added an event package [RFC3856] for presence in order to
   satisfy the requirements of CPP.  Other event packages were defined
   later to allow additional information to be exchanged.  With the help
   of the PUBLISH method [RFC3903], clients are able to install presence
   information on a server, so that the server can apply access-control
   policies before sharing presence information with other entities.
   The integration of an explicit authorization mechanism into the
   presence architecture has been a major improvement in terms of
   involving the end users in the decision-making process before sharing
   information.  Nearly all presence systems deployed today provide such
   a mechanism, typically through a reciprocal authorization system by
   which a pair of users, when they agree to be "buddies," consent to
   divulge their presence information to one another.

   One important extension for presence was to enable support for
   location sharing.  With the desire to standardize protocols for
   systems sharing geolocation, IETF work was started in the GEOPRIV
   working group.  During the initial requirements and privacy threat
   analysis in the process of chartering the working group, it became
   clear that the system would need an underlying communication
   mechanism supporting user consent to share location information.  The
   resemblance of these requirements to the presence framework was
   quickly recognized, and this design decision was documented in RFC
   4079 [RFC4079].
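   The consent model described above -- publish presence to a server,
   which then applies access-control policy before notifying watchers --
   can be sketched as follows.  The class and method names are invented
   for illustration and are not taken from the SIMPLE or XMPP
   specifications:

```python
# Minimal sketch of consent-based presence notification, loosely modeled
# on the publish/subscribe pattern described in the text.
class PresenceServer:
    def __init__(self):
        self.presence = {}       # presentity -> published presence state
        self.authorized = set()  # (watcher, presentity) consent pairs

    def publish(self, user, state):
        self.presence[user] = state

    def authorize(self, watcher, presentity):
        # Reciprocal "buddy" consent: each side grants the other access.
        self.authorized.add((watcher, presentity))
        self.authorized.add((presentity, watcher))

    def subscribe(self, watcher, presentity):
        # Access control is applied before any presence state is shared.
        if (watcher, presentity) not in self.authorized:
            return None  # policy denies: nothing leaks to the watcher
        return self.presence.get(presentity)

srv = PresenceServer()
srv.publish("alice", "online")
print(srv.subscribe("bob", "alice"))  # None: no consent yet
srv.authorize("bob", "alice")
print(srv.subscribe("bob", "alice"))  # "online"
```

   The key design point is that the server, not the watcher, is the
   policy enforcement point: a watcher without consent learns nothing,
   not even whether the presentity exists.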
   While presence systems exerted influence on location privacy, the
   location privacy work also influenced ongoing IETF work on presence
   by triggering the standardization of a general access-control
   policy language, the Common Policy framework (defined in RFC 4745
   [RFC4745]).  This language allows one to express ways to control
   the distribution of information as simple conditions, actions, and
   transformation rules expressed in an XML format.  Common Policy
   itself is an abstract format that needs to be instantiated: two
   examples can be found in the Presence Authorization Rules [RFC5025]
   and the Geolocation Policy [I-D.ietf-geopriv-policy].  The former
   provides additional expressiveness for presence-based systems,
   while the latter defines the syntax and semantics of location-based
   conditions and transformations.

   As a component of the prior work on the presence architecture, a
   format for presence information, the Presence Information Data
   Format (PIDF), had been developed.  For the purpose of conveying
   location information, an extension was developed: the PIDF Location
   Object (PIDF-LO).  With the aim of meeting the privacy requirements
   defined in RFC 2779 [RFC2779], a set of usage indications (such as
   whether retransmission is allowed or when the retention period
   expires) has been added; these indications always travel with the
   location information itself.  We believe that the standardization
   of these meta-rules that travel with location information has been
   a unique contribution to privacy on the Internet, recognizing the
   need for users to express their preferences as information travels
   through the Internet, from website to website.  This approach very
   much follows the spirit of Creative Commons [CC], namely the usage
   of a limited number of conditions (such as 'Share Alike' [CC-SA]).
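   The Common Policy model described above can be illustrated with a
   small sketch.  Common Policy [RFC4745] itself is an XML format; the
   Python below only mimics its rule structure (conditions, actions,
   transformations) and its combining behavior, where the most
   permissive matching rule wins.  The granularity levels and example
   rules are our own illustrative assumptions, not part of any schema.

   ```python
   # Rules are (condition, action, transformation) triples; matching
   # permit rules are combined, and the transformation reduces the
   # precision of the location information that is shared.

   GRANULARITY = ["none", "city", "exact"]   # coarse -> precise

   def evaluate(rules, watcher, location):
       """Apply the most permissive matching rule, RFC 4745 style."""
       allowed = "none"
       for rule in rules:
           if rule["condition"](watcher):                  # <conditions>
               if rule["action"] == "permit":              # <actions>
                   grant = rule["transform"]               # <transformations>
                   if GRANULARITY.index(grant) > GRANULARITY.index(allowed):
                       allowed = grant
       if allowed == "exact":
           return location
       if allowed == "city":
           return {"city": location["city"]}
       return None

   rules = [
       {"condition": lambda w: w == "friend@example.com",
        "action": "permit", "transform": "exact"},
       {"condition": lambda w: w.endswith("@example.org"),
        "action": "permit", "transform": "city"},
   ]

   loc = {"city": "Espoo", "lat": 60.2055, "lon": 24.6559}
   print(evaluate(rules, "friend@example.com", loc))      # full location
   print(evaluate(rules, "colleague@example.org", loc))   # {'city': 'Espoo'}
   print(evaluate(rules, "stranger@example.net", loc))    # None
   ```

   Note the "deny by default" shape: a watcher who matches no rule
   receives nothing, and transformations allow partial disclosure
   rather than an all-or-nothing choice.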
   Unlike Creative Commons, the GEOPRIV working group did not,
   however, initiate work to produce legal language nor to design
   graphical icons, since this would fall outside the scope of the
   IETF.  In particular, the GEOPRIV rules state a preference on the
   retention and retransmission of location information; while GEOPRIV
   cannot force any entity receiving a PIDF-LO object to abide by
   those preferences, if users lack the ability to express them at
   all, it is guaranteed that their preferences will not be honored.

   While these retention and retransmission meta-data elements could
   have been devised to accompany information elements in other IETF
   protocols, the decision was made to introduce these elements for
   geolocation first because of the sensitivity of location
   information.

   The GEOPRIV working group decided to clarify the architecture to
   make it more accessible to those outside the IETF and to provide a
   more generic description applicable beyond the context of presence;
   [I-D.ietf-geopriv-arch] is the work-in-progress write-up.

6.2.  AAA for Network Access

   At a high level, AAA for network access uses the communication
   model shown in Figure 3.  When an end host requests access to the
   network, it has to interact with a Network Access Server (NAS)
   using some front-end protocol (often at the link layer, such as
   IEEE 802.1X).  When asked by the NAS, the end host presents a
   Network Access Identifier (NAI), an email-like identifier that
   consists of a username part and a domain part.  The NAI is then
   used to discover the AAA server authorized for the user's domain,
   and an initial access request is forwarded to that server.
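   The data-minimization property of the NAI can be shown with a short
   sketch: only the realm part of username@realm is needed to route
   the request, so the username need not be exposed to intermediaries.
   The parsing below is simplified relative to the full RFC 4282
   grammar, and the "anonymous" outer-identity form is one illustrative
   convention, not a mandated encoding.

   ```python
   # Intermediaries route on the realm alone; the username can stay
   # hidden in the outer identity and be revealed only inside a
   # protected EAP method.

   def realm_for_routing(nai: str) -> str:
       """Extract the routing-relevant realm; the username is not needed."""
       username, sep, realm = nai.rpartition("@")
       if not sep or not realm:
           raise ValueError("NAI has no realm; cannot route")
       return realm

   def anonymized(nai: str) -> str:
       """Outer identity hiding the username but preserving routability."""
       return "anonymous@" + realm_for_routing(nai)

   print(realm_for_routing("alice@example.com"))  # example.com
   print(anonymized("alice@example.com"))         # anonymous@example.com
   ```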
   To address various security, accounting, and fraud prevention
   aspects, an end-to-end authentication procedure is executed between
   the end host (the peer) and a separate component within the AAA
   server (the server) using the Extensible Authentication Protocol
   (EAP).  After a successful authentication protocol exchange, the
   user may be authorized to access the network, and keying material
   is provided to the NAS to enable link-layer security over the air
   interface.

   From a privacy point of view, the entities participating in this
   ecosystem are the user, an end host, the NAS, a range of different
   intermediaries, and the AAA server.  The user will most likely have
   some form of contractual relationship with the entity operating the
   AAA server, since credential provisioning had to happen somehow,
   but in certain deployments, like coffee shops, this is not
   guaranteed.  In many deployments, the subscriber is provided with
   credentials during this initial registration process after showing
   some form of identification (e.g., a passport); consequently, the
   NAI together with the credentials can be linked to a specific
   subscriber, often a single person.

   The username part of the NAI is data that the end host provides
   during network access authentication but that intermediaries do not
   need in order to fulfill their role in AAA message routing.  Hiding
   the user's identity is, as discussed in RFC 4282 [RFC4282],
   possible only when NAIs are used together with a separate
   authentication method that can transfer the username in a secure
   manner.  Such EAP methods have been designed, and requirements for
   offering such functionality have become recommended design
   criteria; see [RFC4017].

   More than just identity information is exchanged during network
   access authentication.
   The NAS provides information about the user's point of attachment
   to the AAA server, and the AAA server in response returns data
   related to the authorization decision.  While the need to exchange
   data is motivated by the service usage itself, there are still a
   number of questions that could be asked, such as:

   o  What mechanisms can be utilized to offer users ways to authorize
      the sharing of information (considering that the ability for
      protocol interaction is limited without successful network
      access connectivity)?

   o  What are the best current practices for privacy-sensitive
      operation of intermediaries?  Since end hosts do not interact
      with intermediaries explicitly, and users have no relationship
      with those who operate them, it is quite likely that their
      practices are less widely known.

   o  Are there alternative approaches to trust establishment between
      the NAS and the AAA server so that the involvement of
      intermediaries can be limited or avoided?

                     +--------------+
                     |  AAA Server  |
                     +-^----------^-+
                    *  EAP |  RADIUS/
                    *      |  Diameter
                    --v----------v--
                 ///                \\\
                //  AAA Proxies,      \\      ***
               |    Relays, and        |     back-
               |    Redirect Agents    |      end
                \\                    //      ***
                 \\\                ///
                    --^----------^--
                    *  EAP |  RADIUS/
                    *      |  Diameter
   +----------+       Data       +-v----------v---+
   |          |<---------------->|                |
   | End Host |  EAP/EAP Method  | Network Access |
   |          |<****************>|     Server     |
   +----------+                  +----------------+
                *** front-end ***

   Legend:

   <****>:  End-to-end exchange
   <---->:  Hop-by-hop exchange

        Figure 3: Network Access Authentication Architecture

7.  Security Considerations

   This document describes aspects that a protocol designer should
   consider in the area of privacy, in addition to the regular
   security analysis.

8.  IANA Considerations

   This document does not require actions by IANA.

9.  Acknowledgements

   Add your name here.

10.
References

10.1.  Normative References

   [I-D.hansen-privacy-terminology]
              Pfitzmann, A., Hansen, M., and H. Tschofenig,
              "Terminology for Talking about Privacy by Data
              Minimization: Anonymity, Unlinkability, Undetectability,
              Unobservability, Pseudonymity, and Identity Management",
              draft-hansen-privacy-terminology-01 (work in progress),
              August 2010.

   [OECD]     Organization for Economic Co-operation and Development,
              "OECD Guidelines on the Protection of Privacy and
              Transborder Flows of Personal Data", 1980, available at
              http://www.oecd.org/EN/document/
              0,,EN-document-0-nodirectorate-no-24-10255-0,00.html
              (retrieved September 2010).

10.2.  Informative References

   [Altman]   Altman, I., "The Environment and Social Behavior:
              Privacy, Personal Space, Territory, Crowding",
              Brooks/Cole, 1975.

   [CC]       "Creative Commons", June 2010.

   [CC-SA]    "Creative Commons - Licenses", June 2010.

   [CDT]      Center for Democracy & Technology, "Threshold Analysis
              for Online Advertising Practices", available at
              http://www.cdt.org/privacy/20090128threshold.pdf,
              January 2009.

   [CTIA]     CTIA, "Best Practices and Guidelines for Location-Based
              Services", March 2010.

   [DPD95]    European Commission, "Directive 95/46/EC of the European
              Parliament and of the Council of 24 October 1995 on the
              protection of individuals with regard to the processing
              of personal data and on the free movement of such data",
              Official Journal L 281, 23/11/1995, pp. 0031-0050,
              November 1995.

   [EFF-Privacy]
              Blumberg, A. and P. Eckersley, "On Locational Privacy,
              and How to Avoid Losing it Forever", August 2009.

   [Granada]  International Working Group on Data Protection in
              Telecommunications, "The Granada Charter of Privacy in a
              Digital World, Granada (Spain)", April 2010.

   [I-D.ietf-ecrit-framework]
              Rosen, B., Schulzrinne, H., Polk, J., and A.
              Newton, "Framework for Emergency Calling using Internet
              Multimedia", draft-ietf-ecrit-framework-12 (work in
              progress), October 2010.

   [I-D.ietf-geopriv-arch]
              Barnes, R., Lepinski, M., Cooper, A., Morris, J.,
              Tschofenig, H., and H. Schulzrinne, "An Architecture for
              Location and Location Privacy in Internet Applications",
              draft-ietf-geopriv-arch-03 (work in progress),
              October 2010.

   [I-D.ietf-geopriv-policy]
              Schulzrinne, H., Tschofenig, H., Morris, J., Cuellar,
              J., and J. Polk, "Geolocation Policy: A Document Format
              for Expressing Privacy Preferences for Location
              Information", draft-ietf-geopriv-policy-22 (work in
              progress), October 2010.

   [I-D.morris-policy-cons]
              Morris, J., Aboba, B., Peterson, J., and H. Tschofenig,
              "Public Policy Considerations for Internet Protocols",
              draft-morris-policy-cons-00 (work in progress),
              October 2010.

   [Madrid]   Data Protection Authorities and Privacy Regulators, "The
              Madrid Resolution: International Standards on the
              Protection of Personal Data and Privacy", 31st
              International Conference of Data Protection and Privacy
              Commissioners, November 2009.

   [RFC2778]  Day, M., Rosenberg, J., and H. Sugano, "A Model for
              Presence and Instant Messaging", RFC 2778,
              February 2000.

   [RFC2779]  Day, M., Aggarwal, S., Mohr, G., and J. Vincent,
              "Instant Messaging / Presence Protocol Requirements",
              RFC 2779, February 2000.

   [RFC2804]  IAB and IESG, "IETF Policy on Wiretapping", RFC 2804,
              May 2000.

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
              A., Peterson, J., Sparks, R., Handley, M., and E.
              Schooler, "SIP: Session Initiation Protocol", RFC 3261,
              June 2002.

   [RFC3265]  Roach, A., "Session Initiation Protocol (SIP)-Specific
              Event Notification", RFC 3265, June 2002.
   [RFC3856]  Rosenberg, J., "A Presence Event Package for the Session
              Initiation Protocol (SIP)", RFC 3856, August 2004.

   [RFC3859]  Peterson, J., "Common Profile for Presence (CPP)",
              RFC 3859, August 2004.

   [RFC3903]  Niemi, A., "Session Initiation Protocol (SIP) Extension
              for Event State Publication", RFC 3903, October 2004.

   [RFC3922]  Saint-Andre, P., "Mapping the Extensible Messaging and
              Presence Protocol (XMPP) to Common Presence and Instant
              Messaging (CPIM)", RFC 3922, October 2004.

   [RFC4017]  Stanley, D., Walker, J., and B. Aboba, "Extensible
              Authentication Protocol (EAP) Method Requirements for
              Wireless LANs", RFC 4017, March 2005.

   [RFC4079]  Peterson, J., "A Presence Architecture for the
              Distribution of GEOPRIV Location Objects", RFC 4079,
              July 2005.

   [RFC4101]  Rescorla, E. and IAB, "Writing Protocol Models",
              RFC 4101, June 2005.

   [RFC4282]  Aboba, B., Beadles, M., Arkko, J., and P. Eronen, "The
              Network Access Identifier", RFC 4282, December 2005.

   [RFC4745]  Schulzrinne, H., Tschofenig, H., Morris, J., Cuellar,
              J., Polk, J., and J. Rosenberg, "Common Policy: A
              Document Format for Expressing Privacy Preferences",
              RFC 4745, February 2007.

   [RFC4858]  Levkowetz, H., Meyer, D., Eggert, L., and A. Mankin,
              "Document Shepherding from Working Group Last Call to
              Publication", RFC 4858, May 2007.

   [RFC4962]  Housley, R. and B. Aboba, "Guidance for Authentication,
              Authorization, and Accounting (AAA) Key Management",
              BCP 132, RFC 4962, July 2007.

   [RFC5025]  Rosenberg, J., "Presence Authorization Rules", RFC 5025,
              December 2007.

   [RFC5598]  Crocker, D., "Internet Mail Architecture", RFC 5598,
              July 2009.

   [SP800-122]
              McCallister, E., Grance, T., and K. Scarfone, "Guide to
              Protecting the Confidentiality of Personally
              Identifiable Information (PII)", NIST Special
              Publication (SP) 800-122, April 2010.
   [SP800-130]
              Barker, E., Branstad, D., Chokhani, S., and M. Smid,
              "DRAFT: A Framework for Designing Cryptographic Key
              Management Systems", NIST Special Publication (SP)
              800-130, June 2010.

   [Tussle]   Clark, D., Wroclawski, J., Sollins, K., and R. Braden,
              "Tussle in Cyberspace: Defining Tomorrow's Internet", in
              Proc. ACM SIGCOMM,
              http://www.acm.org/sigcomm/sigcomm2002/papers/
              tussle.html, 2002.

   [Warren]   Warren, S. and L. Brandeis, "The Right to Privacy",
              Harvard Law Review, vol. 4, 1890.

   [Westin]   Westin, A., "Privacy and Freedom", Atheneum, New York,
              1967.

   [browser-fingerprinting]
              Eckersley, P., "How Unique Is Your Browser?", Privacy
              Enhancing Technologies Symposium (PETS 2010), Springer
              Lecture Notes in Computer Science, 2010.

   [limits]   Cate, F., "The Limits of Notice and Choice", IEEE
              Security and Privacy, pp. 59-62, November 2005.

Authors' Addresses

   Bernard Aboba
   Microsoft Corporation
   One Microsoft Way
   Redmond, WA  98052
   US

   Email: bernarda@microsoft.com

   John B. Morris, Jr.
   Center for Democracy and Technology
   1634 I Street NW, Suite 1100
   Washington, DC  20006
   USA

   Email: jmorris@cdt.org
   URI:   http://www.cdt.org

   Jon Peterson
   NeuStar, Inc.
   1800 Sutter St Suite 570
   Concord, CA  94520
   US

   Email: jon.peterson@neustar.biz

   Hannes Tschofenig
   Nokia Siemens Networks
   Linnoitustie 6
   Espoo  02600
   Finland

   Phone: +358 (50) 4871445
   Email: Hannes.Tschofenig@gmx.net
   URI:   http://www.tschofenig.priv.at