Network Working Group                                           B. Aboba
Internet-Draft                                     Microsoft Corporation
Intended status: Informational                                 J. Morris
Expires: September 15, 2011                                          CDT
                                                             J. Peterson
                                                           NeuStar, Inc.
                                                           H. Tschofenig
                                                  Nokia Siemens Networks
                                                          March 14, 2011

             Privacy Considerations for Internet Protocols
                draft-morris-privacy-considerations-03.txt

Abstract

   This document aims to make protocol designers aware of privacy-
   related design choices and offers guidance for developing privacy
   considerations for IETF documents.  While specifications cannot
   police the implementation community, protocol architects nonetheless
   have a role to play in the improvement of privacy, both by making a
   conscious decision to design for privacy and by documenting privacy
   risks in protocol designs.

   This document is discussed on the Internet Privacy Discussion mailing
   list (see https://www.ietf.org/mailman/listinfo/ietf-privacy).

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 15, 2011.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Scope
   3.  Threat Model
   4.  Guidelines
   5.  Example
     5.1.  Presence
     5.2.  AAA for Network Access
   6.  Security Considerations
   7.  IANA Considerations
   8.  Acknowledgements
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Appendix A.  Historical Background
   Authors' Addresses

1.  Introduction

   The IETF produces specifications that aim to make the Internet
   better.  Those specifications fall into a number of different
   categories, including protocol specifications, best current practice
   descriptions, and architectural documentation.  While IETF documents
   are typically implementation-agnostic, they are often, if not always,
   impacted by fundamental architectural design decisions.  These design
   decisions in turn hinge on technical aspects, predictions about
   deployment incentives, operational considerations, regulatory
   concerns, security frameworks, and so on.
   This document aims to make protocol designers aware of privacy-
   related design choices and offers guidance for developing privacy
   considerations for IETF documents.  While specifications cannot
   police the implementation community, protocol architects nonetheless
   have a role to play in the improvement of privacy, both by making a
   conscious decision to design for privacy and by documenting privacy
   risks in protocol designs.  While we discuss the limitations of
   standards activities in Section 2, we maintain that the IETF
   community, in its mandate to "make the Internet better", has a role
   to play in making its specifications, and the Internet, more privacy
   friendly.  This must spring from awareness of how design decisions
   impact privacy, and must be reflected both in protocol design and in
   the documentation of potential privacy challenges in the deployment
   of a single protocol or an entire suite of protocols.

   From the activities in the industry, one can observe three schools of
   thought in the work on privacy, namely:

   Privacy by Technology:

      This approach considers the assurance of privacy in the design of
      a protocol as a technical problem.  For example, the design of a
      specific application may heighten privacy by sharing fewer data
      items with other parties (i.e., data minimization).  Limiting data
      sharing also avoids the need to evaluate how data-related consent
      is obtained, to define policies around how to protect data, etc.
      Ultimately, different architectural designs will lead to different
      results with respect to privacy.

      Examples in the area of location privacy can be found in
      [EFF-Privacy].  These solutions often make heavy use of
      cryptographic techniques, such as threshold cryptography and
      secret sharing schemes.

   Privacy by Policy:

      In this approach, privacy protection happens through establishing
      the consent of the user to a set of privacy policies.
      Hence, protection of the user's privacy is largely the
      responsibility of the company collecting, processing, and storing
      personal data.  Notices and choices are offered to the customer
      and backed up by an appropriate legal framework.

      An example of this approach for the privacy of location-based
      services is the recent publication by CTIA [CTIA].

   Policy/Technology Hybrid:

      This approach targets a middle ground where some privacy-
      enhancing features can be provided by technology and made
      attractive to implementers (via explicit best current practices
      for implementation, configuration, and deployment, or by raising
      awareness implicitly via a discussion about privacy in technical
      specifications), but other aspects can only be provided and
      enforced by the parties who control the deployment.  Deployments
      often base their decisions on the existence of a plausible legal
      framework.

   The authors believe that the policy/technology hybrid approach is
   the most practical for engineers in the IETF.

   The remainder of this document is structured as follows: In
   Section 2, we illustrate what is in scope for the IETF and where the
   responsibility of the IETF ends.  In Section 3, we discuss the main
   threat model for privacy considerations.  In Section 4, we propose
   guidelines for documenting privacy within IETF specifications, and
   in Section 5 we examine the privacy characteristics of a few
   exemplary IETF protocols and explain what privacy features have been
   provided to date.  Appendix A provides a brief introduction to the
   concept of privacy.

2.  Scope

   The IETF at large produces specifications that typically fall into
   the following categories:

   o  Process specifications (e.g., the WG shepherding guidelines
      described in RFC 4858 [RFC4858]).  These documents aim to
      document and to improve the work style within the IETF.

   o  Building blocks (e.g.,
      cryptographic algorithms, MIME type registrations).  These
      specifications are meant to be used with other protocols in one
      or several communication paradigms.

   o  Architectural descriptions (for example, on IP-based emergency
      services [I-D.ietf-ecrit-framework] or Internet Mail [RFC5598]).

   o  Best current practices (e.g., Guidance for Authentication,
      Authorization, and Accounting (AAA) Key Management [RFC4962]).

   o  Policy statements (e.g., the IETF Policy on Wiretapping
      [RFC2804]).

   Often, the architectural description is compiled some time after
   deployment has long been ongoing, and therefore those who implement
   and those who deploy have to make their own determination of which
   protocols they would like to glue together into a complete system.
   This work style has the advantage that protocol designers are
   encouraged to write their specifications in a flexible way so that
   they can be used in multiple contexts with different deployment
   scenarios without a huge amount of interdependency between the
   components.  [Tussle] highlights the importance of such an approach
   and [I-D.morris-policy-cons] offers a more detailed discussion.

   This work style has an important consequence for the scope of
   privacy work in the IETF, namely:

   o  The standardization work focuses on those parts where
      interoperability is really essential, rather than describing a
      specific instantiation of an architecture, and therefore leaves a
      lot of choices to deployments.

   o  Application-internal functionality, such as APIs, and details
      about databases are outside the scope of the IETF.

   o  Regulatory requirements of different jurisdictions are not part
      of the IETF work either.
   Here is an example that aims to illustrate the boundaries of the
   IETF work: Imagine a social networking site that allows user
   registration, requires user authentication prior to usage, and
   offers its functionality to Web browser users via HTTP, real-time
   messaging functionality via XMPP, and email notifications.
   Additionally, support for data sharing with other Internet service
   providers is provided by OAuth.

   While HTTP, XMPP, Email, and OAuth are IETF specifications, they
   only define what the protocol behavior on the wire looks like.  They
   certainly have an architectural spirit that has enormous impact on
   the protocol mechanisms and the set of specifications that are
   required.  However, IETF specifications would not go into details of
   how the user has to register, what type of data he has to provide to
   this social networking site, how long transaction data is kept, how
   requirements for lawful intercept are met, how authorization
   policies are designed to let users know more about data they share
   with other Internet services, how the user's data is secured against
   unauthorized access, whether the HTTP communication exchange between
   the browser and the social networking site uses TLS or not, what
   data is uploaded by the user, what the privacy policy of the social
   networking site should look like, etc.

   Another example is the usage of HTTP for the Web.  HTTP is published
   in RFC 2616 and was designed to allow the exchange of arbitrary
   data.  An analysis of potential privacy problems would consider what
   type of data is exchanged and how this data is stored and processed.
   Hence, the analysis for a static webpage run by a company would
   differ from that for the usage of HTTP for exchanging health
   records.
   For a protocol designer working on HTTP extensions (such as WebDAV),
   it would therefore be difficult to describe all possible privacy
   considerations, given that the space of possible usage is
   essentially unlimited.

   +--------+
   |Building|-------+
   |Blocks  |       |
   +--------+       |
            +------v-----+
            |            |----+
            |Architecture|    |
            +------------+    |
                           +---v--+
                           |System|--------+
                           |Design|        |
                           +------+        |
                                    +-------v------+
                                    |              |------+
                                    |Implementation|      |
                                    +--------------+      |
                                                     +-----v----+
                                                     |          |
                                                     |Deployment|
                                                     +----------+

                      Figure 1: Development Process

   Figure 1 shows a typical development process.  IETF work often
   starts with identifying building blocks that can then be used in
   different architectural variants useful for a wide range of usage
   scenarios.  Before implementation activities start, a software
   architect needs to evaluate which components to integrate, how to
   provide proper performance characteristics, etc.  Finally, the
   implemented work needs to be deployed.  Privacy considerations play
   a role along the entire process.

   To pick an example from the security field, consider the NIST
   Framework for Designing Cryptographic Key Management Systems
   [SP800-130], NIST SP 800-130.  SP 800-130 provides a number of
   recommendations that can be addressed largely during the system
   design phase as well as in the implementation phase of product
   development.  The cryptographic building blocks and the underlying
   architecture are assumed to be sound.  Even with well-designed
   cryptographic components there are plenty of possibilities to
   introduce security vulnerabilities in the later stages of the
   development cycle.

   Similar to the work on security, the impact of work in standards
   developing organizations is limited.
   Nevertheless, discussing potential privacy problems and considering
   privacy in the design of an IETF protocol can offer system
   architects and those deploying systems additional insights.  The
   rest of this document is focused on illustrating how protocol
   designers can consider privacy in their design decisions, as they do
   factors like security, congestion control, scalability, operations
   and management, etc.

3.  Threat Model

   To consider privacy in protocol design, it is useful to think about
   the overall communication architecture and what the different actors
   could do.  This analysis is similar to a threat analysis found in
   the security considerations sections of IETF documents.  See also
   RFC 4101 [RFC4101] for an illustration of how to write protocol
   models.  In Figure 2 we show a communication model found in many of
   today's protocols where a sender wants to establish communication
   with some recipient and thereby uses some form of intermediary
   (referred to as a relay in Figure 2).  In some cases this
   intermediary stays in the communication path for the entire duration
   of the communication, and sometimes it is only used for
   communication establishment, for either inbound or outbound
   communication.  In rare cases there may even be a series of relays
   that are traversed.

                                           +-----------+
                                           |           |
                                          >| Recipient |
                                         / |           |
                                       ,'  +-----------+
   +--------+      )-------(         ,'    +-----------+
   |        |      |       |        -      |           |
   | Sender |<---->| Relay |<------>       | Recipient |
   |        |      |       |`.             |           |
   +--------+      )-------(  \            +-----------+
       ^                       `.          +-----------+
       :                         \         |           |
       :                          `>       | Recipient |
       :..................................>|           |
                                           +-----------+

   Legend:

   <....>  End-to-End Communication
   <---->  Hop-by-Hop Communication

          Figure 2: Example Instantiation of Involved Entities

   We can distinguish between three types of adversaries:

   Eavesdropper:  RFC 4949 describes the act of 'eavesdropping' as

         "Passive wiretapping done secretly, i.e., without the
         knowledge of the originator or the intended recipients of the
         communication."

      Eavesdropping is often considered by IETF protocols in the
      context of a security analysis, which deals with a range of
      attacks by offering confidentiality protection.

      RFC 3552 provides guidance on how to write security
      considerations for IETF documents and already demands that the
      confidentiality security service be considered.  While IETF
      protocols offer guidance on how to secure communication against
      eavesdroppers, deployments sometimes choose not to enable it.

   Middleman:  Many protocols developed today show a more complex
      communication pattern than just client-server communication, as
      motivated in Figure 2.  Store-and-forward protocols are examples
      where entities participate in the message delivery even though
      they are not the final recipients.  Often, these intermediaries
      only need to see the small amount of information necessary for
      message routing, and security and/or protocol mechanisms should
      ensure that end-to-end information is made inaccessible to these
      entities.  Unfortunately, the difficulty of deploying end-to-end
      security procedures, the additional messaging, the computational
      overhead, and other business/legal requirements often slow down
      or prevent the deployment of these end-to-end security
      mechanisms, giving these intermediaries more exposure to
      communication patterns and communication payloads than necessary.
   Recipient:  It is not intuitive to treat the recipient as an
      adversary, since the entire purpose of the communication
      interaction is to provide information to it.  However, the degree
      of familiarity and the type of information that needs to be
      shared with such an entity may vary from context to context and
      also between application scenarios.  Often enough, the sender has
      no strong familiarity with the other communication endpoint.
      While it seems advisable to utilize access control before
      disclosing information to such an entity, the practical reality
      in Internet communication is different.  As such, a sender may
      still want to limit the amount of information disclosed to the
      recipient and to reach some mutual understanding of what the
      purpose of the collection is, how long the personal data needs to
      be stored, and how processing takes place.  Additionally, an
      important part of privacy protection is for the recipient to
      offer privacy notices on the usage of the collected personal
      data, to offer choices, and to obtain consent from the data
      subject.

4.  Guidelines

   A precondition for reasoning about the impact of a protocol or an
   architecture is to look at the high-level protocol model, as
   described in [RFC4101].  This step helps to identify actors and
   their relationships.  The protocol specification (or the set of
   specifications) then allows a deep dive into the data that is
   exchanged.

   The answers to these questions provide insight into the potential
   privacy impact:

   1.  What entities collect and use data?

       1.a:  How many entities collect and use data?

             Note that this question aims to identify what it is
             possible for various entities to inspect (or potentially
             modify).  In architectures with intermediaries, the
             question can be stated as "What data is exposed to
             intermediaries that they do not need to know to do their
             job?".
       1.b:  For each entity, what type of entity is it?

             +  The first-party site or application

             +  Other sites or applications whose data collection and
                use is in some way controlled by the first party

             +  Third parties that may use the data they collect for
                other purposes

   2.  For each entity, think about the relationship between the entity
       and the user.

       2.a:  What is the user's familiarity or degree of relationship
             with the entity in other contexts?

       2.b:  What is the user's reasonable expectation of the entity's
             involvement?

   3.  What data about the user will likely need to be collected?

   4.  What is the identification level of the data?  (identified,
       pseudonymous, anonymous; see [I-D.hansen-privacy-terminology])

   The questions in this section are based on the CDT-published paper
   "Threshold Analysis for Online Advertising Practices" [CDT].

5.  Example

   This section illustrates how privacy was dealt with in certain IETF
   protocols.  We describe presence and AAA for network access and will
   expand the discussion to other protocols in a future version of this
   draft.

5.1.  Presence

   A presence service, as defined in the abstract in RFC 2778
   [RFC2778], allows users of a communications service to monitor one
   another's availability and disposition in order to make decisions
   about communicating.  Presence information is highly dynamic, and
   generally characterizes whether a user is online or offline, busy or
   idle, away from communications devices or nearby, and the like.
   Necessarily, this information has certain privacy implications, and
   from the start the IETF approached this work with the aim to provide
   users with the controls to determine how their presence information
   would be shared.  The Common Profile for Presence (CPP) [RFC3859]
   defines a set of logical operations for delivery of presence
   information.
   This abstract model is applicable to multiple presence systems.  The
   SIP-based SIMPLE presence system [RFC3261] uses CPP as its baseline
   architecture, and the presence operations in the Extensible
   Messaging and Presence Protocol (XMPP) have also been mapped to CPP
   [RFC3922].

   SIMPLE [RFC3261], the application of the Session Initiation Protocol
   (SIP) to instant messaging and presence, has native support for
   subscriptions and notifications (with its event framework
   [RFC3265]) and has added an event package [RFC3856] for presence in
   order to satisfy the requirements of CPP.  Other event packages were
   defined later to allow additional information to be exchanged.  With
   the help of the PUBLISH method [RFC3903], clients are able to
   install presence information on a server, so that the server can
   apply access control policies before sharing presence information
   with other entities.  The integration of an explicit authorization
   mechanism into the presence architecture has been a major
   improvement in terms of involving the end users in the decision-
   making process before sharing information.  Nearly all presence
   systems deployed today provide such a mechanism, typically through a
   reciprocal authorization system by which a pair of users, when they
   agree to be "buddies," consent to divulge their presence information
   to one another.

   One important extension for presence was to enable support for
   location sharing.  With the desire to standardize protocols for
   systems sharing geolocation, IETF work was started in the GEOPRIV
   working group.  During the initial requirements and privacy threat
   analysis in the process of chartering the working group, it became
   clear that the system would need an underlying communication
   mechanism supporting user consent to share location information.
   The resemblance of these requirements to the presence framework was
   quickly recognized, and this design decision was documented in RFC
   4079 [RFC4079].

   While presence systems exerted influence on location privacy, the
   location privacy work also influenced ongoing IETF work on presence
   by triggering the standardization of a general access control policy
   language called the Common Policy framework (defined in RFC 4745
   [RFC4745]).  This language allows one to express ways to control the
   distribution of information as simple conditions, actions, and
   transformation rules expressed in an XML format.  Common Policy
   itself is an abstract format which needs to be instantiated: two
   examples can be found in the Presence Authorization Rules [RFC5025]
   and the Geolocation Policy [I-D.ietf-geopriv-policy].  The former
   provides additional expressiveness for presence-based systems, while
   the latter defines syntax and semantics for location-based
   conditions and transformations.

   As a component of the prior work on the presence architecture, a
   format for presence information, called the Presence Information
   Data Format (PIDF), had been developed.  For the purposes of
   conveying location information, an extension was developed, the PIDF
   Location Object (PIDF-LO).  With the aim of meeting the privacy
   requirements defined in RFC 2779 [RFC2779], a set of usage
   indications (such as whether retransmission is allowed or when the
   retention period expires) has been added in the form of policies
   that always travel with the location information itself.  We believe
   that the standardization of these meta-rules that travel with
   location information has been a unique contribution to privacy on
   the Internet, recognizing the need for users to express their
   preferences when information travels through the Internet, from
   website to website.
   This approach very much follows the spirit of Creative Commons [CC],
   namely the usage of a limited number of conditions (such as 'Share
   Alike' [CC-SA]).  Unlike Creative Commons, the GEOPRIV working group
   did not, however, initiate work to produce legal language or to
   design graphical icons, since this would fall outside the scope of
   the IETF.  In particular, the GEOPRIV rules state a preference on
   the retention and retransmission of location information; while
   GEOPRIV cannot force any entity receiving a PIDF-LO object to abide
   by those preferences, if users lack the ability to express them at
   all, we can guarantee their preferences will not be honored.

   While these retention and retransmission meta-data elements could
   have been devised to accompany information elements in other IETF
   protocols, the decision was made to introduce these elements for
   geolocation initially because of the sensitivity of location
   information.

   The GEOPRIV working group has decided to clarify the architecture to
   make it more accessible to those outside the IETF, and also to
   provide a more generic description applicable beyond the context of
   presence.  [I-D.ietf-geopriv-arch] shows the work-in-progress
   writeup.

5.2.  AAA for Network Access

   At a high level, AAA for network access uses the communication model
   shown in Figure 3.  When an end host requests access to the network,
   it has to interact with a Network Access Server (NAS) using some
   front-end protocol (often at the link layer, such as IEEE 802.1X).
   When asked by the NAS, the end host presents a Network Access
   Identifier (NAI), an email-like identifier that consists of a
   username and a domain part.  This NAI is then used to discover the
   AAA server authorized for the user's domain, and an initial access
   request is forwarded to it.
   To deal with various security, accounting, and fraud prevention
   aspects, an end-to-end authentication procedure, run between the end
   host (the peer) and a separate component within the AAA server (the
   server), is executed using the Extensible Authentication Protocol
   (EAP).  After a successful authentication protocol exchange, the
   user may be authorized to access the network, and keying material is
   provided to the NAS to enable link layer security over the air
   interface.

   From a privacy point of view, the entities participating in this
   ecosystem are the user, an end host, the NAS, a range of different
   intermediaries, and the AAA server.  The user will most likely have
   some form of contractual relationship with the entity operating the
   AAA server, since credential provisioning had to happen somehow,
   but, in certain deployments like coffee shops, this is not
   guaranteed.  In many deployments, during this initial registration
   process the subscriber is provided with credentials after showing
   some form of identification information (e.g., a passport), and
   consequently the NAI together with the credentials can be linked to
   a specific subscriber, often a single person.

   The username part of the NAI is data that the end host provides
   during network access authentication but that intermediaries do not
   need in order to fulfill their role in AAA message routing.  Hiding
   the user's identity is, as discussed in RFC 4282 [RFC4282], possible
   only when NAIs are used together with a separate authentication
   method that can transfer the username in a secure manner.  Such EAP
   methods have been designed, and requirements for offering such
   functionality have become recommended design criteria; see
   [RFC4017].

   More than just identity information is exchanged during network
   access authentication.
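   The data-minimization idea behind such identity hiding can be
   sketched in a few lines of code.  The snippet below is illustrative
   only and not taken from any IETF specification; the function names
   and the "anonymous" placeholder are our own invention, although the
   general technique (a routing-visible outer identity that carries
   only the realm, while the real username travels inside a protected
   EAP method) follows the discussion in RFC 4282.

   ```python
   # Illustrative sketch of NAI-based data minimization (not normative).
   # An NAI has the form "username@realm"; AAA intermediaries only need
   # the realm part to route the access request to the home AAA server.

   def split_nai(nai: str):
       """Split an NAI into its username and realm parts."""
       username, _, realm = nai.partition("@")
       return username, realm

   def outer_identity(nai: str, placeholder: str = "anonymous") -> str:
       """Build a privacy-friendly outer identity: keep the realm that
       intermediaries need for routing, but hide the username.  The
       real username would instead be carried inside a protected EAP
       method exchange with the home AAA server."""
       _, realm = split_nai(nai)
       return "{}@{}".format(placeholder, realm)

   if __name__ == "__main__":
       nai = "alice@example.net"
       print(split_nai(nai))       # ('alice', 'example.net')
       print(outer_identity(nai))  # anonymous@example.net
   ```

   The point of the sketch is that routing still works on the
   anonymized form, since only the realm is consulted by
   intermediaries; the username is exposed to them only if the
   deployment chooses to send it in the clear.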
   The NAS provides information about the user's point of attachment to
   the AAA server, and the AAA server in response provides data related
   to the authorization decision.  While the need to exchange data is
   motivated by the service usage itself, there are still a number of
   questions that could be asked, such as:

   o  What mechanisms can be utilized to offer users ways to authorize
      the sharing of information (considering that the ability for
      protocol interaction is limited without successful network access
      connectivity)?

   o  What are the best current practices for a privacy-sensitive
      operation of intermediaries?  Since end hosts do not interact
      with intermediaries explicitly and users have no relationship
      with those who operate them, it is quite likely that their
      practices are less widely known.

   o  Are there alternative approaches to trust establishment between
      the NAS and the AAA server so that the involvement of
      intermediaries can be limited or avoided?

                       +--------------+
                       |  AAA Server  |
                       +-^----------^-+
                        * EAP  | RADIUS/
                        *      | Diameter
                      --v----------v--
                   ///               \\\
                  //  AAA Proxies,     \\    ***
                  |   Relays, and       |   back-
                  |   Redirect Agents   |    end
                  \\                   //    ***
                   \\\               ///
                      --^----------^--
                        * EAP  | RADIUS/
                        *      | Diameter
   +----------+      Data     +-v----------v---+
   |          |<------------->|                |
   | End Host | EAP/EAP Method| Network Access |
   |          |<*************>|     Server     |
   +----------+               +----------------+
                *** front-end ***

   Legend:

   <****>: End-to-end exchange
   <---->: Hop-by-hop exchange

          Figure 3: Network Access Authentication Architecture

6.  Security Considerations

   This document describes aspects a protocol designer should consider
   in the area of privacy in addition to the regular security analysis.

7.  IANA Considerations

   This document does not require actions by IANA.

8.  Acknowledgements

   We would like to thank the participants for the feedback they
   provided during the December 2010 Internet Privacy workshop co-
   organized by MIT, ISOC, W3C, and the IAB.

9.  References

9.1.  Normative References

   [I-D.hansen-privacy-terminology]
              Pfitzmann, A., Hansen, M., and H. Tschofenig,
              "Terminology for Talking about Privacy by Data
              Minimization: Anonymity, Unlinkability, Undetectability,
              Unobservability, Pseudonymity, and Identity Management",
              draft-hansen-privacy-terminology-01 (work in progress),
              August 2010.

   [OECD]     Organization for Economic Co-operation and Development,
              "OECD Guidelines on the Protection of Privacy and
              Transborder Flows of Personal Data", available at
              http://www.oecd.org/EN/document/
              0,,EN-document-0-nodirectorate-no-24-10255-0,00.html
              (September 2010), 1980.

9.2.  Informative References

   [Altman]   Altman, I., "The Environment and Social Behavior:
              Privacy, Personal Space, Territory, Crowding",
              Brooks/Cole, 1975.

   [CC]       "Creative Commons", June 2010.

   [CC-SA]    "Creative Commons - Licenses", June 2010.

   [CDT]      Center for Democracy & Technology, "Threshold Analysis
              for Online Advertising Practices", available at
              http://www.cdt.org/privacy/20090128threshold.pdf,
              January 2009.

   [CTIA]     CTIA, "Best Practices and Guidelines for Location-Based
              Services", March 2010.

   [DPD95]    European Commission, "Directive 95/46/EC of the European
              Parliament and of the Council of 24 October 1995 on the
              protection of individuals with regard to the processing
              of personal data and on the free movement of such data",
              Official Journal L 281, 23/11/1995, P. 0031 - 0050,
              November 1995.

   [EFF-Privacy]
              Blumberg, A. and P. Eckersley, "On Locational Privacy,
              and How to Avoid Losing it Forever", August 2009.
[Granada]  International Working Group on Data Protection in
           Telecommunications, "The Granada Charter of Privacy in a
           Digital World, Granada (Spain)", April 2010.

[I-D.ietf-ecrit-framework]
           Rosen, B., Schulzrinne, H., Polk, J., and A. Newton,
           "Framework for Emergency Calling using Internet
           Multimedia", draft-ietf-ecrit-framework-12 (work in
           progress), October 2010.

[I-D.ietf-geopriv-arch]
           Barnes, R., Lepinski, M., Cooper, A., Morris, J.,
           Tschofenig, H., and H. Schulzrinne, "An Architecture for
           Location and Location Privacy in Internet Applications",
           draft-ietf-geopriv-arch-03 (work in progress),
           October 2010.

[I-D.ietf-geopriv-policy]
           Schulzrinne, H., Tschofenig, H., Morris, J., Cuellar, J.,
           and J. Polk, "Geolocation Policy: A Document Format for
           Expressing Privacy Preferences for Location Information",
           draft-ietf-geopriv-policy-22 (work in progress),
           October 2010.

[I-D.morris-policy-cons]
           Morris, J., Aboba, B., Peterson, J., and H. Tschofenig,
           "Public Policy Considerations for Internet Protocols",
           draft-morris-policy-cons-00 (work in progress),
           October 2010.

[Madrid]   Data Protection Authorities and Privacy Regulators, "The
           Madrid Resolution: International Standards on the
           Protection of Personal Data and Privacy", 31st
           International Conference of Data Protection and Privacy
           Commissioners, November 2009.

[RFC2778]  Day, M., Rosenberg, J., and H. Sugano, "A Model for
           Presence and Instant Messaging", RFC 2778, February 2000.

[RFC2779]  Day, M., Aggarwal, S., Mohr, G., and J. Vincent, "Instant
           Messaging / Presence Protocol Requirements", RFC 2779,
           February 2000.

[RFC2804]  IAB and IESG, "IETF Policy on Wiretapping", RFC 2804,
           May 2000.

[RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
           A., Peterson, J., Sparks, R., Handley, M., and E.
           Schooler, "SIP: Session Initiation Protocol", RFC 3261,
           June 2002.

[RFC3265]  Roach, A., "Session Initiation Protocol (SIP)-Specific
           Event Notification", RFC 3265, June 2002.

[RFC3856]  Rosenberg, J., "A Presence Event Package for the Session
           Initiation Protocol (SIP)", RFC 3856, August 2004.

[RFC3859]  Peterson, J., "Common Profile for Presence (CPP)",
           RFC 3859, August 2004.

[RFC3903]  Niemi, A., "Session Initiation Protocol (SIP) Extension
           for Event State Publication", RFC 3903, October 2004.

[RFC3922]  Saint-Andre, P., "Mapping the Extensible Messaging and
           Presence Protocol (XMPP) to Common Presence and Instant
           Messaging (CPIM)", RFC 3922, October 2004.

[RFC4017]  Stanley, D., Walker, J., and B. Aboba, "Extensible
           Authentication Protocol (EAP) Method Requirements for
           Wireless LANs", RFC 4017, March 2005.

[RFC4079]  Peterson, J., "A Presence Architecture for the
           Distribution of GEOPRIV Location Objects", RFC 4079,
           July 2005.

[RFC4101]  Rescorla, E. and IAB, "Writing Protocol Models", RFC 4101,
           June 2005.

[RFC4282]  Aboba, B., Beadles, M., Arkko, J., and P. Eronen, "The
           Network Access Identifier", RFC 4282, December 2005.

[RFC4745]  Schulzrinne, H., Tschofenig, H., Morris, J., Cuellar, J.,
           Polk, J., and J. Rosenberg, "Common Policy: A Document
           Format for Expressing Privacy Preferences", RFC 4745,
           February 2007.

[RFC4858]  Levkowetz, H., Meyer, D., Eggert, L., and A. Mankin,
           "Document Shepherding from Working Group Last Call to
           Publication", RFC 4858, May 2007.

[RFC4962]  Housley, R. and B. Aboba, "Guidance for Authentication,
           Authorization, and Accounting (AAA) Key Management",
           BCP 132, RFC 4962, July 2007.

[RFC5025]  Rosenberg, J., "Presence Authorization Rules", RFC 5025,
           December 2007.

[RFC5598]  Crocker, D., "Internet Mail Architecture", RFC 5598,
           July 2009.

[SP800-122]
           McCallister, E., Grance, T., and K. Scarfone, "Guide to
           Protecting the Confidentiality of Personally Identifiable
           Information (PII)", NIST Special Publication 800-122,
           April 2010.

[SP800-130]
           Barker, E., Branstad, D., Chokhani, S., and M. Smid,
           "DRAFT: A Framework for Designing Cryptographic Key
           Management Systems", NIST Special Publication 800-130,
           June 2010.

[Tussle]   Clark, D., Wroclawski, J., Sollins, K., and R. Braden,
           "Tussle in Cyberspace: Defining Tomorrow's Internet", In
           Proc. ACM SIGCOMM,
           http://www.acm.org/sigcomm/sigcomm2002/papers/tussle.html,
           2002.

[Warren]   Warren, S. and L. Brandeis, "The Right to Privacy",
           Harvard Law Review, vol. 4, 1890.

[Westin]   Westin, A., "Privacy and Freedom", Atheneum, New York,
           1967.

[browser-fingerprinting]
           Eckersley, P., "How Unique Is Your Browser?", Privacy
           Enhancing Technologies Symposium (PETS 2010), Springer
           Lecture Notes in Computer Science, 2010.

[limits]   Cate, F., "The Limits of Notice and Choice", IEEE Security
           and Privacy, pp. 59-62, November 2005.

Appendix A.  Historical Background

The "right to be let alone" is a phrase coined by Warren and
Brandeis in their seminal Harvard Law Review article on privacy
[Warren].  They were the first scholars to recognize that a right to
privacy had evolved in the 19th century to embrace not only physical
privacy but also a potential "injury of the feelings", which could,
for example, result from the public disclosure of embarrassing
private facts.

In 1967, Westin [Westin] described privacy as a "personal adjustment
process" in which individuals balance "the desire for privacy with
the desire for disclosure and communication" in the context of
social norms and their environment.
Privacy thus requires that an individual has a means to exercise
selective control over access to the self and is aware of the
potential consequences of exercising that control [Altman].

Efforts to define and analyze the privacy concept evolved
considerably in the 20th century.  In 1975, Altman conceptualized
privacy as a "boundary regulation process whereby people optimize
their accessibility along a spectrum of 'openness' and 'closedness'
depending on context" [Altman].  "Privacy is the claim of
individuals, groups, or institutions to determine for themselves
when, how, and to what extent information about them is communicated
to others.  Viewed in terms of the relation of the individual to
social participation, privacy is the voluntary and temporary
withdrawal of a person from the general society through physical or
psychological means, either in a state of solitude or small-group
intimacy or, when among larger groups, in a condition of anonymity
or reserve." [Westin]

Note: Altman and Westin were referring to non-electronic
environments, where a privacy intrusion was typically based on fresh
information, referred to one particular person only, and stemmed
from traceable human sources.  The scope of possible privacy
breaches was therefore rather limited.  Today, details about an
individual's activities are typically stored over a longer period of
time and collected from many different sources, and information
about almost every activity in life is available electronically.

In 1980, the Organization for Economic Co-operation and Development
(OECD) published eight Guidelines on the Protection of Privacy and
Transborder Flows of Personal Data [OECD], which are often referred
to as Fair Information Practices (FIPs).
Fair information practices include the following principles:

Notice and Consent:  Before the collection of data, the data subject
   should be provided with notice of what information is being
   collected and for what purpose, and with an opportunity to choose
   whether to accept the data collection and use.  In Europe, data
   collection cannot proceed unless the data subject has
   unambiguously given his consent (with exceptions).

Collection Limitation:  Data should be collected for specified,
   explicit, and legitimate purposes.  The data collected should be
   adequate, relevant, and not excessive in relation to the purposes
   for which they are collected.

Use/Disclosure Limitation:  Data should be used only for the purpose
   for which it was collected and should not be used or disclosed in
   any way incompatible with those purposes.

Retention Limitation:  Data should be kept in a form that permits
   identification of the data subject no longer than is necessary
   for the purposes for which the data were collected.

Accuracy:  The party collecting and storing data is obligated to
   ensure its accuracy and, where necessary, keep it up to date;
   every reasonable step must be taken to ensure that data which are
   inaccurate or incomplete are corrected or deleted.

Access:  A data subject should have access to data about himself, in
   order to verify its accuracy and to determine how it is being
   used.

Security:  Those holding data about others must take steps to
   protect its confidentiality.

The OECD guidelines, as well as more recent publications like the
Madrid Resolution [Madrid] or the Granada Charter of Privacy in a
Digital World [Granada], provide a useful understanding of how to
offer privacy protection, but these guidelines quite naturally stay
at a higher level.  They are idealistic principles.
As such, they do not aim to evaluate the tradeoffs of addressing
privacy protection in the different stages of the development
process, as illustrated in Figure 1.

US regulatory and self-regulatory efforts supported by the Federal
Trade Commission (FTC) have focused on a subset of these principles,
namely notice, choice, access, and security, rather than on
minimizing data collection or limiting use.  Hence, they are
sometimes labeled the "notice and choice" approach to privacy.  From
a practical point of view it became evident that companies are
reluctant to stop collecting and using data, while individuals
expect to remain in control of its usage.  Today, the effectiveness
of dealing with privacy violations using the "notice and choice"
approach is heavily criticized [limits].

Among these considerations (although often implicit) are assumptions
about how information is exchanged between different parties, and
for certain protocols this information may help to identify
entities, and potentially the humans behind them.  Without doubt,
the information exchanged is not always equally sensitive.  The
terms "personal data" [DPD95] and Personally Identifiable
Information (PII) [SP800-122] have become common language in the
vocabulary of privacy experts.  It is therefore understandable that
regulators around the globe have focused on the type of data being
exchanged and have provided laws according to its level of
sensitivity.  Medical data is treated differently in many
jurisdictions than blog comments.  For an initial investigation it
is intuitive and helpful to determine whether a specific protocol or
application may be privacy sensitive.  The ever-increasing ability
of parties on the Internet to collect, aggregate, and reason about
information gathered from a wide range of sources requires further
thinking about other potentially privacy-sensitive items.
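The aggregation concern described above can be illustrated with a
toy sketch (illustrative only; the attribute names and values are
invented for this example): several attributes that are each shared
by many hosts can, in combination, single out one host among
millions.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Combine individually non-identifying attributes into a single
    stable identifier, in the spirit of browser fingerprinting."""
    # Canonicalize so the same attributes always hash the same way.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Each attribute alone is common to many hosts ...
host = {
    "user_agent": "ExampleBrowser/9.0",
    "screen": "1920x1080",
    "timezone": "UTC+2",
    "fonts": "Arial,Courier,Times",
}

# ... but the combination is often close to unique and stable across
# visits, so it can be used to track a host without any cookie or
# explicit identifier being exchanged.
print(fingerprint(host))
```

The point of the sketch is that no single field in the dictionary is
"personal data" on its own; the identifier only emerges from the
combination, which is why a purely type-based view of sensitive data
is insufficient.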
The recent example of browser fingerprinting
[browser-fingerprinting] shows that tracking can happen in
surprising ways.

The following list contains examples of information that may be
considered personal data:

o  Name

o  Address information

o  Phone numbers, email addresses, SIP/XMPP URIs, and other
   identifiers

o  IP and MAC addresses or other host-specific persistent
   identifiers that consistently link to a particular person or a
   small, well-defined group of people

o  Information identifying personally owned property, such as a
   vehicle registration number

Searching only for these examples as an indication of the need for
privacy protection is, however, insufficient, given that the list
above is constantly growing and depends very much on the context.
An information element may not be sensitive in one context but be
considered very sensitive in another.  Aggregation possibilities
have also caused the list of personal data to grow.

Authors' Addresses

Bernard Aboba
Microsoft Corporation
One Microsoft Way
Redmond, WA 98052
US

Email: bernarda@microsoft.com

John B. Morris, Jr.
Center for Democracy and Technology
1634 I Street NW, Suite 1100
Washington, DC 20006
USA

Email: jmorris@cdt.org
URI: http://www.cdt.org

Jon Peterson
NeuStar, Inc.
1800 Sutter St Suite 570
Concord, CA 94520
US

Email: jon.peterson@neustar.biz

Hannes Tschofenig
Nokia Siemens Networks
Linnoitustie 6
Espoo 02600
Finland

Phone: +358 (50) 4871445
Email: Hannes.Tschofenig@gmx.net
URI: http://www.tschofenig.priv.at