Network Working Group                                           K. Moore
Internet-Draft                                          Network Heretics
Intended status: Best Current Practice                         R. Barnes
Expires: January 4, 2018                                         Mozilla
                                                                      H.
Tschofenig 7 ARM Limited 8 July 3, 2017 10 Best Current Practices for Securing Internet of Things (IoT) Devices 11 draft-moore-iot-security-bcp-01 13 Abstract 15 In recent years, embedded computing devices have increasingly been 16 provided with Internet interfaces, and the typically-weak network 17 security of such devices has become a challenge for the Internet 18 infrastructure. This document lists a number of minimum requirements 19 that vendors of Internet of Things (IoT) devices need to take into 20 account during development and when producing firmware updates, in 21 order to reduce the frequency and severity of security incidents in 22 which such devices are implicated. 24 Status of This Memo 26 This Internet-Draft is submitted in full conformance with the 27 provisions of BCP 78 and BCP 79. 29 Internet-Drafts are working documents of the Internet Engineering 30 Task Force (IETF). Note that other groups may also distribute 31 working documents as Internet-Drafts. The list of current Internet- 32 Drafts is at http://datatracker.ietf.org/drafts/current/. 34 Internet-Drafts are draft documents valid for a maximum of six months 35 and may be updated, replaced, or obsoleted by other documents at any 36 time. It is inappropriate to use Internet-Drafts as reference 37 material or to cite them other than as "work in progress." 39 This Internet-Draft will expire on January 4, 2018. 41 Copyright Notice 43 Copyright (c) 2017 IETF Trust and the persons identified as the 44 document authors. All rights reserved. 46 This document is subject to BCP 78 and the IETF Trust's Legal 47 Provisions Relating to IETF Documents 48 (http://trustee.ietf.org/license-info) in effect on the date of 49 publication of this document. Please review these documents 50 carefully, as they describe your rights and restrictions with respect 51 to this document. 
Code Components extracted from this document must 52 include Simplified BSD License text as described in Section 4.e of 53 the Trust Legal Provisions and are provided without warranty as 54 described in the Simplified BSD License. 56 Table of Contents 58 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 59 1.1. Terminology . . . . . . . . . . . . . . . . . . . . . . . 4 60 1.2. Note about version -01 of this document . . . . . . . . . 5 61 2. Design Considerations . . . . . . . . . . . . . . . . . . . . 5 62 2.1. General security design considerations . . . . . . . . . 5 63 2.1.1. Threat analysis . . . . . . . . . . . . . . . . . . . 6 64 2.1.2. Use of Standard Cryptographic Algorithms . . . . . . 6 65 2.1.3. Use of Standard Security Protocols . . . . . . . . . 7 66 2.1.4. Security protocols should support algorithm agility . 7 67 2.2. Authentication requirements . . . . . . . . . . . . . . . 7 68 2.2.1. Resistance to keyspace-searching attacks . . . . . . 7 69 2.2.2. Protection of authentication credentials . . . . . . 8 70 2.2.3. Resistance to authentication DoS attacks . . . . . . 8 71 2.2.4. Unauthenticated device use disabled by default . . . 8 72 2.2.5. Per-device unique authentication credentials . . . . 8 73 2.3. Encryption Requirements . . . . . . . . . . . . . . . . . 9 74 2.3.1. Encryption should be supported . . . . . . . . . . . 9 75 2.3.2. Encryption of traffic should be the default . . . . . 9 76 2.3.3. Encryption algorithm strength . . . . . . . . . . . . 9 77 2.3.4. Man in the middle attack . . . . . . . . . . . . . . 9 78 2.4. Firmware Updates . . . . . . . . . . . . . . . . . . . . 9 79 2.4.1. Automatic update capability . . . . . . . . . . . . . 9 80 2.4.2. Enable automatic firmware update by default . . . . . 10 81 2.4.3. Backward compatibility of firmware updates . . . . . 10 82 2.4.4. Automatic updates should be phased in . . . . . . . . 10 83 2.4.5. Authentication of firmware updates . . . . . . . . . 10 84 2.5. 
Private key management . . . . . . . . . . . . . . . . . 10 85 2.6. Operating system features . . . . . . . . . . . . . . . . 11 86 2.6.1. Use of memory compartmentalization . . . . . . . . . 11 87 2.6.2. Privilege minimization . . . . . . . . . . . . . . . 11 88 2.7. Miscellaneous . . . . . . . . . . . . . . . . . . . . . . 11 89 3. Implementation Considerations . . . . . . . . . . . . . . . . 11 90 3.1. Randomness . . . . . . . . . . . . . . . . . . . . . . . 11 91 4. Firmware Development Practices . . . . . . . . . . . . . . . 12 92 5. Documentation and Support Practices . . . . . . . . . . . . . 12 93 5.1. Support Commitment . . . . . . . . . . . . . . . . . . . 12 94 5.2. Bug Reporting . . . . . . . . . . . . . . . . . . . . . . 13 95 5.3. Labeling . . . . . . . . . . . . . . . . . . . . . . . . 13 96 5.4. Documentation . . . . . . . . . . . . . . . . . . . . . . 13 98 6. Security Considerations . . . . . . . . . . . . . . . . . . . 13 99 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13 100 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 14 101 9. References . . . . . . . . . . . . . . . . . . . . . . . . . 14 102 9.1. Normative References . . . . . . . . . . . . . . . . . . 14 103 9.2. Informative References . . . . . . . . . . . . . . . . . 14 104 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 15 106 1. Introduction 108 The weak security of Internet of Things devices has resulted in many 109 well-publicized security incidents over the last few years. 110 Unfortunately, it appears that very few lessons have been learned 111 from those incidents. The rate at which IoT devices are compromised 112 via network-based attacks appears to be increasing. The effect of 113 such security breaches goes far beyond the immediate effect on the 114 compromised devices and their users. A compromised device may, for 115 example, expose to an attacker secrets (such as passwords) stored in 116 the device. 
A compromised device may also be used to attack other computers on the same local network as the device, or elsewhere on the Internet. Attackers have constructed networks of compromised devices (botnets) which have then been used to attack other network hosts and services, for example via distributed password-guessing attacks and distributed denial-of-service (DDoS) attacks [SNMP-DDOS][DDOS-KREBS]. This document recommends a small number of minimum security requirements to reduce some of the more easily prevented security problems.

The scope of these recommendations is as follows:

- The measures described in this document are intended to impede network-based attacks. They are not intended to impede other kinds of attacks, e.g. those requiring physical access to the device, though following these requirements may help reduce the effectiveness of some such attacks. This document does not address physical attacks because thwarting such attacks is generally outside the IETF's expertise, and because it is understood that the physical security requirements of Internet-connected devices vary widely from one application to another. However, because a device compromised by physical means may be used to attack other devices, or to obtain information that is useful in attacking other devices, it is strongly recommended that vendors of Internet-connected devices carefully consider physical security requirements when designing their products.

- In principle these requirements apply to all hosts that connect to the Internet, but this list of requirements is specifically targeted at devices that are constrained in their capabilities, rather than at general-purpose programmable hosts (PCs, servers, laptops, tablets, etc.), routers, or middleboxes. While this is a fuzzy boundary, it reflects the current understanding of IoT.
A 149 more detailed treatment of some of the constraints of IoT devices 150 can be found in [RFC7228]. 152 - These are MINIMUM requirements that apply to all devices. They 153 are unlikely to be sufficient by themselves, to ensure security of 154 hosts from attack. Because IoT devices are used in a large number 155 of different domains with different needs, each device will have 156 its own unique security considerations. It is not feasible to 157 completely list all security requirements in a document such as 158 this. Vendors should conduct threat assessments of each device 159 they produce, to determine which additional security 160 considerations are applicable for use in a given application 161 domain. 163 - It is expected that this list of requirements will be revised from 164 time to time, as new threats are identified, and/or new security 165 techniques become feasible. 167 - This document makes broad recommendations, but avoids recommending 168 specific technological solutions to security issues. This is 169 because there is a wide variety of IoT devices with a wide variety 170 of use cases and threat scenarios, so there are few one-size-fits- 171 all technological solutions. A companion document may be produced 172 with suggestions for design choices and implementations that may 173 aid in meeting these requirements. 175 We expect that many of the requirements can easily be met by most 176 vendors, but may require additional documentation and transparency of 177 a vendor's development practices to improve credibility of their 178 security practices in the marketplace. 180 1.1. Terminology 182 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 183 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 184 "OPTIONAL" in this document are to be interpreted as described in RFC 185 2119 [RFC2119]. These key words describe normative requirements of 186 this specification. 
This specification also contains non-normative 187 recommendations that do not use these key words. 189 This document uses the term "firmware" to refer to the executable 190 code and associated data that, in combination with device hardware, 191 implements the functionality of an Internet- connected device. 192 Traditionally the term "firmware" refers to code and data stored in 193 non-volatile memory as distinguished from "software" which presumably 194 refers to code stored in read/write or erasable memory, or code that 195 can be loaded from other devices. For the purpose of this document, 196 "firmware" applies to any kind of code or data that implements the 197 functions that the device provides. Both software and firmware 198 present similar issues regarding device security, and it is easier to 199 use "firmware" consistently than to write "software and firmware". 201 1.2. Note about version -01 of this document 203 The goal for the initial versions of this document is to invite 204 discussion about what minimum security standards for Internet- 205 connected devices are appropriate. Consequently, this draft suggests 206 a wide range of potential measures. The authors, however, understand 207 that imposing too many barriers to adoption might discourage device 208 manufacturers from attempting to comply with this standard. We seek 209 to find the right balance that helps improve the security of the 210 Internet. We understand that some of the requirements in this draft 211 may need to be removed or relaxed, at least in an initial version of 212 a BCP document, and that other requirements may require additional 213 refinement and justification. 215 2. Design Considerations 217 This section lists requirements and considerations that should affect 218 the design of an Internet-connected device. 
Broadly speaking, such considerations include device architecture, hardware and firmware component choices, partitioning of function, and the design and/or choice of protocols used to communicate with the device.

2.1. General security design considerations

In general, an Internet-connected device should:

- Protect itself from attacks that impair its function or allow it to be used for unintended purposes without authorization;

- Protect its private authentication credentials and key material from compromise and disclosure to unauthorized parties;

- Protect information received by, transmitted from, or stored on the device from inappropriate disclosure to unauthorized parties;

- Protect itself from being used as a vector to attack other devices or hosts on the Internet; and

- When appropriate, protect itself from communications with unauthorized or unauthenticated parties or devices.

Each device is responsible for its own security and for ensuring that it is not used as a vector for attack on other Internet hosts. The design of a device MUST NOT assume that a firewall or other perimeter security measure will protect the device from attack. While useful as part of a layered defense strategy, perimeter security has consistently been demonstrated to be insufficient to thwart attacks by itself. There are nearly always mechanisms by which one or more hosts on the local network can be compromised and then used to attack other hosts. Perimeter security mechanisms also cannot distinguish hostile traffic from safe traffic with 100% reliability. Even devices on "air gapped" networks have been compromised via portable storage devices or software updates.

For some kinds of attack, there is a limited amount that a device can do to prevent the attack.
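For overload in particular, one common partial defense can be sketched in a few lines. The following Python sketch (hypothetical class and parameter names, not part of this specification) uses a token bucket so that a flood of requests degrades service gradually rather than exhausting the device:

```python
import time

class TokenBucket:
    """Admit at most `rate` requests per second, with bursts up to `burst`.

    A sketch only: a real device would integrate this with its network
    stack and tune the numbers to the hardware's measured capacity.
    """

    def __init__(self, rate: float, burst: float, clock=time.monotonic):
        self.rate = rate      # tokens added per second
        self.burst = burst    # maximum bucket size
        self.tokens = burst
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False          # excess traffic is dropped, not queued
```

Such a limiter sheds excess requests early and cheaply, which is one form of graceful tolerance; it cannot help, of course, once the network link itself is saturated.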
For instance, any device can fall victim to certain kinds of denial-of-service attack caused by receiving more traffic in a given amount of time than the device can process. A device SHOULD be designed to gracefully tolerate some amount of excessive traffic without failing entirely, but beyond some point a device will receive so much traffic that it cannot distinguish valid requests from invalid ones.

2.1.1. Threat analysis

The design for a device MUST enumerate the specific security threats considered in its design, and the specific measures taken (if any) to remedy or limit the effect of each threat. This requirement encourages making deliberate, explicit choices about security measures at design time rather than leaving security as an afterthought. Such a threat analysis is also useful later in the life cycle of a device if it becomes necessary to improve its security; for instance, it can help identify whether the original design choices fulfilled their intended function, or whether a newly discovered threat was not anticipated in the original design.

2.1.2. Use of Standard Cryptographic Algorithms

Standard or well-established, mature algorithms for cryptographic functions (such as symmetric encryption, public-key encryption, digital signatures, and cryptographic hash / message integrity checks) MUST be used.

Explanation: A tremendous amount of subtlety must be understood in order to construct cryptographic algorithms that are resistant to attack. Very few people in the world have the knowledge required to construct or analyze robust new cryptographic algorithms, and even knowledgeable people have constructed algorithms that were found to be flawed within a short time.

2.1.3.
Use of Standard Security Protocols

Standard protocols for authentication, encryption, and other means of assuring security SHOULD be used whenever apparently robust, applicable protocols exist.

Explanation: The amount of expertise required to design robust security protocols is comparable to that required to design robust cryptographic algorithms. However, there are sometimes use cases for which no existing standard protocol is suitable. In these cases it may be necessary to adapt an existing protocol to a new use case, or even to design a new security protocol.

2.1.4. Security protocols should support algorithm agility

The security protocols chosen for a device design, and the implementations of those protocols, SHOULD support the ability to choose between multiple cryptographic algorithms and/or to negotiate minimum key sizes.

Explanation: This way, if a flaw that weakens security is discovered in one algorithm, updated devices, or the application peers with which they communicate, may refuse to use that algorithm, or may permit its use only with a longer key than originally required. This allows devices and protocol implementations to continue providing adequate security even after weaknesses in algorithms are discovered.

The concept of crypto agility is further described in [RFC7696].

2.2. Authentication requirements

The vast majority of Internet-connected devices will require authentication for some purposes, whether to protect the device from unauthorized use or reconfiguration, or to protect information stored within the device from disclosure or modification. This section details authentication requirements for devices that require authentication.

2.2.1.
Resistance to keyspace-searching attacks 334 A device that requires authentication MUST be designed to make brute- 335 force authentication attacks, dictionary attacks, or other attacks 336 that involve exhaustive searching of the device's key or password 337 space, infeasible. 339 2.2.2. Protection of authentication credentials 341 A device MUST be designed to protect any secrets used to authenticate 342 to the device (such as passwords or private keys) from disclosure via 343 monitoring of network traffic to or from the device. For example, if 344 a password is used to authenticate a client to the device, that 345 password must not appear "in the clear", or in any form via which 346 extraction of the password from network traffic is computationally 347 feasible. 349 2.2.3. Resistance to authentication DoS attacks 351 A device SHOULD be designed to gracefully tolerate excessive numbers 352 of authentication attempts, for instance by giving CPU priority to 353 existing protocol sessions that have already successfully 354 authenticated, limiting the number of concurrent new sessions in the 355 process of authenticating, and randomly discarding attempts to 356 establish new sessions beyond that limit. The specific mechanism is 357 a design choice to be made in light of the specific function of the 358 device and the protocols used by the device. What's important for 359 this requirement is that this be an explicit choice. 361 2.2.4. Unauthenticated device use disabled by default 363 A device that supports authentication SHOULD NOT be shipped in a 364 condition that allows an unauthenticated client to use any function 365 of the device that requires authentication, or to change that 366 device's authentication credentials. 368 Explanation: Most devices that can be used in an unauthenticated 369 state will never be configured to require authentication. These 370 devices are attractive targets for attack and compromise, especially 371 by botnets. 
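One way to avoid shipping devices in such a state can be sketched briefly. The following Python sketch (hypothetical API; a real device would keep hashed credentials in secure storage rather than in memory) generates a per-device factory credential and refuses normal operation until the owner has replaced it:

```python
import secrets

class DeviceAuth:
    """Sketch: refuse service until the factory credential is replaced."""

    def __init__(self):
        # Per-device unique factory credential, generated at provisioning
        # time -- never a constant shared across the product line.
        self.factory_credential = secrets.token_urlsafe(16)
        self.credential = self.factory_credential
        self.credential_changed = False

    def set_credential(self, old: str, new: str) -> bool:
        # Constant-time comparison avoids leaking credential prefixes.
        if not secrets.compare_digest(old, self.credential):
            return False
        self.credential = new
        self.credential_changed = True
        return True

    def authorize(self, supplied: str) -> bool:
        # Normal operation is refused until the factory credential is gone.
        if not self.credential_changed:
            return False
        return secrets.compare_digest(supplied, self.credential)
```

The factory credential here is unique per device, so even before the owner changes it there is no product-wide default for a botnet to exploit.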
This is very similar to the problems caused by shipping 372 devices with default passwords. 374 2.2.5. Per-device unique authentication credentials 376 Many devices that require authentication will be shipped with default 377 authentication credentials, so that the customer can authenticate to 378 the device using those credentials until they are changed. Each 379 device that requires authentication SHOULD be instantiated either 380 prior to shipping, or on initial configuration by the user, with 381 credentials unique to that device. If a device is not instantiated 382 with device-unique credentials, that device MUST NOT permit normal 383 operation until those credentials have been changed to something 384 other than the default credentials. 386 Explanation: devices that were shipped with default passwords have 387 been implicated in several serious denial-of-service attacks on 388 widely-used Internet services. 390 2.3. Encryption Requirements 392 2.3.1. Encryption should be supported 394 Internet-connected devices SHOULD support the capability to encrypt 395 traffic sent to or from the device. Any information transmitted over 396 a network is potentially sensitive to some customers. For example, 397 even a home temperature monitoring sensor may reveal information 398 about when occupants are away from home, when they wake up and when 399 they go to bed, when and how often they cook meals - all of which are 400 useful to, say, a thief. 402 Note: This requirement is separate from the requirement to protect 403 authentication secrets from disclosure. Authentication secrets MUST 404 be protected from disclosure even if a general encryption capability 405 is not supported, or if the capability is optional and a particular 406 client or user doesn't use it. 408 2.3.2. 
Encryption of traffic should be the default

If a device supports encryption and use of encryption is optional, the device SHOULD be configurable to require encryption, and this SHOULD be the default.

2.3.3. Encryption algorithm strength

Encryption algorithms and minimum key lengths SHOULD be chosen so as to make brute-force attacks infeasible.

2.3.4. Man in the middle attack

Encryption protocols SHOULD be resistant to man-in-the-middle attacks.

2.4. Firmware Updates

2.4.1. Automatic update capability

Vendors MUST offer an automatic firmware update mechanism. A discussion of firmware update mechanisms can be found in [I-D.iab-iotsu-workshop].

Devices SHOULD be configured to check for the existence of firmware updates at frequent but irregular intervals, so that a large population of devices does not query the update service simultaneously.

2.4.2. Enable automatic firmware update by default

Automatic firmware updates SHOULD be enabled by default. A device MAY offer an option to disable automatic firmware updates.

Especially for any device for which a firmware update would disrupt operation, the device SHOULD be configurable to allow the operator to control the timing of firmware updates.

If enabling, disabling, or changing the timing of automatic updates can be controlled by a network protocol, the device MUST require authentication of any request to control those features.

2.4.3. Backward compatibility of firmware updates

Automatic firmware updates SHOULD NOT change network protocol interfaces in any way that is incompatible with previous versions. A vendor MAY offer firmware updates which add new features, as long as those updates are not automatically initiated.

2.4.4.
Automatic updates should be phased in

To prevent widespread simultaneous failure of all instances of a particular kind of device due to a bug in a new firmware release, automatic firmware updates SHOULD be phased in over a short time interval rather than updating all devices at once.

2.4.5. Authentication of firmware updates

Firmware updates MUST be authenticated, and their integrity verified, before an update is installed. Unauthenticated updates, or updates for which authentication or integrity checking fails, MUST be rejected.

Firmware updates SHOULD be authenticated using digital signatures based on public-key cryptography, so that the authenticity of the signer can be verified. Ordinary checksums or hash algorithms are insufficient by themselves, and keyed hashes that use shared secrets are generally discoverable by a determined attacker.

2.5. Private key management

If public-key cryptography is used by the device to authenticate itself to other devices or parties, each device MUST be instantiated with its own unique private key or keys. In many cases it will be necessary for the vendor to sign such keys, or arrange for them to be signed by a trusted party, prior to shipping the device.

Per-device private keys SHOULD be generated on the device and never exposed outside the device.

2.6. Operating system features

2.6.1. Use of memory compartmentalization

Device firmware SHOULD be designed to use hardware and operating systems that implement memory compartmentalization techniques, in order to prevent read, write, and/or execute access to areas of memory by processes not authorized to use those areas for those purposes.

Vendors that do not make use of such features MUST document their design rationale.

Explanation: Such mechanisms, when properly used, reduce the impact of a firmware bug, such as a buffer overflow vulnerability.
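The effect of such memory protection can be illustrated even from a high-level language. The following Python sketch (illustrative only; on a real constrained device this enforcement would come from an MPU or MMU configured by the operating system) maps one region read-write and one read-only, and shows that writes to the protected region are refused:

```python
import mmap

# Read-write region: analogous to a task's own working data.
rw = mmap.mmap(-1, 4096)          # anonymous mapping, readable and writable
rw[0:5] = b"state"

# Read-only region: analogous to code or constants an MPU would protect.
ro = mmap.mmap(-1, 4096, access=mmap.ACCESS_READ)

def try_write(region) -> bool:
    """Return True if a write succeeds, False if protection refuses it."""
    try:
        region[0:1] = b"X"
        return True
    except TypeError:   # CPython rejects writes to read-only maps
        return False
```

A buggy or compromised routine can scribble over the read-write region but not the read-only one; on real hardware the refused write would instead raise a fault that the operating system can contain.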
Operating systems, or even firmware running on "bare metal", that do not provide such separation allow an attacker to gain access to the complete address space. While these concepts have long been available in hardware, they are often not utilized by real-time operating systems.

2.6.2. Privilege minimization

Device firmware SHOULD be designed to isolate privileged code and data from portions of the firmware that do not need to access them, in order to minimize the potential for compromised code to access that code and/or data.

2.7. Miscellaneous

3. Implementation Considerations

This section lists implementation requirements that broadly affect the security of a device.

3.1. Randomness

Vendors MUST include a means of generating cryptographic-quality random numbers in their products. Randomness is an important component of security protocols; without it, many of today's security protocols offer weak or no protection.

Hardware random-number generators SHOULD be used when feasible, and MAY be combined with other sources of randomness.

A discussion of randomness requirements can be found in [RFC4086].

4. Firmware Development Practices

This section outlines requirements for the development of firmware employed on Internet-connected devices.

Vendors SHOULD use modern firmware development practices, including:

- Source code change control systems, which record all changes made to source code along with the identity of the person who committed each change. Such systems help to identify which versions of code contain a particular bug, and also protect against insertion of malicious code.

- Bug tracking systems.

- Automated testing of a set of pre-defined test conditions, including tests for all security vulnerabilities identified to date via either analysis or experience.
- Periodic checking of bug databases for reported security issues associated with the product itself, and with all components (for example: kernel, libraries, and protocol servers) used in the product.

- Whenever feasible, checking externally-provided source code and object code for authenticity.

- Periodic checking of externally-provided source code and object code for known security bugs, or for updates intended to thwart security bugs.

All known security bugs for which fixes or workarounds exist MUST be addressed prior to shipping a new product or a code update.

5. Documentation and Support Practices

5.1. Support Commitment

Before selling products to their customers, vendors MUST be transparent about their commitment to supply devices with updates, and about what happens to those devices after the support period ends.

Within the support period, vendors SHOULD provide firmware updates whenever new security risks associated with their products are identified. Such firmware updates SHOULD NOT change the protocol interfaces to those products, except as necessary to address security issues, so that they can be deployed without disruption to customers' networks. Firmware updates MAY introduce new features which change protocol interfaces if those features are optional and disabled by default.

5.2. Bug Reporting

Vendors MUST provide an easy-to-find, free-of-charge means of reporting security bugs.

5.3. Labeling

Vendors MUST have the manufacturer name, model number, and hardware revision number legibly printed on the device. This helps customers report bugs accurately.

There SHOULD be a documented means of querying a device for its model number, hardware revision number, and firmware revision number via its network interface and/or via any manual input and display. This interface MAY require authentication.

5.4.
Documentation

Vendors MUST offer documentation about their products so that security experts are able to assess the design choices. While such documentation will not directly help end customers, who will almost always lack the expertise to judge these design decisions, it helps security experts assess liability, and it allows independent third parties to compare products without spending a disproportionate amount of time.

This form of public documentation will improve transparency, similar to documentation requirements found in other industries. It will also help to evolve the best practices described in this document.

6. Security Considerations

This entire document is about security.

7. IANA Considerations

This document does not contain any requests to IANA.

8. Acknowledgements

Add acknowledgments here.

9. References

9.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/info/rfc2119>.

[RFC4086] Eastlake 3rd, D., Schiller, J., and S. Crocker, "Randomness Requirements for Security", BCP 106, RFC 4086, DOI 10.17487/RFC4086, June 2005, <https://www.rfc-editor.org/info/rfc4086>.

9.2. Informative References

[DDOS-KREBS] Goodin, D., "Record-breaking DDoS reportedly delivered by >145k hacked cameras", September 2016.

[I-D.iab-iotsu-workshop] Tschofenig, H. and S. Farrell, "Report from the Internet of Things (IoT) Software Update (IoTSU) Workshop 2016", draft-iab-iotsu-workshop-00 (work in progress), October 2016.

[RFC7228] Bormann, C., Ersue, M., and A. Keranen, "Terminology for Constrained-Node Networks", RFC 7228, DOI 10.17487/RFC7228, May 2014, <https://www.rfc-editor.org/info/rfc7228>.

[RFC7696] Housley, R., "Guidelines for Cryptographic Algorithm Agility and Selecting Mandatory-to-Implement Algorithms", BCP 201, RFC 7696, DOI 10.17487/RFC7696, November 2015, <https://www.rfc-editor.org/info/rfc7696>.
[SNMP-DDOS] BITAG, "SNMP Reflected Amplification DDoS Attack Mitigation", August 2012.

Authors' Addresses

Keith Moore
Network Heretics
PO Box 1934
Knoxville, TN 37901
United States

EMail: moore@network-heretics.com

Richard Barnes
Mozilla

EMail: rbarnes@mozilla.com

Hannes Tschofenig
ARM Limited

EMail: hannes.tschofenig@gmx.net