INTERNET-DRAFT                                              Steven Blake
Diffserv Working Group                  Torrent Networking Technologies
Expires: February 1999                                       David Black
                                                          The Open Group
                                                            Mark Carlson
                                                        Sun Microsystems
                                                            Elwyn Davies
                                                               Nortel UK
                                                              Zheng Wang
                                               Bell Labs Lucent Technologies
                                                            Walter Weiss
                                                     Lucent Technologies

                                                             August 1998

              An Architecture for Differentiated Services

Status of This Memo

   This document is an Internet-Draft.  Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet-Drafts.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   To view the entire list of current Internet-Drafts, please check the
   "1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
   Directories on ftp.is.co.za (Africa), ftp.nordu.net (Northern
   Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au (Pacific
   Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu (US West Coast).

Copyright Notice

   Copyright (C) The Internet Society (1998).  All Rights Reserved.

Abstract

   This document defines an architecture for implementing scalable
   service differentiation in the Internet.  This architecture achieves
   scalability by aggregating traffic classification state which is
   conveyed by means of IP-layer packet marking using the DS field
   [DSFIELD].  Packets are classified and marked to receive a
   particular per-hop forwarding behavior on nodes along their path.
   Sophisticated classification, marking, policing, and shaping
   operations need only be implemented at network boundaries or hosts.
   Network resources are allocated to traffic streams by service
   provisioning policies which govern how traffic is marked and
   conditioned upon entry to a differentiated services-capable network,
   and how that traffic is forwarded within that network.  A wide
   variety of services can be implemented on top of these building
   blocks.

Table of Contents

   1. Introduction .................................................  3
      1.1 Overview .................................................  3
      1.2 Terminology ..............................................  4
      1.3 Requirements .............................................  8
      1.4 Comparisons with Other Approaches ........................  9
   2. Differentiated Services Architectural Model .................. 11
      2.1 Differentiated Services Domain ........................... 11
          2.1.1 DS Boundary Nodes and Interior Nodes ............... 12
          2.1.2 DS Ingress Node and Egress Node .................... 12
      2.2 Differentiated Services Region ........................... 13
      2.3 Traffic Classification and Conditioning .................. 13
          2.3.1 Classifiers ........................................ 13
          2.3.2 Traffic Profiles ................................... 14
          2.3.3 Traffic Conditioners ............................... 15
                2.3.3.1 Meters ..................................... 15
                2.3.3.2 Markers .................................... 15
                2.3.3.3 Shapers .................................... 16
                2.3.3.4 Droppers ................................... 16
          2.3.4 Location of Traffic Conditioners and MF Classifiers 16
                2.3.4.1 Within the Source Domain ................... 16
                2.3.4.2 At the Boundary of a DS Domain ............. 17
                2.3.4.3 In non-DS-Capable Domains .................. 17
                2.3.4.4 In Interior DS Nodes ....................... 17
      2.4 Per-Hop Behaviors ........................................ 18
      2.5 Network Resource Allocation .............................. 19
   3. Per-Hop Behavior Specification Guidelines .................... 20
   4. Interoperability with Non-Differentiated Services-Compliant
      Nodes ........................................................ 23
   5. Multicast Considerations ..................................... 25
   6. Security and Tunneling Considerations ........................ 26
      6.1 Theft and Denial of Service .............................. 26
      6.2 IPsec and Tunneling Interactions ......................... 28
      6.3 Auditing ................................................. 30
   7. Acknowledgements ............................................. 30
   8. References ................................................... 31
   Authors' Addresses ............................................... 33
   Full Copyright Statement ......................................... 34

1. Introduction

1.1 Overview

   This document defines an architecture for implementing scalable
   service differentiation in the Internet.  A "Service" defines some
   significant characteristics of packet transmission in one direction
   across a set of one or more paths within a network.  These
   characteristics may be specified in quantitative or statistical
   terms of throughput, delay, jitter, and/or loss, or may otherwise be
   specified in terms of some relative priority of access to network
   resources.  Service differentiation is desired to accommodate
   heterogeneous application requirements and user expectations, and to
   permit differentiated pricing of Internet service.

   This architecture is composed of a number of functional elements
   implemented in network nodes, including a small set of per-hop
   forwarding behaviors, packet classification functions, and traffic
   conditioning functions including metering, marking, shaping, and
   policing.  This architecture achieves scalability by implementing
   complex classification and conditioning functions only at network
   boundary nodes, and by applying per-hop behaviors to aggregates of
   traffic which have been appropriately marked using the DS field in
   the IPv4 or IPv6 headers [DSFIELD].  Per-hop behaviors are defined
   to permit a reasonably granular means of allocating buffer and
   bandwidth resources at each node among competing traffic streams.
   Per-application flow or per-customer forwarding state need not be
   maintained within the core of the network.
   A distinction is maintained between:

   o  the service provided to a traffic aggregate,

   o  the conditioning functions and per-hop behaviors used to realize
      services,

   o  the DS field value (DS codepoint) used to mark packets to select
      a per-hop behavior, and

   o  the particular node implementation mechanisms which realize a
      per-hop behavior.

   Service provisioning and traffic conditioning policies are
   sufficiently decoupled from the forwarding behaviors within the
   network interior to permit implementation of a wide variety of
   service behaviors, with room for future expansion.

   This architecture only provides service differentiation in one
   direction of traffic flow and is therefore asymmetric.  Development
   of a complementary symmetric architecture is a topic of current
   research but is outside the scope of this document; see for example
   [EXPLICIT].

   Sec. 1.2 is a glossary of terms used within this document.  Sec. 1.3
   lists requirements addressed by this architecture, and Sec. 1.4
   provides a brief comparison to other approaches for service
   differentiation.  Sec. 2 discusses the components of the
   architecture in detail.  Sec. 3 proposes guidelines for per-hop
   behavior specifications.  Sec. 4 discusses interoperability issues
   with nodes and networks which do not implement differentiated
   services as defined in this document and in [DSFIELD].  Sec. 5
   discusses issues with multicast service delivery.  Sec. 6 addresses
   security and tunnel considerations.

   This document should be read along with its companion documents, the
   differentiated services framework [DSFWK], the definition of the DS
   field [DSFIELD], and other documents which specify per-hop
   behaviors.  It has been heavily influenced by the thoughtful
   proposals of previous authors [Ellesson, EXPLICIT, Ferguson,
   Heinanen, SIMA, 2BIT, Weiss].

1.2 Terminology

   This section gives a general conceptual overview of the terms used
   in this document.  Some of these terms are more precisely defined in
   later sections of this document.  The choice of terms and
   definitions was influenced by [MPLSFWK].

   Behavior Aggregate (BA)    a DS behavior aggregate.

   BA classifier              a classifier that selects packets based
                              only on the contents of the DS field.

   Boundary link              a link connecting the edge nodes of two
                              domains.

   Classifier                 an entity which selects packets based on
                              the content of packet headers according
                              to defined rules.

   DS behavior aggregate      a collection of packets with the same DS
                              codepoint crossing a link in a particular
                              direction.

   DS boundary node           a DS node that connects one DS domain to
                              a node either in another DS domain or in
                              a domain that is not DS capable.

   DS capable                 capable of implementing differentiated
                              services as described in this
                              architecture; usually used in reference
                              to a domain consisting of DS-compliant
                              nodes.

   DS codepoint               a specific value of the DSCP portion of
                              the DS field, used to select a PHB.

   DS compliant               enabled to support differentiated
                              services functions and behaviors as
                              defined in [DSFIELD], this document, and
                              other differentiated services documents;
                              usually used in reference to a node or
                              device.

   DS domain                  a DS-capable domain; a contiguous set of
                              nodes which operate with a common set of
                              service provisioning policies and PHB
                              definitions.

   DS egress node             a DS boundary node in its role in
                              handling traffic as it leaves a DS
                              domain.

   DS ingress node            a DS boundary node in its role in
                              handling traffic as it enters a DS
                              domain.

   DS interior node           a DS node that is not a DS boundary node.

   DS field                   the IPv4 header TOS octet or the IPv6
                              Traffic Class octet when interpreted in
                              conformance with the definition given in
                              [DSFIELD].  The bits of the DSCP field
                              encode the DS codepoint, while the
                              remaining bits are currently unused.

   DS node                    a DS-compliant node.

   DS region                  a set of contiguous DS domains which can
                              offer differentiated services over paths
                              across those DS domains.

   Downstream DS domain       the DS domain downstream of traffic flow
                              on a boundary link.

   Dropper                    a device that performs dropping.

   Dropping                   the process of discarding packets based
                              on specified rules; policing.

   Legacy node                a node which implements IPv4 Precedence
                              as defined in [RFC791,RFC1812] but which
                              is otherwise not DS compliant.

   Marker                     a device that performs marking.

   Marking                    the process of setting the DS codepoint
                              in a packet based on defined rules;
                              pre-marking, re-marking.

   Mechanism                  a specific algorithm or operation (e.g.,
                              queueing discipline) that is implemented
                              in a node to realize a set of one or more
                              per-hop behaviors.

   Meter                      a device that performs metering.

   Metering                   the process of measuring the temporal
                              properties (e.g., rate) of a traffic
                              stream selected by a classifier.  The
                              instantaneous state of this process may
                              be used to affect the operation of a
                              marker, shaper, or dropper, and/or may be
                              used for accounting and measurement
                              purposes.

   Microflow                  a single instance of an application-to-
                              application flow of packets which is
                              identified by source address, source
                              port, destination address, destination
                              port and protocol ID.

   MF Classifier              a multi-field (MF) classifier which
                              selects packets based on the content of
                              some arbitrary number of header fields;
                              typically some combination of source
                              address, destination address, DS field,
                              protocol ID, source port and destination
                              port.

   Per-Hop-Behavior (PHB)     the externally observable forwarding
                              behavior applied at a DS-compliant node
                              to a DS behavior aggregate.

   PHB group                  a set of one or more PHBs that can only
                              be meaningfully specified and implemented
                              simultaneously, due to a common
                              constraint applying to all PHBs in the
                              set such as a queue servicing or queue
                              management policy.  A PHB group provides
                              a service building block that allows a
                              set of related forwarding behaviors to be
                              specified together (e.g., four dropping
                              priorities).  A single PHB is a special
                              case of a PHB group.

   Policing                   the process of discarding packets (by a
                              dropper) within a traffic stream in
                              accordance with the state of a
                              corresponding meter enforcing a traffic
                              profile in a TCA.

   Pre-mark                   to set the DS codepoint of a packet prior
                              to entry into a downstream DS domain.

   Provider DS domain         the DS-capable service provider of a
                              source domain.

   Re-mark                    to change the DS codepoint of a packet,
                              usually performed by a marker in
                              accordance with a TCA.

   Service                    the overall treatment of a defined subset
                              of a customer's traffic within a DS
                              domain or end-to-end.

   Service Level Agreement    a service contract between a customer and
   (SLA)                      a service provider that specifies the
                              details of a TCA and the corresponding
                              forwarding service a customer should
                              receive.  A customer may be a user
                              organization (source domain) or another
                              DS domain (upstream domain).

   Service Provisioning       a policy which defines how traffic
   Policy (SPP)               conditioners are configured on DS
                              boundary nodes and how traffic streams
                              are mapped to DS behavior aggregates to
                              achieve a range of services.

   Shaper                     a device that performs shaping.

   Shaping                    the process of delaying packets within a
                              traffic stream to cause it to conform to
                              some defined traffic profile.

   Source domain              a domain which contains the node(s)
                              originating the traffic receiving a
                              particular service.

   Traffic conditioner        an entity which performs traffic
                              conditioning functions and which may
                              contain meters, markers, droppers, and
                              shapers.  Traffic conditioners are
                              typically deployed in DS boundary nodes
                              only.  A traffic conditioner may re-mark
                              a traffic stream or may discard or shape
                              packets to alter the temporal
                              characteristics of the stream and bring
                              it into compliance with a traffic
                              profile.

   Traffic conditioning       control functions performed to enforce
                              rules specified in a TCA, including
                              metering, marking, shaping, and policing.

   Traffic Conditioning       an agreement specifying classifier rules
   Agreement (TCA)            and any corresponding traffic profiles
                              and metering, marking, discarding and/or
                              shaping rules which are to apply to the
                              traffic streams selected by the
                              classifier.

   Traffic profile            a description of the temporal properties
                              of a traffic stream such as rate and
                              burst size.

   Traffic stream             an administratively significant set of
                              one or more microflows which traverse a
                              path segment.  A traffic stream may
                              consist of the set of active microflows
                              which are selected by a particular
                              classifier.

   Upstream DS domain         the DS domain upstream of traffic flow on
                              a boundary link.
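   The DS field and DS codepoint definitions above can be made concrete
   with a short sketch.  This is an illustrative example, not part of
   the specification: it assumes the [DSFIELD] layout in which the six
   most significant bits of the IPv4 TOS octet (or IPv6 Traffic Class
   octet) carry the DSCP and the two low-order bits are currently
   unused.

```python
# Illustrative sketch only: extract and set the DS codepoint in a
# DS field octet, assuming the [DSFIELD] layout (DSCP in the six
# most significant bits; the remaining two bits currently unused).

DSCP_MASK = 0xFC  # upper six bits of the DS field octet


def get_dscp(ds_octet: int) -> int:
    """Return the DS codepoint carried in a DS field octet."""
    return (ds_octet & DSCP_MASK) >> 2


def set_dscp(ds_octet: int, dscp: int) -> int:
    """Return a DS field octet re-marked with a new codepoint,
    preserving the two currently unused low-order bits."""
    if not 0 <= dscp <= 0x3F:
        raise ValueError("DSCP is a six-bit value")
    return (ds_octet & ~DSCP_MASK & 0xFF) | (dscp << 2)
```

   A marker performing re-marking would apply an operation like
   set_dscp to each packet selected by a classifier, leaving the
   unused bits untouched.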
1.3 Requirements

   The history of the Internet has been one of continuous growth in the
   number of hosts, the number and variety of applications, and the
   capacity of the network infrastructure, and this growth is expected
   to continue for the foreseeable future.  A scalable architecture for
   service differentiation must be able to accommodate this continued
   growth.

   The following requirements were identified and are addressed in this
   architecture:

   o  should accommodate a wide variety of services and provisioning
      policies, extending end-to-end or within a particular (set of)
      network(s),

   o  should allow decoupling of the service from the particular
      application in use,

   o  should work with existing applications without the need for
      application programming interface changes or host software
      modifications (assuming suitable deployment of classifiers,
      markers, and other traffic conditioning functions),

   o  should decouple traffic conditioning and service provisioning
      functions from forwarding behaviors implemented within the core
      network nodes,

   o  should not depend on hop-by-hop application signaling,

   o  should require only a small set of forwarding behaviors whose
      implementation complexity does not dominate the cost of a network
      device, and which will not introduce bottlenecks for future
      high-speed system implementations,

   o  should avoid per-microflow or per-customer state within core
      network nodes,

   o  should utilize only aggregated classification state within the
      network core,

   o  should permit simple packet classification implementations in
      core network nodes (BA classifier),

   o  should permit reasonable interoperability with non-DS-compliant
      network nodes,

   o  should accommodate incremental deployment.
1.4 Comparisons with Other Approaches

   The differentiated services architecture specified in this document
   can be contrasted with other existing models of service
   differentiation.  We classify these alternative models into the
   following categories: relative priority marking, service marking,
   label switching, Integrated Services/RSVP, and static per-hop
   classification.

   Examples of the relative priority marking model include IPv4
   Precedence marking as defined in [RFC791], 802.5 Token Ring priority
   [TR], and the default interpretation of 802.1p traffic classes
   [802.1p].  In this model the application, host, or proxy node
   selects a relative priority or "precedence" for a packet (e.g.,
   delay or discard priority), and the network nodes along the transit
   path apply the appropriate priority forwarding behavior
   corresponding to the priority value within the packet's header.  Our
   architecture can be considered as a refinement to this model, since
   we more clearly specify the role and importance of boundary nodes
   and traffic conditioners, and since our per-hop behavior model
   permits more general forwarding behaviors than relative delay or
   discard priority.

   An example of a service marking model is IPv4 TOS as defined in
   [RFC1349].  In this example each packet is marked with a request for
   a "type of service", which may include "minimize delay", "maximize
   throughput", "maximize reliability", or "minimize cost".  Network
   nodes may select routing paths or forwarding behaviors which are
   suitably engineered to satisfy the service request.  This model is
   subtly different from our architecture.  Note that we do not
   describe the use of the DS field as an input to route selection.
   The TOS markings defined in [RFC1349] are very generic and do not
   span the range of possible service semantics.  Furthermore, the
   service request is associated with each individual packet, whereas
   some service semantics may depend on the aggregate forwarding
   behavior of a sequence of packets.  The service marking model does
   not easily accommodate growth in the number and range of future
   services (since the codepoint space is small) and involves
   configuration of the "TOS->forwarding behavior" association in each
   core network node.  Standardizing service markings implies
   standardizing service offerings, which is outside the scope of the
   IETF.  Note that provisions are made in the allocation of the DS
   codepoint space to allow for locally significant codepoints which
   may be used by a provider to support service marking semantics
   [DSFIELD].

   Examples of the label switching (or virtual circuit) model include
   Frame Relay, ATM, and MPLS [FRELAY, ATM, MPLSTE].  In this model
   path forwarding state and traffic management or QoS state is
   established for traffic streams on each hop along a network path.
   Traffic aggregates of varying granularity are associated with a
   label switched path at an ingress node, and packets/cells within
   each label switched path are marked with a forwarding label that is
   used to look up the next-hop node, the per-hop forwarding behavior,
   and the replacement label at each hop.  This model permits finer
   granularity resource allocation to traffic streams, since label
   values are not globally significant but are only significant on a
   single link; therefore resources can be reserved for the aggregate
   of packets/cells received on a link with a particular label, and the
   label switching semantics govern the next-hop selection, allowing a
   traffic stream to follow a specially engineered path through the
   network [MPLSTE].  This improved granularity comes at the cost of
   additional management and configuration requirements to establish
   and maintain the label switched paths.  In addition, the amount of
   forwarding state maintained at each node scales in proportion to the
   number of edge nodes of the network in the best case (assuming
   multipoint-to-point label switched paths), and it scales in
   proportion with the square of the number of edge nodes in the worst
   case, when edge-edge label switched paths with provisioned resources
   are employed.

   The Integrated Services/RSVP model relies upon traditional datagram
   forwarding in the default case, but allows sources and receivers to
   exchange signaling messages which establish additional packet
   classification and forwarding state on each node along the path
   between them [RFC1633, RSVP].  In the absence of state aggregation,
   the amount of state on each node scales in proportion to the number
   of concurrent reservations, which can be potentially large on
   high-speed links.  This model also requires application support for
   the RSVP signaling protocol.  Differentiated services mechanisms can
   be utilized to aggregate Integrated Services/RSVP state in the core
   of the network [Bernet].

   A variant of the Integrated Services/RSVP model eliminates the
   requirement for hop-by-hop signaling by utilizing only "static"
   classification and forwarding policies which are implemented in each
   node along a network path.  These policies are updated on
   administrative timescales and not in response to the instantaneous
   mix of microflows active in the network.  The state requirements for
   this variant are potentially worse than those encountered when RSVP
   is used, especially in backbone nodes, since the number of static
   policies that might be applicable at a node over time may be larger
   than the number of active sender-receiver sessions that might have
   installed reservation state on a node.  Although the support of
   large numbers of classifier rules and forwarding policies may be
   computationally feasible, the management burden associated with
   installing and maintaining these rules on each node within a
   backbone network which might be traversed by a traffic stream is
   substantial.

   Although we contrast our architecture with these alternative models
   of service differentiation, it should be noted that links and nodes
   employing these techniques may be utilized to extend differentiated
   services behaviors and semantics across a layer-2 switched
   infrastructure (e.g., 802.1p LANs, Frame Relay/ATM backbones)
   interconnecting DS nodes, and in the case of MPLS may be used as an
   alternative intra-domain implementation technology.  The constraints
   imposed by the use of a specific link-layer technology in particular
   regions of a DS domain (or in a network providing access to DS
   domains) may imply the differentiation of traffic on a coarser grain
   basis.  Depending on the mapping of PHBs to different link-layer
   services and the way in which packets are scheduled over a
   restricted set of priority classes (or virtual circuits of different
   category and capacity), all or a subset of the PHBs in use may be
   supportable (or may be indistinguishable).

2. Differentiated Services Architectural Model

   The differentiated services architecture is based on a simple model
   where traffic entering a network is classified and possibly
   conditioned at the boundaries of the network, and assigned to
   different behavior aggregates.
Each behavior aggregate is identified 555 by a single DS codepoint. Within the core of the network, packets 556 are forwarded according to the per-hop behavior associated with the 557 DS codepoint. In this section, we discuss the key components within 558 a differentiated services region, traffic classification and 559 conditioning functions, and how differentiated services are achieved 560 through the combination of traffic conditioning and PHB-based 561 forwarding. 563 2.1 Differentiated Services Domain 565 A DS domain is a contiguous set of DS nodes which operate with a 566 common service provisioning policy and set of PHB groups implemented 567 on each node. A DS domain has a well-defined boundary consisting of 568 DS boundary nodes which classify and possibly condition ingress 569 traffic to ensure that packets which transit the domain are 570 appropriately marked to select a PHB from one of the PHB groups 571 supported within the domain. Nodes within the DS domain select the 572 forwarding behavior for packets based on their DS codepoint, mapping 573 that value to one of the supported PHBs using either the recommended 574 codepoint->PHB mapping or a locally customized mapping [DSFIELD]. 575 Inclusion of non-DS-compliant nodes within a DS domain may result in 576 unpredictable performance and may impede the ability to satisfy 577 service level agreements (SLAs). 579 A DS domain normally consists of one or more networks under the same 580 administration; for example, an organization's intranet or an ISP. 581 The administration of the domain is responsible for ensuring that 582 adequate resources are provisioned and/or reserved to support the 583 SLAs offered by the domain. 585 2.1.1 DS Boundary Nodes and Interior Nodes 587 A DS domain consists of DS boundary nodes and DS interior nodes. 
DS boundary nodes interconnect the DS domain to other DS or
non-DS-capable domains, whilst DS interior nodes only connect to
other DS interior or boundary nodes within the same DS domain.

Both DS boundary nodes and interior nodes must be able to apply the
appropriate PHB to packets based on the DS codepoint; otherwise
unpredictable behavior may result.  In addition, DS boundary nodes
may be required to perform traffic conditioning functions as defined
by a traffic conditioning agreement (TCA) between their DS domain and
the peering domain which they connect to (see Sec. 2.3.3).

Interior nodes may be able to perform limited traffic conditioning
functions such as DS codepoint re-marking.  Interior nodes which
implement more complex classification and traffic conditioning
functions are analogous to DS boundary nodes (see Sec. 2.3.4.4).

A host in a network containing a DS domain may act as a DS boundary
node for traffic from applications running on that host; we therefore
say that the host is within the DS domain.  If a host does not act as
a boundary node, then the DS node topologically closest to that host
acts as the DS boundary node for that host's traffic.

2.1.2 DS Ingress Node and Egress Node

DS boundary nodes act both as a DS ingress node and as a DS egress
node for different directions of traffic.  Traffic enters a DS domain
at a DS ingress node and leaves a DS domain at a DS egress node.  A
DS ingress node is responsible for ensuring that the traffic entering
the DS domain conforms to any TCA between it and the other domain to
which the ingress node is connected.  A DS egress node may perform
traffic conditioning functions on traffic forwarded to a directly
connected peering domain, depending on the details of the TCA between
the two domains.  Note that a DS boundary node may act as a DS
interior node for some set of interfaces.
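Nodes select a PHB from the DS codepoint, which per [DSFIELD] occupies
the six most significant bits of the former IPv4 TOS octet.  The bit
layout can be illustrated with a short sketch (Python is used here
purely for illustration; it is not part of this architecture):

```python
def get_dscp(tos_octet: int) -> int:
    """Extract the 6-bit DS codepoint from the (former TOS) octet."""
    return (tos_octet >> 2) & 0x3F

def set_dscp(tos_octet: int, dscp: int) -> int:
    """Re-mark: replace the codepoint, preserving the two low-order
    (currently unused) bits of the octet."""
    return ((dscp & 0x3F) << 2) | (tos_octet & 0x03)

def precedence_to_class_selector(prec: int) -> int:
    """IPv4 Precedence value 'ppp' corresponds to the Class Selector
    codepoint 'ppp000' defined in [DSFIELD]."""
    return (prec & 0x7) << 3
```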
2.2 Differentiated Services Region

A differentiated services region (DS region) is a set of one or more
contiguous DS domains.  DS regions are capable of supporting
differentiated services along paths which span the domains within the
region.

The DS domains in a DS region may support different PHB groups
internally and different codepoint->PHB mappings.  However, to permit
services which span across the domains, the peering DS domains must
each establish a peering SLA which includes a TCA specifying how
transit traffic from one DS domain to another is conditioned at the
boundary between the two DS domains.

It is possible that several DS domains within a DS region may adopt a
common service provisioning policy and may support a common set of
PHB groups and codepoint mappings, thus eliminating the need for
traffic conditioning between those DS domains.

2.3 Traffic Classification and Conditioning

Differentiated services are extended across a DS domain boundary by
establishing an SLA between an upstream network and a downstream DS
domain.  The SLA will generally include a traffic conditioning
agreement which specifies packet classification and re-marking policy
and may also specify traffic profiles and the actions to be applied
to traffic streams which are in- or out-of-profile (see Sec. 2.3.2).

The packet classification policy identifies the subset of traffic
which may receive a differentiated service by being conditioned
and/or mapped to one or more behavior aggregates (by DS codepoint
re-marking) within the DS domain.

Traffic conditioning performs metering, shaping, policing and/or
re-marking to ensure that the traffic entering the DS domain conforms
to the rules specified in the TCA, in accordance with the domain's
service provisioning policy.
The extent of traffic conditioning required is dependent on the
specifics of the service offering, and may range from simple
codepoint re-marking to complex policing and shaping operations.  The
details of traffic conditioning policies which are negotiated between
networks are outside the scope of this document.

2.3.1 Classifiers

Packet classifiers select packets in a traffic stream based on the
content of some portion of the packet header.  We define two types of
classifiers.  The BA (Behavior Aggregate) classifier classifies
packets based on the DS codepoint only.  The MF (Multi-Field)
classifier selects packets based on the value of a combination of one
or more header fields, such as source address, destination address,
DS field, protocol ID, source port and destination port numbers, and
other information such as incoming interface.

Classifiers are used to "steer" packets matching some specified rule
to an element of a traffic conditioner for further processing.
Classifiers must be configured by some management procedure in
accordance with the appropriate TCA.

The classifier should authenticate the information which it uses to
classify the packet (see Sec. 6).

Note that in the event of upstream packet fragmentation, MF
classifiers which examine the contents of transport-layer header
fields may incorrectly classify packet fragments subsequent to the
first.  A possible solution to this problem is to maintain
fragmentation state; however, this is not a general solution due to
the possibility of upstream fragment re-ordering or divergent routing
paths.  The policy to apply to packet fragments is outside the scope
of this document.

2.3.2 Traffic Profiles

A traffic profile specifies the temporal properties of a traffic
stream (selected by a classifier) which is to be mapped to a behavior
aggregate.
It provides rules for determining whether a particular packet is
in-profile or out-of-profile.  For example, a profile based on a
token bucket may look like:

   codepoint=X, use token-bucket r, b

The above profile indicates that all packets marked with DS codepoint
X should be measured against a token bucket meter with rate r and
burst size b.  In this example out-of-profile packets are those
packets in the traffic stream which arrive when insufficient tokens
are available in the bucket.  The concept of in- and out-of-profile
can be extended to more than two levels, e.g., multiple levels of
conformance with a profile may be defined and enforced.

Different conditioning actions may be applied to the in-profile
packets and out-of-profile packets, or different accounting actions
may be triggered.  In-profile packets may be allowed to enter the DS
domain without further conditioning; or, alternatively, their DS
codepoint may be changed.  The latter happens when the DS codepoint
is set to a non-Default value for the first time [DSFIELD], or when
the packets enter a DS domain that uses a different PHB group or
codepoint->PHB mapping policy for this traffic stream.
Out-of-profile packets may be queued until they are in-profile
(shaped), discarded (policed), marked with a new codepoint
(re-marked), or forwarded unchanged while triggering some accounting
procedure.  Out-of-profile packets may be mapped to one or more
behavior aggregates that are "inferior" in some dimension of
forwarding performance to the BA which in-profile packets are mapped
to.

Note that a traffic profile is an optional component of a TCA and its
use is dependent on the specifics of the service offering and the
domain's service provisioning policy.

2.3.3 Traffic Conditioners

A traffic conditioner may contain the following elements: meter,
marker, shaper, and dropper.
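How these elements might combine against the token-bucket profile of
Sec. 2.3.2 can be sketched as follows (Python is used purely for
illustration; the dict-based packet model and function names are
assumptions of the sketch, not part of this architecture):

```python
import time

class TokenBucketMeter:
    """Meter a stream against a profile with rate r (bytes/s) and
    burst size b (bytes)."""
    def __init__(self, r: float, b: float):
        self.r, self.b = r, b
        self.tokens = b
        self.last = time.monotonic()

    def in_profile(self, pkt_len: int) -> bool:
        # Replenish tokens at rate r, capped at the burst size b.
        now = time.monotonic()
        self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return True
        return False

def condition(pkt, meter, in_dscp, out_dscp=None):
    """Condition one BA-classified packet: mark in-profile packets,
    re-mark out-of-profile packets to an "inferior" aggregate, or
    drop (police) them when no out-of-profile codepoint is given."""
    if meter.in_profile(pkt["len"]):
        pkt["dscp"] = in_dscp
        return pkt
    if out_dscp is not None:
        pkt["dscp"] = out_dscp      # re-marked
        return pkt
    return None                      # policed (discarded)
```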
A traffic stream is selected by a classifier, which steers the
packets to a logical instance of a traffic conditioner.  A meter is
used (where appropriate) to measure the traffic stream against a
traffic profile.  The state of the meter with respect to a particular
packet (e.g., whether it is in- or out-of-profile) may be used to
affect a marking, dropping, or shaping action.

When packets exit the traffic conditioner of a DS boundary node the
DS codepoint of each packet must be set to an appropriate value.

Fig. 1 shows the block diagram of a classifier and traffic
conditioner.  Note that a traffic conditioner may not necessarily
contain all four elements.  For example, in the case where no traffic
profile is in effect, packets may only pass through a classifier and
a marker.

                 +-------+
                 |       |-----------------------------+
          +----->| Meter |                             |
          |      |       |------------+                |
          |      +-------+            |                |
          |                           V                V
          |    +------------+     +--------+     +---------+
          |    |            |     |        |     | Shaper/ |
packets =====>| Classifier |=====>| Marker |=====>| Dropper |=====>
               |            |     |        |     |         |
               +------------+     +--------+     +---------+

  Fig. 1: Logical View of a Packet Classifier and Traffic Conditioner

2.3.3.1 Meters

Traffic meters measure the temporal properties of the stream of
packets selected by a classifier against a traffic profile specified
in a TCA.  A meter passes state information to other conditioning
functions to trigger a particular action for each packet which is
either in- or out-of-profile (to some extent).

2.3.3.2 Markers

Packet markers set the DS field of a packet to a particular
codepoint, adding the marked packet to a particular DS behavior
aggregate.  The marker may be configured to mark all packets which
are steered to it to a single codepoint, or may be configured to mark
a packet to one of a set of codepoints used to select a PHB in a PHB
group, according to the state of a meter.
When the marker changes the codepoint in a packet it is said to have
"re-marked" the packet.

2.3.3.3 Shapers

Shapers delay some or all of the packets in a traffic stream in order
to bring the stream into compliance with a traffic profile.  A shaper
usually has a finite-size buffer, and packets may be discarded if
there is not sufficient buffer space to hold the delayed packets.

2.3.3.4 Droppers

Droppers discard some or all of the packets in a traffic stream in
order to bring the stream into compliance with a traffic profile.
This process is known as "policing" the stream.  Note that a dropper
can be implemented as a special case of a shaper by setting the
shaper buffer size to zero (or a few) packets.

2.3.4 Location of Traffic Conditioners and MF Classifiers

Traffic conditioners are usually located within DS ingress and egress
boundary nodes, but may also be located in nodes within the interior
of a DS domain, or within a non-DS-capable domain.

2.3.4.1 Within the Source Domain

We define the source domain as the domain containing the node(s)
which originate the traffic receiving a particular service.  Traffic
sources and intermediate nodes within a source domain may perform
traffic classification and conditioning functions.  The traffic
originating from the source domain across a boundary may be marked by
the traffic sources directly or by intermediate nodes before leaving
the source domain.  This is referred to as initial marking or
"pre-marking".

Consider the example of a company that has the policy that its CEO's
packets should have higher priority.  The CEO's host may mark the DS
field of all outgoing packets with a DS codepoint that indicates
"higher priority".  Alternatively, the first-hop router directly
connected to the CEO's host may classify the traffic and mark the
CEO's packets with the correct DS codepoint.
Such high priority traffic may also be conditioned near the source so
that there is a limit on the amount of high priority traffic
forwarded from a particular source.

There are some advantages to marking packets close to the traffic
source.  First, a traffic source can more easily take an
application's preferences into account when deciding which packets
should receive better forwarding treatment.  Also, classification of
packets is much simpler before the traffic has been aggregated with
packets from other sources, since the number of classification rules
which need to be applied within a single node is reduced.

Since packet marking may be distributed across multiple nodes, the
source DS domain is responsible for ensuring that the aggregated
traffic towards its provider DS domain conforms to the appropriate
TCA.  Additional allocation mechanisms such as bandwidth brokers or
RSVP may be used to dynamically allocate resources for a particular
DS behavior aggregate within the provider's network [2BIT,Bernet].
The boundary node of the source domain should also monitor
conformance to the TCA, and may police, shape, or re-mark packets as
necessary.

2.3.4.2 At the Boundary of a DS Domain

Traffic streams may be classified, marked, and otherwise conditioned
on either end of a boundary link (the DS egress node of the upstream
domain or the DS ingress node of the downstream domain).  The TCA
between the domains should specify which domain has responsibility
for mapping traffic streams to DS behavior aggregates and
conditioning those aggregates in conformance with the TCA.  However,
a DS ingress node must assume that the incoming traffic may not
conform to the TCA and must be prepared to enforce the TCA in
accordance with local policy.
When packets are pre-marked and conditioned in the upstream domain,
potentially fewer classification and traffic conditioning rules need
to be supported in the downstream DS domain.  In this circumstance
the downstream DS domain may only need to re-mark or police the
incoming behavior aggregates to enforce the TCA.  However, more
sophisticated services which are path- or source-dependent may
require MF classification in the downstream DS domain's ingress
nodes.

If a DS ingress node is connected to an upstream non-DS-capable
domain, the DS ingress node must be able to perform all necessary
traffic conditioning functions on the incoming traffic.

2.3.4.3 In non-DS-Capable Domains

Traffic sources or intermediate nodes in a non-DS-capable domain may
employ traffic conditioners to pre-mark traffic before it reaches the
ingress of a downstream DS domain.  In this way the local policies
for classification and marking may be concealed.

2.3.4.4 In Interior DS Nodes

Although the basic architecture assumes that complex classification
and traffic conditioning functions are located only in a network's
ingress and egress boundary nodes, deployment of these functions in
the interior of the network is not precluded.  For example, more
restrictive access policies may be enforced on a transoceanic link,
requiring MF classification and traffic conditioning functionality in
the upstream node on the link.  This approach may have scaling
limits, due to the potentially large number of classification and
conditioning rules that might need to be maintained.

2.4 Per-Hop Behaviors

A per-hop behavior (PHB) is a description of the externally
observable forwarding behavior of a DS node applied to a particular
DS behavior aggregate.  "Forwarding behavior" is a general concept in
this context.
For example, in the event that only one behavior aggregate occupies a
link, the observable forwarding behavior (i.e., loss, delay, jitter)
will often depend only on the relative loading of the link (assuming
a work-conserving scheduling discipline).  Useful behavioral
distinctions are mainly observed when multiple behavior aggregates
compete for buffer and bandwidth resources on a node.  The PHB is the
means by which a node allocates resources to behavior aggregates, and
it is on top of this basic hop-by-hop resource allocation mechanism
that useful differentiated services may be constructed.

The simplest example of a PHB is one which guarantees a minimal
bandwidth allocation of X% of a link (over some reasonable time
interval) to a behavior aggregate.  This PHB can be fairly easily
measured under a variety of competing traffic conditions.  A slightly
more complex PHB would guarantee a minimal bandwidth allocation of X%
of a link, with proportional fair sharing of any excess link
capacity.  In general, the observable behavior of a PHB may depend on
certain constraints on the traffic characteristics of the associated
behavior aggregate, or on the characteristics of other behavior
aggregates.

PHBs may be specified in terms of their resource (e.g., buffer,
bandwidth) priority relative to other PHBs, or in terms of their
relative observable traffic characteristics (e.g., delay, loss).
These PHBs may be used as building blocks to allocate resources and
should be specified as a group (PHB group) for consistency.  PHB
groups will usually share a common constraint applying to each PHB
within the group, such as a packet scheduling or buffer management
policy.
The relationship between PHBs in a group may be in terms of absolute
or relative priority (e.g., discard priority by means of
deterministic or stochastic thresholds), but this is not required
(e.g., N equal link shares).  A single PHB defined in isolation is a
special case of a PHB group.

PHBs are implemented in nodes by means of some buffer management and
packet scheduling mechanisms.  PHBs are defined in terms of behavior
characteristics relevant to service provisioning policies, and not in
terms of particular implementation mechanisms.  In general, a variety
of implementation mechanisms may be suitable for implementing a
particular PHB group.  Furthermore, it is likely that more than one
PHB group may be implemented on a node and utilized within a domain.
PHB groups should be defined such that the proper resource allocation
between groups can be inferred, and integrated mechanisms can be
implemented which can simultaneously support two or more groups.

As described in [DSFIELD], a PHB is selected at a node by a mapping
of the DS codepoint in a received packet.  Standardized PHBs have a
recommended codepoint.  However, the total space of codepoints is
larger than the space available for recommended codepoints for
standardized PHBs, and [DSFIELD] leaves provisions for locally
configurable mappings.  A codepoint->PHB mapping table may contain
both 1->1 and N->1 mappings.  All codepoints must be mapped to some
PHB; in the absence of some local policy, codepoints which are not
mapped to a standardized PHB in accordance with that PHB's
specification should be mapped to the Default PHB.
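The mapping behavior described above (1->1 and N->1 entries, with
unmapped codepoints falling back to the Default PHB) might be modeled
as in the following sketch (Python used purely for illustration; the
PHB names are hypothetical):

```python
DEFAULT_PHB = "Default"

# A locally configured codepoint->PHB table.  Both 1->1 and N->1
# entries are permitted.
CODEPOINT_TO_PHB = {
    0b000000: DEFAULT_PHB,   # recommended codepoint for the Default PHB
    0b101110: "PHB-A",       # 1->1 mapping
    0b001010: "PHB-B",       # N->1: two codepoints select the
    0b001100: "PHB-B",       #       same PHB
}

def select_phb(dscp: int) -> str:
    # All codepoints must map to some PHB; absent a local policy,
    # unmapped codepoints are treated as selecting the Default PHB.
    return CODEPOINT_TO_PHB.get(dscp & 0x3F, DEFAULT_PHB)
```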
2.5 Network Resource Allocation

The implementation, configuration, operation and administration of
the supported PHB groups in the nodes of a DS domain should
effectively partition the resources of those nodes and the inter-node
links between behavior aggregates, in accordance with the domain's
service provisioning policy.  Traffic conditioners can further
control the usage of these resources through enforcement of TCAs and
possibly through operational feedback from the nodes and traffic
conditioners in the domain.  Although a range of services can be
deployed in the absence of complex traffic conditioning functions
(e.g., using only static marking policies), functions such as
policing, shaping, and dynamic re-marking enable the deployment of
services providing quantitative performance metrics.

The configuration of and interaction between traffic conditioners
and interior nodes should be managed by the administrative control of
the domain and may require operational control through protocols and
a control entity.  There is a wide range of possible control models
[DSFWK].  The precise nature and implementation of the interaction
between these components is outside the scope of this architecture.
However, scalability requires that the control of the domain does not
require micro-management of the network resources.  The most scalable
control model would operate nodes in open-loop in the operational
timeframe, and would only require administrative-timescale management
as SLAs are varied.  This simple model may be unsuitable in some
circumstances, and some automated but slowly varying operational
control (minutes rather than seconds) may be desirable to balance the
utilization of the network against the recent load profile.

3. Per-Hop Behavior Specification Guidelines

Basic requirements for per-hop behavior standardization are given in
[DSFIELD].
This section elaborates on that text by describing additional
guidelines for PHB (group) specifications.  This is intended to help
foster implementation consistency.  Before a PHB group is proposed
for standardization it should satisfy these guidelines, as
appropriate, to preserve the integrity of this architecture.

G.1: A PHB standard must specify a recommended DS codepoint selected
from the codepoint space reserved for standard mappings [DSFIELD].
Recommended codepoints will be assigned by the IANA.  A PHB proposal
may recommend a temporary codepoint from the EXP/LU space to
facilitate inter-domain experimentation.  Determination of a packet's
PHB must not require inspection of additional packet header fields
beyond the DS field.

G.2: The specification of each newly proposed PHB group should
include an overview of the behavior and the purpose of the behavior
being proposed.  The overview should include a statement of the
problem or problems which the PHB group is targeted to address, and
the basic concepts behind the PHB group.  These concepts should
include, but are not restricted to, queueing behavior, discard
behavior, and output link selection behavior.  Lastly, the overview
should specify the method by which the PHB group solves the problem
or problems specified in the problem statement.

Any configuration or management issues which affect the basic PHB
definition should be specified in the overview of the behavior.  The
actual details of the management and configuration of PHB groups in
DS nodes should be addressed in a separate, parallel document.

G.3: A PHB group specification should indicate the number of
individual PHBs specified.
In the event that multiple PHBs are specified, the interactions
between these PHBs, and any constraints that must be respected
globally by all the PHBs within the group, should be clearly
specified.  As an example, the specification must indicate whether
the probability of packet reordering within a microflow is increased
if different packets in that microflow are marked for different PHBs
within the group.

G.4: A PHB group may be specified for local use within a domain in
order to provide some domain-specific functionality or
domain-specific services.  In this event, the PHB specification is
useful for providing vendors with a consistent definition of the PHB
group.  The PHB specification can also provide semantics for PHB
translation and service mappings between peer domains, one of which
does not support this PHB group.  However, any PHB group which is
defined for local use should not be considered for standardization,
but may be published as an Informational RFC.  In contrast, a PHB
group which is intended for general use will follow a stricter
standardization process.  Therefore all PHB proposals should
specifically state whether they are to be considered for general or
local use.

It is recognized that PHB groups can be designed with the intent of
providing host-to-host, WAN edge-to-WAN edge, or domain
edge-to-domain edge services.  Use of the term "end-to-end" in a PHB
definition should be interpreted to mean "host-to-host" for
consistency.

Other PHB groups may be defined and deployed locally within domains,
for experimental or operational purposes.  There is no requirement
that these PHB groups be publicly documented, but they should utilize
DS codepoints from one of the EXP/LU pools as defined in [DSFIELD].
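The allocation pools referred to above are distinguished by the
low-order bits of the codepoint, as defined in [DSFIELD]: pool 1
('xxxxx0') is reserved for standard assignments, while pools 2
('xxxx11') and 3 ('xxxx01') are the EXP/LU pools.  A minimal sketch
(Python used purely for illustration):

```python
def codepoint_pool(dscp: int) -> int:
    """Return the [DSFIELD] allocation pool (1, 2, or 3) of a 6-bit
    codepoint.  Pool 1 ('xxxxx0') holds recommended codepoints for
    standardized PHBs; pools 2 ('xxxx11') and 3 ('xxxx01') are the
    experimental/local use (EXP/LU) pools."""
    if dscp & 0b1 == 0:
        return 1
    return 2 if dscp & 0b11 == 0b11 else 3
```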
G.5: It may be possible or appropriate for a packet marked for a PHB
within a PHB group to be re-marked to select another PHB, either
within a domain or across a domain boundary.  Typically there are
four reasons for PHB modification:

a. The codepoints associated with the PHB group are collectively
   intended to carry state about the network;
b. Conditions arise which require PHB promotion or demotion of a
   packet (this assumes that PHBs within the group can be ranked in
   some order);
c. A PHB group is not implemented on both sides of a boundary between
   cooperating domains; all codepoints associated with a PHB group
   have to be mapped to some other set of codepoints selecting PHBs
   from another group in the next domain;
d. The boundary between two domains is not covered by an SLA.  In
   this case the codepoint/PHB to select when crossing the boundary
   link will be determined by the local policy of the upstream
   domain.

In contrast, it may also be desirable for specific PHB groups to be
preserved within a domain and/or across multiple domains.  Typically
this is because the PHB groups carry some host-to-host, WAN
edge-to-WAN edge, or domain edge-to-domain edge semantics which are
difficult to duplicate when the PHB group is mapped to a different
PHB group.  Further, these semantics may also be difficult to
duplicate if packets are promoted or demoted within the same PHB
group.

A PHB specification should clearly state the circumstances under
which packets marked for a PHB within a PHB group may, or should, be
modified (e.g., promoted or demoted) to another PHB within the group,
or preserved within a domain.
A PHB specification should clearly state the circumstances under
which packets marked for a PHB within a PHB group may, or should, be
mapped, modified, or preserved across multiple, cooperating domains
when an SLA covering the traffic exists among the domains.
Recommendations for multi-domain treatment of PHBs do not apply to
traffic not covered by an SLA among the domains involved.  A PHB
specification should also clearly state the circumstances under which
packets marked for a PHB group may, or should, be marked for an
alternative PHB group.

If it is undesirable for the packet's PHB (within a PHB group) to be
modified, the specification should clearly state the consequent risks
when the PHB is modified.  A possible risk of changing a packet's
PHB, either within or outside a PHB group, is a higher probability of
packet re-ordering.  For certain PHB groups, it may be appropriate to
reflect a state change in the node by changing a PHB.  If a PHB group
is designed to reflect the state of a network, the PHB definition
must adequately describe the relationship between the PHBs and the
states they reflect.  A PHB specification may include constraints on
actions that change the PHB group.  These constraints may be
specified as actions the node should, or must, perform.

G.6: A PHB group specification should include a section defining the
implications of tunneling for the utility of the PHB group.  This
section should specify the implications for the utility of the PHB
group when a newly created outer header encapsulates the original DS
field of the inner header in a tunnel.  This section should also
discuss what possible changes should be applied to the inner header
at the egress of the tunnel, when both the codepoints from the inner
header and the outer header are accessible (see Sec. 6.2).
G.7: The process of specifying PHB groups is likely to be incremental
in nature.  When new PHB groups are proposed, their known
interactions with previously specified PHB groups should be
documented.  When a new PHB group is created, it can be entirely new
in scope or it can be an extension to an existing PHB group.  If the
PHB group is entirely independent of some or all of the existing PHB
specifications, a section should be included in the PHB specification
which details how the new PHB group can co-exist with those PHB
groups already standardized.  For example, this section might
indicate the possibility of packet re-ordering within a microflow for
packets marked by codepoints associated with two separate PHB groups.
If concurrent operation of two (or more) different PHB groups in the
same node is impossible or detrimental, this should be stated.  If
the concurrent operation of two (or more) different PHB groups
requires some specific behaviors by the node when packets marked for
PHBs from these different PHB groups are being processed by the node
at the same time, these behaviors should be stated.

Care should be taken to avoid circularity in the definitions of PHB
groups.

If the proposed PHB group is an extension to an existing PHB group, a
section should be included in the PHB group specification which
details how this extension interoperates with the behavior being
extended.  Further, if the extension alters or more narrowly defines
the existing behavior in some way, this should also be clearly
indicated.

G.8: Each PHB specification should include a section specifying
minimal conformance requirements for implementations of the PHB
group.
This conformance section is intended to provide a means for
specifying the details of a behavior while allowing for
implementation variation to the extent permitted by the PHB
specification.  This conformance section can take the form of rules,
tables, pseudo-code, or tests.

G.9: A PHB specification should include a section detailing the
security implications of the behavior.  This section should include a
discussion of the re-marking of the inner header's codepoint at the
egress of a tunnel and its effect on the desired forwarding behavior.
Further, this section should also discuss how the proposed PHB group
could be used in denial-of-service attacks, reduction-of-service-
contract attacks, and service contract violation attacks.  Lastly,
this section should discuss possible means for detecting such attacks
as they are relevant to the proposed behavior.

G.10: It is strongly recommended that an appendix be provided for
each PHB specification that considers the implications of the
proposed behavior for current and potential services.  These services
could include, but are not restricted to, user-specific,
device-specific, domain-specific, or end-to-end services.  It is also
strongly recommended that the appendix include a section describing
how the services are verified by users, devices, and/or domains.

G.11: If the PHB specification is targeted for local use within a
domain, it is recommended that the appendix include a description of
how the PHB group is best mapped to existing general-use PHB groups,
as well as to other local-use PHB groups when necessary.

G.12: It is recommended that an appendix be provided with each PHB
specification which considers the impact of the proposed PHB group on
existing higher-layer protocols.
Under some circumstances PHBs may allow for possible changes to higher-layer protocols which may increase or decrease the utility of the proposed PHB group.

G.13: It is recommended that an appendix be provided with each PHB specification which recommends mappings to link-layer QoS mechanisms to support the intended behavior of the PHB across a shared-medium or switched link-layer. The determination of the most appropriate mapping between a PHB and a link-layer QoS mechanism is dependent on many factors and is outside the scope of this document; however, the specification should attempt to offer some guidance.

4. Interoperability with Non-Differentiated Services-Compliant Nodes

We define a non-differentiated services-compliant node (non-DS-compliant node) as any node which does not interpret the DS field as specified in [DSFIELD] and/or does not implement some or all of the standardized PHBs. This may be due to the capabilities or configuration of the node. We define a legacy node as a special case of a non-DS-compliant node which implements IPv4 Precedence classification and forwarding as defined in [RFC791, RFC1812], but which is otherwise not DS-compliant. The precedence values in the IPv4 TOS octet are compatible by intention with the Class Selector Codepoints defined in [DSFIELD], and the precedence forwarding behaviors defined in [RFC791, RFC1812] comply with the Class Selector PHB Requirements also defined in [DSFIELD]. A key distinction between a legacy node and a DS-compliant node is that the legacy node may or may not interpret bits 3-6 of the TOS octet as defined in [RFC1349] (the "DTRC" bits); in practice it will not interpret these bits as specified in [DSFIELD]. We assume that the use of the TOS markings defined in [RFC1349] is deprecated.
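The bit-level compatibility between IPv4 Precedence and the Class Selector Codepoints can be checked mechanically. The following sketch is illustrative only (it is not part of this architecture, and the function names are ours): precedence occupies the three most significant bits of the TOS octet, while a Class Selector Codepoint is a DSCP of the form 'xxx000' whose upper three bits carry the same value.

```python
def precedence_from_tos(tos_octet: int) -> int:
    """IPv4 Precedence is the three most significant bits of the TOS
    octet [RFC791]; a legacy node classifies on these bits."""
    return (tos_octet >> 5) & 0x7

def class_selector_codepoint(precedence: int) -> int:
    """The Class Selector Codepoint 'xxx000' whose upper three bits
    equal the given precedence value."""
    return precedence << 3

def dscp_from_ds_field(ds_octet: int) -> int:
    """A DS-compliant node reads the DSCP from the six most
    significant bits of the (former TOS) octet [DSFIELD]."""
    return (ds_octet >> 2) & 0x3F

# A packet marked with a Class Selector Codepoint is seen by a legacy
# node as carrying the corresponding precedence value:
for prec in range(8):
    dscp = class_selector_codepoint(prec)
    ds_octet = dscp << 2               # DSCP in bits 0-5, low two bits zero
    assert precedence_from_tos(ds_octet) == prec
    assert dscp_from_ds_field(ds_octet) == dscp
```

This is why a DS domain that restricts itself to the Class Selector Codepoints can interoperate with legacy nodes: the legacy node's precedence classification agrees with the DS-compliant interpretation for exactly these eight codepoints.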
Nodes which are non-DS-compliant and which are not legacy nodes may exhibit unpredictable forwarding behaviors for packets with non-zero DS codepoints.

Differentiated services depend on the resource allocation mechanisms provided by per-hop behavior implementations in nodes. The quality or statistical assurance level of a service may break down in the event that traffic transits a non-DS-compliant node, or a non-DS-capable domain.

We will examine two separate cases. The first case concerns the use of non-DS-compliant nodes within a DS domain. Note that PHB forwarding is primarily useful for allocating scarce node and link resources in a controlled manner. On high-speed, lightly loaded links, the worst-case packet delay, jitter, and loss may be negligible, and the use of a non-DS-compliant node on the upstream end of such a link may not result in service degradation. In more realistic circumstances, the lack of PHB forwarding in a node may make it impossible to offer low-delay, low-loss, or provisioned bandwidth services across paths which traverse the node. However, use of a legacy node may be an acceptable alternative, assuming that the DS domain restricts itself to using only the Class Selector Codepoints defined in [DSFIELD], and assuming that the particular precedence implementation in the legacy node provides forwarding behaviors which are compatible with the services offered along paths which traverse that node. Note that it is important to restrict the codepoints in use to the Class Selector Codepoints, since the legacy node may or may not interpret bits 3-6 in accordance with [RFC1349], thereby resulting in unpredictable forwarding results.

The second case concerns the behavior of services which traverse non-DS-capable domains.
We assume for the sake of argument that a non-DS-capable domain does not deploy traffic conditioning functions on domain boundary nodes; therefore, even in the event that the domain consists of legacy or DS-compliant interior nodes, the lack of traffic enforcement at the boundaries will limit the ability to consistently deliver some types of services across the domain. A DS domain and a non-DS-capable domain may negotiate an agreement which governs how egress traffic from the DS domain should be marked before entry into the non-DS-capable domain. This agreement might be monitored for compliance by traffic sampling instead of by rigorous traffic conditioning. Alternatively, where there is knowledge that the non-DS-capable domain consists of legacy nodes, the upstream DS domain may opportunistically re-mark differentiated services traffic to one or more of the Class Selector Codepoints. Where there is no knowledge of the traffic management capabilities of the downstream domain, and no agreement in place, a DS domain egress node may choose to re-mark DS codepoints to zero, under the assumption that the non-DS-capable domain will treat the traffic uniformly with best-effort service.

In the event that a non-DS-capable domain peers with a DS domain, traffic flowing from the non-DS-capable domain should be conditioned at the DS ingress node of the DS domain according to the appropriate SLA or policy.

5. Multicast Considerations

Use of differentiated services by multicast traffic introduces a few issues for service provisioning. First, multicast packets which enter a DS domain at an ingress node may simultaneously take multiple paths through some segments of the domain due to multicast packet replication. In this way they consume more network resources than unicast packets.
Where multicast group membership is dynamic, it is difficult to predict in advance the amount of network resources that may be consumed by multicast traffic originating from an upstream network for a particular group. A consequence of this uncertainty is that it may be difficult to provide quantitative service guarantees to multicast senders. Further, it may be necessary to reserve codepoints and PHBs for exclusive use by unicast traffic, to provide resource isolation from multicast traffic.

The second issue is the selection of the DS codepoint for a multicast packet arriving at a DS ingress node. Because that packet may exit the DS domain at multiple DS egress nodes which peer with multiple downstream domains, the DS codepoint used should not result in the request for a service from a downstream DS domain which is in violation of a peering SLA. When establishing classifier and traffic conditioner state at a DS ingress node for an aggregate of traffic receiving a differentiated service which spans the egress boundary of the domain, the identity of the adjacent downstream transit domain and the specifics of the corresponding peering SLA can be factored into the configuration decision (subject to routing policy and the stability of the routing infrastructure). In this way peering SLAs with downstream DS domains can be partially enforced at the ingress of the upstream domain, reducing the classification and traffic conditioning burden at the egress node of the upstream domain. This is not so easily performed in the case of multicast traffic, due to the possibility of dynamic group membership. The result is that the service guarantees for unicast traffic may be impacted.
One means of addressing this problem is to establish a separate peering SLA for multicast traffic, and either to utilize a particular set of codepoints for multicast packets, or to implement the necessary classification and traffic conditioning mechanisms in the DS egress nodes to provide preferential isolation for unicast traffic in conformance with the peering SLA with the downstream domain.

6. Security and Tunneling Considerations

This section addresses security issues raised by the introduction of differentiated services, primarily the potential for denial-of-service attacks, and the related potential for theft of service by unauthorized traffic (Sec. 6.1). In addition, the operation of differentiated services in the presence of IPsec and its interaction with IPsec are also discussed (Sec. 6.2), as well as auditing requirements (Sec. 6.3). This section considers issues introduced by the use of both IPsec and non-IPsec tunnels.

6.1 Theft and Denial of Service

The primary goal of differentiated services is to allow different levels of service to be provided for traffic streams on a common network infrastructure. A variety of resource management techniques may be used to achieve this, but the end result will be that some packets receive different (e.g., better) service than others. The mapping of network traffic to the specific behaviors that result in different (e.g., better or worse) service is indicated primarily by the DS field, and hence an adversary may be able to obtain better service by modifying the DS field to codepoints indicating behaviors used for enhanced services, or by injecting packets with the DS field set to such codepoints.
Taken to its limits, this theft of service becomes a denial-of-service attack when the modified or injected traffic depletes the resources available to forward it and other traffic streams. The defense against such theft- and denial-of-service attacks consists of the combination of traffic conditioning at DS boundary nodes along with security and integrity of the network infrastructure within a DS domain.

As described in Sec. 2.2, DS ingress nodes must condition all traffic entering a DS domain to ensure that it has acceptable DS codepoints. This means that the codepoints must conform to the applicable traffic conditioning agreement(s) and the domain's service provisioning policy. Hence, the ingress nodes are the primary line of defense against theft- and denial-of-service attacks based on modified DS codepoints (e.g., codepoints to which the traffic is not entitled), as success of any such attack constitutes a violation of the applicable TCA(s) and/or SPP. An important instance of an ingress node is that any traffic-originating node in a DS domain is the ingress node for that traffic, and must ensure that all originated traffic carries acceptable DS codepoints.

Both a domain's service provisioning policy and traffic conditioning agreements may require the ingress nodes to change the DS codepoint on some entering packets (e.g., an ingress router may set the DS codepoint of a customer's traffic in accordance with the appropriate SLA). Ingress nodes must condition all other inbound traffic to ensure that the DS codepoints are acceptable; packets found to have unacceptable codepoints must either be discarded or must have their DS codepoints modified to acceptable values before being forwarded.
For example, an ingress node receiving traffic from a domain with which no enhanced service agreement exists may reset the DS codepoint to the Default PHB codepoint [DSFIELD]. Traffic authentication may be required to validate the use of some DS codepoints (e.g., those corresponding to enhanced services), and such authentication may be performed by technical means (e.g., IPsec) and/or non-technical means (e.g., the inbound link is known to be connected to exactly one customer site).

An inter-domain agreement may reduce or eliminate the need for ingress node traffic conditioning by making the upstream domain partly or completely responsible for ensuring that traffic has DS codepoints acceptable to the downstream domain. In this case, the ingress node may still perform redundant traffic conditioning checks to reduce the dependence on the upstream domain (e.g., such checks can prevent theft-of-service attacks from propagating across the domain boundary). If such a check fails because the upstream domain is not fulfilling its responsibilities, that failure is an auditable event; the generated audit log entry should include the date/time the packet was received, the source and destination IP addresses, and the DS codepoint that caused the failure. In practice, the limited gains from such checks need to be weighed against their potential performance impact in determining what, if any, checks to perform under these circumstances.

Interior nodes in a DS domain may rely on the DS field to associate differentiated services traffic with the behaviors used to implement enhanced services. Any node doing so depends on the correct operation of the DS domain to prevent the arrival of traffic with unacceptable DS codepoints.
Robustness concerns dictate that the arrival of packets with unacceptable DS codepoints must not cause the failure (e.g., crash) of network nodes. Interior nodes are not responsible for enforcing the service provisioning policy (or individual SLAs) and hence are not required to check DS codepoints before using them. Interior nodes may perform some traffic conditioning checks on DS codepoints (e.g., check for DS codepoints that are never used for traffic on a specific link) to improve security and robustness (e.g., resistance to theft-of-service attacks based on DS codepoint modifications). Any detected failure of such a check is an auditable event, and the generated audit log entry should include the date/time the packet was received, the source and destination IP addresses, and the DS codepoint that caused the failure. In practice, the limited gains from such checks need to be weighed against their potential performance impact in determining what, if any, checks to perform at interior nodes.

Any link that cannot be adequately secured against modification of DS codepoints or traffic injection by adversaries should be treated as a boundary link (and hence any arriving traffic on that link is treated as if it were entering the domain at an ingress node). Local security policy provides the definition of "adequately secured," and such a definition may include a determination that the risks and consequences of DS codepoint modification and/or traffic injection do not justify any additional security measures for a link. Link security can be enhanced via physical access controls and/or software means such as tunnels that ensure packet integrity.
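The ingress conditioning and audit logging described in this section can be sketched as follows. This is a minimal illustration under assumptions of ours, not a normative algorithm: the policy representation (a set of acceptable codepoints plus a re-mark target), the function name, and the log-record field names are all hypothetical; only the minimum audit fields (arrival time, source and destination addresses, offending codepoint) come from the text.

```python
import datetime

DEFAULT_DSCP = 0b000000  # Default PHB codepoint [DSFIELD]

def condition_packet(dscp, src, dst, acceptable, remark_to=DEFAULT_DSCP,
                     audit_log=None):
    """Return the (possibly re-marked) DSCP for a packet entering a
    DS domain, or None to discard if so configured.

    A failed check is an auditable event; the log entry carries the
    minimum fields named in the text: the date/time the packet was
    received, the source and destination IP addresses, and the DS
    codepoint that caused the failure.
    """
    if dscp in acceptable:
        return dscp
    if audit_log is not None:
        audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "src": src, "dst": dst, "dscp": dscp,
        })
    return remark_to  # re-mark to an acceptable value (or None: discard)

# Example addresses are from the RFC 5737 documentation ranges.
log = []
assert condition_packet(0b101110, "192.0.2.1", "198.51.100.2",
                        acceptable={0b101110}, audit_log=log) == 0b101110
assert condition_packet(0b101110, "192.0.2.1", "198.51.100.2",
                        acceptable={0b000000}, audit_log=log) == DEFAULT_DSCP
assert len(log) == 1 and log[0]["dscp"] == 0b101110
```

The same routine fits both ingress nodes (where conditioning is mandatory) and interior nodes performing optional checks; the difference is only in how `acceptable` is populated and whether the performance cost of the check is judged worthwhile.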
6.2 IPsec and Tunneling Interactions

The IPsec protocol, as defined in [ESP, AH], does not include the IP header's DS field in any of its cryptographic calculations (in the case of tunnel mode, it is the outer IP header's DS field that is not included). Hence modification of the DS field by a network node has no effect on IPsec's end-to-end security, because it cannot cause any IPsec integrity check to fail. As a consequence, IPsec does not provide any defense against an adversary's modification of the DS field (i.e., a man-in-the-middle attack), as the adversary's modification will also have no effect on IPsec's end-to-end security. In some environments, the ability to modify the DS field without affecting IPsec integrity checks may constitute a covert channel; if it is necessary to eliminate such a channel or reduce its bandwidth, the DS domains should be configured so that the required processing (e.g., set all DS fields on sensitive traffic to a single value) can be performed at DS egress nodes where traffic exits higher security domains.

IPsec's tunnel mode provides security for the encapsulated IP header's DS field. A tunnel mode IPsec packet contains two IP headers: an outer header supplied by the tunnel ingress node and an encapsulated inner header supplied by the original source of the packet. When an IPsec tunnel is hosted (in whole or in part) on a differentiated services network, the intermediate network nodes operate on the DS field in the outer header. At the tunnel egress node, IPsec processing includes stripping the outer header and forwarding the packet (if required) using the inner header.
If the inner IP header has not been processed by a DS ingress node for the tunnel egress node's DS domain, the tunnel egress node is the DS ingress node for traffic exiting the tunnel, and hence must carry out the corresponding traffic conditioning responsibilities (see Sec. 6.1). If the IPsec processing includes a sufficiently strong cryptographic integrity check of the encapsulated packet (where sufficiency is determined by local security policy), the tunnel egress node can safely assume that the DS field in the inner header has the same value as it had at the tunnel ingress node. This allows a tunnel egress node in the same DS domain as the tunnel ingress node to safely treat a packet passing such an integrity check as if it had arrived from another node within the same DS domain, omitting the DS ingress node traffic conditioning that would otherwise be required. An important consequence is that otherwise insecure links internal to a DS domain can be secured by a sufficiently strong IPsec tunnel.

This analysis and its implications apply to any tunneling protocol that performs integrity checks, but the level of assurance of the inner header's DS field depends on the strength of the integrity check performed by the tunneling protocol. In the absence of sufficient assurance for a tunnel that may transit nodes outside the current DS domain (or is otherwise vulnerable), the encapsulated packet must be treated as if it had arrived at a DS ingress node from outside the domain.

The IPsec protocol currently requires that the inner header's DS field not be changed by IPsec decapsulation processing at a tunnel egress node.
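The decapsulation decision above reduces to a simple predicate, sketched below under our own naming; "sufficiently strong" is whatever local security policy determines, so it appears here only as a boolean input.

```python
def trust_inner_ds_field(integrity_check_passed: bool,
                         integrity_sufficient: bool,
                         same_ds_domain: bool) -> bool:
    """Whether a tunnel egress node may skip DS ingress traffic
    conditioning for a decapsulated packet.

    Only when the tunnel's integrity check passed, local security
    policy deems that check sufficiently strong, and the tunnel
    ingress node lies in the same DS domain may the packet be treated
    as if it had arrived from another interior node; in every other
    case the egress node must act as a DS ingress node (Sec. 6.1).
    """
    return integrity_check_passed and integrity_sufficient and same_ds_domain

# A strong tunnel from another domain still requires ingress
# conditioning; a strong intra-domain tunnel does not.
assert trust_inner_ds_field(True, True, same_ds_domain=True)
assert not trust_inner_ds_field(True, True, same_ds_domain=False)
assert not trust_inner_ds_field(True, False, same_ds_domain=True)
```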
This ensures that an adversary's modifications to the DS field cannot be used to launch theft- or denial-of-service attacks across an IPsec tunnel endpoint, as any such modifications will be discarded at the tunnel endpoint. This document makes no change to that IPsec requirement.

If the IPsec specifications are modified in the future to permit a tunnel egress node to modify the DS field in an inner IP header based on the DS field value in the outer header (e.g., copying part or all of the outer DS field to the inner DS field), then additional considerations would apply. For a tunnel contained entirely within a single DS domain and for which the links are adequately secured against modifications of the outer DS field, the only limits on inner DS field modifications would be those imposed by the domain's service provisioning policy. Otherwise, the tunnel egress node performing such modifications would be acting as a DS ingress node for traffic exiting the tunnel and must carry out the traffic conditioning responsibilities of an ingress node, including defense against theft- and denial-of-service attacks (See Sec. 6.1). If the tunnel enters the DS domain at a node different from the tunnel egress node, the tunnel egress node may depend on the upstream DS ingress node having ensured that the outer DS field values are acceptable. Even in this case, there are some checks that can only be performed by the tunnel egress node (e.g., a consistency check between the inner and outer DS codepoints for an encrypted tunnel). Any detected failure of such a check is an auditable event, and the generated audit log entry should include the date/time the packet was received, the source and destination IP addresses, and the DS codepoint that was unacceptable.
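The inner/outer consistency check mentioned above, which only the tunnel egress node can perform, might look like the following sketch. The equality policy shown is just one plausible choice made for illustration; the actual predicate would be set by the domain's own policy, and a failed check would generate the audit record described in the text.

```python
def check_inner_outer_consistency(inner_dscp: int, outer_dscp: int,
                                  policy=lambda i, o: i == o) -> bool:
    """Compare the decapsulated inner DS codepoint against the outer
    one at the tunnel egress node.  The default policy (strict
    equality) is illustrative only; real domains supply their own
    predicate.  A False result is an auditable event."""
    return policy(inner_dscp, outer_dscp)

# Matching codepoints pass; a mismatch fails under the strict policy
# but could pass under a looser, domain-supplied one.
assert check_inner_outer_consistency(0b101110, 0b101110)
assert not check_inner_outer_consistency(0b101110, 0b000000)
```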
An IPsec tunnel can be viewed in at least two different ways from an architectural perspective. If the tunnel is viewed as a logical single-hop "virtual wire", the actions of intermediate nodes in forwarding the tunneled traffic should not be visible beyond the ends of the tunnel, and hence the DS field should not be modified as part of decapsulation processing. In contrast, if the tunnel is viewed as a multi-hop participant in forwarding traffic, then modification of the DS field as part of tunnel decapsulation processing may be desirable. A specific example of the latter situation occurs when a tunnel terminates at an interior node of a DS domain at which the domain administrator does not wish to deploy traffic conditioning logic (e.g., to simplify traffic management). This could be supported by using the DS codepoint in the outer IP header (which was subject to traffic conditioning at the DS ingress node) to reset the DS codepoint in the inner IP header, effectively moving DS ingress traffic conditioning responsibilities from the IPsec tunnel egress node to the appropriate upstream DS ingress node (which must already perform that function for unencapsulated traffic).

6.3 Auditing

Not all systems that support differentiated services will implement auditing. However, if differentiated services support is incorporated into a system that supports auditing, then the differentiated services implementation should also support auditing. If such support is present, the implementation must allow a system administrator to enable or disable auditing for differentiated services as a whole, and may allow such auditing to be enabled or disabled in part.

For the most part, the granularity of auditing is a local matter.
However, several auditable events are identified in this document, and for each of these events a minimum set of information that should be included in an audit log is defined. Additional information (e.g., packets related to the one that triggered the auditable event) may also be included in the audit log for each of these events, and additional events, not explicitly called out in this specification, may also result in audit log entries. There is no requirement for the receiver to transmit any message to the purported sender in response to the detection of an auditable event, because of the potential to induce denial of service via such action.

7. Acknowledgements

The authors would like to acknowledge the following individuals for their helpful comments and suggestions: Kathleen Nichols, Brian Carpenter, Konstantinos Dovrolis, Shivkumar Kalyana, Wu-chang Feng, Marty Borden, Yoram Bernet, Ronald Bonica, James Binder, Borje Ohlman, Alessio Casati, Scott Brim, Curtis Villamizar, Brahi, Andrew Smith, John Renwick, Werner Almesberger, Alan O'Neill, and James Fu.

8. References

[802.1p]   ISO/IEC Final CD 15802-3 Information technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Common specifications - Part 3: Media Access Control (MAC) bridges, (current draft available as IEEE P802.1D/D15).

[AH]       S. Kent and R. Atkinson, "IP Authentication Header", Internet Draft, July 1998.

[ATM]      ATM Traffic Management Specification Version 4.0, April 1996.

[Bernet]   Y. Bernet, R. Yavatkar, P. Ford, F. Baker, L. Zhang, K. Nichols, and M. Speer, "A Framework for Use of RSVP with Diff-serv Networks", Internet Draft, June 1998.

[DSFIELD]  K. Nichols, S. Blake, F. Baker, and D.
Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers", Internet Draft, August 1998.

[DSFWK]    Y. Bernet, J. Binder, S. Blake, M. Carlson, E. Davies, B. Ohlman, D. Verma, Z. Wang, and W. Weiss, "A Framework for Differentiated Services", Internet Draft, May 1998.

[EXPLICIT] D. Clark and W. Fang, "Explicit Allocation of Best Effort Packet Delivery Service", IEEE/ACM Trans. on Networking, vol. 6, no. 4, August 1998, pp. 362-373.

[Ellesson] E. Ellesson and S. Blake, "A Proposal for the Format and Semantics of the TOS Byte and Traffic Class Byte in IPv4 and IPv6", Internet Draft, November 1997.

[ESP]      S. Kent and R. Atkinson, "IP Encapsulating Security Payload", Internet Draft, July 1998.

[Ferguson] P. Ferguson, "Simple Differential Services: IP TOS and Precedence, Delay Indication, and Drop Preference", Internet Draft, April 1998.

[FRELAY]   ANSI T1S1, "DSSI Core Aspects of Frame Relay", March 1990.

[Heinanen] J. Heinanen, "Use of the IPv4 TOS Octet to Support Differentiated Services", Internet Draft, November 1997.

[MPLSFWK]  R. Callon, P. Doolan, N. Feldman, A. Fredette, G. Swallow, and A. Viswanathan, "A Framework for Multiprotocol Label Switching", Internet Draft, November 1997.

[MPLSTE]   D. Awduche, D. H. Gan, T. Li, G. Swallow, and V. Srinivasan, "Extensions to RSVP for Traffic Engineering", Internet Draft, August 1998.

[RFC791]   Information Sciences Institute, "Internet Protocol", Internet RFC 791, September 1981.

[RFC1349]  P. Almquist, "Type of Service in the Internet Protocol Suite", Internet RFC 1349, July 1992.

[RFC1633]  R. Braden, D. Clark, and S. Shenker, "Integrated Services in the Internet Architecture: An Overview", Internet RFC 1633, July 1994.

[RFC1812]  F.
Baker, editor, "Requirements for IP Version 4 Routers", Internet RFC 1812, June 1995.

[RSVP]     R. Braden et al., "Resource ReSerVation Protocol (RSVP) -- Version 1 Functional Specification", Internet RFC 2205, September 1997.

[SIMA]     K. Kilkki, "Simple Integrated Media Access (SIMA)", Internet Draft, June 1997.

[2BIT]     K. Nichols, V. Jacobson, and L. Zhang, "A Two-bit Differentiated Services Architecture for the Internet", ftp://ftp.ee.lbl.gov/papers/dsarch.pdf, November 1997.

[TR]       ISO/IEC 8802-5 Information technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Common specifications - Part 5: Token Ring Access Method and Physical Layer Specifications, (also ANSI/IEEE Std 802.5-1995), 1995.

[Weiss]    W. Weiss, "Providing Differentiated Services Through Cooperative Dropping and Delay Indication", Internet Draft, March 1998.

Authors' Addresses

Steven Blake
Torrent Networking Technologies
2221 Broadbirch Drive
Silver Spring, MD 20904
Phone: +1-301-625-1600
E-mail: slblake@torrentnet.com

David Black
The Open Group
11 Cambridge Center
Cambridge, MA 02142
Phone: +1-617-621-7347
E-mail: d.black@opengroup.org

Mark A. Carlson
Sun Microsystems, Inc.
2990 Center Green Court South
Boulder, CO 80301
Phone: +1-303-448-0048 x115
E-mail: mark.carlson@sun.com

Elwyn Davies
Nortel UK
London Road
Harlow, Essex CM17 9NA, UK
Phone: +44-1279-405498
E-mail: elwynd@nortel.co.uk

Zheng Wang
Bell Labs Lucent Technologies
101 Crawfords Corner Road
Holmdel, NJ 07733
E-mail: zhwang@bell-labs.com

Walter Weiss
Lucent Technologies
300 Baker Avenue, Suite 100
Concord, MA 01742-2168
E-mail: wweiss@lucent.com

Full Copyright Statement

Copyright (C) The Internet Society (1998). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.
This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.