TEAS Working Group                                               T. Saad
Internet-Draft                                                 V. Beeram
Intended status: Informational                          Juniper Networks
Expires: 5 November 2022                                         J. Dong
                                                     Huawei Technologies
                                                                  B. Wen
                                                                 Comcast
                                                           D. Ceccarelli
                                                              J. Halpern
                                                                Ericsson
                                                                 S. Peng
                                                                 R. Chen
                                                         ZTE Corporation
                                                                  X. Liu
                                                          Volta Networks
                                                            L. Contreras
                                                              Telefonica
                                                                R. Rokui
                                                                   Ciena
                                                                L. Jalil
                                                                 Verizon
                                                              4 May 2022

             Realizing Network Slices in IP/MPLS Networks
                    draft-bestbar-teas-ns-packet-10

Abstract

   Realizing network slices may require the Service Provider to have
   the ability to partition a physical network into multiple logical
   networks of varying sizes, structures, and functions so that each
   slice can be dedicated to specific services or customers.  Multiple
   network slices can be realized on the same network while ensuring
   slice elasticity in terms of network resource allocation.  This
   document describes a scalable solution to realize network slicing
   in IP/MPLS networks by supporting multiple services on top of a
   single physical network.  The solution relies on compliant domains
   and nodes to provide forwarding treatment (scheduling, drop policy,
   resource usage) to packets that carry identifiers indicating the
   slicing service to be applied to them.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 5 November 2022.

Copyright Notice

   Copyright (c) 2022 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Revised BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Revised BSD License.

Table of Contents

   1.  Introduction
     1.1.  Terminology
     1.2.  Acronyms and Abbreviations
   2.  Network Resource Slicing Membership
   3.  IETF Network Slice Realization
     3.1.  Network Topology Filters
     3.2.  IETF Network Slice Service Request
     3.3.  Slice-Flow Aggregation
     3.4.  Path Placement over NRP Filter Topology
     3.5.  NRP Policy Installation
     3.6.  Path Instantiation
     3.7.  Service Mapping
   4.  Network Resource Partition Modes
     4.1.  Data Plane Network Resource Partition Mode
     4.2.  Control Plane Network Resource Partition Mode
     4.3.  Data and Control Plane Network Resource Partition Mode
   5.  Network Resource Partition Instantiation
     5.1.  NRP Policy Definition
       5.1.1.  Network Resource Partition - Flow-Aggregate Selector
       5.1.2.  Network Resource Partition Resource Reservation
       5.1.3.  Network Resource Partition Per Hop Behavior
       5.1.4.  Network Resource Partition Topology
     5.2.  Network Resource Partition Boundary
       5.2.1.  Network Resource Partition Edge Nodes
       5.2.2.  Network Resource Partition Interior Nodes
       5.2.3.  Network Resource Partition Incapable Nodes
       5.2.4.  Combining Network Resource Partition Modes
   6.  Mapping Traffic on Slice-Flow Aggregates
     6.1.  Network Slice-Flow Aggregate Relationships
   7.  Path Selection and Instantiation
     7.1.  Applicability of Path Selection to Slice-Flow Aggregates
     7.2.  Applicability of Path Control Technologies to Slice-Flow
           Aggregates
       7.2.1.  RSVP-TE Based Slice-Flow Aggregate Paths
       7.2.2.  SR Based Slice-Flow Aggregate Paths
   8.  Network Resource Partition Protocol Extensions
   9.  Outstanding Issues
   10. IANA Considerations
   11. Security Considerations
   12. Acknowledgement
   13. Contributors
   14. References
     14.1.  Normative References
     14.2.  Informative References
   Authors' Addresses
1.  Introduction

   Network slicing allows a Service Provider to create independent and
   logical networks on top of a shared physical network
   infrastructure.  Such network slices can be offered to customers or
   used internally by the Service Provider to enhance the delivery of
   their service offerings.  A Service Provider can also use network
   slicing to structure and organize the elements of its
   infrastructure.  The solution discussed in this document works with
   any path control technology (such as RSVP-TE or SR) that can be
   used by a Service Provider to realize network slicing in IP/MPLS
   networks.

   [I-D.ietf-teas-ietf-network-slices] provides the definition of a
   network slice for use within the IETF and discusses the general
   framework for requesting and operating IETF Network Slices, their
   characteristics, and the necessary system components and
   interfaces.  It also discusses the function of an IETF Network
   Slice Controller and the requirements on its northbound and
   southbound interfaces.

   This document introduces the notion of a Slice-Flow Aggregate,
   which comprises one or more IETF network slice traffic streams.  It
   also describes the Network Resource Partition (NRP) and the NRP
   Policy that can be used to instantiate control and data plane
   behaviors on select topological elements associated with the NRP
   that supports a Slice-Flow Aggregate; refer to Section 5.1 for
   further details.

   The IETF Network Slice Controller is responsible for the
   aggregation of multiple IETF network slice traffic streams into a
   Slice-Flow Aggregate, and for maintaining the mapping required
   between them.  The mechanisms used by the controller to determine
   the mapping of one or more IETF network slices to a Slice-Flow
   Aggregate are outside the scope of this document.  The focus of
   this document is on the mechanisms required at the device level to
   address the requirements of network slicing in packet networks.

   In a Diffserv (DS) domain [RFC2475], packets requiring the same
   forwarding treatment (scheduling and drop policy) are classified
   and marked with the respective Class Selector (CS) codepoint (or
   the Traffic Class (TC) field for MPLS packets [RFC5462]) at the DS
   domain ingress nodes.  Such packets are said to belong to a
   Behavior Aggregate (BA) that has a common set of behavioral
   characteristics or a common set of delivery requirements.  At
   transit nodes, the CS is inspected to determine the specific
   forwarding treatment to be applied before the packet is forwarded.
   A similar approach is adopted in this document to realize network
   slicing.  The solution proposed in this document does not mandate
   that Diffserv be enabled in the network to provide a specific
   forwarding treatment.

   When logical networks associated with an NRP are realized on top of
   a shared physical network infrastructure, it is important to steer
   traffic onto the specific network resource partition that is
   allocated for a given Slice-Flow Aggregate.  In packet networks,
   the packets of a specific Slice-Flow Aggregate may be identified by
   one or more specific fields carried within the packet.  An NRP
   ingress boundary node (where Slice-Flow Aggregate traffic enters
   the NRP) populates the respective field(s) in packets that are
   mapped to a Slice-Flow Aggregate in order to allow interior NRP
   nodes to identify those packets and apply the specific NRP Per Hop
   Behavior (NRP-PHB) associated with the Slice-Flow Aggregate.  The
   NRP-PHB defines the scheduling treatment and, in some cases, the
   packet drop probability.

   If Diffserv is enabled within the network, the Slice-Flow Aggregate
   traffic can further carry a Diffserv CS to enable differentiation
   of forwarding treatments for packets within a Slice-Flow Aggregate.

   For example, when using MPLS as a dataplane, it is possible to
   identify packets belonging to the same Slice-Flow Aggregate by
   carrying an identifier in an MPLS Label Stack Entry (LSE).
   Additional Diffserv classification may be indicated in the Traffic
   Class (TC) bits of the MPLS label to allow further differentiation
   of forwarding treatments for traffic traversing the same NRP.
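   As a non-normative illustration, the following Python sketch packs
   such an identifier into an MPLS LSE using the standard LSE layout
   (20-bit label, 3-bit TC, 1-bit bottom-of-stack flag, 8-bit TTL).
   The label values are hypothetical and carry no assigned meaning.

      def mpls_lse(label, tc=0, s=0, ttl=64):
          # Pack one 32-bit MPLS Label Stack Entry.
          assert 0 <= label < 2**20 and 0 <= tc < 8
          return (label << 12) | (tc << 9) | ((s & 1) << 8) | (ttl & 0xFF)

      # Transport label 9012, then a bottom-of-stack slice identifier
      # label 1001 whose TC bits carry the Diffserv class in the NRP.
      stack = [mpls_lse(9012), mpls_lse(1001, tc=3, s=1)]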
   This document covers different modes of NRPs and discusses how each
   mode can ensure proper placement of Slice-Flow Aggregate paths and
   the respective treatment of Slice-Flow Aggregate traffic.

1.1.  Terminology

   The reader is expected to be familiar with the terminology
   specified in [I-D.ietf-teas-ietf-network-slices].

   The following terminology is used in this document:

   IETF Network Slice:
      refer to the definition of 'IETF network slice' in
      [I-D.ietf-teas-ietf-network-slices].

   IETF Network Slice Controller (NSC):
      refer to the definition in [I-D.ietf-teas-ietf-network-slices].

   Network Resource Partition (NRP):
      refer to the definition in [I-D.ietf-teas-ietf-network-slices].

   Slice-Flow Aggregate (SFA):
      a collection of packets that match an NRP Policy and are given
      the same forwarding treatment.  A Slice-Flow Aggregate comprises
      one or more IETF network slice traffic streams.  The mapping of
      one or more IETF network slices to a Slice-Flow Aggregate is
      maintained by the IETF Network Slice Controller.  The boundary
      nodes may also maintain a mapping of specific IETF network slice
      service(s) to an SFA.

   Network Resource Partition Policy (NRP Policy):
      a policy construct that enables instantiation of mechanisms in
      support of IETF network slice specific control and data plane
      behaviors on select topological elements; the enforcement of an
      NRP Policy results in the creation of an NRP.

   NRP Identifier (NRP-ID):
      an identifier that is globally unique within an NRP domain and
      that can be used in the control or management plane to identify
      the resources associated with the NRP.

   NRP Capable Node:
      a node that supports one of the NRP modes described in this
      document.

   NRP Incapable Node:
      a node that does not support any of the NRP modes described in
      this document.

   Slice-Flow Aggregate Path:
      a path that is set up over the NRP that is associated with a
      specific Slice-Flow Aggregate.

   Slice-Flow Aggregate Packet:
      a packet that traverses the NRP that is associated with a
      specific Slice-Flow Aggregate.

   NRP Filter Topology:
      a set of topological elements associated with a Network Resource
      Partition.

   NRP state aware TE (NRP-TE):
      a mechanism for TE path selection that takes into account the
      available network resources associated with a specific NRP.
1.2.  Acronyms and Abbreviations

   BA:  Behavior Aggregate

   CS:  Class Selector

   NRP-PHB:  NRP Per Hop Behavior as described in Section 5.1.3

   FAS:  Flow Aggregate Selector

   FASL:  Flow Aggregate Selector Label as described in Section 5.1.1

   SLA:  Service Level Agreement

   SLO:  Service Level Objective

   SLE:  Service Level Expectation

   Diffserv:  Differentiated Services

   MPLS:  Multiprotocol Label Switching

   LSP:  Label Switched Path

   RSVP:  Resource Reservation Protocol

   TE:  Traffic Engineering

   SR:  Segment Routing

   VRF:  VPN Routing and Forwarding

   AC:  Attachment Circuit

   CE:  Customer Edge

   PE:  Provider Edge

   PCEP:  Path Computation Element (PCE) Communication Protocol

2.  Network Resource Slicing Membership

   An NRP that supports a Slice-Flow Aggregate can be instantiated
   over parts of an IP/MPLS network (e.g., all or specific network
   resources in the access, aggregation, or core network), and can
   stretch across multiple domains administered by a provider.  The
   NRP topology may comprise dedicated and/or shared network resources
   (e.g., in terms of processing power, storage, and bandwidth).

   The physical network resources may be fully dedicated to a specific
   Slice-Flow Aggregate.  For example, traffic belonging to a
   Slice-Flow Aggregate can traverse dedicated network resources
   without being subjected to contention from traffic of other
   Slice-Flow Aggregates.  Dedicated physical network resource slicing
   allows for simple partitioning of the physical network resources
   amongst Slice-Flow Aggregates without the need to distinguish
   packets traversing the dedicated network resources, since only one
   Slice-Flow Aggregate traffic stream can traverse the dedicated
   resource at any time.

   To optimize network utilization, sharing of the physical network
   resources may be desirable.  In such a case, the same physical
   network resource capacity is divided among multiple NRPs that
   support multiple Slice-Flow Aggregates.  The shared physical
   network resources can be partitioned in the data plane (for example
   by applying hardware policers and shapers) and/or partitioned in
   the control plane by providing a logical representation of the
   physical link that has a subset of the network resources available
   to it.

3.  IETF Network Slice Realization

   Figure 1 describes the steps required to realize an IETF network
   slice service in a provider network using the solution proposed in
   this document.  While Figure 4 of
   [I-D.ietf-teas-ietf-network-slices] provides an abstract
   architecture of an IETF Network Slice, this section intends to
   offer a realization of that architecture specific to IP/MPLS packet
   networks.

   Each of the steps is further elaborated on in a subsequent section.

                          --      --      --
                         |CE|    |CE|    |CE|
                          --      --      --
                        AC :    AC :    AC :
                     ----------------------          -------
                    ( |PE|....|PE|....|PE| )        ( IETF    )
     IETF Network   (  --:     --     :--  )        ( Network )
     Slice Service  (    :............:    )        ( Slice   )
     Request        (  IETF Network Slice  )        (         )  Customer
        v            ----------------------          -------     View
        v     ............................\........./...............
        v                                  \       /             Provider
        v     >>>>>>>>>>>>>>>  Slice-Flow   \     /              View
        v     ^                Aggregate     v   v
        v     ^                Mapping
        v     ^         -----------------------------------------
        v     ^        ( |PE|.......|PE|........|PE|.......|PE| )
     ---------         (  --:        --          :--        --  )
    |         |        (    :...................:               )
    |   NSC   |        (     Network Resource Partition         )
    |         |         -----------------------------------------
    |         |                            ^
    |         |>>>>> Resource              |
     ---------       Partitioning of       |
      v    v         Filter Topology       |
      v    v          -----------------------------      --------
      v    v         (|PE|..-..|PE|... ..|PE|..|PE|)    (          )
      v    v         (  :-- |P| -- :-:  --  :--    )    (  Filter  )
      v    v         (  :.-    -:.......|P|  :-    )    ( Topology )
      v    v         ( |P|...........:-:.......|P| )    (          )
      v    v         (  -      Filter Topology     )     --------
      v    v          -----------------------------          ^
      v     >>>>>>>>>>>> Topology Filter                    ^  /
      v     ...........................\............../...........
      v                                 \            /   Underlay
     ----------                          \          /    (Physical)
    |          |                          \        /     Network
    | Network  |     ----------------------------------------------
    |Controller|    ( |PE|.....-.....|PE|......  |PE|.......|PE|   )
    |          |    (  --     |P|     --  :-...:--           -..:--)
     ----------     (   :      -:.............|P|.........|P|      )
      v             (   -......................:-:..-      -      )
     >>>>>>>        (  |P|.........................|P|......:     )
     Program the    (   -                           -             )
     Network         ----------------------------------------------
     (NRP Policies and Paths)*

     *:  NRP Policy installation and path placement can be centralized
         or distributed.

            Figure 1: IETF network slice realization steps.
3.1.  Network Topology Filters

   The physical network may be filtered into a number of Filter
   Topologies.  Filter actions may include selection of specific nodes
   and links according to their capabilities and are based on
   network-wide policies.  The resulting topologies can be used to
   host IETF Network Slices and provide a useful way for the network
   operator to know that all of the resources used to plan a network
   slice meet specific SLOs.  This step can be done offline during a
   planning activity, or could be performed dynamically as new demands
   arise.

   Section 5.1.4 describes how topology filters can be associated with
   the NRP instantiated by the NRP Policy.

3.2.  IETF Network Slice Service Request

   The customer requests an IETF Network Slice Service specifying the
   CE-AC-PE points of attachment, the connectivity matrix, and the
   SLOs/SLEs as described in [I-D.ietf-teas-ietf-network-slices].
   These capabilities are always provided based on a Service Level
   Agreement (SLA) between the network slice customer and the
   provider.

   This defines the traffic flows that need to be supported when the
   slice is realized.  Depending on the mechanism and encoding of the
   Attachment Circuit (AC), the IETF Network Slice Service may also
   include information that will allow the operator's controllers to
   configure the PEs to determine what customer traffic is intended
   for this IETF Network Slice.

   IETF Network Slice Service Requests are likely to arrive at various
   times in the life of the network, and may also be modified.

3.3.  Slice-Flow Aggregation

   A network may be called upon to support very many IETF Network
   Slices, and this could present scaling challenges in the operation
   of the network.  In order to overcome this, the IETF Network Slice
   streams may be aggregated into groups according to similar
   characteristics.

   A Slice-Flow Aggregate is a construct that comprises the traffic
   flows of one or more IETF Network Slices.  The mapping of IETF
   Network Slices into a Slice-Flow Aggregate is a matter of local
   operator policy and is a function executed by the Controller.  The
   Slice-Flow Aggregate may be preconfigured, created on demand, or
   modified dynamically.
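   As a purely illustrative sketch, the following Python fragment
   shows the kind of controller-side bookkeeping implied above.  The
   identifiers and grouping are hypothetical; how an NSC actually
   groups slices is a matter of operator policy.

      from collections import defaultdict

      # Slice-Flow Aggregate id -> set of IETF Network Slice services
      sfa_members = defaultdict(set)

      def map_slice_to_sfa(slice_id, sfa_id):
          # Record that the slice's traffic streams are carried in
          # the given Slice-Flow Aggregate.
          sfa_members[sfa_id].add(slice_id)

      map_slice_to_sfa("slice-svc-17", "sfa-low-latency")
      map_slice_to_sfa("slice-svc-42", "sfa-low-latency")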
3.4.  Path Placement over NRP Filter Topology

   Depending on the underlying network technology, the paths are
   selected in the network in order to best deliver the SLOs for the
   different services carried by the Slice-Flow Aggregate.  The path
   placement function (carried out on the ingress node or by a
   controller) is performed on the Filter Topology that is selected to
   support the Slice-Flow Aggregate.

   Note that this step may indicate the need to increase the capacity
   of the underlying Filter Topology or to create a new Filter
   Topology.

3.5.  NRP Policy Installation

   A Controller function programs the physical network with policies
   for handling the traffic flows belonging to the Slice-Flow
   Aggregate.  These policies instruct the underlying routers how to
   handle traffic for a specific Slice-Flow Aggregate: the routers
   correlate markers carried in the packets with the Slice-Flow
   Aggregate to which the packets belong.  The way in which the NRP
   Policy is installed in the routers and the way that the traffic is
   marked is implementation specific.  The NRP Policy instantiation in
   the network is further described in Section 5.

3.6.  Path Instantiation

   Depending on the underlying network technology, a Controller
   function may install the forwarding state specific to the
   Slice-Flow Aggregate so that traffic is routed along paths derived
   in the Path Placement step described in Section 3.4.  The way in
   which the paths are instantiated is implementation specific.

3.7.  Service Mapping

   The edge points can be configured to support the network slice
   service by mapping the customer traffic to Slice-Flow Aggregates,
   possibly using information supplied when the IETF network slice
   service was requested.  The edge points may also be instructed to
   mark the packets so that the network routers will know which
   policies and routing instructions to apply.  The steering of
   traffic onto Slice-Flow Aggregate paths is further described in
   Section 6.

4.  Network Resource Partition Modes

   An NRP Policy can be used to dictate whether the network resource
   partitioning of the shared network resources among multiple
   Slice-Flow Aggregates is achieved:

   a)  in the data plane only,

   b)  in the control plane only, or

   c)  in both the control and data planes.

4.1.  Data Plane Network Resource Partition Mode

   The physical network resources can be partitioned on network
   devices by applying a Per Hop forwarding Behavior (PHB) to packets
   that traverse the network devices.  In the Diffserv model, a Class
   Selector (CS) codepoint is carried in the packet and is used by
   transit nodes to apply the PHB that determines the scheduling
   treatment and drop probability for packets.

   When the data plane NRP mode is applied, packets need to be
   forwarded on the specific NRP that supports the Slice-Flow
   Aggregate to ensure that the proper forwarding treatment dictated
   in the NRP Policy is applied (refer to Section 5.1 below).  In this
   case, a Flow Aggregate Selector (FAS) must be carried in each
   packet to identify the Slice-Flow Aggregate that it belongs to.

   The ingress node of an NRP domain adds a FAS field to each
   Slice-Flow Aggregate packet if one is not already present.  In the
   data plane NRP mode, the transit nodes within an NRP domain use the
   FAS to associate packets with a Slice-Flow Aggregate and to
   determine the Network Resource Partition Per Hop Behavior (NRP-PHB)
   that is applied to the packet (refer to Section 5.1.3 for further
   details).  The CS is used to apply a Diffserv PHB to the packet to
   allow differentiation of traffic treatment within the same
   Slice-Flow Aggregate.

   When the data plane only NRP mode is used, routers may rely on a
   network state independent view of the topology to determine the
   best paths.  In this case, the best path selection dictates the
   forwarding path of packets to the destination.  The FAS field
   carried in each packet determines the specific NRP-PHB treatment
   along the selected path.
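   The following minimal Python sketch (hypothetical table contents,
   not a specification of any router's behavior) illustrates the
   transit-node logic in this mode: the FAS selects the NRP-PHB, and
   the Diffserv CS selects the treatment within the aggregate.

      # FAS label -> NRP-PHB profile for the Slice-Flow Aggregate
      NRP_PHB = {1001: "sfa1-profile", 1002: "sfa2-profile"}

      def classify(pkt):
          # Return the (NRP-PHB, Diffserv CS) pair used to schedule
          # the packet at this hop.
          phb = NRP_PHB.get(pkt["fas"], "default-profile")
          return phb, pkt.get("cs", 0)

      print(classify({"fas": 1001, "cs": 3}))   # ('sfa1-profile', 3)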
4.2.  Control Plane Network Resource Partition Mode

   Multiple NRPs can be realized over the same set of physical
   resources.  Each NRP is identified by an identifier (NRP-ID) that
   is globally unique within the NRP domain.  The NRP state
   reservations for each NRP can be maintained on the network element
   or on a controller.

   The network reservation states for a specific partition can be
   represented in a topology that contains all or a subset of the
   physical network elements (nodes and links) and reflects the
   network state reservations in that NRP.  The logical network
   resources that appear in the NRP topology can reflect part of, the
   whole of, or more than the physical network resource capacity
   (e.g., when oversubscription is desirable).

   For example, the physical link bandwidth can be divided into
   fractions, each dedicated to an NRP that supports a Slice-Flow
   Aggregate.  The topology associated with the NRP supporting a
   Slice-Flow Aggregate can be used by routing protocols, or by the
   ingress/PCE when computing NRP state aware TE paths.

   To perform NRP state aware Traffic Engineering (NRP-TE), the
   resource reservation on each link needs to be NRP aware.  The NRP
   reservation state can be managed locally on the device or off the
   device (e.g., on a controller).

   The same physical link may be a member of multiple NRP Policies
   that instantiate different NRPs.  The NRP reservable or utilized
   bandwidth on such a link is updated (and may be advertised)
   whenever new paths are placed in the network.  The NRP reservation
   state, in this case, is maintained on each device or off the device
   on a resource reservation manager that holds reservation states for
   those links in the network.

   Multiple NRPs that support Slice-Flow Aggregates can form a group
   and share the available network resources allocated to each of
   them.  In this case, a node can update the reservable bandwidth for
   each NRP to take into consideration the available bandwidth from
   other NRPs in the same group.

   For illustration purposes, Figure 2 describes bandwidth
   partitioning or sharing amongst a group of NRPs.  In Figure 2a, the
   NRPs identified by NRP-IDs NRP1, NRP2, NRP3, and NRP4 do not share
   any bandwidth with each other.  In Figure 2b, NRP1 and NRP2 can
   share the available bandwidth portion allocated to each of them.
   Similarly, NRP3 and NRP4 can share amongst themselves any available
   bandwidth allocated to them, but they cannot share available
   bandwidth allocated to NRP1 or NRP2.  In both cases, the Max
   Reservable Bandwidth may exceed the actual physical link resource
   capacity to allow for oversubscription.

   I-----------------------------I    I-----------------------------I
   I  <--NRP1->                  I    I  I-----------------I        I
   I  I---------I                I    I  I  <-NRP1->       I        I
   I  I         I                I    I  I  I-------I      I        I
   I  I---------I                I    I  I  I       I      I        I
   I                             I    I  I  I-------I      I        I
   I  <-----NRP2------>          I    I  I                 I        I
   I  I-----------------I        I    I  I  <-NRP2->       I        I
   I  I                 I        I    I  I  I---------I    I        I
   I  I-----------------I        I    I  I  I         I    I        I
   I                             I    I  I  I---------I    I        I
   I  <---NRP3---->              I    I  I                 I        I
   I  I-------------I            I    I  I   NRP1 + NRP2   I        I
   I  I             I            I    I  I-----------------I        I
   I  I-------------I            I    I                             I
   I                             I    I  I-----------------I        I
   I  <---NRP4---->              I    I  I  <-NRP3->       I        I
   I  I-------------I            I    I  I  I-------I      I        I
   I  I             I            I    I  I  I       I      I        I
   I  I-------------I            I    I  I  I-------I      I        I
   I                             I    I  I                 I        I
   I    NRP1+NRP2+NRP3+NRP4      I    I  I  <-NRP4->       I        I
   I-----------------------------I    I  I  I---------I    I        I
   <--Max Reservable Bandwidth-->     I  I  I         I    I        I
                                      I  I  I---------I    I        I
                                      I  I                 I        I
                                      I  I   NRP3 + NRP4   I        I
                                      I  I-----------------I        I
                                      I    NRP1+NRP2+NRP3+NRP4      I
                                      I-----------------------------I
                                      <--Max Reservable Bandwidth-->

    (a) No bandwidth sharing          (b) Sharing bandwidth between
        between NRPs.                     NRPs of the same group.

          Figure 2: Bandwidth isolation/sharing among NRPs.
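   A minimal Python sketch of the sharing behavior in Figure 2b
   follows; the admission-control semantics and the numbers are
   assumptions made purely for illustration.

      # Mb/s allocated per NRP, grouped as in Figure 2b
      groups = {"g1": {"NRP1": 400, "NRP2": 200},
                "g2": {"NRP3": 100, "NRP4": 100}}
      reserved = {n: 0 for g in groups.values() for n in g}

      def admit(group, nrp, bw):
          # Admit a reservation if the group's pooled allocation can
          # still cover it (members may borrow each other's unused
          # share, but never from the other group).
          pool = sum(groups[group].values())
          in_use = sum(reserved[n] for n in groups[group])
          if in_use + bw <= pool:
              reserved[nrp] += bw
              return True
          return False

      print(admit("g1", "NRP1", 500))   # True: borrows NRP2's share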
4.3.  Data and Control Plane Network Resource Partition Mode

   In order to support strict guarantees for Slice-Flow Aggregates,
   the network resources can be partitioned in both the control plane
   and the data plane.

   The control plane partitioning allows the creation of customized
   topologies per NRP, each of which supports a Slice-Flow Aggregate.
   The ingress routers or a Path Computation Element (PCE) may use the
   customized topologies and the NRP state to determine optimal path
   placement for specific demand flows using NRP-TE.

   The data plane partitioning provides isolation for Slice-Flow
   Aggregate traffic, and protection when resource contention occurs
   due to bursts of traffic from other Slice-Flow Aggregates
   traversing the same shared network resource.

5.  Network Resource Partition Instantiation

   A network slice can span multiple technologies and multiple
   administrative domains.  Depending on the network slice customer
   requirements, a network slice can be differentiated from other
   network slices in terms of data, control, and management planes.

   The customer of a network slice service expresses their intent by
   specifying requirements rather than mechanisms to realize the
   slice, as described in Section 3.2.

   The network slice controller is fed with the network slice service
   intent and realizes it with an appropriate Network Resource
   Partition Policy (NRP Policy).  Multiple IETF network slices are
   mapped to the same Slice-Flow Aggregate as described in
   Section 3.3.

   The network-wide consistent NRP Policy definition is distributed to
   the devices in the network as shown in Figure 1.  The specification
   of the network slice intent on the northbound interface of the
   controller and the mechanism used to map the network slice to a
   Slice-Flow Aggregate are outside the scope of this document and
   will be addressed in separate documents.

5.1.  NRP Policy Definition

   The NRP Policy is a network-wide construct that is supplied to
   network devices, and may include rules that control the following:

   *  Data plane specific policies: This includes the FAS, any
      firewall rules or flow-spec filters, and QoS profiles associated
      with the NRP Policy and any classes within it.

   *  Control plane specific policies: This includes bandwidth
      reservations, any network resource sharing amongst NRP Policies,
      and reservation preference to prioritize reservations of a
      specific NRP over others.

   *  Topology membership policies: This defines the topology filter
      policies that dictate node/link/function membership to a
      specific NRP.

   There is a desire for flexibility in realizing network slices to
   support the services across networks consisting of implementations
   from multiple vendors.  These networks may also be grouped into
   disparate domains and deploy various path control technologies and
   tunneling techniques to carry traffic across the network.  It is
   expected that a standardized data model for the NRP Policy will
   facilitate the instantiation and management of the NRP on the
   topological elements selected by the NRP Policy topology filter.

   It is also possible to distribute the NRP Policy to network devices
   using several mechanisms, including protocols such as NETCONF or
   RESTCONF, or by exchanging it using a suitable routing protocol
   that network devices participate in (such as IGP(s) or BGP).  The
   extensions that enable specific protocols to carry an NRP Policy
   definition will be described in separate documents.

5.1.1.  Network Resource Partition - Flow-Aggregate Selector

   A router should be able to identify a packet belonging to a
   Slice-Flow Aggregate before it can apply the associated data plane
   forwarding treatment or NRP-PHB.  One or more fields within the
   packet are used as an FAS to achieve this.

   Forwarding Address Based FAS:

      It is possible to assign a different forwarding address (or MPLS
      forwarding label in the case of an MPLS network) for each
      Slice-Flow Aggregate on a specific node in the network.
      [RFC3031] states in Section 2.1 that: 'Some routers analyze a
      packet's network layer header not merely to choose the packet's
      next hop, but also to determine a packet's "precedence" or
      "class of service"'.  Assigning a unique forwarding address (or
      MPLS forwarding label) to each Slice-Flow Aggregate allows
      Slice-Flow Aggregate packets destined to a node to be
      distinguished by the destination address (or MPLS forwarding
      label) that is carried in the packet.

      This approach requires maintaining per Slice-Flow Aggregate
      state for each destination in the network in both the control
      and data planes and on each router in the network.  For example,
      consider a network slicing provider with a network composed of
      'N' nodes, each with 'K' adjacencies to its neighbors.  Assuming
      a node can be reached over 'M' different Slice-Flow Aggregates,
      the node assigns and advertises reachability to 'M' unique
      forwarding addresses or MPLS forwarding labels.  Similarly, each
      node assigns a unique forwarding address (or MPLS forwarding
      label) for each of its 'K' adjacencies to enable strict steering
      over the adjacency for each slice.  The total number of control
      and data plane states that need to be stored and programmed in a
      router's forwarding table is (N+K)*M.  Hence, as the 'N', 'K',
      and 'M' parameters increase, this approach suffers from
      scalability challenges in both the control and data planes.
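      To make the growth concrete, a small worked example with
      illustrative numbers:

         # Per-router state for forwarding-address-based FAS
         N, K, M = 1000, 10, 8        # nodes, adjacencies, aggregates
         print((N + K) * M)           # 8080 entries on every router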
   Global Identifier Based FAS:

      An NRP Policy may include a Global Identifier FAS (G-FAS) field
      that is carried in each packet in order to associate it with the
      NRP supporting a Slice-Flow Aggregate, independent of the
      forwarding address or MPLS forwarding label that is bound to the
      destination.  Routers within the NRP domain can use the
      forwarding address (or MPLS forwarding label) to determine the
      forwarding next-hop(s), and use the G-FAS field in the packet to
      infer the specific forwarding treatment that needs to be applied
      to the packet.

      The G-FAS can be carried in one of multiple fields within the
      packet, depending on the dataplane used.  For example, in MPLS
      networks, the G-FAS can be encoded within an MPLS label that is
      carried in the packet's MPLS label stack.  All packets that
      belong to the same Slice-Flow Aggregate may carry the same G-FAS
      in the MPLS label stack.  It is also possible to have multiple
      G-FASs map to the same Slice-Flow Aggregate.

      The G-FAS can be encoded in an MPLS label and may appear in
      several positions in the MPLS label stack.  For example, the VPN
      service label may act as a G-FAS to allow VPN packets to be
      mapped to the Slice-Flow Aggregate.  In this case, a single VPN
      service label acting as a G-FAS may be allocated by all egress
      PEs of a VPN.  Alternatively, multiple VPN service labels may
      act as G-FASs that map a single VPN to the same Slice-Flow
      Aggregate, allowing multiple egress PEs to allocate different
      VPN service labels for a VPN.  In other cases, a range of VPN
      service labels acting as multiple G-FASs may map the traffic of
      multiple VPNs to a single Slice-Flow Aggregate.  An example of
      such a deployment is shown in Figure 3.

   SR Adj-SID:                G-FAS (VPN service label) on PE2: 1001
     9012: P1-P2
     9023: P2-PE2

     /-----\        /-----\        /-----\        /-----\
     | PE1 | ------ |  P1 | ------ |  P2 | ------ | PE2 |
     \-----/        \-----/        \-----/        \-----/

   In
   packet:
    +------+        +------+        +------+        +------+
    |  IP  |        | 9012 |        | 9023 |        | 1001 |
    +------+        +------+        +------+        +------+
    | Pay- |        | 9023 |        | 1001 |        |  IP  |
    | Load |        +------+        +------+        +------+
    +------+        | 1001 |        |  IP  |        | Pay- |
                    +------+        +------+        | Load |
                    |  IP  |        | Pay- |        +------+
                    +------+        | Load |
                    | Pay- |        +------+
                    | Load |
                    +------+

        Figure 3: G-FAS or VPN label at bottom of label stack.

      In some cases, the G-FAS may not be at a fixed position in the
      MPLS label stack.  In this case, the G-FAS label can appear at
      any position in the MPLS label stack.  To enable a transit
      router to identify the position of the G-FAS label, a special
      purpose label (shown as 'FAI' in Figure 4) can be used to
      indicate the presence of a G-FAS in the MPLS label stack, as
      shown in Figure 4.

   SR Adj-SID:                G-FAS: 1001
     9012: P1-P2
     9023: P2-PE2

     /-----\        /-----\        /-----\        /-----\
     | PE1 | ------ |  P1 | ------ |  P2 | ------ | PE2 |
     \-----/        \-----/        \-----/        \-----/

   In
   packet:
    +------+        +------+        +------+        +------+
    |  IP  |        | 9012 |        | 9023 |        | FAI  |
    +------+        +------+        +------+        +------+
    | Pay- |        | 9023 |        | FAI  |        | 1001 |
    | Load |        +------+        +------+        +------+
    +------+        | FAI  |        | 1001 |        |  IP  |
                    +------+        +------+        +------+
                    | 1001 |        |  IP  |        | Pay- |
                    +------+        +------+        | Load |
                    |  IP  |        | Pay- |        +------+
                    +------+        | Load |
                    | Pay- |        +------+
                    | Load |
                    +------+

         Figure 4: FAI and G-FAS label in the label stack.

      When the slice is realized over an IP dataplane, the G-FAS can
      be encoded in the IP header (e.g., in an IPv6 extension header).
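      A minimal Python sketch of the lookup a transit node could
      perform when the G-FAS position is not fixed follows; the FAI
      value used here is hypothetical, as no special purpose label is
      allocated by this document.

         FAI = 15   # hypothetical special purpose label value

         def find_g_fas(label_stack):
             # Return the G-FAS label that follows the FAI, if any.
             for i, label in enumerate(label_stack[:-1]):
                 if label == FAI:
                     return label_stack[i + 1]
             return None

         print(find_g_fas([9023, 15, 1001]))   # -> 1001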
5.1.2.  Network Resource Partition Resource Reservation

   Bandwidth and network resource allocation strategies for NRP
   Policies are essential to achieve optimal placement of paths within
   the network while still meeting the target SLOs.

   Resource reservation allows for the management of available
   bandwidth and the prioritization of existing allocations to enable
   preference-based preemption when contention on a specific network
   resource arises.  Sharing of a network resource's available
   bandwidth amongst a group of NRPs may also be desirable.  For
   example, a Slice-Flow Aggregate may not be using all of the NRP
   reservable bandwidth; this allows other NRPs in the same group to
   use the available bandwidth resources for other Slice-Flow
   Aggregates.

   Congestion on shared network resources may result from sub-optimal
   placement of paths of different NRP Policies.  When this occurs,
   preemption of some Slice-Flow Aggregate paths may be desirable to
   alleviate congestion.  A preference-based allocation scheme enables
   prioritization of the Slice-Flow Aggregate paths that can be
   preempted.

   Since network characteristics and state can change over time, the
   NRP topology and its network state need to be propagated in the
   network to enable ingress TE routers or Path Computation Elements
   (PCEs) to perform accurate path placement based on the current
   state of the NRP network resources.
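   The following Python sketch illustrates one possible
   preference-based preemption scheme; the semantics, in which paths
   with lower preference values are preempted first, are an assumption
   made for illustration only.

      def preempt(paths, needed_bw):
          # paths: list of (path_id, preference, bw); paths with the
          # lowest preference value are preempted first until enough
          # bandwidth is freed.  Returns the preempted path ids.
          victims = []
          for pid, _, bw in sorted(paths, key=lambda p: p[1]):
              if needed_bw <= 0:
                  break
              victims.append(pid)
              needed_bw -= bw
          return victims

      print(preempt([("p1", 10, 50), ("p2", 5, 30)], needed_bw=40))
      # -> ['p2', 'p1']: the preference-5 path is preempted first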
5.1.3.  Network Resource Partition Per Hop Behavior

   In Diffserv terminology, the forwarding behavior that is assigned
   to a specific class is called a Per Hop Behavior (PHB).  The PHB
   defines the forwarding precedence that a marked packet with a
   specific CS receives in relation to other traffic on the
   Diffserv-aware network.

   The NRP Per Hop Behavior (NRP-PHB) is the externally observable
   forwarding behavior applied to a specific packet belonging to a
   Slice-Flow Aggregate.  The goal of an NRP-PHB is to provide a
   specified amount of network resources for traffic belonging to a
   specific Slice-Flow Aggregate.  A single NRP may also support
   multiple forwarding treatments or services that can be carried over
   the same logical network.

   The Slice-Flow Aggregate traffic may be identified at NRP ingress
   boundary nodes by carrying a FAS to allow routers to apply a
   specific forwarding treatment that guarantees the SLA(s).

   With Differentiated Services (Diffserv) it is possible to carry
   multiple services over a single converged network.  Packets
   requiring the same forwarding treatment are marked with a CS at
   domain ingress nodes.  Up to eight classes or Behavior Aggregates
   (BAs) may be supported for a given Forwarding Equivalence Class
   (FEC) [RFC2475].  To support multiple forwarding treatments over
   the same Slice-Flow Aggregate, a Slice-Flow Aggregate packet may
   also carry a Diffserv CS to identify the specific Diffserv
   forwarding treatment to be applied to the traffic belonging to the
   same NRP.

   At transit nodes, the CS field carried inside the packets is used
   to determine the specific PHB, which determines the forwarding and
   scheduling treatment and, in some cases, the drop probability for
   each packet before it is forwarded.

5.1.4.  Network Resource Partition Topology

   A key element of the NRP Policy is a customized topology that may
   include the full physical network topology or a subset of it.  The
   NRP topology could also span multiple administrative domains and/or
   multiple dataplane technologies.

   An NRP topology can overlap or share a subset of links with another
   NRP topology.  A number of topology filtering policies can be
   defined as part of the NRP Policy to limit the specific topology
   elements that belong to the NRP.  For example, a topology filtering
   policy can leverage Resource Affinities, as defined in [RFC2702],
   to include or exclude certain links on which the NRP is
   instantiated in support of the Slice-Flow Aggregate.

   The NRP Policy may also include a reference to a predefined
   topology (e.g., derived from a Flexible Algorithm Definition (FAD)
   as defined in [I-D.ietf-lsr-flex-algo], or a Multi-Topology ID as
   defined in [RFC4915]).

5.2.  Network Resource Partition Boundary

   A network slice originates at the edge nodes of a network slice
   provider.  Traffic that is steered over the corresponding NRP
   supporting a Slice-Flow Aggregate may traverse NRP capable as well
   as NRP incapable interior nodes.

   The network slice may encompass one or more domains administered by
   a provider, for example, an organization's intranet or an ISP's
   network.  The network provider is responsible for ensuring that
   adequate network resources are provisioned and/or reserved to
   support the SLAs offered by the network end-to-end.

5.2.1.  Network Resource Partition Edge Nodes

   NRP edge nodes sit at the boundary of a network slice provider
   network and receive traffic that requires steering over network
   resources specific to an NRP that supports a Slice-Flow Aggregate.
   These edge nodes are responsible for identifying Slice-Flow
   Aggregate specific traffic flows by possibly inspecting multiple
   fields from inbound packets (e.g., implementations may inspect the
   IP traffic's network 5-tuple in the IP and transport protocol
   headers) to decide on which NRP the traffic can be steered.

   Network slice ingress nodes may condition the inbound traffic at
   network boundaries in accordance with the requirements or rules of
   each service's SLAs.  The requirements and rules for network slice
   services are set using mechanisms which are outside the scope of
   this document.

   When the data plane NRP mode is employed, the NRP ingress nodes are
   responsible for adding a suitable FAS to packets that belong to a
   specific Slice-Flow Aggregate.  In addition, edge nodes may mark
   the corresponding Diffserv CS to differentiate between different
   types of traffic carried over the same Slice-Flow Aggregate.
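   As an illustration only (the rule format, addresses, and marking
   values below are hypothetical), an edge-node classifier of this
   kind could be sketched as:

      import ipaddress

      # (src_prefix, dst_prefix, proto, dst_port) -> (FAS, CS)
      rules = [(("10.1.0.0/16", "10.2.0.0/16", 17, 5060), (1001, 5))]

      def classify(src, dst, proto, dport):
          for (sp, dp, pr, po), mark in rules:
              if (ipaddress.ip_address(src) in ipaddress.ip_network(sp)
                      and ipaddress.ip_address(dst)
                          in ipaddress.ip_network(dp)
                      and proto == pr and dport == po):
                  return mark    # FAS and CS to impose on the packet
          return None            # no matching rule: default handling

      print(classify("10.1.2.3", "10.2.3.4", 17, 5060))   # (1001, 5)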
5.2.2.  Network Resource Partition Interior Nodes

   An NRP interior node receives slice traffic and may be able to
   identify the packets belonging to a specific Slice-Flow Aggregate
   by inspecting the FAS field carried inside each packet, or by
   inspecting other fields within the packet that may identify the
   traffic streams that belong to a specific Slice-Flow Aggregate.
   For example, when the data plane NRP mode is applied, interior
   nodes can use the FAS carried within the packet to apply the
   corresponding NRP-PHB forwarding behavior.  Nodes within the
   network slice provider network may also inspect the Diffserv CS
   within each packet to apply a per Diffserv class PHB within the NRP
   Policy, and allow differentiation of forwarding treatments for
   packets forwarded over the same NRP that supports the Slice-Flow
   Aggregate.

5.2.3.  Network Resource Partition Incapable Nodes

   Packets that belong to a Slice-Flow Aggregate may need to traverse
   nodes that are NRP incapable.  In this case, several options are
   possible to allow the slice traffic to continue to be forwarded
   over such devices and to resume the NRP forwarding treatment once
   the traffic reaches devices that are NRP capable.

   When the data plane NRP mode is employed, packets carry a FAS to
   allow slice interior nodes to identify them.  To support end-to-end
   network slicing, the FAS is maintained in the packets as they
   traverse devices within the network, including both NRP capable and
   NRP incapable devices.

   For example, when the FAS is an MPLS label at the bottom of the
   MPLS label stack, packets can traverse devices that are NRP
   incapable without any further considerations.  On the other hand,
   when the Flow Aggregate Selector Label (FASL) is at the top of the
   MPLS label stack, packets can be bypassed (or tunneled) over the
   NRP incapable devices towards the next device that supports the
   NRP, as shown in Figure 5.

   SR Node-SID:      FASL: 1001      @@@: NRP Policy enforced
     1601: P1                        ...: NRP Policy not enforced
     1602: P2
     1603: P3
     1604: P4
     1605: P5

     @@@@@@@@@@@@@@            ........................
                                                      .
     /-----\        /-----\        /-----\            .
     |  P1 | ------ |  P2 | ------ |  P3 |            .
     \-----/        \-----/        \-----/            .
        |                                    @@@@@@@@@@
        |
     /-----\        /-----\
     |  P4 | ------ |  P5 |
     \-----/        \-----/

    +------+        +------+        +------+
    | 1001 |        | 1604 |        | 1001 |
    +------+        +------+        +------+
    | 1605 |        | 1001 |        |  IP  |
    +------+        +------+        +------+
    |  IP  |        | 1605 |        | Pay- |
    +------+        +------+        | Load |
    | Pay- |        |  IP  |        +------+
    | Load |        +------+
    +------+        | Pay- |
                    | Load |
                    +------+

     Figure 5: Extending network slice over NRP incapable device(s).

5.2.4.  Combining Network Resource Partition Modes

   It is possible to employ a combination of the NRP modes that were
   discussed in Section 4 to realize a network slice.  For example,
   the data and control plane NRP mode can be employed in parts of a
   network, while the control plane NRP mode can be employed in the
   other parts of the network.  The path selection, in such a case,
   can take into account the NRP available network resources.  The
   FAS carried within packets allows transit nodes to enforce the
   corresponding NRP-PHB on the parts of the network that apply the
   data plane NRP mode.  The FAS can be maintained while traffic
   traverses nodes that do not enforce the data plane NRP mode, so
   that NRP-PHB enforcement can resume once traffic traverses capable
   nodes.
6.  Mapping Traffic on Slice-Flow Aggregates

   The usual techniques for steering traffic onto paths are applicable
   when steering traffic over the paths established for a specific
   Slice-Flow Aggregate.

   For example, one or more (layer-2 or layer-3) VPN services can be
   directly mapped to paths established for a Slice-Flow Aggregate.
   In this case, the per Virtual Routing and Forwarding (VRF) instance
   traffic that arrives at the Provider Edge (PE) router over external
   interfaces can be directly mapped to a specific Slice-Flow
   Aggregate path.  External interfaces can be further partitioned
   (e.g., using VLANs) to allow mapping of one or more VLANs to
   specific Slice-Flow Aggregate paths.

   Another option is to steer traffic to specific destinations
   directly over multiple NRP Policies.  This allows traffic arriving
   on any external interface and targeted to such destinations to be
   directly steered over the Slice-Flow Aggregate paths.

   A third option is to use a data plane firewall filter or classifier
   to enable matching of several fields in the incoming packets to
   decide whether the packet belongs to a specific Slice-Flow
   Aggregate.  This option allows a rich set of rules to be applied to
   identify specific packets to be mapped to a Slice-Flow Aggregate.
   However, it requires data plane network resources to be able to
   perform the additional checks in hardware.

6.1.  Network Slice-Flow Aggregate Relationships

   The following describes the generalization relationships between
   the IETF network slice and the different parts of the solution as
   described in Figure 1.

   o  A customer may request one or more IETF Network Slices.

   o  Any given Attachment Circuit (AC) may support the traffic for
      one or more IETF Network Slices.  If there is more than one IETF
      Network Slice using a single AC, the IETF Network Slice Service
      request must include enough information to allow the edge nodes
      to demultiplex the traffic for the different IETF Network
      Slices.

   o  By definition, multiple IETF Network Slices may be mapped to a
      single Slice-Flow Aggregate.  However, it is possible for a
      Slice-Flow Aggregate to contain just a single IETF Network
      Slice.

   o  The physical network may be filtered into multiple Filter
      Topologies.  Each such Filter Topology facilitates planning the
      placement of paths for the Slice-Flow Aggregate by presenting
      only the subset of links and nodes that meet specific criteria.
      Note, however, that in the absence of any Filter Topology,
      Slice-Flow Aggregates are free to operate over the full physical
      network.

   o  It is anticipated that there may be very many IETF Network
      Slices supported by a network operator over a single physical
      network.  A network may support a limited number of Slice-Flow
      Aggregates, with each of the Slice-Flow Aggregates grouping any
      number of the IETF Network Slice streams.

7.  Path Selection and Instantiation

7.1.  Applicability of Path Selection to Slice-Flow Aggregates

   In state-dependent TE [I-D.ietf-teas-rfc3272bis], the path
   selection adapts based on the current state of the network.  The
   state of the network can be based on parameters flooded by the
   routers as described in [RFC2702].  The link state is advertised
   with current reservations, thereby reflecting the available
   bandwidth on each link.  Such link reservations may be maintained
   centrally on a network-wide network resource manager, or
   distributed on devices (as is usually done with RSVP-TE).  TE
   extensions exist today to allow IGPs (e.g., [RFC3630] and
   [RFC5305]) and BGP-LS [RFC7752] to advertise such link state
   reservations.

   When the network resource reservations are maintained for NRPs, the
   link state can carry per NRP state (e.g., reservable bandwidth).
   This allows path computation to take into account the specific
   network resources available for an NRP.  In this case, we refer to
   the process of path placement and path provisioning as NRP aware TE
   (NRP-TE).
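   A minimal Python sketch of NRP aware path computation follows
   (illustrative only, not any implementation's algorithm): links
   whose per-NRP available bandwidth is insufficient are pruned, and a
   shortest path is computed on what remains.

      import heapq

      def nrp_spf(links, src, dst, nrp, bw):
          # links: {(u, v): {"metric": m, "avail": {nrp_id: bw}}}
          adj = {}
          for (u, v), l in links.items():
              if l["avail"].get(nrp, 0) >= bw:   # per-NRP pruning
                  adj.setdefault(u, []).append((v, l["metric"]))
          best, pq = {}, [(0, src, [src])]
          while pq:
              d, u, path = heapq.heappop(pq)
              if u == dst:
                  return path
              if u in best and best[u] <= d:
                  continue
              best[u] = d
              for v, m in adj.get(u, []):
                  heapq.heappush(pq, (d + m, v, path + [v]))
          return None

      links = {("A", "B"): {"metric": 1, "avail": {"NRP1": 100}},
               ("B", "C"): {"metric": 1, "avail": {"NRP1": 40}}}
      print(nrp_spf(links, "A", "C", "NRP1", bw=50))  # None: B-C pruned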
7.2.  Applicability of Path Control Technologies to Slice-Flow
      Aggregates

   The NRP modes described in this document are agnostic to the
   technology used to set up the paths that carry Slice-Flow Aggregate
   traffic.  One or more paths connecting the endpoints of the mapped
   IETF network slices may be selected to steer the corresponding
   traffic streams over the resources allocated for the NRP that
   supports a Slice-Flow Aggregate.

   The feasible paths can be computed using the NRP topology and
   network state, subject to the optimization metrics and constraints.

7.2.1.  RSVP-TE Based Slice-Flow Aggregate Paths

   RSVP-TE [RFC3209] can be used to signal LSPs over the computed
   feasible paths in order to carry the Slice-Flow Aggregate traffic.
   The specific extensions to the RSVP-TE protocol required to enable
   signaling of NRP aware RSVP-TE LSPs are outside the scope of this
   document.

7.2.2.  SR Based Slice-Flow Aggregate Paths

   Segment Routing (SR) [RFC8402] can be used to set up and steer
   traffic over the computed Slice-Flow Aggregate feasible paths.

   The SR architecture defines a number of building blocks that can be
   leveraged to support the realization of NRPs that support
   Slice-Flow Aggregates in an SR network.

   Such building blocks include:

   *  SR Policy with or without Flexible Algorithm.

   *  Steering of service (e.g., VPN) traffic over SR paths.

   *  SR Operations, Administration, and Maintenance (OAM) and
      Performance Management (PM).

   SR allows a headend node to steer packets onto specific SR paths
   using a Segment Routing Policy (SR Policy).  The SR Policy supports
   various optimization objectives and constraints and can be used to
   steer Slice-Flow Aggregate traffic in the SR network.

   The SR Policy can be instantiated with or without the IGP Flexible
   Algorithm (Flex-Algorithm) feature.  It may be possible to dedicate
   a single SR Flex-Algorithm to compute and instantiate SR paths for
   the traffic of a single Slice-Flow Aggregate.  In this case, the SR
   Flex-Algorithm computed paths and Flex-Algorithm SR SIDs are not
   shared by the traffic of other Slice-Flow Aggregates.  However, to
   allow for better scale, it may be desirable for the traffic of
   multiple Slice-Flow Aggregates to share the same SR Flex-Algorithm
   computed paths and SIDs.

8.  Network Resource Partition Protocol Extensions

   Routing protocols may need to be extended to carry additional per
   NRP link state.  For example, [RFC5305], [RFC3630], and [RFC7752]
   are IS-IS, OSPF, and BGP protocol extensions to exchange network
   link state information to allow ingress TE routers and PCE(s) to
   perform proper path placement in the network.  The extensions
   required to support network slicing may be defined in other
   documents, and are outside the scope of this document.

   The instantiation of an NRP Policy may need to be automated.
   Multiple options are possible to facilitate the automation of the
   distribution of an NRP Policy to capable devices.

   For example, a YANG data model for the NRP Policy may be supported
   on network devices and controllers.  A suitable transport (e.g.,
   NETCONF [RFC6241], RESTCONF [RFC8040], or gRPC) may be used to
   enable configuration and retrieval of state information for NRP
   Policies on network devices.  The NRP Policy YANG data model is
   outside the scope of this document.
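   For illustration only, the following sketch shows the kind of
   information such a data model might carry; no YANG model is defined
   here, and all field names and values are hypothetical.

      nrp_policy = {
          "nrp-id": 100,
          "data-plane": {
              "fas": {"type": "g-fas", "mpls-label": 1001},
              "nrp-phb": "guaranteed-profile-1",
          },
          "control-plane": {
              "reservable-bandwidth": "10Gbps",
              "preference": 10,
              "shared-group": "group-1",
          },
          "topology": {"filter": "include-affinity: low-latency"},
      }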
9.  Outstanding Issues

   Note to RFC Editor: Please remove this section prior to
   publication.

   This section records non-blocking issues that were raised during
   the Working Group Adoption Poll for the document.  The list of
   issues below needs to be fully addressed before the document
   progresses to publication by the IESG.

   1.   Add a new Appendix section with examples for the NRP modes
        described in Section 4.

   2.   Add text to clarify the relationship between Slice-Flow
        Aggregates, the NRP Policy, and the NRP.

   3.   Remove redundant references to Diffserv behaviors.

   4.   Elaborate on the SFA packet treatment when no rules to
        associate the packet to an NRP are defined in the NRP Policy.

   5.   Clarify the NRP instantiation through the NRP Policy
        enforcement.

   6.   Clarify how the solution caters to the different IETF Network
        Slice Service Demarcation Point locations described in
        Section 4.2 of [I-D.ietf-teas-ietf-network-slices].

   7.   Clarify the relationship between the underlay physical
        network, the Filter Topology, and the NRP resources.

   8.   Expand on how isolation between NRPs can be realized depending
        on the deployed NRP mode.

   9.   Revise Section 5.2.3 to describe how nodes can discover NRP
        incapable downstream neighbors.

   10.  Expand Section 11 on additional security threats introduced
        with the solution.

   11.  Expand Section 5.2 on NRP domain boundary and multi-domain
        aspects.

10.  IANA Considerations

   This document has no IANA actions.

11.  Security Considerations

   The main goal of network slicing is to allow for varying treatment
   of traffic from multiple different network slices that are
   utilizing a common network infrastructure, and to allow for
   different levels of service to be provided for traffic traversing
   a given network resource.

   A variety of techniques may be used to achieve this, but the end
   result will be that some packets may be mapped to specific
   resources and may receive different (e.g., better) service
   treatment than others.  The mapping of network traffic to a
   specific NRP is indicated primarily by the FAS, and hence an
   adversary may be able to utilize resources allocated to a specific
   NRP by injecting packets that carry the same FAS field.

   Such theft of service may become a denial-of-service attack when
   the modified or injected traffic depletes the resources available
   to forward legitimate traffic belonging to a specific NRP.

   The defense against this type of theft and denial-of-service
   attacks consists of a combination of traffic conditioning at NRP
   domain boundaries with security and integrity of the network
   infrastructure within an NRP domain.
12.  Acknowledgement

   The authors would like to thank Krzysztof Szarkowicz, Swamy SRK,
   Navaneetha Krishnan, Prabhu Raj Villadathu Karunakaran, and Mohamed
   Boucadair for their review of this document, and for providing
   valuable feedback on it.  The authors would also like to thank
   Adrian Farrel for detailed discussions that resulted in Section 3.

13.  Contributors

   The following individuals contributed to this document:

      Colby Barth
      Juniper Networks
      Email: cbarth@juniper.net

      Srihari R. Sangli
      Juniper Networks
      Email: ssangli@juniper.net

      Chandra Ramachandran
      Juniper Networks
      Email: csekar@juniper.net

      Adrian Farrel
      Old Dog Consulting
      United Kingdom
      Email: adrian@olddog.co.uk

14.  References

14.1.  Normative References

   [RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan,
              V., and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
              Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
              <https://www.rfc-editor.org/info/rfc3209>.

   [RFC3630]  Katz, D., Kompella, K., and D. Yeung, "Traffic
              Engineering (TE) Extensions to OSPF Version 2",
              RFC 3630, DOI 10.17487/RFC3630, September 2003,
              <https://www.rfc-editor.org/info/rfc3630>.

   [RFC5305]  Li, T. and H. Smit, "IS-IS Extensions for Traffic
              Engineering", RFC 5305, DOI 10.17487/RFC5305, October
              2008, <https://www.rfc-editor.org/info/rfc5305>.

   [RFC7752]  Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A.,
              and S. Ray, "North-Bound Distribution of Link-State and
              Traffic Engineering (TE) Information Using BGP",
              RFC 7752, DOI 10.17487/RFC7752, March 2016,
              <https://www.rfc-editor.org/info/rfc7752>.

14.2.  Informative References

   [I-D.ietf-lsr-flex-algo]
              Psenak, P., Hegde, S., Filsfils, C., Talaulikar, K., and
              A. Gulko, "IGP Flexible Algorithm", Work in Progress,
              Internet-Draft, draft-ietf-lsr-flex-algo-19, 7 April
              2022, <https://datatracker.ietf.org/doc/html/draft-ietf-
              lsr-flex-algo-19>.

   [I-D.ietf-teas-ietf-network-slices]
              Farrel, A., Drake, J., Rokui, R., Homma, S., Makhijani,
              K., Contreras, L. M., and J. Tantsura, "Framework for
              IETF Network Slices", Work in Progress, Internet-Draft,
              draft-ietf-teas-ietf-network-slices-10, 27 March 2022,
              <https://datatracker.ietf.org/doc/html/draft-ietf-teas-
              ietf-network-slices-10>.

   [I-D.ietf-teas-rfc3272bis]
              Farrel, A., "Overview and Principles of Internet Traffic
              Engineering", Work in Progress, Internet-Draft,
              draft-ietf-teas-rfc3272bis-16, 24 March 2022,
              <https://datatracker.ietf.org/doc/html/draft-ietf-teas-
              rfc3272bis-16>.

   [RFC2475]  Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z.,
              and W. Weiss, "An Architecture for Differentiated
              Services", RFC 2475, DOI 10.17487/RFC2475, December
              1998, <https://www.rfc-editor.org/info/rfc2475>.

   [RFC2702]  Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and
              J. McManus, "Requirements for Traffic Engineering Over
              MPLS", RFC 2702, DOI 10.17487/RFC2702, September 1999,
              <https://www.rfc-editor.org/info/rfc2702>.

   [RFC3031]  Rosen, E., Viswanathan, A., and R. Callon,
              "Multiprotocol Label Switching Architecture", RFC 3031,
              DOI 10.17487/RFC3031, January 2001,
              <https://www.rfc-editor.org/info/rfc3031>.

   [RFC4915]  Psenak, P., Mirtorabi, S., Roy, A., Nguyen, L., and P.
              Pillay-Esnault, "Multi-Topology (MT) Routing in OSPF",
              RFC 4915, DOI 10.17487/RFC4915, June 2007,
              <https://www.rfc-editor.org/info/rfc4915>.

   [RFC5462]  Andersson, L. and R. Asati, "Multiprotocol Label
              Switching (MPLS) Label Stack Entry: "EXP" Field Renamed
              to "Traffic Class" Field", RFC 5462,
              DOI 10.17487/RFC5462, February 2009,
              <https://www.rfc-editor.org/info/rfc5462>.

   [RFC6241]  Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J.,
              Ed., and A. Bierman, Ed., "Network Configuration
              Protocol (NETCONF)", RFC 6241, DOI 10.17487/RFC6241,
              June 2011, <https://www.rfc-editor.org/info/rfc6241>.

   [RFC8040]  Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF
              Protocol", RFC 8040, DOI 10.17487/RFC8040, January 2017,
              <https://www.rfc-editor.org/info/rfc8040>.
   [RFC8402]  Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L.,
              Decraene, B., Litkowski, S., and R. Shakir, "Segment
              Routing Architecture", RFC 8402, DOI 10.17487/RFC8402,
              July 2018, <https://www.rfc-editor.org/info/rfc8402>.

Authors' Addresses

   Tarek Saad
   Juniper Networks
   Email: tsaad@juniper.net

   Vishnu Pavan Beeram
   Juniper Networks
   Email: vbeeram@juniper.net

   Jie Dong
   Huawei Technologies
   Email: jie.dong@huawei.com

   Bin Wen
   Comcast
   Email: Bin_Wen@cable.comcast.com

   Daniele Ceccarelli
   Ericsson
   Email: daniele.ceccarelli@ericsson.com

   Joel Halpern
   Ericsson
   Email: joel.halpern@ericsson.com

   Shaofu Peng
   ZTE Corporation
   Email: peng.shaofu@zte.com.cn

   Ran Chen
   ZTE Corporation
   Email: chen.ran@zte.com.cn

   Xufeng Liu
   Volta Networks
   Email: xufeng.liu.ietf@gmail.com

   Luis M. Contreras
   Telefonica
   Email: luismiguel.contrerasmurillo@telefonica.com

   Reza Rokui
   Ciena
   Email: rrokui@ciena.com

   Luay Jalil
   Verizon
   Email: luay.jalil@verizon.com