2 BESS Z. Zhang 3 Internet-Draft Juniper Networks 4 Intended status: Standards Track R. Raszuk 5 Expires: October 13, 2022 NTT Network Innovations 6 D. Pacella 7 Verizon 8 A. Gulko 9 Edward Jones Wealth Management 10 April 11, 2022 12 Controller Based BGP Multicast Signaling 13 draft-ietf-bess-bgp-multicast-controller-09 15 Abstract 17 This document specifies a way that one or more centralized 18 controllers can use BGP to set up multicast distribution trees 19 (identified by either IP source/destination address pair, mLDP FEC, 20 or SR-P2MP Tree-ID) in a network. Since the controllers calculate 21 the trees, they can use sophisticated algorithms and constraints to 22 achieve traffic engineering. The controllers directly signal dynamic 23 replication state to tree nodes, leading to very simple multicast 24 control plane on the tree nodes, as if they were using static routes. 25 This can be used for both underlay and overlay multicast trees, 26 including replacing BGP-MVPN signaling. 28 Requirements Language 30 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 31 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 32 "OPTIONAL" in this document are to be interpreted as described in BCP 33 14 [RFC2119] [RFC8174] when, and only when, they appear in all 34 capitals, as shown here. 36 Status of This Memo 38 This Internet-Draft is submitted in full conformance with the 39 provisions of BCP 78 and BCP 79. 41 Internet-Drafts are working documents of the Internet Engineering 42 Task Force (IETF). Note that other groups may also distribute 43 working documents as Internet-Drafts. The list of current Internet- 44 Drafts is at https://datatracker.ietf.org/drafts/current/.
46 Internet-Drafts are draft documents valid for a maximum of six months 47 and may be updated, replaced, or obsoleted by other documents at any 48 time. It is inappropriate to use Internet-Drafts as reference 49 material or to cite them other than as "work in progress." 51 This Internet-Draft will expire on October 13, 2022. 53 Copyright Notice 55 Copyright (c) 2022 IETF Trust and the persons identified as the 56 document authors. All rights reserved. 58 This document is subject to BCP 78 and the IETF Trust's Legal 59 Provisions Relating to IETF Documents 60 (https://trustee.ietf.org/license-info) in effect on the date of 61 publication of this document. Please review these documents 62 carefully, as they describe your rights and restrictions with respect 63 to this document. Code Components extracted from this document must 64 include Simplified BSD License text as described in Section 4.e of 65 the Trust Legal Provisions and are provided without warranty as 66 described in the Simplified BSD License. 68 Table of Contents 70 1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 3 71 1.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 3 72 1.2. Resilience . . . . . . . . . . . . . . . . . . . . . . . 4 73 1.3. Signaling . . . . . . . . . . . . . . . . . . . . . . . . 5 74 1.4. Label Allocation . . . . . . . . . . . . . . . . . . . . 6 75 1.4.1. Using a Common per-tree Label for All Routers . . . . 7 76 1.4.2. Upstream-assignment from Controller's Local Label 77 Space . . . . . . . . . . . . . . . . . . . . . . . . 8 78 1.5. Determining Root/Leaves . . . . . . . . . . . . . . . . . 9 79 1.5.1. PIM-SSM/Bidir or mLDP . . . . . . . . . . . . . . . . 9 80 1.5.2. PIM ASM . . . . . . . . . . . . . . . . . . . . . . . 9 81 1.6. Multiple Domains . . . . . . . . . . . . . . . . . . . . 10 82 1.7. SR-P2MP . . . . . . . . . . . . . . . . . . . . . . . . . 11 83 2. Alternative to BGP-MVPN . . . . . . . . . . . . . . . . . . . 11 84 3. Specification . . . . 
. . . . . . . . . . . . . . . . . . . . 13 85 3.1. Enhancements to TEA . . . . . . . . . . . . . . . . . . . 13 86 3.1.1. Any-Encapsulation Tunnel . . . . . . . . . . . . . . 13 87 3.1.2. Load-balancing Tunnel . . . . . . . . . . . . . . . . 14 88 3.1.3. Segment List Tunnel . . . . . . . . . . . . . . . . . 14 89 3.1.4. Receiving MPLS Label Stack . . . . . . . . . . . . . 14 90 3.1.5. RPF Sub-TLV . . . . . . . . . . . . . . . . . . . . . 15 91 3.1.6. Tree Label Stack sub-TLV . . . . . . . . . . . . . . 15 92 3.1.7. Backup Tunnel sub-TLV . . . . . . . . . . . . . . . . 16 93 3.2. Context Label TLV in BGP-LS Node Attribute . . . . . . . 17 94 3.3. Replicate State Route Type . . . . . . . . . . . . . . . 17 95 3.4. SR P2MP Signaling . . . . . . . . . . . . . . . . . . . . 18 96 3.4.1. Replication State Route for SR P2MP . . . . . . . . . 18 97 3.4.2. BGP Community Container for SR P2MP Policy . . . . . 19 98 3.4.3. Tunnel Encapsulation Attribute . . . . . . . . . . . 20 99 3.5. Replication State Route with Label Stack for Tree 100 Identification . . . . . . . . . . . . . . . . . . . . . 21 101 4. Procedures . . . . . . . . . . . . . . . . . . . . . . . . . 21 102 5. Security Considerations . . . . . . . . . . . . . . . . . . . 22 103 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22 104 7. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 23 105 8. References . . . . . . . . . . . . . . . . . . . . . . . . . 23 106 8.1. Normative References . . . . . . . . . . . . . . . . . . 23 107 8.2. Informative References . . . . . . . . . . . . . . . . . 24 108 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 25 110 1. Overview 112 1.1. Introduction 114 [I-D.ietf-bess-bgp-multicast] describes a way to use BGP as a 115 replacement signaling for PIM [RFC7761] or mLDP [RFC6388]. 
The BGP- 116 based multicast signaling described there provides a mechanism for 117 setting up both (s,g)/(*,g) multicast trees (as PIM does, but 118 optionally with labels) and labeled (MPLS) multicast tunnels (as mLDP 119 does). Each router on a tree performs essentially the same 120 procedures as it would perform if using PIM or mLDP, but all the 121 inter-router signaling is done using BGP. 123 These procedures allow the routers to set up a separate tree for each 124 individual multicast (x,g) flow where the 'x' could be either 's' or 125 '*', but they also allow the routers to set up trees that are used 126 for more than one flow. In the latter case, the trees are often 127 referred to as "multicast tunnels" or "multipoint tunnels", and 128 specifically in this document they are mLDP tunnels (except that they 129 are set up with BGP signaling). While the mechanism does not have to 130 be restricted to mLDP tunnels, the mLDP FEC is conveniently borrowed to 131 identify the tunnel. In the rest of the document, the terms tree and 132 tunnel are used interchangeably. 134 The trees/tunnels are set up using the "receiver-initiated join" 135 technique of PIM/mLDP, hop by hop from downstream routers towards the 136 root. The BGP messages of the MCAST-TREE SAFI are either sent hop by hop 137 between downstream routers and their upstream neighbors, or can be 138 reflected by Route Reflectors (RRs). 140 As an alternative to each hop independently determining its upstream 141 router and signaling upstream towards the root (following the PIM/mLDP 142 model), the entire tree can be calculated by a centralized 143 controller, and the signaling can be entirely done from the 144 controller using the same MCAST-TREE SAFI. For that, some additional 145 procedures and optimizations are specified in this document.
147 [I-D.ietf-bess-bgp-multicast] uses S-PMSI, Leaf, and Source Active 148 Auto-Discovery (A-D) routes because the main procedures and concepts 149 are borrowed from BGP-MVPN [RFC6514]. While the same Leaf A-D 150 routes can be used to signal replication state to tree nodes from 151 controllers, this document introduces a new route type "Replication 152 State" for the same functionality, so that familiarity with the BGP- 153 MVPN concepts is not required. 155 While it is outside the scope of this document, signaling from the 156 controllers could be done via other means as well, like Netconf or 157 any other SDN methods. 159 1.2. Resilience 161 Each router could establish direct BGP sessions with one or more 162 controllers, or it could establish BGP sessions with RRs that in turn 163 peer with controllers. For the same tree/tunnel, each controller may 164 independently calculate the tree/tunnel and signal the routers on the 165 tree/tunnel using MCAST-TREE Replication State routes. How the 166 calculation is done is outside the scope of this document. 168 On each router, BGP route selection rules will lead to one 169 controller's route for the tree/tunnel being selected as the active 170 route and used for setting up forwarding state. As long as all the 171 routers on a tree/tunnel consistently pick the same controller's 172 routes for the tree/tunnel, the setup should be consistent. If the 173 tree/tunnel is labeled, different labels will be used by different 174 controllers, so there is no traffic loop issue even if the routers do 175 not consistently select the same controller's routes. In the 176 unlabeled case, to ensure consistency, the selection SHOULD be 177 based solely on the identifier of the controller. 179 Another consistency issue is when a bidirectional tree/tunnel needs 180 to be re-routed.
Because this is no longer triggered hop-by-hop from 181 downstream to upstream, it is possible that the upstream change 182 happens before the downstream one, causing a traffic loop. In the 183 unlabeled case, there is no good solution (other than that the 184 controller issues the upstream change only after it gets an acknowledgement 185 from the downstream). In the labeled case, as long as a new label is 186 used, there should be no problem. 188 Besides the traffic loop issue, there could be transient traffic loss 189 before both the upstream's and downstream's forwarding states are 190 updated. This could be mitigated if the upstream keeps sending 191 traffic on the old path (in addition to the new path) and the 192 downstream keeps accepting traffic on the old path (but not on the new 193 path) for some time. When the downstream switches to the new path is a 194 local matter - it could be data driven (e.g., after traffic 195 arrives on the new path) or timer driven. 197 For each tree, multiple disjoint instances could be calculated and 198 signaled for live-live protection. Different labels are used for 199 different instances, so that the leaves can differentiate incoming 200 traffic on different instances. As far as transit routers are 201 concerned, the instances are just independent. Note that the two 202 instances are not expected to share common transit routers (it is 203 otherwise outside the scope of this document/revision). 205 1.3. Signaling 207 When a router receives a Replication State route, the re- 208 advertisement is blocked if a configured import RT matches the RT of 209 the route, which indicates that this router is the target and 210 consumer of the route and hence the route should not be re-advertised further. 211 The route includes the forwarding information in the form of a Tunnel 212 Encapsulation Attribute (TEA) [RFC9012], with enhancements specified 213 in this document.
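As a sketch of the re-advertisement rule above (in Python, with hypothetical Route Target string values), a router consumes the route when any of its configured import RTs matches an RT carried on the route:

```python
def consume_or_readvertise(route_rts, import_rts):
    """Sketch of the rule above: if a configured import RT matches an RT
    of the Replication State route, this router is the target and consumer,
    and further re-advertisement is blocked."""
    if set(route_rts) & set(import_rts):
        return "consume"        # target router: install state, do not re-advertise
    return "re-advertise"       # not the target: propagate the route
```

The RT strings are illustrative only; an implementation would compare the binary extended-community values.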
215 Suppose that for a particular tree, there are two downstream routers 216 D1 and D2 for a particular upstream router U. A controller C sends 217 one Replication State route to U, with the Tree Node's IP Address 218 field (see Section 3.3) set to U's IP address and the TEA specifying 219 both the two downstream routers and U's upstream (see Section 3.1.5). In 220 this case, the Originating Router's Address field of the Replication 221 State route is set to the controller's address. Note that for a TEA 222 attached to a unicast NLRI, only one of the tunnels in a TEA is used 223 for forwarding a particular packet, while all the tunnels in a TEA 224 are used to reach multiple endpoints when it is attached to a 225 multicast NLRI. 227 U may need to replicate to many downstream routers, 228 say D1 through D1000. In that case, it may not be possible to encode 229 all those branches in a single TEA, or it may not be optimal to update a 230 large TEA when a branch is added/removed. In that case, C may send 231 multiple Replication State routes, each with a different Originating 232 Router's Address field and a different TEA that encodes a subset of 233 the branches. This provides a flexible way to optimize the encoding 234 of a large number of branches and incremental updates of branches. 236 Notice that, in the case of labeled trees, the (x,g), mLDP FEC, or SR- 237 P2MP tree identification (Section 1.7) signaling is actually not 238 needed by transit routers but only by the tunnel root/leaves. 239 However, for consistency among the root/leaf/transit nodes, and for 240 consistency with the hop-by-hop signaling, the same signaling (with 241 tree identification encoded in the NLRI) is used for all routers. 243 Nonetheless, a new NLRI route type of the MCAST-TREE SAFI is defined 244 to encode a label/SID instead of tree identification in the NLRI, for 245 scenarios where there is really no need to signal tree 246 identification, e.g., as described in Section 2.
On a tunnel root, 247 the tree's binding SID can be encoded in the NLRI. 249 For a tree node to acknowledge to the controller that it has received 250 the signaling and installed corresponding forwarding state, it 251 advertises a corresponding Replication State route, with the 252 Originating Router's IP Address set to itself and with a Route Target 253 to match the controller. For comparison, the tree signaling 254 Replication State route from the controller has the Originating 255 Router's IP Address set to the controller and the Route Target 256 matching the tree node. The two Replication State routes (for a 257 controller to signal to a tree node and for a tree node to 258 acknowledge back) differ only in those two aspects. 260 With the acknowledgement Replication State routes, the controller 261 knows if tree setup is complete. The information can be used for 262 many purposes, e.g., the controller may instruct the ingress to start 263 forwarding traffic onto a tree only after it knows that the tree 264 setup has completed. 266 1.4. Label Allocation 268 In the case of labeled multicast signaled hop by hop towards the 269 root, whether it's (x,g) multicast or an "mLDP" tunnel, labels are 270 assigned by a downstream router and advertised to its upstream router 271 (from the traffic direction point of view). In the case of controller- 272 based signaling, routers do not originate tree join routes anymore, 273 so the controllers have to assign labels on behalf of routers, and 274 there are three options for label assignment: 276 o From each router's SRLB that the controller learns 278 o From the common SRGB that the controller learns 280 o From the controller's local label space 282 Assignment from each router's SRLB is no different from each router 283 assigning labels from its own local label space in the hop-by-hop 284 signaling case. The assignments for one router are independent of 285 assignments for another router, even for the same tree.
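The three assignment options listed above might look as follows in a controller; the label ranges and router names are illustrative only, not taken from any specification:

```python
class LabelAllocator:
    """Sketch of the three allocation options: a learned per-router SRLB,
    a learned common SRGB (for a single common per-tree label), or the
    controller's own local (upstream-assigned) label space as fallback."""
    def __init__(self, srlb_by_router=None, srgb=None):
        self.srlb_by_router = srlb_by_router or {}   # router -> iterator over its SRLB
        self.srgb = srgb                             # common SRGB iterator, if learned
        self.local_space = iter(range(1000000, 1048576))  # controller-local labels

    def allocate(self, router=None, per_tree_common=False):
        if per_tree_common and self.srgb is not None:
            return next(self.srgb)                   # one common label for the whole tree
        if router in self.srlb_by_router:
            return next(self.srlb_by_router[router]) # per-router, like hop-by-hop signaling
        return next(self.local_space)                # upstream-assigned from local space
```

A common-SRGB allocation only makes sense when all routers share the SRGB, as the surrounding text notes.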
287 Assignment from the controller's local label space is upstream- 288 assigned [RFC5331]. It is used if the controller does not learn the 289 common SRGB or each router's SRLB. Assignment from the SRGB 290 [RFC8402] is only meaningful if all SRGBs are the same and a single 291 common label is used for all the routers on a tree in the case of a 292 unidirectional tree/tunnel (Section 1.4.1). Otherwise, assignment 293 from the SRLB is preferred. 295 The choice of which of the options to use depends on many factors. 296 An operator may want to use a single common label per tree for ease 297 of monitoring and debugging, but that requires explicit RPF checking 298 and either a common SRGB or upstream-assigned labels, which may not be 299 supported due to software or hardware limitations (e.g., 300 label imposition/disposition limits). In an SR network, assignment 301 from the common SRGB is used if a single common label 302 per unidirectional tree is required; otherwise, assignment from the SRLB is a good 303 choice because it does not require support for context label spaces. 305 1.4.1. Using a Common per-tree Label for All Routers 307 MPLS labels only have local significance. For an LSP that goes 308 through a series of routers, each router allocates a label 309 independently and it swaps the incoming label (that it advertised to 310 its upstream) to an outgoing label (that it received from its 311 downstream) when it forwards a labeled packet. Even if the incoming 312 and outgoing labels happen to be the same on a particular router, 313 that is just incidental. 315 With Segment Routing, it is becoming a common practice that all 316 routers use the same SRGB so that a SID maps to the same label on all 317 routers. This makes it easier for operators to monitor and debug 318 their network.
The same concept applies to multicast trees as well - 319 a common per-tree label can be used for a router to receive traffic 320 from its upstream neighbor and replicate traffic to all its 321 downstream neighbors. 323 However, a common per-tree label can only be used for unidirectional 324 trees. Additionally, unless the entire tree is updated for every 325 tree node to use a new common per-tree label with any change in the 326 tree (no matter how small and local the change is), it requires each 327 router to do an explicit RPF check, so that only packets from its 328 expected upstream neighbor are accepted. Otherwise, a traffic loop may 329 form during topology changes, because the forwarding state update is 330 no longer ordered. 332 Traditionally, P2MP MPLS forwarding does not require an explicit RPF 333 check, as a downstream router advertises a label only to its upstream 334 router and all traffic with that incoming label is presumed to be 335 from the upstream router and accepted. When a downstream router 336 switches to a different upstream router, a different label will be 337 advertised, so it can determine if traffic is from its expected 338 upstream neighbor purely based on the label. Now that a single 339 common label is used by all routers on a tree to send and receive 340 traffic, a router can no longer determine if the traffic is from 341 its expected neighbor just based on that common tree label. 342 Therefore, an explicit RPF check is needed. Instead of interface-based 343 RPF checking as in the PIM case, neighbor-based RPF checking is used - a 344 label identifying the upstream neighbor precedes the common tree 345 label and the receiving router checks if that preceding neighbor 346 label matches its expected upstream neighbor. Notice that this is 347 similar to what's described in Section "9.1.1 Discarding Packets from 348 Wrong PE" of RFC 6513 (an egress PE discards traffic sent from a 349 wrong ingress PE).
The only difference is that one is used for label- 350 based forwarding and the other for (s,g)-based forwarding. 351 [note: for bidirectional trees, we may be able to use two labels per 352 tree - one for upstream traffic and one for downstream traffic. This 353 needs further verification]. 355 Both the common per-tree label and the neighbor label are allocated 356 either from the common SRGB or from the controller's local label 357 space. In the latter case, an additional label identifying the 358 controller's label space is needed, as described in the following 359 section. 361 1.4.2. Upstream-assignment from Controller's Local Label Space 363 In this case, in the multicast packet's label stack, the tree label and 364 the upstream neighbor label (if used, in the case of a single common label per 365 tree) are preceded by a downstream-assigned "context label". The 366 context label identifies a context-specific label space (the 367 controller's local label space), and the upstream-assigned label that 368 follows it is looked up in that space. 370 This specification requires that, in the case of upstream-assignment from 371 a controller's local label space, each router D assign, 372 corresponding to each controller C, a context label that identifies 373 the upstream-assigned label space used by that controller. This 374 label, call it Lc-D, is communicated by D to C via BGP-LS [RFC7752]. 376 Suppose a controller C is setting up a unidirectional tree T. It assigns 377 that tree the label Lt, and assigns label Lu to identify router U, 378 which is the upstream of router D on tree T. C needs to tell U: "to 379 send a packet on the given tree/tunnel, one of the things you have to 380 do is push Lt onto the packet's label stack, then push Lu, then push 381 Lc-D onto the packet's label stack, then unicast the packet to D". 383 Controller C also needs to inform router D of the correspondence 384 between Lt and tree T.
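The label operations above can be sketched as follows, with purely illustrative label values: U pushes Lt, then Lu, then Lc-D, so the stack reads [Lc-D, Lu, Lt] outermost-first; D uses the context label to select the controller's label space and checks Lu against its expected upstream neighbor:

```python
def build_stack(lt, lu, lc_d):
    """Stack U imposes to send to D: push tree label Lt, then the
    upstream-neighbor label Lu, then D's context label Lc-D.
    Returned with the outermost label first."""
    return [lc_d, lu, lt]

def d_accepts(stack, my_context_label, expected_lu, tree_by_label):
    """Sketch of D's lookup: the context label selects the controller's
    upstream-assigned label space, the neighbor label is RPF-checked
    against the expected upstream, and the tree label identifies tree T."""
    lc_d, lu, lt = stack
    if lc_d != my_context_label or lu != expected_lu:
        return None                      # wrong label space or wrong upstream
    return tree_by_label.get(lt)         # the tree T that Lt corresponds to
```

The `tree_by_label` mapping stands in for the Lt-to-T correspondence that C signals to D.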
386 To achieve that, when C sends a Replication State route, for each 387 tunnel in the TEA, it may include a label stack Sub-TLV [RFC9012], 388 with the outer label being the context label Lc-D (received by the 389 controller from the corresponding downstream), the next label being 390 the upstream neighbor label Lu, and the inner label being the label 391 Lt assigned by the controller for the tree. The router receiving the 392 route will use the label stacks to send traffic to its downstreams. 394 For C to signal the expected label stack for D to receive traffic 395 with, we overload a tunnel TLV in the TEA of the Replication State 396 route sent to D - if the tunnel TLV has an RPF sub-TLV 397 (Section 3.1.5), then it indicates that this is actually for 398 receiving traffic from the upstream. 400 1.5. Determining Root/Leaves 402 For the controller to calculate a tree, it needs to determine the 403 root and leaves of the tree. This may be based on provisioning 404 (static or dynamically programmed), or based on BGP signaling as 405 described in the following two sections. 407 In both of the following cases, the BGP updates are targeted at the 408 controller, via an address-specific Route Target with the Global 409 Administration Field set to the controller's address and the Local 410 Administration Field set to 0. 412 1.5.1. PIM-SSM/Bidir or mLDP 414 In this case, the PIM Last Hop Routers (LHRs) with interested 415 receivers or mLDP tunnel leaves encode a Leaf A-D route 416 ([I-D.ietf-bess-bgp-multicast]) with the Upstream Router's IP Address 417 field set to the controller's address and the Originating Router's IP 418 Address set to the address of the LHR or the P2MP tunnel leaf. The 419 encoded PIM SSM source or mLDP FEC provides root information and the 420 Originating Router's IP Address provides leaf information. 422 1.5.2.
PIM ASM 424 In this case, the First Hop Routers (FHRs) originate Source Active 425 routes, which provide root information, and the LHRs originate Leaf 426 A-D routes, encoded as in the PIM-SSM case except that it is (*,G) 427 instead of (S,G). The Leaf A-D routes provide leaf information. 429 1.6. Multiple Domains 431 An end-to-end multicast tree may span multiple routing domains, and 432 the setup of the tree in each domain may be done differently as 433 specified in [I-D.ietf-bess-bgp-multicast]. This section discusses a 434 few aspects specific to controller signaling. 436 Consider two adjacent domains, each with its own controller, in the 437 following configuration where router B is an upstream node of C for a 438 multicast tree: 440 | 441 domain 1 | domain 2 442 | 443 ctrlr1 | ctrlr2 444 /\ | /\ 445 / \ | / \ 446 / \ | / \ 447 A--...-B--|--C--...-D 448 | 450 In the case of native (un-labeled) IP multicast, nothing special is 451 needed. Controller 1 signals B to send traffic out of the B-C link while 452 Controller 2 signals C to accept traffic on the B-C link. 454 In the case of labeled IP multicast or an mLDP tunnel, the controllers 455 may be able to coordinate their actions such that Controller 1 456 signals B to send traffic out of the B-C link with label X while 457 Controller 2 signals C to accept traffic with the same label X on the 458 B-C link. If the coordination is not possible, then C needs to use 459 hop-by-hop BGP signaling to signal towards B, as specified in 460 [I-D.ietf-bess-bgp-multicast].
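The labeled hand-off described above amounts to a simple consistency condition; a sketch with hypothetical field names, where the two segments stitch only if both controllers agree on the link and label:

```python
def handoff_consistent(b_egress, c_ingress):
    """Cross-domain hand-off sketch: controller 1 programs B to send on the
    B-C link with label X, and controller 2 programs C to accept label X on
    the same link. If this does not hold, C must fall back to hop-by-hop
    BGP signaling towards B."""
    return (b_egress["link"] == c_ingress["link"]
            and b_egress["label"] == c_ingress["label"])
```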
462 The configuration could also be as follows, where router B borders 463 both domain 1 and domain 2 and is controlled by both controllers: 465 | 466 domain 1 | domain 2 467 | 468 ctrlr1 | ctrlr2 469 /\ | /\ 470 / \ | / \ 471 / \ | / \ 472 / \|/ \ 473 A--...---B--...---C 474 | 476 As discussed in Section 1.2, when B receives signaling from both 477 Controller 1 and Controller 2, only one of the routes would be 478 selected as the best route and used for programming the forwarding 479 state of the corresponding segment. For B to stitch the two segments 480 together, B is expected to know, by provisioning, that it is a 481 border router, so that it will look for the other segment (represented 482 by the signaling from the other controller) and stitch the two 483 together. 485 1.7. SR-P2MP 487 [I-D.ietf-pim-sr-p2mp-policy] describes an architecture to construct 488 a Point-to-Multipoint (P2MP) tree to deliver Multi-point services in 489 a Segment Routing domain. An SR P2MP tree is constructed by 490 stitching together a set of Replication Segments that are specified 491 in [I-D.ietf-spring-sr-replication-segment]. An SR Point-to- 492 Multipoint (SR P2MP) Policy is used to define and instantiate a P2MP 493 tree, which is computed by a controller. 495 An SR P2MP tree is no different from an mLDP tunnel in the MPLS 496 forwarding plane. The difference is in the control plane - instead of 497 hop-by-hop mLDP signaling from leaves towards the root, to set up SR 498 P2MP trees, controllers program forwarding state (referred to as 499 Replication Segments) onto the root, leaves, and intermediate 500 replication points using Netconf, PCEP, BGP, or any other reasonable 501 signaling/programming methods. 503 Procedures in this document can be used for controllers to set up SR 504 P2MP trees with just an additional SR P2MP tree type and 505 corresponding tree identification in the Replication State route.
507 If/once the SR Replication Segment is extended to be bidirectional, 508 and SR MP2MP is introduced, the same procedures in this document 509 would apply to SR MP2MP as well. 511 2. Alternative to BGP-MVPN 513 Multicast with BGP signaling from controllers can be an alternative 514 to BGP-MVPN [RFC6514]. It is an attractive option especially when 515 the controller can easily determine the source and leaf information. 517 With BGP-MVPN, distributed signaling is used for the following: 519 o Egress PEs advertise C-multicast (Type-6/7) Auto-Discovery (A-D) 520 routes to join C-multicast trees at the overlay (PE-PE). 522 o In the case of ASM, ingress PEs advertise Source Active (Type-5) A-D 523 routes to signal sources so that egress PEs can establish Shortest 524 Path Trees (SPTs). 526 o PEs advertise I/S-PMSI (Type-1/2/3) A-D routes to signal the 527 binding of overlay/customer traffic to underlay/provider tunnels. 528 For some types of tunnels, Leaf (Type-4) A-D routes are advertised 529 by egress PEs in response to I/S-PMSI A-D routes to join the 530 tunnels. 532 Based on the above signaled information, an ingress PE builds 533 forwarding state to forward traffic arriving on the PE-CE interface 534 to the provider tunnel (and local interfaces if there are local 535 downstream receivers), and an egress PE builds forwarding state to 536 forward traffic arriving on a provider tunnel to local interfaces 537 with downstream receivers. 539 Notice that multicast with BGP signaling from controllers essentially 540 programs "static" forwarding state onto multicast tree nodes. As 541 long as a controller can determine how a C-multicast flow should be 542 forwarded on ingress/egress PEs, it can signal to the ingress/egress 543 PEs using the procedures in this document to set up forwarding state, 544 removing the need for the above-mentioned distributed signaling and 545 processing.
547 For the controller to learn the egress PEs for a C-multicast tree (so 548 that it can set up or find a corresponding provider tunnel), the 549 egress PEs advertise MCAST-TREE Leaf A-D routes (Section 1.5.1) 550 towards the controller to signal their desire to join C-multicast 551 trees, each with an appropriate RD and an extended community derived 552 from the Route Target for the VPN 553 ([I-D.zzhang-idr-rt-derived-community]) so that the controller knows 554 which VPN it is for. The controller then advertises corresponding 555 MCAST-TREE Replication State routes to set up C-multicast forwarding 556 state on ingress and egress PEs. To encode the provider tunnel 557 information in the MCAST-TREE Replication State route for an ingress 558 PE, the TEA can explicitly list all replication branches of the 559 tunnel, or just the binding SID for the provider tunnel in the 560 form of a Segment List tunnel type, if the tunnel has a binding SID. 562 The Replication State route may also have a PMSI Tunnel Attribute 563 (PTA) attached to specify the provider tunnel while the TEA specifies 564 the local PE-CE interfaces where traffic needs to be sent out. This 565 not only allows a provider tunnel without a binding SID (e.g., in a 566 non-SR network) to be specified without explicitly listing its 567 replication branches, but also allows the service controller for MVPN 568 overlay state to be independent of provider tunnel setup (which could 569 be from a different transport controller or even without a 570 controller). 572 However, notice that if the service controller and transport 573 controller are different, then the service controller needs to signal 574 the transport controller the tree information: identification, set of 575 leaves, and applicable constraints. While this can be achieved (see 576 Section 1.5.1), it is easier for the service and transport controller 577 to be the same.
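The controller's bookkeeping of leaf information described above might be sketched as follows; the route field names are hypothetical stand-ins for the Leaf A-D route contents:

```python
from collections import defaultdict

def group_leaves(leaf_routes):
    """Group Leaf A-D routes by (VPN route target, C-multicast tree) so the
    controller knows, per VPN, the egress-PE set for each C-multicast tree
    and can set up or find a matching provider tunnel."""
    leaves = defaultdict(set)
    for route in leaf_routes:
        key = (route["vpn_rt"], route["c_tree"])   # RT-derived community + (x,g)
        leaves[key].add(route["originator"])       # egress PE address
    return leaves
```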
579 Depending on local policy, a PE may add PE-CE interfaces to its 580 replication state based on local signaling (e.g., IGMP/PIM) instead 581 of completely relying on signaling from controllers. 583 If dynamic switching between inclusive and selective tunnels based on 584 data rate is needed, the ingress PE can advertise/withdraw S-PMSI 585 routes targeted only at the controllers, without a PMSI Tunnel 586 Attribute attached. The controller then updates relevant MCAST-TREE 587 Replication State routes to update C-multicast forwarding state on 588 PEs to switch to a new tunnel. 590 3. Specification 592 3.1. Enhancements to TEA 594 A TEA may encode a list of tunnels. A TEA attached to an MCAST-TREE 595 NLRI encodes replication information for a tree that is 596 identified by the NLRI. Each tunnel in the TEA identifies a branch - 597 either an upstream branch towards the tree root (Section 3.1.5) or a 598 downstream branch towards some leaves. A tunnel in the TEA could 599 have an outer encapsulation (e.g., an MPLS label stack) or it could just 600 be a one-hop direct connection for native IP multicast forwarding 601 without any outer encapsulation. 603 This document specifies three new Tunnel Types and four new sub-TLVs. 604 The type codes will be assigned by IANA from the "BGP Tunnel 605 Encapsulation Attribute Tunnel Types" registry. 607 3.1.1. Any-Encapsulation Tunnel 609 When a multicast packet needs to be sent from an upstream node to a 610 downstream node, it may not matter how it is sent - natively when the 611 two nodes are directly connected or tunneled otherwise. In case of 612 tunneling, it may not matter what kind of tunnel is used - MPLS, GRE, 613 IPinIP, or any other type. 615 To support this, an "Any-Encapsulation" tunnel type of value 20 is 616 defined. This tunnel MAY have a Tunnel Egress Endpoint and other 617 Sub-TLVs.
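Assuming the TLV layouts of RFC 9012 (2-octet tunnel type and 2-octet length for the tunnel TLV; 1-octet sub-TLV length for sub-TLV types 0-127; Tunnel Egress Endpoint sub-TLV type 6 carrying a 4-octet AS number, a 2-octet address family, and the address), an Any-Encapsulation tunnel could be encoded as in this sketch. The helper names are illustrative.

```python
import struct
import ipaddress

ANY_ENCAPSULATION = 20  # tunnel type assigned to this document

def sub_tlv(stype, value):
    # RFC 9012: sub-TLV length is 1 octet for types 0-127,
    # 2 octets for types 128-255.
    hdr = struct.pack("!BB", stype, len(value)) if stype < 128 \
        else struct.pack("!BH", stype, len(value))
    return hdr + value

def egress_endpoint(address, asn=0):
    # Tunnel Egress Endpoint sub-TLV (type 6):
    # 4-octet AS + 2-octet address family + 4- or 16-octet address.
    ip = ipaddress.ip_address(address)
    afi = 1 if ip.version == 4 else 2
    return sub_tlv(6, struct.pack("!IH", asn, afi) + ip.packed)

def tunnel_tlv(ttype, sub_tlvs):
    body = b"".join(sub_tlvs)
    return struct.pack("!HH", ttype, len(body)) + body

tlv = tunnel_tlv(ANY_ENCAPSULATION, [egress_endpoint("192.0.2.1")])
```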
The Tunnel Egress Endpoint Sub-TLV specifies an IP 618 address, which could be any of the following: 620 o An interface's local address - when a packet needs to be sent out of 621 the corresponding interface natively. On a LAN, a multicast MAC 622 address MUST be used. 624 o A directly connected neighbor's interface address - when a packet 625 needs to be unicast to the address natively. 627 o An address that is not directly connected - when a packet needs to 628 be tunneled to the address (any tunnel type/instance can be used). 630 3.1.2. Load-balancing Tunnel 632 Consider that a multicast packet needs to be sent to a downstream 633 node, which could be reached via four paths P1~P4. If it does not 634 matter which path is taken, an "Any-Encapsulation" tunnel with the 635 Tunnel Egress Endpoint Sub-TLV specifying the downstream node's 636 loopback address works well. If the controller wants to specify that 637 only P1~P2 should be used, then a "Load-balancing" tunnel needs to be 638 used, listing P1 and P2 as member tunnels of the "Load-balancing" 639 tunnel. 641 A load-balancing tunnel has one "Member Tunnels" Sub-TLV defined in 642 this document. The Sub-TLV is a list of tunnels, each specifying a 643 way to reach the downstream node. A packet will be sent out of one of the 644 tunnels listed in the Member Tunnels Sub-TLV of the load-balancing 645 tunnel. 647 3.1.3. Segment List Tunnel 649 A Segment List tunnel has a Segment List sub-TLV. The encoding of 650 the sub-TLV is as specified in Section 2.4.4 of 651 [I-D.ietf-idr-segment-routing-te-policy]. An example use of a 652 Segment List tunnel is provided in Section 3.4.3. 654 3.1.4. Receiving MPLS Label Stack 656 While [I-D.ietf-bess-bgp-multicast] uses S-PMSI A-D routes to signal 657 forwarding information for MP2MP upstream traffic, when controller 658 signaling is used, a single Replication State route is used for both 659 upstream and downstream traffic.
Since different upstream and 660 downstream labels need to be used, a new "Receiving MPLS Label Stack" 661 sub-TLV of type TBD is added as a tunnel sub-TLV in addition to the existing 662 MPLS Label Stack sub-TLV. Other than the type difference, the two are 663 encoded the same way. 665 The Receiving MPLS Label Stack sub-TLV is added to each downstream 666 tunnel in the TEA of the Replication State route for an MP2MP tunnel to 667 specify the forwarding information for upstream traffic from the 668 corresponding downstream node. A label stack instead of a single 669 label is used because of the need for neighbor-based RPF checking, as 670 further explained in the following section. 672 The Receiving MPLS Label Stack sub-TLV is also used for downstream 673 traffic from the upstream node for both P2MP and MP2MP, as specified 674 below. 676 3.1.5. RPF Sub-TLV 678 The RPF sub-TLV is of type 124, allocated by IANA, and has a one-octet 679 length. The length is 0 currently, but if necessary in the future, 680 sub-sub-TLVs could be placed in its value part. If the RPF sub-TLV 681 appears in a tunnel, it indicates that the "tunnel" is for the 682 upstream node instead of a downstream node. 684 In case of MPLS, the tunnel contains a Receiving MPLS Label Stack 685 sub-TLV for downstream traffic from the upstream node, and in case of 686 MP2MP it also contains a regular MPLS Label Stack sub-TLV for 687 upstream traffic to the upstream node. 689 The innermost label in the Receiving MPLS Label Stack is the 690 incoming label identifying the tree (for comparison, the innermost 691 label in a regular MPLS Label Stack is the outgoing label). If the 692 Receiving MPLS Label Stack sub-TLV has more than one label, the 693 second innermost label in the stack identifies the expected upstream 694 neighbor and explicit RPF checking needs to be set up for the tree 695 label accordingly. 697 3.1.6.
Tree Label Stack sub-TLV 699 The MPLS Label Stack sub-TLV can be used to specify the complete 700 label stack used to send traffic, with the stack including both a 701 transport label (stack) and label(s) that identify the (tree, 702 neighbor) to the downstream node. There are cases where the 703 controller only wants to specify the tree-identifying labels but 704 leave the transport details to the router itself. For example, the 705 router could locally determine a transport label (stack) and combine 706 it with the tree-identifying labels signaled from the controller to get 707 the complete outgoing label stack. 709 For that purpose, a new Tree Label Stack sub-TLV of type 125 is 710 defined, with a one-octet length field. It MAY appear in an Any- 711 Encapsulation tunnel. The value field contains a label stack with 712 the same encoding as the value part of the MPLS Label Stack sub-TLV, but 713 with a different type. A stack is specified because it may take up 714 to three labels (see Section 1.4): 716 o If different nodes use different labels (allocated from the common 717 SRGB or the node's SRLB) for a (tree, neighbor) tuple, only a 718 single label is in the stack. This is similar to the current mLDP hop- 719 by-hop signaling case. 721 o If different nodes use the same tree label, then an additional 722 neighbor-identifying label is needed in front of the tree label. 724 o For the previous bullet, if the neighbor-identifying label is 725 allocated from the controller's local label space, then an 726 additional context label is needed in front of the neighbor label. 728 3.1.7. Backup Tunnel sub-TLV 730 The Backup Tunnel sub-TLV is used to specify the backup paths for an 731 Any-Encapsulation or Segment List tunnel. The length field is two octets. 732 The value part encodes a one-octet flags field and a variable-length 733 Tunnel Encapsulation Attribute.
If the tunnel goes down, traffic 734 that is normally sent out of the tunnel is fast-rerouted to the 735 tunnels listed in the encoded TEA. 737 +--------------------------------+ 738 | Sub-TLV Type (1 Octet, TBD) | 739 +--------------------------------+ 740 | Sub-TLV Length (2 Octets) | 741 +--------------------------------+ 742 | P | rest of 1 Octet Flags | 743 +--------------------------------+ 744 | Backup TEA (variable length) | 745 +--------------------------------+ 747 The backup tunnels can go to the same node as, or different nodes 748 from, the one reached by the original tunnel. 750 If the tunnel carries an RPF sub-TLV and a Backup Tunnel sub-TLV, then 751 traffic arriving both on the original tunnel and on the tunnels 752 encoded in the Backup Tunnel sub-TLV's TEA can be accepted, if the 753 Parallel (P-)bit in the flags field is set. If the P-bit is not set, 754 then traffic arriving on a backup tunnel is accepted only if the router 755 has switched to receiving on the backup tunnel (this is the 756 equivalent of PIM/mLDP MoFRR). 758 3.2. Context Label TLV in BGP-LS Node Attribute 760 For a router to signal the context label that it assigns for a 761 controller (or any label allocator that assigns labels - from its 762 local label space - that will be received by this router), a new BGP- 763 LS Node Attribute TLV is defined: 765 0 1 2 3 766 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 767 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 768 | Type | Length | 769 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 770 | Context Label | 771 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 772 | IPv4/v6 Address of Label Space Owner | 773 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 775 The Length field implies the type of the address. Multiple Context 776 Label TLVs may be included in a Node Attribute, one for each label 777 space owner.
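The Context Label TLV above can be built as in the following sketch, where the Length field (8 for IPv4, 20 for IPv6) implies the address family of the label space owner. The TLV type code is still TBD, so a placeholder value is used; all names here are illustrative.

```python
import struct
import ipaddress

CONTEXT_LABEL_TLV = 0x7FFF  # placeholder: actual code point TBD by IANA

def context_label_tlv(label, owner):
    # Value: 4-octet Context Label field followed by the 4- or
    # 16-octet IPv4/v6 address of the label space owner.
    value = struct.pack("!I", label) + ipaddress.ip_address(owner).packed
    return struct.pack("!HH", CONTEXT_LABEL_TLV, len(value)) + value

tlv = context_label_tlv(100, "192.0.2.1")
```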
779 As an example, a controller with address 192.0.2.1 allocates label 780 200 from its own label space, and router A assigns label 100 to 781 identify this controller's label space. The router includes the 782 Context Label TLV (100, 192.0.2.1) in its BGP-LS Node Attribute, and 783 the controller instructs router B to send traffic to router A with a 784 label stack (100, 200). Router A uses label 100 to determine the 785 Label FIB in which to look up label 200. 787 3.3. Replication State Route Type 789 The NLRI route type for signaling from controllers to tree nodes is 790 "Replication State". The NLRI has the following format: 792 +-----------------------------------+ 793 | Route Type - Replication State | 794 +-----------------------------------+ 795 | Length (1 octet) | 796 +-----------------------------------+ 797 | Tree Type (1 octet) | 798 +-----------------------------------+ 799 |Tree Type Specific Length (1 octet)| 800 +-----------------------------------+ 801 ~ Tree Identification (variable) ~ 802 +-----------------------------------+ 803 | Tree Node's IP Address | 804 +-----------------------------------+ 805 | Originator's IP Address | 806 +-----------------------------------+ 808 Replication State NLRI 810 Notice that Replication State is just a new route type with the same 811 format as the Leaf A-D route, except that some fields are renamed: 813 o Tree Type in the Replication State route matches the PMSI route type 814 in the Leaf A-D route 816 o Tree Node's IP Address matches the Upstream Router's IP Address of 817 the PMSI route key in the Leaf A-D route 819 With this arrangement, IP multicast trees and mLDP tunnels can be 820 signaled via Replication State routes from controllers, or via Leaf 821 A-D routes either hop by hop or from controllers with maximum code 822 reuse, while newer types of trees like SR-P2MP can be signaled via 823 Replication State routes with maximum code reuse as well. 825 3.4.
SR P2MP Signaling 827 An SR P2MP policy for an SR P2MP tree is identified by a (Root, Tree- 828 id) tuple. It has a set of leaves and a set of Candidate Paths (CPs). 829 The policy is instantiated on the root of the tree, with 830 corresponding Replication Segments - identified by (Root, Tree-id, 831 Tree-Node-id) - instantiated on the tree nodes (root, leaves, and 832 intermediate replication points). 834 3.4.1. Replication State Route for SR P2MP 836 For SR P2MP, forwarding state on tree nodes is represented as 837 Replication Segments and is signaled from controllers to tree nodes 838 via Replication State routes. A Replication State route for SR P2MP 839 has Tree Type 1 and the Tree Identification includes (Route 840 Distinguisher, Root ID, Tree ID), where the RD implicitly identifies 841 the candidate path. 843 +-----------------------------------+ 844 | Route Type - Replication State | 845 +-----------------------------------+ 846 | Length (1 octet) | 847 +-----------------------------------+ 848 | Tree Type (1 - SR P2MP) | 849 +-----------------------------------+ 850 |Tree Type Specific Length (1 octet)| 851 +-----------------------------------+ 852 | RD (8 octets) | 853 +-----------------------------------+ 854 | Root ID (4 or 16 octets) | 855 +-----------------------------------+ 856 | Tree ID (4 octets) | 857 +-----------------------------------+ 858 | Tree Node's IP Address | 859 +-----------------------------------+ 860 | Originating Router's IP Address | 861 +-----------------------------------+ 863 Replication State route for SR Replication Segment 865 3.4.2.
BGP Community Container for SR P2MP Policy 867 The Replication State route for Replication Segments signaled to the 868 root is also used to signal (parts of) the SR P2MP Policy - the 869 policy name, the set of leaves (optional, for informational purposes), 870 the preference of the CP, and other information are all encoded in a newly 871 defined BGP Community Container (BCC) 872 [I-D.ietf-idr-wide-bgp-communities] called the SR P2MP Policy BCC. 874 The SR P2MP Policy BCC has a BGP Community Container type to be 875 assigned by IANA. It is composed of a fixed 4-octet Candidate Path 876 Preference value, optionally followed by TLVs. 878 0 1 2 3 879 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 880 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 881 | Candidate Path Preference | 882 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 883 | | 884 | TLVs (optional) | 885 | | 886 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 888 BGP Community Container for SR P2MP Policy 890 One optional TLV encloses the following optional Atoms TLVs that 891 are already defined in [I-D.ietf-idr-wide-bgp-communities]: 893 o An IPv4 or IPv6 Prefix list - for the set of leaves 895 o A UTF-8 string - for the policy name 897 If more information for the policy is needed, more Atoms TLVs or SR 898 P2MP Policy BCC specific TLVs can be defined. 900 The root receives one Replication State route for each Candidate Path 901 of the policy. Only one of the routes needs to include the above-listed 902 optional Atoms TLVs in the SR P2MP Policy 903 BCC, though more than one MAY. 905 Alternatively, an additional route type can be used to carry policy 906 information instead. Details are to be specified in a future 907 revision. 909 3.4.3. Tunnel Encapsulation Attribute 911 The TEA attached to a Replication State route for SR-P2MP encodes 912 tunnels as specified in earlier sections.
A tunnel could be an Any- 913 Encapsulation tunnel with an MPLS Label Stack sub-TLV or Receiving MPLS 914 Label Stack sub-TLV (in case of SR-MPLS), a Segment List tunnel, or a 915 Load-balancing tunnel. 917 For a Segment List tunnel in this context, the last segment in the 918 segment list represents the SID of the tree. When the tunnel does not carry the 919 RPF sub-TLV, the previous segments in the list steer traffic to the 920 downstream node, and the segment before the last one MAY also be a 921 binding SID for another P2MP tunnel, meaning that the replication 922 branch represented by this "Segment List" is actually a P2MP tunnel 923 to a set of downstream nodes. 925 3.5. Replication State Route with Label Stack for Tree Identification 927 As described in Section 1.3, a tree label instead of a tree 928 identification could be encoded in the NLRI to identify the tree in 929 the control plane as well as in the forwarding plane. For that, a new 930 Tree Type of 2 is used and the Replication State route has the 931 following format: 933 +-------------------------------------+ 934 | Route Type - Replication State | 935 +-------------------------------------+ 936 | Length (1 octet) | 937 +-------------------------------------+ 938 | Tree Type 2 (Label as Tree ID) | 939 +-------------------------------------+ 940 |Tree Type specific Length (1 octet) | 941 +-------------------------------------+ 942 | RD (8 octets) | 943 +-------------------------------------+ 944 ~ Label Stack (variable) ~ 945 +-------------------------------------+ 946 | Tree Node's IP Address | 947 +-------------------------------------+ 948 | Originating Router's IP Address | 949 +-------------------------------------+ 951 Replication State route for tree identification by label stack 953 As discussed in Section 1.4.2, a label stack may have to be used to 954 identify a tree in the data plane, so a label stack is encoded here. 955 The number of labels is derived from the Tree Type Specific Length 956 field.
Each label stack entry is encoded as follows: 958 0 1 2 3 959 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 960 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 961 | Label |0 0 0 0 0 0 0 0 0 0 0 0| 962 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 964 4. Procedures 966 Details to be added. The general idea is described in the 967 introduction section. 969 5. Security Considerations 971 This document does not introduce new security risks. 973 6. IANA Considerations 975 IANA has assigned the following code points: 977 o "Any-Encapsulation" tunnel type 78 from "BGP Tunnel Encapsulation 978 Attribute Tunnel Types" registry 980 o "RPF" sub-TLV type 124 and "Tree Label Stack" sub-TLV type 125 981 from "BGP Tunnel Encapsulation Attribute Sub-TLVs" registry 983 This document makes the following additional IANA requests: 985 o Assign "Segment List" and "Load-balancing" tunnel types from the 986 "BGP Tunnel Encapsulation Attribute Tunnel Types" registry 988 o Assign "Member Tunnels" and "Receiving MPLS Label Stack" sub-TLV 989 types from the "BGP Tunnel Encapsulation Attribute Sub-TLVs" 990 registry. The "Member Tunnels" sub-TLV has a two-octet value 991 length (so the type should be in the 128-255 range), while the 992 "Receiving MPLS Label Stack" sub-TLV has a one-octet value length. 994 o Assign "Context Label TLV" type from the "BGP-LS Node Descriptor, 995 Link Descriptor, Prefix Descriptor, and Attribute TLVs" registry. 997 o Assign "Replication State" route type from the "BGP MCAST-TREE 998 Route Types" registry. 1000 o Create a "Tree Type Registry for Replication State Route", with 1001 the following initial assignments: 1003 * 1: SR-P2MP 1005 * 2: P2MP Tree with Label as Identification 1007 * 3: IP Multicast 1009 * 0x43: mLDP 1011 o Assign a new BGP Community Container type "SR P2MP Policy", and 1012 create an "SR P2MP Policy Community Container TLV Registry", with 1013 an initial entry for "TLV for Atoms".
1015 7. Acknowledgements 1017 The authors thank Eric Rosen for his questions, suggestions, and help 1018 in finding solutions to some issues, such as the neighbor-based explicit RPF 1019 checking. The authors also thank Lenny Giuliano, Sanoj Vivekanandan 1020 and IJsbrand Wijnands for their review and comments. 1022 8. References 1024 8.1. Normative References 1026 [I-D.ietf-bess-bgp-multicast] 1027 Zhang, Z., Giuliano, L., Patel, K., Wijnands, I., Mishra, 1028 M., and A. Gulko, "BGP Based Multicast", draft-ietf-bess- 1029 bgp-multicast-04 (work in progress), January 2022. 1031 [I-D.ietf-idr-segment-routing-te-policy] 1032 Previdi, S., Filsfils, C., Talaulikar, K., Mattes, P., 1033 Jain, D., and S. Lin, "Advertising Segment Routing 1034 Policies in BGP", draft-ietf-idr-segment-routing-te- 1035 policy-16 (work in progress), March 2022. 1037 [I-D.ietf-idr-wide-bgp-communities] 1038 Raszuk, R., Haas, J., Lange, A., Decraene, B., Amante, S., 1039 and P. Jakma, "BGP Community Container Attribute", draft- 1040 ietf-idr-wide-bgp-communities-06 (work in progress), 1041 January 2022. 1043 [I-D.ietf-pim-sr-p2mp-policy] 1044 (editor), D. V., Filsfils, C., Parekh, R., Bidgoli, H., 1045 and Z. Zhang, "Segment Routing Point-to-Multipoint 1046 Policy", draft-ietf-pim-sr-p2mp-policy-04 (work in 1047 progress), March 2022. 1049 [I-D.ietf-spring-sr-replication-segment] 1050 (editor), D. V., Filsfils, C., Parekh, R., Bidgoli, H., 1051 and Z. Zhang, "SR Replication Segment for Multi-point 1052 Service Delivery", draft-ietf-spring-sr-replication- 1053 segment-07 (work in progress), March 2022. 1055 [I-D.zzhang-idr-rt-derived-community] 1056 Zhang, Z., Haas, J., and K. Patel, "Extended Communities 1057 Derived from Route Targets", draft-zzhang-idr-rt-derived- 1058 community-02 (work in progress), March 2022. 1060 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1061 Requirement Levels", BCP 14, RFC 2119, 1062 DOI 10.17487/RFC2119, March 1997, 1063 .
1065 [RFC7752] Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A., and 1066 S. Ray, "North-Bound Distribution of Link-State and 1067 Traffic Engineering (TE) Information Using BGP", RFC 7752, 1068 DOI 10.17487/RFC7752, March 2016, 1069 . 1071 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 1072 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 1073 May 2017, . 1075 [RFC9012] Patel, K., Van de Velde, G., Sangli, S., and J. Scudder, 1076 "The BGP Tunnel Encapsulation Attribute", RFC 9012, 1077 DOI 10.17487/RFC9012, April 2021, 1078 . 1080 8.2. Informative References 1082 [RFC6388] Wijnands, IJ., Ed., Minei, I., Ed., Kompella, K., and B. 1083 Thomas, "Label Distribution Protocol Extensions for Point- 1084 to-Multipoint and Multipoint-to-Multipoint Label Switched 1085 Paths", RFC 6388, DOI 10.17487/RFC6388, November 2011, 1086 . 1088 [RFC6513] Rosen, E., Ed. and R. Aggarwal, Ed., "Multicast in MPLS/ 1089 BGP IP VPNs", RFC 6513, DOI 10.17487/RFC6513, February 1090 2012, . 1092 [RFC6514] Aggarwal, R., Rosen, E., Morin, T., and Y. Rekhter, "BGP 1093 Encodings and Procedures for Multicast in MPLS/BGP IP 1094 VPNs", RFC 6514, DOI 10.17487/RFC6514, February 2012, 1095 . 1097 [RFC7060] Napierala, M., Rosen, E., and IJ. Wijnands, "Using LDP 1098 Multipoint Extensions on Targeted LDP Sessions", RFC 7060, 1099 DOI 10.17487/RFC7060, November 2013, 1100 . 1102 [RFC7761] Fenner, B., Handley, M., Holbrook, H., Kouvelas, I., 1103 Parekh, R., Zhang, Z., and L. Zheng, "Protocol Independent 1104 Multicast - Sparse Mode (PIM-SM): Protocol Specification 1105 (Revised)", STD 83, RFC 7761, DOI 10.17487/RFC7761, March 1106 2016, . 1108 [RFC8402] Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L., 1109 Decraene, B., Litkowski, S., and R. Shakir, "Segment 1110 Routing Architecture", RFC 8402, DOI 10.17487/RFC8402, 1111 July 2018, . 
1113 Authors' Addresses 1115 Zhaohui Zhang 1116 Juniper Networks 1118 EMail: zzhang@juniper.net 1120 Robert Raszuk 1121 NTT Network Innovations 1123 EMail: robert@raszuk.net 1125 Dante Pacella 1126 Verizon 1128 EMail: dante.j.pacella@verizon.com 1130 Arkadiy Gulko 1131 Edward Jones Wealth Management 1133 EMail: arkadiy.gulko@edwardjones.com