NVO3 working group                                           A. Ghanwani
Internet Draft                                                      Dell
Intended status: Informational                                 L. Dunbar
Expires: November 8, 2017                                     M. McBride
                                                                  Huawei
                                                               V. Bannai
                                                                  Google
                                                             R. Krishnan
                                                                    Dell

                                                        February 1, 2017

     A Framework for Multicast in Network Virtualization Overlays
                  draft-ietf-nvo3-mcast-framework-06

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.
   This document may not be modified, and derivative works of it may
   not be created, except to publish it as an RFC and to translate it
   into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on November 8, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Abstract

   This document discusses a framework for supporting multicast traffic
   in a network that uses Network Virtualization Overlays (NVO3).  Both
   infrastructure multicast and application-specific multicast are
   discussed.
   It describes the various mechanisms that can be used for delivering
   such traffic as well as the data plane and control plane
   considerations for each of the mechanisms.

Table of Contents

   1. Introduction
      1.1. Infrastructure multicast
      1.2. Application-specific multicast
      1.3. Terminology clarification
   2. Acronyms
   3. Multicast mechanisms in networks that use NVO3
      3.1. No multicast support
      3.2. Replication at the source NVE
      3.3. Replication at a multicast service node
      3.4. IP multicast in the underlay
      3.5. Other schemes
   4. Simultaneous use of more than one mechanism
   5. Other issues
      5.1. Multicast-agnostic NVEs
      5.2. Multicast membership management for DC with VMs
   6. Summary
   7. Security Considerations
   8. IANA Considerations
   9. References
      9.1. Normative References
      9.2. Informative References
   10. Acknowledgments

1. Introduction

   Network virtualization using Overlays over Layer 3 (NVO3) is a
   technology that is used to address issues that arise in building
   large, multitenant data centers that make extensive use of server
   virtualization [RFC7364].
   This document provides a framework for supporting multicast traffic
   in a network that uses Network Virtualization using Overlays over
   Layer 3 (NVO3).  Both infrastructure multicast and application-
   specific multicast are considered.  It describes the various
   mechanisms and considerations that can be used for delivering such
   traffic in networks that use NVO3.

   The reader is assumed to be familiar with the terminology defined in
   the NVO3 Framework document [RFC7365] and the NVO3 Architecture
   document [NVO3-ARCH].

1.1. Infrastructure multicast

   Infrastructure multicast is a capability needed by networking
   services such as Address Resolution Protocol (ARP), Neighbor
   Discovery (ND), Dynamic Host Configuration Protocol (DHCP), and
   multicast Domain Name Server (mDNS).  Sections 5 and 6 of [RFC3819]
   provide a detailed description of some of these infrastructure
   multicast services.  It is possible to provide solutions for these
   services that do not involve multicast in the underlay network.  In
   the case of ARP/ND, a Network Virtualization Authority (NVA) can be
   used for distributing the mappings of IP address to MAC address to
   all Network Virtualization Edges (NVEs).  An NVE can then trap ARP
   Request/ND Neighbor Solicitation messages from the TSs that are
   attached to it and respond to them, thereby eliminating the need
   for broadcast/multicast of such messages.  In the case of DHCP, the
   NVE can be configured to forward these messages using a helper
   function.

   Of course, it is possible to support all of these infrastructure
   multicast protocols natively if the underlay provides multicast
   transport.  However, even in the presence of multicast transport,
   it may be beneficial to use the optimizations mentioned above to
   reduce the amount of such traffic in the network.

1.2.
Application-specific multicast

   Application-specific multicast traffic is originated and consumed
   by user applications.  Application-specific multicast, which can be
   either Source-Specific Multicast (SSM) or Any-Source Multicast
   (ASM) [RFC3569], has the following characteristics:

   1. Receiver hosts are expected to subscribe to multicast content
      using protocols such as IGMP [RFC3376] (IPv4) or MLD (IPv6).
      Multicast sources and listeners participate in these protocols
      using addresses that are in the Tenant System address domain.

   2. The list of multicast listeners for each multicast group is not
      known in advance.  Therefore, it may not be possible for an NVA
      to get the list of participants for each multicast group ahead
      of time.

1.3. Terminology clarification

   In this document, the terms host, tenant system (TS), and virtual
   machine (VM) are used interchangeably to represent an end station
   that originates or consumes data packets.

2. Acronyms

   ASM:    Any-Source Multicast

   IGMP:   Internet Group Management Protocol

   LISP:   Locator/ID Separation Protocol

   MSN:    Multicast Service Node

   NVA:    Network Virtualization Authority

   NVE:    Network Virtualization Edge

   NVGRE:  Network Virtualization using GRE

   PIM:    Protocol-Independent Multicast

   SSM:    Source-Specific Multicast

   TS:     Tenant System

   VM:     Virtual Machine

   VN:     Virtual Network

   VXLAN:  Virtual eXtensible LAN

3. Multicast mechanisms in networks that use NVO3

   In NVO3 environments, traffic between NVEs is transported using an
   encapsulation such as Virtual eXtensible Local Area Network (VXLAN)
   [RFC7348, VXLAN-GPE], Network Virtualization Using Generic Routing
   Encapsulation (NVGRE) [RFC7637], Geneve [Geneve], Generic UDP
   Encapsulation (GUE) [GUE], etc.
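   As an illustration of the unicast tunnel encapsulation these schemes
   share, the following sketch wraps a tenant frame in a simplified
   VXLAN-style header.  Only the 8-byte VXLAN header is modeled; the
   outer Ethernet/IP/UDP headers carrying the NVE addresses are
   omitted, so this is a sketch of the idea rather than a normative
   layout of any of the encapsulations above.

```python
import struct

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend a simplified VXLAN-style header to a tenant frame.

    A real NVE would additionally add outer Ethernet/IP/UDP headers
    with the source and destination NVE addresses; those are omitted
    here for brevity.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # 8-byte VXLAN header: flags byte (0x08 = "VNI present"),
    # 24 reserved bits, then the 24-bit VNI and 8 reserved bits.
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame
```

   An egress NVE would strip the outer headers, use the VNI to select
   the tenant virtual network, and deliver the inner frame to the
   attached TSs.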
   What makes NVO3 different from other networks is that some NVEs,
   especially NVEs implemented on servers, might not support PIM or
   other native multicast mechanisms; they might simply encapsulate
   the data packets from VMs with an outer unicast header.  Therefore,
   it is important for networks using NVO3 to have mechanisms to
   support multicast as a network capability for NVEs: to map
   multicast traffic from VMs (users/applications) to an equivalent
   multicast capability inside the NVE, or to figure out the outer
   destination address if the NVE does not support native multicast
   (e.g., PIM) or IGMP.

   Besides the need to support ARP and ND, there are several
   applications that require the support of multicast and/or broadcast
   in data centers [DC-MC].  With NVO3, there are many possible ways
   that multicast may be handled in such networks.  We discuss some of
   the attributes of the following four methods:

   1. No multicast support.

   2. Replication at the source NVE.

   3. Replication at a multicast service node.

   4. IP multicast in the underlay.

   These methods are briefly mentioned in the NVO3 Framework [RFC7365]
   and NVO3 Architecture [NVO3-ARCH] documents.  This document
   provides more details about the basic mechanisms underlying each of
   these methods and discusses the issues and tradeoffs of each.

   We note that other methods are also possible, such as [EDGE-REP],
   but we focus on the above four because they are the most common.

3.1. No multicast support

   In this scenario, there is no support whatsoever for multicast
   traffic when using the overlay.  This method can only work if the
   following conditions are met:

   1. All of the application traffic in the network is unicast
      traffic, and the only multicast/broadcast traffic is from ARP/ND
      protocols.

   2. An NVA is used by the NVEs to determine the mapping of a given
      Tenant System's (TS's) MAC/IP address to its NVE.
      In other words, there is no data plane learning.  Address
      resolution requests via ARP/ND that are issued by the TSs must
      be resolved by the NVE that they are attached to.

   With this approach, it is not possible to support application-
   specific multicast.  However, certain multicast/broadcast
   applications such as DHCP can be supported by use of a helper
   function in the NVE.

   The main drawback of this approach, even for unicast traffic, is
   that it is not possible to initiate communication with a TS for
   which a mapping to an NVE does not already exist in the NVA.  This
   is a problem in the case where the NVE is implemented in a physical
   switch and the TS is a physical end station that has not registered
   with the NVA.

3.2. Replication at the source NVE

   With this method, the overlay attempts to provide a multicast
   service without requiring any specific support from the underlay,
   other than that of a unicast service.  A multicast or broadcast
   transmission is achieved by replicating the packet at the source
   NVE and making one copy for each destination NVE that the multicast
   packet must be sent to.

   For this mechanism to work, the source NVE must know, a priori, the
   IP addresses of all destination NVEs that need to receive the
   packet.  For the purpose of ARP/ND, this would involve knowing the
   IP addresses of all the NVEs that have TSs in the virtual network
   (VN) of the TS that generated the request.  For the support of
   application-specific multicast traffic, a method similar to the
   receiver-site registration for a particular multicast group
   described in [LISP-Signal-Free] can be used.  The registrations
   from different receiver sites can be merged at the NVA, which can
   construct a multicast replication-list inclusive of all NVEs to
   which receivers for a particular multicast group are attached.
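   A minimal sketch of this replication logic follows, assuming a
   hypothetical NVA-supplied replication-list keyed by VN and tenant
   multicast group.  The VN name, NVE addresses, and the 233.252.0.x
   group (taken from the RFC 5771 example range) are illustrative
   only, not part of any NVO3 protocol.

```python
# Hypothetical merged receiver-site registrations held by the NVA:
# (VN, tenant multicast group) -> set of egress NVE IP addresses.
REPLICATION_LISTS = {
    ("vn-blue", "233.252.0.1"): {"192.0.2.11", "192.0.2.12", "192.0.2.13"},
}

def replicate_at_source_nve(vn, group, payload, local_nve_ip):
    """Return one (egress NVE, payload) unicast copy per destination.

    The source NVE makes one copy per egress NVE on the
    replication-list, skipping itself.
    """
    egress = REPLICATION_LISTS.get((vn, group), set())
    return [(dst, payload) for dst in sorted(egress) if dst != local_nve_ip]
```

   This illustrates the per-packet cost at the source NVE: a VN spread
   across many NVEs requires that many unicast copies of each
   multicast packet.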
   The replication-list for each specific multicast group is
   maintained by the NVA.

   The receiver-site registration is achieved by egress NVEs
   performing IGMP/MLD snooping to maintain state for which attached
   TSs have subscribed to a given IP multicast group.  When the
   members of a multicast group are outside the NVO3 domain, it is
   necessary for NVO3 gateways to keep track of the remote members of
   each multicast group.  The NVEs and NVO3 gateways then communicate
   the multicast groups that are of interest to the NVA.  If the
   membership is not communicated to the NVA, and if it is necessary
   to prevent hosts attached to an NVE that have not subscribed to a
   multicast group from receiving the multicast traffic, the NVE would
   need to maintain multicast group membership information.

   In the absence of IGMP/MLD snooping, the traffic would be delivered
   to all TSs that are part of the VN.

   In multi-homing environments, i.e., those in which a TS is attached
   to more than one NVE, the NVA would be expected to provide
   information to all of the NVEs under its control about all of the
   NVEs to which such a TS is attached.  The ingress NVE can then
   choose any one of those egress NVEs for the data frames destined
   towards the TS.

   This method requires the sending of multiple copies of the same
   packet to all NVEs that participate in the VN.  If, for example, a
   tenant subnet is spread across 50 NVEs, the packet would have to be
   replicated 50 times at the source NVE.  This also creates an issue
   with the forwarding performance of the NVE.

   Note that this method is similar to what was used in Virtual
   Private LAN Service (VPLS) [RFC4762] prior to support of
   Multi-Protocol Label Switching (MPLS) multicast [RFC7117].
   While there are some similarities between MPLS Virtual Private
   Networks (VPNs) and NVO3, there are some key differences:

   - The Customer Edge (CE) to Provider Edge (PE) attachment in VPNs
     is somewhat static, whereas in a DC that allows VMs to migrate
     anywhere, the TS attachment to an NVE is much more dynamic.

   - The number of PEs to which a single VPN customer is attached in
     an MPLS VPN environment is normally far less than the number of
     NVEs to which a VN's VMs are attached in a DC.

   When a VPN customer has multiple multicast groups, "Multicast VPN"
   [RFC6513] combines all the multicast groups within each VPN client
   into one single multicast group in the MPLS (or VPN) core.  The
   result is that messages from any of the multicast groups belonging
   to one VPN customer will reach all the PE nodes of that client.  In
   other words, any messages belonging to any multicast groups under
   customer X will reach all PEs of customer X.  When customer X is
   attached to only a handful of PEs, the use of this approach does
   not result in excessive wastage of bandwidth in the provider's
   network.

   In a DC environment, a typical server/hypervisor-based virtual
   switch may only support on the order of tens of VMs (as of this
   writing).  A subnet with N VMs may be, in the worst case, spread
   across N vSwitches.  Using the "MPLS VPN multicast" approach in
   such a scenario would require the creation of a multicast group in
   the core for this VN to reach all N NVEs.  If only a small
   percentage of this client's VMs participate in application-specific
   multicast, a great number of NVEs will receive multicast traffic
   that is not forwarded to any of their attached VMs, resulting in
   considerable wastage of bandwidth.

   Therefore, the Multicast VPN solution may not scale in a DC
   environment with dynamic attachment of virtual networks to NVEs
   and a greater number of NVEs for each virtual network.

3.3.
Replication at a multicast service node

   With this method, all multicast packets would be sent using a
   unicast tunnel encapsulation from the ingress NVE to a Multicast
   Service Node (MSN).  The MSN, in turn, would create multiple copies
   of the packet and would deliver a copy, using a unicast tunnel
   encapsulation, to each of the NVEs that are part of the multicast
   group for which the packet is intended.

   This mechanism is similar to that used by the Asynchronous Transfer
   Mode (ATM) Forum's LAN Emulation (LANE) specification [LANE].  The
   MSN is similar to the Rendezvous Point (RP) in PIM-SM, but differs
   in that the user data traffic is carried by the NVO3 tunnels.

   The following are possible ways for the MSN to get the membership
   information for each multicast group:

   - The MSN can obtain this membership information from the IGMP/MLD
     report messages sent by the TSs.  The IGMP/MLD report messages
     are sent in response to IGMP/MLD query messages from the MSN to
     the TSs via the NVEs to which the TSs are attached.  In order for
     the MSN to receive the IGMP/MLD report messages from the TSs,
     each of the IGMP/MLD query messages has to be encapsulated with
     the MSN address in the outer source address field and the address
     of the NVE in the outer destination address field.  Each of the
     encapsulated IGMP/MLD query messages also has the VNID to which
     the TSs belong in the outer header and a multicast address that
     identifies a multicast group in the inner destination field.  The
     NVEs can establish the mapping between the MSN address and the
     multicast address upon receiving the encapsulated IGMP/MLD query
     messages.  With the proper "MSN address" <-> "multicast address"
     mapping, the NVEs can encapsulate the IGMP/MLD report messages
     from TSs with the address of the MSN in the outer destination
     address field.
   - The MSN can obtain the membership information from NVEs that have
     the capability to establish multicast groups by snooping native
     IGMP/MLD messages (note that the communication must be specific
     to the multicast addresses), or by having the NVA obtain the
     information from the NVEs and, in turn, have the MSN communicate
     with the NVA.  This approach requires an additional protocol
     between the MSN and the NVEs.

   Unlike the method described in Section 3.2, there is no performance
   impact at the ingress NVE, nor are there any issues with multiple
   copies of the same packet from the source NVE to the MSN.  However,
   there remain issues with multiple copies of the same packet on
   links that are common to the paths from the MSN to each of the
   egress NVEs.  Additional issues that are introduced with this
   method include the availability of the MSN, methods to scale the
   services offered by the MSN, and the sub-optimality of the delivery
   paths.

   Finally, the IP address of the source NVE must be preserved in
   packet copies created at the MSN if data plane learning is in use.
   This could create problems if IP source address reverse path
   forwarding (RPF) checks are in use.

3.4. IP multicast in the underlay

   In this method, the underlay supports IP multicast, and the ingress
   NVE encapsulates the packet with the appropriate IP multicast
   address in the tunnel encapsulation header for delivery to the
   desired set of NVEs.  The protocol in the underlay could be any
   variant of Protocol Independent Multicast (PIM), or a protocol-
   dependent multicast, such as [ISIS-Multicast].

   If an NVE connects to its attached TSs via a Layer 2 network, there
   are multiple ways for NVEs to support application-specific
   multicast:

   - The NVE supports only the basic IGMP/MLD snooping function,
     letting the TSs' routers handle the application-specific
     multicast.
     This scheme doesn't utilize the underlay IP multicast protocols.

   - The NVE can act as a pseudo multicast router for the directly
     attached VMs and support proper mapping of IGMP/MLD messages to
     the messages needed by the underlay IP multicast protocols.

   With this method, there are none of the issues described with the
   method of Section 3.2.

   With PIM Sparse Mode (PIM-SM), the number of flows required would
   be (n*g), where n is the number of source NVEs that source packets
   for the group, and g is the number of groups.  Bidirectional PIM
   (BIDIR-PIM) would offer better scalability, with the number of
   flows required being g.

   In the absence of any additional mechanism (e.g., using an NVA for
   address resolution), for optimal delivery there would have to be a
   separate group for each tenant, plus a separate group for each
   multicast address (used for multicast applications) within a
   tenant.

   An additional consideration is that only the lower 23 bits of the
   IP multicast address (for IPv4; the lower 32 bits for IPv6) are
   mapped to the outer MAC address, so if there is equipment that
   prunes multicasts at Layer 2, there will be some aliasing.
   Finally, a mechanism to efficiently provision such addresses for
   each group would be required.

   There are additional optimizations that are possible, but they come
   with their own restrictions.  For example, a set of tenants may be
   restricted to some subset of NVEs, and they could all share the
   same outer IP multicast group address.  This, however, introduces a
   problem of sub-optimal delivery (even if a particular tenant within
   the group of tenants doesn't have a presence on one of the NVEs
   that another one does, the former's multicast packets would still
   be delivered to that NVE).
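   The 23-bit IPv4 mapping and the resulting aliasing can be seen in a
   short sketch.  The group-to-MAC mapping (fixed 01:00:5e prefix plus
   the lower 23 bits of the group address) is the standard RFC 1112
   mapping; the sample addresses below are arbitrary examples.

```python
def ipv4_mcast_to_mac(addr: str) -> str:
    """Map an IPv4 multicast group to its Ethernet MAC address:
    the fixed 01:00:5e prefix plus the lower 23 bits of the group."""
    o = [int(x) for x in addr.split(".")]
    ip = (o[0] << 24) | (o[1] << 16) | (o[2] << 8) | o[3]
    # The top 5 bits of the 28-bit group ID are dropped, so 32
    # distinct IPv4 groups alias onto each MAC address.
    low23 = ip & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)
```

   Because the top five bits of the group ID are dropped, equipment
   that prunes multicasts on Layer 2 MAC addresses cannot distinguish
   the aliased groups from one another.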
   It also introduces an additional network management burden in
   optimizing which tenants should be part of the same tenant group
   (based on the NVEs they share), which somewhat dilutes the value
   proposition of NVO3, namely to completely decouple the overlay and
   physical network design, allowing complete freedom of placement of
   VMs anywhere within the data center.

   Multicast schemes such as Bit Indexed Explicit Replication (BIER)
   [BIER-ARCH] may be able to provide optimizations by allowing the
   underlay network to provide optimum multicast delivery without
   requiring routers in the core of the network to maintain per-
   multicast-group state.

3.5. Other schemes

   There are still other mechanisms that may be used that attempt to
   combine some of the advantages of the above methods by offering
   multiple replication points, each with a limited degree of
   replication [EDGE-REP].  Such schemes offer a trade-off between the
   amount of replication at an intermediate node (router) and
   performing all of the replication at the source NVE or all of the
   replication at a multicast service node.

4. Simultaneous use of more than one mechanism

   While the mechanisms discussed in the previous section have been
   discussed individually, it is possible for implementations to rely
   on more than one of them.  For example, the method of Section 3.1
   could be used for minimizing ARP/ND, while at the same time,
   multicast applications may be supported by one, or a combination
   of, the other methods.  For small multicast groups, the methods of
   source-NVE replication or the use of a multicast service node may
   be attractive, while for larger multicast groups, the use of
   multicast in the underlay may be preferable.

5. Other issues

5.1. Multicast-agnostic NVEs

   Some hypervisor-based NVEs do not process or recognize IGMP/MLD
   frames; i.e.,
   those NVEs simply encapsulate the IGMP/MLD messages in the same way
   as they do regular data frames.

   By default, a TS's router periodically sends IGMP/MLD query
   messages to all the hosts in the subnet to trigger the hosts that
   are interested in the multicast stream to send back IGMP/MLD
   reports.  In order for the MSN to get the updated multicast group
   information, the MSN can also send the IGMP/MLD query message,
   comprising a client-specific multicast address and encapsulated in
   an overlay header, to all the NVEs to which the TSs in the VN are
   attached.

   However, the MSN may not always be aware of the client-specific
   multicast addresses.  In order to perform multicast filtering, the
   MSN has to snoop the IGMP/MLD messages between the TSs and their
   corresponding routers to maintain the multicast membership.  In
   order for the MSN to snoop the IGMP/MLD messages between the TSs
   and their router, the NVA needs to configure the NVE to send copies
   of the IGMP/MLD messages to the MSN in addition to the default
   behavior of sending them to the TSs' routers; e.g., the NVA has to
   inform the NVEs to encapsulate data frames with a DA of 224.0.0.2
   (destination address of IGMP reports) to both the TSs' router and
   the MSN.

   This process is similar to the "Replication at the source NVE"
   method described in Section 3.2, except that the NVEs replicate the
   messages only to the TSs' router and the MSN.

5.2. Multicast membership management for DC with VMs

   For data centers with virtualized servers, VMs can be added,
   deleted, or moved very easily.  When VMs are added, deleted, or
   moved, the NVEs to which the VMs are attached are changed.

   When a VM is deleted from an NVE or a new VM is added to an NVE,
   the VM management system should notify the MSN to send the IGMP/MLD
   query messages to the relevant NVEs (as described in Section 3.3)
   so that the multicast membership can be updated promptly.
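   The prompt-update behavior can be sketched as follows; the
   management-system hook, class name, and MSN state below are purely
   illustrative, not part of any specified interface:

```python
class MulticastServiceNode:
    """Toy MSN that records which NVEs it has sent an out-of-cycle
    IGMP/MLD query to (a real MSN would emit encapsulated queries)."""

    def __init__(self):
        self.queried = []  # NVEs queried outside the periodic cycle

    def query_nve(self, nve_ip):
        # In a real MSN this would send an encapsulated IGMP/MLD query
        # toward the TSs behind this NVE; here we only record it.
        self.queried.append(nve_ip)

def on_vm_attachment_change(msn, affected_nves):
    """Called by the VM management system when a VM is added, deleted,
    or moved, so membership is refreshed before the periodic query."""
    for nve in affected_nves:
        msn.query_nve(nve)
```

   Triggering the queries from the management system avoids waiting
   out the routers' default IGMP/MLD query interval after a VM moves.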
   Otherwise, if there are changes of VM attachment to NVEs within the
   duration of the configured default time interval that the TSs'
   routers use for IGMP/MLD queries, multicast data may not reach the
   VM(s) that moved.

6. Summary

   This document has identified various mechanisms for supporting
   application-specific multicast in networks that use NVO3.  It
   highlights the basics of each mechanism and some of the issues with
   them.  As solutions are developed, the protocols would need to
   consider the use of these mechanisms, and co-existence may be a
   consideration.  It also highlights some of the requirements for
   supporting multicast applications in an NVO3 network.

7. Security Considerations

   This draft does not introduce any new security considerations
   beyond what may be present in proposed solutions.

8. IANA Considerations

   This document requires no IANA actions.  RFC Editor: Please remove
   this section before publication.

9. References

9.1. Normative References

   [RFC7365]  Lasserre, M., et al., "Framework for Data Center (DC)
              Network Virtualization", October 2014.

   [RFC7364]  Narten, T., et al., "Problem Statement: Overlays for
              Network Virtualization", October 2014.

   [NVO3-ARCH]
              Narten, T., et al., "An Architecture for Overlay
              Networks (NVO3)", work in progress, April 2016.

   [RFC3376]  Cain, B., et al., "Internet Group Management Protocol,
              Version 3", October 2002.

   [RFC6513]  Rosen, E., et al., "Multicast in MPLS/BGP IP VPNs",
              February 2012.

9.2. Informative References

   [RFC7348]  Mahalingam, M., et al., "Virtual eXtensible Local Area
              Network (VXLAN): A Framework for Overlaying Virtualized
              Layer 2 Networks over Layer 3 Networks", August 2014.

   [RFC7637]  Garg, P. and Wang, Y. (Eds.), "NVGRE: Network
              Virtualization Using Generic Routing Encapsulation",
              September 2015.

   [DC-MC]    McBride, M.
              and Lui, H., "Multicast in the Data Center Overview",
              work in progress, July 2012.

   [ISIS-Multicast]
              Yong, L., et al., "ISIS Protocol Extension for Building
              Distribution Trees", work in progress, October 2014.

   [RFC4762]  Lasserre, M. and Kompella, V. (Eds.), "Virtual Private
              LAN Service (VPLS) Using Label Distribution Protocol
              (LDP) Signaling", January 2007.

   [RFC7117]  Aggarwal, R., et al., "Multicast in Virtual Private LAN
              Service (VPLS)", February 2014.

   [LANE]     "LAN Emulation over ATM", The ATM Forum,
              af-lane-0021.000, January 1995.

   [EDGE-REP] Marques, P., et al., "Edge Multicast Replication for
              BGP IP VPNs", work in progress, June 2012.

   [RFC3569]  Bhattacharyya, S. (Ed.), "An Overview of Source-Specific
              Multicast (SSM)", July 2003.

   [LISP-Signal-Free]
              Moreno, V. and Farinacci, D., "Signal-Free LISP
              Multicast", work in progress, April 2016.

   [VXLAN-GPE]
              Kreeger, L. and Elzur, U. (Eds.), "Generic Protocol
              Extension for VXLAN", work in progress, April 2016.

   [Geneve]   Gross, J. and Ganga, I. (Eds.), "Geneve: Generic Network
              Virtualization Encapsulation", work in progress,
              January 2016.

   [GUE]      Herbert, T., et al., "Generic UDP Encapsulation", work
              in progress, December 2015.

   [BIER-ARCH]
              Wijnands, IJ. (Ed.), et al., "Multicast Using Bit Index
              Explicit Replication", work in progress, January 2016.

   [RFC3819]  Karn, P. (Ed.), et al., "Advice for Internet Subnetwork
              Designers", July 2004.

10. Acknowledgments

   Many thanks are due to Dino Farinacci, Erik Nordmark, Lucy Yong,
   Nicolas Bouliane, Saumya Dikshit, Joe Touch, Olufemi Komolafe, and
   Matthew Bocci for their valuable comments and suggestions.
Authors' Addresses

   Anoop Ghanwani
   Dell
   Email: anoop@alumni.duke.edu

   Linda Dunbar
   Huawei Technologies
   5340 Legacy Drive, Suite 1750
   Plano, TX 75024, USA
   Phone: (469) 277 5840
   Email: ldunbar@huawei.com

   Mike McBride
   Huawei Technologies
   Email: mmcbride7@gmail.com

   Vinay Bannai
   Google
   Email: vbannai@gmail.com

   Ram Krishnan
   Dell
   Email: ramkri123@gmail.com